\section{Introduction}
Provided that General Relativity itself is accurate, a number of cosmological observations, e.g. the dynamics of galaxies and galaxy clusters and the Large-Scale Structure (LSS), provide us with ample evidence that most of the matter in the Universe is not made up of familiar baryonic matter, but of Dark Matter (DM) \citep{Zwicky:1933gu,Zwicky:1937zza,Dodelson:2001ux,Hawkins:2002sg,Spergel:2006hy,Riess:2006fw}. Moreover, the largest fraction of the energy budget in the present Universe is occupied by Dark Energy (DE), which seems to be accelerating the expansion of the Universe \citep{Riess:1998cb,Knop:2003iy,Riess:2004n,Riess:2006fw,Komatsu:2008hk}. The current concordance model, with radiation, baryons, cold DM and DE (in the form of a cosmological constant $\Lambda$) is known as the $\Lambda$CDM model.
It has been demonstrated \citep{Albrecht:2006um,2006ewg3.rept.....P} that the nature of the dark components of the Universe can be constrained to a high degree of accuracy by using wide and deep imaging surveys; weak lensing, in which the shear and redshift information of every galaxy is used, has the potential to constrain the equation of state of such dark components by using surveys such as Euclid\footnote{http://sci.esa.int/science-e/www/area/index.cfm?fareaid=102} \citep{2008SPIE.7010E..38R,Refregier:2010ss} or Pan-STARRS\footnote{http://pan-starrs.ifa.hawaii.edu} \citep{2002SPIE.4836..154K,2002AAS...20112207K}. As a direct probe of the mass distribution, gravitational lensing is an excellent tool for cosmological parameter estimation, complementing Cosmic Microwave Background (CMB) studies. One of the most useful manifestations of gravitational lensing by intervening matter is the alignment of nearby images on the sky. Detection of DM on large scales through such cosmic shear measurements -- the small, coherent distortion of distant galaxy images due to the large-scale distribution of matter in the cosmos -- has recently been shown to be feasible.
At a statistical level, it has been shown \citep{Hu:1998az,Hu:1999ek} that extra information on cosmological parameters can be gained by dividing the sample into several redshift bins; this technique is known as weak lensing tomography. However, a more comprehensive representation of the shear field is provided by 3D weak lensing \citep{Heavens:2003jx,Castro:2005bg,Heavens:2006uk,Kitching:2006mq}, in which, by using the formalism of spin-weighted spherical harmonics and spherical Bessel functions, one can relate the two-point statistics of the harmonic expansion coefficients of the weak lensing shear and convergence to the power spectrum of the matter density perturbations. Such a tool is relevant in view of the present and next generations of large-scale weak lensing surveys, which will provide distance information of the sources through photometric redshifts.
Recently, rather than considering DM and DE as two distinct components, the alternative hypothesis that DM and DE are two states of the same fluid has been suggested. Such models have been variously referred to as ``Unified Dark Matter'' (UDM) or ``quartessence'' models. Compared with the standard DM plus DE models (e.g. $\Lambda$CDM), these models have the advantage that the dynamics of the Universe can be described with a single scalar field which triggers both the accelerated expansion at late times and the LSS formation at earlier times. Specifically, for these models, we can use Lagrangians with a non-canonical kinetic term, namely a term which is an arbitrary function of the square of the time derivative of the scalar field, in the homogeneous and isotropic background.
Originally, this approach was proposed to describe inflation driven by kinetic energy, called $k$-inflation \citep{ArmendarizPicon:1999rj,Garriga:1999vw}, to explain inflation in the early Universe at high energies. This scenario was later applied to DE \citep{Chiba:1999ka,dePutter:2007ny,Linder:2008ya}. In particular, the analysis was extended to a more general Lagrangian \citep{ArmendarizPicon:2000dh,ArmendarizPicon:2000ah} and this scenario was called $k$-essence \citep[see also][]{Chiba:1999ka,Rendall:2005fv,Li:2006bx,Calcagni:2006ge,Babichev:2006cy,Fang:2006yh,Bazeia:2007df,Kang:2007vs,Babichev:2007dw,Babichev:2007tn,Ahn:2009xd}.
Within this framework, several adiabatic or, equivalently, purely kinetic models have been investigated in the literature: the generalised Chaplygin gas \citep{Kamenshchik:2001cp,Bilic:2001cg,Bento:2002ps,Carturan:2002si,Sandvik:2002jz}, the single dark perfect fluid with a simple two-parameter barotropic equation of state \citep{Balbi:2007mz,Quercellini:2007ht,Pietrobon:2008js} and the purely kinetic models studied by \citet{Scherrer:2004au}, \citet{Bertacca:2007ux}, \citet{Chimento:2009nj}. Alternative approaches have been proposed in models with canonical Lagrangians with a complex scalar field \citep{Arbey:2006it}.
One of the main issues for these UDM models is whether the single dark fluid is able to cluster and produce the cosmic structures we observe in the Universe today. In fact, a general feature of UDM models is the possible appearance of an effective sound speed, which may become significantly different from zero during the evolution of the Universe. In general, this corresponds to the appearance of a Jeans length (or sound horizon) below which the dark fluid does not cluster. Thus, the viability of UDM models strictly depends on the value of this effective sound speed \citep{Hu:1998kj,Garriga:1999vw,Mukhanov:2005sc}, which has to be small enough to allow structure formation \citep{Sandvik:2002jz,Giannakis:2005kr,Bertacca:2007cv} and to reproduce the observed pattern of the CMB temperature anisotropies \citep{Carturan:2002si,Bertacca:2007cv}.
In general, in order for UDM models to have a very small speed of sound and a background evolution that fits the observations, a severe fine tuning of their parameters is necessary. In order to avoid this fine tuning, alternative models with similar goals have been analysed in the literature: \citet{Piattella:2009kt} studied in detail the functional form of Jeans scale in adiabatic UDM perturbations and introduced a class of models with a fast transition between an early Einstein-de Sitter cold DM-like era and a later $\Lambda$CDM-like phase. If the transition is fast enough, these models may exhibit satisfactory structure formation and CMB fluctuations, thus presenting a small Jeans length even in the case of a non-negligible sound speed; \citet{Gao:2009me} explore unification of DM and DE in a theory containing a scalar field of non-Lagrangian type, obtained by direct insertion of a kinetic term into the energy-momentum tensor.
Here, we choose to investigate the class of UDM models studied in \citet{Bertacca:2008uf}, who designed a reconstruction technique of the Lagrangian, which allows one to find models where the effective speed of sound is small enough, and the $k$-essence scalar field can cluster (see also \citealt{Camera:2009uz}, \citealt{Camera:2010}). In particular, the authors require that the Lagrangian of the scalar field is constant along classical trajectories on cosmological scales, in order to obtain a background identical to the background of the $\Lambda$CDM model.
Here, we wish to investigate whether this class of UDM models can be scrutinised in realistic scenarios. Specifically, we compute the weak lensing signals expected in these models as they would be measured by a Euclid-like survey.
The structure of this paper is as follows. In Section~\ref{udm} we describe the UDM model we use in this work. In Section~\ref{3dlensing} we detail the theory of weak gravitational lensing on the celestial sphere, with a particular interest in the cosmic shear observable (Section~\ref{3dshear}). In Section~\ref{fisher} we outline the Fisher matrix formalism we use to calculate the expected statistical errors on cosmological parameters, and with the same formalism we compute the expected Bayesian evidence for UDM models over the standard $\Lambda$CDM model as a function of the sound speed parameter ${c_\infty}$ (Section~\ref{B-evidence}). In Section~\ref{results} we present our results, such as the matter power spectrum obtained in these UDM models (Section~\ref{matterpowerspectrum}) and the corresponding 3D shear signal (Section~\ref{signal}); the parameter estimations for a Euclid-like survey are presented in Section~\ref{estimation}, while in Section~\ref{selection} we use the Bayesian approach to ask the data whether our UDM model is favoured over the $\Lambda$CDM model or not. Finally, in Section~\ref{conclusions}, conclusions are drawn.
\section{Unified Dark Matter models}\label{udm}
We consider a UDM model where the Universe is filled with a perfect fluid of radiation, baryons and a scalar field $\varphi(t)$, the latter mimicking both DM and DE in the form of a cosmological constant. In particular, \citet{Bertacca:2008uf}, by using scalar-field Lagrangians $\mathscr L(X,\varphi)$ with a non-canonical kinetic term, where\footnote{We use units such that $c=1$ and signature $\{-,+,+,+\}$, where Greek indices run over spacetime dimensions, whereas Latin indices label spatial coordinates.}
\begin{equation}
X=-\frac{1}{2}\nabla^\mu\varphi\nabla_\mu\varphi,
\end{equation}
have outlined a technique to reconstruct UDM models such that the effective speed of sound is small enough to allow the clustering of the scalar field. Specifically, once the initial value of the scalar field is fixed, the scalar field Lagrangian is constant along the classical trajectories, namely $\mathscr L_\varphi=-\Lambda/(8\pi G)$, and the background is identical to the background of $\Lambda$CDM. In other words, the energy density of the UDM scalar field presents two terms
\begin{equation}
\rho_\mathrm{UDM}(t)=\rho_\mathrm{DM}(t)+\rho_\Lambda,
\end{equation}
where $\rho_\mathrm{DM}$ behaves like a DM component ($\rho_\mathrm{DM}\propto a^{-3}$) and $\rho_\Lambda$ like a cosmological constant component ($\rho_\Lambda=\mathrm{const.}$). Consequently, $\Omega_\mathrm{DM}=\rho_\mathrm{DM}(a=1)/\rho_c$ and $\Omega_\Lambda=\rho_\Lambda/\rho_c$ are the density parameters of DM and DE today, where $\rho_c$ is the present day critical density; hence, the Hubble parameter in these UDM models is the same as in $\Lambda$CDM,
\begin{equation}
H(z)={H_0}\sqrt{\Omega_m{(1+z)}^3+\Omega_\Lambda},
\end{equation}
with ${H_0}=100\,h\,\mathrm{km\,s^{-1}\,Mpc^{-1}}$ and $\Omega_m=\Omega_\mathrm{DM}+\Omega_b$, where $\Omega_b=\rho_b/\rho_c$ is the baryon density in units of the critical density.
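As a minimal numerical illustration (a Python sketch assuming the fiducial parameter values quoted in Section~\ref{results}; all variable names are ours), the UDM expansion history, identical to that of $\Lambda$CDM by construction, can be evaluated as
\begin{verbatim}
import numpy as np

H0 = 71.0          # km/s/Mpc, i.e. h = 0.71
Omega_m = 0.3      # Omega_DM + Omega_b
Omega_L = 0.7

def hubble(z):
    """H(z) for the flat UDM/LCDM background."""
    return H0 * np.sqrt(Omega_m * (1.0 + z)**3 + Omega_L)

print(hubble(0.8))  # ~111 km/s/Mpc at the survey median redshift
\end{verbatim}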
We now introduce small inhomogeneities of the scalar field, $\delta\varphi(t,\mathbf x)$; in the linear theory of cosmological perturbations and in the Newtonian gauge, the line element is
\begin{equation}
\mathrm d s^2=-(1+2\Phi)\mathrm d t^2+a^2(t)(1+2\Psi)\mathrm d\mathbf x^2,\label{flrw}
\end{equation}
in the case of a spatially flat Universe, as supported by CMB measurements \citep[e.g.][]{Spergel:2006hy}. This scalar field presents no anisotropic stress, thus $\Phi=-\Psi$. With this metric, when the energy density of radiation becomes negligible, and disregarding also the small baryonic component, the evolution of the Fourier modes of the Newtonian potential $\Phi_\mathbf{k}(a)$ is described by \citep{Garriga:1999vw,Mukhanov:2005sc}
\begin{equation}
{v_\mathbf{k}}''+{c_s}^2k^2v_\mathbf{k}-\frac{\theta''}{\theta}v_\mathbf{k}=0,\label{eq-Mukhanov:2005sc-lcdm}
\end{equation}
where a prime denotes a derivative with respect to the conformal time $\mathrm d\tau=\mathrm d t/a$, $k=|\mathbf k|$ and
\begin{align}
v&\equiv\frac{\Phi}{\sqrt{\rho_\mathrm{UDM}+p_\mathrm{UDM}}}\label{udiphi-lcdm},\\
\theta&\equiv\frac{1}{a\sqrt{1+\frac{p_\mathrm{UDM}}{\rho_\mathrm{UDM}}}};
\end{align}
here,
\begin{equation}
{c_s}^2(a)=\frac{{p_\mathrm{UDM}}_{,X}}{{\rho_\mathrm{UDM}}_{,X}}\label{c_s}
\end{equation}
is the effective speed of sound, where $_{,X}$ denotes a derivative w.r.t. $X$.
By following the technique outlined by \citet{Bertacca:2008uf}, it is possible to construct a UDM model in which the sound speed is small enough to allow the formation of the LSS we see today and is capable of reproducing the observed pattern of the temperature anisotropies in the CMB radiation. We choose a Lagrangian of the form
\begin{equation}
\mathscr L_\varphi\equiv p_\mathrm{UDM}(\varphi,X)=f(\varphi)g(X)-V(\varphi)\label{L_phi},
\end{equation}
with a Born-Infeld type kinetic term $g(X)=-\sqrt{1-2XM^{-4}}$ \citep{Born:1934gh}, where $M$ is a suitable mass scale. Such a kinetic term can be thought of as a field theory generalisation of the Lagrangian of a relativistic particle \citep{Padmanabhan:2002sh,Abramo:2003cp,Abramo:2004ji}. It was also proposed in connection with string theory, since it seems to represent a low-energy effective theory of $D$-branes and open strings, and has been conjectured to play a role in cosmology \citep{Sen:2002nu,Sen:2002in,Sen:2002vv,Padmanabhan:2002sh}. By using the equation of motion of the scalar field $\varphi(t, \mathbf x)$ and by imposing that the scalar field Lagrangian is constant along the classical trajectories, i.e. $p_\mathrm{UDM}=-\rho_\Lambda$, we obtain the following expressions for the potentials
\begin{align}
f(\varphi)&=\frac{\Lambda {c_\infty}}{1-{{c_\infty}}^2}\frac{\cosh(\xi\varphi)}{\sinh(\xi\varphi)\left[1+\left(1-{{c_\infty}}^2\right)\sinh^2(\xi\varphi)\right]},\\
V(\varphi)&=\frac{\Lambda}{1-{{c_\infty}}^2}\frac{\left(1-{{c_\infty}}^2\right)^2\sinh^2\left(\xi\varphi\right)+2(1-{{c_\infty}}^2)-1}{1+\left(1-{{c_\infty}}^2\right)\sinh^2\left(\xi\varphi\right)}\;,
\end{align}
with $\xi=\sqrt{3\Lambda/[4(1-{{c_\infty}}^2)M^{4}]}$. Hence, the sound speed takes the parametric form
\begin{equation}
c_s(a)=\sqrt{\frac{{\Omega_\Lambda {c_\infty}}^2}{\Omega_\Lambda+(1-{{c_\infty}}^2)\Omega_\mathrm{DM} a^{-3}}}\label{c_s-udm},
\end{equation}
and it is easy to see that the parameter ${c_\infty}$ represents the value of the speed of sound when $a\rightarrow\infty$. Moreover, when $a\to0$, $c_s\to0$.
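The parametric form of the sound speed translates directly into code. The following sketch (fiducial density parameters assumed) reproduces both limits, $c_s\to0$ for $a\to0$ and $c_s\to{c_\infty}$ at late times:
\begin{verbatim}
import numpy as np

Omega_DM, Omega_L = 0.255, 0.7   # Omega_m - Omega_b, Omega_Lambda

def c_s(a, c_inf):
    return np.sqrt(Omega_L * c_inf**2 /
                   (Omega_L + (1.0 - c_inf**2) * Omega_DM * a**-3))

a = np.logspace(-3, 0, 5)
print(c_s(a, 1e-2))   # -> 0 as a -> 0; -> ~c_inf as a grows
\end{verbatim}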
In UDM models the fluid which triggers the accelerated expansion at late times is also the one which has to cluster in order to produce the structures we see today. Thus, from recombination to the present epoch, the energy density of the Universe is dominated by a single dark fluid, and therefore the gravitational potential evolution is determined by the background and perturbation evolution of this fluid alone. As a result, the general trend is that the possible appearance of a sound speed significantly different from zero at late times corresponds to the appearance of a Jeans length \citep{Bertacca:2007cv}
\begin{equation}
\lambda_J(a)=\sqrt{\left|\frac{\theta}{\theta''}\right|}c_s(a)\label{jeans}
\end{equation}
below which the dark fluid does not cluster any more, causing a strong evolution in time of the gravitational potential. In Fig.~\ref{lambdaJ} we show $\lambda_J(a)$, the sound horizon, for different values of ${c_\infty}$.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{lambdaJ.ps}
\caption{Sound horizon $\lambda_J(a)$ for ${c_\infty}=10^{-4},10^{-3},10^{-2},10^{-1}$ from bottom to top.}\label{lambdaJ}
\end{figure}
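The Jeans scale can be evaluated numerically. In the sketch below (ours, not the code used for Fig.~\ref{lambdaJ}), $\theta(a)$ follows from its definition with $p_\mathrm{UDM}=-\rho_\Lambda$, i.e. $\theta=\sqrt{1+(\Omega_\Lambda/\Omega_\mathrm{DM})a^3}/a$, and $\theta''$ is taken w.r.t. conformal time on a numerical grid; distances are in units of $c/{H_0}$:
\begin{verbatim}
import numpy as np
from scipy.integrate import cumulative_trapezoid
from scipy.interpolate import CubicSpline

Om_DM, Om_b, Om_L = 0.255, 0.045, 0.7

def E(a):   # H(a)/H0 for the LCDM-like background
    return np.sqrt((Om_DM + Om_b) * a**-3 + Om_L)

a = np.logspace(-4, 0, 4000)
tau = cumulative_trapezoid(1.0 / (a**2 * E(a)), a, initial=0.0)
theta = np.sqrt(1.0 + (Om_L / Om_DM) * a**3) / a
th = CubicSpline(tau, theta)

def lambda_J(ai, c_inf):
    cs = np.sqrt(Om_L * c_inf**2 /
                 (Om_L + (1 - c_inf**2) * Om_DM * ai**-3))
    t = np.interp(ai, a, tau)
    return np.sqrt(np.abs(th(t) / th(t, 2))) * cs

print(lambda_J(1.0, 1e-2))   # sound horizon today, in units of c/H0
\end{verbatim}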
\section{Weak lensing on the celestial sphere}\label{3dlensing}
In the linear r\'egime, corresponding to the Born approximation, where the lensing effects are evaluated on the null-geodesic of the unperturbed (unlensed) photon \citep{Hu:2000ee,Bartelmann:1999yn}, it is possible to relate the weak lensing potential $\phi$ for a given source at a 3D position in comoving space $\mathbf x=(\chi,\hat{\mathbf n})$ to the Newtonian potential $\Phi(\mathbf x)$ via
\begin{equation}
\phi(\mathbf x)=\int_0^\chi\!\!\mathrm d\chi'\,\frac{W(\chi')}{\chi'}\Phi(\chi',\hat{\mathbf n})\label{phi}
\end{equation}
where
\begin{equation}
W(\chi')=-2\int_{\chi'}^\infty\mathrm d\chi\,\frac{\chi-\chi'}{\chi}n(\chi)\label{W(z)}
\end{equation}
is the weight function of weak lensing, with $n\left[\chi(z)\right]$ representing the redshift distribution of sources, for which $\int\!\!\mathrm d\chi\,n(\chi)=1$ holds, and $\chi(z)$ being the radial comoving distance, such that
\begin{equation}
\frac{1}{H(z)}=\frac{\mathrm d\chi(z)}{\mathrm d z}.
\end{equation}
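This relation can be integrated by direct quadrature; a short sketch (flat universe, fiducial parameters, $c$ in $\mathrm{km\,s^{-1}}$ so that $\chi$ is in Mpc) reads
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

c_km_s, H0, Om, OL = 299792.458, 71.0, 0.3, 0.7

def chi(z):
    integrand = lambda zp: c_km_s / (H0 * np.sqrt(Om * (1 + zp)**3 + OL))
    return quad(integrand, 0.0, z)[0]

print(chi(0.8))   # ~2.7e3 Mpc for the fiducial cosmology
\end{verbatim}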
Spin-weighted spherical harmonics and spherical Bessel functions are a very natural expansion for weak lensing observables, such as the potential $\phi(\mathbf x)$ \citep{Heavens:2003jx,Castro:2005bg}. Since cosmic shear depends on the Newtonian potential, the use of this basis allows one to relate the expansion of the shear field to the expansion of the mass density field. The properties of the latter depend in a calculable way on cosmological parameters, so this opens up the possibility of using 3D weak shear to estimate these quantities.
In the flat-sky approximation, the weak lensing potential (\ref{phi}) reads
\begin{equation}
\phi(k,\bmath{\ell})=\sqrt{\frac{2}{\pi}}\int\!\!\mathrm d^3x\,\phi(\mathbf x)kj_\ell\left(k\chi\right)e^{-i\bmath{\ell}\cdot\hat{\mathbf n}},
\end{equation}
where $\ell=|\bmath\ell|$ is a 2D angular wavenumber, $k$ a radial wavenumber and $j_\ell(k\chi)$ a spherical Bessel function of order $\ell$. The covariances of these coefficients define the power spectrum of the weak lensing potential via
\begin{equation}
\langle\phi(k,\bmath{\ell})\phi^\ast(k',\bmath{\ell}')\rangle={(2\pi)}^2\delta_D(\bmath{\ell}-\bmath{\ell}')C^{\phi\phi}(k,k';\ell),
\end{equation}
where $\delta_D$ is the Dirac delta.
\subsection{The 3D shear field}\label{3dshear}
In this paper we are interested in the information brought by the cosmic shear. We now introduce a distortion tensor \citep{Kaiser:1996tp,Bartelmann:1999yn}
\begin{equation}
\phi_{,ij}(\mathbf x)=\int_0^\chi\!\!\mathrm d\chi'\,\chi'W(\chi')\Phi_{,ij}(\chi',\hat{\mathbf n})\label{phi,ij},
\end{equation}
where commas denote derivatives w.r.t. directions perpendicular to the line of sight. Half the trace of the distortion tensor gives the convergence
\begin{equation}
\kappa(\mathbf x)=\frac{1}{2}\left(\phi_{,11}(\mathbf x)+\phi_{,22}(\mathbf x)\right)
\end{equation}
and, defining $\gamma_1(\mathbf x)=\frac{1}{2}\left(\phi_{,11}(\mathbf x)-\phi_{,22}(\mathbf x)\right)$ and $\gamma_2(\mathbf x)=\phi_{,12}(\mathbf x)$, the linear combination
\begin{equation}
\gamma(\mathbf x)=\gamma_1(\mathbf x)+i\gamma_2(\mathbf x)
\end{equation}
is the differential stretching, or shear. \citet{Castro:2005bg} have shown that the complex shear is the second ``edth'' derivative of the weak lensing potential
\begin{equation}
\gamma(\mathbf x)=\frac{1}{2}\eth\eth\phi(\mathbf x),
\end{equation}
where, in Cartesian coordinates $\{x,\,y\}$, $\eth=\partial_x+i\partial_y$.
We can now express the power spectrum of the 3D cosmic shear as a function of the gravitational potential via
\begin{equation}
C^{\gamma\gamma}(k_1,k_2;\ell)=\frac{\ell^4}{\pi^2}\int\!\!\mathrm d k\,k^2I^{\Phi}_\ell(k_1,k)I^{\Phi}_\ell(k_2,k)P^{\Phi}(k,0),\label{Cgammagamma}
\end{equation}
where $P^{\Phi}(k,z)$ is the Newtonian potential power spectrum and, for a generic field $X$, we have defined
\begin{equation}
I^X_\ell(k_i,k)=\int\!\!\mathrm d\chi\,\frac{X_k(\chi)}{X_k(0)}W(\chi)j_\ell\left(k_i\chi\right).\label{Iphi}
\end{equation}
\section{Fisher matrix analysis}\label{fisher}
Cosmological parameters influence the shear in a number of ways: the matter power spectrum $P^\delta(k,z)$ depends on $\Omega_m$, $h$ and the linear amplitude $\sigma_8$. The linear power spectrum depends on the growth rate, which also has some sensitivity to the equation-of-state parameter $w_\Lambda=p_\Lambda/\rho_\Lambda$ of the $\Lambda$-like component. It is well known that the speed of sound (Eq.~\ref{c_s}) is strictly related to $w_\Lambda(z)$, and it also affects the $\chi(z)$ relation and hence the angular diameter distance $\sin_K\left[\chi(z)\right]$. These parameters $\{\vartheta_\alpha\}$ may be estimated from the data using likelihood methods. Assuming uniform priors for the parameters, the maximum a posteriori probability for the parameters is given by the maximum likelihood solution. We use a Gaussian likelihood
\begin{equation}
2\ln L=-\mathrm{Tr}\left[\ln\mathbfss C+\mathbfss C^{-1}\mathbfss D\right],
\end{equation}
where $\mathbfss C=\langle(\mathbf d-\mathbf d^\mathrm{th})(\mathbf d-\mathbf d^\mathrm{th})^T\rangle$ is the covariance matrix and $\mathbfss D=(\mathbf d-\mathbf d^\mathrm{th})(\mathbf d-\mathbf d^\mathrm{th})^T$ is the data matrix, with $\mathbf d$ the data vector and $\mathbf d^\mathrm{th}$ the theoretical mean vector.
The expected errors on the parameters can be estimated with the Fisher information matrix \citep{Fisher:1935,Jungman:1995bz,Tegmark:1996bz}. This has the great advantage that different observational strategies can be analysed and this can be very valuable for experimental design. The Fisher matrix gives the best errors to expect, and should be accurate if the likelihood surface near the peak is adequately approximated by a multivariate Gaussian.
The Fisher matrix is the expectation value of the second derivative of the $\ln L$ w.r.t. the parameters $\{\vartheta_\alpha\}$:
\begin{equation}
\mathbfss F_{\alpha\beta}=-\left\langle\frac{\partial^2\ln L}{\partial\vartheta_\alpha\partial\vartheta_\beta}\right\rangle\label{fisherm}
\end{equation}
and the marginal error on parameter $\vartheta_\alpha$ is $\left[\left(\mathbfss F^{-1}\right)_{\alpha\alpha}\right]^{\frac{1}{2}}$. If the means of the data are fixed, the Fisher matrix can be calculated from the covariance matrix and its derivatives \citep{Tegmark:1996bz} by
\begin{equation}
\mathbfss F_{\alpha\beta}=\frac{1}{2}\mathrm{Tr}\left[\mathbfss C^{-1}\mathbfss C_{,\alpha}\mathbfss C^{-1}\mathbfss C_{,\beta}\right].
\end{equation}
For a square patch of sky, the Fourier transform leads to uncorrelated modes, provided the modes are separated by $2\pi/\Theta_\mathrm{rad}$ where $\Theta_\mathrm{rad}$ is the side of the square in radians, and the Fisher matrix is simply the sum of the Fisher matrices of each $\ell$ mode:
\begin{equation}
\mathbfss F_{\alpha\beta}=\frac{1}{2}\sum_\ell(2\ell+1)\mathrm{Tr}\left[\left(\mathbfss C^\ell\right)^{-1}{\mathbfss C^\ell}_{,\alpha}\left(\mathbfss C^\ell\right)^{-1}{\mathbfss C^\ell}_{,\beta}\right],
\end{equation}
where $\mathbfss C^\ell$ is the covariance matrix for a given $\ell$ mode.
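In practice, this sum can be accumulated directly from precomputed covariance matrices and their parameter derivatives; a minimal sketch (the input containers are assumptions of ours) is
\begin{verbatim}
import numpy as np

def fisher(C, dC, ells):
    """C[ell]: covariance matrix; dC[ell][alpha]: its derivatives."""
    n_par = len(next(iter(dC.values())))
    F = np.zeros((n_par, n_par))
    for ell in ells:
        Cinv = np.linalg.inv(C[ell])
        for al in range(n_par):
            for be in range(al, n_par):
                t = 0.5 * (2 * ell + 1) * np.trace(
                    Cinv @ dC[ell][al] @ Cinv @ dC[ell][be])
                F[al, be] += t
                if be != al:
                    F[be, al] += t
    return F
\end{verbatim}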
\section{Bayesian evidence}\label{B-evidence}
In this paper we compute parameter forecasts from 3D cosmic shear for UDM models. It is important to notice that we are dealing with an alternative model with respect to the standard $\Lambda$CDM model; hence, besides determining the best-fit value (and the errors) on a set of parameters within a model, we can also ask if this particular alternative model is preferable to the standard one. Model selection is in a sense a higher-level question than parameter estimation. While in estimating parameters one assumes a theoretical model within which one interprets the data, in model selection one wants to know which theoretical framework is preferred given the data. Clearly, if our alternative model has more parameters than the standard one, a chi-square analysis will not be of any use, because the chi-square will always decrease if we add more degrees of freedom. From a Bayesian point of view, this involves the computation of the Bayesian evidence and of the Bayes factor $B$.
We refer to the two models under examination as $M_\mathrm{UDM}$ and $M_\textrm{$\Lambda$CDM}$. We know that, in this context, $M_\textrm{$\Lambda$CDM}$ is simpler than $M_\mathrm{UDM}$ because it has one fewer parameter, i.e. ${c_\infty}$; at the same time, it is nested in $M_\mathrm{UDM}$, because, if $\vartheta^\textrm{$\Lambda$CDM}_\alpha$ and $\vartheta^\mathrm{UDM}_{\alpha'}$ are the parameters of the two models (with $\alpha=1,\ldots,n$ and $\alpha'=1,\ldots,n+1$), respectively, then
\begin{equation}
\{\vartheta^\textrm{$\Lambda$CDM}_\alpha,\,{c_\infty}\}=\{\vartheta^\mathrm{UDM}_{\alpha'}\}
\end{equation}
holds; here, ${c_\infty}\equiv\vartheta^\mathrm{UDM}_{n+1}$.
The posterior probability for each model $M$ is given by Bayes' theorem
\begin{equation}
p(M|\mathbf d)=\frac{p(\mathbf d|M)p(M)}{p(\mathbf d)}.
\end{equation}
The Bayesian evidence is defined as the marginalisation over the parameters
\begin{equation}
p(\mathbf d|M)=\int\!\!\mathrm d^m\vartheta\,p(\mathbf d|\bvartheta,M)p(\bvartheta|M),
\end{equation}
where $\bvartheta$ is the parameter vector, whose length $m$ is $n$ for the $\Lambda$CDM model and $n+1$ for UDM models. The ratio of the posterior probabilities of our two models, given the data $\mathbf d$ and with flat priors in their parameters $p(M)=\textrm{const.}$, is then \citep{Heavens:2007ka,Heavens:2009nx}
\begin{multline}
\frac{p(M_\textrm{$\Lambda$CDM}|\mathbf d)}{p(M_\mathrm{UDM}|\mathbf d)}=\frac{p(M_\textrm{$\Lambda$CDM})}{p(M_\mathrm{UDM})}\\\times\frac{\int\!\!\mathrm d^n\vartheta^\textrm{$\Lambda$CDM}\,p(\mathbf d|\bvartheta^\textrm{$\Lambda$CDM},M_\textrm{$\Lambda$CDM})p(\bvartheta^\textrm{$\Lambda$CDM}|M_\textrm{$\Lambda$CDM})}{\int\!\!\mathrm d^{n+1}\vartheta^\mathrm{UDM}\,p(\mathbf d|\bvartheta^\mathrm{UDM},M_\mathrm{UDM})p(\bvartheta^\mathrm{UDM}|M_\mathrm{UDM})}.
\end{multline}
If we choose non-committal priors $p(M_\mathrm{UDM})=p(M_\textrm{$\Lambda$CDM})$, the posterior evidence probability reduces to the ratio of the evidences, which takes the name of the Bayes factor and in the present case reads
\begin{equation}
B\equiv\frac{\int\!\!\mathrm d^n\vartheta^\textrm{$\Lambda$CDM}\,p(\mathbf d|\bvartheta^\textrm{$\Lambda$CDM},M_\textrm{$\Lambda$CDM})p(\bvartheta^\textrm{$\Lambda$CDM}|M_\textrm{$\Lambda$CDM})}{\int\!\!\mathrm d^{n+1}\vartheta^\mathrm{UDM}\,p(\mathbf d|\bvartheta^\mathrm{UDM},M_\mathrm{UDM})p(\bvartheta^\mathrm{UDM}|M_\mathrm{UDM})}.
\end{equation}
Now, let us focus on the priors $p(\bvartheta|M)$. If we assume flat priors in each parameter, over the range $\Delta\bvartheta$, then $p(\bvartheta^\textrm{$\Lambda$CDM}|M_\textrm{$\Lambda$CDM})=\prod_\alpha\left(\Delta\vartheta^\textrm{$\Lambda$CDM}_\alpha\right)^{-1}$ and
\begin{equation}
B=\frac{\int\!\!\mathrm d^n\vartheta^\textrm{$\Lambda$CDM}\,p(\mathbf d|\bvartheta^\textrm{$\Lambda$CDM},M_\textrm{$\Lambda$CDM})}{\int\!\!\mathrm d^{n+1}\vartheta^\mathrm{UDM}\,p(\mathbf d|\bvartheta^\mathrm{UDM},M_\mathrm{UDM})}\Delta {c_\infty}.
\end{equation}
The Bayes factor $B$ still depends on the specific dataset $\mathbf d$. For future experiments, we do not yet have the data, so we compute the expectation value of the Bayes factor, given the statistical properties of $\mathbf d$. The expectation is computed over the distribution of $\mathbf d$ for the correct model (assumed here to be $M_\mathrm{UDM}$). To do this, we make two further approximations: first we note that $B$ is a ratio, and we approximate $\langle B\rangle$ by the ratio of the expected values, rather than the expectation value of the ratio. This should be a good approximation if the likelihoods are sharply peaked.
We also make the Laplace approximation, that the expected likelihoods are given by multivariate Gaussians, i.e.,
\begin{equation}
p(\mathbf d|\bvartheta,M)=L_0e^{-\frac{1}{2}\left(\vartheta-\vartheta_0\right)_\alpha\mathbfss F_{\alpha\beta}\left(\vartheta-\vartheta_0\right)_\beta},
\end{equation}
where $\mathbfss F_{\alpha\beta}$ is the Fisher matrix, given in Eq.~(\ref{fisherm}). \citet{Heavens:2007ka} have shown that, if we assume that the posterior probability densities are small at the boundaries of the prior volume, then we can extend the integration to infinity, and the integration over the multivariate Gaussians can be easily performed. In the present case, this gives
\begin{equation}
\langle B\rangle=\frac{\sqrt{\det\mathbfss F^\mathrm{UDM}}}{\sqrt{\det\mathbfss F^\textrm{$\Lambda$CDM}}}\frac{L^\textrm{$\Lambda$CDM}_0}{L^\mathrm{UDM}_0}\frac{\Delta {c_\infty}}{\sqrt{2\pi}}.
\end{equation}
One more subtlety has to be taken into account to compute the ratio $L^\textrm{$\Lambda$CDM}_0/L^\mathrm{UDM}_0$: if the correct underlying model is $M_\mathrm{UDM}$, in the incorrect model $M_\textrm{$\Lambda$CDM}$ the maximum of the expected likelihood will not, in general, be at the correct parameter values \citep[see][Fig.~1]{Heavens:2007ka}. The $n$ parameters of the $\Lambda$CDM model shift their values to compensate for the fact that ${c_\infty}$ is being kept fixed at the incorrect fiducial value ${c_\infty}=0$. With these offsets in the maximum likelihood parameters in the $\Lambda$CDM model, the Bayes factor takes the form
\begin{equation}
\langle B\rangle=\frac{\sqrt{\det\mathbfss F^\mathrm{UDM}}}{\sqrt{\det\mathbfss F^\textrm{$\Lambda$CDM}}}\frac{\Delta {c_\infty}}{\sqrt{2\pi}}e^{-\frac{1}{2}\delta\vartheta_\alpha\mathbfss F^\mathrm{UDM}_{\alpha\beta}\delta\vartheta_\beta},\label{B}
\end{equation}
where the shifts $\delta\vartheta_\alpha$ can be computed under the assumption of a multivariate Gaussian distribution \citep{Taylor:2006aw}, and read
\begin{equation}
\delta\vartheta_\alpha=-\left[\left(\mathbfss F^\textrm{$\Lambda$CDM}\right)^{-1}\right]_{\alpha\beta}\mathbfss G^\mathrm{UDM}_{\beta,n+1}\delta {c_\infty},\label{shifts}
\end{equation}
with $\mathbfss G^\mathrm{UDM}_{\beta,n+1}$ a subset of the UDM Fisher matrix (a vector in the present case).
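The expected Bayes factor can thus be assembled from the Fisher matrices alone. The following sketch assumes, for simplicity, that $\mathbfss F^\textrm{$\Lambda$CDM}$ coincides with the upper-left $n\times n$ block of $\mathbfss F^\mathrm{UDM}$ (with ${c_\infty}$ as the last parameter); this is an approximation of ours rather than the procedure of the text:
\begin{verbatim}
import numpy as np

def ln_bayes(F_udm, delta_c, prior_width=1.0):
    n = F_udm.shape[0] - 1
    F_lcdm = F_udm[:n, :n]        # assumption: nested-block Fisher
    # parameter shifts induced by wrongly fixing c_inf = 0:
    shifts = -np.linalg.inv(F_lcdm) @ F_udm[:n, n] * delta_c
    dtheta = np.append(shifts, delta_c)
    return (0.5 * (np.linalg.slogdet(F_udm)[1]
                   - np.linalg.slogdet(F_lcdm)[1])
            + np.log(prior_width / np.sqrt(2.0 * np.pi))
            - 0.5 * dtheta @ F_udm @ dtheta)
\end{verbatim}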
It is usual to consider the logarithm of the Bayes factor, for which the so-called ``Jeffreys' scale'' gives empirically calibrated levels of significance for the strength of evidence \citep{Jeffreys:1961}: $1<|\ln B|<2.5$ is described as ``substantial'' evidence in favour of a model, $2.5<|\ln B|<5$ is ``strong,'' and $|\ln B|>5$ is ``decisive.'' These descriptions seem too aggressive: $|\ln B|=1$ corresponds to a posterior probability for the less-favoured model which is $0.37$ of the favoured model \citep{Kass95bayesfactors}. Other authors have introduced different terminology \citep[e.g.][]{Trotta:2005ar}.
\section{Results and discussion}\label{results}
We use a fiducial cosmology with the following parameters: Hubble constant (in units of $100\,\mathrm{km\,s^{-1}\,Mpc^{-1}}$) $h=0.71$, present-day total matter density (in units of critical density) $\Omega_m\equiv\Omega_\mathrm{DM}+\Omega_b=0.3$, baryon contribution $\Omega_b=0.045$, cosmological constant contribution $\Omega_\Lambda=0.7$, spectral index $n_s=1$, linear amplitude (within a sphere of radius $8\,h^{-1}\,\mathrm{Mpc}$) $\sigma_8=0.8$.
In Section~\ref{matterpowerspectrum} we compute the predicted matter power spectrum for UDM models, with a comparison to $\Lambda$CDM. In Section~\ref{signal} the 3D shear matrix $C^{\gamma\gamma}(k_1,k_2;\ell)$ is shown. In Section~\ref{estimation} we present the parameter forecasts, and in Section~\ref{selection} we show the expected Bayesian evidence for UDM models over the $\Lambda$CDM model.
\subsection{The matter power spectrum}\label{matterpowerspectrum}
Our class of UDM models allows the value $w=-1$ for $a\to\infty$. In other words they admit an effective cosmological constant energy density at late times. Therefore, in order to compare the predictions of our UDM model with observational data, we follow the same prescription used by \citet{Piattella:2009kt}, where the density contrast of the clustering fluid is
\begin{equation}
\delta\equiv\frac{\delta\rho_m}{\rho_m}=\frac{\rho_\mathrm{DM}\delta_\mathrm{UDM}+\rho_b\delta_b}{\rho_m},
\end{equation}
where $\delta_b$ and $\delta_\mathrm{UDM}$ are the baryon and the scalar field density contrasts, respectively, and we emphasise that $\rho_\mathrm{DM}=\rho_\mathrm{UDM}-\rho_\Lambda$ is the only component of the scalar field density which clusters.
\subsubsection{Linear r\'egime}
The present-day matter power spectrum $P(k)\equiv P^\delta\left(k,z=0\right)$ is the present value of the Fourier transform of the density perturbation correlation function. To construct $P(k)$ in the $\Lambda$CDM model, we need the growth factor $D(z)=\delta(\mathbf x,z)/\delta(\mathbf x,z=0)$ on linear scales (i.e. in the absence of free-streaming) and the transfer function $T(k)$, which describes the evolution of perturbations through the epochs of horizon crossing and radiation-matter transition. Here, we use the transfer function suggested by \citet{Eisenstein:1997jh}, which, with an accurate, general fitting formula, calculates the power spectrum as a function of the cosmological parameters quite efficiently. \citet{Eisenstein:1997jh} show that baryons are effective at suppressing power on small scales compared to DM-only models. Moreover, the small-scale limit of this transfer function can be calculated analytically as a function of the cosmological parameters \citep{Hu:1997vi}. Hence, we can write the matter power spectrum as
\begin{equation}
P(k)=2\pi^2{\delta_H}^2\frac{k^{n_s}}{{H_0}^{n_s+3}}T^2(k)\left[\frac{D(z)}{D(z=0)}\right]^2;
\end{equation}
here, $\delta_H$ is a normalisation.
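The assembly of the linear spectrum is straightforward to code. In the sketch below, the compact BBKS fitting formula is substituted for the \citet{Eisenstein:1997jh} transfer function purely to keep the example short (so baryonic features are ignored), and the normalisation $\delta_H$ is left as a free input:
\begin{verbatim}
import numpy as np

def T_bbks(k, Om=0.3, h=0.71):
    q = k / (Om * h)        # k in h/Mpc; shape parameter Gamma = Om*h
    return (np.log(1.0 + 2.34 * q) / (2.34 * q) *
            (1.0 + 3.89 * q + (16.1 * q)**2 +
             (5.46 * q)**3 + (6.71 * q)**4) ** -0.25)

def P_lin(k, delta_H=2e-5, n_s=1.0):
    H0 = 1.0 / 2997.9       # H0 in h/Mpc (c = 1)
    return (2.0 * np.pi**2 * delta_H**2
            * k**n_s / H0**(n_s + 3.0) * T_bbks(k)**2)

k = np.logspace(-3, 0, 200)   # h/Mpc
Pk = P_lin(k)                 # z = 0 spectrum, growth ratio = 1
\end{verbatim}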
To obtain $P(k)$ in UDM models, it is useful to remember that the class of UDM models we use here is constructed to have the same properties as the $\Lambda$CDM model in the early Universe; in Eq.~(\ref{eq-Mukhanov:2005sc-lcdm}), which describes the time evolution of Fourier modes of the Newtonian potential $\Phi_\mathbf{k}(a)$, we thus set the same initial conditions for both the UDM and the $\Lambda$CDM potentials. Gravity is described by GR, so we can use the Poisson equation
\begin{equation}
\Phi_\mathbf{k}(a)=-\frac{3}{2}\Omega_m{{H_0}}^2\frac{\delta_\mathbf k(a)}{k^2a}\label{poisson},
\end{equation}
which relates $\Phi_\mathbf{k}(a)$ to the matter power spectrum via
\begin{equation}
\langle\delta_\mathbf{k}(a){\delta_{\mathbf k'}}^\ast(a)\rangle=\left(2\pi\right)^3\delta_D\left(\mathbf k-\mathbf k'\right)P^\delta(k,a).
\end{equation}
Clearly, if we solve Eq.~(\ref{eq-Mukhanov:2005sc-lcdm}) with $c_s=0$, we obtain the standard $\Lambda$CDM matter power spectrum.
Fig.~\ref{matter_power_spetrum} shows the matter power spectrum $P(k)$ for $\Lambda$CDM and UDM models, for a number of values of ${c_\infty}$. By increasing the sound speed, the potential starts to decay earlier in time, oscillating around zero afterwards \citep{Camera:2009uz}; at large scales, if ${c_\infty}$ is small enough, these UDM models reproduce the $\Lambda$CDM model. This feature reflects the dependence of the gravitational potential on the effective Jeans length $\lambda_J(a)$. It is easy to see that if ${c_\infty}\lesssim10^{-3}$ the perturbations of the UDM reproduce the behaviour of the concordance model within the linear r\'egime (the UDM curve for ${c_\infty}=10^{-3}$ is virtually on top of the $\Lambda$CDM one). Instead, a larger sound speed inhibits structure formation earlier in time, thus we observe less power on small scales; in this case, the consequence of the oscillatory feature of the gravitational potential, due to the non-negligible speed of sound, can be clearly seen.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{P_k-UDM.ps}
\caption{Matter power spectra $P(k)\equiv P^\delta(k,0)$ for $\Lambda$CDM (solid) and UDM (dot-dashed), with ${c_\infty}=10^{-3},10^{-2},10^{-1}$ from top to bottom.}\label{matter_power_spetrum}
\end{figure}
In principle, the large-scale distribution of galaxies could constrain the value of ${c_\infty}$. However, the shape of the power spectrum also depends on the normalisation $\sigma_8$ and the spectral index $n_s$: therefore, for a given ${c_\infty}$ as large as $10^{-2}$, an appropriate choice of $\sigma_8$ and $n_s$ can provide a power spectrum in agreement with observations, at least on scales where non-linear effects are not dominant. In addition, in UDM models it is still unclear how the galaxy distribution is biased against the gravitational potential of the scalar field on small scales. Therefore the large-scale distribution of galaxies does not appear to be the best tool to constrain this family of UDM models. On the contrary, a weak lensing analysis can constrain the matter power spectrum without a fine-tuning of either $\sigma_8$ or the galaxy bias.
\subsubsection{Non-linear r\'egime}
For wavenumbers $k>k_\mathrm{nl}\simeq0.2\,h\,\mathrm{Mpc}^{-1}$, non-linear contributions to the evolution of the Newtonian potential (i.e. to matter overdensities) become important. In the $\Lambda$CDM model, the gravitational potential satisfies Eq.~(\ref{eq-Mukhanov:2005sc-lcdm}), but in this case $c_s$ is the sound speed of the hydrodynamical fluid, and therefore can be set equal to zero in the matter-dominated epoch. For $c_s=0$, Eq.~(\ref{eq-Mukhanov:2005sc-lcdm}) has an analytic solution \citep{Hu:1998tj,Hu:2001fb,Mukhanov:2005sc,Bertacca:2007cv}
\begin{equation}
\Phi_\mathbf{k}(a)=A_\mathbf{k}\left(1-\frac{H(a)}{a}\int_0^a\!\!\frac{\mathrm d a'}{H(a')}\right),\label{longwave}
\end{equation}
where the constant of integration is $A_\mathbf{k}=\Phi_\mathbf{k}(0)T(k)$, with $T(k)$ the matter transfer function and $\Phi_\mathbf{k}(0)$ the large-scale potential during the radiation-dominated era.
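This solution lends itself to direct quadrature; the sketch below (with $A_\mathbf{k}$ factored out and $H$ in units of ${H_0}$) recovers the familiar late-time decay of the potential:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

Om, OL = 0.3, 0.7
E = lambda a: np.sqrt(Om * a**-3 + OL)

def Phi_over_A(a):
    I = quad(lambda ap: 1.0 / E(ap), 0.0, a)[0]
    return 1.0 - E(a) / a * I

print(Phi_over_A(1.0))   # ~0.47, versus 3/5 deep in matter domination
\end{verbatim}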
To perform further calculations on a wider range of scales than that allowed by linear theory, we will use the \citet{Smith:2002dz} non-linear fitting formul\ae~ for $P(k)$ in the $\Lambda$CDM model. However, currently there is no linear-to-non-linear mapping in UDM models. Nevertheless, as we have seen, differences between the $\Lambda$CDM and UDM models arise at scales smaller than the sound horizon. With a cross-over wavenumber $k\simeq1/\lambda_J$, if the sound speed is small enough to guarantee that $\lambda_J$ is well within the non-linear r\'egime, we can assume that the non-linear evolution of the UDM power spectrum will be similar to the $\Lambda$CDM one. A deeper understanding of this aspect is the next step in the development of UDM models and will be explored in future work.
\subsection{The 3D shear signal}\label{signal}
For a $20,000\,\mathrm{deg}^2$ Euclid-like survey \citep{Cimatti:2009is,Refregier:2010ss}, we assume that the source distribution over redshifts has the form \citep{1994MNRAS.270..245S}
\begin{equation}
\bar n(z)\propto z^2e^{-\left(\frac{z}{z_0}\right)^{1.5}},
\end{equation}
where $z_0=z_m/1.4$, and $z_m=0.8$ is the median redshift of the survey. The source number density with photometric redshift and shape estimates is $35$ per square arcminute. We also assume that the photometric redshift errors are Gaussian, with a dispersion given by $\sigma(z)=0.05(1+z)$.
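For concreteness, the normalised source distribution can be set up as follows (a short sketch; the overall number density enters only through the noise):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

z_m = 0.8
z_0 = z_m / 1.4

nbar = lambda z: z**2 * np.exp(-(z / z_0)**1.5)
norm = quad(nbar, 0.0, np.inf)[0]
n_z = lambda z: nbar(z) / norm

print(quad(n_z, 0.0, np.inf)[0])   # -> 1.0 by construction
\end{verbatim}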
In order to avoid the high-wavenumber r\'egime where the fitting formul\ae~ of \citet{Smith:2002dz} may be unreliable, or where baryonic effects might alter the power spectrum ($k>10\,h\,\mathrm{Mpc}^{-1}$; \citealt{White:2004kv,Zhan:2004wq}), we do not analyse modes with $k>1.5\,\mathrm{Mpc}^{-1}$. Note that the non-local nature of gravitational lensing does mix modes to some degree, but these modes are sufficiently far from the uncertain highly non-linear r\'egime that this is not a concern \citep{Castro:2005bg}. We include angular modes as small as each survey will allow, and analyse up to $\ell_\mathrm{max}=5000$ (but note the wavenumber cut).
In Fig.~\ref{shearsignal} we present the 3D shear matrix $C^{\gamma\gamma}(k_1,k_2;\ell)$. The first three rows show $\log_{10}C^{\gamma\gamma}(k_1,k_2;\ell)$ for the $\Lambda$CDM model and for a UDM model with ${c_\infty}=1.0\cdot10^{-3}$ and ${c_\infty}=5.4\cdot10^{-3}$ (respectively) in the $(k_1,k_2)$-plane in blue(gray)-scale for a number of values of $\ell$. In the fourth row we present the diagonal elements $k^2C^{\gamma\gamma}(k,k;\ell)$ of the 3D shear matrix, where the upper (green) curve refers to the smaller speed of sound and the lower (green) curve to the greater ${c_\infty}$; the $\Lambda$CDM (red) curve is virtually on top of the small-${c_\infty}$ UDM curve. Finally, in the bottom row we show $k^2$ times the ratio of the diagonal elements $C^{\gamma\gamma}(k,k;\ell)$ of UDM models over the $\Lambda$CDM model.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{3dmatrix}
\caption{The 3D shear matrix $\log_{10}C^{\gamma\gamma}(k_1,k_2;\ell)$ for five values of $\ell$ in (blue)gray-scale. In the first row we show the $\Lambda$CDM signal, while in the second and third rows we present the UDM signal for ${c_\infty}=1.0\,\cdot10^{-3}$ and ${c_\infty}=5.4\cdot10^{-3}$, respectively. The fourth row shows the diagonal elements $k^2C^{\gamma\gamma}(k,k;\ell)$, and each curve, from top to bottom, refers to the corresponding matrix above. The $\Lambda$CDM curve is virtually on top of the small-${c_\infty}$ UDM curve. The fifth row shows the fractional error.}\label{shearsignal}
\end{figure*}
The oscillatory features of the UDM gravitational potential \citep{Camera:2009uz}, whose power spectrum enters the shear via Eq.~(\ref{Cgammagamma}), can be clearly seen in the shear signal of the UDM model with ${c_\infty}=5.4\cdot10^{-3}$. The bumps in the diagonal signal can be easily understood by looking at the $\log_{10}C^{\gamma\gamma}(k_1,k_2;\ell)$ plot, where it is interesting to notice how the oscillations take place along any direction, with the obvious symmetry along the $k_1$- and $k_2$-axes. Instead, as we have noticed in Fig.~\ref{matter_power_spetrum}, when the sound speed is small enough we do not see any oscillations and the matter power spectrum of UDM models is in agreement with $\Lambda$CDM. This agreement holds even at non-linear scales $k\gtrsim0.2\,h\,\mathrm{Mpc}^{-1}$.
Beyond the oscillations, these signals, expected for two different values of ${c_\infty}$, show us the effect of the effective Jeans length of the gravitational potential. In fact, the Newtonian potential in UDM models behaves like $\Lambda$CDM at scales much larger than $\lambda_J(a)$ (Eq.~\ref{jeans}), while at smaller scales it starts to decay and oscillate. Hence, at high values of $\ell$ and $k$, which correspond to small angular and physical scales, respectively, the signal of weak lensing observables, like cosmic shear, shows the decay of the gravitational potential.
Although the UDM signal for ${c_\infty}=1.0\cdot10^{-3}$ appears to be in agreement with the $\Lambda$CDM signal (fourth row of Fig.~\ref{shearsignal}), their fractional difference shown in the fifth row is still of order unity at $k\gtrsim1\,h\,\mathrm{Mpc}^{-1}$ and is not negligible. In fact, we will see in Section~\ref{selection} below that this low value of ${c_\infty}$ still yields a Bayesian evidence which indicates a statistically very large difference between this UDM model and $\Lambda$CDM.
Finally, in Fig.~\ref{shearsignal}, we can also notice that the higher the value of $\ell$, the smaller the physical scales which contribute to the shear signal. This effect is due to the approximate Bessel function inequality $\ell\leq k\chi$ in Eq.~(\ref{Iphi}). As the $\ell$ value increases, the diagonal terms of the covariance matrix do not become significant until $k\chi_\mathrm{max}\sim\ell$, where $\chi_\mathrm{max}\equiv\chi(z_\mathrm{max})$ is the upper limit imposed on the integration over the radial comoving distance.
\subsection{Estimation of cosmological parameters}\label{estimation}
Having introduced the method (Section~\ref{fisher}) and the survey design (Section~\ref{signal}), we now show cosmological parameter forecasts for such a survey and explore the variation in the marginal errors with changes in the sound speed parameter ${c_\infty}$.
By using the Fisher matrix analysis outlined in \citet{Taylor:2006aw}, we calculate predicted Fisher matrices and parameter constraints for a $20,000$ square-degree Euclid-like survey. In all Fisher matrix calculations we use a seven-parameter cosmological set $\{\Omega_m=\Omega_\mathrm{DM}+\Omega_b,\,\Omega_b,\,h,\,\Omega_\Lambda,\,\sigma_8,\,n_s,\,{c_\infty}\}$ with fiducial values $\{0.3,\,0.045,\,0.71,\,0.7,\,0.8,\,1.0\}$ for the first six. Since the results are sensitive to ${c_\infty}$, we repeat the calculation for twenty ${c_\infty}$ fiducial values from $5\cdot10^{-4}$ to $5\cdot10^{-2}$. We find that the Fisher matrices are unstable for ${c_\infty}\lesssim10^{-3}$: when the sound speed is this small, the UDM 3D shear signal is virtually indistinguishable from that of $\Lambda$CDM, and the numerical derivatives w.r.t. ${c_\infty}$ become unreliable.
Fig.~\ref{fisher_plot} shows the Fisher matrix elements marginalised over all other parameters. In dark blue(gray) we present the results for a UDM model with ${c_\infty}=1.0\cdot10^{-3}$ and in light blue(gray) for ${c_\infty}=5.4\cdot10^{-3}$. Notice that results are shown for universes which are not necessarily flat. In non-flat geometries, the spherical Bessel functions $j_\ell(k\chi)$ should be replaced by ultraspherical Bessel functions $\Phi^\ell_\beta(y)$ \citep{Heavens:2006uk}. For the cases considered here, where $\ell\gg1$ and $k\gg\left(\textrm{curvature scale}\right)^{-1}$, $\Phi^\ell_\beta(y)\to j_\ell(k\chi)$ \citep{Abbott:1986ct,Zaldarriaga:1999ep}. The expansion used is not ideal for curved universes, but it should be an adequate approximation given current constraints on flatness \citep[e.g.][]{Larson:2010gs}.
The Fisher errors for lensing are large enough that for some parameters $(\sigma_8,\,\Omega_b)$ the $1\sigma$ confidence region has an unphysical lower bound. We note that this is a symptom of the Gaussian approximation underlying the Fisher matrix formalism. \citet{Taylor:2010pi} address this concern by suggesting a semi-analytic approach that only assumes Gaussianity in particular parameter directions; we leave an implementation of this type of parameter error prediction, or a more sophisticated likelihood parameter search, for future investigation.
Before interpreting these results, it is important to underline that what deeply affects the matter power spectrum in UDM models, and thus the lensing signal, is the presence of an effective Jeans length for the Newtonian potential. Let us focus on Eq.~(\ref{eq-Mukhanov:2005sc-lcdm}): we can consider the asymptotic solutions, i.e. long wavelength and short wavelength perturbations, depending on whether $k\ll1/\lambda_J$ or $k\gg1/\lambda_J$, respectively. In the former case, the term in Eq.~(\ref{eq-Mukhanov:2005sc-lcdm}) involving the speed of sound of the scalar field is negligible, therefore the solution is formally the same as in the $\Lambda$CDM model (Eq.~\ref{longwave}), and the Fourier modes $\Phi_k(a)$ read \citep{Bertacca:2007cv}
\begin{equation}
\Phi_{k\ll1/\lambda_J}(a)\propto\left[1-\frac{H(a)}{a}\int_0^a\!\!\frac{\mathrm d a'}{H(a')}\right]\qquad(k\ll1/\lambda_J);
\end{equation}
instead, in the opposite r\'egime we have
\begin{equation}
\Phi_{k\gg1/\lambda_J}(a)\propto\frac{1}{\sqrt{c_s(a)}}\cos\left[k\int_0^a\!\!\mathrm d a'\,\frac{c_s(a')}{{a'}^2H(a')}\right]\;(k\gg1/\lambda_J).
\end{equation}
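The phase accumulated by these short-wavelength oscillations is again a simple quadrature; in the sketch below (fiducial densities assumed, $k$ in units of ${H_0}/c$) the integrand is regular at $a'=0$ since $c_s\to0$ there:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

Om_DM, Om_b, Om_L = 0.255, 0.045, 0.7
E = lambda a: np.sqrt((Om_DM + Om_b) * a**-3 + Om_L)
cs = lambda a, ci: np.sqrt(Om_L * ci**2 /
                           (Om_L + (1 - ci**2) * Om_DM * a**-3))

def phase(k, a, ci):
    return k * quad(lambda ap: cs(ap, ci) / (ap**2 * E(ap)), 0.0, a)[0]

print(phase(1e3, 1.0, 1e-2))   # radians accumulated by today
\end{verbatim}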
This means that what enters in the oscillatory dynamics is not only ${c_\infty}$, which however plays an important role, but also $\Omega_\mathrm{DM}$ and $\Omega_\Lambda$, as described in Eq.~(\ref{c_s-udm}). Therefore, the links which connect the expected marginal errors in Fig.~\ref{fisher_plot} with the corresponding fiducial ${c_\infty}$ are not quite straightforward. Moreover, we find that the Fisher matrix is rather sensitive to ${c_\infty}$. The errors we find on the sound speed parameter are almost constant, and go from $\Delta {c_\infty}=3.0\cdot10^{-5}$, for the fiducial value ${c_\infty}=1.0\cdot10^{-3}$, to $\Delta {c_\infty}=2.6\cdot10^{-5},$ when ${c_\infty}=1.2\cdot10^{-2}$.
\begin{figure*}
\centering
\includegraphics[width=\textwidth,trim=10mm 20mm 20mm 0mm,clip]{3dfisher}
\caption{Expected marginal errors on UDM model cosmological parameters from a $20,000\,\mathrm{deg}^2$ Euclid-like survey with a median redshift $z_m=0.8$. Ellipses show the $1\sigma$ errors for two parameters ($68\%$ confidence regions), marginalised over all the other parameters. Dark(light) ellipses refer to a UDM model with ${c_\infty}=1.0\cdot10^{-3}$(${c_\infty}=5.4\cdot10^{-3}$).}\label{fisher_plot}
\end{figure*}
It is already well known that weak lensing can tightly constrain the $(\Omega_m,\,\sigma_8)$-plane, using standard cosmic shear techniques \citep[see][]{Brown:2002wt,2006A&A...452...51S}, and 3D weak lensing constrains $\sigma_8$ in the same way by measuring the overall normalisation of the matter power spectrum. The expected marginal errors on $\Omega_m$ and $\sigma_8$ are in fact very promising, particularly in the perspective of combining the cosmic shear data with other cosmological observables, i.e. CMB or SNeIa \citep{Heavens:2006uk}. However, the presence of a sound speed can be mimicked in the power spectrum, at least in the non-linear r\'egime, by an accurate tuning of some parameter values, above all $\sigma_8$ and $n_s$ \citep{Camera:2009uz}. This is why the error ellipses for these parameters degrade for larger values of ${c_\infty}$.
In UDM models, there is another aspect which is particularly interesting to notice: we are able to lift the degeneracy between $\Omega_m$ and $\Omega_b$ without using early-Universe data. That is because $\Omega_\mathrm{DM}$ and $\Omega_b$ enter the growth of structure in two different ways. The expansion history of the Universe takes into account only their joint effect, through $\Omega_m$, whereas the speed of sound is determined by $\Omega_\mathrm{DM}$ alone. In fact, we have to keep in mind that in UDM models there is a scalar field which mimics both DM and $\Lambda$, but it still has its own dynamics, different from that of its counterparts in the $\Lambda$CDM model.
\subsection{Model selection}\label{selection}
In Section~\ref{B-evidence} we showed how the Bayes factor can be used to determine which model is favoured by the data. By using the Fisher matrix formalism for a Euclid-like survey, we compute the Bayes factor $B$ for UDM models over the standard $\Lambda$CDM cosmology. We fix a flat prior of width $\Delta {c_\infty}=1$.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth,trim=15mm 0mm 0mm 0mm,clip]{evidence.ps}
\caption{Bayes factor $-\ln B$ for UDM models over the standard $\Lambda$CDM model as a function of the sound speed parameter ${c_\infty}$.}\label{evidence}
\end{figure}
The large values of $-\ln B$ derive from the large deviations $\delta\vartheta_\alpha$ in Eq.~(\ref{shifts}), which yield an extremely small exponential. In turn, the deviations $\delta\vartheta_\alpha$ are large because, as shown in the right-most column of Fig.~\ref{fisher_plot}, (i) the ellipsoidal confidence regions are narrow, and (ii) they are almost vertical; in other words, the $\Lambda$CDM parameters that one would derive if living in a universe with a non-null ${c_\infty}$ would be largely biased.
We conclude that, if UDM is the correct model, there would be large evidence for UDM models over $\Lambda$CDM for values of ${c_\infty}\gtrsim10^{-3}$. However, if ${c_\infty}$ is so small that the UDM peculiar features in the matter power spectrum only appear at $k\gg1\,h\,\mathrm{Mpc}^{-1}$, namely on galactic or smaller scales, in principle, we might be unable to distinguish UDM from $\Lambda$CDM, unless the non-linear dynamics and/or the effects of the baryonic physics on the DM-like dynamics of the scalar field are largely different from what we expect in $\Lambda$CDM.
\section{Conclusions}\label{conclusions}
In this work, we calculate the expected error forecasts for a $20,000$ square degree survey with median redshift $z_m=0.8$ such as Euclid \citep{Cimatti:2009is,Refregier:2010ss} in the framework of unified models of DM and DE (UDM models). We focus on those UDM models which are able to reproduce the same Hubble parameter as in the $\Lambda$CDM model \citep{Bertacca:2007cv,Bertacca:2008uf}. In these UDM models, beyond standard matter and radiation, there is only one exotic component, a classical scalar field with a non-canonical kinetic term in its Lagrangian, which behaves like DM during structure formation, while at the present time it contributes to the total energy density of the Universe like a cosmological constant $\Lambda$.
In order to avoid the strong integrated Sachs-Wolfe effect which typically plagues UDM models, we follow the technique outlined by \citet{Bertacca:2008uf}, that allows one to construct a UDM model in which the sound speed is small enough to let the cosmological structures grow and reproduce the LSS we see today. This can be achieved by parameterising the sound speed with its value at late times, ${c_\infty}$.
An effect of the presence of a non-negligible speed of sound of the UDM scalar field is the emerging of an effective time-dependent Jeans length $\lambda_J(a)$ of the gravitational potential. It causes a strong suppression, followed by oscillations, of the Fourier modes $\Phi_{\mathbf k}(a)$ with $k\equiv|\mathbf k|>1/\lambda_J$. This reflects on the predicted lensing signal, because the latter is an integrated effect of the potential wells of the LSS over the path that the photons travel from the sources to the observer.
We calculate the 3D shear matrix $C^{\gamma\gamma}(k_1,k_2;\ell)$ in the flat-sky approximation for a large number of values of ${c_\infty}$. In agreement with \citet{Camera:2009uz}, we see that, whilst the agreement with the $\Lambda$CDM model is good for small values of ${c_\infty}$, when one increases the sound speed parameter, the lensing signal appears more suppressed at small scales, and moreover the 3D shear matrix does show bumps related to the oscillations of the gravitational potential.
We also compute the Fisher matrix for a Euclid-like survey. It has been shown that 3D lensing is a powerful tool in constraining cosmological parameters \citep[e.g.][]{Castro:2005bg}, and \citet{Heavens:2006uk} have demonstrated that it is particularly useful in unveiling the properties of the dark components of the Universe. By using a seven-parameter cosmological set $\{\Omega_m=\Omega_\mathrm{DM}+\Omega_b,\,\Omega_b,\,h,\,\Omega_\Lambda,\,\sigma_8,\,n_s,\,{c_\infty}\}$, with one fiducial value for each parameter, except for ${c_\infty}$, for which we use twenty values in the range $5\cdot10^{-4}\ldots5\cdot10^{-2}$, we obtain the expected marginal errors. However, the ${c_\infty}$ Fisher matrix elements are unstable in the parameter range ${c_\infty}\lesssim10^{-3}$, because the UDM signal is degenerate with respect to $\Lambda$CDM. Therefore, we restrict our analysis by considering only sound speed fiducial values larger than $\sim10^{-3}$. We obtain marginal errors that go from $\Delta {c_\infty}=3.0\cdot10^{-5}$, for the fiducial value ${c_\infty}=1.0\cdot10^{-3}$, to $\Delta {c_\infty}=2.6\cdot10^{-5}$, when ${c_\infty}=1.2\cdot10^{-2}$.
In the case of UDM models, 3D lensing is revealed to be even more useful for estimating cosmological parameters: because it encodes information from both the geometry and the dynamics of the Universe, it can lift the usual degeneracy between the DM and the baryon fractions, $\Omega_\mathrm{DM}$ and $\Omega_b$. This is because in the Hubble parameter, which determines the background evolution of the geometry of the Universe, both $\Omega_\mathrm{DM}$ and $\Omega_b$ enter in the usual way, through the total matter fraction $\Omega_m$. On the other hand, the speed of sound, which affects structure formation, and thus the dynamics of the Universe, depends only on the DM-like behaviour of the scalar field, since for baryons $c_s=0$ holds.
Finally, we compute the Bayesian expected evidence \citep[e.g.][]{Trotta:2005ar} for UDM models over the $\Lambda$CDM model as a function of the sound speed parameter ${c_\infty}$. The expected evidence clearly shows that the survey data would unquestionably favour UDM models over the standard $\Lambda$CDM model, if its sound speed parameter exceeds $\sim10^{-3}$.
\section*{Acknowledgments}
We thank the referee for her/his useful comments which contributed to remove some ambiguities in the presentation of our results. SC and AD gratefully acknowledge partial support from the INFN grant PD51. SC acknowledges Research Grants funded jointly by Ministero dell'Istruzione, dell'Universit\`a e della Ricerca (MIUR), by Universit\`a di Torino and by Istituto Nazionale di Fisica Nucleare within the {\sl Astroparticle Physics Project} (MIUR contract number: PRIN 2008NR3EBK). SC also acknowledges partial support from the Institute for Astronomy, University of Edinburgh and thanks it for the hospitality. TDK is supported by the STFC Rolling Grant number RA0888. DB would like to acknowledge the ICG Portsmouth for the hospitality during the development of this project and the ``Fondazione Ing. Aldo Gini" for support. DB research has been partly supported by ASI contract I/016/07/0 ``COFIS".
\bibliographystyle{mn2e}
\section{Introduction}
Risk measures are quantitative tools developed to determine the minimum capital reserves that financial institutions are required to maintain in order to ensure their financial stability.
An axiomatic analysis of risk assessment in terms of capital requirements was initiated by Artzner, Delbaen, Eber, and Heath~\cite{adeh97,adeh99}, who introduced coherent risk measures. F\"ollmer and Schied~\cite{fs2} and Frittelli and Rosazza Gianin~\cite{fr2} replaced positive homogeneity by convexity in the set of axioms and established the more general concept of a convex risk measure. Since then, convex and coherent risk measures and their applications have attracted a growing interest both in mathematical finance research and among practitioners.
One of the most appealing properties of a convex risk measure is its robustness against model uncertainty. Under some regularity condition, it can be represented as a suitably modified worst expected loss over a whole class of probabilistic models. This was initially observed in \cite{adeh99, fs2, fr2} in the static setting, where financial positions are described by random variables on some probability space and a risk measure is a real-valued functional. For a comprehensive presentation of the theory of static coherent and convex risk measures we refer to Delbaen~\cite{d0} and F\"ollmer and Schied~\cite[Chapter 4]{fs4}.
A natural extension of a static risk measure is given by a conditional risk measure, which takes into account the information available at the time of risk assessment. Like its static counterpart, a conditional convex risk measure can be represented as the worst conditional expected loss over a class of suitably penalized probability measures; see \cite{rse5,rie4,dt5,bn4,ks5, cdk6}. In the dynamical setting described by some filtered probability space, risk assessment is updated over time in accordance with the new information. This leads to the notion of a dynamic risk measure, which is a sequence of conditional risk measures adapted to the underlying filtration.
A crucial question in the dynamical framework is how risk evaluations at different times are interrelated.
Several notions of time consistency were introduced and studied in the literature. One of today's most widely used notions is strong time consistency, which corresponds to the dynamic programming principle; see \cite{adehk7,d6,dt5,ks5,cdk6,bn6,fp6,ck6,dpr10} and references therein. As shown in \cite{d6, bn6, fp6}, strong time consistency can be characterized by additivity of the acceptance sets and penalty functions, and also by a supermartingale property of the risk process and the penalty function process.
Similar characterizations of the weaker notions of time consistency, the so-called rejection and acceptance consistency, were given in \cite{Samuel, ipen7}. Rejection consistency, also called prudence in \cite{ipen7}, seems to be a particularly suitable property from the point of view of a regulator, since it ensures that one always stays on the safe side when updating risk assessment. The weakest notions of time consistency considered in the literature are weak acceptance and weak rejection consistency, which require that if some position is accepted (or rejected) for any scenario tomorrow, it should already be accepted (or rejected) today; see \cite{adehk7, Weber, tu8, burg, ros7}.
As pointed out in \cite{jr8, er08}, risk assessment in the multi-period setting should also account for uncertainty about the time value of money. This requires considering entire cash flow processes rather than total amounts at terminal dates as risky objects, and it leads to a further extension of the notion of a risk measure. Risk measures for processes were studied in \cite{adehk7, rie4, cdk4, cdk5, cdk6, ck6, fs6, jr8, afp9}. The new feature in this framework is that not only the amounts but also the timing of payments matters; cf.\ \cite{cdk6, ck6, jr8, afp9}. However, as shown in \cite{adehk7} in the static and in \cite{afp9} in the dynamical setting, risk measures for processes can be identified with risk measures for random variables on an appropriate product space. This allows a natural translation of results obtained in the framework of risk measures for random variables to the framework of processes; see \cite{afp9}.
The aim of this paper is to give an overview of the current theory of dynamic convex risk measures for random variables in a discrete-time setting; the corresponding results for risk measures for processes are given in \cite{afp9}. The paper is organized as follows. Section~\ref{setup} recalls the definition of a conditional convex risk measure and its interpretation as the minimal capital requirement from \cite{dt5}. Section~\ref{sectionrr} summarizes robust representation results from \cite{dt5,fp6,bn8}. In Section~\ref{sec:tc} we first give an overview of different time consistency properties based on \cite{tu8}. We then focus on the strong notion of time consistency in Subsection~\ref{subsec:tc}, where we characterize it by supermartingale properties of risk processes and penalty functions. The results of this subsection are mainly based on \cite{fp6}, with the difference that here we give characterizations of time consistency also in terms of absolutely continuous probability measures, similar to \cite{bn8}. In addition, we relate the martingale property of a risk process to the worst case measure, and we provide the explicit form of the Doob- and the Riesz-decomposition of the penalty function process. Subsection~\ref{subsec:rc} generalizes \cite[Sections 2.4, 2.5]{ipen7} and characterizes rejection and acceptance consistency in terms of acceptance sets and penalty functions, and, in the case of rejection consistency, by a supermartingale property of risk processes and one-step penalty functions. Subsection~\ref{subsec:wc} recalls characterizations of weak time consistency from \cite{tu8, burg}, and Subsection~\ref{recur} discusses the recursive construction of time consistent risk measures suggested in \cite{cdk6, ck6}. Finally, the dynamic entropic risk measure with a non-constant risk aversion parameter is studied in Section~\ref{entropic}.
\section{Setup and notation}\label{setup}
Let $T\in\mathbb N\cup\{\infty\}$ be the time horizon, $\mathbb{T}:=\{0,\ldots,T\}$ for $T<\infty$, and $\mathbb{T}:=\mathbb N_0$ for $T=\infty$. We consider a discrete-time setting given by a filtered probability space
$(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t\in\T}, P)$ with $\mathcal{F}_0=\{\emptyset, \Omega\}$, $\mathcal{F}=\mathcal{F}_T$ for $T<\infty$, and $\displaystyle{\mathcal{F}=\sigma(\cup_{t\ge 0}\mathcal{F}_t)}$ for $T=\infty$. For $t\in\mathbb{T}$, $L^{\infty}_t:=L^{\infty}(\Omega, \mathcal{F}_t,P)$ is the space of all essentially bounded $\mathcal{F}_t$-measurable random variables, and $L^\infty:=L^{\infty}(\Omega, \mathcal{F}_T,P)$. All equalities and inequalities between random variables and between sets are understood to hold $P$-almost surely, unless stated otherwise. We denote by $\M_1(P)$ (resp. by $\M^e(P)$) the set of all probability measures on $(\Omega, \mathcal{F})$ which are absolutely continuous with respect to $P$ (resp. equivalent to $P$).
In this work we consider risk measures defined on the set $L^{\infty}$, which is understood as the set of discounted terminal values of financial positions. In the dynamical setting, a conditional risk measure $\rho_t$ assigns to each terminal payoff $X$ an $\mathcal{F}_t$-measurable random variable $\rho_t(X)$, that quantifies the risk of the position $X$ given the information $\mathcal{F}_t$. A rigorous definition of a conditional convex risk measure was given in \cite[Definition 2]{dt5}.
\begin{definition}\label{defrm}
A map $\rho_t\,:\,L^{\infty}\,\rightarrow\,L^{\infty}_t$ is called a \emph{conditional convex risk measure} if it satisfies the
following properties for all $X,Y\inL^{\infty}$:
\begin{itemize}
\item[(i)]
Conditional cash invariance: For all $m_t\inL^{\infty}_t$
\[\rho_t(X+m_t)=\rho_t(X)-m_t;\]
\item[(ii)]
Monotonicity: $X\le Y\;\,\Rightarrow\;\,\rho_t(X)\ge\rho_t(Y)$;
\item[(iii)]
Conditional convexity: for all $\lambda\inL^{\infty}_t,\,0\le \lambda\le 1$:
\[
\rho_t(\lambda X+(1-\lambda)Y)\le\lambda\rho_t(X)+(1-\lambda)\rho_t(Y);
\]
\item[(iv)]
{Normalization}: $\rho_t(0)=0$.
\end{itemize}
A conditional convex risk measure is called a \emph{conditional coherent risk measure} if it has in addition
the following property:
\begin{itemize}
\item[(v)]
{Conditional positive homogeneity}: for all $\lambda\inL^{\infty}_t,\,\lambda\ge0$:
\[
\rho_t(\lambda X)=\lambda\rho_t(X).
\]
\end{itemize}
\end{definition}
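Before turning to examples, the defining properties are easy to test numerically in a toy model. The following sketch (illustrative code, not taken from any of the cited references) checks cash invariance, monotonicity and normalization for the entropic risk measure of Example~\ref{ex:entr} below, in the unconditional case $t=0$, where $\mathcal{F}_0$ is trivial and conditional expectations reduce to averages over simulated scenarios:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
gamma = 1.5
x = rng.normal(size=10_000)                  # scenarios for X

def rho0(v):
    # entropic risk measure under the uniform empirical measure
    return np.log(np.mean(np.exp(-gamma * v))) / gamma

m = 0.7
assert np.isclose(rho0(x + m), rho0(x) - m)  # (i) cash invariance
y = x + np.abs(rng.normal(size=x.size))      # Y >= X pointwise
assert rho0(y) <= rho0(x)                    # (ii) monotonicity
assert np.isclose(rho0(np.zeros_like(x)), 0) # (iv) normalization
\end{verbatim}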
In the dynamical framework one can also analyze risk assessment for cumulated cash flow \emph{processes} rather than just for terminal pay-offs, i.e.\ one can consider a risk measure that accounts not only for the amounts but also for the timing of payments. Such risk measures were studied in \cite{cdk4, cdk5, cdk6, ck6, fs6, jr8, afp9}. As shown in \cite{adehk7} in the static and in \cite{afp9} in the dynamical setting, convex risk measures for processes can be identified with convex risk measures for random variables on an appropriate product space. This allows us to extend results obtained in our present setting to the framework of processes; cf.\ \cite{afp9}.
If $\rho_t$ is a conditional convex risk measure, the function $\phi_t:=-\rho_t$ defines a conditional monetary utility function in the sense of \cite{cdk6, ck6}.
The term ``monetary'' refers to the conditional cash invariance of the utility function, the only property in Definition \ref{defrm} that does not come from classical utility theory. Conditional cash invariance is a natural requirement in view of the interpretation of $\rho_t$ as a conditional capital requirement. In order to formalize this aspect we first recall the notion of the \textit{acceptance set} of a conditional convex risk measure $\rho_t$:
\[
\mathcal{A}_t:=\left\{\, X\inL^{\infty}\;\big|\; \rho_t(X)\le0\right\}.
\]
The following properties of the acceptance set were given in \cite[Proposition 3]{dt5}.
\begin{proposition}\label{acceptset}
The acceptance set $\mathcal{A}_t$ of a conditional convex risk measure $\rho_t$ is
\begin{enumerate}
\item conditionally convex, i.e.\ $\alpha X+(1-\alpha)Y\in\mathcal{A}_t$ for all $X,Y\in\mathcal{A}_t$ and $\alpha$ $\mathcal{F}_t$-measurable such that $0\leq\alpha\leq 1$;
\item solid, i.e. $Y\in\mathcal{A}_t$ whenever $Y\geq X$ for some $X\in\mathcal{A}_t$;
\item such that $0\in\mathcal{A}_t$ and $\ei\left\{\, X\inL^{\infty}_t\;\big|\; X\in\mathcal{A}_t\right\}=0$.
\end{enumerate}
Moreover, $\rho_t$ is uniquely determined through its acceptance set, since
\begin{equation}\label{defviaset}
\rho_t(X)=\ei\left\{\, Y\inL^{\infty}_t\;\big|\; X+Y\in\mathcal{A}_t\right\}.
\end{equation}
Conversely, if some set $\mathcal{A}_t\subseteqL^{\infty}$ satisfies conditions 1)-3), then the functional $\rho_t\,:\;L^{\infty}\rightarrowL^{\infty}_t$ defined via (\ref{defviaset}) is a conditional convex risk measure.
\end{proposition}
\begin{proof} Properties 1)-3) of the acceptance set follow easily from properties (i)-(iii) in Definition~\ref{defrm}. To prove (\ref{defviaset}) note that by cash invariance $\rho_t(X)+X\in\mathcal{A}_t$ for all $X$, and this implies ``$\ge$'' in (\ref{defviaset}). On the other hand, for all $Z\in \left\{\, Y\inL^{\infty}_t\;\big|\; X+Y\in\mathcal{A}_t\right\}$ we have
\[
0\ge\rho_t(Z+X)=\rho_t(X)-Z,
\]
thus $\rho_t(X)\le\ei\left\{\, Y\inL^{\infty}_t\;\big|\; X+Y\in\mathcal{A}_t\right\}.$\\
For the proof of the last part of the assertion we refer to \cite[Proposition 3]{dt5}.\end{proof}
Due to (\ref{defviaset}), the value $\rho_t(X)$ can be viewed as the minimal conditional capital requirement that has to be added to the position $X$ in order to make it acceptable at time $t$. The following example shows how risk measures can be defined via (\ref{defviaset}).
\begin{example}\label{ex:entr}
Consider the set of all positions having non-negative conditional expected utility, i.e.
\[
\mathcal{A}_t:=\{X\inL^{\infty}\;\big|\; E[u_t(X)|\mathcal{F}_t]\geq 0\},
\]
where $u_t$ denotes some non-decreasing and concave utility function. It is easy to check that the set $\mathcal{A}_t$ has all
properties 1)-3) from Proposition~\ref{acceptset}. A basic choice is the exponential utility function
$u_t(x)=1-e^{-\gamma_tx}$, where $\gamma_t>0$ $P$-a.s.\ denotes the risk aversion parameter such that $\gamma_t,\frac{1}{\gamma_t}\inL^{\infty}_t$. The corresponding conditional convex risk measure $\rho_t$ associated to $\mathcal{A}_t$ via \eqref{defviaset} takes the form
\[
\rho_t(X)=\frac{1}{\gamma_t}\log E[e^{-\gamma_t X}|\mathcal{F}_t],\qquad X\inL^{\infty},
\]
and is called the \emph{conditional entropic risk measure}. The entropic risk measure was introduced in \cite{fs4} in the static setting; in the dynamical setting it appeared in \cite{bek4,ms5,dt5,cdk6,fp6,ck6}.
We characterize the dynamic entropic risk measure in Section~\ref{entropic}.
\end{example}
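The explicit formula in Example~\ref{ex:entr} follows from \eqref{defviaset} by a short computation: since $Y$ and $\gamma_t$ are $\mathcal{F}_t$-measurable, for $Y\inL^{\infty}_t$ we have
\[
X+Y\in\mathcal{A}_t\;\iff\; e^{-\gamma_t Y}E[e^{-\gamma_t X}|\mathcal{F}_t]\le 1\;\iff\; Y\ge\frac{1}{\gamma_t}\log E[e^{-\gamma_t X}|\mathcal{F}_t],
\]
and taking the essential infimum over all such $Y$ yields the stated expression for $\rho_t(X)$.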
\section{Robust representation}\label{sectionrr}
As observed in \cite{adeh99, fs4, fr2} in the static setting, the axiomatic properties of a convex risk measure yield, under some regularity condition, a representation of the minimal capital requirement as a suitably modified worst expected loss over a whole class of probabilistic models. In the dynamical setting, such robust representations of conditional coherent risk measures were obtained on a finite probability space in \cite{rse5} for random variables and in \cite{rie4} for stochastic processes.
On a general probability space, robust representations for conditional coherent and convex risk measures
were proved in \cite{dt5,bn4,burg,ks5,fp6,bn8} for random variables and in \cite{cdk6} for stochastic processes.
In this section we mainly summarize the results from \cite{dt5,fp6,bn8}.
The alternative probability measures in a robust representation of a risk measure $\rho_t$ contribute to the risk evaluation to different degrees. To formalize this aspect we use the notion of the minimal penalty function $\alpha_t^{\min}$, defined for each $Q\in\M_1(P)$ as
\begin{equation}\label{pf1}
\alpha_t^{\min}(Q)=\qes_{X\in\mathcal{A}_t}E_Q[-X\,|\,\F_t\,].
\end{equation}
The following property of the minimal penalty function is a standard result that will be used in the proof of Theorem~\ref{robdar}.
\begin{lemma}\label{erwpf}
For $Q\in\M_1(P)$ and $0\le s\le t$,
\[
E_Q[\alpha_t^{\min}(Q)|\mathcal{F}_s]=\qes_{Y\in\mathcal{A}_t}E_Q[-Y|\mathcal{F}_s]\quad Q\text{-a.s.}
\]
and in particular
\[
E_Q[\alpha_t^{\min}(Q)]=\sup_{Y\in\mathcal{A}_t}E_Q[-Y].
\]
\end{lemma}
\begin{proof} First we claim that the set
\[
\left\{\, E_Q[-X|\mathcal{F}_t]\;\big|\; X\in\mathcal{A}_t\right\}
\]
is directed upward for any $Q\in\M_1(P)$. Indeed, for $X,Y\in\mathcal{A}_t$ we can define $Z:=XI_A+YI_{A^c}$,
where $A:=\{E_Q[-X|\mathcal{F}_t]\ge E_Q[-Y|\mathcal{F}_t]\}\in\mathcal{F}_t$. Conditional convexity of $\rho_t$ implies that
$Z\in\mathcal{A}_t$, and by definition of $Z$
\[
E_Q[-Z|\mathcal{F}_t]=\max\left(E_Q[-X|\mathcal{F}_t],E_Q[-Y|\mathcal{F}_t]\right)\quad Q\text{-a.s.}.
\]
Hence there exists a sequence $(X^Q_n)_{n\in\mathbb N}$ in $\mathcal{A}_t$ such that
\begin{equation}\label{folge}
\alpha_t^{\min}(Q)=\lim_nE_Q[-X^Q_n|\mathcal{F}_t]\qquad Q\text{-a.s.},
\end{equation}
and by monotone convergence we get
\begin{align*}
E_Q[\alpha_t^{\min}(Q)|\mathcal{F}_s]&=\lim_nE_Q\left[\,E_Q[-X_n^Q|\mathcal{F}_t]\,\big|\,\mathcal{F}_s\,\right]\\
&\le\qes_{Y\in\mathcal{A}_t}E_Q[-Y|\mathcal{F}_s]\quad Q\text{-a.s.}.
\end{align*}
The converse inequality follows directly from the definition of $\alpha_t^{\min}(Q)$.\end{proof}
The following theorem relates robust representations to some continuity properties of conditional convex risk measures. It combines \cite[Theorem 1]{dt5} with \cite[Corollary 2.4]{fp6}; similar results can be found in \cite{bn4, ks5, cdk6}.
\begin{theorem}\label{robdar}
For a conditional convex risk measure $\rho_t$ the following are equivalent:
\begin{enumerate}
\item
$\rho_t$ has a robust representation
\begin{equation}\label{rd0}
\rho_t(X)=\es_{Q\in\Q_t}(E_Q[-X\,|\,\F_t\,]-\alpha_t(Q)),\qquad X\inL^{\infty},
\end{equation}
where
\begin{equation*}
\mathcal{Q}_t:=\left\{\, Q\in\M_1(P)\;\big|\; Q=P|_{\mathcal{F}_t}\right\}
\end{equation*}
and $\alpha_t$ is a map from $\mathcal{Q}_t$ to the set of $\mathcal{F}_t$-measurable random variables with values in $\mathbb R\cup\{+\infty\}$, such that $\es_{Q\in\Q_t}(-\alpha_t(Q))=0$.
\item
$\rho_t$ has the robust representation in terms of the minimal penalty function, i.e.
\begin{equation}\label{rd1}
\rho_t(X)=\es_{Q\in\Q_t}(E_Q[-X\,|\,\F_t\,]-\alpha_t^{\min}(Q)),\qquad X\inL^{\infty},
\end{equation}
where $\alpha_t^{\min}$ is given in (\ref{pf1}).
\item $\rho_t$ has the robust representation
\begin{equation}\label{rd2}
\rho_t(X)=\es_{Q\in\mathcal{Q}^f_t}(E_Q[-X\,|\,\F_t\,]-\alpha_t^{\min}(Q))\quad P\text{-a.s.},\qquad X\inL^{\infty},
\end{equation}
where
\[
\mathcal{Q}_t^f:=\left\{\, Q\in\M_1(P)\;\big|\; Q=P|_{\mathcal{F}_t},\;E_{Q}[\alpha_t^{\min}(Q)]<\infty\right\}.
\]
\item $\rho_t$ has the ``Fatou-property'': for any bounded sequence $(X_n)_{n\in\mathbb N}$ which converges $P$-a.s.\ to some $X$,
\[
\rho_t(X)\le\liminf_{n\to\infty}\rho_t(X_n)\quadP\mbox{-a.s.}.
\]
\item
$\rho_t$ is continuous from above, i.e.
\[
X_n\searrow X\;\,P\text{-a.s}\quad\Longrightarrow\quad \rho_t(X_n)\nearrow\rho_t(X)\;\,P\text{-a.s}
\]
for any sequence $(X_n)_n\subseteqL^{\infty}$ and $X\inL^{\infty}$.
\end{enumerate}
\end{theorem}
\begin{proof}
3) $\;\Rightarrow\; $ 1) and 2) $\;\Rightarrow\; $ 1) are obvious.
1) $\,\Rightarrow\, $ 4): Dominated convergence implies that $E_Q[X_n|\mathcal{F}_t]\rightarrow E_Q[X|\mathcal{F}_t]$ for each
$Q\in{\mathcal Q}_t$, and $\liminf_{n\to\infty}\rho_t(X_n)\ge\rho_t(X)$ follows by using the robust representation of
$\rho_t$ as in the unconditional setting, see, e.g., \cite[Lemma 4.20]{fs4}.
4) $\,\Rightarrow\, $ 5): Monotonicity implies $\limsup_{n\to\infty}\rho_t(X_n)\le\rho_t(X)$, and $\liminf_{n\to\infty}\rho_t(X_n)\ge\rho_t(X)$ follows
by 4).
5) $\,\Rightarrow\, $ 2): The inequality
\begin{equation}\label{ungl1}
\rho_t(X)\ge\es_{Q\in{\mathcal Q}_t}(E_Q[-X\,|\,\F_t\,]-\alpha_t^{\min}(Q))
\end{equation}
follows from the definition of $\alpha_t^{\min}$. In order to prove the equality we will show that
\[
E_P[\rho_t(X)]\le E_P\left[\es_{Q\in\Q_t}(E_Q[-X\,|\,\F_t\,]-\alpha_t^{\min}(Q))\right].
\]
To this end, consider the map $\rho^P\,:\,L^{\infty}\,\rightarrow\,\mathbb R$ defined by
$\rho^P(X):=E_P[\rho_t(X)]$. It is easy to check that $\rho^P$ is a convex risk measure which
is continuous from above. Hence \cite[Theorem 4.31]{fs4} implies that
$\rho^P$ has the robust representation
\[
\rho^P(X)=\sup_{Q\in\mathcal{M}_1(P)}(E_Q[-X]-\alpha(Q))\qquad X\inL^{\infty},
\]
where the penalty function $\alpha(Q)$ is given by
\[
\alpha(Q)=\sup_{X\inL^{\infty}: \rho^P(X)\le0}E_Q[-X].
\]
Next we will prove that $Q\in\mathcal{Q}_t$ if $\alpha(Q)<\infty$. Indeed, let $A\in\mathcal{F}_t$ and $\lambda>0$. Then
\[
-\lambda P[A]=E_P[\rho_t(\lambda I_A)]=\rho^P(\lambda I_A)\ge E_Q[-\lambda I_A]-\alpha(Q),
\]
so
\[
P[A]\le Q[A]+\frac{1}{\lambda}\alpha(Q)\quad\mbox{for all}\quad \lambda>0,
\]
and hence $P[A]\le Q[A]$ if $\alpha(Q)<\infty$. The same reasoning with $\lambda<0$ implies
$P[A]\ge Q[A]$, thus $P = Q$ on $\mathcal{F}_t$ if $\alpha(Q)<\infty$. By Lemma~\ref{erwpf}, we have for every $Q\in{\mathcal Q}_t$
\[
E_P[\alpha_t^{\min}(Q)]=\sup_{Y\in\mathcal{A}_t}E_P[-Y].
\]
Since $\rho^P(Y)\le0$ for all $Y\in\mathcal{A}_t$, this implies
\begin{equation*}
E_P[\alpha_t^{\min}(Q)]\le\alpha(Q)
\end{equation*}
for all $Q\in{\mathcal Q}_t$, by definition of the penalty function $\alpha(Q)$.
Finally we obtain
\begin{align}\label{rdbeweis}
E_P[\rho_t(X)]=\rho^P(X)&=\sup_{Q\in\mathcal{M}_1(P), \alpha(Q)<\infty}\left(E_Q[-X]-\alpha(Q)\right)\nonumber\\
&\le\sup_{Q\in\mathcal{Q}_t, E_P[\alpha_t^{\min}(Q)]<\infty}\left(E_Q[-X]-\alpha(Q)\right)\nonumber\\
&\le\sup_{Q\in\mathcal{Q}_t, E_P[\alpha_t^{\min}(Q)]<\infty}E_P[E_Q[-X|\mathcal{F}_t]-\alpha_t^{\min}(Q)]\nonumber\\
&\le E_P\left[\es_{Q\in\mathcal{Q}_t, E_P[\alpha_t^{\min}(Q)]<\infty}\left(E_Q[-X|\mathcal{F}_t]-\alpha_t^{\min}(Q)\right)\right]\\
&\le E_P\left[\es_{Q\in\mathcal{Q}_t}\left(E_Q[-X|\mathcal{F}_t]-\alpha_t^{\min}(Q)\right)\right]\nonumber,
\end{align}
proving equality (\ref{rd1}).
5) $\,\Rightarrow\, $ 3) The inequality
\[
\rho_t(X)\ge\es_{Q\in\mathcal{Q}^f_t}(E_Q[-X\,|\,\F_t\,]-\alpha_t^{\min}(Q))
\]
follows from (\ref{ungl1}) since $\mathcal{Q}_t^f\subseteq{\mathcal Q}_t$, and (\ref{rdbeweis}) proves the
equality.
\end{proof}
The penalty function $\alpha_t^{\min}(Q)$ is minimal in the sense that any other function $\alpha_t$ in a
robust representation \eqref{rd0} of $\rho_t$ satisfies
\[
\alpha_t^{\min}(Q)\le\alpha_t(Q)\;\,P\mbox{-a.s.}
\]
for all $Q\in{\mathcal Q}_t$.
An alternative formula for the minimal penalty function is given by
\begin{equation*}
\alpha_t^{\min}(Q)=\es_{X\inL^{\infty}}\,\left(E_Q[-X\,|\,\F_t\,]-\rho_t(X)\right)\quad\mbox{for all}\;\, Q\in{\mathcal Q}_t.
\end{equation*}
This follows as in the unconditional case; see, e.g., \cite[Theorem 4.15, Remark 4.16]{fs4}.
\begin{remark}\label{abg}
Another characterization of a conditional convex risk measure $\rho_t$ that is equivalent to the properties 1)-4) of Theorem \ref{robdar} is the following: The acceptance set $\mathcal{A}_t$ is weak$^\ast$-closed, i.e., it is closed in $L^{\infty}$ with respect to the topology $\sigma(L^{\infty}, L^1(\Omega,\mathcal{F},P))$.
This equivalence was shown in
\cite{cdk6} in the context of risk measures for processes and in
\cite{ks5} for risk measures for random variables. Though in \cite{ks5} a slightly different definition of a conditional risk measure is used, the reasoning given there works just the same in our case; cf.\ \cite[Theorem 3.16]{ks5}.
\end{remark}
For the characterization of time consistency in Section~\ref{sec:tc} we will need a robust representation of a conditional convex risk measure $\rho_t$ under any measure $Q\in\M_1(P)$, where possibly $Q\notin\mathcal{Q}_t$. Such representation can be obtained as in Theorem~\ref{robdar} by considering $\rho_t$ as a risk measure under $Q$, as shown in the next corollary. This result is a version of \cite[Proposition 1]{bn8}.
\begin{corollary}\label{corrobdar}
A conditional convex risk measure $\rho_t$ is continuous from above if and only if it has the robust representations
\begin{align}
\rho_t(X)&=\qes_{R\in\mathcal{Q}_t(Q)}(E_R[-X\,|\,\F_t\,]-\alpha_t^{\min}(R))\label{rd3}\\
&=\qes_{R\in\mathcal{Q}^f_t(Q)}(E_R[-X\,|\,\F_t\,]-\alpha_t^{\min}(R))\quad Q\text{-a.s.},\quad \forall X\inL^{\infty},\label{rd3f}
\end{align}
for all $Q\in\M_1(P)$, where
\begin{equation*}
\mathcal{Q}_t(Q)=\left\{\, R\in\M_1(P)\;\big|\; R=Q|_{\mathcal{F}_t}\right\}
\end{equation*}
and
\[
\mathcal{Q}_t^f(Q)=\left\{\, R\in\mathcal{M}_1(P)\;\big|\; R=Q|_{\mathcal{F}_t},\;E_{R}[\alpha_t^{\min}(R)]<\infty\right\}.
\]
\end{corollary}
\begin{proof} To show that continuity from above implies representation \eqref{rd3}, we can replace $P$ by a probability measure $Q\in\M_1(P)$ and repeat all the reasoning of the proof of 5)$\Rightarrow$2) in Theorem~\ref{robdar}. In this case we consider the static convex risk measure
\[
\rho^Q(X)=E_Q[\rho_t(X)]=\sup_{R\in\mathcal{M}_1(P)}(E_R[-X]-\alpha(R)),\qquad X\inL^{\infty},
\]
instead of $\rho^P$. The proof of \eqref{rd3f} follows in the same way from \cite[Corollary 2.4]{fp6}. Conversely, continuity from above follows from Theorem~\ref{robdar} since representation \eqref{rd3} holds under $P$.
\end{proof}
\begin{remark}
One can easily see that the set $\mathcal{Q}_t$ in representations \eqref{rd0} and \eqref{rd1} can be replaced by
$\mathcal{P}_t:=\left\{\, Q\in\M_1(P)\;\big|\; Q\approx P\;\text{on}\;\mathcal{F}_t\right\}$.
Moreover, representation \eqref{rd0} is also equivalent to
\[
\rho_t(X)=\es_{Q\in\M_1(P)}(E_Q[-X\,|\,\F_t\,]-\hat{\alpha}_t(Q)),\qquad X\inL^{\infty},
\]
where the conditional expectation under $Q\in\M_1(P)$ is defined under $P$ as
\[
E_Q[X|\mathcal{F}_t]:=\frac{E_P[Z_TX|\mathcal{F}_t]}{Z_t}I_{\{Z_t>0\}},
\]
and the extended penalty function $\hat{\alpha}_t$ is given by
\begin{eqnarray*}
\hat{\alpha}_t(Q) = \left\{ \begin{array}{ll} \alpha_t(Q) & \textrm{on}\;\{\frac{dQ}{dP}|_{\mathcal{F}_t}>0\}; \\
+\infty & \textrm{otherwise}.
\end{array} \right.
\end{eqnarray*}
\end{remark}
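On a finite tree, the conditional expectation under $Q$ defined in the remark above is a one-line computation. The following sketch (with made-up values for the density $Z_T=\frac{dQ}{dP}$) evaluates $E_Q[X|\mathcal{F}_1]$ on a two-period binary tree under a uniform reference measure $P$:
\begin{verbatim}
import numpy as np

# four equally likely paths; rows = first move, columns = second move
Z_T = np.array([[1.8, 0.6],
                [0.4, 1.2]])            # dQ/dP per path, E_P[Z_T] = 1
X = np.array([[2.0, -1.0],
              [0.5, 2.5]])              # terminal payoff

Z_1 = Z_T.mean(axis=1)                  # density process Z_1 = E_P[Z_T|F_1]
EQ_X_F1 = (Z_T * X).mean(axis=1) / Z_1  # E_P[Z_T X|F_1] / Z_1 on {Z_1 > 0}
print(EQ_X_F1)                          # one value per F_1-atom
\end{verbatim}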
In the \textit{coherent} case the penalty function $\alpha_t^{\min}(Q)$ can only take the values $0$ and $\infty$, due to the positive homogeneity of $\rho_t$. Thus representation \eqref{rd1} takes the following form.
\begin{corollary}\label{rdcoherent}
A conditional coherent risk measure $\rho_t$ is continuous from above if and only if it is
representable in the form
\begin{equation}\label{rdcoh}
\rho_t(X)=\es_{Q\in\mathcal{Q}^0_t}E_Q[-X\,|\,\F_t\,],\qquad X\inL^{\infty},
\end{equation}
where
\[
\mathcal{Q}_t^0:=\left\{\, Q\in\mathcal{Q}_t\;\big|\;\alpha_t^{\min}(Q)=0\; Q\mbox{-a.s.}\right\}.
\]
\end{corollary}
\begin{example}\label{avar}
A notable example of a conditional coherent risk measure is \emph{conditional Average Value at Risk} defined as
\begin{eqnarray*}
AV@R_{t,\lambda_t}(X):=\es\{E_Q[-X|\mathcal{F}_t]\;\big|\; Q\in\mathcal{Q}_t, \frac{dQ}{dP}\leq \lambda_t^{-1}\}
\end{eqnarray*}
with $\lambda_t\inL^{\infty}_t$, $0<\lambda_t\leq 1$. Static Average Value at Risk was introduced in \cite{adeh99} as a valid alternative to the widely used yet criticized Value at Risk.
The conditional version of Average Value at Risk appeared in \cite{adehk7}, and was also studied in \cite{Samuel, vo6}.
\end{example}
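For a constant level $\lambda\in(0,1]$ and $t=0$, the supremum in the definition above is attained, in an atomless model, by the measure that puts the maximal admissible density $\lambda^{-1}$ on the worst outcomes of $X$; this leads to the familiar tail-average estimator. A rough Monte Carlo sketch (illustrative code, ignoring the fractional-atom correction at the quantile):
\begin{verbatim}
import numpy as np

def avar(x, lam):
    # mean of the worst ceil(lam * n) losses of the position x
    losses = np.sort(-np.asarray(x))[::-1]    # losses, largest first
    k = max(int(np.ceil(lam * len(losses))), 1)
    return losses[:k].mean()

x = np.random.default_rng(1).normal(size=100_000)
print(avar(x, 0.05))   # close to 2.06 for a standard normal position
\end{verbatim}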
As observed, e.g., in \cite[Remark 3.13]{cdk6}, the minimal penalty function has the local property. In our context it means that for any $Q^1, Q^2\in\mathcal{Q}_t(Q)$ with the corresponding density processes $Z^1$ and $Z^2$ with respect to $P$, and for any $A\in\mathcal{F}_t$, the probability measure $R$ defined via $\frac{dR}{dP}:=I_AZ^1_T+I_{A^{\text{c}}}Z^2_T$ has the penalty function value
\[
\alpha_t^{\min}(R)= I_A\alpha_t^{\min}(Q^1)+I_{A^{\text{c}}}\alpha_t^{\min}(Q^2)\qquad Q\text{-a.s.}.
\]
In particular $R\in\mathcal{Q}_t^f(Q)$ if $Q^1, Q^2\in\mathcal{Q}_t^f(Q)$. Standard arguments (cf., e.g., \cite[Lemma 1]{dt5}) then imply that the set
\[
\left\{\, E_R[\,-X\,|\,\mathcal{F}_t]-\alpha_t^{\min}(R)\;\big|\; R\in\mathcal{Q}^f_t(Q)\right\}
\]
is directed upward, thus
\begin{equation}\label{erwrho}
E_{Q}[\rho_t(X)|\mathcal{F}_s]=\qes_{R\in\mathcal{Q}^f_t(Q)}\left(E_{R}[-X|\mathcal{F}_s]-E_{R}[\alpha_t^{\min}(R)|\mathcal{F}_s]\right)
\end{equation}
for all $Q\in\M_1(P), X\inL^{\infty}(\Omega, \F,P)$ and $0\le s\leq t$.
\section{Time consistency properties}\label{sec:tc}
In the dynamical setting risk assessment of a financial position is updated when new information is released. This leads to the notion of a dynamic risk measure.
\begin{definition}\label{dcrm}
A sequence $(\rho_t)_{t\in\T}$ is called a \emph{dynamic convex risk measure} if $\rho_t$ is a conditional convex risk measure for each $t\in\mathbb{T}$.
\end{definition}
A key question in the dynamical setting is how the conditional risk assessments at different times are interrelated. This question has led to several notions of time consistency discussed in the literature.
A unifying view was suggested in \cite{tu8}.
\begin{definition}\label{sina}
Assume that $(\rho_t)_{t\in\T}$ is a dynamic convex risk measure and let $\mathcal{Y}_t$ be a subset of $L^{\infty}$ such that $0\in\mathcal{Y}_t$ and $\mathcal{Y}_t+\mathbb R=\mathcal{Y}_t$ for each $t\in\mathbb{T}$. Then $(\rho_t)_{t\in\T}$ is called \emph{acceptance (resp. rejection) consistent with respect to $(\mathcal{Y}_t)_{t\in\T}$}, if for all $t\in\mathbb{T}$ such that $t<T$ and for any $X\inL^{\infty}$ and $Y\in\mathcal{Y}_{t+1}$ the following condition holds:
\begin{equation}\label{definition1}
\rho_{t+1}(X)\le\rho_{t+1}(Y)\;\;(\mbox{resp.}\,\ge)\quad\Longrightarrow\quad\rho_{t}(X)\le\rho_{t}(Y)\;\;(\mbox{resp.}\,\ge).
\end{equation}
\end{definition}
The idea is that the degree of time consistency is determined by a sequence of benchmark sets $(\mathcal{Y}_t)_{t\in\T}$: if a financial position at some future time is always preferable to some element of the benchmark set, then it should also be preferable today. The bigger the benchmark set, the stronger the resulting notion of time consistency. In the following we focus on three cases.
\begin{definition}\label{cons}
We call a dynamic convex risk measure $(\rho_t)_{t\in\T}$
\begin{enumerate}
\item \emph{strongly time consistent}, if it is either acceptance consistent or rejection consistent with respect to $\mathcal{Y}_t=L^{\infty}$ for all $t$ in the sense of Definition~\ref{sina};
\item \emph{middle acceptance (resp. middle rejection) consistent}, if for all $t$ we have $\mathcal{Y}_t=L^\infty_t$ in Definition~\ref{sina};
\item \emph{weakly acceptance (resp. weakly rejection) consistent}, if for all $t$ we have $\mathcal{Y}_t=\mathbb R$ in Definition~\ref{sina}.
\end{enumerate}
\end{definition}
Note that there is no difference between rejection consistency and acceptance consistency with respect to $L^{\infty}$, since the roles of $X$ and $Y$ are symmetric in that case. Obviously strong time consistency implies both middle rejection and middle acceptance consistency, and middle rejection (resp. middle acceptance) consistency implies weak rejection (resp. weak acceptance) consistency. In the rest of the paper we drop the terms ``middle'' and ``strong'' in order to simplify the terminology.
\subsection{Time consistency}\label{subsec:tc}
Time consistency has been studied extensively in the recent work on dynamic risk measures, see \cite{adehk7,d6,rie4,dt5,cdk6,ks5,burg,bn8,ipen7,fp6,ck6, dpr10} and the references therein. In the next proposition we recall some equivalent characterizations of time consistency.
\begin{proposition}\label{def2}
A dynamic convex risk measure $(\rho_t)_{t\in\T}$ is time consistent if and only if any of the following conditions holds:
\begin{enumerate}
\item for all $t\in\mathbb{T}$ such that $t<T$ and for all $X,Y\inL^{\infty}$:
\begin{equation}\label{tc4}
\rho_{t+1}(X)\le\rho_{t+1}(Y)\;\,P\text{-a.s}\quad\Longrightarrow\quad\rho_t(X)\le\rho_t(Y)\;\,P\text{-a.s.};
\end{equation}
\item for all $t\in\mathbb{T}$ such that $t<T$ and for all $X,Y\inL^{\infty}$:
\begin{equation}\label{tc2}
\rho_{t+1}(X)=\rho_{t+1}(Y)\;\,P\text{-a.s}\quad\Longrightarrow\quad\rho_t(X)=\rho_t(Y)\;\,P\text{-a.s.};
\end{equation}
\item
$(\rho_t)_{t\in\T}$ is recursive, i.e.
\[
\rho_t=\rho_t(-\rho_{t+s})\quad P\text{-a.s.}
\]
for all $t,s\ge 0$ such that $t,t+s\in\mathbb{T}$.
\end{enumerate}
\end{proposition}
\begin{proof}
It is obvious that time consistency implies condition (\ref{tc4}), and that (\ref{tc4}) implies (\ref{tc2}). By cash invariance we have $\rho_{t+1}(-\rho_{t+1}(X))=\rho_{t+1}(X)$ and hence one-step recursiveness follows from (\ref{tc2}). We prove that one-step recursiveness implies recursiveness by induction on $s$. For $s=1$ the claim is true for all $t$. Assume that the induction hypothesis holds for each $t$ and all $k\le s$ for some $s\ge 1$. Then we obtain
\begin{align*}
\rho_t(-\rho_{t+s+1}(X))&=\rho_t(-\rho_{t+s}(-\rho_{t+s+1}(X)))\\
&=\rho_t(-\rho_{t+s}(X))\\
&=\rho_t(X),
\end{align*}
where we have applied the induction hypothesis to the random variable $-\rho_{t+s+1}(X)$. Hence the claim
follows. Finally, due to monotonicity, recursiveness implies time consistency.
\end{proof}
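For a concrete illustration of recursiveness, the following sketch (illustrative code) verifies $\rho_0(X)=\rho_0(-\rho_1(X))$ on a uniform two-period binary tree for the entropic risk measure of Example~\ref{ex:entr} with constant risk aversion, for which the recursion reduces to the tower property of conditional expectation:
\begin{verbatim}
import numpy as np

gamma = 2.0
# rows = first move, columns = second move; four equally likely paths
X = np.array([[1.0, -0.5],
              [0.3, -2.0]])                    # terminal payoff

def rho(v, g=gamma):
    # entropic risk measure of a finite uniform distribution
    return np.log(np.mean(np.exp(-g * np.asarray(v)))) / g

rho_1 = np.array([rho(X[0]), rho(X[1])])       # rho_1(X) on the F_1-atoms
assert np.isclose(rho(-rho_1), rho(X.ravel())) # rho_0(-rho_1(X)) = rho_0(X)
\end{verbatim}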
If we restrict a conditional convex risk measure $\rho_t$ to the space $L^\infty_{t+s}$ for some $s\ge0$, the corresponding acceptance set is given by
\[
\mathcal{A}_{t,t+s}:=\left\{\, X\in L^\infty_{t+s}\;\big|\;\rho_t(X)\le0\;\,P\text{-a.s.}\right\},
\]
and the minimal penalty function by
\begin{equation}\label{ats}
\alpha_{t, t+s}^{\min}(Q):=\qes_{X\in\mathcal{A}_{t,t+s}}\,E_Q[-X\,|\,\F_t\,], \qquad Q\in\M_1(P).
\end{equation}
The following lemma recalls equivalent characterizations of recursive inequalities in terms of acceptance sets from \cite[Lemma 4.6]{fp6}; property \eqref{eqacset1} was shown in \cite{d6}.
\begin{lemma}\label{4.6}
Let $(\rho_t)_{t\in\T}$ be a dynamic convex risk measure. Then the following equivalences hold for all
$s,t$ such that $t,t+s\in\mathbb{T}$ and all $X\inL^{\infty}$:
\begin{align}
X\in\mathcal{A}_{t,t+s}+\mathcal{A}_{t+s}&\iff-\rho_{t+s}(X)\in\mathcal{A}_{t,t+s}\label{eqacset1}\\
\mathcal{A}_t\subseteq \mathcal{A}_{t,t+s}+\mathcal{A}_{t+s}&\iff\rho_t(-\rho_{t+s})\le\rho_t\quadP\mbox{-a.s.}\label{eqacset2}\\
\mathcal{A}_t\supseteq \mathcal{A}_{t,t+s}+\mathcal{A}_{t+s}&\iff\rho_t(-\rho_{t+s})\ge\rho_t\quadP\mbox{-a.s.}.\label{eqacset3}
\end{align}
\end{lemma}
\begin{proof}
To prove ``$\Rightarrow$'' in (\ref{eqacset1}) let $X=X_{t,t+s}+X_{t+s}$ with $X_{t,t+s}\in\mathcal{A}_{t,t+s}$ and
$X_{t+s}\in\mathcal{A}_{t+s}$. Then
\[
\rho_{t+s}(X)=\rho_{t+s}(X_{t+s})-X_{t,t+s}\le-X_{t,t+s}
\]
by cash invariance, and monotonicity implies
\[
\rho_t(-\rho_{t+s}(X))\le\rho_t(X_{t,t+s})\le0.
\]
The converse direction follows immediately from
$X=X+\rho_{t+s}(X)-\rho_{t+s}(X)$ and $X+\rho_{t+s}(X)\in\mathcal{A}_{t+s}$ for all $X\inL^{\infty}$.
In order to show ``$\Rightarrow$'' in (\ref{eqacset2}), fix $X\inL^{\infty}$. Since
$X+\rho_t(X)\in\mathcal{A}_t\subseteq \mathcal{A}_{t,t+s}+\mathcal{A}_{t+s}$, we obtain
\[
\rho_{t+s}(X)-\rho_t(X)=\rho_{t+s}(X+\rho_t(X))\in-\mathcal{A}_{t,t+s},
\]
by (\ref{eqacset1}) and cash invariance. Hence
\[
\rho_t(-\rho_{t+s}(X))-\rho_t(X)=\rho_t(-(\rho_{t+s}(X)-\rho_t(X)))\le0\quadP\mbox{-a.s.}.
\]
To prove ``$\Leftarrow$'' let $X\in\mathcal{A}_t$. Then $-\rho_{t+s}(X)\in\mathcal{A}_{t,t+s}$ by the right hand side
of (\ref{eqacset2}), and hence $X\in\mathcal{A}_{t,t+s}+\mathcal{A}_{t+s}$ by (\ref{eqacset1}).
Now let $X\inL^{\infty}$ and assume $\mathcal{A}_t\supseteq \mathcal{A}_{t,t+s}+\mathcal{A}_{t+s}$. Then
\begin{align*}
\rho_t(-\rho_{t+s}(X))+X&=\rho_t(-\rho_{t+s}(X))-\rho_{t+s}(X)+\rho_{t+s}(X)+X\\&\in\mathcal{A}_{t,t+s}+\mathcal{A}_{t+s}\subseteq\mathcal{A}_t.
\end{align*}
Hence
\[
\rho_t(X)-\rho_t(-\rho_{t+s}(X))=\rho_t(X+\rho_t(-\rho_{t+s}(X)))\le 0
\]
by cash invariance, and this proves ``$\Rightarrow$'' in (\ref{eqacset3}). For the converse direction let
$X\in\mathcal{A}_{t,t+s}+\mathcal{A}_{t+s}$. Since $-\rho_{t+s}(X)\in\mathcal{A}_{t,t+s}$ by (\ref{eqacset1}), we obtain
\[
\rho_t(X)\le\rho_t(-\rho_{t+s}(X))\le0,
\]
hence $X\in\mathcal{A}_t$.\end{proof}
We also have the following relation between acceptance sets and penalty functions; cf.\ \cite[Lemma 2.2.5]{ipen7}.
\begin{lemma}\label{setpen}
Let $(\rho_t)_{t\in\T}$ be a dynamic convex risk measure. Then the following implications hold for all
$t,s$ such that $t,t+s\in\mathbb{T}$ and for all $Q\in\M_1(P)$:
\begin{align*}
\mathcal{A}_t\subseteq \mathcal{A}_{t,t+s}+\mathcal{A}_{t+s}&\Rightarrow\alpha_t^{\min}(Q)\le\alpha_{t, t+s}^{\min}(Q)+E_Q[\alpha_{t+s}^{\min}(Q)|\mathcal{F}_t]\quadQ\mbox{-a.s.}
\\
\mathcal{A}_t\supseteq \mathcal{A}_{t,t+s}+\mathcal{A}_{t+s}&\Rightarrow\alpha_t^{\min}(Q)\ge\alpha_{t, t+s}^{\min}(Q)+E_Q[\alpha_{t+s}^{\min}(Q)|\mathcal{F}_t]\quadQ\mbox{-a.s.}
\end{align*}
\end{lemma}
\begin{proof} Straightforward from the definition of the minimal penalty function and Lemma~\ref{erwpf}.
\end{proof}
The following theorem gives equivalent characterizations of time consistency in terms of acceptance sets, penalty functions, and a supermartingale property of the risk process.
\begin{theorem}\label{eqchar}
Let $(\rho_t)_{t\in\T}$ be a dynamic convex risk measure such that each $\rho_t$ is continuous from above. Then the
following conditions are equivalent:
\begin{enumerate}
\item $(\rho_t)_{t\in\T}$ is time consistent.
\item $\mathcal{A}_t=\mathcal{A}_{t,t+s}+\mathcal{A}_{t+s}\,$ for all $t,s$ such that $t,t+s\in\mathbb{T}$.
\item $\alpha_t^{\min}(Q)=\alpha_{t, t+s}^{\min}(Q)+E_Q[\,\alpha_{t+s}^{\min}(Q)\,|\,\mathcal{F}_t\,]\;\; Q$-a.s. for all $t,s$ such that $t,t+s\in\mathbb{T}$ and all $\,Q\in\M_1(P)$.
\item For all $X\inL^{\infty}(\Omega, \F,P)$ and all $t,s$ such that $t,t+s\in\mathbb{T}$ and all $\,Q\in\M_1(P)$ we have
\[
E_Q[\,\rho_{t+s}(X)+\alpha_{t+s}^{\min}(Q)\,|\,\mathcal{F}_t]\le\rho_t(X)+\alpha_t^{\min}(Q)\quadQ\mbox{-a.s.}.
\]
\end{enumerate}
\end{theorem}
Equivalence of properties 1) and 2) of Theorem~\ref{eqchar} was proved in \cite{d6}. Characterizations of time consistency in terms of penalty functions as in 3) of Theorem~\ref{eqchar} appeared in \cite{fp6, bn6, ck6, bn8}; similar results for risk measures for processes were given in \cite{cdk6, ck6}. The supermartingale property as in 4) of Theorem~\ref{eqchar} was obtained in \cite{fp6}; cf.\ also \cite{bn8} for the absolutely continuous case.
\begin{proof} The proof of 1)$\Rightarrow$2)$\Rightarrow$3) follows from Lemma~\ref{4.6} and Lemma~\ref{setpen}. To prove 3)$\Rightarrow$4) fix $Q\in\M_1(P)$.
By \eqref{erwrho} we have
\[
E_{Q}[\rho_{t+s}(X)|\mathcal{F}_t]=\qes_{R\in\mathcal{Q}^f_{t+s}(Q)}\left(E_{R}[-X|\mathcal{F}_{t}]-E_{R}[\alpha_{t+s}^{\min}(R)|\mathcal{F}_t]\right).
\]
On the set $\left\{\,\alpha_t^{\min}(Q)=\infty\right\}$ property 4) holds trivially. On the set $\left\{\,\alpha_t^{\min}(Q)<\infty\right\}$ property 3) implies
$E_Q[\alpha_{t+s}^{\min}(Q)|\mathcal{F}_t]<\infty$ and $\alpha_{t,t+s}^{\min}(Q)<\infty$, then for $R\in\mathcal{Q}^f_{t+s}(Q)$
\begin{equation*}
\alpha_t^{\min}(R)=\alpha_{t,t+s}^{\min}(Q)+E_R[\alpha_{t+s}^{\min}(R)|\mathcal{F}_t]<\infty\quadQ\mbox{-a.s.}.
\end{equation*}
Thus
\begin{equation*}
E_{Q}[\rho_{t+s}(X)+\alpha_{t+s}^{\min}(Q)|\mathcal{F}_t]=\qes_{R\in\mathcal{Q}^f_{t+s}(Q)}\left(E_{R}[-X|\mathcal{F}_{t}]-\alpha_{t}^{\min}(R)\right)+\alpha_t^{\min}(Q)
\end{equation*}
on $\left\{\,\alpha_t^{\min}(Q)<\infty\right\}$.
Moreover, since $\mathcal{Q}_{t+s}^f(Q)\subseteq \mathcal{Q}_{t}(Q)$, \eqref{rd3} implies
\begin{equation*}
E_{Q}[\rho_{t+s}(X)+\alpha_{t+s}^{\min}(Q)|\mathcal{F}_t]\le\qes_{R\in \mathcal{Q}_{t}(Q)}\left(E_{R}[-X|\mathcal{F}_{t}]-\alpha_{t}^{\min}(R)\right)+\alpha_t^{\min}(Q)=\rho_t(X)+\alpha_t^{\min}(Q)\quadQ\mbox{-a.s.}.
\end{equation*}
It remains to prove 4)$\Rightarrow$1). To this end fix $Q\in\mathcal{Q}_t^f$ and $X,Y\inL^{\infty}$ such that $\rho_{t+1}(X)\le \rho_{t+1}(Y)$. Note that $E_Q[\alpha_{t+1}^{\min}(Q)]<\infty$ due to 4), hence $Q\in\mathcal{Q}_{t+1}^f(Q)$.
Using 4) and representation \eqref{rd3f} for $\rho_{t+1}$ under $Q$, we obtain
\begin{align*}
\rho_t(Y)+\alpha_t^{\min}(Q)&\geq E_Q[\rho_{t+1}(Y)+\alpha_{t+1}^{\min}(Q)|\mathcal{F}_t]\\
&\geq E_Q[\rho_{t+1}(X)+\alpha_{t+1}^{\min}(Q)|\mathcal{F}_t]\\
&\geq E_Q[E_Q[-X|\mathcal{F}_{t+1}]-\alpha_{t+1}^{\min}(Q)+\alpha_{t+1}^{\min}(Q)|\mathcal{F}_t]\\
&=E_Q[-X|\mathcal{F}_t].
\end{align*}
Hence representation \eqref{rd2} yields $\rho_t(Y)\ge \rho_t(X)$, and time consistency follows from Proposition~\ref{def2}.
\end{proof}
Properties 3) and 4) of Theorem~\ref{eqchar} imply in particular supermartingale properties of penalty function processes and risk processes. This allows us to apply martingale theory to characterize the dynamics of these processes, as we do in Proposition~\ref{worstcase} and Proposition~\ref{riesz}; cf.\ also \cite{d6, fp6, ipen7, bn8, dpr10}.
\begin{proposition}\label{worstcase}
Let $(\rho_t)_{t\in\T}$ be a time consistent dynamic convex risk measure such that each $\rho_t$ is continuous from above. Then the process
\begin{equation*}
V_t^Q(X):=\rho_t(X)+\alpha_t^{\min}(Q),\qquad t\in\mathbb{T}
\end{equation*}
is a $Q$-supermartingale for all $X\inL^{\infty}$ and all $Q\in\mathcal{Q}_0$, where
\begin{equation*}
\mathcal{Q}_0:=\left\{\, Q\in\M_1(P)\;\big|\; \alpha_0^{\min}(Q)<\infty\right\}.
\end{equation*}
Moreover, $(V_t^Q(X))_{t\in\mathbb{T}}$ is a $Q$-martingale if $Q\in\mathcal{Q}_0$ is a ``worst case'' measure for $X$ at time $0$, i.e.\ if the supremum in the robust representation of $\rho_0(X)$ is attained at $Q$:
\[
\rho_0(X)=E_Q[-X]-\alpha^{\min}_0(Q)\quad Q\text{-a.s.}.
\]
In this case $Q$ is a ``worst case'' measure for $X$ at any time $t$, i.e.
\[
\rho_t(X)=E_Q[-X|\mathcal{F}_t]-\alpha^{\min}_t(Q)\quad Q\text{-a.s.}\quad\text{for all}\quad t\in\mathbb{T}.
\]
The converse holds if $T<\infty$ or if $\lim_{t\to\infty}\rho_t(X)=-X$ $P$-a.s.\ (which is called asymptotic precision in \cite{fp6}): if $(V_t^Q(X))_{t\in\mathbb{T}}$ is a $Q$-martingale, then $Q\in\mathcal{Q}_0$ is a ``worst case'' measure for $X$ at any time $t\in\mathbb{T}$.
\end{proposition}
\begin{proof}
The supermartingale property of $(V_t^Q(X))_{t\in\T}$ under each $Q\in\mathcal{Q}_0$ follows directly from properties 3) and 4) of Theorem~\ref{eqchar}. To prove the remaining part of the claim, fix $Q\in\mathcal{Q}_0$ and $X\inL^{\infty}$.
If $Q$ is a ``worst case'' measure for $X$ at time $0$, the process
\[
U_t(X):=V_t^Q(X)-E_Q[-X|\mathcal{F}_t],\qquad t\in\mathbb{T}
\]
is a non-negative $Q$-supermartingale beginning at $0$. Indeed, the supermartingale property follows from that of $(V_t^Q(X))_{t\in\T}$, and non-negativity follows from the representation \eqref{rd3f}, since $Q\in\mathcal{Q}_t^f(Q)$. Thus $U_t=0$ $Q$-a.s.\ for all $t$, and this proves the ``if'' part of the claim. To prove the converse direction, note that if $(V_t^Q(X))_{t\in\mathbb{T}}$ is a $Q$-martingale and $\rho_T(X)=-X$ (resp.\ $\lim_{t\to\infty}\rho_t(X)=-X$ $P$-a.s.), the process $U(X)$ is a $Q$-martingale ending at $0$ (resp.\ converging to $0$ in $L^1(Q)$), and thus $U_t(X)=0$ $Q$-a.s.\ for all $t\in\mathbb{T}$.
\end{proof}
\begin{remark}
The fact that a worst case measure for $X$ at time $0$, if it exists, remains a worst case measure for $X$ at any time $t\in\mathbb{T}$ was also shown in \cite[Theorem 3.9]{ck6} for a time consistent dynamic risk measure without using the supermartingale property from Proposition~\ref{worstcase}.
\end{remark}
\begin{remark}\label{remarksm}
In contrast to \cite[Theorem 4.5]{fp6}, without the additional assumption that the set
\begin{equation}\label{qstar}
{\mathcal Q}^{\ast}:=\left\{\, Q\in\M^e(P)\;\big|\; \alpha_0^{\min}(Q)<\infty\right\}
\end{equation}
is nonempty, the supermartingale property of $(V_t^Q(X))_{t\in\T}$ for all $X\inL^{\infty}$ and all $Q\in{\mathcal Q}^{\ast}$ is not sufficient to prove time consistency. In this case we also do not have the robust representation of $\rho_t$ in terms of the set ${\mathcal Q}^{\ast}$.
\end{remark}
The process $(\alpha_t^{\min}(Q))_{t\in\T}$ is a $Q$-supermartingale for all $Q\in\mathcal{Q}_0$ due to Property 3) of Theorem~\ref{eqchar}. The next proposition provides the explicit form of its Doob- and its Riesz-decomposition; cf.\ also \cite[Proposition 2.3.2]{ipen7}.
\begin{proposition}\label{riesz}
Let $(\rho_t)_{t\in\T}$ be a time consistent dynamic convex risk measure such that each $\rho_t$ is continuous from above. Then for each $Q\in\mathcal{Q}_0$ the process $(\alpha_t^{\min}(Q))_{t\in\T}$ is a non-negative $Q$-supermartingale with the Riesz decomposition
\[
\alpha_t^{\min}(Q)=Z_t^Q+M_t^Q\quad Q\mbox{-a.s.},\qquad t\in\mathbb{T},
\]
where
\[
Z_t^Q:= E_Q\left[\,\sum_{k=t}^{T-1}\alpha_{k,k+1}^{\min}(Q)\,\big|\,\mathcal{F}_t\,\right]\quad Q\mbox{-a.s.},\quad t\in\mathbb{T}
\]
is a $Q$-potential and
\[
M_t^{Q}:=\left\{
\begin{array}{c@{\quad \quad}l}
0 & \text{if $T<\infty$},\\
\displaystyle\lim_{s\to\infty}E_Q\left[\alpha_{t+s}^{\min}(Q)\,|\,\mathcal{F}_t\,\right] & \text{if $T=\infty$}
\end{array}\right.\qquad Q\mbox{-a.s.},\quad t\in\mathbb{T}
\]
is a non-negative $Q$-martingale.
Moreover, the Doob decomposition of $(\alpha_t^{\min}(Q))_{t\in\T}$ is given by
\begin{equation*}
\alpha_t^{\min}(Q)=E_Q\left[\,\sum_{k=0}^{T-1}\alpha_{k,k+1}^{\min}(Q)\,\big|\,\mathcal{F}_t\,\right]+M_t^Q-\sum_{k=0}^{t-1}\alpha_{k,k+1}^{\min}(Q),\quad t\in\mathbb{T}
\end{equation*}
with the $Q$-martingale
\[
E_Q\left[\,\sum_{k=0}^{T-1}\alpha_{k,k+1}^{\min}(Q)\,\big|\,\mathcal{F}_t\,\right]+M_t^Q,\quad t\in\mathbb{T}
\]
and the non-decreasing predictable process $(\sum_{k=0}^{t-1}\alpha_{k,k+1}^{\min}(Q))_{t\in\T}$.
\end{proposition}
\begin{proof} We fix $Q\in\mathcal{Q}_0$ and, applying property 3) of Theorem \ref{eqchar} step by step, we obtain
\begin{equation}\label{lim}
\alpha_t^{\min}(Q)=E_Q\left[\,\sum_{k=t}^{t+s-1}\alpha_{k,k+1}^{\min}(Q)\,\big|\,\mathcal{F}_t\,\right]+E_Q[\,\alpha_{t+s}^{\min}(Q)\,|\,\mathcal{F}_t\,]\quad Q\mbox{-a.s.}
\end{equation}
for all $t,s$ such that $t,t+s\in\mathbb{T}$. If $T<\infty$, the Doob- and Riesz-decompositions follow immediately from \eqref{lim}, since $\alpha_T^{\min}(Q)=0\; Q\mbox{-a.s.}$. If $T=\infty$, by monotonicity there exists the limit
\[
Z_t^Q= \lim_{s\to\infty}E_Q\left[\,\sum_{k=t}^{s}\alpha_{k,k+1}^{\min}(Q)\,\big|\,\mathcal{F}_t\,\right]=E_Q\left[\,\sum_{k=t}^{\infty}\alpha_{k,k+1}^{\min}(Q)\,\big|\,\mathcal{F}_t\,\right]\quad Q\mbox{-a.s.}
\]
for all $t\in\mathbb{T}$, where we have used the monotone convergence theorem for the second equality. Equality \eqref{lim} then implies that there exists
\[
M_t^Q= \lim_{s\to\infty}E_Q\left[\,\alpha_{t+s}^{\min}(Q)\,|\,\mathcal{F}_t\,\right]\quad Q\mbox{-a.s.},\quad t\in\mathbb{T}
\]
and
\[
\alpha_t^{\min}(Q)=Z_t^Q+M_t^Q\quad Q\mbox{-a.s.}
\]
for all $t\in\mathbb{T}$.
The process $(Z_t^Q)_{t\in\T}$ is a non-negative $Q$-supermartingale. Indeed,
\begin{equation}\label{fin}
E_Q[\,Z_{t}^Q\,]\le E_Q\left[\,\sum_{k=0}^{\infty}\alpha_{k,k+1}^{\min}(Q)\,\right]\le\alpha_0^{\min}(Q)<\infty
\end{equation}
and $E_Q[\,Z_{t+1}^Q\,|\,\mathcal{F}_t\,]\le Z_t^Q$ $Q$-a.s. for all $t\in\mathbb{T}$ by definition. Moreover, monotone convergence implies
\[
\lim_{t\to\infty}E_Q[\,Z_t^Q\,]=E_Q\left[\,\lim_{t\to\infty}\sum_{k=t}^{\infty}\alpha_{k,k+1}^{\min}(Q)\,\right]=0,
\]
since $\sum_{k=0}^{\infty}\alpha_{k,k+1}^{\min}(Q)<\infty\;Q$-a.s. by \eqref{fin}. Hence the process $(Z_t^Q)_{t\in\T}$ is a $Q$-potential.
The process $(M_t^Q)_{t\in\T}$ is a non-negative $Q$-martingale, since
\[
E_Q[\,M_{t}^Q\,]\le E_Q\left[\,\alpha_t^{\min}(Q)\,\right]\le\alpha_0^{\min}(Q)<\infty
\]
and
\begin{align*}
E_Q[M_{t+1}^Q-M_{t}^Q|\mathcal{F}_t]&=E_Q[\alpha_{t+1}^{\min}(Q)|\mathcal{F}_t]-\alpha_t^{\min}(Q)-E_Q[Z_{t+1}^Q-Z_t^Q|\mathcal{F}_t]\\
&=\alpha_{t, t+1}^{\min}(Q)-\alpha_{t, t+1}^{\min}(Q)=0\qquad Q\mbox{-a.s.}
\end{align*}
for all $t\in\mathbb{T}$ by property 3) of Theorem \ref{eqchar} and the definition of $(Z_t^Q)_{t\in\T}$.
The Doob-decomposition follows straightforwardly from the Riesz-decomposition.
\end{proof}
\begin{remark}\label{mnull}
It was shown in \cite[Theorem 5.4]{fp6} that the martingale $M^Q$ in the Riesz decomposition of $(\alpha_t^{\min}(Q))_{t\in\mathbb{T}}$ vanishes if and only if $\lim_{t\to\infty}\rho_t(X)\ge-X\,P$-a.s., i.e.\ the dynamic risk measure $(\rho_t)_{t\in\T}$ is asymptotically safe. This is not always the case; see \cite[Example 5.5]{fp6}.
\end{remark}
For a \emph{coherent} risk measure we have
\begin{equation*}
\mathcal{Q}_t^f(Q)=\mathcal{Q}_t^0(Q):=\left\{\, R\in\mathcal{M}^1(P)\;\big|\; R=Q|_{\mathcal{F}_t},\;\; \alpha_t^{\min}(R)=0\;Q\mbox{-a.s.}\right\}.
\end{equation*}
In order to give an equivalent characterization of property 3) of Theorem~\ref{eqchar} in the coherent case, we introduce the sets
\begin{equation*}
\mathcal{Q}_{t,t+s}^0(Q)=\left\{\, R\ll P|_{\mathcal{F}_{t+s}}\;\big|\; R=Q|_{\mathcal{F}_t},\;\; \alpha_{t,t+s}^{\min}(R)=0\;Q\mbox{-a.s.}\right\}\quad \forall\, t,s\ge 0\;\, \textrm{such that}\;\, t,t+s\in\mathbb{T}.
\end{equation*}
For $Q^1\in\mathcal{Q}_{t, t+s}^0(Q)$ and $Q^2\in\mathcal{Q}_{t+s}^0(Q)$ we denote by $Q^1\oplus^{t+s} Q^2$ the pasting of $Q^1$ and $Q^2$ in $t+s$ via $\Omega$, i.e. the measure $\widetilde{Q}$ defined via
\begin{equation}\label{pasting}
\widetilde{Q}(A)=E_{Q^1}\left[E_{Q^2}[I_A|\mathcal{F}_{t+s}]\right],\qquad A\in\mathcal{F}.
\end{equation}
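Concretely, the pasted measure $\widetilde{Q}$ uses the weights of $Q^1$ up to time $t+s$ and the transition probabilities of $Q^2$ afterwards. A small sketch (made-up transition probabilities, $t=0$, $s=1$, two-period binary tree):
\begin{verbatim}
import numpy as np

# measures given by their up-probabilities at the root and after u/d
Q1 = {"root": 0.6, "u": 0.5, "d": 0.5}
Q2 = {"root": 0.3, "u": 0.8, "d": 0.2}

def pasted(first, second):
    # first-period weight from Q1, second-period transition from Q2
    p1 = Q1["root"] if first == "u" else 1 - Q1["root"]
    p2 = Q2[first] if second == "u" else 1 - Q2[first]
    return p1 * p2

probs = {f + s: pasted(f, s) for f in "ud" for s in "ud"}
assert np.isclose(sum(probs.values()), 1.0)    # a probability measure
\end{verbatim}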
The relation between stability under pasting and time consistency of coherent risk measures that can be represented in terms of equivalent probability measures was studied in \cite{adehk7, d6, ks5, fp6}. In our present setting, Theorem~\ref{eqchar} applied to a coherent risk measure takes the following form.
\begin{corollary}\label{coherent}
Let $(\rho_t)_{t\in\T}$ be a dynamic coherent risk measure such that each $\rho_t$ is continuous from above. Then the following conditions are equivalent:
\begin{enumerate}
\item $(\rho_t)_{t\in\T}$ is time consistent.
\item For all $Q\in\M_1(P)$ and all $t,s$ such that $t,t+s\in\mathbb{T}$
\begin{equation*}
\mathcal{Q}_t^0(Q)= \left\{\, Q^1\oplus^{t+s} Q^2\;\big|\; Q^1\in\mathcal{Q}_{t, t+s}^0(Q), \;Q^2\in\mathcal{Q}_{t+s}^0(Q^1)\right\}.
\end{equation*}
\item For all $Q\in\M_1(P)$ such that $\alpha_t^{\min}(Q)=0\;Q$-a.s.,
\[
E_Q[\rho_{t+s}(X)\,|\,\mathcal{F}_t] \le \rho_t(X)\quad\text{and}\quad \alpha_{t+s}^{\min}(Q)=0\;\,Q\mbox{-a.s.}
\]
for all $X\inL^{\infty}(\Omega, \F,P)$ and for all $t,s$ such that $t,t+s\in\mathbb{T}$.
\end{enumerate}
\end{corollary}
\begin{proof}
$1)\Rightarrow 2)$: Time consistency implies property 3) of Theorem \ref{eqchar}, and we will show that this implies property 2) of Corollary \ref{coherent}. Fix $Q\in\M_1(P)$. To prove ``$\supseteq$'' let $Q^1\in\mathcal{Q}_{t, t+s}^0(Q)$, $Q^2\in\mathcal{Q}_{t+s}^0(Q^1)$, and consider $\widetilde{Q}$ defined as in \eqref{pasting}. Note that $\widetilde{Q}=Q^1$ on $\mathcal{F}_{t+s}$ and
\[
E_{\widetilde{Q}}[X|\mathcal{F}_{t+s}]=E_{Q^2}[X|\mathcal{F}_{t+s}]\quad Q^1\text{-a.s. for all}\quad X\inL^{\infty}(\Omega, \F,P).
\]
Hence, using 3) of Theorem~\ref{eqchar} we obtain
\begin{align*}
\alpha_t^{\min}(\widetilde{Q})&= \alpha_{t,t+s}^{\min}(\widetilde{Q})+E_{\widetilde{Q}}[\alpha_{t+s}^{\min}(\widetilde{Q})|\mathcal{F}_{t}]\\
&= \alpha_{t,t+s}^{\min}(Q^1)+E_{Q^1}[\alpha_{t+s}^{\min}(Q^2)|\mathcal{F}_{t}]=0\qquadQ\mbox{-a.s.},
\end{align*}
and thus $\widetilde{Q} \in\mathcal{Q}_t^0(Q)$.
Conversely, for every $\widetilde{Q}\in\mathcal{Q}_t^0(Q)$ we have $\alpha_{t+s}^{\min}(\widetilde{Q})=\alpha_{t,t+s}^{\min}(\widetilde{Q})=0\;\widetilde{Q}$-a.s. by 3) of Theorem~\ref{eqchar}, and $\widetilde{Q}=\widetilde{Q}\oplus^{t+s}\widetilde{Q}$. This proves ``$\subseteq$''.
$2)\Rightarrow 3)$: Let $R\in\M_1(P)$ with $\alpha_t^{\min}(R)=0\;R$-a.s.. Then $R\in\mathcal{Q}_t^0(R)$, and thus $R=Q^1\oplus^{t+s} Q^2$ for some $Q^1\in\mathcal{Q}_{t,t+s}^0(R)$ and $Q^2\in\mathcal{Q}_{t+s}^0(Q^1)$. This implies $R=Q^1$ on $\mathcal{F}_{t+s}$ and
\[
E_R[X|\mathcal{F}_{t+s}]=E_{Q^2}[X|\mathcal{F}_{t+s}]\quad R\text{-a.s.}.
\]
Hence $\alpha_{t, t+s}^{\min}(R)=\alpha_{t, t+s}^{\min}(Q^1)=0\;\,R$-a.s., and $\alpha_{t+s}^{\min}(R)=\alpha_{t+s}^{\min}(Q^2)=0\,R$-a.s..
To prove the inequality 3) note that due to \eqref{erwrho}
\begin{align*}
E_R[\,\rho_{t+s}(X)\,|\,\mathcal{F}_t]&=\res_{Q\in\mathcal{Q}_{t+s}^0(R)} E_{Q}[-X\,|\,\mathcal{F}_t]\\
&\le \res_{Q\in\mathcal{Q}_t^0(R)}E_Q[-X\,|\,\F_t\,]=\rho_t(X)\quad R\text{-a.s.},
\end{align*}
where we have used that the pasting of $R|_{\mathcal{F}_{t+s}}$ and $Q$ belongs to $\mathcal{Q}_t^0(R)$.
$3)\Rightarrow 1)$: Obviously property 3) of Corollary~\ref{coherent} implies property 4) of Theorem~\ref{eqchar} and thus time consistency. \end{proof}
\subsection{Rejection and acceptance consistency}\label{subsec:rc}
Rejection and acceptance consistency were introduced and studied in \cite{tu8, Samuel, ipen7}. These properties can be characterized via recursive inequalities as stated in the next proposition; see \cite[Theorem 3.1.5]{tu8} and \cite[Proposition 3.5]{Samuel}.
\begin{proposition}\label{rejrecursiveness}
A dynamic convex risk measure $(\rho_t)_{t\in\T}$ is rejection (resp. acceptance) consistent if and only if
for all $t\in\mathbb{T}$ such that $t<T$
\begin{equation}\label{rcdef}
\rho_t(-\rho_{t+1})\le\rho_t\quad(\mbox{resp.}\ge)\quadP\mbox{-a.s.}.
\end{equation}
\end{proposition}
\begin{proof} We argue for the case of rejection consistency; the case of acceptance consistency follows in the same manner. Assume first that $(\rt)\zt$ satisfies (\ref{rcdef}) and let $X\inL^{\infty}$ and $Y\in L^{\infty}(\mathcal{F}_{t+1})$ such that $\rho_{t+1}(X)\ge\rho_{t+1}(Y)$. Using cash invariance, (\ref{rcdef}), and monotonicity, we obtain
\[
\rho_t(X)\ge\rho_t(-\rho_{t+1}(X))\ge\rho_t(-\rho_{t+1}(Y))=\rho_t(Y).
\]
The converse implication follows due to cash invariance by applying (\ref{definition1}) to $Y=-\rho_{t+1}(X)$.
\end{proof}
\begin{remark}\label{weakmiddle}
For a dynamic \emph{coherent} risk measure, weak acceptance consistency and acceptance consistency are equivalent. This was shown in \cite[Proposition 3.9]{Samuel}.
\end{remark}
Another way to characterize rejection consistency was suggested in \cite{ipen7}.
\begin{proposition}
A dynamic convex risk measure $(\rho_t)_{t\in\T}$ is rejection (resp. acceptance) consistent if and only if any of the following conditions holds:
\begin{enumerate}
\item For all $t\in\mathbb{T}$ such that $t<T$ and all $X\inL^{\infty}$
\begin{equation}\label{pruddef}
\rho_t(X)-\rho_{t+1}(X) \in\mathcal{A}_{t,t+1};
\end{equation}
\item For all $t\in\mathbb{T}$ such that $t<T$ and all $X\in\mathcal{A}_t$, we have $-\rho_{t+1}(X)\in\mathcal{A}_t$.
\end{enumerate}
\end{proposition}
\begin{proof} Since
\begin{equation*}
\rho_t(-\rho_{t+1}(X))=\rho_t(\rho_t(X)-\rho_{t+1}(X))+\rho_t(X)
\end{equation*}
by cash invariance, \eqref{pruddef} implies rejection consistency, and obviously rejection consistency implies condition 2). If 2) holds, then for any $X\inL^{\infty}$
\begin{equation*}
\rho_t(\rho_t(X)-\rho_{t+1}(X))=\rho_t\left(-\rho_{t+1}(X+\rho_t(X))\right)\le0,
\end{equation*}
due to cash invariance and the fact that $X+\rho_t(X)\in\mathcal{A}_t$.
\end{proof}
Property \eqref{pruddef} was introduced in \cite{ipen7} under the name \emph{prudence}. It means that the adjustment
$\rho_t(X)-\rho_{t+1}(X)$ of the minimal capital requirement for $X$ at time $t+1$ is acceptable at time $t$. In other words, one stays on the safe side at each period of time by making capital reserves according to a rejection consistent dynamic risk measure.
Similar to time consistency, rejection and acceptance consistency can be characterized in terms of acceptance sets and penalty functions.
\begin{theorem}\label{eqcharprud}
Let $(\rho_t)_{t\in\T}$ be a dynamic convex risk measure such that each $\rho_t$ is continuous from above. Then the following properties are equivalent:
\begin{enumerate}
\item
$(\rho_t)_{t\in\T}$ is rejection consistent (resp. acceptance consistent).
\item The inclusion
\[
\mathcal{A}_t\subseteq\mathcal{A}_{t,t+1}+\mathcal{A}_{t+1}\quad\text{resp.}\quad\mathcal{A}_t\supseteq\mathcal{A}_{t,t+1}+\mathcal{A}_{t+1}
\]
holds for all $t\in\mathbb{T}$ such that $t<T$.
\item The inequality
\[
\alpha_t^{\min}(Q)\le(\text{resp.}\ge)\alpha_{t, t+1}^{\min}(Q)+E_Q[\,\alpha_{t+1}^{\min}(Q)\,|\,\mathcal{F}_t\,]\quadQ\mbox{-a.s.}
\]
holds for all $t\in\mathbb{T}$ such that $t<T$ and all $Q\in\M_1(P)$.
\end{enumerate}
\end{theorem}
\begin{proof} Equivalence of 1) and 2) was proved in Proposition~\ref{rejrecursiveness} and Lemma~\ref{4.6}, and the proof of $2)\Rightarrow 3) $ is given in Lemma~\ref{setpen}.
Let us show that property 3) implies property 1). We argue for the case of rejection consistency; the case of acceptance consistency follows in the same manner. We fix $t\in\mathbb{T}$ such that $t<T$, and consider the risk measure
\[
{\widetilde \rho_t(X)}:=\rho_t(-\rho_{t+1}(X)),\qquad X\inL^{\infty}.
\]
It is easily seen that ${\widetilde \rho_t}$ is a conditional convex risk measure that is continuous from above. Moreover, the dynamic risk measure $({\widetilde \rho_t}, \rho_{t+1})$ is time consistent by definition, and thus it fulfills properties 2) and 3) of Theorem~\ref{eqchar}. We denote by ${\widetilde \mathcal{A}_t}$ and
${\widetilde \mathcal{A}_{t,t+1}}$ the acceptance sets of the risk measure ${\widetilde \rho_t}$, and by ${\widetilde \alpha_{t}^{\min}}$ its penalty function. Since
\[
{\widetilde \rho_t(X)}=\rho_t(-\rho_{t+1}(X))=\rho_t(X)
\]
for all $X\in L^\infty_{t+1}$, we have ${\widetilde \mathcal{A}_{t,t+1}}={\mathcal A}_{t,t+1}$,
and thus
\[
{\widetilde \mathcal{A}_t}={\mathcal A}_{t,t+1}+{\mathcal A}_{t+1}
\]
by 2) of Theorem~\ref{eqchar}. Lemma~\ref{setpen} and property 3) then imply
\begin{equation*}
{\widetilde \alpha_{t}^{\min}}(Q) =\alpha_{t, t+1}^{\min}(Q)+E_Q[\alpha_{t+1}^{\min}(Q)|\mathcal{F}_t]\ge\alpha_t^{\min}(Q)
\end{equation*}
for all $Q\in\mathcal{Q}_t$. Thus
\begin{equation*}
\rho_t(X)\ge{\widetilde \rho_t(X)}=\rho_t(-\rho_{t+1}(X))
\end{equation*}
for all $X\inL^{\infty}$, due to representation (\ref{rd2}).
\end{proof}
\begin{remark}\label{corm}
Similar to Corollary~\ref{coherent}, condition 3) of Theorem~\ref{eqcharprud} can be restated for a dynamic \emph{coherent} risk measure $(\rho_t)_{t\in\T}$ as follows:
\begin{equation*}
\mathcal{Q}_t^0(Q) \supseteq \left\{\, Q^1\oplus^{t+1} Q^2\;\big|\; Q^1\in\mathcal{Q}_{t, t+1}^0(Q), \;Q^2\in\mathcal{Q}_{t+1}^0(Q^1)\right\}\quad (\mbox{resp.} \subseteq)
\end{equation*}
for all $t\in\mathbb{T}$ such that $t<T$ and all $Q\in\M_1(P)$.
\end{remark}
The following proposition provides an additional equivalent characterization of rejection consistency, which can be viewed as an analogue of the supermartingale property 4) of Theorem~\ref{eqchar}.
\begin{proposition}\label{mrc}
Let $(\rho_t)_{t\in\T}$ be a dynamic convex risk measure such that each $\rho_t$ is continuous from above. Then $(\rho_t)_{t\in\T}$ is rejection consistent if and only if the inequality
\begin{equation}\label{superm1}
E_Q\left[\,\rho_{t+1}(X)\,|\,\mathcal{F}_t\,\right]\le \rho_t(X)+\alpha_{t, t+1}^{\min}(Q)\qquad Q\text{-a.s.}
\end{equation}
holds for all $Q\in\M_1(P)$ and all $t\in\mathbb{T}$ such that $t<T$. In this case the process
\[
U_t^Q(X):=\rho_t(X)-\sum_{k=0}^{t-1}\alpha_{k,k+1}^{\min}(Q),\qquad t\in\mathbb{T}
\]
is a $Q$-supermartingale for all $X\in L^{\infty}$ and all $Q\in\mathcal{Q}^f$, where
\[
\mathcal{Q}^f:=\left\{\, Q\in\M_1(P)\;\big|\; E_Q\left[\sum_{k=0}^{t}\alpha_{k,k+1}^{\min}(Q)\right]<\infty\;\,\forall\,t\in\mathbb{T}\right\}.
\]
\end{proposition}
Proposition~\ref{mrc} is a special case of Theorem~\ref{optdec} below, which involves the notion of sustainability; cf.\ \cite{ipen7}.
\begin{definition}\label{sustainable}
Let $(\rho_t)_{t\in\T}$ be a dynamic convex risk measure.
We call a bounded adapted process $X=(X_t)_{t\in\T}$ \emph{sustainable with respect to the risk measure} $(\rho_t)_{t\in\T}$ if
\begin{equation*}
\rho_t(X_t-X_{t+1})\le0\qquad\textrm{for all $t\in\mathbb{T}$ such that $t<T$}.
\end{equation*}
\end{definition}
Consider $X$ to be a cumulative investment process. If it is sustainable, then for all $t\in\mathbb{T}$ the adjustment $X_{t+1}-X_t$ is acceptable with respect to $\rho_t$.
The next theorem characterizes sustainable processes in terms of a supermartingale inequality; it is a generalization of \cite[Corollary 2.4.10]{ipen7}.
\begin{theorem}\label{optdec}
Let $(\rho_t)_{t\in\T}$ be a dynamic convex risk measure such that each $\rho_t$ is continuous from above and let $(X_t)_{t\in\T}$ be a bounded adapted process. Then the following properties are equivalent:
\begin{enumerate}
\item The process $(X_t)_{t\in\T}$ is sustainable with respect to the risk measure $(\rho_t)_{t\in\T}$.
\item For all $Q\in\M_1(P)$ and all $t\in\mathbb{T}, t\ge 1$, we have
\begin{equation}\label{superm2}
E_Q\left[\,X_{t}\,|\,\mathcal{F}_{t-1}\,\right]\le X_{t-1}+\alpha_{t-1,t}^{\min}(Q)\qquad Q\text{-a.s.}
\end{equation}
\end{enumerate}
\end{theorem}
\begin{proof} The proof of $1)\Rightarrow 2)$ follows directly from the definition of sustainability and the definition of the minimal penalty function.
To prove $2)\Rightarrow 1)$, let $(X_t)_{t\in\T}$ be a bounded adapted process such that \eqref{superm2} holds. In order to prove
\begin{equation*}
X_t-X_{t-1}=:A_t\in-\mathcal{A}_{t-1,t}\quad\mbox{for all}\quad t\in\mathbb{T}, t\ge 1,
\end{equation*}
suppose by way of contradiction that $A_t\notin-\mathcal{A}_{t-1,t}$.
Since the set $\mathcal{A}_{t-1,t}$ is convex and weak$^\ast$-closed due to Remark~\ref{abg}, the Hahn-Banach separation theorem (see, e.g., \cite[Theorem A.56]{fs4}) ensures the existence of $Z\in L^1(\mathcal{F}_t,P)$ such that
\begin{equation}\label{e15}
a:=\sup_{X\in\mathcal{A}_{t-1,t}}E[\,Z(-X)\,]<E[\,Z\,A_t\,]=:b<\infty.
\end{equation}
Since $\lambda I_{\{Z<0\}}\in\mathcal{A}_{t-1,t}$ for every $\lambda\ge0$, (\ref{e15}) implies $Z\ge0\;P$-a.s., and in particular $E[Z]>0$. Define a probability measure $Q\in\M_1(P)$ via $\frac{dQ}{dP}:=\frac{Z}{E[Z]}$ and note that, due to Lemma \ref{erwpf} and \eqref{e15}, we have
\begin{equation}\label{e172}
E_Q[\alpha_{t-1,t}^{\min}(Q)]=\sup_{X\in\mathcal{A}_{t-1,t}}E_Q[\,(-X)\,]=\sup_{X\in\mathcal{A}_{t-1,t}}E[\,Z(-X)\,]\frac{1}{E[Z]}=\frac{a}{E[Z]}<\infty.
\end{equation}
Moreover, (\ref{e15}) and \eqref{e172} imply
\begin{equation*}
E_Q\left[X_t-X_{t-1}-\alpha_{t-1,t}^{\min}(Q)\right]=\frac{1}{E[Z]}\,E[\,Z\,A_t\,]-E_Q\left[\alpha_{t-1,t}^{\min}(Q)\right]=\frac{b-a}{E[Z]}>0,
\end{equation*}
which cannot be true if \eqref{superm2} holds under $Q$.
\end{proof}
\begin{remark}
In particular, property 2) of Theorem~\ref{optdec} implies that the process
\[
X_t-\sum_{k=0}^{t-1}\alpha_{k,k+1}^{\min}(Q),\qquad t\in\mathbb{T}
\]
is a $Q$-supermartingale for all $Q\in\mathcal{Q}^f$, if $X$ is sustainable with respect to $(\rho_t)$. As shown in \cite[Theorem 2.4.6, Corollary 2.4.8]{ipen7}, this supermartingale property is equivalent to sustainability of $X$ under some additional assumptions.
\end{remark}
\subsection{Weak time consistency}\label{subsec:wc}
In this section we characterize the weak notions of time consistency from Definition~\ref{cons}. Due to cash invariance, they can be restated as follows: A dynamic convex risk measure $(\rho_t)_{t\in\T}$ is weakly acceptance (resp. weakly rejection) consistent, if and only if
\begin{equation*}
\rho_{t+1}(X)\le0\quad(\mbox{resp.}\;\ge)\quad\Longrightarrow\quad\rho_t(X)\le0\quad(\mbox{resp.}\;\ge)
\end{equation*}
for any $X\in L^{\infty}$ and for all $t\in\mathbb{T}$ such that $t<T$.
This means that if a position is accepted (resp.\ rejected) in every scenario tomorrow, it should already be accepted (resp.\ rejected) today. In this form, weak acceptance consistency was introduced in \cite{adehk7}. Both weak acceptance and weak rejection consistency appeared in \cite{Weber, ros7}.
Weak acceptance consistency was characterized in terms of acceptance sets in \cite[Corollary 3.6]{tu8}, and in terms of a supermartingale property of penalty functions in \cite[Lemma 3.17]{burg}. We summarize these characterizations in our present setting in the next proposition.
\begin{proposition}\label{weaktc}
Let $(\rho_t)_{t\in\T}$ be a dynamic convex risk measure such that each $\rho_t$ is continuous from above. Then the following properties are equivalent:
\begin{enumerate}
\item $(\rho_t)_{t\in\T}$ is weakly acceptance consistent.
\item $\mathcal{A}_{t+1}\subseteq\mathcal{A}_t\quad$ for all $t\in\mathbb{T}$ such that $t<T$.
\item The inequality
\begin{equation}\label{supermart}
E_Q[\,\alpha_{t+1}^{\min}(Q)\,|\,\mathcal{F}_t\,]\le\alpha_t^{\min}(Q)\quad Q\mbox{-a.s.}
\end{equation}
holds for all $Q\in\M_1(P)$ and all $t\in\mathbb{T}$ such that $t<T$. In particular $(\alpha_t^{\min}(Q))_{t\in\T}$ is a $Q$-supermartingale for all $Q\in\mathcal{Q}_0$.
\end{enumerate}
\end{proposition}
\begin{proof} The equivalence of 1) and 2) follows directly from the definition of weak acceptance consistency. Property 2) implies 3), since by Lemma~\ref{erwpf}
\begin{align*}
E_Q[\,\alpha_{t+1}^{\min}(Q)\,|\,\mathcal{F}_t\,]&=\qes_{X_{t+1}\in{\mathcal A}_{t+1}}E_Q[-X_{t+1}|\mathcal{F}_t]\\
&\le\qes_{X\in\mathcal{A}_t}E_Q[-X|\mathcal{F}_t]=\alpha_t^{\min}(Q)\qquad Q\mbox{-a.s.}
\end{align*}
for all $Q\in\M_1(P)$.
To prove that 3) implies 2), we fix $X\in{\mathcal A}_{t+1}$ and note that
\[
E_Q[-X|\mathcal{F}_{t+1}]\le\alpha_{t+1}^{\min}(Q)\quad Q\mbox{-a.s.}\quad\mbox{for all}\;\,Q\in\M_1(P)
\]
by the definition of the minimal penalty function. Using (\ref{supermart}) we obtain
\[
E_Q[-X|\mathcal{F}_{t}]\le E_Q[\,\alpha_{t+1}^{\min}(Q)\,|\,\mathcal{F}_t\,]\le\alpha_t^{\min}(Q)\quad Q\mbox{-a.s.}
\]
for all $Q\in\M_1(P)$, in particular for $Q\in\mathcal{Q}_t^f(P)$. Thus $\rho_t(X)\le0$ by \eqref{rd2}.
\end{proof}
\subsection{A recursive construction}\label{recur}
In this section we assume that the time horizon $T$ is finite. Then one can define a time consistent dynamic convex risk measure $({\widetilde \rho_t})_{t=0,\ldots , T}$ in a recursive way, starting with an arbitrary dynamic convex risk measure $(\rho_t)_{t=0,\ldots , T}$, via
\begin{equation}\label{recurs}
\begin{aligned}
{\widetilde \rho_T}(X)&:=\rho_T(X)=-X\\
{\widetilde \rho_t}(X)&:=\rho_t(-{\widetilde \rho_{t+1}}(X)),\quad t=0,\ldots , T-1,\quad X\in L^{\infty}.
\end{aligned}
\end{equation}
The recursive construction \eqref{recurs} was introduced in \cite[Section 4.2]{cdk6}, and also studied in \cite{Samuel, ck6}. It is easy to see that $({\widetilde \rho_t})_{t=0,\ldots , T}$ is indeed a time consistent dynamic convex risk measure, and each ${\widetilde \rho_t}$ is continuous from above if each $\rho_t$ has this property.
\begin{remark}\label{cheaper}
If the original dynamic convex risk measure $(\rho_t)_{t=0,\ldots , T}$ is rejection (resp.\ acceptance) consistent, then the time consistent dynamic convex risk measure $({\widetilde \rho_t})_{t=0,\ldots , T}$ defined via (\ref{recurs}) lies below (resp. above) $(\rho_t)_{t=0,\ldots , T}$, i.e.
\[
{\widetilde \rho_t}(X)\le(\text{resp.}\ \ge)\;\rho_t(X)\quad\text{for all $t=0,\ldots , T$ and all $X\in L^{\infty}$}.
\]
This can be easily proved by backward induction using Proposition~\ref{rejrecursiveness}, monotonicity, and \eqref{recurs}. Moreover, as shown in \cite[Theorem 3.10]{Samuel} in the case of rejection consistency, $({\widetilde \rho_t})_{t=0,\ldots , T}$ is the largest time consistent dynamic convex risk measure that lies below $(\rho_t)_{t=0,\ldots , T}$.
\end{remark}
For all $X\in L^{\infty}$, the process $({\widetilde \rho_t}(X))_{t=0,\ldots , T}$ has the following properties:
${\widetilde \rho}_T(X)\ge-X$, and
\begin{equation}\label{sust}
\rho_t({\widetilde \rho_t}(X)-{\widetilde \rho}_{t+1}(X))=-{\widetilde \rho_t}(X)+\rho_t(-{\widetilde \rho}_{t+1}(X))=0\qquad\forall\; t=0,\ldots , T-1,
\end{equation}
by definition and cash invariance. In other words, the process $({\widetilde \rho_t}(X))_{t=0,\ldots , T}$ covers the final loss $-X$ and is sustainable with respect to the original risk measure $(\rho_t)_{t=0,\ldots , T}$. The next proposition shows that $({\widetilde \rho_t}(X))_{t=0,\ldots , T}$ is in fact the smallest process with both these properties. This result is a generalization of \cite[Proposition 2.5.2 ]{ipen7}, and, in the coherent case, related to \cite[Theorem 6.4]{d6}.
\begin{proposition}\label{smallest}
Let $(\rho_t)_{t=0,\ldots , T}$ be a dynamic convex risk measure such that each $\rho_t$ is continuous from above. Then, for each $X\inL^{\infty}$, the risk process $({\widetilde \rho_t}(X))_{t=0,\ldots , T}$ defined via (\ref{recurs}) is the smallest bounded adapted process $(U_t)_{t=0,\ldots , T}$ such that $(U_t)_{t=0,\ldots , T}$ is sustainable with respect to $(\rho_t)_{t=0,\ldots , T}$ and $U_T\ge-X$.
\end{proposition}
\begin{proof} We have already seen that ${\widetilde \rho_T}(X)\ge-X$ and $({\widetilde \rho_t}(X))_{t=0,\ldots , T}$ is sustainable with respect to $(\rho_t)_{t=0,\ldots , T}$ due to (\ref{sust}). Now let $(U_t)_{t=0,\ldots , T}$ be another bounded adapted process with both these properties. We will show by backward induction that
\begin{equation}\label{ineq}
U_t\ge {\widetilde \rho_t}(X)\quad P\mbox{-a.s.}\qquad\forall\;t=0,\ldots , T.
\end{equation}
Indeed, we have
\[
U_T\ge-X={\widetilde \rho_T}(X)\qquad P\mbox{-a.s.}
\]
If (\ref{ineq}) holds for $t+1$, Theorem~\ref{optdec} yields for all $Q\in\mathcal{Q}_t^f$:
\begin{align*}
U_t&\ge E_Q\left[\,U_{t+1}-\alpha_{t, t+1}^{\min}(Q)\,|\,\mathcal{F}_t\,\right]\\
&\ge E_Q\left[\,{\widetilde \rho_{t+1}}(X)-\alpha_{t, t+1}^{\min}(Q)\,|\,\mathcal{F}_t\,\right]\quad P\mbox{-a.s.}
\end{align*}
Thus
\begin{align*}
U_t&\ge\es_{Q\in\mathcal{Q}_{t}^f} \left(E_Q\left[{\widetilde \rho_{t+1}}(X)|\mathcal{F}_t\right]-\alpha_{t, t+1}^{\min}(Q)\right)\\
&=\rho_t(-{\widetilde \rho_{t+1}}(X))={\widetilde \rho_{t}}(X)\quad P\mbox{-a.s.},
\end{align*}
where we have used representation \eqref{rd2}. This proves (\ref{ineq}).\end{proof}
The recursive construction \eqref{recurs} can be used to construct a time consistent dynamic Average Value at Risk, as shown in the next example.
\begin{example}
It is well known that dynamic Average Value at Risk $(AV@R_{t,\lambda_t})_{t=0,\ldots , T}$ (cf.\ Example~\ref{avar}) is not time consistent, and does not even satisfy weaker notions of time consistency from Definition~\ref{cons}; see, e.g., \cite{adehk7, ros7}. Moreover, since $\alpha_0^{\min}(P)=0$ in this case, the set ${\mathcal Q}^{\ast}$ in \eqref{qstar} is not empty, and \cite[Corollary 4.12]{fp6} implies that there exists no time consistent dynamic convex risk measure $(\rho_t)_{t\in\T}$ such that each $\rho_t$ is continuous from above and $\rho_0=AV@R_{0,\lambda_0}$. However, for $T<\infty$, the recursive construction \eqref{recurs} can be applied to $(AV@R_{t,\lambda_t})_{t=0,\ldots , T}$ in order to modify it to a time consistent dynamic coherent risk measure $(\tilde{\rho_t})_{t=0,\ldots , T}$. This modified risk measure takes the form
\begin{align*}
\tilde{\rho_t}(X)&= \es\left\{\, E_Q[-X|\mathcal{F}_t]\;\big|\; Q\in\mathcal{Q}_t, \frac{Z^Q_{s+1}}{Z^Q_s}\leq \lambda_s^{-1}, s=t,\ldots , T-1\right\} \\
&= \es\left\{\, E_P\left[-X\prod_{s=t+1}^{T}L_s\;\big|\;\mathcal{F}_t\right]\;\big|\; L_s\in L^{\infty}_s, 0\leq L_s\leq \lambda_s^{-1}, E[L_s|\mathcal{F}_{s-1}]=1, s=t+1,\ldots , T\right\}\nonumber
\end{align*}
for all $t=0,\ldots , T-1$, where $Z^Q_t=\frac{dQ}{dP}|_{\mathcal{F}_t}$. This was shown, e.g., in \cite[Example 3.3.1]{ck6}.
\end{example}
\section{The dynamic entropic risk measure}\label{entropic}
In this section we study time consistency properties of the dynamic entropic risk measure
\[
\rho_t(X)=\frac{1}{\gamma_t}\log E[e^{-\gamma_t X}|\mathcal{F}_t],\qquad t\in\mathbb{T},\qquad X\in L^{\infty},
\]
where the risk aversion parameter $\gamma_t$ satisfies $\gamma_t>0\, P$-a.s. and $\gamma_t, \frac{1}{\gamma_t}\in L^{\infty}_t$ for all $t\in\mathbb{T}$ (cf.\ Example~\ref{ex:entr}).
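As a simple illustration, suppose that, conditionally on $\mathcal{F}_t$, the position $X$ has a Gaussian distribution with $\mathcal{F}_t$-measurable mean $m_t$ and variance $\sigma_t^2$ (such an $X$ is of course not bounded, so this computation should be understood as a heuristic illustration only). The conditional Laplace transform of the Gaussian distribution then yields
\[
\rho_t(X)=\frac{1}{\gamma_t}\log E[e^{-\gamma_t X}|\mathcal{F}_t]=-m_t+\frac{\gamma_t}{2}\,\sigma_t^2,
\]
i.e.\ the entropic risk measure penalizes the conditional expected loss by a variance term weighted by the risk aversion $\gamma_t$.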
It is well known (see, e.g., \cite{dt5, fp6}) that the conditional entropic risk measure $\rho_{t}$ has the robust representation \eqref{rd1} with the minimal penalty function $\alpha_t^{\min}$ given by
\[
\alpha_t^{\min}(Q)=\frac{1}{\gamma_t}H_t(Q|P),\quad Q\in\mathcal{Q}_t,
\]
where $H_t(Q|P)$ denotes the conditional relative entropy of $Q$ with respect to $P$ at time $t$:
\[
H_t(Q|P)=E_Q\left[ \log\frac{dQ}{dP}\;\big|\;\mathcal{F}_t \right],\quad Q\in\mathcal{Q}_t.
\]
The dynamic entropic risk measure with constant risk aversion parameter $\gamma_t=\gamma_0\in\mathbb R$ for all $t$ was studied in \cite{dt5,cdk6,fp6,ck6}. It plays a particular role since, as proved in \cite{ks9}, it is the only law invariant time consistent relevant dynamic convex risk measure.
In this section we consider an \emph{adapted} risk aversion process $(\gamma_t)_{t\in\T}$, that depends both on time and on the available information. As shown in the next proposition, the process $(\gamma_t)_{t\in\T}$ determines time consistency properties of the corresponding dynamic entropic risk measure. This result corresponds to \cite[Proposition 4.1.4]{ipen7}, and generalizes \cite[Proposition 3.13]{Samuel}.
\begin{proposition}\label{riskav}
Let $(\rho_t)_{t\in\T}$ be the dynamic entropic risk measure with risk aversion given by an adapted process $(\gamma_t)_{t\in\T}$ such that $\gamma_t>0\,P$-a.s. and $\gamma_t, 1/\gamma_t\in L^{\infty}_t$. Then the following assertions hold:
\begin{enumerate}
\item $(\rho_t)_{t\in\T}$ is rejection consistent if $\gamma_t\ge\gamma_{t+1}\:P$-a.s. for all $t\in\mathbb{T}$ such that $t<T$;
\item $(\rho_t)_{t\in\T}$ is acceptance consistent if $\gamma_t\le\gamma_{t+1}\:P$-a.s. for all $t\in\mathbb{T}$ such that $t<T$;
\item $(\rho_t)_{t\in\T}$ is time consistent if $\gamma_t=\gamma_0\in\mathbb R\: P$-a.s. for all $t\in\mathbb{T}$ such that $t<T$.
\end{enumerate}
Moreover, assertions 1), 2) and 3) hold with ``if and only if'', if $\gamma_t\in\mathbb R$ for all $t$, or if the filtration $(\mathcal{F}_t)_{t\in\T}$ is rich enough in the sense that for all $t$ and for all $B\in\mathcal{F}_t$ such that $P[B]>0$ there exists $A\subset B$ such that $A\notin\mathcal{F}_t$ and $P[A]>0$.
\end{proposition}
\begin{proof} Fix $t\in\mathbb{T}$ and $X\in L^{\infty}$. Then
\begin{align*}
\rho_t(-\rho_{t+1}(X))&=\frac{1}{\gamma_t}\log\left(E\left[\exp\left\{\frac{\gamma_t}{\gamma_{t+1}}\log\left(E\left[e^{-\gamma_{t+1}X}|\mathcal{F}_{t+1}\right]\right)\right\}\big|\mathcal{F}_t\right]\right)\nonumber\\
&=\frac{1}{\gamma_t}\log\left(E\left[E\left[e^{-\gamma_{t+1}X}|\mathcal{F}_{t+1}\right]^{\frac{\gamma_t}{\gamma_{t+1}}}\big|\mathcal{F}_t\right]\right).
\end{align*}
Thus $\rho_t(-\rho_{t+1})=\rho_t$ if $\gamma_t=\gamma_{t+1}$, and this proves time consistency. Rejection (resp.\ acceptance) consistency follows from the generalized Jensen inequality that will be proved in Lemma~\ref{jensen}. We apply this inequality at time $t+1$ to the bounded random variable $Y:=e^{-\gamma_{t+1}X}$ and the ${\mathcal B}\left((0,\infty)\right)\otimes\mathcal{F}_{t+1}$-measurable function
\[
u\;:\;(0,\infty)\times\Omega\;\rightarrow\;\mathbb R,\quad\quad u(x,\omega):=x^{\frac{\gamma_t(\omega)}{\gamma_{t+1}(\omega)}}.
\]
Note that $u(\cdot,\omega)$ is convex if $\gamma_t(\omega)\ge\gamma_{t+1}(\omega)$ and concave if $\gamma_t(\omega)\le\gamma_{t+1}(\omega)$. Moreover, $u(X,\cdot)\in L^{\infty}$ for all $X\in L^{\infty}$ and $u(\cdot,\omega)$ is differentiable on $(0,\infty)$ with
\[
|u'(x,\cdot)|=\frac{\gamma_t}{\gamma_{t+1}}x^{\frac{\gamma_t}{\gamma_{t+1}}-1}\le ax^b\quad P\mbox{-a.s.}
\]
for some $a,b\in\mathbb R$ if $\gamma_t\ge\gamma_{t+1}$, due to our assumption $\frac{\gamma_t}{\gamma_{t+1}}\in L^{\infty}$. On the other hand, for $\gamma_t\le\gamma_{t+1}$ we obtain
\[
|u'(x,\cdot)|=\frac{\gamma_t}{\gamma_{t+1}}x^{\frac{\gamma_t}{\gamma_{t+1}}-1}\le a\frac{1}{x^c}\quad P\mbox{-a.s.}
\]
for some $a,c\in\mathbb R$. Thus the assumptions of Lemma~\ref{jensen} are satisfied and we obtain
\[
\rho_t(-\rho_{t+1})\le\rho_t\quad\mbox{if}\quad\gamma_t\ge\gamma_{t+1}\quad P\mbox{-a.s.\;\; for all $t\in\mathbb{T}$ such that $t<T$}
\]
and
\[
\rho_t(-\rho_{t+1})\ge\rho_t\quad\mbox{if}\quad\gamma_t\le\gamma_{t+1}\quad P\mbox{-a.s.\;\; for all $t\in\mathbb{T}$ such that $t<T$}.
\]
The ``only if'' direction for constant $\gamma_t$ follows by the classical Jensen inequality.
Now we assume that the sequence $(\rho_t)_{t\in\T}$ is rejection consistent and that our assumption on the filtration $(\mathcal{F}_t)_{t\in\T}$ holds. We will show that the sequence $(\gamma_t)_{t\in\T}$ is then nonincreasing. Indeed, for $t\in\mathbb{T}$ such that $t<T$, consider $B:=\{\gamma_t<\gamma_{t+1}\}$ and suppose that $P\left[ B \right]>0$.
Our assumption on the filtration allows us to choose $A \subset B$ with $P\left[ B \right]>P\left[ A \right]>0$ and $A\notin\mathcal{F}_{t+1}$. We define a random variable $X:=-xI_{A}$ for some $x>0$. Then
\begin{align*}
\rho_{t}( -\rho_{t+1}(X))&=\frac{1}{\gamma_t}\log \left( E\left[ \exp\left( \frac{\gamma_t}{\gamma_{t+1}}\log \left( E\left[ e^{ \gamma_{t+1}x I_{A}}\big |\mathcal{F}_{t+1} \right] \right) \right) \big|\mathcal{F}_t \right] \right)\\
&=\frac{1}{\gamma_t}\log \left( E\left[ \exp\left( \frac{\gamma_t}{\gamma_{t+1}} I_{B}\log \left( E\left[ e^{ \gamma_{t+1}x I_{A}}\big |\mathcal{F}_{t+1} \right] \right) \right) \big|\mathcal{F}_t \right] \right),
\end{align*}
where we have used that $A\subset B$. Setting
\begin{equation*}
Y:=E\left[ e^{ \gamma_{t+1}x I_{A}}\big |\mathcal{F}_{t+1} \right]\\
=e^{ \gamma_{t+1}x }P\left[ A |\mathcal{F}_{t+1} \right]+P\left[ A^c |\mathcal{F}_{t+1} \right]
\end{equation*}
and bringing $\frac{\gamma_t}{\gamma_{t+1}}$ inside the logarithm we obtain
\begin{equation}\label{a1}
\rho_{t}\left( -\rho_{t+1}\left( X \right) \right)=\frac{1}{\gamma_t}\log \left( E\left[ \exp\left( I_{B}\log \left( Y^{\frac{\gamma_t}{\gamma_{t+1}} I_{B}} \right) \right) \big|\mathcal{F}_t \right] \right).
\end{equation}
The function $x \mapsto x^{\gamma_{t}(\omega)/\gamma_{t+1}(\omega)}$ is strictly concave for $P$-a.e.\ $\omega\in B$, and thus
\begin{align}\label{a2}
Y^{\frac{\gamma_t}{\gamma_{t+1}} }&=\left(e^{\gamma_{t+1}x }P\left[ A |\mathcal{F}_{t+1} \right]+(1-P\left[ A |\mathcal{F}_{t+1} \right])\right)^{\frac{\gamma_t}{\gamma_{t+1}}}\nonumber\\
&\ge e^{ \gamma_{t}x }P\left[ A |\mathcal{F}_{t+1}\right]+(1-P\left[ A |\mathcal{F}_{t+1} \right])\qquad P\mbox{-a.s.\ on}\;B,
\end{align}
with strict inequality on the set
\[
C:=\left\{P\left[ A |\mathcal{F}_{t+1}\right]>0\right\}\cap \left\{P\left[ A |\mathcal{F}_{t+1}\right]<1\right\}\cap B.
\]
Our assumptions $P[A]>0$, $A\subset B$ and $A\notin\mathcal{F}_{t+1}$ imply $P[C]>0$, and using
\begin{equation}\label{a3}
e^{ \gamma_{t}x }P\left[ A |\mathcal{F}_{t+1}\right]+(1-P\left[ A |\mathcal{F}_{t+1} \right])=E\left[e^{\gamma_txI_A}|\mathcal{F}_{t+1}\right]
\end{equation}
we obtain from (\ref{a1}), (\ref{a2}) and (\ref{a3})
\begin{equation}\label{0}
\rho_{t}\left( -\rho_{t+1}\left( X \right) \right)\ge\frac{1}{\gamma_t}\log \left( E\left[ \exp\left( I_{B}\log \left(E\left[e^{\gamma_{t}x I_{A}}|\mathcal{F}_{t+1} \right] \right) \right) \big|\mathcal{F}_t \right] \right),
\end{equation}
with strict inequality on a set of positive probability, due to the strict monotonicity of the exponential and logarithmic functions. For the right hand side of (\ref{0}) we have
\begin{align*}
\lefteqn{\frac{1}{\gamma_t}\log \left( E\left[ \exp\left( I_{B}\log \left(E\left[e^{\gamma_{t}x I_{A}}|\mathcal{F}_{t+1} \right] \right) \right) \big|\mathcal{F}_t \right] \right)=}\\
&=\frac{1}{\gamma_t}\log \left( E\left[ I_{B}E\left[e^{ \gamma_{t}x I_{A} }|\mathcal{F}_{t+1} \right]+I_{B^c} \big|\mathcal{F}_t \right] \right)\\
&=\frac{1}{\gamma_t}\log \left( E\left[ \exp\left( \gamma_{t}x I_{A} \right)\big|\mathcal{F}_t \right] \right)\\
&=\rho_{t}\left( X \right),
\end{align*}
where we have used $A\subset B$ and $B\in\mathcal{F}_{t+1}$. This contradicts the rejection consistency of $(\rho_t)_{t\in\T}$, and we conclude that $\gamma_{t+1}\le\gamma_t$ for all $t$. The proof in the case of acceptance consistency follows in the same manner. Finally, since a time consistent dynamic risk measure is both acceptance and rejection consistent, we obtain $\gamma_{t+1}=\gamma_t$ for all $t$ in the time consistent case.
\end{proof}
The following lemma concludes the proof of Proposition~\ref{riskav}.
\begin{lemma}\label{jensen}
Let $(\Omega,\mathcal{F}, P)$ be a probability space and $\mathcal{F}_t\subseteq \mathcal{F}$ a $\sigma$-field. Let $I\subseteq\mathbb R$ be an open interval and
\[
u\;:\;I\times\Omega\;\rightarrow\;\mathbb R
\]
be a ${\mathcal B}\left(I\right)\otimes\mathcal{F}_{t}$-measurable function such that $u(\cdot, \omega)$ is convex (resp. concave) and finite on $I$ for $P$-a.e. $\omega$. Assume further that
\[
|u_+'(x,\cdot)|\le c(x)\quad P\mbox{-a.s. with some}\;\;c(x)\in\mathbb R\;\,\mbox{for all}\;\;x\in I,
\]
where $u_+'(\cdot,\omega)$ denotes the right-hand derivative of $u(\cdot,\omega)$. Let $X\;:\;\Omega\;\rightarrow\;[a,b]\subseteq I$ be an $\mathcal{F}$-measurable bounded random variable such that $E\left[\,|u(X,\, )|\,\right]<\infty$. Then
\[
E\left[\,u(X,\, )\,|\,\mathcal{F}_t\,\right]\ge u\left(E[X|\mathcal{F}_t],\, \right)\quad(\mbox{resp.}\;\le)\quad P\mbox{-a.s.}
\]
\end{lemma} \begin{proof} We will prove the assertion for the convex case; the concave one follows in the same manner. Fix $\omega\in\Omega$ such that $u(\cdot, \omega)$ is convex. Due to convexity we obtain for all $x_0\in I$
\[
u(x,\omega)\ge u(x_0,\omega)+u_+'(x_0,\omega)(x-x_0)\quad \mbox{for all}\quad x\in I.
\]
Take $x_0=E[X|\mathcal{F}_t](\omega)$ and $x=X(\omega)$. Then
\begin{equation}\label{dk}
u(X(\omega),\omega)\ge u(E[X|\mathcal{F}_t](\omega),\omega)+u_+'(E[X|\mathcal{F}_t](\omega),\omega)(X(\omega)-E[X|\mathcal{F}_t](\omega))
\end{equation}
for $P$-almost all $\omega\in\Omega$. Note further that ${\mathcal B}\left(I\right)\otimes\mathcal{F}_{t}$-measurability of $u$ implies ${\mathcal B}\left(I\right)\otimes\mathcal{F}_{t}$-measurability of $u_+'$. Thus
\[
\omega\,\rightarrow\, u(E[X|\mathcal{F}_t](\omega),\omega)\quad\mbox{and}\quad \omega\,\rightarrow\,u_+'(E[X|\mathcal{F}_t](\omega),\omega)
\]
are $\mathcal{F}_t$-measurable random variables, and $\omega\rightarrow u(X(\omega),\omega)$ is $\mathcal{F}$-measurable. Moreover, due to our assumption on $X$, there are constants $a,b\in I$ such that $a\le E[X|\mathcal{F}_t]\le b\;P$-a.s. Since $u_+'(\cdot,\omega)$ is nondecreasing by convexity, using our assumption on the boundedness of $u_+'$ we obtain
\[
-c(a)\le u_+'(a,\omega)\le u_+'(E[X|\mathcal{F}_t],\omega)\le u_+'(b,\omega)\le c(b),
\]
i.e. $u_+'(E[X|\mathcal{F}_t],\,)$ is bounded. Since $E\left[\,|u(X,\, )|\,\right]<\infty$, we can take conditional expectations on both sides of (\ref{dk}) and obtain
\begin{align*}
E[\,u(X,\,)\,|\,\mathcal{F}_t\,]&\ge E\left[\,u(E[X|\mathcal{F}_t],\,)+u_+'(E[X|\mathcal{F}_t],\,)(X-E[X|\mathcal{F}_t])\,|\,\mathcal{F}_t\,\right]\\
&=E\left[\,u(E[X|\mathcal{F}_t],\,)\,|\,\mathcal{F}_t\,\right]=u\left(E[X|\mathcal{F}_t],\,\right)\quad P\mbox{-a.s.},
\end{align*}
where we have used $\mathcal{F}_t$-measurability of $u(E[X|\mathcal{F}_t],\,)$ and of $u_+'(E[X|\mathcal{F}_t],\,)$ and the boundedness of $u_+'(E[X|\mathcal{F}_t],\,)$. This proves our claim.\end{proof}
\bibliographystyle{plain}
\section{Introduction}
In this paper, we consider viscosity solutions
$u$ to Hamilton-Jacobi equations
\begin{equation}\label{HJ initial}
\partial_{t}u+H(D_{x} u)=0 \qquad \textrm{in } \Omega\subset
[0,T]\times \mathbb{R}^{n} .
\end{equation}
As it is well known, solutions of the Cauchy problem for
\eqref{HJ initial} develop singularities of the gradient
in finite time, even if the initial data $u (0, \cdot)$
is extremely regular.
The theory of viscosity solutions, introduced by Crandall and Lions
30 years ago, provides several
powerful existence and uniqueness results which allow
to go beyond the formation of singularities. Moreover,
viscosity solutions are the limit of several smooth
approximations of \eqref{HJ initial}.
For a review of the concept of viscosity solution and the
related theory for equations of type \eqref{HJ initial}
we refer to \cite{bress1,cansin,lions}.
In this paper we are concerned about the regularity
of such solutions, under the following key assumption:
\begin{equation}\label{convex ineq}
H\in C^2(\mathbb{R}^n) \qquad \mbox{and} \qquad
c_{H}^{-1}Id_{n}\leq D^2H\leq c_{H}Id_{n} \quad\mbox{for some
$c_H>0$.}
\end{equation}
There is a vast literature about this issue. As it is
well-known, under the assumption \eqref{convex ineq}, any viscosity
solution $u$ of \eqref{HJ initial} is locally semiconcave in $x$. More precisely,
for every $K\subset\subset \Omega$
there is a constant $C$ (depending on $K, \Omega$ and $c_H$) such that
the function $x\mapsto u(t,x)- C |x|^2$ is concave on $K$. This easily implies that
$u$ is locally Lipschitz and that $\nabla u$ has locally bounded variation,
i.e. that the distributional Hessian $D^2_x u$
is a symmetric matrix of Radon measures. It is then not difficult
to see that the same conclusion holds for $\partial_t D_x u$ and
$\partial_{tt} u$. Note that this result is
independent of the boundary values of $u$ and can be regarded as an
interior regularization effect of the equation.
The rough intuitive picture
that one has in mind is therefore that of functions which are Lipschitz
and whose gradient is piecewise smooth, undergoing jump discontinuities
along a set of codimension $1$ (in space and time). A refined regularity
theory, which confirms this picture and goes beyond, analyzing the behavior
of the functions where singularities are formed, is available under
further assumptions on the boundary values of $u$ (we refer
to the book \cite{cansin} for an account on this research topic). However, if the boundary values are
just Lipschitz, these results do not apply and the corresponding
viscosity solutions might be indeed quite rough, if we understand
their regularity only in a pointwise sense.
In this paper we prove that the BV regularization
effect is in fact more subtle and there is a measure-theoretic analog of ``piecewise
$C^1$ with jumps of the gradients''. As a consequence of our analysis,
we know for instance that the singular parts of the Radon
measures $\partial_{x_ix_j} u$, $\partial_{x_it} u$
and $\partial_{tt} u$ are concentrated on a rectifiable set of
codimension $1$. This set is indeed the measure theoretic jump set $J_{D_x u}$
of $D_x u$ (see below for the precise definition). This
excludes, for instance, that the second derivative of $u$
can have a complicated
fractal behaviour. Using the language introduced in \cite{dgamb} we say that
$D_x u$ and $\partial_t u$ are (locally) special functions of bounded variation, i.e.
they belong to the space $SBV_{loc}$ (we refer to the monograph \cite{afp}
for more details). A typical example of a
$1$-dimensional function which belongs to BV but not to SBV is the
classical Cantor staircase (cp. with Example 1.67 of \cite{afp}).
\begin{teo}\label{main theo}
Let $u$ be a viscosity
solution of \eqref{HJ initial}, assume \eqref{convex ineq}
and set $\Omega_{t}:=\{x\in \mathbb{R}^n:(t,x)\in
\Omega\}$.
Then, the set of times
\begin{equation}\label{e:exceptional}
S:=\{t:D_{x}u(t,\cdot) \notin SBV_{loc}(\Omega_{t})\}
\end{equation}
is at most countable. In particular $D_x u, \partial_t u \in SBV_{loc} (\Omega)$.
\end{teo}
\begin{cor}\label{corollary1}
Under assumption \eqref{convex ineq},
the gradient of any viscosity solution $u$ of
\begin{equation}
H(D_{x}u)=0\qquad \textrm{in } \Omega\subset \mathbb{R}^{n},
\end{equation}
belongs to $SBV_{loc}(\Omega)$.
\end{cor}
Theorem \ref{main theo} was proved first by Luigi Ambrosio
and the second author in the special case $n=1$ (see \cite{adl}
and also \cite{robyr} for the extension to Hamiltonians
$H$ depending on $(t,x)$ and $u$). Some of the ideas
of our proof originate indeed in the work \cite{adl}. However,
in order to handle the higher dimensional case, some new
ideas are needed. In particular, a key role is played
by the geometrical theory of monotone functions developed
by Alberti and Ambrosio in \cite{aa}.
\section{Preliminaries: the theory of monotone functions}
\begin{defi}
Let $\Omega\subset \mathbb{R}^{n}$ be an open set. We say that a continuous
function $u: \Omega\rightarrow \mathbb{R}$ is \emph{semiconcave}
if, for any convex $K \subset\subset \Omega$, there exists
$C_{K}> 0$ such that
\begin{equation}\label{e:semiconc-cond}
u(x+h)+u(x-h)-2u(x)\leq C_{K}|h|^{2},
\end{equation}
for all $x,h \in \mathbb{R}^n$ with $x,x-h,x+h \in K$.
The smallest nonnegative costant $C_K$ such that \eqref{e:semiconc-cond}
holds on $K$ will be called {\em semiconcavity constant of $u$ on $K$}.
\end{defi}
Next, we introduce the concept of superdifferential.
\begin{defi}
Let $u: \Omega \to \mathbb{R}$ be a measurable function.
The set $\partial u(x)$,
called the
$\emph{superdifferential}$
of $u$ at point $x\in \Omega$, is defined as
\begin{align}
\partial u(x)&:=\Big{\{}p\in \mathbb{R}^n: \limsup_{y\rightarrow
x}\frac{u(y)-u(x)-p\cdot(y-x)}{|y-x|}\leq 0\Big{\}}.
\end{align}
\end{defi}
Using the above definition we can describe some properties of
semiconcave functions (see Proposition 1.1.3 of \cite{cansin}):
\begin{prop}\label{concave proposition}
Let $\Omega\subset \mathbb{R}^n$ be open and $K\subset \Omega$ a
compact convex set. Let $u:\Omega\rightarrow
\mathbb{R}$ be a semiconcave function with semiconcavity constant
$C_K \geq 0$. Then, the function
\begin{equation}
\tilde{u}:x\mapsto u(x)-\frac{C_K}{2}|x|^2 \qquad \textrm{ is concave in }K.
\end{equation}
In particular, for any given $x,y\in K$, $p\in \partial \tilde u(x)$ and
$q\in \partial \tilde{u} (y)$ we have that
\begin{equation}
\langle q-p,y-x\rangle \leq 0.
\end{equation}
\end{prop}
From now on, when $u$ is a semi--concave function,
we will denote the set-valued map $x\to \partial \tilde{u} (x) + C_K x$
as $\partial u$. An important observation is that, being $\tilde{u}$
concave, the map $x\to \partial \tilde{u} (x)$ is a maximal monotone
function.
\subsection{Monotone functions in $\mathbb{R}^{n}$}\label{chap mononotone}
Following the work of Alberti and Ambrosio \cite{aa} we introduce
here some results about the theory of monotone functions in
$\mathbb{R}^{n}$. Let $B:\mathbb{R}^n\rightarrow \mathbb{R}^n$ be a set-valued map (or
multifunction), i.e. a map which maps every point $x\in \mathbb{R}^n$ into
some set $B(x)\subset \mathbb{R}^n$. For all $x\in \mathbb{R}^n$ we define:
\begin{itemize}
\item the {\em domain} of $B$, $Dm (B):=\{x:B(x)\neq\emptyset\}$,
\item the {\em image} of $B$, $Im (B):=\{y: \exists x, y\in B(x)\}$,
\item the {\em graph} of $B$,
$\Gamma B:=\{(x,y)\in \mathbb{R}^n \times \mathbb{R}^n: y\in
B(x)\}$,
\item the {\em inverse} of $B$, $[B^{-1}](x):=\{y:x\in B(y)\}$.
\end{itemize}
\begin{defi}\label{def monotone fct} Let $B:\mathbb{R}^n \rightarrow \mathbb{R}^n$
be a multifunction, then
\begin{enumerate}
\item $B$ is a \emph{monotone} function if
\begin{equation}\label{e:minoreuguale}
\langle y_{1}-y_{2},x_{1}-x_{2}\rangle\leq 0 \qquad \forall
x_{i} \in \mathbb{R}^{n}, y_{i} \in B(x_{i}), i=1,2.
\end{equation}
\item A monotone function $B$ is called \emph{maximal} when it
is maximal with respect to the inclusion in the class of
monotone functions, i.e. if the following implication holds:
\begin{equation}
A(x)\supset B(x) \textrm{ for all }x, A \textrm{ monotone
}\Rightarrow A=B.
\end{equation}
\end{enumerate}
\end{defi}
Observe that in this work we assume $\leq$ in \eqref{e:minoreuguale}
instead of the most common $\geq$. However, one can pass
from one convention to the other by simply considering $-B$ instead of
$B$. The observation of the previous subsection is then
summarized in the following Theorem.
\begin{teo}\label{t:concave}
The supergradient $\partial u$ of a concave function is a maximal
monotone function.
\end{teo}
An important tool of the theory of maximal monotone functions,
which will play a key role in this paper, is the
Hille-Yosida approximation
(see Chapters 6 and 7 of \cite{aa}):
\begin{defi}\label{d:Hille}
For every $\varepsilon>0$
we set
$\Psi_{\varepsilon}(x,y):=(x-\varepsilon y,y)$ for all $(x,y)\in
\mathbb{R}^n \times \mathbb{R}^n$,
and for every maximal monotone function $B$ we
define $B_{\varepsilon}$ as the multifunction whose graph is
$\Psi_{\varepsilon}(\Gamma B)$,
that is, $\Gamma B_{\varepsilon}=\{(x-\varepsilon y,y):
(x,y)\in \Gamma B\}$. Hence
\begin{equation}
B_{\varepsilon}:=(\varepsilon Id-B^{-1})^{-1}.
\end{equation}
\end{defi}
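\begin{re}
As a simple example (a direct computation from Definition \ref{d:Hille}, included here for illustration), let $n=1$ and let $B=\partial u$ be the superdifferential of the concave function $u(x)=-|x|$, so that $B(x)=\{-1\}$ for $x>0$, $B(x)=\{1\}$ for $x<0$ and $B(0)=[-1,1]$. Applying $\Psi_{\varepsilon}$ to $\Gamma B$ we obtain
\begin{equation*}
B_{\varepsilon}(z)=\left\{%
\begin{array}{ll}
-{\rm sgn}\,(z) & \hbox{if $|z|\geq\varepsilon$,} \\
-z/\varepsilon & \hbox{if $|z|<\varepsilon$,} \\
\end{array}%
\right.
\end{equation*}
a single-valued $1/\varepsilon$-Lipschitz function: the vertical segment of $\Gamma B$ above $x=0$ has been tilted into a graph.
\end{re}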
In the next Theorems we collect some properties of maximal
monotone functions $B$ and their approximations
$B_{\varepsilon}$ defined above.
\begin{teo}\label{t:cont}
Let $B$ be a maximal monotone function.
Then, the set
$S(B):= \{x: B (x) \mbox{ is not single valued}\}$
is a $\mathcal{H}^{n-1}$ rectifiable set.
Let $\tilde{B}: Dm (B)\to \mathbb{R}^n$ be such that $\tilde{B} (x) \in
B (x)$ for every $x$.
Then $\tilde{B}$ is a measurable function and $B(x)=\{\tilde{B} (x)\}$
for a.e. $x$.
If $Dm (B)$ is open, then $D\tilde{B}$ is a measure, i.e. $\tilde{B}$ is a
function of locally bounded variation.
If $K_i$ is a sequence of compact sets contained
in the interior of $Dm (B)$ with $K_i\downarrow
K$, then $B (K_i)\to B(K)$ in the Hausdorff sense.
Therefore, the map $\tilde{B}$ is continuous at every $x\not\in S(B)$.
Finally, if $Dm (B)$ is open and $B=\partial u$ for
some concave function $u: Dm (B)\to \mathbb{R}$, then $\tilde{B}(x)
= Du (x)$ for a.e. $x$ (recall that $u$ is locally
Lipschitz, and hence the distributional
derivative of $u$ coincides a.e. with the classical differential).
\end{teo}
\begin{proof} First of all, note that, by Theorem 2.2 of \cite{aa},
$S(B)$ is the union of rectifiable sets of Hausdorff dimension
$n-k$, $k\geq 1$. This guarantees the existence of the classical
measurable function $\tilde{B}$. The BV regularity when
$Dm (B)$ is open is shown in Proposition 5.1 of \cite{aa}.
\medskip
Next, let $K$ be a compact set contained in the interior of $Dm (B)$.
By Corollary 1.3(3) of \cite{aa}, $B (K)$ is bounded. Thus,
since $\Gamma B \cap (K\times \mathbb{R}^n)$ is closed by maximal
monotonicity, it turns out that it is also compact. The continuity
claimed in the second paragraph of the Theorem is then a simple
consequence of this observation.
\medskip
The final paragraph of the Theorem is proved in Theorem 7.11 of \cite{aa}.
\end{proof}
In this paper, since we will always consider monotone functions
that are the supergradients of some concave functions, we will
use $\partial u$ for the supergradient and $Du$ for the distributional
gradient. A corollary of Theorem \ref{t:cont} is the following.
\begin{cor}\label{c:contconc}
If $u: \Omega \to \mathbb{R}$ is semiconcave, then $\partial u (x)
= \{Du (x)\}$ for a.e. $x$, and at any point where
$\partial u$ is single--valued, $Du$ is continuous. Moreover
$D^2 u$ is a symmetric matrix of Radon measures.
\end{cor}
Next we state the following important convergence theorem.
For the notion of current and the corresponding convergence
properties we refer to the work of Alberti and Ambrosio.
However, we remark that very little of the theory
of currents is needed in this paper: what we actually need is a simple corollary of
the convergence in (ii), which is stated and proved in Subsection \ref{ss:approx}.
In (iii) we follow the usual convention
of denoting by $|\mu|$
the total variation of a (real-, matrix-, or vector-valued) measure $\mu$. The theorem stated below is in fact
contained in Theorem 6.2 of \cite{aa}.
\begin{teo}\label{convergence teo}
Let $\Omega$ be an open and convex subset of $\mathbb{R}^n$ and
let $B$ be a maximal monotone function such that
$\Omega\subset Dm(B)$. Let
$B_{\varepsilon}$ be the approximations given in Definition
\ref{d:Hille}. Then, the following properties hold.
\begin{enumerate}
\item[(i)] $B_{\varepsilon}$ is a $1/\varepsilon$-Lipschitz maximal
monotone function on $\mathbb{R}^n$ for every $\varepsilon>0$. Moreover,
if $B= Du$, then $B_\varepsilon = Du_\varepsilon$ for the concave function
\begin{equation}\label{e:esplicita}
u_\varepsilon (x) \;:=\; \sup_{y\in \mathbb{R}^n} \left\{ u (y) - \frac{1}{2\varepsilon} |x-y|^2\right\}
\end{equation}
\item[(ii)] $\Gamma B$ and $\Gamma B_\varepsilon$
have a natural structure as integer rectifiable currents,
and $\Gamma B_{\varepsilon} \res
\Omega \times \mathbb{R}^n$ converges to $\Gamma B \res \Omega\times \mathbb{R}^n$ in
the sense of currents
as $\varepsilon\downarrow 0$.
\item[(iii)] $DB_{\varepsilon}\rightharpoonup^* D\tilde{B}$ and
$|DB_{\varepsilon}|\rightharpoonup^*
|D\tilde{B}|$ in the sense of measures on $\Omega$.
\end{enumerate}
\end{teo}
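\begin{re}
As an illustration of (i) (again a direct computation, inserted for the reader's convenience), consider the concave function $u(x)=-|x|$ on $\mathbb{R}$. The sup-convolution in \eqref{e:esplicita} gives
\begin{equation*}
u_{\varepsilon}(x)=\left\{%
\begin{array}{ll}
-x^2/(2\varepsilon) & \hbox{if $|x|\leq\varepsilon$,} \\
-|x|+\varepsilon/2 & \hbox{if $|x|\geq\varepsilon$,} \\
\end{array}%
\right.
\end{equation*}
and $Du_{\varepsilon}$ coincides with the $1/\varepsilon$-Lipschitz map $B_{\varepsilon}$ computed in the remark following Definition \ref{d:Hille}.
\end{re}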
\subsection{BV and SBV functions} We conclude the
section by introducing the basic notations related to
the space $SBV$ (for a complete survey on this topic we refer the reader to \cite{afp}).
If $B\in BV(A, \mathbb{R}^k)$, then it is possible to split the measure $DB$ into
three mutually singular parts:
$$DB=D_{a}B+D_{j}B+D_{c}B.$$
$D_{a}B$ denotes the absolutely continuous part (with respect to the
Lebesgue measure). $D_{j}B$ denotes the jump part of $D B$.
When $A$ is a $1$-dimensional domain, $D_j B$ consists of
a countable sum of weighted Dirac masses, and hence it is also
called the atomic part of $DB$. In higher dimensional domains, $D_j B$
is concentrated on a rectifiable set of codimension $1$, which corresponds
to the measure-theoretic jump set $J_B$ of $B$.
$D_{c}B$ is called the
Cantor part of the gradient and it is the ``diffused part''
of the singular measure $D_{s}B:=D_{j}B+D_{c}B$. Indeed
\begin{equation}\label{e:cantor_prop}
D_c B (E) = 0
\qquad\mbox{for any Borel set $E$ with $\mathcal{H}^{n-1} (E)<\infty$.}
\end{equation}
For all these
statements we refer to Section 3.9 of \cite{afp}.
\begin{defi}
Let $B\in BV(\Omega)$, then $B$ is a special function of bounded
variation, and we write $B\in SBV(\Omega)$, if $D_{c}B=0$, i.e. if the
measure $DB$ has no Cantor part. The more general
space $SBV_{loc} (\Omega)$ is
defined in the obvious way.
\end{defi}
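As a one-dimensional illustration of this decomposition, consider on $A=]0,1[$ the function $B(x)=x+\mathbf{1}_{[1/2,1[}(x)+c(x)$, where $\mathbf{1}_{[1/2,1[}$ denotes the characteristic function of $[1/2,1[$ and $c$ the Cantor staircase. Then $D_aB$ is the Lebesgue measure, $D_jB=\delta_{1/2}$ and $D_cB=Dc$ is the Cantor measure; hence $B\in BV(]0,1[)\setminus SBV(]0,1[)$, whereas dropping the summand $c$ yields a function of $SBV(]0,1[)$.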
In what follows, when $u$ is a (semi)-concave function, we will
denote by $D^2 u$ the distributional hessian of $u$. Since $D u$
is, in this case, a $BV$ map, the discussion above applies.
In this case we will use the notation $D^2_a u$, $D^2_j u$ and
$D^2_c u$. An important property of $D^2_c u$ is the following.
\begin{prop}\label{p:cont2}
Let $u$ be a (semi)-concave function. If $D$ denotes the set
of points where $\partial u$ is not single--valued, then
$|D^2_c u| (D) =0$.
\end{prop}
\begin{proof} By Theorem \ref{t:cont}, the set $D$ is $\mathcal{H}^{n-1}$-rectifiable.
This means, in particular, that it is $\sigma$-finite with respect to $\mathcal{H}^{n-1}$. By the property
\eqref{e:cantor_prop} we conclude $D^2_c u (E)=0$ for every Borel subset $E$ of $D$.
Therefore $|D^2_c u| (D) =0$.
\end{proof}
\section{Hamilton-Jacobi equations}
In this section we collect some definitions and well-known results
about Hamilton-Jacobi equations. For an introduction to this topic we refer the reader to \cite{bress1}, \cite{cansin} and \cite{evans}. In this paper we will
consider the following Hamilton-Jacobi equations
\begin{eqnarray}
&&\partial_{t} u+H(D_{x}u)\;=\; 0, \qquad \hbox{in $\Omega \subset
[0,T]\times \mathbb{R}^n $\, ,}
\label{e:HJt}\\
&& H (D_x u) \;=\; 0, \qquad\,\,\,\,\qquad \hbox{in
$\Omega\subset \mathbb{R}^n$\, ,
}\label{e:HJs}
\end{eqnarray}
under the assumption that
\begin{itemize}
\item[{\bf A1:}] The Hamiltonian $H\in C^2(\mathbb{R}^n)$ satisfies:
$$p \mapsto H(p) \textrm{ is convex and }\lim_{|p|\rightarrow \infty} \frac{H(p)}{|p|}=+\infty.$$
\end{itemize}
Note that this assumption is obviously implied by \eqref{convex ineq}.
We will often consider $\Omega= [0,T]\times \mathbb{R}^n$ in \eqref{e:HJt}
and couple it with the initial condition
\begin{equation}\label{e:initial}
u (0,x)\;=\; u_0 (x)
\end{equation}
under the assumption that
\begin{itemize}
\item[{\bf A2:}]
The initial data $u_{0}:\mathbb{R}^n\rightarrow\mathbb{R}$ is Lipschitz continuous and bounded.
\end{itemize}
\begin{defi}[Viscosity solution]
A bounded, uniformly continuous function $u$ is called a
\emph{viscosity solution} of \eqref{e:HJt} (resp. \eqref{e:HJs})
provided that
\begin{enumerate}
\item $u$ is a \emph{viscosity subsolution} of \eqref{e:HJt} (resp. \eqref{e:HJs}): for
each $v\in C^{\infty}(\Omega)$ such that $u-v$
has a maximum at $(t_{0},x_{0})$ (resp. $x_0$),
\begin{equation}
v_{t}(t_{0},x_{0})+H(D_{x}v(t_{0},x_{0}))\leq 0
\qquad \mbox{(resp. $H (D v (x_0))\leq 0$);}
\end{equation}
\item $u$ is a \emph{viscosity supersolution} of \eqref{e:HJt} (resp. \eqref{e:HJs}): for
each $v\in C^{\infty}(\Omega)$ such that $u-v$
has a minimum at $(t_{0},x_{0})$ (resp. $x_0$),
\begin{equation}
v_{t}(t_{0},x_{0})+H(D_{x}v(t_{0},x_{0}))\geq 0
\qquad\mbox{(resp. $H (Dv (x_0))\geq 0$).}
\end{equation}
\end{enumerate}
In addition, we say that
$u$ solves the Cauchy problem \eqref{e:HJt}-\eqref{e:initial} on
$\Omega = [0,T]\times \mathbb{R}^n$ if \eqref{e:initial} holds in the
classical sense.
\end{defi}
\begin{teo}[The Hopf-Lax formula as viscosity solution]\label{viscosity-theorem}
The unique viscosity solution of the initial-value problem
\eqref{e:HJt}-\eqref{e:initial} is given by the Hopf-Lax formula
\begin{equation}\label{Hopf-Lax}
u(t,x)=\min_{y\in \mathbb{R}^n}
\Big{\{}u_{0}(y)+tL\Big{(}\frac{x-y}{t}\Big{)} \Big{\}} \qquad (t>0,
x\in \mathbb{R}^{n}),
\end{equation}
where $L$ is the Legendre transform of $H$:
\begin{equation}
L(q):=\sup_{p\in \mathbb{R}^{n}} \{p\cdot q- H(p)\} \qquad (q \in \mathbb{R}^{n}).
\end{equation}
\end{teo}
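For instance, in the model case $H(p)=|p|^2/2$ one computes $L(q)=|q|^2/2$, and \eqref{Hopf-Lax} reduces to the inf-convolution
\begin{equation*}
u(t,x)=\min_{y\in \mathbb{R}^n}
\Big{\{}u_{0}(y)+\frac{|x-y|^{2}}{2t} \Big{\}}.
\end{equation*}
Since for every fixed $y$ the map $x\mapsto u_{0}(y)+|x-y|^{2}/(2t)$ satisfies \eqref{e:semiconc-cond} with constant $1/t$, so does the pointwise minimum: in this special case one sees directly the semiconcavity of $u(t,\cdot)$ stated in Proposition \ref{properties visc sol}(ii) below.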
In the next Proposition we collect some properties of the
viscosity solution defined by the Hopf-Lax formula:
\begin{prop}\label{properties visc sol}
Let $u(t,x)$ be the viscosity solution of \eqref{e:HJt}-\eqref{e:initial} and defined by
\eqref{Hopf-Lax}, then
\begin{enumerate}
\item[(i)] \textbf{A functional identity:} For each $x\in\mathbb{R}^n$ and
$0\leq s <t \leq T$, we have
\begin{equation}\label{functional identity}
u(t,x)=\min_{y\in \mathbb{R}^n}
\Big{\{}u(s,y)+(t-s)L\Big{(}\frac{x-y}{t-s}\Big{)} \Big{\}}.
\end{equation}
\item[(ii)] \textbf{Semiconcavity of the solution:} For any
fixed $\tau>0$ there exists a constant $C(\tau)$
such that the function defined by
\begin{equation}
u_t:\mathbb{R}^n \rightarrow \mathbb{R}
\textrm{ with } u_t(x):=u(t,x),
\end{equation}
is semiconcave with constant less than $C(\tau)$ for any $t\geq \tau$.
\item[(iii)]
\textbf{Characteristics:} The minimum point $y$ in \eqref{Hopf-Lax}
is unique if and only if $\partial u_t (x)$ is single valued. Moreover,
in this case we have $y=x-t DH(D_x u (t,x))$.
\item[(iv)] \textbf{The linear programming principle:} Let $t>s>0$, $x\in \mathbb{R}^n$ and
assume that $y$ is a minimum for \eqref{Hopf-Lax}. Let $z=
\frac{s}{t} x + (1-\frac{s}{t}) y$.
Then $y$ is the {\em unique} minimum for $u_0 (w) + s L
((z-w)/s)$.
\end{enumerate}
\end{prop}
\begin{re}
For a detailed proof of Theorem \ref{viscosity-theorem} and
Proposition \ref{properties visc sol} we refer the reader to
Chapter 6 of \cite{cansin} and Chapters 3 and 10 of \cite{evans}.
\end{re}
Next, we state a useful locality property of the solutions of \eqref{e:HJt}.
\begin{prop}\label{p:locality}
Let $u$ be a viscosity solution of \eqref{e:HJt} in $\Omega$. Then $u$ is locally
Lipschitz. Moreover, for any $(t_0, x_0)\in \Omega$, there exists a neighborhood
$U$ of $(t_0, x_0)$, a positive number $\delta$
and a Lipschitz function $v_0$ on $\mathbb{R}^n$ such that
\begin{itemize}
\item[(Loc)] $u$ coincides on $U$ with the viscosity solution of
\begin{equation}\label{e:(Loc)}
\left\{
\begin{array}{l}
\partial_t v + H (D_x v) \;=\; 0 \qquad \mbox{in $[t_0-\delta, \infty[\times \mathbb{R}^n$}\\ \\
v (t_0-\delta, x) \;=\; v_0(x)\, .
\end{array}\right.
\end{equation}
\end{itemize}
\end{prop}
This property of viscosity solutions of Hamilton-Jacobi equations is
obviously related to the finite speed of propagation (which holds
when the solution is Lipschitz) and it is well-known. One could
prove it, for instance, by suitably modifying the proof of Theorem 7
at page 132 of \cite{evans}. On the other hand
we have not been able to find a complete reference for Proposition
\ref{p:locality}. Therefore, for the reader's convenience, we provide
a reduction to some
other properties clearly stated in the literature.
\begin{proof}
The local Lipschitz regularity of $u$ follows from its local semiconcavity,
for which we refer to \cite{cansin}. As for the locality property
(Loc), we let $\delta>0$ and $R$ be such that $C:=
[t_0-\delta, t_0+\delta]\times
\overline{B}_R (x_0)\subset \Omega$. It is then known that
the following dynamic programming principle holds for every $(t,x)\in C$
(see for instance Remark 3.1 of \cite{canson}
or \cite{evsoug}):
\begin{eqnarray}
u (t,x) &=& \inf \bigg\{ \int_\tau^t L (\dot{\xi} (s))\, ds
+ u (\tau, \xi (\tau)) \, \Big|\, \tau\leq t, \xi\in W^{1,\infty} ([\tau, t]),
\label{e:dyn_prog}\\
&&\qquad\qquad\qquad\qquad\qquad\qquad\qquad \xi (t)=x
\mbox{ and } (\tau, \xi (\tau))\in \partial C\bigg\}\, . \nonumber
\end{eqnarray}
The Lipschitz regularity of $u$ and the convexity of $L$
ensure that a minimizer exists. Moreover any minimizer is a straight line.
Next, assume that $x\in B_\delta (x_0)$. If $\delta$ is much smaller than $R$,
the Lipschitz regularity
of $u$ ensures that any minimizer $\xi$ has the endpoint $(\tau, \xi (\tau))$
lying in $\{t_0-\delta\}\times B_R (x_0)$. Thus,
for every $(t, x)\in [t_0-\delta, t_0+\delta]\times B_\delta (x_0)$
we get the formula
\begin{equation}\label{e:HL_loc}
u(t,x) \;=\; \min_{y\in \overline{B}_R (x_0)}
\left(u(t_0-\delta, y) + (t-t_0+\delta) L \left(\frac{x-y}{t-t_0+\delta}\right)\right)\, .
\end{equation}
Next, extend the map $\overline{B}_R (x_0)\ni x\mapsto u (t_0-\delta, x)$ to
a bounded Lipschitz map $v_0: \mathbb{R}^n \to \mathbb{R}$, keeping the same Lipschitz constant.
Then the solution of \eqref{e:(Loc)} is given by the Hopf-Lax formula
\begin{equation}\label{e:HL_glob}
v (t,x)\;=\; \min_{y\in \mathbb{R}^n} \left( v_0 (y) + (t-t_0+\delta)
L \left(\frac{x-y}{t-t_0+\delta}\right)\right)\, .
\end{equation}
If $(t, x)\in [t_0-\delta, t_0+\delta]\times B_\delta (x_0)$,
then any minimum point $y$ in \eqref{e:HL_glob} belongs to $\overline{B}_R (x_0)$,
provided $\delta$ is sufficiently small (compared to $R$ and the Lipschitz
constant of $v$, which in turn is bounded independently of $\delta$).
Finally, since $v_0 (y) = u (t_0-\delta, y)$ for every $y\in \overline{B}_R (x_0)$,
\eqref{e:HL_loc} and \eqref{e:HL_glob} imply that $u$ and $v$
coincide on $[t_0-\delta, t_0+\delta]\times B_\delta (x_0)$ provided $\delta$
is sufficiently small.
\end{proof}
\section{Proof of the main Theorem}
\subsection{Preliminary remarks}
Let $u$ be a viscosity solution of \eqref{e:HJt}. By Proposition
\ref{p:locality} and the time invariance of the equation, we can,
without loss of generality, assume that $u$ is a solution on
$[0,T]\times \mathbb{R}^n$ of the Cauchy-Problem
\eqref{e:HJt}-\eqref{e:initial} under the assumptions A1, A2.
Clearly, it suffices to show that, for every $j>0$, the set of times
$S\cap ]1/j, +\infty[$ is countable. Therefore, by Theorem
\ref{viscosity-theorem} and the time--invariance of
the Hamilton--Jacobi
equations, we can restrict ourselves to the following case:
\begin{equation}\label{2der bounded}
\mbox{$\exists C$ s.t. $u_{\tau}$ is semiconcave
with constant less than $C$ and $|Du_\tau|\leq C$ $\forall \tau\in [0,T]$}.
\end{equation}
Arguing in the same way, we can further assume that
\begin{equation}\label{e:eps}
\mbox{$T$ is smaller than some constant $\varepsilon (C)>0$,}
\end{equation}
where the choice of the constant $\varepsilon (C)$ will be specified later.
Next we consider a ball $B_R (0)\subset \mathbb{R}^n$ and a bounded
convex set $\Omega\subset [0,T]\times \mathbb{R}^n$ with the properties
that:
\begin{itemize}
\item $B_R (0)\times \{s\}\subset \Omega$ for every $s\in [0,T]$;
\item For any $(t,x)\in \Omega$ and for any $y$ reaching the
minimum in the formulation \eqref{Hopf-Lax}, $(0,y)\in \Omega$
(and therefore the entire segment joining $(t,x)$ to $(0,y)$
is contained in $\Omega$).
\end{itemize}
Indeed, recalling that $\|Du\|_\infty <\infty$, it suffices
to choose $\Omega:= \{(t,x)\in [0,T]\times \mathbb{R}^n : |x|\leq R + C' (T-t)\}$,
where the constant $C'$ is sufficiently large,
depending only on $\|Du\|_\infty$ and $H$.
Our goal is now to show
the countability of the set $S$ in \eqref{e:exceptional}.
\subsection{A function depending on time}
For any $s<t\in [0,T]$, we define the set--valued map
\begin{equation}\label{e:X}
X_{t,s} (x) \;:=\; x- (t-s) DH (\partial u_t (x))\, .
\end{equation}
Moreover, we will denote by $\chi_{t,s}$ the restriction
of $X_{t,s}$ to the points where $X_{t,s}$ is single--valued.
According to Theorem \ref{t:cont} and Proposition
\ref{properties visc sol}(iii),
the domain of $\chi_{t,s}$ consists of
those points where $Du_t (\cdot)$ is continuous, which are those
where the minimum point $y$ in \eqref{functional identity}
is unique. Moreover, in this case we have $\chi_{t,s} (x) = \{y\}$.
Clearly, $\chi_{t,s}$ is defined a.e. on $\Omega_t$. With
a slight abuse of notation we set
\begin{equation}
F(t)\;:=\; |\chi_{t,0} (\Omega_t)|\, ,
\end{equation}
meaning that, if we denote by $U_t$ the set of points $x\in
\Omega_{t}$ such that \eqref{Hopf-Lax} has a unique minimum point,
we have $F(t) = |X_{t,0} (U_t)|$.
The proof is then split in the following three lemmas:
\begin{lem}\label{nonincreasing}
The function $F$ is nonincreasing, i.e.
\begin{equation}
F(\sigma)\geq F(\tau) \qquad\mbox{for any $\sigma, \tau
\in [0,T]$ with $\sigma<\tau$}.
\end{equation}
\end{lem}
\begin{lem}\label{cantorlem}
If $\varepsilon$ in \eqref{e:eps} is small enough, then the following holds.
For any $t\in]0,T[$ and $\delta\in ]0, T-t]$
there exists a Borel set $E \subset \Omega_{t}$ such that
\begin{itemize}
\item[(i)] $|E|=0$, and
$|D^2_c u_t| (\Omega_t\setminus E)=0$;
\item[(ii)] $X_{t,0}$ is single valued on $E$
(i.e. $X_{t,0} (x) = \{\chi_{t,0} (x)\}$ for every $x\in E$);
\item[(iii)] and
\begin{equation}\label{cantor disappear from X}
\chi_{t,0} (E) \cap \chi_{t+\delta, 0}
(\Omega_{t+\delta})=\emptyset.
\end{equation}
\end{itemize}
\end{lem}
\begin{lem}\label{ineqlem2}
If $\varepsilon$ in \eqref{e:eps} is small enough, then the following holds.
For any $t\in ]0,\varepsilon]$ and any Borel set
$E\subset \Omega_{t}$, we have
\begin{equation}\label{ineqq}
|X_{t,0} (E)| \geq c_{0} |E|-c_1 t \int_{E}
d(\Delta u_{t})\, ,
\end{equation}
where $c_{0}$ and $c_1$ are positive constants
and $\Delta u_{t}$
is the Laplacian of $u_{t}$.
\end{lem}
\subsection{Proof of Theorem \ref{main theo}}\label{ss:main}
The three key lemmas stated above will be proved in the next two sections.
We now show how to complete the proof of the Theorem.
First of all, note that $F$ is a bounded function. Since
$F$ is, by Lemma \ref{nonincreasing}, a monotone function,
its points of discontinuity are, at most, countable.
We claim that,
if $t\in ]0,T[$ is such that $Du_t\not\in SBV_{loc} (\Omega_t)$,
then $F$ has a discontinuity at $t$.
Indeed, in this case we have
\begin{equation}\label{e:noSBV}
|D^2_{c}u_{t}| (\Omega_{t})> 0.
\end{equation}
Consider any $\delta>0$ and let $E$ be the set given by
Lemma \ref{cantorlem}. Clearly, by Lemma \ref{cantorlem}(i) and (ii),
\eqref{cantor disappear from X} and \eqref{ineqq},
\begin{equation}\label{e:quasi}
F(t+\delta)\;\leq\; F(t) + c_1 t \int_E d\, \Delta_s u_t \;\leq\; F(t) +
c_1 t \int_{\Omega_t} d\, \Delta_c u_t\, ,
\end{equation}
where the last inequality follows from
$\Delta_s u_t = \Delta_c u_t + \Delta_j u_t$
and $\Delta_j u_t\leq 0$ (because of the semiconcavity of $u$).
Next, consider the Radon--Nikodym decomposition $D^2_c u_t = M
|D^2_c u_t|$, where $M$ is a matrix--valued Borel function with
$|M|=1$. Since we are dealing with second derivatives, $M$ is
symmetric, and since $u_t$ is semiconcave, $M\leq 0$. Let
$\lambda_1, \ldots, \lambda_n$ be the eigenvalues of $- M$. Then
$1=|M|^2 = \lambda_1^2 + \ldots + \lambda_n^2$ and $- Tr M =
$\lambda_1+\ldots + \lambda_n$. Since each $\lambda_i$ belongs to $[0,1]$, we have $\lambda_i\geq\lambda_i^2$, and hence
$- Tr M \geq \lambda_1^2+\ldots+\lambda_n^2 = 1$. Therefore,
\begin{equation}\label{e:CS}
-\Delta_{c}u_{t}\;=\; - Tr M |D^2_c u_t| \;\geq\; |D^2_c u_t|\, .
\end{equation}
Hence
$$
F(t+\delta)\;\stackrel{\eqref{e:quasi}+\eqref{e:CS}}{\leq}\; F(t) -
c_1 t|D^2_c u_t| (\Omega_t)\, .
$$
Letting $\delta\downarrow 0$ we conclude
$$
\limsup_{\delta \downarrow 0} F(t+\delta)\;<\; F(t)\, .
$$
Therefore $t$ is a point of discontinuity of $F$, which is the desired
claim.
\subsection{Easy corollaries}
The conclusion that $D_x u \in SBV (\Omega)$ follows from the slicing
theory of $BV$ functions (see Theorem 3.108 of \cite{afp}).
In order to prove the same property for
$\partial_t u$ we apply the Volpert chain rule
to $\partial_t u = - H (D_x u)$. According to Theorem 3.96 of
\cite{afp}, we conclude that $[\partial_{x_j t}]_c u =
- \sum_i \partial_i H (D_x u) [\partial_{x_jx_i}]_c u = 0$ (because $[D^2_x]_c u = 0$)
and $[\partial_{tt}]_c u = - \sum_i \partial_i H (D_x u) [\partial_{x_i t}]_c u = 0$
(because we just concluded $[D^2_{x t}]_c u =0$).
As for Corollary \ref{corollary1}, let $u$ be a viscosity solution of
\eqref{e:HJs} and set $\widetilde{u}(t,x):=u(x)$. Then $\tilde{u}$
is a viscosity solution of
$$
\partial_{t}\widetilde{u}+H(D_{x}\widetilde{u})=0\,
$$
in $\mathbb{R}\times \Omega$.
By our main Theorem \ref{main theo} the set of
times for which $D_x \widetilde{u}(t,.)\notin SBV_{loc}(\Omega)$ is
at most countable. Since $D_x \widetilde{u} (t, \cdot) = D u$
for every $t$, we conclude that $Du \in
SBV_{loc}(\Omega)$.
\begin{re}
The special case of this Corollary for $\Omega\subset \mathbb{R}^2$ was
already proved in \cite{adl} (see Corollary 1.4 therein).
We note that the
proof proposed in \cite{adl} was more complicated than the one above.
This is due to the power of Theorem \ref{main theo}.
In \cite{adl} the authors proved the $1$--dimensional case
of Theorem \ref{main theo}. The proof above reduces the $2$--dimensional
case of Corollary \ref{corollary1} to the $2+1$ case of Theorem
\ref{main theo}. In \cite{adl} the $2$-dimensional case of Corollary
\ref{corollary1} was reduced to the $1+1$ case of Theorem
\ref{main theo}: this reduction requires a subtler argument.
\end{re}
\section{Estimates}
In this section we prove two important estimates. The first
is the one in Lemma \ref{ineqlem2}. The second is an estimate
which will be useful in proving Lemma \ref{cantorlem} and which we state next.
\begin{lem}\label{ineqlem1}
If $\varepsilon (C)$ in \eqref{e:eps} is sufficiently small,
then the following holds.
For any $t\in ]0,T]$, any $\delta \in [0,t]$ and any Borel set
$E\subset \Omega_{t}$ we have
\begin{equation}\label{ineqlem3}
\Big{|}X_{t, \delta} (E)\Big{|}\geq \frac{(t-\delta)^n}{t^n}
\Big{|}X_{t,0} (E)\Big{|}\, .
\end{equation}
\end{lem}
\subsection{Injectivity}
In the proof of both lemmas, the following remark
plays a fundamental role.
\begin{prop}\label{p:inj} For any $C>0$
there exists $\varepsilon (C)>0$ with the following property. If $v$ is a
semiconcave function with constant less than $C$, then the map
$x\mapsto x-t DH (\partial v (x))$ is injective for every $t\in [0, \varepsilon
(C)]$.
\end{prop}
Here the injectivity of a set--valued map $B$
is understood in the following natural way
$$
x\neq y\qquad \Longrightarrow \qquad B(x)\cap B(y)=\emptyset\, .
$$
\begin{proof}
We assume by contradiction that there exist $x_{1},x_{2}\in
\Omega_{t}$ with $x_1\neq x_2$ and such that:
$$
[x_1 - t DH (\partial v (x_1))]\cap [x_2 - t DH (\partial v
(x_2))]\neq\emptyset.
$$
This means that there is a point $y$ such that
\begin{equation}
\left\{%
\begin{array}{ll}
\frac{x_{1}-y}{t} \in DH(\partial v (x_{1})), \\
\frac{x_{2}-y}{t} \in DH(\partial v (x_{2})); \\
\end{array}%
\right.
\Rightarrow
\left\{%
\begin{array}{ll}
DH^{-1}(\frac{x_{1}-y}{t})\in\partial v (x_{1}), \\
DH^{-1}(\frac{x_{2}-y}{t})\in\partial v (x_{2}). \\
\end{array}%
\right.
\end{equation}
By the semiconcavity of $v$ we get:
\begin{equation}\label{ineq1}
M(x_{1},x_{2}):=\Big{\langle}
DH^{-1}\Big{(}\frac{x_{1}-y}{t}\Big{)}-DH^{-1}\Big{(}\frac{x_{2}-y}{t}
\Big{)},x_{1}-x_{2}\Big{\rangle}
\leq C |x_1-x_2|^2.
\end{equation}
On the other hand, $D (DH^{-1}) (x) = (D^2 H)^{-1} (DH^{-1} (x))$ (note that in this
formula, $DH^{-1}$ denotes the inverse of the map $x\mapsto DH (x)$, whereas
$(D^2 H)^{-1} (y)$ denotes the matrix $A$ which is the inverse of the matrix $B:=
D^2 H (y)$).
Therefore $D(DH^{-1}) (x)$ is a symmetric matrix,
with $D(DH^{-1}) (x)\geq c_H^{-1} Id_n$.
It follows that
\begin{align}\label{ineq2}
M(x_{1},x_{2})&=t\Big{\langle}
DH^{-1}\Big{(}\frac{x_{1}-y}{t}\Big{)}-DH^{-1}\Big{(}\frac{x_{2}-y}{t}\Big{)},
\frac{x_{1}-y}{t}-\frac{x_{2}-y}{t}\Big{\rangle}\geq
\nonumber \\
&\geq
\frac{t}{2c_{H}}\Big{|}\frac{x_{1}-y}{t}-\frac{x_{2}-y}{t}
\Big{|}^2\geq
\frac{1}{2 t c_{H}}|x_{1}-x_{2}|^2\geq \frac{1}{2\varepsilon
c_{H}}|x_{1}-x_{2}|^2.
\end{align}
But if $\varepsilon>0$ is small enough, or more precisely if it is
chosen to satisfy $2\varepsilon c_{H}< \frac{1}{C}$, then the two
inequalities (\ref{ineq1}) and (\ref{ineq2}) are in contradiction.
\end{proof}
\subsection{Approximation}\label{ss:approx}
We next consider $u$ as in the formulations of
the two lemmas, and $t\in [0,T]$. Then the function
$\tilde{v} (x):= u (x) - C|x|^2/2$ is concave.
Consider the approximations $B_\eta$ (with $\eta>0$) of $\partial \tilde{v}$
given in Definition \ref{d:Hille}. By Theorem
\ref{convergence teo}(i), $B_\eta = D \tilde{v}_\eta$ for
some concave function $\tilde{v}_\eta$ with Lipschitz gradient. Consider therefore
the function $v_\eta (x) = \tilde{v}_\eta (x) + C|x|^2/2$.
The semiconcavity constant of $v_\eta$ is not larger than $C$.
Therefore we can apply Proposition \ref{p:inj} and
choose $\varepsilon (C)$ sufficiently small in such a way that
the maps
\begin{equation}\label{e:girate}
x\;\mapsto\; A(x) = x - t DH (\partial u_t)
\qquad
x\;\mapsto\; A_\eta (x)= x - t DH (Dv_\eta)
\end{equation}
are both injective. Consider next the following measures:
\begin{equation}\label{e:measures}
\mu_\eta (E) \;:=\; |(Id - t DH (Dv_\eta)) (E)| \qquad \mu (E)\;:=\;
|(Id - t DH (\partial u_t)) (E)|\, .
\end{equation}
These measures are well-defined because of the injectivity
property proved in Proposition \ref{p:inj}.
Now, according to Theorem \ref{convergence teo},
the graphs $\Gamma Dv_\eta$ and
$\Gamma \partial u_t$ are both rectifiable currents
and the former converge, as $\eta\downarrow 0$, to
the latter. We denote them, respectively,
by $T_\eta$ and $T$. Similarly, we can associate
the rectifiable currents $S$ and $S_\eta$ to the graphs
$\Gamma A$ and $\Gamma A_\eta$ of
the maps in \eqref{e:girate}. Note that these graphs can
be obtained by composing $\Gamma \partial u_t$
and $\Gamma Dv_\eta$ with the following
global diffeomorphism of $\mathbb{R}^n$:
$$
(x,y)\;\mapsto\; \Phi (x,y) = x-tDH(y)\, .
$$
In the language of currents we then have
$S_\eta = \Phi_\sharp T_\eta$ and $S=\Phi_\sharp
T$. Therefore, $S_\eta \to S$ in the sense of currents.
We want to show that
\begin{equation}\label{e:convergence}
\mu_\eta \;\rightharpoonup^*\; \mu\, .
\end{equation}
First of all, note that $S$ and $S_\eta$ are rectifiable currents of
multiplicity $1$ supported on the rectifiable sets $\Gamma A = \Phi
(\Gamma \partial u_t)$ and $\Gamma A_\eta = \Phi(\Gamma B_\eta) =
\Phi (\Gamma Dv_\eta)$. Since $B_\eta$ is a Lipschitz map, the
approximate tangent plane $\pi$ to $S_\eta$ at (a.e.)
point $(x, A_\eta (x))$ is spanned by the vectors $e_i + DA_\eta
(x)\cdot e_i$ and hence oriented by the $n$-vector
$$
\stackrel{\to}{v} \;:=\;
\frac{(e_1 + DA_\eta (x)\cdot e_1)\wedge \ldots \wedge (e_n+DA_\eta (x)\cdot e_n)}
{|(e_1 + DA_\eta (x)\cdot e_1)\wedge \ldots \wedge (e_n+DA_\eta (x)\cdot e_n)|}\, .
$$
Now, by the calculation of Proposition \ref{p:inj},
it follows that $\det DA_\eta\geq 0$. Hence
\begin{equation}\label{e:positivity}
\langle dy_1\wedge \ldots \wedge dy_n,
\stackrel{\to}{v}\rangle \;\geq\; 0\, .
\end{equation}
By the convergence $S_\eta\to S$,
\eqref{e:positivity} holds for the tangent planes
to $S$ as well.
Next, consider a
$\varphi\in C^\infty_c (\Omega_t)$. Since both $\Gamma A$ and
$\Gamma A_\eta$ are bounded sets, consider a
ball $B_R (0)$ such that ${\rm supp}\, (\Gamma A), {\rm supp}\,
(\Gamma A_\eta)\subset \mathbb{R}^n\times B_R (0)$ and
let $\chi\in C^\infty_c (\mathbb{R}^n)$ be a cut-off function
with $\chi|_{B_R (0)} = 1$.
Then, by standard calculations on currents,
the injectivity property of Proposition \ref{p:inj}
and \eqref{e:positivity} imply that
\begin{eqnarray}
\int \varphi d\mu
&=& \langle S, \varphi (x) \chi (y) dy_1\wedge\ldots\wedge dy_n\rangle,\\
\int \varphi d\mu_\eta
&=& \langle S_\eta, \varphi (x) \chi (y) dy_1\wedge\ldots\wedge dy_n
\rangle\, .
\end{eqnarray}
Therefore, since $S_\eta \to S$, we conclude that
$$
\lim_{\eta\downarrow 0} \int \varphi d\mu_\eta
\;=\; \int \varphi d\mu\, .
$$
This shows \eqref{e:convergence}.
\subsection{Proof of Lemma \ref{ineqlem1}}\label{ss:prima}
First of all we choose $\varepsilon$ so small that the
conclusions of Proposition \ref{p:inj} and
those of Subsection \ref{ss:approx} hold.
We consider therefore, the approximations $v_\eta$
of Subsection \ref{ss:approx}, we define the measures
$\mu$ and $\mu_\eta$ as in \eqref{e:measures}
and the measures $\hat{\mu}$ and $\hat{\mu}_\eta$ as
\begin{equation}\label{e:measures2}
\hat{\mu} (E) \;:=\;
|(Id - (t-\delta) DH (\partial u_t)) (E)|
\qquad \hat{\mu}_\eta (E) \;:=\;
|(Id - (t-\delta) DH (Dv_\eta)) (E)|\, .
\end{equation}
By the same arguments as in Subsection \ref{ss:approx},
we necessarily have $\hat{\mu}_\eta\rightharpoonup^* \hat{\mu}$.
The conclusion of the Lemma can now be formulated as
\begin{equation}\label{e:restated}
\hat{\mu} \;\geq\; \frac{(t-\delta)^n}{t^n}
\mu\, .
\end{equation}
By the convergence of the measures $\mu_\eta$ and
$\hat{\mu}_\eta$ to $\mu$ and $\hat{\mu}$, it suffices
to show
\begin{equation}\label{e:restated10}
\hat{\mu}_\eta \;\geq\; \frac{(t-\delta)^n}{t^n}
\mu_\eta\, .
\end{equation}
On the other hand, since the maps $x\mapsto x-t DH (Dv_\eta)$ and
$x\mapsto x- (t-\delta) DH (Dv_\eta)$ are both injective and
Lipschitz, we can use the area formula to write:
\begin{eqnarray}
\hat{\mu}_\eta (E) &=&
\int_{E}\det\Big{(}Id_n-(t-\delta) D^{2}H(D v_\eta (x))
D^2 v_\eta (x)\Big{)}\, dx,\\
\mu_\eta (E)
&=& \int_{E}\det\Big{(}Id_n-t D^{2}H(D v_\eta (x))
D^2 v_\eta (x)\Big{)}\, dx
\end{eqnarray}
Therefore, if we set
\begin{eqnarray*}
M_1 (x) &:=& Id_n - (t-\delta) D^2H (Dv_\eta (x))
D^2 v_\eta (x)\\
M_2 (x) &:=& Id_n - t D^2H (Dv_\eta (x))D^2 v_\eta (x)\, \, ,
\end{eqnarray*}
the inequality \eqref{e:restated10} is equivalent to
\begin{equation}\label{e:ineqdet}
\det M_1 (x)\;\geq\; \frac{(t-\delta)^n}{t^n} \det M_2 (x) \qquad
\mbox{for a.e. $x$.}
\end{equation}
Note next that
\begin{eqnarray*}
\det M_1 (x) &=&
\det (D^2 H (Dv_\eta (x)))
\det\Big{(}[D^2H(D v_\eta (x))]^{-1}-(t-\delta)
D^2 v_\eta (x)\Big{)}\nonumber\\
\det M_2 (x) &=&
\det (D^2 H (Dv_\eta (x)))
\det\Big{(}[D^2H(Dv_\eta (x))]^{-1}-tD^2 v_\eta (x)\Big{)}
\end{eqnarray*}
Set $A (x):= [D^2H(D v_\eta (x))]^{-1}$ and $B (x)= D^2 v_\eta (x)$.
Then it suffices to prove that:
\begin{equation}
\det (A (x)-(t-\delta) B (x))\;\geq\;
\frac{(t-\delta)^n}{t^n}
\det (A(x)- tB (x))\, .
\end{equation}
Note that
$$
A - (t-\delta) B \;=\; \frac{\delta}{t} A + \frac{t-\delta}{t}
(A-tB)\, .
$$
By choosing $\varepsilon$ sufficiently small (but only depending
on $c_H$ and $C$),
we can assume that $A-tB$ is a positive semidefinite matrix.
Since $A$ is a positive definite matrix, we conclude
\begin{equation}\label{e:>=}
A - (t-\delta) B \;\geq\; \frac{t-\delta}{t}
(A-tB)\, .
\end{equation}
A standard argument in linear algebra shows that
\begin{equation}\label{e:det>=}
\det (A-(t-\delta) B)\;\geq\; \frac{(t-\delta)^n}{t^n}
\det (A-tB)\,
\end{equation}
which concludes the proof.
We include, for the reader's convenience, a proof of \eqref{e:>=}
$\Longrightarrow$ \eqref{e:det>=}. It suffices to show that,
if $E$ and $D$ are positive semidefinite matrices with $E\geq D$,
then $\det E\geq \det D$. Without loss of generality,
we can assume that $E$ is in diagonal form, i.e.
$E= {\rm diag}\, (\lambda_1, \ldots, \lambda_n)$, and that
$E>D$. Then each
$\lambda_i$ is positive. Define $G:={\rm diag}\, (\sqrt{\lambda_1}
,\ldots, \sqrt{\lambda_n})$. Then
$$
{\rm Id}_n \;\geq\; G^{-1} D G^{-1} = \tilde{D}\, .
$$
Our claim would follow if we can prove $1\geq \det \tilde{D}$, that
is, if we can prove the original claim for $E$ and $D$ in the
special case where $E$ is the identity matrix. But in this case we
can diagonalize $E$ and $D$ at the same time. Therefore $D= {\rm
diag}\, (\mu_1, \ldots, \mu_n)$. But, since $E\geq D\geq 0$, we have
$0\leq \mu_i\leq 1$ for each $\mu_i$. Therefore
$$
\det E\;=\; 1 \;\geq\; \prod_i \mu_i \;=\; \det D\, .
$$
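As an aside, the implication \eqref{e:>=} $\Longrightarrow$ \eqref{e:det>=} is also easy to sanity-check numerically. The following minimal Python sketch (ours, not part of the argument) samples a random positive definite $A$ and a random positive semidefinite $P$, sets $B=(A-P)/t$ so that $A-tB=P\geq 0$, and verifies \eqref{e:det>=}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def random_psd(n):
    # random symmetric positive semidefinite matrix
    G = rng.standard_normal((n, n))
    return G @ G.T

n, t, delta = 4, 1.0, 0.3
for _ in range(10000):
    A = random_psd(n) + np.eye(n)   # positive definite
    P = random_psd(n)               # plays the role of A - t B
    B = (A - P) / t                 # hence A - t B = P >= 0
    lhs = np.linalg.det(A - (t - delta) * B)
    rhs = ((t - delta) / t) ** n * np.linalg.det(P)
    assert lhs >= rhs - 1e-8 * max(1.0, abs(rhs))
\end{verbatim}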
\subsection{Proof of Lemma \ref{ineqlem2}}
As in the proof above we will show the Lemma by approximation
with the functions $v_\eta$. Once again we introduce
the measures $\mu_\eta$ and $\mu$ of \eqref{e:measures}.
Then, the conclusion of the Lemma can be formulated as
\begin{equation}\label{e:restated2}
\mu\;\geq\; c_0\mathcal{L}^n - t c_1 \Delta u_t\, .
\end{equation}
Since $\Delta v_\eta \rightharpoonup^* \Delta u_t$
by Theorem \ref{convergence teo}(iii), it suffices to
show
\begin{equation}\label{e:restated20}
\mu_\eta\;\geq\; c_0\mathcal{L}^n - t c_1 \Delta v_\eta\, .
\end{equation}
Once again we can use the area formula to
compute
\begin{equation}\label{estimate 0}
\mu_\eta (E) \;=\; \int_{E} \det (D^2 H (Dv_\eta (x)))
\det\Big{(}[D^2H(D v_\eta (x))]^{-1}-tD^2 v_\eta(x)\Big{)} dx
\end{equation}
Since $D^2 H\geq c_H^{-1} Id_n$ and $[D^2 H]^{-1}\geq c_H^{-1}
Id_n$, we can estimate
\begin{equation}\label{estimate 1}
\det (D^2 H (Dv_\eta (x))) \det\Big{(}[D^2H(D v_\eta (x))]^{-1}-tD^2
v_\eta(x)\Big{)} \;\geq\; c_H^{-n} \det \left(\frac{1}{c_H} Id_n - t
D^2 v_\eta (x)\right)
\end{equation}
arguing as in Subsection \ref{ss:prima}. If we choose $\varepsilon$
so small that $0<\varepsilon <\frac{1}{2c_{H}C}$, then $M (x)
:= \frac{1}{2c_{H}}Id_n-tD^{2} v_\eta (x)$ is positive semidefinite.
Therefore
\begin{equation}\label{estimate 2}
\det (D^2 H (Dv_\eta (x))) \det\Big{(}[D^2H(D v_\eta (x))]^{-1}-tD^2
v_\eta(x)\Big{)} \;\geq\; c_H^{-n} \det \left(\frac{1}{2c_H} Id_n +
M (x)\right)\,.
\end{equation}
Diagonalizing $M (x) = {\rm diag}
(\lambda_1, \ldots, \lambda_n)$,
we can estimate
\begin{eqnarray}
\det \left(\frac{1}{2c_{H}}Id_{n}+ M (x)\right) &=&
\left(\frac{1}{2c_H}\right)^n \prod_{i=1}^{n} (1+ 2c_H \lambda_i)
\;\geq\; \left(\frac{1}{2c_H}\right)^n (1 + 2c_H {\rm Tr}\, M (x))
\nonumber\\
&=& c_2 - c_3 t \Delta v_\eta (x)\, .\label{e:estimate2}
\end{eqnarray}
Finally, by \eqref{estimate 0},
\eqref{estimate 1}, \eqref{estimate 2} and
\eqref{e:estimate2}, we get
$$
\mu_\eta (E) \;\geq\; \int_E (c_0 - c_1t \Delta v_\eta (x))\,
dx\,.
$$
This concludes the proof.
\section{Proofs of Lemma \ref{nonincreasing} and
Lemma \ref{cantorlem}}
\subsection{Proof of Lemma \ref{nonincreasing}}
The claim follows from the following consideration:
\begin{equation}\label{e:inclusion}
\chi_{t,0} (\Omega_t) \subset \chi_{s,0} (\Omega_s)
\qquad \mbox{for every $0\leq s\leq t\leq T$.}
\end{equation}
Indeed, consider $y\in \chi_{t,0} (\Omega_t)$. Then there exists
$x\in \Omega_t$ such that $y$ is the unique minimum of
\eqref{Hopf-Lax}. Consider $z:= \frac{s}{t} x + \frac{t-s}{t} y$.
Then $z\in \Omega_s$. Moreover, by Proposition \ref{properties visc
sol}(iv), $y$ is the unique minimizer of $u_0 (w) + s L ((z-w)/ s)$.
Therefore $y=\chi_{s,0} (z)\in \chi_{s,0} (\Omega_s)$.
\subsection{Proof of Lemma \ref{cantorlem}}
First of all, by Proposition \ref{p:cont2},
we can select a Borel set $E$ such that
\begin{itemize}
\item $\partial u_t (x)$ is single-valued for every $x\in E$;
\item $|E|=0$;
\item $|D^2_c u_t| (\Omega_t\setminus E)=0$.
\end{itemize}
If our statement were false, then there would exist
a compact set $K\subset E$ such that
\begin{equation}\label{e:contra4}
|D^2_c u_t| (K)\;>\; 0
\end{equation}
and $X_{t,0} (K) = \chi_{t,0} (K)
\subset \chi_{t+\delta, 0} (\Omega_{t+\delta})$.
Therefore it turns out that $X_{t,0} (K)
= \chi_{t+\delta,0} (\tilde{K}) = X_{t+\delta, 0} (\tilde{K})$ for some
Borel set $\tilde{K}$.
Now, consider $x\in \tilde{K}$
and let $y := \chi_{t+\delta, 0} (x)\in
X_{t+\delta, 0} (\tilde{K})$ and $z := \chi_{t+\delta, t} (x)$.
By
Proposition \ref{properties visc sol}(iv), $y$
is the unique minimizer of $u_0 (w) + t L ((z-w)/t)$,
i.e. $\chi_{t, 0} (z) = y$.
Since $y\in \chi_{t, 0} (K)$, there exists
$z'\in K$ such that $\chi_{t,0} (z')=y$. On the other hand,
by Proposition \ref{p:inj}, provided $\varepsilon$ has been
chosen sufficiently small,
$\chi_{t,0}$ is an injective map. Hence we necessarily have
$z'=z$. This shows that
\begin{equation}\label{e:inclusion2}
X_{t+\delta, t} (\tilde{K}) \subset K\, .
\end{equation}
By Lemma \ref{ineqlem1},
\begin{equation}\label{e:contra1}
|K| \;\geq\; |X_{t+\delta, t} (\tilde{K})|
\;\geq\; \frac{\delta^n}{(t+\delta)^n}
|X_{t+\delta, 0} (\tilde{K})|
\;=\; \frac{\delta^n}{(t+\delta)^n} |X_{t,0} (K)|\, .
\end{equation}
Hence, by Lemma \ref{ineqlem2},
\begin{equation}\label{e:contra2}
|K|\;\geq\; \frac{\delta^n}{(t+\delta)^n} \left( c_0 |K| - c_1 t
\int_K d\, \Delta u_t \right)\, .
\end{equation}
On the other hand, recall that $K\subset E$ and $|E|=0$, so that
$|K|=0$ and \eqref{e:contra2} gives
$\int_K d\,\Delta_s u_t = \int_K d\, \Delta u_t \geq 0$.
However, $\Delta_s u_t \leq 0$ (by the semiconcavity
of $u$). Thus we conclude that $\Delta_s u_t$, and hence
also $\Delta_c u_t$, vanishes identically on $K$.
However, arguing as in Subsection \ref{ss:main},
we can show $-\Delta_c u_t \geq |D^2_c u_t|$, and hence,
recalling \eqref{e:contra4}, $-\Delta_c u_t (K)>0$.
This is a contradiction
and hence concludes the proof.
\section{Introduction}
The profile of a light curve of an astronomical object encodes a wide range of information, e.g., the spatial distribution and dynamical motion of the gas and its radiative processes.
Light-curve analysis has therefore long been recognized as an important tool.
Specifically, in the case of an eclipsing binary system containing a compact star, it is useful for inferring the brightness distribution of the accretion disk around the compact star.
This method can also be applied to black hole candidates.
Generally, it is difficult to distinguish the light of the companion star from that of the accretion disk, but if the object shows an eclipse,
it may be possible to separate the two components.
Moreover, if black hole candidates show (partial) eclipses,
we can constrain the properties of the accreting gas,
as well as the black hole mass, spin, etc. (Fukue 1987; Watarai et al. 2005; Takahashi \& Watarai 2007).
Over the last decade,
a large number of luminous black hole candidates (BHCs)
have been found in nearby galaxies.
They are called ``ultraluminous X-ray sources (ULXs)'', but their origin is still unknown (Makishima et al. 2000; Roberts et al. 2005).
X-ray data analysis is the mainstream approach to studying ULXs,
but data in other wavebands will be useful for evaluating other binary parameters.
It is no wonder that eclipsing binaries are detected among the BHCs.
In fact, several eclipsing black hole binaries have been discovered (Orosz et al. 2007 in M33; Ghosh et al. 2006 in NGC4214).
To examine the observational properties of such luminous eclipsing binaries,
light-curve fitting will be a powerful tool.
The conventional study of a light curve in a binary system
applies a geometrically thin disk, a so-called standard accretion disk
(Shakura \& Sunyaev 1973).
This model may have applicability to the high/soft state in black hole candidates, but it may not extend to the low/hard state, very high state, or super-critical state, which is a more luminous state than the high soft state.
Luminous black hole candidates that exceed the Eddington luminosity seem to accrete a large amount of gas from a companion star, and the disk becomes geometrically thick. This type of accretion flow is called a ``slim disk'' (Abramowicz et al. 1988; Watarai et al. 2000).
The geometrical thickness of such a disk can play an important role in covering the companion star, depending on the viewing angle, and contributes to changing the shape of the light curve.
It is therefore necessary to include it properly in light-curve calculations for bright binary systems. Fukue et al. (1997, 1998) performed a light-curve analysis of SS433
using the geometrically thick torus model of Madau (1988),
but that disk model is known to be thermally unstable.
Hirai \& Fukue (2001) compared the optical light curve in SS433
with (thermally stable) supercritical accretion disks,
but they adopted the self-similar solutions even at the disk outer region.
We thus adopt a thermally stable, more realistic treatment of the disk model, and compare it with observations of supercritically accreting objects such as SS433 and ultraluminous X-ray sources.
In addition,
in the previous studies
the mass outflow from a supercritical disk was not included,
but a naked supercritical disk was considered.
Hence, in the present study
we consider the supercritical accretion disk
with an optically thick outflow from the center.
In the next section, we introduce the assumptions of our model.
In section 3, we briefly show the light curve calculation method.
Calculation results are presented in section 4.
The final section is devoted to concluding remarks.
\section{Model for Accretion and Outflow under Supercritical Accretion Flows}
In this section, we calculate the light curve of a binary system in which the companion star fills its Roche lobe and supercritical accretion occurs onto the compact star.
Ideas of supercritical accretion were proposed by many authors in the early 1970s (Shakura \& Sunyaev 1973; Abramowicz et al. 1978;
Jaroczynski et al. 1980; Paczy\'nski \& Wiita 1980).
Whether accretion exceeding the Eddington luminosity is possible has been debated for many years.
The solution for supercritical accretion had already been obtained in one dimension (Abramowicz et al. 1988).
Thanks to the development of computers, supercritical accretion is now known to be reproduced by numerical simulations (Okuda 2002; Ohsuga et al. 2005; Ohsuga \& Mineshige 2008).
It is thus no longer possible to dismiss supercritical accretion.
In the next subsections, we briefly introduce our model and assumptions.
\subsection{Model for Supercritical Accretion Flows}
Supercritical accretion is characterized by the photon trapping radius,
the radius inside which advective energy transport becomes important (Begelman \& Meier 1982; Ohsuga et al. 2002).
This radius coincides with the radius at which the gravity of the disk balances the radiation pressure, and it has been suggested that an outflow may originate inside it (Shakura \& Sunyaev 1973; Lipunova 1999; Fukue 2004; Heinzeller \& Duschl 2007).
However, the outflow emerges from a very small area compared with the size of the whole disk, so it has little influence on the computed optical flux.
Recently Takeuchi et al. (2009) analyzed the 2D RHD simulation results of Ohsuga et al. (2005) and rebuilt a one-dimensional model, but the effective temperature distribution of the one-dimensional model hardly changes.
Therefore, we can safely use a one-dimensional model without any outflow effect.
\vspace{0.5cm}
\subsubsection{Radiation-pressure dominated regime: $\kappa_{\rm es} \gg \kappa_{\rm ff}$}
For the radiation-pressure dominated regime, Watarai (2006) constructed analytical formulae that can be applied over a wide range of accretion rates, and it has been shown that these solutions are a good approximation to the numerical solutions.
According to Watarai (2006), the scale height of the disk is given by
\begin{equation}
H_{\rm a} = 3.0 f(\hat{r},\dot{m})^{1/2} \hat{r}.
\label{eq:ha}
\end{equation}
This solution is characterized by the ratio of the advective cooling rate to the viscous heating rate, i.e., $f=Q_{\rm adv}^-/Q_{\rm vis}^+$, which can be represented by an analytical form dependent on the radius and the mass accretion rate, $f(\hat{r},\dot{m})$.
The radius $\hat{r}$ is normalized by the Schwarzschild radius,
and $\dot{m}$ represents the mass accretion rate in Eddington units
($\dot{M}_{\rm Edd}=L_{\rm E}/c^2$). The explicit form of $f(\hat{r},\dot{m})$ is given by
\begin{equation}
f(\hat{r},\dot{m}) = \frac{1}{2} \left[ D^2 (\hat{r}/\dot{m})^2 +2 -D (\hat{r}/\dot{m}) \sqrt{D^2 (\hat{r}/\dot{m})^2+4} \right],
\end{equation}
where $D$ is a constant of order unity
(e.g., $D \approx 2.18$ for a polytropic index $N=3$).
The function $f(\hat{r},\dot{m})$ is close to unity for an advection dominated regime, and it is close to zero for a radiative cooling dominated regime (see Watarai 2006 for more details).
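As a quick consistency check (ours, not taken from Watarai 2006), write $x \equiv \hat{r}/\dot{m}$; the two limits just quoted follow directly from the explicit form:
\begin{equation}
f \;=\; \frac{1}{2}\left[ D^2 x^2 + 2 - D x \sqrt{D^2 x^2 + 4} \right]
\;\longrightarrow\;
\left\{
\begin{array}{ll}
1 & (x \to 0), \\
1/(D^2 x^2) \to 0 & (x \to \infty),
\end{array}
\right.
\end{equation}
where the large-$x$ limit uses $\sqrt{D^2 x^2+4} \simeq D x \left(1+2/(D^2 x^2)\right)$.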
The effective temperature distribution is given by
\begin{equation}
T_{\rm eff} \approx 4.48 \times 10^7 f(\hat{r},\dot{m})^{1/8} m^{-1/4} \hat{r}^{-1/2} {\rm K}.
\end{equation}
The boundary radius between the radiation-pressure dominated regime and the gas-pressure dominated regime is located at
\begin{eqnarray}
\hat{r}_{\rm rad-gas} &=& 18 (\alpha m)^{2/3} \dot{m}^{16/21} \\
&\approx& 601 (\alpha/0.1)^{2/3} (m/10)^{2/3} (\dot{m}/100)^{16/21}.
\end{eqnarray}
The same expressions were given by Shakura \& Sunyaev (1973).
We note that the analytic solutions shown here are valid in the radiation-pressure dominated regime.
\vspace{0.5cm}
\subsubsection{Gas-pressure dominated regime}
In the gas-pressure dominated regime, if electron scattering dominates the
opacity, the scale height of the disk and the effective temperature distribution are
\begin{equation}
H_{\rm b} = 2.7 \times 10^3 \alpha^{-1/10} m^{9/10} \dot{m}^{1/5}
\hat{r}^{21/20},
\label{eq:hb}
\end{equation}
\begin{equation}
T_{\rm eff} \approx 3.50 \times 10^7 m^{-1/4} \dot{m}^{1/4} \hat{r}^{-3/4} {\rm K},
\label{eq:teff}
\end{equation}
in agreement with the formulae of Shakura \& Sunyaev (1973).
The transition radius where $\kappa_{\rm es} \sim \kappa_{\rm ff}$ is
\begin{equation}
\hat{r}_{\rm gas, out} = 2.5 \times 10^3 \dot{m}^{2/3},
\end{equation}
and thus it is a simple function of the mass accretion rate.
The outer region of the disk is dominated by free-free opacity, and the scale height is given by
\begin{equation}
H_{\rm c} = 1.5 \times 10^3 \alpha^{-1/10} m^{9/10} \dot{m}^{3/20}
\hat{r}^{9/8},
\label{eq:hc}
\end{equation}
with the same radial dependence of the effective temperature as in equation (\ref{eq:teff}).
We ignore irradiation by the disk itself or by the photosphere of the outflow, because the irradiation-dominated regime appears in the outer region of the disk, and the associated temperature and geometrical effects contribute little (less than 10 \%) to the optical light curves.
To avoid overcomplicating the model, we adopt this simpler treatment.
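To summarize this section, the disk model entering the light-curve calculation is a piecewise effective temperature profile. The following minimal Python sketch (ours; the function names are hypothetical, and the gas-pressure formula is reused in the free-free dominated outer region, which shares the radial dependence of equation (\ref{eq:teff})) illustrates how such a profile can be assembled:
\begin{verbatim}
import numpy as np

def f_adv(r, mdot, D=2.18):
    # advection parameter f(r, mdot) of the slim-disk solution (eq. 2)
    x = r / mdot
    return 0.5 * (D**2 * x**2 + 2.0 - D * x * np.sqrt(D**2 * x**2 + 4.0))

def t_eff(r, m=10.0, mdot=1.0e3, alpha=0.1):
    # piecewise effective temperature [K]; r in Schwarzschild units
    r_rad_gas = 18.0 * (alpha * m)**(2.0 / 3.0) * mdot**(16.0 / 21.0)
    if r < r_rad_gas:
        # radiation-pressure dominated regime (slim disk)
        return 4.48e7 * f_adv(r, mdot)**0.125 * m**(-0.25) * r**(-0.5)
    # gas-pressure (and free-free) dominated regimes (standard disk)
    return 3.50e7 * m**(-0.25) * mdot**0.25 * r**(-0.75)
\end{verbatim}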
\subsection{Model for Massive Wind}
Shakura and Sunyaev (1973) proposed a massive (supercritical) outflow, which is driven by the strong radiation from the disk.
The photosphere formed by the outflow can become large and influence the shape of the light curve.
Thus, we should include the effect of the outflow in our model.
We do not include the collimated, relativistic jet component in this paper.
Here we introduce the simple wind model of Abramowicz et al. (1991), which assumes spherical symmetry and a uniform outflow velocity.
The density of the wind, $\rho_{\rm w}$, is
\begin{equation}
\rho_{\rm w} = \left(\frac{\dot{M}_{\rm out}}{4 \pi v \gamma} \right) R^{-2}
\end{equation}
where $\dot{M}_{\rm out}$ is the mass outflow rate, $v$ is the velocity of the gas, $\gamma=(1-\beta^2)^{-1/2}$ is the Lorentz factor with $\beta=v/c$,
and $R$ is the distance from the black hole.
The mass outflow rate should be determined by the physics of the interaction between the disk and outflow, i.e., $\dot{M}_{\rm out} = \eta \dot{M}_{\rm acc}$, where $\eta$ is the fraction of the accreting gas converted into outflow.
In our present study, we assume $\eta=1.0$ for simplicity.
That is, our model simply assumes that all accreting matter changes to the outflow at the disk inner edge ($3r_{\rm g}$).
The location of the boundary layer between the outflow and the disk may have a strong impact on the emerging X-ray spectrum, but it is not expected to have an enormous influence on the optical band, since the optical emitting region of the photosphere is far away from the disk inner edge.
\section{Binary Light Curve Calculation}
To calculate the $V$-band flux in a binary system,
we adopt the ``Ray-Tracing Method''.
We suppose that the companion star fills its Roche lobe and transfers mass to the compact star.
The shape of the companion star follows the equipotential surfaces of the Roche potential.
Photons propagate from an emitting point on the surface of the disk, or on the photosphere of the wind, to the distant observer.
Following Fermat's principle, however, the light rays are traced
backward from the observer's image-plane coordinates $(x_{\rm s},y_{\rm s},z_{\rm s})$.
After the ray arrives at the surface of the disk/wind,
we evaluate the geometrical thickness and effective temperature
of the disk/wind model presented in the previous section.
The observed flux is integrated over each optical band ($V$, $R$, $I$),
assuming the blackbody radiation at the surface of the disk, star, and photosphere of the outflow.
The spatial resolution of the calculation is about
$0.1 \%$--$1 \%$ of the binary separation,
which corresponds to $10^3 r_{\rm g}$--$10^4 r_{\rm g}$.
This resolution is sufficient to resolve the optical flux from the binary system;
the mesh size is in any case much smaller than 10 \% of the photospheric radius.
\subsection{Location of the Photosphere}
The location of the wind photosphere is estimated from the point where the optical depth measured from a distant observer equals unity, that is,
\begin{equation}
\tau_{\rm ph} = 1 = \int_{R_{\rm ph}}^{\infty} \gamma (1-\beta \cos{\theta}) \kappa \rho_{\rm w} ds,
\label{eq:tauph1}
\end{equation}
where $R_{\rm ph}$ is the location of the last scattering surface
of the photons in the disk coordinates, $\gamma$ is the Lorentz factor, $\kappa$ is the electron-scattering opacity,
$\beta$ is the velocity of the wind normalized by the speed of light $c$,
and $\theta$ is the inclination angle.
Abramowicz et al. (1991) obtained an analytic solution of a moving plasma,
and the shape of the photosphere in their model does not have spherical symmetry. We use their formulae in this paper.
In some cases, the analytic formula derived by King (2003)
is useful for estimating the size of the photosphere. It is given by
\begin{equation}
R_{\rm ph} \sim \frac{\kappa \dot{M}_{\rm out}}{4 \pi b v},
\end{equation}
where $R_{\rm ph}$ is the radius of the photosphere, $\kappa$ is the opacity for electron scattering, and $b$ is the eccentricity of the photosphere. The value of $b$ ranges from $0.5$ to $1$, and it is fixed at unity throughout this paper.
In this paper, we assume that the mass outflow rate, $\dot{M}_{\rm out}$, is equal to the mass accretion rate at the central region.
The mass outflow rate in our model thus gives the maximum possible rate.
The typical size of the photosphere is given by
\begin{equation}
\frac{R_{\rm ph}}{r_{\rm g}} = \frac{\dot{m}_{\rm out}}{2 \beta}
= 10^5 \left(\frac{\dot{m}_{\rm out}}{2000}\right) \left(\frac{\beta}{0.01}\right)^{-1}.
\label{eq:rph2}
\end{equation}
As we show later, if the size of the photosphere is much smaller
than the disk size, the photosphere does not influence the shape of the eclipsing light curves.
\subsection{Temperature and Luminosity at the Photosphere}
We evaluate the maximum temperature of the photosphere by the following procedure.
First, we estimate the size of the photosphere of the outflow.
We then assume that the photons emitted from the disk surface inside the photosphere are conserved until they escape through the surface of the photosphere,
i.e., that all photons are generated in the disk and none are generated in the wind.
The temperature of the wind photosphere $T_{\rm w}$ is then given by
\begin{equation}
\sigma T_{\rm w}^4 \approx \frac{L_{\rm disk}}{4 \pi (bR_{\rm ph})^2}.
\label{eq:tw}
\end{equation}
Here the luminosity of the disk, $L_{\rm disk}$, is given by
\begin{equation}
L_{\rm disk} = \int_{R_{\rm in}}^{R_{\rm ph}} 2 \sigma T_{\rm eff}^4 \cdot 2 \pi r dr.
\end{equation}
To determine the location of the disk surface or the photosphere of the outflow, we integrated the optical depth from an observer at infinity to the surface of $\tau_{\rm ph}=1$ along the line of sight.
After the light rays arrived at the surface, we measured the temperature using equation (\ref{eq:tw}) and the geometrical thickness of the disk from the underlying disk/wind models.
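As an illustration of this procedure, the following rough Python sketch (ours; it keeps only the radiation-pressure-regime temperature of section 2.1.1 everywhere, so it should be read as an order-of-magnitude estimate rather than the actual implementation) combines equations (\ref{eq:rph2}) and (\ref{eq:tw}) with the disk luminosity integral:
\begin{verbatim}
import numpy as np

SIGMA = 5.670e-5   # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]

def t_wind(m=4.0, mdot=5.0e3, beta=0.01, b=1.0, D=2.18):
    # rough estimate of the wind-photosphere temperature T_w (eq. 15)
    r_g = 2.95e5 * m                # Schwarzschild radius [cm]
    r_ph = mdot / (2.0 * beta)      # photosphere radius [r_g] (eq. 13)
    r = np.logspace(np.log10(3.0), np.log10(r_ph), 4000)  # from 3 r_g
    x = r / mdot
    f = 0.5 * (D**2 * x**2 + 2.0 - D * x * np.sqrt(D**2 * x**2 + 4.0))
    teff = 4.48e7 * f**0.125 * m**(-0.25) * r**(-0.5)     # [K]
    rcm = r * r_g
    flux = 4.0 * np.pi * SIGMA * teff**4 * rcm   # both disk faces
    l_disk = np.sum(0.5 * (flux[1:] + flux[:-1]) * np.diff(rcm))
    return (l_disk / (4.0 * np.pi * SIGMA * (b * r_ph * r_g)**2))**0.25
\end{verbatim}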
\section{Optical Flux Images}
\subsection{Case without Outflow }
Let us first examine the results when only a naked supercritical disk (without outflow) is considered.
In figure 1, we show the $V$-band ($5.11 \times 10^{14}$ Hz -- $6.61 \times 10^{14}$ Hz) flux images at different binary phases.
The mass accretion rate is $\dot{m}=10^3$, the inclination angle is $i=70^\circ$,
the mass ratio is $q=M_{\rm X}/M_{\rm C}=1$, the temperature of the companion star is $T_{\rm C}=15000$ K, and the velocity of the wind is $\beta=0$.
As can be seen in figure 1, the geometrical thickness of the disk is small compared with the size of the disk.
It follows that the thickness of the disk does not affect the light curve for $\dot{m}=10^3$.
The scale height of the disk depends only weakly on $\dot{m}$, as can be seen from equations (\ref{eq:ha}), (\ref{eq:hb}), and (\ref{eq:hc}).
Figure \ref{fig:nowind-i-lc} shows the time variation of the $V$-band flux for various inclination angles. The mass accretion rate is set to $\dot{m}=10^3$.
When the inclination angle increases,
the flux drop at phase 0.5 becomes deeper.
This is because the disk hides the emission from the secondary star.
Optical emission from the secondary star is much larger than that from the disk for $\dot{m}=10^3$.
As the mass accretion rate increases,
the disk becomes as luminous as the companion star (see figure \ref{fig:nowind-md-lc}).
Some previous studies applied a thick disk model to the optical light-curve
analysis of SS433 (Sanbuichi \& Fukue 1993; Fukue et al. 1997, 1998;
Hirai \& Fukue 2001).
However, their model is the so-called ``thick torus model''
(Abramowicz et al. 1978; Madau et al. 1988), which is thermally/secularly unstable (Kato et al. 1998).
Apart from the structure of the innermost region of the disk, our analytic disk model (i.e., the slim disk model) is applicable to the supercritically accreting regime (Watarai 2006), and it produces more plausible temperatures and scale heights.
Hence, our model for a bright object is more realistic than those of past studies.
\begin{figure}
\begin{center}
\FigureFile(65mm,65mm){figure/p0q1md1e3i70nowind.eps}
\FigureFile(65mm,65mm){figure/p90q1md1e3i70nowind.eps}
\FigureFile(65mm,65mm){figure/p180q1md1e3i70nowind.eps}
\FigureFile(65mm,65mm){figure/p270q1md1e3i70nowind.eps}
\end{center}
\caption{$V$-band flux images in no-outflow models with various phases (0, 0.25, 0.5, and 0.75).
Horizontal and vertical axis are normalized by the binary distance $a$.
The mass accretion rate is set to be $1000 \dot{M}_{\rm crit}$,
and mass ratio $q=M_{\rm x}/M_{\rm c}$ is $q=1.0$.
The inclination angle is $70^\circ$,
and the temperature of the companion star $T_{\rm c}$ is 15000 K. }
\label{fig1}
\end{figure}
\begin{figure}
\begin{center}
\FigureFile(80mm,80mm){figure/q1-md1e3-i.eps}
\end{center}
\caption{Theoretical V-band light curves expected from our disk model (no wind) with different inclination angles.
The inclination angles are $i=40^\circ,50^\circ,~60^\circ,~70^\circ$, and $80^\circ$ from bottom to top.
Other parameters are $q=M_{\rm X}/M_{\rm C}=1.0$, $\dot{m}=10^3$, and $T_{\rm c}=15000 {\rm K}$, respectively. }
\label{fig:nowind-i-lc}
\end{figure}
\begin{figure}
\begin{center}
\FigureFile(80mm,80mm){figure/q1-i80-mdot.eps}
\end{center}
\caption{Same as figure \ref{fig:nowind-i-lc}, but as a function of mass accretion rates (no wind).
The mass accretion rates are $\dot{m}=10^2,10^{2.5},10^3,10^{3.5}$, and $10^4$ from bottom to top.
Other parameters are $q=1.0$, $i=80^\circ$, and $T_{\rm c}=15000 {\rm K}$, respectively. }
\label{fig:nowind-md-lc}
\end{figure}
\subsection{Case with Massive Wind}
\begin{figure}
\begin{center}
\FigureFile(80mm,80mm){figure/p90q1md1e3i70beta3e-3.eps}
\FigureFile(80mm,80mm){figure/p90q1md1e3i70beta5e-3.eps}
\end{center}
\caption{ V-band images including the effect of the wind at phase 0.5 for various wind velocities, $\beta$ = 0.003 (top) and 0.005 (bottom).
Other parameters are $q=1.0$, $\dot{m}=10^3$, $i=80^\circ$, and $T_{\rm c}=15000 {\rm K}$, respectively.
}
\label{fig:md1e3beta}
\end{figure}
\begin{figure}
\begin{center}
\FigureFile(80mm,80mm){figure/q1-i80-beta1e-2-mdot.eps}
\end{center}
\caption{ V-band light curves with various mass accretion rates. The mass accretion rates are $\dot{m}$=100, 1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000, 9000, and 10000 from bottom to top.
Other parameters are $q=1.0$, $\beta$=0.01, and $i=80^\circ$. }
\label{fig:beta001}
\end{figure}
\begin{figure}
\begin{center}
\FigureFile(80mm,80mm){figure/q1-i80-md1e3-beta.eps}
\end{center}
\caption{Same as figure \ref{fig:beta001}, but showing the dependence on the wind velocity $\beta$.
The wind velocities are 0.003, 0.005, 0.007, and 0.01 from top to bottom.}
\label{fig:beta2}
\end{figure}
Figure \ref{fig:md1e3beta} shows flux images of the model with a massive wind.
The spherical structure in the central region of the disk is the photosphere formed by the massive wind.
The brightness of the photosphere gradually decreases from the central region to the outer region. This is the so-called limb-darkening effect.
Figure \ref{fig:beta001} shows $V$-band light curves for a high inclination angle, $i=80^\circ$.
The most remarkable feature is that an inversion of the primary and secondary minima occurs as the mass accretion rate increases.
The $V$-band magnitude at phase 0 or 1 is larger than the magnitude at phase 0.5 when the mass accretion rate is relatively small ($\dot{m}=100$ and 1000).
However, an inversion of the flux occurs when $\dot{m}$ is large.
This is because the optical $V$-band flux from an accretion disk increases as the mass accretion rate increases.
This feature may be applicable to the observational data of SS433.
In figure \ref{fig:beta2}, we vary the wind velocity.
The size of the wind photosphere depends on the velocity of the wind
$\beta$.
That is, the last scattering surface is inversely proportional to the velocity (see equation (\ref{eq:rph2})). A low-$\beta$ outflow therefore produces a large photosphere, and the geometrical thickness of the wind then causes a deep secondary minimum during the eclipse.
These features are the main results of the present study.
\section{Discussion}
\subsection{Eclipsing Light Curves in SS433}
One difficulty with the optical light curves of SS433 is the interpretation of the secondary minimum at phase 0.5, which is produced by the eclipse of the companion star by the disk.
As we explained in the previous section, it is difficult to fit the optical light curves observed in SS433 with the disk model alone.
Theory thus also supports the massive outflow scenario inferred from observations.
The mass accretion rate in SS433 is appreciably supercritical, $\dot{M} \sim 10^{-4} M_\odot$/yr (van den Heuvel 1980; Shklovskii 1981; Perez \& Blundell 2009). This value corresponds to $\dot{m} \sim 4.5 \times 10^3$ for a $10 M_\odot$ black hole, and it seems to be an upper limit of the mass accretion rate.
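The conversion behind this number is easily checked (our own back-of-the-envelope estimate, using $\dot{M}_{\rm Edd}=L_{\rm E}/c^2$ as defined in section 2.1.1):
\begin{verbatim}
M_SUN = 1.989e33           # g
YEAR = 3.156e7             # s
C = 2.998e10               # cm/s
L_EDD = 1.26e38 * 10.0     # erg/s for a 10 solar-mass black hole
MDOT_EDD = L_EDD / C**2    # Eddington accretion rate [g/s]
mdot = 1.0e-4 * M_SUN / YEAR / MDOT_EDD
print(mdot)                # ~ 4.5e3
\end{verbatim}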
Since its discovery, the binary parameters of SS433 have not been firmly established.
In particular, no consensus has been reached as to whether the compact object in SS433 is a neutron star or a black hole.
Gies et al. (2002) estimated the mass of the companion star from the absorption lines of the A7Ib star to be $19 \pm 7 M_\odot$.
Recently Kubota et al. (2009) reported a new constraint on the mass of the compact object from absorption lines taken from SUBARU and Gemini observations. They derived the masses of the compact object and the companion star to be $M_{\rm X} = 4.1_{-0.7}^{+0.8} M_\odot$ and $M_{\rm C} = 12.2_{-2.1}^{+0.8} M_\odot$.
We apply the mass ratio $q = M_{\rm X}/M_{\rm C} = 0.38$ derived by Kubota et al. (2009), and the black hole mass $M_{\rm BH}$ is set to $4.0 M_\odot$.
This mass ratio is not in conflict with other observational results (Gies et al. 2002; Hillwig et al. 2004).
Figure \ref{fig:imgss433} shows $V$-band images at various binary phases.
We fix the mass ratio $q = 0.38$, the inclination angle $i=78^\circ$, and the binary period $P=13.1$ days based on many previous observations.
The disk size (radius) is $1.35 \times 10^6 r_{\rm g} \sim 1.6 \times 10^7$ km, which is smaller than in the case of $q=1.0$.
The size of the photosphere is comparable to the disk size in the case of $\dot{m}$=5000.
As for the wind velocity, the quasi-spherical non-relativistic wind from the accretion disk has a velocity of 3000 ${\rm km/s} \approx 0.01 c$ (Cherepashchuk 2002).
Recently, Perez et al. (2009) found a very fast accretion disk wind by near-IR spectroscopy, whose terminal velocity is about 1500 ${\rm km~s^{-1}}$, equivalent to 0.5 \% of the speed of light.
In the fit, we fixed the wind velocity to $\beta=0.01$ for simplicity, allowing the other two parameters to vary, namely the mass accretion rate and the temperature of the donor star.
\begin{figure}
\begin{center}
\FigureFile(60mm,60mm){figure/ss433p0q1md5e3beta1e-2.eps}
\FigureFile(60mm,60mm){figure/ss433p90q1md5e3beta1e-2.eps}
\FigureFile(60mm,60mm){figure/ss433p180q1md5e3beta1e-2.eps}
\FigureFile(60mm,60mm){figure/ss433p270q1md5e3beta1e-2.eps}
\end{center}
\caption{$V$-band flux image in SS433 at various phases (0, 0.25, 0.5, 0.75).
The mass accretion/outflow rate is $\dot{m}_{\rm acc}=\dot{m}_{\rm out}$=5000.
The black hole mass is $M_{\rm BH}=4 M_{\odot}$, the mass of the companion star is $M_{\rm C}=12 M_{\odot}$, i.e., mass ratio $q=M_{\rm BH}/M_{\rm C}=0.38$.
The effective temperature of the companion star is $T_{\rm C}=$15000 K,
and the inclination angle is $i=78^{\circ}$; these values are taken from observational results. The velocity of the outflow is fixed at $\beta=0.01$.
}
\label{fig:imgss433}
\end{figure}
\begin{figure}
\begin{center}
\FigureFile(80mm,80mm){figure/ss433deltam.eps}
\end{center}
\caption{ Difference in $V$-magnitude between phase 0.5 and phase 0 as a function of the mass accretion rate. The other parameters are fixed at $M_{\rm BH}=4 M_{\odot}$, $M_{\rm C}=12 M_{\odot}$, $\beta=0.01$, and $i=78^{\circ}$. }
\label{fig:ss433vmag}
\end{figure}
\begin{figure}
\begin{center}
\FigureFile(80mm,80mm){figure/ss433fit3.eps}
\end{center}
\caption{$V$-band light curve of SS433 fitted by our model with several parameter sets. Kemp et al. (1986) presented the averaged optical light curve data of SS433. We normalized the model so that it agrees with the observed flux at phase 0.
The other fixed parameters are the same as in figure \ref{fig:ss433vmag}. }
\label{fig:ss433fit3}
\end{figure}
Before directly comparing our model with the observational data of SS433, we infer the possible parameter sets from figure \ref{fig:ss433vmag}.
Figure \ref{fig:ss433vmag} shows the difference in $V$-magnitude ($\Delta M_{\rm V}$) between phase 0.5 and phase 0 as a function of the accretion rate.
The definition of the $\Delta M_{\rm V}$ is given by
\begin{equation}
\Delta M_{\rm V} = M_{\rm V} ({\rm @phase~0.5}) - M_{\rm V} ({\rm @phase~0}).
\end{equation}
In the case of SS433, $\Delta M_{\rm V}$ is about 0.4, which shows that the magnitude at phase 0.5 is larger than that at phase 0.
Actually, there are several parameter sets that can fit the observation,
and thus we could not uniquely determine the best-fit parameters.
We tried to fit our model to the $V$-band average light curve of Kemp et al. (1986). It is clear in figure \ref{fig:ss433fit3} that our model can fit the whole shape of the light curve, but it does not distinguish between the three cases, i.e., (i) $\dot{m}$=5000, $T_{\rm C}$=10000K, (ii) $\dot{m}$=7000, $T_{\rm C}$=12000K, and (iii) $\dot{m}$=10000, $T_{\rm C}$=14000K.
The mass-donor (companion) star is supposed to be an evolved A-type star from the absorption-line analysis (Gies et al. 2002; Hillwig et al. 2004).
On the other hand, some recent observations indicate that the companion star has a lower temperature, less than 15000 K (Kudritzki et al. 2003; Cherepashchuk et al. 2005).
It is an important task for observers to confirm the temperature of the companion star in SS433.
SS433 also shows a jet precession period,
and the light curve changes with the precession phase.
For a full test of the model, it is necessary to compare the light curve
at each precession phase with the corresponding model.
In addition, the X-ray emission of SS433 comes not only from the photosphere of the wind but also from the non-thermal emission of the jet component (Rose 1995; Krivosheyev et al. 2009; see also Abolmasov et al. 2009).
X-ray emission from the outflow depends on the acceleration mechanism,
the temperature distribution, and the initial velocity of the outflow
launched from the disk surface.
If we consider the X-ray emission more seriously, a non-thermal X-ray emitting component (probably the jet component) should be included in our model (e.g., Reynoso et al. 2008; Cherepashchuk et al. 2009).
These issues will be our future tasks.
\subsection{Comments on Eclipsing ULXs}
Considering the probability of eclipse events in binary systems,
it is natural that a few eclipsing binaries exist (Pooley \& Rappaport 2005).
Recently, X-ray eclipsing light curves have been detected in several ULXs.
As the amount of ULX data increases,
the number of known eclipsing ULXs also increases.
M33 X-7 is one example of an eclipsing black hole X-ray binary discovered in recent years, and its X-ray eclipse has been clearly detected (Pietsch et al. 2004).
This object is suspected to be a high-mass X-ray binary, but its luminosity is very large, unlike that of the famous Cygnus X-1.
There are several scenarios to explain the high luminosity.
Moreno M\'endez et al. (2008) pointed out a contradiction in the black hole spin scenario and proposed a hypercritical accretion scenario for M33 X-7 based on binary evolution theory.
An eclipsing luminous X-ray binary in the dwarf starburst galaxy NGC4214 has been reported by Ghosh et al. (2006).
They clearly detected the X-ray eclipsing feature from this object.
If both optical and X-ray eclipses are observed in an object,
we may infer the spatial structure of the accretion disk from the difference between the emitting regions.
We look forward to further detections of eclipses in black hole binaries, not only in our Galaxy but also in external galaxies.
\section{Conclusion}
We have calculated the light curves of eclipsing
binaries by handling more realistic accretion disk models with an optically thick outflow.
We also applied the present model to the supercritical accreting
object SS~433.
Our calculation is somewhat simple, but the geometrical thickness of the accretion disk had not been considered seriously so far.
In addition, we clearly show, by numerical calculation, how the wind changes the shape of the light curve.
As for the wind model, because it involves unknown physics (accretion, geometrical thickness, mass loss rate, etc.), we need to evaluate the temperature
at the photosphere more carefully.
This will be an important issue for observed PG quasars
(Young et al. 2007).
Modeling the detailed physics of the outflow is left for future work.
Measurements of the wind velocity and
the mass loss rate via observations of absorption lines will provide crucial evidence to confirm our scenario.
Fitting multi-wavelength observational data will be the next step in confirming our model; it is also one of our future tasks.
\vspace{10mm}
We would like to thank S. Mineshige for stimulating discussions.
We also would like to thank K. Kubota for her helpful comments from the observational viewpoint.
The author also would like to thank T.Suzumori for checking the manuscript.
This calculation was supported by Kongo system of the Osaka Kyoiku University Information Processing Center.
This work was supported by the Grant-in-Aid for the Global COE Program ``The Next Generation of Physics, Spun from Universality and Emergence'' from the Ministry of Education, Culture, Sports, Science and Technology (MEXT) of Japan.
\section{Introduction}
\label{intro}
Lipid molecules constituting the
membranes of biological cells play a major role in the regulation
of various cellular processes.
About a decade ago Simons and Ikonen proposed a hypothesis
which suggests that the lipids organize themselves into
sub-micron sized domains termed ``rafts"~\cite{simons-97}.
The rafts serve as platforms for proteins, which in turn attributes
a certain functionality to each domain.
Although there have been extensive studies in this area, the details of the
underlying physical mechanisms leading to formation of rafts, their stability,
and the regulation of their finite domain size remain elusive.
Numerous experiments on intact cells and artificial membranes containing
mixtures of saturated lipids, unsaturated lipids and cholesterol,
have demonstrated the segregation of the lipids into liquid-ordered
and liquid-disordered phases~\cite{veatch-05}.
Recent experimental observations of critical fluctuations
point towards the idea that the cell maintains its membrane slightly
above the critical point~\cite{hsmith-08,veatch-08}.
Below the transition temperature, there have been studies on the
dynamics in multicomponent membranes such as diffusion of
domains~\cite{cicuta-07} and domain coarsening~\cite{yanagisawa-07}.
A clear understanding of phase separation may contribute
towards a better explanation of the dynamics of lipid organization
in cell membranes.
Apart from biological membranes, it is of relevance to understand
the dynamics of Langmuir monolayer systems which are also thin fluid
films.
Phase separation of binary fluids following a quench has been under
study for over forty years~\cite{bray-02}.
The dynamic scaling hypothesis assumes that there exists a scaling
regime characterized by the average domain size $R$ that grows with
time $t$ as $R \sim t^\alpha$ with a universal exponent $\alpha$.
For three-dimensional (3D) off-critical binary fluids, there is an
initial growth by the Brownian coagulation process~\cite{binder-74},
followed by the Lifshitz-Slyozov (LS) evaporation-condensation
process~\cite{lifshitz-81}; both mechanisms show a growth exponent
$\alpha=1/3$.
This is followed by a late time inertial regime of
$\alpha=2/3$~\cite{furukawa-94}.
For critical mixtures, there is an intermediate $\alpha=1$ regime
owing to interface diffusion~\cite{siggia-79}.
The scenario is slightly different for pure two-dimensional (2D)
systems~\cite{miguel-85}.
For an off-critical mixture, it was predicted that after the initial
formation of domains, they grow by the Brownian coagulation mechanism
with a different exponent $\alpha=1/2$ (as will be explained later),
followed by a crossover to the LS mechanism which gives $\alpha=1/3$
even in 2D.
For critical mixtures, on the other hand, the initial quench produces
an interconnected structure which coarsens and then breaks up due
to the interface diffusion with an exponent $\alpha=1/2$.
After the breakup processes, coarsening takes place through Brownian
coagulation that is again characterized by the $\alpha=1/2$
scaling~\cite{binder-74}.
These predictions were confirmed by molecular dynamics simulations
in 2D~\cite{ossadnik-94}.
The exponent $\alpha=1/2$ was also observed in 2D lattice-Boltzmann
simulations in the presence of thermal noise for a critical
mixture~\cite{yeomans-99}.
Although biomembranes composed of lipid bilayers can be regarded as
2D viscous fluids, they are not isolated pure 2D systems since
lipids are coupled to the adjacent fluid.
Hence it is of great interest to investigate the phase separation
dynamics in such a quasi-2D liquid film in the presence of hydrodynamic
interaction.
(We use the word ``quasi-2D'' whenever the film is coupled to
the bulk fluid.)
To address this problem, we consider a 2D binary viscous fluid
in contact with a bulk fluid.
Our approximation of the membrane as a planar 2D liquid film is
valid in the limit of large bending rigidity (common in biological
membranes) or in the presence of a lateral tension, which both
act to suppress membrane undulations.
We employ a simple model in which the film is confined
to a plane with the bulk fluid particles added above and below.
In our model using dissipative particle dynamics (DPD) simulation
technique, the exchange of momentum between the film
and the bulk fluid is naturally taken into account.
We particularly focus on the effect of bulk fluid on the quasi-2D
phase separation.
We show that the presence of a bulk fluid will alter the domain growth
exponent from that of 2D to 3D indicating the significant role
played by the film-solvent coupling.
In order to elucidate the underlying physical mechanism of this effect,
we have looked into the diffusion properties in the film
by measuring two-particle correlated diffusion.
Our result suggests that quasi-2D phase separation proceeds by the
Brownian coagulation mechanism which reflects the 3D nature of the bulk
fluid.
Such a behavior is universal as long as the domain size exceeds the
Saffman-Delbr\"uck length~\cite{saffman-76}.
\section{Model and simulation technique}
\label{model}
For the purpose of our study, we use a structureless
model of the 2D liquid film within
the DPD framework~\cite{degroot-warren-97,espanol-warren-95}.
As shown in fig.~\ref{image}, the 2D film is represented by
a single layer of particles confined to a plane.
In order to study phase separation, we introduce two species
of particles, $A$ and $B$.
The bulk fluid which we call as ``solvent'' ($S$) is also represented by
single particles of same size as that of the film particles.
All particles have the same mass $m$.
We avoid using the existing DPD models for
a self-assembling bilayer~\cite{sunil-mohamed-05, sanoop-08} as they inherently
include bending and protrusion modes, which makes it
difficult to separate hydrodynamic effects from the effect
of membrane shape deformations.
In DPD, the interaction between any two particles, within a range $r_0$,
is linearly repulsive.
The pairwise interaction leads to full momentum conservation,
which in turn brings out the correct fluid hydrodynamics.
The force on a particle $i$ is given by
\begin{equation}
m\frac{\upd \mathbf{v}_i}{\upd t}
= \sum_{j\neq i} \left[
\mathbf{F}_{ij}^{\rm C}(\mathbf{r}_{ij}) +
\mathbf{F}_{ij}^{\rm D}(\mathbf{r}_{ij},\mathbf{v}_{ij}) +
\mathbf{F}_{ij}^{\rm R}(\mathbf{r}_{ij})
\right],
\label{eqn:EoM}
\end{equation}
where $\mathbf{r}_{ij} = \mathbf{r}_i - \mathbf{r}_j$
and $\mathbf{v}_{ij} = \mathbf{v}_i - \mathbf{v}_j$.
Of the three types of forces acting on the particles,
the conservative force on particle $i$ due to $j$ is
$\mathbf{F}_{ij}^{\rm C}=
a_{ij}\omega(r_{ij})\hat{\mathbf{r}}_{ij}$,
where $a_{ij}$ is an interaction strength and
$\hat{\mathbf{r}}_{ij}=\mathbf{r}_{ij}/r_{ij}$ with
$r_{ij}=|\mathbf{r}_{ij}|$.
The second type of force is the dissipative force
$\mathbf{F}_{ij}^{\rm D}=
-\Gamma_{ij}\omega^2(r_{ij})
(\hat{\mathbf{ r}}_{ij}\cdot{\bf v}_{ij})\hat{\mathbf{r}}_{ij}$,
where $\Gamma_{ij}$ is the dissipative strength for the pair $(i,j)$.
The last is the random force
$\mathbf{F}_{ij}^{\rm R}=
\sigma_{ij}(\Delta t)^{-1/2}\omega(r_{ij})
\zeta_{ij}\hat{\mathbf{r}}_{ij}$,
where $\sigma_{ij}$ is the amplitude of the random noise
for the pair $(i,j)$, and $\zeta_{ij}$ is a random variable
with zero mean and unit variance which is uncorrelated for
different pairs of particles and different time steps.
The dissipative and random forces act as a thermostat, provided the
fluctuation-dissipation theorem $\sigma_{ij}^2=2\Gamma_{ij} k_{\rm B}T$
is satisfied ($k_{\rm B}$ is Boltzmann constant and $T$ is the
thermostat temperature).
The weight factor is chosen as $\omega(r_{ij})= 1-r_{ij}/r_0$ up to
the cutoff radius $r_0$ and zero thereafter.
The particle trajectories are obtained by solving eq.~(\ref{eqn:EoM})
using the velocity-Verlet integrator.
In the simulation, $r_0$ and $m$ set the scales for length and mass,
respectively, while $k_{\rm B}T$ sets the energy scale.
The time is measured in units of
$\tau=\left(mr_0^2/k_{\rm B}T\right)^{1/2}$.
The numerical value of the amplitude of the random force is assumed
to be the same for all pairs such that
$\sigma_{ij}=3.0\left[(k_{\rm B}T)^3 m/r_0^2 \right]^{1/4}$,
and the fluid density is set as $\rho=3.0$.
We set $k_{\rm B}T=1$ and the integration time step is
chosen to be $\Delta t=0.01\tau$.
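For concreteness, a minimal Python sketch of the pairwise DPD force defined above (ours, not the actual production code used in this work) would read:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng()

def dpd_pair_force(ri, rj, vi, vj, a_ij,
                   sigma=3.0, kBT=1.0, r0=1.0, dt=0.01):
    # force on particle i due to j: conservative + dissipative + random
    rij = ri - rj
    r = np.linalg.norm(rij)
    if r >= r0:
        return np.zeros(3)
    e = rij / r
    w = 1.0 - r / r0                  # weight factor omega(r)
    gamma = sigma**2 / (2.0 * kBT)    # fluctuation-dissipation theorem
    zeta = rng.standard_normal()      # zero mean, unit variance
    fc = a_ij * w * e
    fd = -gamma * w**2 * np.dot(e, vi - vj) * e
    fr = sigma * w * zeta * e / np.sqrt(dt)
    return fc + fd + fr
\end{verbatim}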
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.2]{figure1.eps}
\caption{Image of the model film with the bulk fluid called
solvent.
The yellow and red particles represent the two components constituting
the model film, while blue ones represent the solvent.
For clarity, only a fraction of the solvent particles are shown.}
\label{image}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.3]{figure2.eps}
\caption{The snapshots for a 70:30 mixture (yellow and red) undergoing
phase separation at $t=0,150$ and $1000$ (top to bottom) for a pure 2D
(left column) and quasi-2D system with $L_z=40$ (right column).
The above snapshots are from one of the ten independent trials that were
conducted.
}
\label{fig:panel7030}
\end{center}
\end{figure}
The thin film is constructed by placing particles in the $xy$-plane
in the middle of the simulation box (see fig.~\ref{image}).
Owing to the structureless representation of the constituent particles, we
impose an external constraint so as to maintain the film integrity.
This is done by fixing the $z$-coordinates of all the film particles.
It is known that confinement of simulation particles between walls
leads to a reduction in the solvent temperature near the
wall~\cite{altenhoff-07}.
However, since we allow for the in-plane motion of the film particles,
the solvent temperature is found to be only 2\% less than the bulk near
the 2D film.
Hence we consider that this effect is negligible.
Our work involves the systematic variation of the height of the
simulation box starting from the pure 2D case.
In the absence of solvent, we work with a 2D-box of dimensions
$L_x \times L_y = 80\times 80$ with $19,200$ particles constituting
the film.
For the quasi-2D studies, we add solvent particles $S$ above and below
the model film, and increase the height of the box as
$L_z = 5, 20$ and $40$.
For all the cases there are $19,200$ film particles.
The largest box size ($L_z=40$) has $748,800$ solvent particles.
The box with height $L_z= 40$ is found to be sufficiently large
to prevent finite-size effects from affecting the solvent-film
interaction.
The system is then subjected to periodic boundary conditions in all three
directions.
For phase separation simulations, we introduce two species of
film particles $A$ and $B$.
The interaction parameter between various particles are given by
$a_{AA}=a_{BB}=a_{SS}=a_{AS}=a_{BS}=25$ and $a_{AB}=50$.
In order to do a quench, the film is first equilibrated with
a single component, following which a fraction of the particles are
instantaneously changed to the $B$ type.
\section{Phase separation}
\label{separation}
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.4]{figure3.eps}
\caption{The average domain size $R$ as a function of time $t$
for a $70:30$ off-critical mixture.
The upper black curve is the pure 2D case showing an $\alpha=1/2$
scaling, and the lower red curve is the quasi-2D case when $L_z = 40$
showing a distinct $\alpha=1/3$ scaling.
The inset shows the zoomed in portion with different box heights
in the $z$-direction, i.e., $L_z = 0, 5, 20$ and $40$ starting
from the top black curve.
}
\label{fig:7030}
\end{center}
\end{figure}
First we describe the results of the phase separation dynamics.
The snapshots for the $A:B$ composition set to $70:30$ (off-critical
mixture) are shown in fig.~\ref{fig:panel7030} for both the pure 2D case
(left column) and the quasi-2D case with $L_z=40$ (right column).
Qualitatively, it is seen that the domains for the quasi-2D case
are smaller when compared at the same time.
We also monitor the average domain size $R(t)$ which can be obtained
from the total interface length $L(t)$ between the two components.
This is because $R(t)$ and $L(t)$ are related by $L(t)=2 \pi N(t) R(t)$,
where $N(t)$ is the number of domains.
The area occupied by the $B$-component is given by
${\cal A}=\pi N(t) R^2(t)$ which is a conserved quantity.
Then we have
\begin{equation}
R(t) = 2{\cal A}/L(t).
\end{equation}
When the domain size grows as $R \sim t^{\alpha}$, one has
$L \sim t^{-\alpha}$ and $N \sim t^{-2\alpha}$.
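As shown below, $R(t)$ can be estimated in practice from a binarized
snapshot of the film; the sketch assumes the particle configuration has
first been mapped onto a periodic occupancy grid, a step that is not
part of the analysis described above.
\begin{verbatim}
import numpy as np

def domain_size(field, dx=1.0):
    """Average domain size R = 2A/L from a binary 2D field,
    where field[i, j] = 1 for the B-component and 0 for A."""
    # Interface length: count unlike nearest-neighbor bonds under
    # periodic boundaries; each bond contributes one grid spacing dx.
    # (Bond counting overestimates curved interfaces slightly, so a
    # constant correction factor may be applied in practice.)
    L = (np.sum(field != np.roll(field, 1, axis=0)) +
         np.sum(field != np.roll(field, 1, axis=1))) * dx
    A = np.sum(field) * dx**2    # conserved area of the B-component
    return 2.0 * A / L
\end{verbatim}
The growth exponent $\alpha$ then follows from a log-log fit of $R(t)$
against $t$.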
The domain size $R(t)$ for the $70:30$ mixture is shown in
fig.~\ref{fig:7030}.
In this plot, an average over ten independent trials has been taken.
It can be seen that the pure 2D case has a growth exponent $\alpha=1/2$.
Upon the addition of solvent, we observe that the exponent shifts
to a lower value of $\alpha=1/3$.
This exponent is reminiscent of the phase separation dynamics of an
off-critical mixture in 3D.
Upon systematically increasing the amount of solvent in the system by
changing the height $L_z$, we can see a clear deviation from the
pure 2D behavior (see the inset of fig.~\ref{fig:7030}).
There is no further change if $L_z$ is increased beyond $40$.
We note that the scaling regime covers about one decade
in time, which is similar to that previously shown in
the literature~\cite{sunil-mohamed-05}.
A larger system size
$L_x \times L_y = 200 \times 200$
also produced the same scaling for the pure 2D
case, which demonstrates that finite-size effects are small.
However, we present here only the results for the $80\times80$
system in 2D, because this is the system size studied for
the quasi-2D case with a bulk fluid, which requires
a large number of particles in 3D.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.4]{figure4.eps}
\caption{The average domain size $R$ as a function of time $t$ for a
$50:50$ critical mixture.
The upper black curve is the pure 2D case showing an $\alpha=1/2$
scaling, and the lower red curve is the quasi-2D case when $L_z = 40$
showing a distinct $\alpha=1/3$ scaling.
The inset shows the zoomed in portion with different box heights
in the $z$-direction as in fig.~\ref{fig:7030}.
}
\label{fig:5050}
\end{center}
\end{figure}
In fig.~\ref{fig:5050}, we show the result for a component ratio of
$50:50$ (critical mixture).
In this case, the growth exponent for the pure 2D case is less obvious
owing to rapid coarsening of the domains.
However, by simulating a bigger system $200\times200$
with the same areal density, an $\alpha=1/2$ exponent is indeed obtained.
Similar to the off-critical case, the growth of the domains is slowed
down by the addition of solvent and the exponent is reduced to
$\alpha=1/3$.
These results indicate that solvent is responsible for slowing down
the growth dynamics.
The observed exponent $\alpha=1/2$ in pure 2D systems can be explained
in terms of the Brownian coagulation mechanism~\cite{miguel-85}.
From dimensional analysis, the 2D diffusion coefficient of the domain
is given by $D_{2} \sim k_{\rm B}T/\eta$, where $\eta$ is the
film 2D viscosity.
Using the relation
\begin{equation}
R^2 \sim D_{2} t \sim (k_{\rm B}T/\eta) t,
\end{equation}
we find $R \sim t^{1/2}$.
For 3D systems, on the other hand, the diffusion coefficient of the
droplet is inversely proportional to its size, $D_{3} \sim 1/R$, in
accordance with the well-known Stokes-Einstein relation.
Hence the Brownian coagulation mechanism in 3D gives rise to an
exponent $\alpha=1/3$, since $R^2 \sim D_{3} t \sim t/R$ implies
$R \sim t^{1/3}$.
(In general, the exponent is $\alpha=1/d$, where $d$ is the space dimension.)
The change in the exponent from $\alpha=1/2$ to $1/3$ due to the
addition of solvent implies the crossover from 2D to 3D behaviors of the
phase separation dynamics even though the lateral coarsening takes place
only within the 2D geometry~\cite{LS}.
Therefore it is necessary to examine the size dependence of the
domain diffusion coefficient in quasi-2D systems.
This can be calculated by
tracking the mean-squared displacement of domains of various radii.
The equivalent information can be obtained more efficiently by
calculating the two-particle longitudinal coupling diffusion
coefficient in a single-component film rather than in a binary
system.
This is described in the next section.
\section{Correlated diffusion}
\label{diffusion}
Consider a pair of particles separated by a 2D vector $\mathbf{s}$,
undergoing diffusion in the liquid film.
The two-particle mean squared displacement is given
by~\cite{diamant-09}
\begin{equation}
\langle \Delta s_i^k \Delta s_j^l \rangle
= 2 D_{ij}^{kl}(\mathbf{s})t,
\end{equation}
where $\Delta s_i^k$ is the displacement of particle $k\,(=1,2)$
along axis $i\,(=x,y)$, and $D_{ij}^{kl}$ is the diffusion tensor,
which gives the self-diffusion when $k=l$ and the coupling between the
two particles when $k \neq l$.
The $x$-axis is defined along the line connecting a pair of
particles $1$ and $2$,
i.e., $\mathbf{s}= s \hat{x}$.
Hence, we have $D_{xy}^{12}=0$ by symmetry.
The longitudinal coupling diffusion coefficient,
$D_{\rm L}^{\rm c}(s)=D_{xx}^{12}(s \hat{x})$,
gives the coupled diffusion along the line of centers of the particles.
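In a simulation, $D_{\rm L}^{\rm c}(s)$ can be accumulated directly
from pairs of in-plane particle displacements; a schematic
implementation (with illustrative function names and a naive $O(N^2)$
pair loop) is:
\begin{verbatim}
import numpy as np

def coupling_diffusion(pos0, pos1, t, bins):
    """Longitudinal coupling diffusion D_L^c(s) from in-plane (x, y)
    positions pos0 at time 0 and pos1 at time t, each of shape (N, 2)."""
    disp = pos1 - pos0
    N = len(pos0)
    s_vals, prod = [], []
    for i in range(N):
        for j in range(i + 1, N):
            svec = pos0[j] - pos0[i]
            s = np.linalg.norm(svec)
            e = svec / s          # x-axis along the line of centers
            # product of the longitudinal displacement components
            prod.append(np.dot(disp[i], e) * np.dot(disp[j], e))
            s_vals.append(s)
    s_vals, prod = np.array(s_vals), np.array(prod)
    # <ds_x^1 ds_x^2> = 2 D_L^c(s) t, averaged in bins of separation s
    idx = np.digitize(s_vals, bins)
    D = np.array([prod[idx == k].mean() / (2.0 * t)
                  for k in range(1, len(bins))])   # empty bins give NaN
    return 0.5 * (bins[1:] + bins[:-1]), D
\end{verbatim}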
We first describe the analytical expression of $D_{\rm L}^{\rm c}(s)$
based on the Saffman and Delbr\"uck (SD) theory, which was originally
developed for protein diffusion in membranes~\cite{saffman-76}.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.4]{figure5.eps}
\caption{
Longitudinal coupling diffusion $D_{\rm L}^{\rm c}$ as a function of
particle separation $s$.
The upper circles are data for the pure 2D case.
The lower squares correspond to the case with solvent when $L_z=40$.
The upper solid line is the fit by eq.~(\ref{eqn:longDsmallr}), and the
lower solid line is the fit by eq.~(\ref{eqn:longD}).
The dashed line shows the $1/s$ dependence.
}
\label{fig:2pcorr}
\end{center}
\end{figure}
Since the calculation of the diffusion coefficient in a pure 2D system
is intractable due to the Stokes paradox, SD circumvented this problem
by taking into account the presence of the solvent with 3D viscosity
$\eta_{\rm s}$ above and below the membrane.
Suppose a point force $\mathbf{f}$ directed along the plane of the
film lying in the $xy$-plane acts at the origin.
We then seek the velocity $\mathbf{v(s)}$ induced at the position
$\mathbf{s}$.
According to the SD theory, it is given in
Fourier space,
$\mathbf{v}[\mathbf{q}] = \int {\rm d} \mathbf{s}\,
e^{-i \mathbf{q} \cdot \mathbf{s}} \mathbf{v(s)}$,
as~\cite{saffman-76,diamant-09,IF-08}
\begin{equation}
v_i [\mathbf{q}]= G_{ij}^{\rm SD} [\mathbf{q}] f_j
= \frac{1}{\eta q (q+\nu)} \left( \delta_{ij} -
\frac{q_i q_j}{q^2}\right)f_j,
\end{equation}
where $G^{\rm SD}$ is the 2D film analog of the Oseen tensor.
In the above, the SD length is defined by
$\nu^{-1} = \eta/(2 \eta_{\rm s})$.
For over-damped dynamics, we can use the Einstein relation to relate
the diffusion coefficient $D_{\rm L}^{\rm c}$ to
$G_{ij}^{\rm SD}$~\cite{diamant-09}.
After converting $G_{ij}^{\rm SD}[\mathbf{q}]$ into real space, we
obtain
\begin{align}
D_{\rm L}^{\rm c}(s) & = k_{\rm B}T G_{xx}^{\rm SD}(s \hat{x})
\nonumber \\
& =\frac{k_{\rm B}T}{4\pi \eta}
\left[\frac{\pi \mathbf{H}_1(\nu s)}{\nu s}-
\frac{\pi Y_1(\nu s)}{\nu s}
- \frac{2}{(\nu s)^2} \right],
\label{eqn:longD}
\end{align}
where $\mathbf{H}_1$ and $Y_1$ are the Struve function and the
Bessel function of the second kind, respectively.
At short distances $s \ll \nu^{-1}$, the asymptotic form of
the above expression becomes
\begin{equation}
D_{\rm L}^{\rm c}(s) \approx
\frac{k_{\rm B}T}{4 \pi \eta}
\left[ \ln\left(\frac{2}{\nu s}\right) - \gamma + \frac{1}{2}\right],
\label{eqn:longDsmallr}
\end{equation}
where $\gamma=0.5772 \cdots$ is Euler's constant.
At large inter-particle separations $s \gg \nu^{-1}$,
on the other hand, eq.~(\ref{eqn:longD}) reduces to
\begin{equation}
D_{\rm L}^{\rm c}(s) \approx
\frac{k_{\rm B}T}{2\pi\eta\nu s}=
\frac{k_{\rm B}T}{4\pi\eta_{\rm s} s},
\label{eqn:longDlarger}
\end{equation}
showing the asymptotic $1/s$ decay which reflects the 3D nature
of this limit.
Notice that eq.~(\ref{eqn:longDlarger}) depends only on the solvent
viscosity $\eta_{\rm s}$ but not on the film viscosity $\eta$
any more.
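Equation~(\ref{eqn:longD}) and its asymptotes can be evaluated
numerically with standard special-function libraries; a possible
implementation (the fits in fig.~\ref{fig:2pcorr} would then amount to
a least-squares adjustment of $k_{\rm B}T/4\pi\eta$ and $\nu$) is:
\begin{verbatim}
import numpy as np
from scipy.special import struve, y1

def DL_SD(s, eta, nu, kBT=1.0):
    """Full SD expression, eq. (longD)."""
    x = nu * s
    return (kBT / (4 * np.pi * eta)) * (np.pi * struve(1, x) / x
                                        - np.pi * y1(x) / x
                                        - 2.0 / x**2)

def DL_short(s, eta, nu, kBT=1.0):
    """Short-distance asymptote, eq. (longDsmallr)."""
    return (kBT / (4 * np.pi * eta)) * (np.log(2.0 / (nu * s))
                                        - np.euler_gamma + 0.5)

def DL_long(s, eta_s, kBT=1.0):
    """Long-distance asymptote, eq. (longDlarger): the 1/s decay."""
    return kBT / (4 * np.pi * eta_s * s)
\end{verbatim}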
In fig.~\ref{fig:2pcorr}, we plot the measured longitudinal coupling
diffusion coefficient $D_{\rm L}^{\rm c}$ as a function of 2D distance $s$.
In these simulations, we have worked only with single-component
films, with the same system sizes and numbers of particles
as those used for the phase separation simulations.
We have also averaged over 20 independent trials.
In the pure 2D case without any solvent, $D_{\rm L}^{\rm c}$ shows a
logarithmic dependence on $s$.
This is consistent with eq.~(\ref{eqn:longDsmallr}) obtained
when the coupling between the film and solvent is very weak so
that the film can be regarded almost as a pure 2D system.
Using eq.~(\ref{eqn:longDsmallr}) as an approximate expression, the
fit yields
$k_{\rm B}T/4\pi\eta \approx 0.89\times10^{-2}$ and
$\nu^{-1} \approx 20 $.
In an ideal case, the SD length should diverge due to the absence
of solvent.
The obtained finite value for $\nu^{-1}$ is roughly set by
half of the system size in the simulation.
When we add solvent ($L_z=40$), $D_{\rm L}^{\rm c}$ decreases
and no longer behaves logarithmically.
In this case, we use the full expression eq.~(\ref{eqn:longD}) for the
fitting, and obtain $k_{\rm B}T/4\pi\eta \approx 1.35\times 10^{-2}$
and $\nu^{-1} \approx 1$.
In the above two fits we have neglected the first two points as
they lie outside the range of validity, $s\gg1$, of
eq.~(\ref{eqn:longD})~\cite{diamant-09}.
Since $\nu^{-1} \approx 1$ when the solvent is present, the data shown
in fig.~\ref{fig:2pcorr} are in the crossover region,
$s \gtrsim \nu^{-1}$, showing an approach towards the asymptotic $1/s$
dependence as in eq.~(\ref{eqn:longDlarger}).
Hence we conclude that the solvent brings in the 3D hydrodynamic
property to the diffusion in films.
This is the reason for the 3D exponent $\alpha=1/3$
in the phase separation dynamics, and supports our conclusion that it
is mainly driven by the Brownian coagulation mechanism.
In our simulations the film and the solvent
have very similar viscosities.
This sets the SD length scale to be of the order of unity, which
is consistent with the value $\nu^{-1} \approx 1$ obtained from the
fitting.
As explained above, the fit also provides the 2D film viscosity as
$\eta\approx6$, and hence we obtain $\eta_{\rm s} \approx 3$.
This value is in reasonable agreement with the value
$\eta_{\rm s} \approx 1$ calculated in ref.~\cite{pan-08} by using
the reverse Poiseuille flow method.
The reason for the slightly higher value of $\eta_{\rm s}$ in our
simulations is that the tracer particles are of the same size as the
film particles.
This may lead to an underestimation of the correlated diffusion
coefficient.
In real biomembranes sandwiched by water, however, the SD length is
much larger than the lipid size, being of the order of a
micron~\cite{saffman-76}.
Hence the 3D nature of hydrodynamics can be observed only for large enough
domains observed under optical microscopes~\cite{cicuta-07}.
\section{Discussion}
Several points merit further discussion.
The growth exponents obtained from our simulations for critical mixtures
are the same as for the off-critical case, namely $\alpha=1/2$ without
solvent and $\alpha=1/3$ with solvent.
A previous DPD study by Laradji and Kumar on phase separation dynamics of
two-component membranes (both critical and off-critical cases) used a
coarse-grained model for the membrane lipids~\cite{sunil-mohamed-05}.
In their model, the self-assembly of the bilayer in the presence of
solvent is naturally taken into account.
The exponent for the off-critical case $\alpha=1/3$ is the same as that
obtained in our study, although they attributed this value to the LS
mechanism.
For critical mixtures in the presence of solvent, they obtained a
different value $\alpha=1/2$.
A suitable explanation for this exponent was not given in their paper.
As a related experimental work, the diffusion of tracer particles
embedded in a soap film was recently reported~\cite{prasad-09}.
When the diameter of the tracer particles is close to the thickness
of the soap film, the system shows a 2D behavior.
On the other hand, if the particle diameter is much smaller than the
soap film thickness, it executes a 3D motion.
On systematically increasing the soap film thickness, they identified
a crossover from 2D to 3D behavior.
In this paper, we have demonstrated the analogue for a
2D liquid film-solvent system using DPD simulations.
In the SD theory, the bulk fluid is assumed to occupy an infinite
space above and below the membrane.
The situation is altered when the solvent and the membrane are confined
between two solid walls~\cite{IF-08}.
If the distance $H$ between the membrane and the wall is small enough,
we are in a situation similar to that described by
Evans and Sackmann~\cite{evans-88}.
The Oseen tensor $G^{\rm ES}$ in this case is defined through
the relation~\cite{shige-07},
\begin{equation}
v_i [\mathbf{q}]= G^{\rm ES}_{ij} [\mathbf{q}] f_j
= \frac{1}{\eta (q^2+\kappa^2)}
\left( \delta_{ij} - \frac{q_i q_j}{q^2}\right) f_j,
\end{equation}
where the new length scale is defined as
$\kappa^{-1}=\sqrt{\eta H/(2\eta_{\rm s})}$.
Notice that $\kappa^{-1}$ is the geometric mean of $\nu^{-1}$ and
$H$~\cite{stone-98}.
Following the same procedure as in the previous section, the
longitudinal coupling diffusion coefficient can be obtained as
\begin{equation}
D_{\rm L}^{\rm c}(s) =
\frac{k_{\rm B}T}{2\pi\eta} \left[\frac{1}{(\kappa s)^2}
- \frac{K_1(\kappa s)}{\kappa s} \right],
\label{ESdiff}
\end{equation}
where $K_1$ is the modified Bessel function of the second kind.
At short distances $s \ll \kappa^{-1}$, we have
\begin{equation}
D_{\rm L}^{\rm c}(s) \approx
\frac{k_{\rm B}T}{4 \pi \eta}
\left[ \ln\left(\frac{2}{\kappa s}\right) - \gamma + \frac{1}{2}\right],
\end{equation}
which is almost identical to eq.~(\ref{eqn:longDsmallr}) except
that $\nu$ is now replaced by $\kappa$.
At long distances $s \gg \kappa^{-1}$, on the other hand, we get
\begin{equation}
D_{\rm L}^{\rm c}(s) \approx
\frac{k_{\rm B}T}{2\pi\eta \kappa^2 s^2}=
\frac{k_{\rm B}T H}{4\pi\eta_{\rm s} s^2},
\end{equation}
which exhibits a $1/s^2$ dependence.
This is in contrast to eq.~(\ref{eqn:longDlarger}).
Following a scaling argument similar to that above, we predict
that, in the presence of solid walls, the domain growth exponent
should be $\alpha=1/4$ within the Brownian coagulation mechanism.
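For completeness, the confined-film expression eq.~(\ref{ESdiff}) can
be evaluated in the same way as the SD result above; a minimal sketch
is:
\begin{verbatim}
import numpy as np
from scipy.special import k1

def DL_ES(s, eta, kappa, kBT=1.0):
    """Evans-Sackmann case, eq. (ESdiff): film confined near walls."""
    x = kappa * s
    return (kBT / (2 * np.pi * eta)) * (1.0 / x**2 - k1(x) / x)
\end{verbatim}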
In biological systems, the above model with solid walls can be
relevant because the cell membranes are strongly anchored to the
underlying cytoskeleton, or are tightly adhered to other cells.
In summary, we have shown that the bulk fluid has a significant
effect on the phase separation dynamics of thin liquid films
through a simple quasi-2D simulation model.
We have demonstrated the change in the growth exponents
from 2D to 3D by the addition of bulk fluid.
This is further justified by the two-particle correlation studies,
which showed that the longitudinal coupling diffusion coefficient
changes from a logarithmic dependence for the pure 2D case to an
algebraic one for the quasi-2D case.
Future directions include investigating the role of out-of-plane
fluctuations and the effect of boundary walls on the phase separation.
\acknowledgments
This work was supported by KAKENHI (Grant-in-Aid for Scientific
Research) on Priority Area ``Soft Matter Physics'' and Grant
No.\ 21540420 from the Ministry of Education, Culture, Sports,
Science and Technology of Japan.
\section{Introduction}
In recent years, stellar photometry primarily from the Sloan Digital Sky Survey (SDSS)
has allowed astronomers to discover previously unknown spatial substructure
in the Galactic spheroid. Other surveys such as 2MASS and QUEST have also been
influential. Highlights include the discovery of new tidal
streams \citep{2002ApJ...569..245N,2003AJ....126.2385O,2006ApJ...639L..17G,
2006ApJ...643L..17G,2006ApJ...645L..37G,2006ApJ...642L.137B,2009ApJ...700L..61N}; dwarf
galaxies, star clusters, and transitional objects
\citep{2005AJ....129.2692W,2005ApJ...626L..85W,2006ApJ...643L.103Z,2006ApJ...650L..41Z,
2006ApJ...647L.111B,2007ApJ...654..897B,2007ApJ...656L..13I,2009ApJ...696.2179K,2009MNRAS.397.1748B}; and previously
unidentified spatial structure of unknown or controversial
identity \citep{2001ApJ...554L..33V,2003ApJ...588..824Y,2004ApJ...615..738M,
2005ASPC..338..210N,2007ApJ...657L..89B,2008ApJ...673..864J,anVOD}.
The discovery of spatial substructure immediately
suggests the need for spectroscopic follow-up to determine the orbital
characteristics of tidal debris, the masses of bound structures, and the
character of structures of unknown identity. It has long been known that
a spheroid that is formed through accretion will retain the record of its
merger history much longer in the velocities of its component stars than it
will in their spatial distribution \citep{2003MNRAS.339..834H}. For this
reason early searches concentrated on velocity to find ``moving groups"
in the spheroid \citep{1996ApJ...459L..73M} rather than on the search
for spatial substructure.
Groups of spheroid stars that have similar
spatial positions and velocities (moving groups) are generally equated with
tidal disruption of star clusters or dwarf galaxies, unlike local disc
``moving groups" that were once hypothesized to be the result of
disruption of star clusters \citep{bok,eggen} but are now thought to
be the result of orbital resonances \citep{2008A&A...483..453F}.
Recently, searches for spheroid velocity substructure in SDSS data have concentrated
on local (up to 17.5 kpc from the Sun) metal-poor main sequence stars
\citep{klement,smith2009,schlaufman}, finding more than a dozen groups of stars
with coherent velocities. \citet{schlaufman} estimate that there are $10^3$
cold substructures in the Milky Way's stellar halo. Blue horizontal branch (BHB)
stars are particularly important for studies of spheroid substructure
(e.g. Clewley \& Kinman 2006) because they can be seen to large
distances in the SDSS and because a large fraction of the SDSS stellar
spectra are BHBs. In the future we need
more complete spectroscopic and proper motion surveys
designed to discover velocity substructure in the Milky Way's spheroid.
In this paper we present evidence for a tidal moving group of BHB stars, discovered in the
SDSS and Sloan Extension for Galactic Understanding and Exploration (SEGUE; Yanny et al. 2009)
spectroscopic survey.
Additional evidence that the stars are part of a coherent structure
comes from the unusually low metallicity of the stars in the moving group.
We are unable to isolate this moving group in density
substructure, highlighting the power of velocity
information to identify low density contrast substructure in the
spheroid.
Historically many of the spheroid substructures have been discovered by
eye in incomplete datasets, rather than by a mechanical analysis of data
with a well-understood background population. As the size of the substructures
decreases, it becomes more critical to determine whether the observed structure
could be a random fluctuation of the background. In this paper, we present
a method (Appendix A) for estimating the probability that a random fluctuation
could produce an observed ``lump."
As we work towards the fainter limits of the SDSS data, it becomes more important
to understand how stellar parameters derived from spectra are affected by a low
signal-to-noise (S/N) ratio. The stars in the newly detected halo substructure have
S/N ratio between 7 and 10, as measured from the ratio of fluctuations to continuum
on the blue side of the spectrum. In the past we have successfully used spectra with S/N
as low as 5 to measure radial velocities of F turnoff stars in the Virgo
Overdensity \citep{netal07}. In this paper, we show that although higher S/N is
preferable, some information about metallicity and surface gravity can be gained
for BHB stars with S/N ratios less than 10.
Since the strongest evidence that these BHBs form a coherent group is that
they have unusually low metallicity, even for the spheroid, we also explore
the SEGUE metallicity determinations for BHB stars in the outer spheroid.
A great variety of previous authors have measured a large metallicity
dispersion for spheroid stars, with average metallicities somewhere in the
$-1.5 < \rm [Fe/H] < -1.7$ range \citep{1987ARA&A..25..603F,gwk89}. These
investigations analysed globular clusters
\citep{1978ApJ...225..357S,1985ApJ...293..424Z},
RR Lyraes \citep{1985ApJ...289..310S,1991ApJ...367..528S},
K giants \citep{2003AJ....125.2502M}, and
dwarfs \citep{1990AJ.....99..201C}. \citet{1986ApJS...61..667N} found an
average [Fe/H] of $-1.67$ for globular clusters in the outer halo, and
an average metallicity of $-1.89$ for field stars in the outer halo, again with
a large metallicity dispersion that does not depend on distance.
The Besan\c{c}on model of the Milky Way adopted an average [Fe/H] of $-1.7$, with
$\sigma=0.25$, for the stellar halo \citep{2000A&A...359..103R}.
Recently, \citet{carollo,carollo2} analysed the full space motions of $>10,000$ stars
within 4 kpc from the Sun, and found evidence that the ones that are
kinematic members of the outer halo have an average metallicity of
[Fe/H]=$-2.2$.
Our analysis of the SDSS BHB stars shows that the metallicity of BHB
stars in the Galactic spheroid does not change from $g_0=14.5$ (6 kpc
from the Sun) to $g_0=19.15$ (55 kpc from the Sun) at high Galactic
latitudes. The mean measured metallicity of these stars is [Fe/H]=$-1.9$,
but analyses of globular clusters in the sample show that there could
be a systematic shift in the BHB measured metallicities of a few
tenths of a dex, and in fact [Fe/H]=$-2.0$ is our best guess for
the proper calibrated value.
\section {Observations and Data}
\begin{figure*}
\noindent
\centerline{\includegraphics[angle=-90,width=.95\textwidth]{fig1.ps}}
\caption{Identification of a moving group. The upper left and right plots show blue
horizontal branch (BHB) stars with spectroscopic data in the region
$50^\circ<l<80^\circ$ and $40^\circ<b<55^\circ$, where the velocity
is the line of sight velocity with respect to the Galactic standard of rest. The
data in the two plots are the same, with the upper left plot marking two
overdensities noticed on the upper right plot.
Open circles in the lower left plot show the sky positions of the same BHB candidates
with spectroscopic data in SDSS DR6. Filled circles correspond
to stars that are in the co-moving stellar association, with the same stars plotted
in the upper left and lower left plots. These stars have $g_0$
magnitudes of $18.65 < g_0 < 19.15$ and line of sight velocities, with respect to
the Galactic standard of rest, of $-35 < V_{\mbox{\it gsr}} < 15$ km s$^{-1}$. The
crossed circles show the positions of stars in an overdensity at $g_0=16.5$ found in the upper left
plot. In the lower left figure, a higher fraction of the BHB stars have spectra near
the globular cluster M13, at $(l,b)=(59^\circ,41^\circ)$. The box indicates the area of
the newly discovered overdensity.
The smaller black dots in the lower left plot indicate the
positions of photometrically selected BHB stars in Galactic coordinates.
The lower right plot shows the positions of the photometric
data from the lower left plot that has the $g_0$ magnitude range of the
moving group, $18.65 < g_0 < 19.15$. Note that the moving group is not detected from the
photometric data alone.
\label{ident}}
\end{figure*}
In order to search the stellar halo for co-moving groups of stars, we selected BHB
stars from the sixth data release (DR6; Adelman-McCarthy et al. 2008) of the SDSS.
This release contains both Legacy Survey and SEGUE photometric data from 9583 square
degrees of sky. More technical information for the SDSS survey can be found
in \citet{2000AJ....120.1579Y,1996AJ....111.1748F,1998AJ....116.3040G,2002AJ....123.2121S,
2002AJ....123..485S,2003AJ....126.2081A,2003AJ....125.1559P,
2004AN....325..583I,2006AJ....131.2332G,2006AN....327..821T}.
We selected 8753 spectra with the colors of A stars from the SDSS DR6 Star database using the
color cuts $-0.3 < (g-r)_0 < 0.0$ and $0.8 < (u-g)_0 < 1.5$, within the magnitude
range $15 < g_0 < 23$, where the $0$ subscript indicates that the magnitudes have been
corrected for reddening and extinction using the \citet{1998ApJ...500..525S} reddening map.
Within this color box, the stars that are bluer in $(u-g)_0$ tend to be high surface
gravity blue straggler (BS) or main sequence stars, and those that are redder in
$(u-g)_0$ tend to be low surface gravity blue horizontal branch (BHB) stars. Using
$(u-g)_0$, $(g-r)_0$ color selection parameters as described in Figure 10
of \citet{ynetal00}, we divided the sample into $4630$ candidate BHB
stars and $4123$ candidate BS stars with spectra.
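The color-box selection itself is straightforward to reproduce; a
sketch of the cuts (the subsequent BHB/BS split follows the
$(u-g)_0$, $(g-r)_0$ region of Figure 10 of \citet{ynetal00}, whose
boundary is not reproduced here) is:
\begin{verbatim}
import numpy as np

def a_star_cuts(u0, g0, rmag0):
    """Dereddened color/magnitude cuts for the A-star sample.
    u0, g0, rmag0 are extinction-corrected SDSS magnitudes (arrays)."""
    ug, gr = u0 - g0, g0 - rmag0
    return ((ug > 0.8) & (ug < 1.5) &
            (gr > -0.3) & (gr < 0.0) &
            (g0 > 15.0) & (g0 < 23.0))
\end{verbatim}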
We separated the stars by surface
gravity using a photometric technique because, in 2007 when we were selecting the stars,
we were concerned that the S/N of the spectra would degrade more quickly than the
S/N of the photometry at the faint end of our sample. The S/N of the spectroscopy increases
over the whole magnitude range from $15<g_0<20.5$, while the photometric accuracy degrades
only for $u>19$. The photometric degradation affects halo BHB stars fainter than about $g_0=18$.
With more recent reduction software (see below), we have found similar sensitivity to surface gravity
in photometric and spectroscopic indicators for faint SDSS spectra, but the spectroscopic
separation is much better than photometric separation for bright, high S/N spectra.
The photometric selection
technique does not introduce gradients in the selection effect with apparent magnitude,
and can be applied identically for stars with and without measured spectra.
At bright magnitudes, the spectral S/N is high enough that the DR6 surface
gravities are reliable. The SDSS spectra are R=1800, and the S/N of A star spectra varies
from 50 at $g_0=16$ to 7 at $g_0=19$. We tested the efficiency and completeness
of the photometric selection technique using 3996 SDSS spectra of stars with
$16<g_0<17.5$ and colors of A-type stars. Of the 1948 that had surface gravities
consistent with BHB stars, 84\% were classified as BHBs by the photometric technique.
Of the 2048 spectra with high surface gravity (BS stars), 35\% were classified as
BHBs in the photometric technique. These numbers for completeness and contamination
are similar to estimates that we made from examination of Figure 13 from
\citet{ynetal00}, and apply only to stars with $u<19$. Fainter than this, the photometric
errors in $u$ will cause increased mixing of the populations.
\section{Detection of a new moving group in the Galactic halo}
We searched our BHB catalog for stars that clustered in velocity, apparent
magnitude, and Galactic coordinates. Our search was less systematic than that used
by \citet{2006MNRAS.371L..11C}, but allows us to find moving groups that are
more extended, such as tidal streams. Because they first searched for pairs of
co-moving stars that were within $2$ kpc of each other, they reduced their sensitivity
to tidal debris streams that could be spread over tens of kpc.
We first divided the catalog into sky areas in $(l,\sin b)$ that were $20^\circ$ by $0.2$
in the respective coordinates,
and made plots of $V_{\mbox{\it gsr}}$ vs. $g_0$ magnitude. We used $\sin b$ so that each
partition of the data would cover an equal area of the sky. Here, $V_{\mbox{\it gsr}}$ refers to the
line-of-sight velocity transformed to the Galactic standard of rest using the Solar
motion of $(v_X,v_Y,v_Z)=(10,225,7)$ km s$^{-1}$ \citep{dehetal98}. By examining the
line-of-sight velocity as a function of magnitude, we found a number of
co-moving groups of stars that were associated with known globular clusters, dwarf galaxies, and tidal
streams. Halo substructures that had been previously identified were not studied
further. We concentrated only on the most significant of the remaining BHB
associations; the others were left for future studies.
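The transformation from heliocentric to Galactic-standard-of-rest
velocities is a simple projection; a sketch, assuming the usual
convention of adding the line-of-sight projection of the solar motion,
is:
\begin{verbatim}
import numpy as np

def v_gsr(v_helio, l_deg, b_deg, vsun=(10.0, 225.0, 7.0)):
    """Line-of-sight velocity in the Galactic standard of rest,
    given the heliocentric radial velocity (km/s) and Galactic
    coordinates (degrees); vsun = (vX, vY, vZ)."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    vX, vY, vZ = vsun
    return (v_helio + vX * np.cos(b) * np.cos(l)
                    + vY * np.cos(b) * np.sin(l)
                    + vZ * np.sin(b))
\end{verbatim}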
Figure 1 shows the $V_{\mbox{\it gsr}}$ vs. $g_0$ plot for the most significant co-moving group
of BHB stars, along with the distribution of these stars in $(l,b)$. Although
there is a second cluster of stars at $g_0 \sim 16.5$ in the top left plot, we show in
the lower left plot that these stars are dispersed in position on the sky so we did not
explore that association further.
The moving group we describe here has BHB stars at $g_0=18.9$, $\sigma_{g_0}=0.1$;
and $V_{\mbox{\it gsr}}=-10$ km s$^{-1}$, $\sigma_{\mbox{\it gsr}}=10$ km s$^{-1}$. The dispersion is
consistent with the instrumental error in the velocity of 19$^{\rm th}$ magnitude
BHB stars \citep{sirkoa}. These stars have an average heliocentric radial
velocity of $\langle V_r\rangle=-157\pm4$ km s$^{-1}$,
which rules out any association of these stars with the disc population.
In the direction $(l,b)=(65^\circ,48^\circ)$, the line-of-sight, heliocentric
velocities expected for the thin disc, thick disc, and spheroid components
are $-2$, $-35$, and $-144$ km s$^{-1}$, respectively. The heliocentric radial
velocity of our moving group is not far from that of the average spheroid star
in that direction ($V_{\mbox{\it gsr}}=0$ for the spheroid and $V_{\mbox{\it gsr}}=-10$ km s$^{-1}$
for the moving group), but the dispersion rules out an association with the
general spheroid population.
The lower right panel of Figure 1 shows that the association of horizontal branch
stars could not be detected spatially; velocity information was required to identify
the moving group.
In the same area, at $(l,b)\sim(59^\circ,46^\circ)$, another cluster identified from SDSS BHB
stars was found by \citet{2006MNRAS.371L..11C}. Although this cluster lies at
roughly the same (l,b) angular position as the cluster identified in this paper, the estimated
distance to the Clewley et al. clump is 7.8 kpc, which is much closer to the Sun
than our moving group, with a horizontal branch brighter than $g_0=16$.
The parameters of the seven BHB stars in the moving group are presented in Table 1.
The new moving group found in this study is located at $(l,b)\sim(65^\circ,47.5^\circ)$. The
stars have apparent magnitude in the $g$ filter of $g_0\sim18.9$ and
velocities located within $-35 < V_{\mbox{\it gsr}} < 15$ km s$^{-1}$, with an average velocity of
$\langle V_{\mbox{\it gsr}}\rangle=-10$ km s$^{-1}$. A $g_0$ apparent magnitude of $18.9$
for BHB stars, with an estimated absolute magnitude of $M_{g_0}=0.45$ \citep{2009ApJ...700L..61N}
corresponds to a distance of $\sim 50$ kpc from the Sun. The dispersion in apparent
magnitude for the seven stars is $\sigma_{g_0}=0.10$, and drops to $\sigma_{g_0}=0.03$
if the two outliers with larger magnitudes (one of which has a high estimated surface
gravity) are excluded. The magnitudes are consistent with a structure that has zero
depth, but the depth could be as large as $\sigma=2$ kpc.
\begin{table*}
\centering
\begin{minipage}{140mm}
\caption{Properties of BHBs in the Hercules Moving Group.}
\begin{tabular}{@{}lrrrrrrrrrrr@{}}
\hline
ID & RA & DEC & l & b & $g_0$ & $(u-g)_0$ & $(g-r)_0$ & $V_{\mbox{\it gsr}}$ & $\log(g)$ & $[Fe/H]$ & S/N\\
plate-mjd-fiber & hh:mm:ss.s & dd:mm:ss.s & $^\circ$ & $^\circ$ & mag & mag & mag & km s$^{-1}$ & dex & dex &\\
\hline
1336-52759-290 & 16:08:33.7 & +38:49:19.7 & 61.8 & 47.6 & 19.00 & 1.22 & -0.21 & 2.0 & 2.75 & -2.99 & 7.7\\
1335-52824-100$^1$ & 16:06:59.7 & +40:20:39.5 & 64.1 & 47.8 & 18.89 & 1.17 & -0.16 & -4.4 & 2.34 & -1.97 & 8.5\\
1335-52824-167$^1$ & 16:04:32.4 & +40:46:03.1 & 64.8 & 48.3 & 18.85 & 1.27 & -0.17 & -10.0 & 2.91 & -2.86 & 9.8\\
1335-52824-362$^1$ & 16:00:37.6 & +41:55:10.5 & 66.6 & 48.9 & 18.82 & 1.17 & -0.20 & -8.3 & 2.83 & -2.63 & 9.2\\
1334-52764-540 & 15:58:24.7 & +43:04:07.4 & 68.4 & 49.2 & 19.10 & 1.16 & -0.14 & -9.7 & 3.39 & -2.10 & 7.7\\
1335-52824-008$^1$ & 16:09:25.9 & +40:05:30.9 & 63.7 & 47.4 & 18.86 & 1.24 & -0.20 & -19.1 & 2.92 & -2.27 & 8.1\\
0814-52443-046 & 16:17:17.7 & +44:19:04.7 & 69.7 & 45.7 & 18.89 & 1.16 & -0.14 & -18.7 & 2.62 & -2.42 & 9.1\\
\hline
1056-52764-637 & 16:18:21.5 & +36:54:11.0 & 59.1 & 45.7 & 18.90 & 1.32 & -0.12 & -5.3 & 2.21 & -2.99 & 9.9\\
\hline
\end{tabular}
$^1$ Highly likely member of moving group.
\end{minipage}
\end{table*}
\section{Surface Gravity Estimates of Faint BHB Stars in SDSS}
The SDSS DR7 SSPP pipeline \citep{SSPP1,SSPP2} generates ten different measures of the surface
gravity for each stellar spectrum, and additionally computes an ``adopted" $\log g$, which is
determined from an analysis of the ten different methods. For stars with S/N less than 10,
the adopted $\log g$ is set to an error code.
We analyzed histograms of the distribution in $\log g$ of bright and faint A stars in our sample,
looking for a method that showed two peaks: one at low surface gravity (BHB stars),
and one at high surface gravity (BS or main sequence stars). Several of the methods appeared
to separate bright A stars by surface gravity, and none appeared to separate the stars
with S/N$<10$ very well. We selected the logg9 parameter, which
is described in general terms in \citet{SSPP1},
in which it is referred to as the ``k24 grid". The original reference to the method is
\citet{ap06}, where the method is described in great detail. The method finds the best match
between the SDSS spectrum and a set of model atmospheres from \citet{kurucz93}, using the
$(g-r)$ color index and the normalized spectral fluxes in the wavelength range
$4400 < \lambda < 5500 $ \AA\ , using a resolving power of $R=1000$. The surface gravities using
this DR7 logg9 estimator are tabulated in Table 1.
Six of the seven candidate members of the moving group have surface gravities less than
$3.0$, as expected for BHB stars.
Subsequent to our search for spheroid structure using BHB stars, \citet{xue} published a list of
SDSS DR6 A stars along with measurements of $D_{0.2}$ and $f_m$ measurements. These are
classical indices used by \citet{1983ApJS...53..791P} to classify BHB stars from the $H_\delta$
line width and depth. Using this method for determining the surface gravities of A
stars, only two of our candidate A stars appear to have low surface gravity. There are two
effects that contribute to the small fraction of confirmed BHBs: (1) the performance
of the $D_{0.2}$ indicator is degraded at low signal-to-noise, and (2) we disagree with
the published $D_{0.2}$ values for two of the stars.
In the top panels of Figure 2, we show the performance of the $D_{0.2}$, $f_m$ indicators for the
set of $10,224$ stars from \citet{xue}, and for the subset of 1176 of these stars for which
the S/N of the spectrum is between 7 and 10. While the bright, high signal-to-noise spectra
that were used by \citet{xue} are separable, the lower signal-to-noise spectra do not have
two clear peaks in the diagram. We show histograms of $D_{0.2}$ for all of
the stars between the two vertical lines ($0.16<f_m<0.24$) in the top two panels of
Figure 3. For all of the stars (most of which are high signal-to-noise), we see two peaks
in the $D_{0.2}$ distribution. The low S/N sample shows no separation.
We compare the separation using the properties of one line of the spectrum ($H \delta D_{0.2}$
vs. $f_m$) with the SSPP logg9 surface
gravities using photometry and the continuum levels of the spectra in the bottom panels of
Figures 2 and 3. Figure 2 shows that the logg9 indicator works about as well as the
$D_{0.2}$ indicator for bright stars. Note that in the lower right panel, for stars with
marginal S/N, the distribution of stars is much wider in the vertical direction, and the width
in the horizontal axis is wider but not dramatically so. The lower panels of Figure 3
show histograms of logg9 for all of the stars between the two horizontal lines
($15<D_{0.2}<40$). The brighter star data on the left shows two separate peaks for low and high
surface gravity stars, while the marginal S/N data in the right panel does not show
that separation. However, we can see from the correlation between $D_{0.2}$ and logg9
in the lower right panel of Figure 2 that there is some information on the surface
gravity of even these lower S/N spectra.
Because we were surprised that the $D_{0.2}$ numbers could be so high, even at low
signal-to-noise, for stars that we expected were BHBs in a moving group, we independently
computed new values for the width of the $H_\delta$ line at 80\% of the continuum for the eight
stars in Table 1. For six of them, we measured widths that were reasonably consistent with
the published values. However, we measured a width of 23 for 1335-52824-167 instead of
40, and a width of 23 for 1335-52824-008 instead of 33. With these corrections,
six of the eight stars in Table 1 have $D_{0.2}$ widths less than 30, including all four
of the stars that we will later show are highly likely members of the moving group.
\begin{figure*}
\noindent
\centerline{\includegraphics[width=0.90\textwidth]{fig2.ps}}
\caption{Luminosity separation of A colored stars into BHB and BS populations. Upper left panel
reproduces the sample of 10,224 A colored stars from \citet{xue}, showing that BHBs have
lower $H\delta$ $D_{0.2}$ widths than BSs. The upper right panel
shows a low S/N subset of 1,176 stars ($7 < \rm S/N < 10$). The candidate
stream members from Table 1, which all have S/N in this range, are indicated with
(red) crosses. The single $D_{0.2}$ method appears to be a less effective discriminant at
S/N $< 10$. However, our independent measurement of $D_{0.2}$ for the moving group
candidates resulted in significantly lower $H_\delta$ widths for two of the eight stars
(indicated by the vertical lines below two of the crosses), so the effectiveness of the
$D_{0.2}$ method may depend on how its measurement is implemented for low signal-to-noise. The
lower left panel shows $D_{0.2}$ versus the 9$^{\rm th}$ SSPP method (logg9) for the same
\citet{xue} sample. Note that the logg9 surface gravity discriminant achieves similar
results to the $D_{0.2}$ measure for bright stars.
On the lower right, the lower S/N sample is shown. A cut at $\rm logg9 < 3.15$ maintains the
$D_{0.2}$ separation into BHBs and BSs which appears effective
even at S/N $< 10$. In this paper, we use an even more conservative cut at logg9$<3.0$. Using this
conservative selection criterion for BHB stars, 7/8 stream
candidates (red crosses) are consistent with low surface gravity BHB stars.
}
\end{figure*}
\begin{figure*}
\noindent
\centerline{\includegraphics[angle=-90,width=0.95\textwidth]{fig3.ps}}
\caption{Histograms showing gravity separation of data between the parallel lines in each panel of Figure 2.
Upper left: We show $D_{0.2}$ for the subset of the \citet{xue} sample with an $H\delta$ flux minimum (as a
fraction of the continuum) of $0.16 < f_m <0.24$.
Upper right: We show the same histogram for the lower ($7 < \rm S/N < 10$) signal-to-noise subsample. Stream
candidates from Table 1 are indicated with a heavy curve. Note that the two values at 40 \AA\ and 33 \AA\ should be moved lower (to 23 \AA) based on inspection of the individual spectra. Lower Left: We show the distribution of
logg9 for most of the \citet{xue} sample. Lower Right: We show the lower S/N subsample. Note
the positions of the stream candidates (dark line), which are consistent with the low surface gravity BHB population
in 7/8 cases. Note the clear bimodality between BHBs (low $D_{0.2}$ or low logg9) and BSs
(high $D_{0.2}$ or high logg9) for the full samples, and the absence of bimodality for the lower S/N
samples.
}
\end{figure*}
\section{Statistical Significance of the Moving Group}
Given that we were searching through the data by eye and looking for clusters, it is
important to verify that the clustering of stars is not a chance coincidence. In this section
we will estimate an upper bound for the probability that this clump could be a
random fluctuation in the number of stars in a a small region of angle in the sky,
magnitude, and line-of-sight velocity.
Most of the sky that we searched for clumps was at high Galactic latitude; at low Galactic
latitude the stellar population is different, the density of stars is higher, and the sky
coverage is less uniform. So for statistical purposes we limited our sample to $b>35^\circ$.
There are 2893 stars in our sample with $b>35^\circ$. The seven stars were discovered
in a 19 square degree area of the sky (compared with 8800 square degrees above $b>35^\circ$),
with the requirement that this sky area be rectangular
in Galactic latitude and longitude ($61.8^\circ<l<69.7^\circ$, $45.7^\circ<b<49.2^\circ$). In comparison with the entire
high latitude sample of BHB candidates,
the seven stars in the clump span about 1/46$^{\rm th}$ of the entire longitude range
of the input data, and 1/10$^{\rm th}$ of the latitude range of the data in an equal-area plot.
From Figure 1, we find that the range of velocities in the clump includes $15\%$ of the
sample stars, and the range of magnitudes in the clump is the same as $10\%$ of those in the
high latitude sample.
Appendix A presents a statistical method to determine the significance of a ``clump" of data
points in a $d$-dimensional search space. In this case we have four dimensions: Galactic
longitude, Galactic latitude, radial velocity, and apparent magnitude. The statistical
method measures the probability that random data will produce a clump of seven or more
points within a four-dimensional box of the measured size. By using percentiles, we account
for the non-uniformity of the data in the radial velocity and apparent magnitude dimensions.
We also adjust the statistics, as explained in Appendix A.4.2, for the periodic boundary
condition in Galactic longitude. What is important for the calculation is that each of
the dimensions be independent (separable, in the terms of Appendix A). In our case, the
variables are not quite independent, especially Galactic coordinates. Because the density
of A stars with SDSS spectra varies as a function of position in the sky, the relative percentiles
in $(l,b)$ would be different if a clump of the same areal extent were discovered in
a different part of the sky. We estimate that the local density in the region of the sky
that the clump was found is 3.5 times higher than average. Therefore, following the prescription
of Appendix A4.4, we multiplied the longitude width of the clump by a factor of 3.5.
It was not possible to account for every known peculiarity of the data. For instance, when we
searched for clumps we
divided the sky up into fixed $(l,b)$ boxes, so there are regions around the edge of each
$(l,b)$ box that were not as thoroughly searched. However, it is possible that if nothing was
found with these limits we would have adjusted the box sizes and positions, so this oversight
(which would underestimate the significance of a clump) is probably justified.
Given the percentiles of the 4-D search space listed above, we expect 0.32 stars in a typical
box with these dimensions. Using the algorithm in Appendix A, the number of clumps we should
expect to find in this data that have seven or more stars in a 4-D box of these dimensions
or smaller is 0.96. Since we found one clump, this is very close to random.
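As a cross-check of the quoted numbers, the expected count in the
clump's 4-D box follows directly from the fractions listed above under
the independence assumption; a minimal sketch (using the text's rounded
fractions) is:
\begin{verbatim}
# Expected number of stars in the clump's 4-D box, assuming the
# four dimensions are independent.
n_stars = 2893          # BHB candidates with b > 35 deg
f_l = 3.5 / 46.0        # longitude fraction, inflated by the
                        # factor-3.5 local-density correction
f_b = 1.0 / 10.0        # latitude fraction (equal-area)
f_v = 0.15              # velocity percentile range
f_g = 0.10              # magnitude percentile range
print(n_stars * f_l * f_b * f_v * f_g)   # ~0.33, vs. the quoted 0.32
\end{verbatim}
The small difference from the quoted 0.32 presumably reflects rounding
of the fractions given in the text.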
If we eliminate one
star from the sample that is an outlier in $(l,b)$, so that the area of sky is now
$61.8^\circ<l<68.4^\circ$, $47.4^\circ<b<49.2^\circ$, then we calculate that
the E-value for finding six or more stars in a 4-D box
with these (smaller) dimensions is 0.18, so the probability of finding this moving
group is not more than one in six. Using only these dimensions, the clump of stars is
not a significant detection.
In order to claim a significant detection of the moving group represented by these seven
stars, it is necessary to compare their metallicities with the metallicities of the
background population. Only 30\% of the stars in the sample have metallicities as low as
those in the clump of seven stars. If we extend the statistical calculation to five
dimensions, including metallicity, then the probability of finding seven stars in a region
of this size and shape (or smaller), anywhere in the 5-D parameter space is not more than
1 in 244 (E-value of 0.004), and the probability of finding six stars in a smaller
angular region in the sky (in 5-D) is not more than 1 in 490 (E-value=0.002). These are
highly significant detections.
Since metallicity is critical to the detection of this substructure, we examine the SDSS DR7
metallicity measurements of these stars, and BHB stars in general, in the next two sections. In
section 8, we use the information we have learned in the analysis of this moving group to
re-select similar data from SDSS DR7, and make an even stronger argument for the statistical
significance of our result.
\section{Metallicities of the moving group stars}
All seven of the candidate BHB stars in this moving group had unmeasured metallicities in
SDSS DR6, but metallicities were assigned to these same spectra in the much improved SSPP pipeline
of SDSS DR7 \citep{SSPP1,SSPP2}. Our metallicities come from the DR7,
which became public in October 2008. The SDSS DR7 tabulates many measures
of the metallicities of stars, of which the most commonly used is FEHA, the
``adopted" metallicity, which is derived from a comparison of all methods
used to measure metallicity. In this paper we use instead the metallicity of
\citet{wbg99}, hereafter WBG, which is specifically designed to measure the metallicities
of BHB stars. The WBG metallicities of the seven BHB stars
in the moving group are given in Table 1 and are shown in Figure 4. The mean of the
distribution is [Fe/H]$=-2.46\pm0.14$, and the width is $\sigma=0.4$.
\begin{figure}
\noindent
\centerline{\includegraphics[angle=-90,width=.45\textwidth]{fig4.ps}}
\caption {Metallicity distribution of the moving group. We selected all stars with spectra from SDSS DR7.1 which had
the same photometric properties as the moving group BHB stars ($18.65 < g_0 < 19.15$, $-0.3 < (g-r)_0 < 0.0$,
$0.8 < (u-g)_0 < 1.5$, and split photometrically by likelihood of low surface gravity), with Galactic
latitude $b>45^\circ$. The 298 stars selected are all likely spheroid BHB stars, of which 260 had measured
metallicities. The distribution of metallicities
of the spheroid BHB stars with similar photometric properties (thin lines) is shown along with the metallicities of the
seven BHB stars in the moving group (thick lines). The moving group has a mean metallicity of $-2.5$, while the
spheroid BHBs have a mean metallicity of $-1.69$.
\label{metals}}
\end{figure}
We now show that the metallicities of the moving group stars are not consistent with
being drawn at random from the stellar halo.
We selected all stars in DR7 with the same
photometric constraints as the stars in the moving group, and with $b>45^\circ$ so that the
BHB stars are likely to come from the same component (the spheroid), and not be confused with
disc populations of BHBs. To remove BHB stars in the Sagittarius dwarf tidal
stream from the sample, we also eliminated stars with $\delta<5^\circ$.
There were 298 stars with $18.65 < g_0 < 19.15$, $-0.3 < (g-r)_0 < 0.0$,
$0.8 < (u-g)_0 < 1.5$, and that passed the photometric low surface gravity cut. Of these, 260 had measured
metallicities that are shown in Figure 4. The mean of the distribution is
[Fe/H]$=-1.69\pm0.04$, and the width is $\sigma=0.71$. Note that the error quoted
here is a statistical error. Because very few of the SEGUE calibration stars
are as blue as A stars, the systematic errors at this end are about $\pm 0.3$.
The comparison stars are about 45 kpc from the Sun, and above $b=45^\circ$, so
they come from the distant halo. The blue straggler contaminants are 10 to 28
kpc away, which is also fairly far from the disc at high Galactic latitudes.
A comparison of the stars in the moving group with the similarly selected stars
in the spheroid with a t-test gives a probability of 1 in $10^6$ that the two groups
of stars were selected from the same stellar population. The Mann-Whitney
test (also known as the Wilcoxon rank sum test) gives a p-value of 0.00023, or a
2 in 10,000 probability that the two samples are drawn from the same population.
This leads us to conclude that the moving group is real.
As a further test, we selected the other 8 stars in the lower panel of Figure 1 that have the
same $V_{\mbox{\it gsr}}$ and apparent magnitude as the blue horizontal branch stars in the
moving group, and find that they have a mean metallicity of $-1.87$ with a sigma of 0.86.
The t-test (p=0.948) shows that this distribution is not
distinguishable from the background population. (Performing the Mann-Whitney
test for this sample is problematic since several of the stars are not
part of our selected background distribution.)
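The two-sample comparisons above are standard; a sketch using SciPy
(illustrative only, since the exact test variants used, e.g. the
equal-variance assumption of the t-test, are not specified, and the
background sample is not listed here) is:
\begin{verbatim}
import numpy as np
from scipy import stats

# WBG [Fe/H] of the seven moving-group BHBs (Table 1).
group = np.array([-2.99, -1.97, -2.86, -2.63, -2.10, -2.27, -2.42])

def compare(group, background):
    """Two-sample t-test and Mann-Whitney (Wilcoxon rank sum) test."""
    _, p_t = stats.ttest_ind(group, background)
    _, p_u = stats.mannwhitneyu(group, background,
                                alternative='two-sided')
    return p_t, p_u
\end{verbatim}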
\section{Systematics of DR7 Metallicities of halo BHB stars}
\begin{figure}
\noindent
\centerline{\includegraphics[angle=-90,width=0.45\textwidth]{fig5.ps}}
\caption{Measured metallicities of BHB stars in the Sgr dwarf leading tidal tail, as a function
of S/N. The metallicities of stars with S/N$>10$ are considered fairly reliable. The stars
with $7<$S/N$<10$ have a similar distribution. Stars with $0<$S/N$<7$ have a significantly
higher average metallicity. Therefore, we conclude that the metallicity measurements of
BHB stars with S/N$<7$ are not reliable. Note that the excess of stars at [Fe/H]$_{\rm WBG}=-3.0$
is due to edge effects in the WBG estimation procedure, and does not represent a cluster
of stars with identical metallicity.
}
\end{figure}
In this section we explore the accuracies of the DR7 metallicity measurements of BHB stars
as a function of S/N and in comparison with published metallicities for globular clusters.
We also explore the metallicity of BHB stars in the halo as a function of apparent magnitude.
Since all of our moving group stars have low S/N, we need to test the accuracy of the metallicity
measurements. We do this by selecting BHB stars in the leading tidal tail of the Sgr dwarf galaxy
that are about the same distance from the Sun as the moving group. We selected all of the spectra
from SDSS DR7 that had elodierverr$>0$ (if the radial velocity error when matching to the stellar
templates is less than zero, that indicates that the object is not a star), $0.8<(u-g)_0<1.5$,
$-0.3<(g-r)_0<0.0$, $320^\circ<l<350^\circ$,
$30^\circ<b<70^\circ$, $-100<V_{gsr}<100$ km s$^{-1}$, logg9$<3.0$, and $18.5<g_0<19.6$.
The 94 stars that are likely BHBs in the Sgr dwarf tidal stream were divided into three groups based
on S/N. There were 14 stars with $0.0<$S/N$<7$, 49 stars with $7<$S/N$<10$ (excluding three with
unmeasured metallicities), and 28 with $10<$S/N$<25$. The metallicity distributions of these three
S/N groups are shown in Figure 5.
The highest S/N group ($10<$S/N$<25$) has a mean [Fe/H]$_{\rm WBG}=-1.94$, which is similar to the
group with marginal S/N ($7<$S/N$<10$), which has a mean [Fe/H]$_{\rm WBG}=-2.13$. A Mann-Whitney
test comparing these two samples gives a result of $U=818$, $z=-1.39$, and a p-value of
0.165. These distributions are not significantly different from each other. This gives us some
confidence that the metallicities of the moving group stars, which have S/N measurements in
this marginal range, are usefully measured.
In contrast, the lowest S/N group ($0<$S/N$<7$) has a mean of [Fe/H]$_{\rm WBG}=-1.42$. A
Mann-Whitney test comparing this sample with the highest S/N sample produces $U=275.5$,
$z=-2.11$, and a p-value of 0.0349. This is a statistically significant difference,
and the metallicities of BHB stars with S/N$<7$ measured by this technique are probably not accurate.
To understand the likely systematics in our metallicity determinations, we
selected BHB stars from six globular clusters that had spectroscopic measurements
of BHB stars in SDSS DR7. We selected stars that have the colors of BHB stars,
are within half a degree of the centre of the globular cluster, and have
radial velocities within 20 km s$^{-1}$ of the published value for the cluster.
M2, M3, and M13 have metallicities very close to $-1.6$; M53 has metallicity of
$-1.99$; and the two clusters M92 and NGC 5053 have metallicities very close to
$-2.3$ \citep{harris}. Because there were very few stars with spectroscopy in
each globular cluster, we combined the data from clusters with similar
metallicities before plotting the histograms in Figure 6. All of these
globular clusters have horizontal branches near $g_0=16$ (the magnitude
range for which we have the most complete sample of BHB stars), and thus
the spectra have high $S/N$.
\begin{figure}
\noindent
\centerline{\includegraphics[width=.45\textwidth]{fig6.ps}}
\caption{Metallicity measures of known clusters.
We show the metallicity distribution of SDSS DR7 BHB stars, as
measured by the \citet{wbg99} technique, for BHB stars selected from
clusters of known metallicity.
Uppermost panel: Stars in M2 ([Fe/H] = $-1.62$), M3 ([Fe/H] = $-1.57$), and
M13 ([Fe/H] = $-1.54$).
Second panel: Stars in M53 ([Fe/H] = $-1.99$).
Third Panel: Stars in M92 ([Fe/H] = $-2.28$) and NGC 5053 ([Fe/H] = $-2.29$).
Lowermost Panel: The seven Hercules stream BHBs.
Note that the measured metallicities are compressed toward [Fe/H]=$-1.9$
(higher metallicity stars are measured systematically low, and lower
metallicity stars are measured systematically high). Also note that
the Hercules stream stars appear to be lower metallicity than M92 and
NGC 5053.
}
\end{figure}
Figure 6 shows that the SDSS DR7 WBG metallicities for BHB stars are
compressed toward [Fe/H]=$-1.9$. The measured metallicities
are correlated with the published metallicities of the globular clusters;
however, stars with higher metallicities are measured systematically
too low, and stars with lower metallicities are measured systematically
too high.
We show for comparison the measured metallicities of the seven stream stars.
The measured metallicity is lower than that of M92 and NGC 5053.
Because the validity of our stream detection depends on our ability to separate
the stream from the stellar halo in metallicity, we need to show that the
metallicities in our background population are accurate. We selected all
of the stars with spectra in DR7 that have photometry consistent with being
a BHB star, as explained in \S 2, $b>45^\circ$, and $\delta>5^\circ$. We
further restricted the sample by
insisting that the WBG estimate of $\log g$, as measured in DR7, was less
than 3.75.
In Figure 7, we show the DR7 WBG metallicity as a function of apparent
magnitude. We have spectra for stars with $14.5< g_0 <19.15$, which span
distances of 6 to 50 kpc from the Sun, all at high Galactic latitude.
The metallicity distributions for all of the stars brighter than $g_0<18.5$
are similar, with a mean near [Fe/H]=$-1.9$ and a sigma of 0.45. In the
faintest set of stars, which have apparent magnitudes similar to that of the
newly detected stream, the distribution appears considerably broader, but
with a similar mean. We attribute the increased width to the lower
S/N in many of these spectra. The mean is somewhat lower in the last
panel in Figure 7 than in Figure 4, probably due to a cleaner sample of BHB stars, with
less contamination from BS stars, owing to the additional restriction in
$\log g$.
\begin{figure}
\noindent
\centerline{\includegraphics[width=.45\textwidth]{fig7.ps}}
\caption{Metallicity of the outer halo vs. distance.
We show the metallicity distribution of SDSS DR7 BHB stars in several
apparent magnitude ranges.
The last panel shows the six Hercules stream BHBs (thick lines in lower
left of diagram) that survive the
additional cut in $\log g$. They are inconsistent with being drawn from
the same distribution as the other stars in this panel.
All of the stars have $ugr$ colors consistent
with BHB stars, $b>45^\circ$, $\delta>5^\circ$, and $\log g <3.75$. The
[Fe/H] and $\log g$ measurements are from the WBG \citep{wbg99} techniques,
as implemented in the SSPP of DR7. A Gaussian with mean $-1.9$ and sigma
0.45, normalized to the number of stars in each panel, is shown for
reference. The mean metallicity does not change as a function of distance
from the Sun. The distribution is wider in the lowest panel because the
signal-to-noise is lower here. If one selects only the highest S/N
spectra in this bin, the width is similar to the four other panels.
}
\end{figure}
We show for comparison the six stream stars that have $\log g < 3.75$ in the
last panel of Figure 7. The metallicity distribution of these stars is
inconsistent with the distribution of the other stars in this figure at
the 98\% confidence level, as determined by the Mann-Whitney test.
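A minimal sketch of this comparison follows; the arrays are illustrative
placeholders with the quoted mean and width, not the actual DR7 measurements.
\begin{verbatim}
# Two-sample Mann-Whitney comparison (placeholder [Fe/H] values).
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
halo_feh = rng.normal(-1.9, 0.45, size=500)   # background BHB sample
stream_feh = np.array([-2.6, -2.5, -2.4, -2.3, -2.2, -2.1])  # six stream stars

stat, p = mannwhitneyu(stream_feh, halo_feh, alternative='less')
print(stat, p)   # p < 0.02 corresponds to ~98% confidence of inconsistency
\end{verbatim}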
The value that we find for the metallicity of the outer halo, as measured
from SDSS BHB stars, is on the low side of the distribution of previous
measurements, but higher than the \citet{carollo} metallicity measurement
of [Fe/H]=$-2.2$. (We note, however, that \citet{carollo} show the
distribution of BHB stars in their paper, and the BHB star distributions
peak closer to [Fe/H]=$-2.0$.)
\section{The Moving Group in DR7}
\begin{figure}
\noindent
\centerline{\includegraphics[angle=-90,width=0.45\textwidth]{fig8.ps}}
\caption{Projection of the moving group in equatorial coordinates, as detected in SDSS DR7.
The small black dots show the positions of A stars with spectra that are not from SEGUE plates
as selected from DR7. Only twelve of these stars (large cyan symbols) have surface
gravities, metallicities,
apparent magnitudes, and line-of-sight velocities that are consistent with the moving
group, as detected in Figure 9. Of these twelve, four are very tightly clustered and a fifth
is only a few degrees away and in line with the other four.
}
\end{figure}
\begin{figure}
\noindent
\centerline{\includegraphics[angle=-90, width=0.45\textwidth]{fig9.ps}}
\caption{Detection of the moving group in SDSS DR7. This figure is similar
to the upper right and lower left panels of Figure 1, with the additional restriction that all the stars in these panels have $\rm [Fe/H]_{\rm WBG} < -1.95$. The significance of the group over background is enhanced with this
metallicity cut.
}
\end{figure}
The moving group was originally discovered by searching through data from SDSS DR6.
In this section we
show the moving group as selected from the now public SDSS DR7 database,
using the knowledge of metallicity and surface gravity determination at
low S/N gained in the course of this work.
We selected from SDSS DR7 all of the spectra with elodierverr $>0$ (which selects
stellar spectra), magnitude $15<g_0<23$, the colors of an A star
[$0.8<(u-g)_0<1.5, -0.3<(g-r)_0<0.0$], and a flag that indicates the spectrum
is not from a SEGUE plate. We did not select stars from SEGUE plates
since most of them have not been searched for moving groups of A stars, and because
their non-uniform sky coverage, different selection criteria, and different exposure
lengths make it difficult to use them in a statistical calculation.
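A minimal sketch of this selection is given below; the column names are
hypothetical stand-ins for the corresponding DR7/CAS fields, and the cuts
are those quoted above.
\begin{verbatim}
import numpy as np

def select_a_stars(spec):
    # 'spec': dict of NumPy arrays keyed by hypothetical column names
    keep = spec['elodieRVErr'] > 0                       # stellar spectra
    keep &= (spec['g0'] > 15) & (spec['g0'] < 23)        # magnitude window
    keep &= (spec['ug0'] > 0.8) & (spec['ug0'] < 1.5)    # (u-g)_0 A-star cut
    keep &= (spec['gr0'] > -0.3) & (spec['gr0'] < 0.0)   # (g-r)_0 A-star cut
    keep &= ~spec['segue_plate']                         # non-SEGUE plates only
    return {key: col[keep] for key, col in spec.items()}
\end{verbatim}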
We show the distribution of SDSS DR7 A stars in equatorial coordinates in Figure 8.
We then selected the low surface gravity (logg9 $ <3.0$) stars with $50^\circ<l<80^\circ$
and $40^\circ<b<55^\circ$,
to match the region of the sky presented in Figure 1, and metallicity [Fe/H]$_{\rm WBG}<-1.95$,
since we know we are looking for a low metallicity moving group. Figure 9 shows plots like
those used in the moving group discovery, but using data from this low metallicity DR7
dataset. From the upper panel, we identify a clump of stars with $-35<V_{gsr}<0.0$ km s$^{-1}$
and $18.8<g_0<18.9$. The lower panel shows the positions in Galactic coordinates of the
same objects from the top panel, with filled circles showing the six objects that are clumped
in line-of-sight velocity and apparent magnitude. Four of these objects are very tightly
aligned in angular position on the sky.
To get a sense for how common a moving group with these characteristics is, we selected from
our original DR7 dataset all of the stars with surface gravity, metallicity, line-of-sight
velocity, and apparent magnitude that are similar to the stars in the clump. In the entire
northern sky portion of the SDSS, there are twelve stars with these characteristics. Four of
the stars are in a tight line on the sky, and a fifth is only a few degrees away from the clump
along the same line. Because this fifth star is a candidate member of the moving group, and
not included in the original list of seven, we have added it to the bottom of the list of
candidate members in Table 1. The four highly probable members of the moving group,
which were selected from both the DR6 and DR7 datasets, are marked in Table 1 with a footnote.
The four highly likely moving group members are a subset of the original seven, and have
velocity and metallicity characteristics that are nearly identical to our previous results. The
average $V_{gsr}$ is $-10 \pm 3$ km s$^{-1}$, with a sigma of $6$ km s$^{-1}$. The
average metallicity is [Fe/H]$_{\rm WBG}=-2.4 \pm 0.2$ with a sigma of 0.39.
Following a similar procedure to that used in Section 5, we now calculate the probability
that this group of four stars is a random coincidence. There are 6118 stars with $b>35^\circ$
in the A star sample depicted in Figure 8. Of these stars, 3335 have $-9<{\rm logg9}<3.0$
(54.5\%), 1171 have $-9<$ [Fe/H]$_{\rm WBG}<-1.95$ (28.9\%), 224 have $18.8<g_0<18.9$
(3.7\%), and 487 have $-23<V_{gsr}<0$ km s$^{-1}$ (8.1\%). The four stars in the clump have
sky positions within $63.72^\circ<l<66.62^\circ$ (0.8\% of longitude range) and
$47.37^\circ<b<48.92^\circ$ (4.2\% of area-corrected latitude range). Because the density
of the stars in Figure 8 is not uniform, we need to use a multiplier on the fraction of stars
in the longitude range, since that is fractionally the smallest dimension. In the
3.0 square degree area that contains the four stars, there are 8 stars in the original sample
of 6118. So, in the region of the clump there are 2.7 stars per square degree, while there
are $6118/8796=0.69$ stars per square degree in the whole sky region, a factor of 3.9. Using
the procedure outlined in Appendix A, we calculate that we would expect 0.0038 stars in a 6-D
region that instead has four stars. The expected number of clumps of four stars is
0.0238. The p-value is therefore less than 0.024 ($p<E=0.0238$). Since $p<0.05$, this is a
significant detection.
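The arithmetic of this estimate can be summarized in a few lines; the
fractions are those quoted above, and the expectation follows the
procedure of Appendix A.
\begin{verbatim}
n_stars = 6118
fractions = [0.545,  # -9 < logg9 < 3.0
             0.289,  # -9 < [Fe/H]_WBG < -1.95
             0.037,  # 18.8 < g_0 < 18.9
             0.081,  # -23 < V_gsr < 0 km/s
             0.008,  # longitude range
             0.042]  # area-corrected latitude range
density_multiplier = 3.9   # local over-density from Figure 8

expected = n_stars * density_multiplier
for f in fractions:
    expected *= f
print(expected)   # ~0.0038 expected stars in the 6-D cell containing four
\end{verbatim}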
Note that this calculation is extremely conservative. First, the calculation assumed that we
searched all 6118 stars for clumps, when in fact we only searched the ones that had low
surface gravity, as determined by the photometric indices. Second, the four stars
are not randomly distributed within the $(l,b)$ box; they are tightly constrained to a line,
as one would expect for a tidally disrupted globular cluster. Had our statistical analysis
allowed a non-axis-aligned region, the computed statistical significance would have been
stronger. And third, when we calculated
the density of stars in the region containing the moving group we included the moving group
stars (half of the sample of eight) in the density calculation. This artificially inflates our
local density multiplier. To estimate the effect of just this third factor, we counted
the number of A stars in a slightly larger region of the sky with $60^\circ<l<66.63^\circ$
and $47.37^\circ<b<48.92^\circ$ and found 16 stars, of which 4 are members of the moving group.
This gives us a local density of $12/6.87=1.75$ stars per square degree, which is a factor
of 2.5 times the average star density. Using this multiplier, we expect to find 0.0025
objects where we actually find four, and the expected frequency with which a clump of four
stars in a 6-D region of this size would be found in a dataset of this size is at most 0.0063.
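This corrected multiplier can be sketched as follows, assuming the usual
$\cos b$ area correction evaluated at the midpoint latitude:
\begin{verbatim}
import numpy as np

n_field = 16 - 4                   # A stars in the box, group members excluded
dl, b1, b2 = 66.63 - 60.0, 47.37, 48.92
area = dl * (b2 - b1) * np.cos(np.radians(0.5 * (b1 + b2)))   # ~6.87 deg^2
local_density = n_field / area                                # ~1.75 deg^-2
multiplier = local_density / (6118 / 8796)                    # ~2.5
print(area, local_density, multiplier)
\end{verbatim}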
\section{Estimated luminosity of the progenitor}
We estimate the properties of a progenitor of this moving group, making the assumption
that it was a globular cluster, and that we have detected the
majority of the BHB stars from what was the core of the star cluster. We expect the
progenitor was a globular cluster because the stars are very well aligned in the sky, have
a velocity dispersion consistent with instrumental errors, and because very few stars are
identified. If the progenitor
was a dwarf galaxy, so that it also has younger or more metal-rich components, or if
we have found only a knot of increased stellar density along a longer tidal debris stream,
then the inferred size of the progenitor is larger.
We detected 4-8 BHB stars in the moving group. In the color-magnitude range in the vicinity
of the moving group, we have spectra of 50 of the 73 candidate BHBs (68\%). Therefore,
we expect that there are $\sim 10$ BHBs in the moving group. \citet{1997A&AS..121..499B} found
five BHB stars in the globular cluster Pal 13 ([Fe/H]=$-1.74$, $M_V=-3.74$, Harris 1996).
\citet{ynetal00} find at least five BHB stars in the globular cluster Pal 5
([Fe/H]=$-1.41$, $M_V=-5.17$, Harris 1996). The moving group is consistent with a
progenitor that is like one of the smaller globular clusters of the Milky Way, with
a total integrated luminosity of about $M_V\sim -4$.
To understand why this moving group could not be identified from photometry alone, we estimated
the number of red giant and subgiant branch stars in the moving group by comparison with
NGC 2419. Note that at 50 kpc, turnoff stars are at $g_0 \sim 22.7$, which is close to the
limiting magnitude of the SDSS, so deeper photometry is required to clearly detect
significant density enhancements expected from turnoff and main sequence stars
fainter than these limits. We selected SDSS DR6 stars within about nine arcminutes
of the centre of NGC 2419 (this does not include stars near the very centre of the GC,
since individual stars are not resolved there). BHB stars were selected with $20.2<g_0<20.6$
and $-0.4<(g-r)_0<0.0$. Giant stars were selected within a parallelogram with vertices
$[(g-r)_0,g_0]=[0.5,20], [0.9, 18], [0.7, 20], [1.0, 18]$. Subgiant stars were selected
within a triangular area with vertices $[(g-r)_0, g_0]= [0.5, 21], [0.1, 23], [0.6, 23]$.
We found 101 BHBs, 98 red giants, and 302 subgiants over background in the region of sky
that was searched. Therefore, we expect about the same number of red giant stars as BHBs
in our moving group, and about three times as many subgiants. We shifted the magnitudes
of the color-magnitude boxes by 1.41 magnitudes (the difference in distance modulus between
our moving group and NGC 2419) and counted the number of stars in
$60^\circ<l<70^\circ$ and $45^\circ<b<50^\circ$. The moving group is expected to have
10 of 73 candidate BHB stars (1 sigma fluctuation), 10 of 2742 candidate giant stars (0.2 sigma),
and 30 of 13,182 candidate subgiant stars (0.3 sigma). The background star counts are much
too high for us to find this moving group in photometry alone.
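The quoted significances are simple Poisson fluctuations of the background
counts; a minimal check:
\begin{verbatim}
import numpy as np

counts = {'BHB': (10, 73), 'giant': (10, 2742), 'subgiant': (30, 13182)}
for label, (group, background) in counts.items():
    print(label, group / np.sqrt(background))
# BHB ~1.2 sigma, giant ~0.2 sigma, subgiant ~0.3 sigma
\end{verbatim}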
\section{Conclusions}
We have detected a moving group of at least four BHB stars in the corner (``toe'') of the
Hercules constellation spilling into Corona Borealis. These stars are coincident in angular position
[$(l,b)=(65^\circ,48^\circ$)], apparent
magnitude ($g_0=18.9$), line-of-sight velocity ($V_{\mbox{\it gsr}}=-10$ km s$^{-1}$,
$\sigma_{V_{\mbox{\it gsr}}}<10$ km s$^{-1}$, $\langle V_r\rangle=-157$ km s$^{-1}$), and
metallicity ([Fe/H]=$-2.4$). We expect that the progenitor of this moving group was a low
metallicity globular cluster, with a luminosity like that of one of the smaller globular
clusters in the Milky Way halo.
We show that useful surface gravities and metallicities
are measured for BHB stars with S/N$>7$ in SDSS DR7. The mean
metallicity of BHB stars in the outer halo is similar to that of M53, which has a published
metallicity of [Fe/H]=$-2.0$. The metallicity does not appear to change with distance from
the Sun ($6<R<55$ kpc). Our measurement of the spheroid metallicity is slightly higher
than claimed by \citet{carollo} and somewhat lower than earlier studies
of outer halo stars.
The Hercules moving group is one of many tidally disrupted stellar associations
expected to comprise the spheroid of the
Milky Way and could not have been identified from photometry alone; more
complete spectroscopic surveys are required to identify the component
spheroid moving groups, and determine the merger history of our galaxy.
We present a statistical technique that allows us to estimate the significance of clumps
discovered in multidimensional data.
\section*{Acknowledgments}
This project was funded by the National Science Foundation under grant number AST 06-07618.
T.C.B. and Y.S.L. acknowledge partial support from grants PHY 02-16783 and PHY
08-22648, Physics Frontier Centers/JINA: Joint Institute for Nuclear Astrophysics,
awarded by the National Science Foundation. P.R.F. acknowledges support through the Marie
Curie Research Training Network ELSA (European Leadership in Space Astrometry) under
contract MRTN-CT-2006-033481.
Many thanks to Ron Wilhelm, who answered our questions about potential RR Lyrae stars and
stellar metallicities.
Funding for the SDSS and SDSS-II has been provided by the
Alfred P. Sloan Foundation, the Participating Institutions, the
National Science Foundation, the U.S. Department of Energy, the
National Aeronautics and Space Administration, the Japanese
Monbukagakusho, the Max Planck Society, and the Higher Education Funding
Council for England. The SDSS Web Site is http://www.sdss.org/.
The SDSS is managed by the Astrophysical Research Consortium for
the Participating Institutions. The Participating Institutions are the
American Museum of Natural History, Astrophysical Institute Potsdam,
University of Basel, University of Cambridge, Case Western Reserve
University, University of Chicago, Drexel University, Fermilab, the
Institute for Advanced Study, the Japan Participation Group, Johns
Hopkins University, the Joint Institute for Nuclear Astrophysics, the
Kavli Institute for Particle Astrophysics and Cosmology, the Korean
Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos
National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the
Max-Planck-Institute for Astrophysics (MPA), New Mexico State
University, Ohio State University, University of Pittsburgh, University
of Portsmouth, Princeton University, the United States Naval
Observatory, and the University of Washington.
\section{Introduction}\label{intro}
Photolithography is a key process in the production of integrated
circuits. It is the process by which circuit patterns are transferred
onto silicon wafers. A review of this manufacturing technology is given in
\cite{Schell03}. The main step in photolithography is the
creation of a circuit image on the photoresist coating which sits on
the silicon layer that is to be patterned. The image is formed using
ultra-violet (UV) light which is diffracted by a mask, and refracted by a
system of lenses. The mask simply consists of cut-outs, and lets
light through the holes. The parts of the photoresist that are exposed to
the UV light can be removed, leaving openings to the layer to be
patterned. The next stage is etching, which removes material in the
layer that is unprotected by the photoresist. Once etching is done,
the photoresist can be removed, and the etched away ``channels'' may
be filled. The entire process is illustrated schematically in Figure 1.
\begin{figure}[t]
\hspace{2in}{\includegraphics[width=3.5in]{process.png}}
\caption{The photolithographic process. Ultraviolet light, diffracted
by a mask, forms an image on the photoresist. The exposed portion
of the photoresist is removed, leaving openings. Etching removes
parts of the layer to be patterned. After etching, the photoresist
is removed.}
\label{process}
\end{figure}
The problem we address in this work is the inverse problem of
determining what mask is needed in order to remove a desired shape
in the photoresist. The difficulty of producing a desired shape comes
from the fact that the UV light is diffracted at the mask. Moreover,
the chemicals in the photoresist react nonlinearly to UV exposure:
only portions of the photoresist that have been exposed to a certain
level of intensity are removed in the bleaching process.
The nature of the present work is analytical. Our goal is to formulate
mathematically well-posed problems for photolithography. The methods
we use to prove well-posedness are constructive and may serve as a
foundation for a computational method.
Our investigation into photolithography is inspired by the work of
Cobb \cite{Cobb98}, who was the first to approach this problem from the
point of view of optimal design which utilizes a physically-based
model. This general approach was further developed by introducing
a level set method in \cite{Tuzel09}. A different computational
approach which models the mask as a pixelated binary image can be found in
\cite{Poona07}.
The plan of the paper is as follows. In the first and preliminary
section, Section~\ref{model}, we develop the most basic model for
removal of the exposed photoresist. We describe the inverse problem
to be solved. This is followed by a discussion of the approximate problem
whose properties we intend to investigate in this work.
Section~\ref{prelsec} contains mathematical preliminaries needed
for our work. We introduce the basic notation and recall
various results which will be useful for our analysis.
In particular, in Subsection~\ref{geomsubsec} we discuss the
geometry of masks or circuits and how to measure the distance between
two of them. In Section~\ref{dirsec} we discuss the properties of the operator which
maps the mask into the circuit. Section~\ref{approxsec} provides an
analysis of the variational approach to the mask optimization problem, and we prove a
convergence result for it, Theorem~\ref{finalteo}, in the framework of
$\Gamma$-convergence.
\section{Description of the inverse problem}\label{model}
This section is separated into three subsections. First, we review some
basic facts about Fourier transforms and prove a result about approximation
of a Gaussian. We follow this with a discussion of
the optics involved and a model for photolithography. In the final subsection
we describe the inverse problem and its approximation.
\subsection{Fourier transform and approximation of Gaussians}
We first set some notation and describe a few preliminary results.
For every $x\in\mathbb{R}^2$, we shall set $x=(x_1,x_2)$, where $x_1$ and $x_2\in\mathbb{R}$. For every $x\in\mathbb{R}^2$ and $r>0$, we shall denote by $B_r(x)$ the open ball
in $\mathbb{R}^2$ centered at $x$ of radius $r$.
Usually we shall write $B_r$ instead of $B_r(0)$.
We recall that, for any set $E\subset \mathbb{R}^2$, we denote by $\chi_E$
its characteristic function, and for any $r>0$,
$B_r(E)=\bigcup_{x\in E}B_r(x)$.
For any $f\in\mathcal{S}'(\mathbb{R}^2)$, the space of tempered distributions,
we denote by $\hat{f}$ its \emph{Fourier transform}, which, if
$f\in L^1(\mathbb{R}^2)$, may be written as
$$\hat{f}(\xi)=\int_{\mathbb{R}^2}f(x)\mathrm{e}^{-\mathrm{i} \xi\cdot x}\mathrm{d} x,\quad \xi\in\mathbb{R}^2.$$
We recall that $f(x)=(2\pi)^{-2}\hat{\hat{f}}(-x)$, that is, when also $\hat{f}\in L^1(\mathbb{R}^2)$,
$$f(x)=\frac{1}{(2\pi)^2}\int_{\mathbb{R}^2}\hat{f}(\xi)\mathrm{e}^{\mathrm{i} \xi\cdot x}\mathrm{d} \xi,\quad x\in\mathbb{R}^2.$$
If $f$ is a radial function, that is $f(x)=\phi(|x|)$ for any $x\in \mathbb{R}^2$, then
$$\hat{f}(\xi)=2\pi \mathcal{H}_0(\phi)(|\xi|),\quad \xi\in\mathbb{R}^2,$$
where
$$\mathcal{H}_0(\phi)(s)=\int_0^{+\infty}rJ_0(sr)\phi(r)\mathrm{d} r,\quad s\geq 0,$$
is the Hankel transform of order $0$, $J_0$ being the Bessel function of order $0$, see for instance \cite{Col}.
We denote the Gaussian distribution by
$G(x)=(2\pi)^{-1}\mathrm{e} ^{-|x|^2/2}$, $x\in \mathbb{R}^2$, and let us note that
$\hat{G}(\xi)=\mathrm{e} ^{-|\xi|^2/2}$, $\xi\in\mathbb{R}^2$. Moreover, $\|G\|_{L^1(\mathbb{R}^2)}=1$. Furthermore if $\delta_0$ denotes the Dirac delta centered at $0$, we have $\widehat{\delta_0}\equiv 1$, therefore $(2\pi)^{-2}\hat{1}=\delta_0$.
For any function $f$ defined on $\mathbb{R}^2$ and any positive constant $s$, we denote $f_s(x)=s^{-2}f(x/s)$,
$x\in \mathbb{R}^2$.
We note that $\|f_s\|_{L^1(\mathbb{R}^2)}=\|f\|_{L^1(\mathbb{R}^2)}$ and $\widehat{f_s}(\xi)=\hat{f}(s\xi)$, $\xi\in\mathbb{R}^2$.
We conclude these preliminaries with the following integrability result for the Fourier transform and its applications.
\begin{teo}\label{intFourierteo}
There exists an absolute constant $C$ such that the following estimate holds
$$\|\hat{f}\|_{L^1(\mathbb{R}^2)}\leq C\|f\|_{W^{2,1}(\mathbb{R}^2)}.$$
\end{teo}
\proof{.} This result is contained in Theorem A in \cite{Kol} and it is based on previous analysis done in \cite{Pel-Woj}.\hfill$\square$
\bigskip
We recall that a more detailed analysis of conditions under which integrability of the Fourier transform holds may be found in \cite{Taib}. However, the previous result is simple to use and suffices for our purposes, in particular for proving the following lemma.
\begin{lem}\label{approxlemma}
For any $\tilde{\delta}>0$ there exist a constant $s_0$, $0<s_0\leq 1$, and a radial function $\hat{T}\in C_0^{\infty}(\mathbb{R}^2)$ such that $\hat{T}\equiv 1$ on $B_{s_0}$ and, if we call
$T=(2\pi)^{-2}\hat{\hat{T}}$, then $T\in W^{2,1}(\mathbb{R}^2)$ and
$$\|T-G\|_{W^{1,1}(\mathbb{R}^2)}\leq \tilde{\delta}.$$
\end{lem}
\proof{.}
We sketch the proof of this result. Let us consider the following cut-off function
$\phi\in C^{\infty}(\mathbb{R})$ such that $\phi$ is nonincreasing, $\phi\equiv 1$ on $(-\infty,0]$ and $\phi\equiv 0$ on $[1,+\infty)$.
We define a function $\hat{\tilde{T}}$ as follows
$$\hat{\tilde{T}}(x)=\phi(|x|-1)+(1-\phi(|x|-1))\widehat{G_{s_0}}(x)\phi(|x|-b),\quad x\in\mathbb{R}^2,$$
for suitable constants $s_0$, $0<s_0\leq 1$ and $b\geq 2$.
We call $\tilde{T}=(2\pi)^{-2}\hat{\hat{\tilde{T}}}$.
Then lengthy but straightforward computations, with the aid of Theorem~\ref{intFourierteo}, allow us to prove that for some $s_0$ small enough and for some $b=s_0^{-1}b_0$, with $b_0$ large enough, we have
$$\|\tilde{T} -G_{s_0}\|_{W^{1,1}(\mathbb{R}^2)}\leq \tilde{\delta}.$$
Then, let $\hat{T}(x)=\hat{\tilde{T}}(x/s_0)$, $x\in\mathbb{R}^2$, so that
$T=\tilde{T}_{1/s_0}$, or equivalently $\tilde{T}=T_{s_0}$.
Therefore
$$\|T_{s_0}-G_{s_0}\|_{W^{1,1}(\mathbb{R}^2)}\leq \tilde{\delta}.$$
By a simple rescaling argument we have that $\hat{T}$ satisfies the required properties.
Furthermore, by this construction, we may choose $\hat{T}$ such that it is radially nonincreasing, $\hat{T}\equiv 1$ on $B_{s_0}$ and it decays to zero in a suitable smooth, exponential way.\hfill$\square$
\subsection{A model of image formation}
We are now in the position to describe the model we shall use.
The current industry standard for modeling the optics is based on
Kirchhoff approximation. Under this approximation,
the light source at the mask is on where the mask is open, and off
otherwise (see Figure 1). Propagation through the lenses can be
calculated using Fourier optics. It is further assumed that the image
plane, in this case the plane of the photoresist, is at the focal
distance of the optical system. If there were no
diffraction, a perfect image of the mask would be formed on the image
plane. Diffraction, together with partial coherence of the light
source, acts to distort the formed image.
The mask, which as mentioned above consists of cut-outs, is represented as a
binary function, i.e., it is a
characteristic function of the cut-outs. Suppose that $D$ represents
the cut-outs, then the mask is given by
\[
m(x) = \chi_D(x).
\]
The image is the light intensity on the image plane. This is given by
\cite{Pati97}
\begin{equation}\label{Hopkins}
I(x) = \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} m(\xi)K(x-\xi) J(\xi-\eta) K(x-\eta) m(\eta) \mathrm{d}\xi \mathrm{d}\eta,\quad x\in\mathbb{R}^2.
\end{equation}
In the above expression the kernel $K(\cdot)$ is called the \emph{coherent point spread function}
and describes the optical system. For an optical system with a
circular aperture, once the wavenumber of the light
used, $k>0$, has
been chosen, the kernel depends on a single parameter called the Numerical
Aperture, $\text{NA}$. Notice that the wavelength
is $\lambda=2\pi/k$.
Let us recall that the so-called Jinc function is defined as
\[
\textrm{Jinc}(x)=\frac{J_1(|x|)}{2\pi|x|},\quad x\in\mathbb{R}^2,
\]
where $J_1$ is the Bessel function of order 1. We notice that in the
Fourier space, see for instance \cite[page~14]{Erd},
$$\widehat{\textrm{Jinc}}(\xi)=\chi_{B_1}(\xi),\quad \xi\in\mathbb{R}^2.$$
If we denote by $s=(k\text{NA})^{-1}$, then
the kernel
is usually modeled as follows
\[
K(x)=\textrm{Jinc}_s(x)=\frac{k\text{NA}}{2\pi}\frac{J_1(k \text{NA}|x|)}{|x|},\quad x\in\mathbb{R}^2,
\]
therefore
$$\hat{K}(\xi)=\chi_{B_1}(s\xi)=\chi_{B_{1/s}}(\xi)=\chi_{B_{k\text{NA}}}(\xi),\quad \xi\in\mathbb{R}^2.$$
If $\text{NA}$ goes to $+\infty$, that is $s\to 0^+$, then $\hat{K}$ converges pointwise to $1$, thus $K$ approximates in a suitable sense the Dirac delta.
For technical reasons, we shall consider a slightly different coherent point spread function $K$. Let us fix a positive constant $\tilde{\delta}$, to be chosen later.
We shall replace the characteristic function $\chi_{B_1}$, the Fourier transform of the Jinc function, with the function $\hat{T}(s_0\xi)$, $\xi\in\mathbb{R}^2$, with $\hat{T}$ and $s_0$ as in Lemma~\ref{approxlemma}.
Therefore $\hat{T}(s_0\cdot)$ is a radial function that is still identically equal to $1$ on $B_1$, is still compactly supported, is nonincreasing with respect to the radial variable, and decays to zero in a smooth, exponential way.
Its Fourier transform is $T_{s_0}$ and we shall assume that
\begin{equation}\label{Kdef}
K(x)=(T_{s_0})_s(x)=T_{ss_0}(x),\quad x\in\mathbb{R}^2,
\end{equation}
where again $s=(k\text{NA})^{-1}$. Also in this model, if $\text{NA}$ goes to $+\infty$, that is $s\to 0^+$, then $\hat{K}$ converges pointwise to $1$, thus $K$ approximates in a suitable sense the Dirac delta.
The function $J(\cdot)$ is called the \emph{mutual
intensity function}. If the illumination is fully coherent, $J\equiv 1$.
In practice, illumination is never fully coherent and is parametrized
by a \emph{coherency coefficient} $\sigma$. A typical model for $J$ is
\begin{equation}\label{Jdef}
J(x)=2\frac{J_1(k \sigma \text{NA}|x|)}{k\sigma\text{NA}|x|}=4\pi\, \textrm{Jinc}(k\sigma\text{NA}|x|),\quad x\in\mathbb{R}^2.
\end{equation}
Thus,
$$\frac{1}{(2\pi)^2}\hat{J}(\xi)=\frac{1}{\pi(k\sigma\text{NA})^2}\chi_{B_{k\sigma\text{NA}}}(\xi),\quad\xi\in\mathbb{R}^2,$$
that, as $\sigma\to 0^+$, converges, in a suitable sense, to the Dirac delta. Therefore
full coherence is achieved for $\sigma\to 0^+$. In fact, if $\sigma\to 0^+$, $J$ converges to $1$ uniformly on any compact subset of $\mathbb{R}^2$.
The equation (\ref{Hopkins}) is often referred to as the Hopkins areal
intensity representation. As will become apparent from the analysis developed in this paper, the value of $s$ is related to the scale of detail that the manufacturing of the mask allows, and thus in turn to the scale of detail of the desired circuit. Therefore,
we typically consider $k\text{NA}\gg 1$, that is $s\ll 1$, and
$k\sigma\text{NA}\ll 1$.
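To fix ideas, in the fully coherent limit $J\equiv 1$ the double integral
\eqref{Hopkins} factorizes as $I=(m\ast K)^2$ (recall that $K$ is
real-valued here), so the image reduces to a single convolution. The
following sketch evaluates this limit for a slit mask with the Jinc
kernel; the grid and the value of $k\text{NA}$ are purely illustrative.
\begin{verbatim}
import numpy as np
from scipy.special import j1
from scipy.signal import fftconvolve

N, dx = 256, 0.05                    # illustrative grid size and spacing
x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x)
R = np.hypot(X, Y)

kNA = 4.0                            # k*NA, so s = 1/(k*NA)
with np.errstate(invalid='ignore', divide='ignore'):
    K = np.where(R > 0, kNA * j1(kNA * R) / (2 * np.pi * R),
                 kNA**2 / (4 * np.pi))   # J_1(r)/r -> 1/2 at r = 0

m = ((np.abs(X) < 2.0) & (np.abs(Y) < 0.5)).astype(float)   # slit mask
amplitude = fftconvolve(m, K, mode='same') * dx**2          # m * K
I = amplitude**2                     # Hopkins intensity when J == 1
\end{verbatim}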
\subsection{The inverse problem and its approximation}
The photoresist material responds to the intensity of the image. When
the intensity at the photoresist exceeds a certain threshold, the
photoresist is considered exposed and can be removed. Therefore, the
exposed pattern, given a mask $m(x)$, is
\begin{equation}\label{exposed}
\Omega = \{ x\in\mathbb{R}^2\,:\ I(x) > h \},
\end{equation}
where $h$ is the exposure threshold. Clearly, $\Omega$ depends on the
mask function $m(x)$, which we recall is given by the characteristic
function of $D$ representing the cut-outs, that is $\Omega=\Omega(D)$. In photolithography, we have a
desired exposed pattern which we wish to achieve. The inverse problem
is to find a mask that achieves this desired exposed pattern.
Mathematically, this cannot, in general, be done. Therefore, the
inverse problem must be posed as an optimal design problem.
Suppose the desired pattern is given by $\Omega_0$. We pose the
minimization problem
\begin{equation}\label{design_prob}
\displaystyle{\min_{D\in{\mathcal A}}} \; d(\Omega(D), \Omega_0).
\end{equation}
The distance function $d(\cdot,\cdot)$ will be discussed in detail
below. The admissible set $\mathcal A$ is our search space, and needs to
be defined carefully as well.
Instead of solving (\ref{design_prob}), we pose a variational problem for
a function $u$ (instead of the mask $D$). We will show below that this problem is well-posed and that
as the approximation parameter is set to zero, we recover the solution of (\ref{design_prob}) under
a perimeter penalization.
Instead of dealing with the characteristic function $\chi_D(x)$ which represents
the mask, we will work with a phase-field function $u$ which takes on values of 0 and 1 with
smooth transitions. Thus, the intensity in (\ref{exposed}) is calculated with $u$ instead of $m=\chi_D$ in \eqref{Hopkins},
so $I$ is a function of $u$. At this point, we will not be precise about the space of
functions to which $u$ belongs. To force $u$ to take on values of mostly 0 and 1, we
introduce the Modica-Mortola energy
\[
P_\varepsilon(u) =
\displaystyle{\frac{1}{\varepsilon}\int W(u)+\varepsilon\int|\nabla u|^2},
\]
where $\displaystyle{W(t)=9t^2(t-1)^2}$ is a double-well potential.
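As a quick one-dimensional illustration (with an illustrative
discretization): for $W(t)=9t^2(t-1)^2$ the optimal transition profile is
logistic and carries unit cost per interface, which is why this energy
approximates a perimeter.
\begin{verbatim}
import numpy as np

eps = 0.05
x = np.linspace(-1.0, 1.0, 20001)
u = 1.0 / (1.0 + np.exp(-3.0 * x / eps))  # solves eps*u' = sqrt(W(u))
W = 9.0 * u**2 * (u - 1.0)**2
du = np.gradient(u, x)
P_eps = np.trapz(W / eps + eps * du**2, x)
print(P_eps)   # ~1.0: unit cost for the single interface
\end{verbatim}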
We will regularize the problem of minimizing the distance between the target pattern
and the exposed region by this energy.
Then we relax the hard threshold in defining the exposed
region $\Omega$ in (\ref{exposed}).
Let $\phi(t)$ be a $C^{\infty}$ nondecreasing approximate Heaviside function,
with $\phi(t)=0$ for $t \leq -1/2$ and $\phi(t)=1$ for $t\geq 1/2$. The function
$$\Phi_{\eta}(u)=\phi\left(\frac{I(u)-h}{\eta} \right) $$
will be 1 where the intensity $I \geq h+\eta/2$. A sigmoidal threshold function is
employed in the computational work in \cite{Poona07}.
Now we consider the distance function between $\Omega$ and $\Omega_0$ in (\ref{design_prob}). Let
\begin{equation}
d=d(\Omega,\Omega_0)=\int|\chi_{\Omega}-\chi_{\Omega_0}|+\left|P(\Omega)-P(\Omega_0)\right|,
\end{equation}
where $\chi_\Omega$ is the characteristic function of the set $\Omega$ and
$P(\Omega)$ is the perimeter of the region $\Omega$. To approximate this distance
function, we replace it by
\[
d_\eta(u,\Omega_0) = \int | \Phi_\eta(u) - \chi_{\Omega_0} | +
\left| \int | \nabla (\Phi_\eta(u))| - P(\Omega_0) \right|.
\]
The characteristic function of $\Omega$ is replaced by the smooth threshold function while
its perimeter is replaced by the TV-norm of the function.
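A discrete sketch of this relaxed distance is given below; the smoothstep
$\phi$ is a $C^1$ stand-in for the $C^{\infty}$ Heaviside approximation,
and the target perimeter $P(\Omega_0)$ is assumed to be precomputed.
\begin{verbatim}
import numpy as np

def phi(t):
    # C^1 smoothstep standing in for the smooth Heaviside phi
    u = np.clip(t + 0.5, 0.0, 1.0)
    return u * u * (3.0 - 2.0 * u)

def d_eta(I, chi0, per0, h, eta, dx):
    # I: intensity on a grid with spacing dx; chi0: target indicator;
    # per0: P(Omega_0); h, eta: threshold and relaxation parameters.
    Phi = phi((I - h) / eta)                 # relaxed exposed region
    gy, gx = np.gradient(Phi, dx)
    tv = np.sum(np.hypot(gx, gy)) * dx**2    # TV-norm of Phi
    mismatch = np.sum(np.abs(Phi - chi0)) * dx**2
    return mismatch + abs(tv - per0)
\end{verbatim}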
The approximate problem we shall solve is
\[
F_{\varepsilon}(u)= d_{\eta(\varepsilon)}(u,\Omega_0) + b P_\varepsilon(u) \rightarrow \min .
\]
The remainder of the paper is an analytical study of this minimization problem. We will show
that it is well-posed, and that in the limit $\varepsilon \rightarrow 0^+$, we recover the
solution of the original problem (\ref{design_prob}) under a perimeter penalization.
\section{Mathematical preliminaries}\label{prelsec}
\newcommand{\mathcal{D}}{\mathcal{D}}
By $\mathcal{H}^1$ we denote the $1$-dimensional Hausdorff measure and by
$\mathcal{L}^2$ we denote the $2$-dimensio\-nal Lebesgue measure. We recall that,
if $\gamma\subset\mathbb{R}^2$ is a smooth curve,
then $\mathcal{H}^1$ restricted to
$\gamma$ coincides with its arclength.
For any Borel $E\subset\mathbb{R}^2$ we denote $|E|=\mathcal{L}^2(E)$.
Let $\mathcal{D}$ be a bounded open set contained in $\mathbb{R}^2$, with boundary $\partial \mathcal{D}$.
We say that $\mathcal{D}$ has a \emph{Lipschitz boundary}
if for every $x=(x_1,x_2)\in\partial\mathcal{D}$ there exist a Lipschitz
function $\varphi:\mathbb{R}\to\mathbb{R}$ and a positive constant $r$
such that for any $y\in B_r(x)$ we have, up to a rigid transformation,
$$
y=(y_1,y_2)\in\mathcal{D}\quad \text{if and only if}\quad y_2<\varphi(y_1).
$$
We note that $\mathcal{D}$ has a finite number of connected components, whereas
$\partial \mathcal{D}$ is formed by a finite number of rectifiable Jordan curves, therefore
$\mathcal{H}^1(\partial\mathcal{D})=\mathrm{length}(\partial\mathcal{D})<+\infty$.
We recall some basic notation and properties
of functions of bounded variation and sets of finite perimeter. For a more comprehensive treatment of
these subjects see, for instance, \cite{Amb e Fus e Pal, Eva e Gar, Giu}.
Given a bounded open set $\mathcal{D}\subset \mathbb{R}^2$,
we denote by $BV(\mathcal{D})$ the Banach space of \emph{functions of bounded
variation}. We recall that $u\in BV(\mathcal{D})$ if and only if
$u\in L^1(\mathcal{D})$ and its distributional derivative $Du$ is a bounded
vector
measure. We endow $BV(\mathcal{D})$ with the standard norm as follows. Given
$u\in BV(\mathcal{D})$, we denote by $|Du|$ the total variation of its
distributional derivative and
we set $\|u\|_{BV(\mathcal{D})}=\|u\|_{L^1(\mathcal{D})}+|Du|(\mathcal{D})$. We shall call
$P(u,\mathcal{D})=|Du|(\mathcal{D})$.
We recall that whenever $u\in W^{1,1}(\mathcal{D})$,
then $u\in BV(\mathcal{D})$ and $|Du|(\mathcal{D})=\int_{\mathcal{D}}|\nabla u|$, therefore
$\|u\|_{BV(\mathcal{D})}=\|u\|_{L^1(\mathcal{D})}+\|\nabla u\|_{L^1(\mathcal{D})}=\|u\|_{W^{1,1}(\mathcal{D})}$.
We say that a sequence of $BV(\mathcal{D})$ functions $\{u_h\}_{h=1}^{\infty}$
\emph{weakly}$^*$ \emph{converges} in $BV(\mathcal{D})$ to $u\in BV(\mathcal{D})$ if and only if
$u_h$ converges to $u$ in $L^1(\mathcal{D})$ and $Du_h$
weakly$^*$ converges to $Du$ in $\mathcal{D}$, that is
\begin{equation}\label{weakstarconv}
\lim_{h}\int_{\mathcal{D}}v \mathrm{d} Du_h=
\int_{\mathcal{D}}v \mathrm{d} Du\quad\text{for any }v\in C_0(\mathcal{D}).
\end{equation}
By Proposition~3.13 in \cite{Amb e Fus e Pal}, we have that if
a sequence of $BV(\mathcal{D})$ functions $\{u_h\}_{h=1}^{\infty}$ is bounded in $BV(\mathcal{D})$ and converges to $u$ in $L^1(\mathcal{D})$, then
$u\in BV(\mathcal{D})$ and $u_h$ converges to $u$
weakly$^*$ in $BV(\mathcal{D})$.
We say that a sequence of $BV(\mathcal{D})$ functions $\{u_h\}_{h=1}^{\infty}$
\emph{strictly converges} in $BV(\mathcal{D})$ to $u\in BV(\mathcal{D})$ if and only if
$u_h$ converges to $u$ in $L^1(\mathcal{D})$ and $|Du_h|(\mathcal{D})$
converges to $|Du|(\mathcal{D})$. Indeed,
$$d_{st}(u,v)=\int_{\mathcal{D}}|u-v|+\big||Du|(\mathcal{D})-|Dv|(\mathcal{D}) \big|$$
is a distance on $BV(\mathcal{D})$ inducing the strict convergence.
We also note that strict convergence implies weak$^*$ convergence.
Let $\mathcal{D}$ be a bounded open set with Lipschitz boundary.
A sequence of $BV(\mathcal{D})$ functions $\{u_h\}_{h=1}^{\infty}$
such that $\sup_h\|u_h\|_{BV(\mathcal{D})}<+\infty$ admits a subsequence
converging weakly$^*$ in $BV(\mathcal{D})$ to a function $u\in BV(\mathcal{D})$, see for instance Theorem~3.23 in \cite{Amb e Fus e Pal}.
As a corollary, we infer that for any $C>0$ the set
$\{u\in BV(\mathcal{D})\,:\ \|u\|_{BV(\mathcal{D})}\leq C\}$
is a compact subset of $L^1(\mathcal{D})$.
For any fixed constant $R>0$, with a slight abuse of notation, we shall identify
$L^1(B_R)$ with the set $\{u\in L^1(\mathbb{R}^2)\,:\ u=0\text{ a.e. outside }B_R\}$.
Let $E$ be a bounded
Borel set contained in $B_R\subset \mathbb{R}^2$. We shall denote by $\chi_E$ its characteristic function. We notice that
$E$ is compactly
contained in $B_{R+1}$, which we shall denote by $E\Subset B_{R+1}$.
We say that $E$ is a \emph{set of finite perimeter} if
$\chi_E$ belongs to $BV(B_{R+1})$ and we call the number
$P(E)=|D\chi_E|(B_{R+1})$ its \emph{perimeter}.
Analogously, for any $u\in L^1(B_R)\cap BV(B_{R+1})$, we shall denote
$P(u,B_{R+1})=|Du|(B_{R+1})$. Obviously, if
$u=\chi_E$, then $P(u,B_{R+1})=P(E)$.
Let us further remark that the intersection of two sets of finite perimeter is
still a set of finite perimeter. Moreover,
whenever $E$ is open and $\mathcal{H}^1(\partial E)$ is finite, then $E$ is a set of finite
perimeter, see for instance \cite[Section~5.11, Theorem~1]{Eva e Gar}.
Therefore a bounded open set
$\mathcal{D}$ with Lipschitz boundary
is a set of finite perimeter and its perimeter $P(\mathcal{D})$ coincides with
$\mathcal{H}^1(\partial\mathcal{D})$.
\subsection{$\Gamma$-convergence approximation of the perimeter functional}\label{ModMorsubsec}
Let us introduce the following, slightly different, version of a $\Gamma$-convergence result due to Modica and Mortola, \cite{Mod e Mor}. We shall follow the notation and proofs contained in
\cite{Bra}. We begin by setting some notation. For the definition and properties of $\Gamma$-convergence we refer to \cite{DaM}.
For any bounded open set $\mathcal{D}\subset\mathbb{R}^2$, with a slight abuse of notation,
we identify $W^{1,p}_0(\mathcal{D})$, $1<p<+\infty$, with the subset of $W^{1,p}(\mathbb{R}^2)$ functions $u$ such that $u$ restricted
to $\mathcal{D}$ belongs to $W^{1,p}_0(\mathcal{D})$ and $u$ is equal to $0$ almost everywhere outside $\mathcal{D}$.
Let us assume that for some positive constant $R$ we have $\mathcal{D}\subset B_R$.
We recall that any function in $L^1(\mathcal{D})$ is extended to zero outside $\mathcal{D}$
and the same procedure is used for $L^1(B_R)$.
Therefore, with this slight abuse of notation, $L^1(\mathcal{D})\subset L^1(B_R)$.
Throughout the paper, for any $p$, $1\leq p\leq +\infty$, we shall denote its conjugate exponent by $p'$, that is $p^{-1}+(p')^{-1}=1$.
\begin{teo}\label{Mod-Morteo}
Let $\mathcal{D}\subset B_R\subset\mathbb{R}^2$ be a bounded open set with Lipschitz boundary. Let us also assume that
$\mathcal{D}$ is convex.
Let $1<p<+\infty$ and $W:\mathbb{R}\to[0,+\infty)$ be a continuous function such that
$W(t)=0$ if and only if $t\in\{0,1\}$. Let $c_p=(\int_0^1(W(s))^{1/p'}\mathrm{d}s)^{-1}$.
For any $\varepsilon>0$ we define the functional
$P_{\varepsilon}:L^1(\mathbb{R}^2)\to [0,+\infty]$ as follows
\begin{equation}\label{modmordef}
P_{\varepsilon}(u)=\left\{
\begin{array}{ll}
\displaystyle{\frac{c_p}{p'\varepsilon}\int_{\mathcal{D}}W(u)+\frac{c_p\varepsilon^{p-1}}{p}\int_{\mathcal{D}}|\nabla u|^p}&\text{if }u\in W^{1,p}_0(\mathcal{D}),\\
\vphantom{\displaystyle{\int}}+\infty&\text{otherwise}.
\end{array}
\right.
\end{equation}
Let $P:L^1(\mathbb{R}^2)\to [0,+\infty]$ be such that
\begin{equation}\label{Pdef}
P(u)=
\left\{
\begin{array}{ll}
\vphantom{\displaystyle{\int}}P(u,B_{R+1})
&\text{if }u\in BV(B_{R+1}),\ u\in\{0,1\}\text{ a.e.},\\ &\quad\text{and }
u=0\text{ a.e. outside }\mathcal{D},\\
\vphantom{\displaystyle{\int}}+\infty&\text{otherwise}.
\end{array}
\right.
\end{equation}
Then $P=\Gamma\textrm{-}\!\lim_{\varepsilon\to 0^+}P_{\varepsilon}$ with respect to the $L^1(\mathbb{R}^2)$ norm.
\end{teo}
\begin{oss}
We observe that $P(u)=P(E)$ if $u=\chi_E$ where $E$ is a set of finite perimeter contained in $\overline{\mathcal{D}}$ and $P(u)=+\infty$ otherwise.
Furthermore, we note that the result does not change if in the definition of $P_{\varepsilon}$ we set $P_{\varepsilon}(u)=+\infty$ whenever $u$ does not satisfy the
constraint
\begin{equation}
0\leq u\leq 1\text{ a.e. in }\mathcal{D}.
\end{equation}
\end{oss}
\proof{.} We sketch the proof following that of Theorem~4.13 in \cite{Bra}. In fact,
the only difference with respect to that theorem is that we assume $\mathcal{D}$ convex and that we take $W^{1,p}_0(\mathcal{D})$
instead of $W^{1,p}(\mathcal{D})$ in the definition
of $P_{\varepsilon}$.
By Proposition~4.3 in \cite{Bra},
we obtain that $P(u)\leq \Gamma\textrm{-}\!\liminf_{\varepsilon\to 0^+} P_{\varepsilon}(u)$ for any $u\in L^1(\mathbb{R}^2)$.
In order to obtain the $\Gamma\textrm{-}\!\limsup$ inequality, we follow the procedure described in
Section~4.2 of \cite{Bra}.
It would be enough to construct
$\mathcal{M}\subset L^1(\mathbb{R}^2)$ such that the following two conditions are satisfied.
First, we require that, for any $u\in L^1(\mathbb{R}^2)$ such that $P(u)<+\infty$, there exists a sequence
$\{u_j\}_{j=1}^{\infty}$ such that
$u_j\in \mathcal{M}$, for any $j\in\mathbb{N}$, $u_j\to u$ in $L^1(\mathbb{R}^2)$ as $j\to\infty$, and $P(u)=\lim_jP(u_j)$.
Second, for any $u\in\mathcal{M}$, $\Gamma\textrm{-}\!\limsup_{\varepsilon\to 0^+} P_{\varepsilon}(u)\leq P(u)$.
We choose $\mathcal{M}=\{u=\chi_E\,:\ E\Subset\mathcal{D},\ E\text{ of class }C^{\infty}\}$.
The second property follows by Proposition~4.10 in \cite{Bra}. As far as the first property is concerned, this can be
obtained by following the proof of Theorem~1.24 in \cite{Giu}. That theorem states that any bounded set of finite perimeter
$E$ can be approximated by a sequence of $C^{\infty}$ sets $\{E_j\}_{j=1}^{\infty}$ such that, as $j\to \infty$,
$\int_{\mathbb{R}^2}|\chi_{E_j}-\chi_E|\to 0$ and $P(E_j)\to P(E)$. If we assume that $E\subset \overline{\mathcal{D}}$,
and that $\mathcal{D}$ is convex, by choosing in the proof of Theorem~1.24 in \cite{Giu} a value of $t$
satisfying
$1/2<t<1$, we obtain that the sets $E_j$ are also compactly contained in $\mathcal{D}$, for any $j\in\mathbb{N}$.\hfill$\square$
\bigskip
Also the following result, due to Modica, \cite{Mod}, will be useful.
\begin{prop}\label{modcompprop}
For any $C>0$, let us take $1<p<+\infty$ and any $\varepsilon>0$, and let us define
$$A_C=\{u\in L^1(\mathbb{R}^2)\,:\ 0\leq u\leq 1\text{ a.e. and }P_{\varepsilon}(u)\leq C\}.$$
Then $A_C$ is precompact in $L^1(\mathbb{R}^2)$.
\end{prop}
\proof{.} We repeat, for the reader's convenience, the arguments developed in \cite{Mod}.
Clearly $A_C$ is a bounded subset of $L^1(\mathcal{D})$.
Let $\{u_n\}_{n=1}^{\infty}$ be a sequence in $A_C$. We need to prove that there exists a subsequence converging in $L^1(\mathcal{D})$.
For any $t$, $0\leq t\leq 1$, let
$\phi(t)=\int_0^t(W(s))^{1/p'}\mathrm{d}s$. For any $n\in\mathbb{N}$,
we define $v_n=\phi(u_n)$ and we observe that
$0\leq v_n\leq \phi(1)$ almost everywhere. Therefore,
the functions $v_n$, $n\in\mathbb{N}$, are
uniformly bounded in $L^{\infty}(\mathcal{D})$ and, consequently, in $L^1(\mathcal{D})$. Furthermore, since $\phi$ is a $C^1$ function,
with bounded $C^1$ norm, then $Dv_n=\phi'(u_n)Du_n=W^{1/p'}(u_n)Du_n$.
Therefore, by Young's inequality,
$$\int_{\mathcal{D}}|Dv_n|=\int_{\mathcal{D}}|W^{1/p'}(u_n)||Du_n|\leq P_{\varepsilon}(u_n)/c_p.$$
We infer that there exists a subsequence $\{v_{n_k}\}_{k=1}^{\infty}$ converging, as $k\to\infty$, to a function $v_0$ in $L^1(\mathcal{D})$
and almost everywhere.
Let $\psi$ be the inverse function of $\phi$ and let $u_0=\psi(v_0)$. We observe that
$\psi$ is bounded and uniformly continuous on $[0,\phi(1)]$, hence we conclude that,
as $k\to\infty$, $u_{n_k}$ converges to $u_0$ in $L^1(\mathcal{D})$.\hfill$\square$
\begin{oss}\label{compactnessoss}
With the same proof, we can show the following. Let us consider
any family $\{u_{\varepsilon}\}_{0<\varepsilon\leq \varepsilon_0}$ such that,
for some positive constant $C$ and
for any $\varepsilon$, $0<\varepsilon\leq\varepsilon_0$,
we have $0\leq u_{\varepsilon}\leq 1$ almost everywhere and
$P_{\varepsilon}(u_{\varepsilon})\leq C$.
Then $\{u_{\varepsilon}\}_{0<\varepsilon\leq \varepsilon_0}$
is precompact in $L^1(\mathbb{R}^2)$.
\end{oss}
\subsection{Convolutions}
We recall that, for any two functions $f$ and $g$ defined on $\mathbb{R}^2$,
we define the \emph{convolution} of $f$ and $g$, $f\ast g$, as follows
$$(f\ast g)(x)=\int_{\mathbb{R}^2}f(x-y)g(y)\mathrm{d} y=\int_{\mathbb{R}^2}f(y)g(x-y)\mathrm{d} y,\quad x\in\mathbb{R}^2,$$
whenever this is well-defined.
The following classical properties of convolutions will be used. First, convolution is commutative. Second, as a consequence of Young's inequality, we have
the following result about integrability and regularity of convolutions.
\begin{prop}\label{convprop}
Let $1\leq r,\ p,\ q\leq +\infty$ be such that $1+\frac{1}{r}=\frac{1}{p}+\frac{1}{q}$, and let $n=0,1,2,\ldots$.
Let $1\leq q<+\infty$,
let $f\in L^{q}(\mathbb{R}^2)$ and let $g\in W^{n,p}(\mathbb{R}^2)$. Then
$h=f\ast g\in W^{n,r}(\mathbb{R}^2)$ and there exists a constant $C$, depending on $n$, $p$, $q$ and $r$ only, such that
$$\|h\|_{W^{n,r}(\mathbb{R}^2)}\leq C\|f\|_{L^q(\mathbb{R}^2)}\|g\|_{W^{n,p}(\mathbb{R}^2)}.$$
Let $q=+\infty$ and let $f\in L^{\infty}(\mathbb{R}^2)$, with compact support. If
$g\in W^{n,1}(\mathbb{R}^2)$, then
$h=f\ast g\in W^{n,\infty}(\mathbb{R}^2)$ and there exists a constant $C$, depending on $n$ only, such that
$$\|h\|_{W^{n,\infty}(\mathbb{R}^2)}\leq C\|f\|_{L^{\infty}(\mathbb{R}^2)}\|g\|_{W^{n,1}(\mathbb{R}^2)}.$$
If $f\in L^1(\mathbb{R}^2)$ and $g\in L^{\infty}(\mathbb{R}^2)$, then
$h=f\ast g\in L^{\infty}(\mathbb{R}^2)$ and it holds
$\|h\|_{L^{\infty}(\mathbb{R}^2)}\leq \|f\|_{L^1(\mathbb{R}^2)}\|g\|_{L^{\infty}(\mathbb{R}^2)}$. Furthermore, if $g$ is uniformly continuous and
$\omega_g$ denotes its modulus of continuity, then $h$ is also uniformly continuous and
$$\omega_h\leq \|f\|_{L^1(\mathbb{R}^2)}\omega_g.$$
Finally, let $f\in L^1(\mathbb{R}^2)$ and let $g\in C^{n,\alpha}(\mathbb{R}^2)$,
for some $\alpha$, $0<\alpha\leq 1$. Then $h\in C^{n,\alpha}(\mathbb{R}^2)$
and there exists a constant $C$, depending on $n$ and $\alpha$ only, such that
$$\|h\|_{C^{n,\alpha}(\mathbb{R}^2)}\leq C\|f\|_{L^1(\mathbb{R}^2)}\|g\|_{C^{n,\alpha}(\mathbb{R}^2)}.$$
\end{prop}
\subsection{The geometry of masks and circuits}\label{geomsubsec}
In this subsection we investigate the following two questions, namely
what are reasonable assumptions on the geometry of the mask $D$ and how to measure the distance between
the constructed circuit $\Omega$ and the desired one $\Omega_0$. We begin with the following definition. Throughout this subsection, most proofs will be omitted and left to the reader.
For given positive constants $r$ and $L$, we say that a bounded open set
$\Omega\subset \mathbb{R}^2$
is \emph{Lipschitz} or $C^{0,1}$ \emph{with constants} $r$ \emph{and} $L$ if
for every $x\in\partial\Omega$ there exists a Lipschitz
function $\varphi:\mathbb{R}\to\mathbb{R}$, with Lipschitz constant bounded by $L$, such that for any $y\in B_r(x)$, and up to a rigid transformation,
\begin{equation}\label{Lipdomain}
y=(y_1,y_2)\in\Omega\quad \text{if and only if}\quad y_2<\varphi(y_1).
\end{equation}
Without loss of generality, we may always assume that $x=(0,0)$ and $\varphi(0)=0$.
We shall always denote by $e_1$ and $e_2$ the vectors of the canonical bases.
Clearly the orientation of the canonical bases may
vary depending on $x\in\partial\Omega$.
We shall also use the following notation.
There exist positive constants $\delta_1\leq 1/2$, $\delta_2\leq \delta_1$ and $m_1\leq 1$, all of them depending on $L$ only, such that the following holds.
For any $x\in \partial\Omega$ and for any $\delta>0$, let $M_{\delta}(x)=
\{y\,:\ |y_1|\leq\delta r,\ y_2=\varphi(y_1)\}$ and
$N_{\delta}(x)=\{y\,:\ |y_1|\leq\delta_1r,\ \varphi(y_1)-\delta r\leq y_2\leq\varphi(y_1)+\delta r\}$.
Then we assume that, for any $\delta$, $0<\delta\leq \delta_2$, the following properties hold. First,
$N_{\delta}(x)\subset B_{r/2}(x)$ (hence
$M_{\delta_1}(x)\subset B_{r/2}(x)$ as well).
Clearly $N_{\delta}(x)$ is contained in
$\overline{B}_{\delta r}(\partial\Omega)$, and we assume that $N_{\delta}(x)$
contains $\overline{B}_{m_1\delta r}(M_{\delta_1/2}(x))$ and that for any
$y\in \{y\,:\ |y_1|\leq\delta_1r/2,\ y_2=\varphi(y_1)\pm\delta r\}$, $y\not\in\overline{B}_{m_1\delta r}(\partial\Omega)$.
For any integer $k=1,2,\ldots$, any $\alpha$, $0< \alpha\leq 1$, and any positive constants $r$ and $L$,
we say that a bounded open set $\Omega\subset\mathbb{R}^2$
is $C^{k,\alpha}$ \emph{with constants} $r$ \emph{and} $L$ if
for every $x\in\partial\Omega$ there exists a $C^{k,\alpha}$
function $\varphi:\mathbb{R}\to\mathbb{R}$, with $C^{k,\alpha}$ norm bounded by $L$, such that for any
$y\in B_r(x)$, and up to a rigid transformation, \eqref{Lipdomain} holds.
Without loss of generality, we may always assume that $x=(0,0)$ and $\varphi(0)=0$.
Let us fix three positive constants $r$, $L$ and $R$.
Let $\mathcal{A}^{0,1}(r,L,R)$ be the class of all bounded open sets, contained in $B_R\subset \mathbb{R}^2$,
which are Lipschitz with constants $r$ and $L$.
For any integer $k=1,2,\ldots$ and any $\alpha$, $0< \alpha\leq 1$, we denote with
$\mathcal{A}^{k,\alpha}(r,L,R)$ the class of all bounded open sets, contained in $B_R\subset \mathbb{R}^2$,
which are $C^{k,\alpha}$ with constants $r$ and $L$.
Since we shall identify open sets $D$ with their characteristic functions $\chi_{D}$,
if $\mathcal{A}=\mathcal{A}^{0,1}(r,L,R)$,
(or $\mathcal{A}=\mathcal{A}^{k,\alpha}(r,L,R)$, respectively)
then, with a slight
abuse of notation,
$\mathcal{A}$ will also denote the subset of functions $u\in L^1(B_R)$ such that
$u=\chi_{D}$ for some $D\in \mathcal{A}$. Moreover,
we shall denote
$$A=\{u\in L^1(B_R)\,:\ 0\leq u\leq 1\text{ a.e. in }B_R\}$$
and, for any $\gamma>0$,
\begin{equation}
\mathcal{A}_{\gamma}=\{u\in A\,:\
\|u-\chi_{D}\|_{L^1(B_R)}\leq \gamma\text{ for some }D\in\mathcal{A}\}.
\end{equation}
Let us assume that
$\Omega_1$ and $\Omega_2$ belong to $\mathcal{A}^{0,1}(r,L,R)$.
There are several ways to define the distance between these two sets.
We shall describe four of them and study their relationships.
We let
\begin{align}
&d_1=d_1(\Omega_1,\Omega_2)=d_{H}(\overline{\Omega}_1,\overline{\Omega}_2);\\
&\tilde{d}_1=\tilde{d}_1(\Omega_1,\Omega_2)=d_{H}(\partial\Omega_1,\partial\Omega_2);\\
&d_2=d_2(\Omega_1,\Omega_2)=|\Omega_1\Delta\Omega_2|=\|\chi_{\Omega_1}-\chi_{\Omega_2}\|_{L^1(B_{R+1})};\\
&d_3=d_3(\Omega_1,\Omega_2)=d_2+|P(\Omega_1)-P(\Omega_2)|
=d_{st}(\chi_{\Omega_1},\chi_{\Omega_2}).\label{d3def}
\end{align}
Here $d_H$ denotes the Hausdorff distance, whereas we recall that
$P(\Omega)$ denotes the perimeter of $\Omega$ in $B_{R+1}$ and $d_{st}$ is the distance inducing strict convergence in $BV(B_{R+1})$.
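On pixelated sets these distances can be estimated as in the following
sketch, assuming boolean masks $E_1$, $E_2$ on a grid with spacing
\texttt{dx} and a crude pixel-count perimeter.
\begin{verbatim}
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial.distance import directed_hausdorff

def boundary(E):
    return E & ~binary_erosion(E)

def hausdorff(A, B, dx):
    a, b = np.argwhere(A), np.argwhere(B)
    return dx * max(directed_hausdorff(a, b)[0],
                    directed_hausdorff(b, a)[0])

def distances(E1, E2, dx):
    d1  = hausdorff(E1, E2, dx)                      # on the closures
    d1t = hausdorff(boundary(E1), boundary(E2), dx)  # on the boundaries
    d2  = np.sum(E1 ^ E2) * dx**2                    # symmetric difference
    per = lambda E: np.sum(boundary(E)) * dx         # crude perimeter
    d3  = d2 + abs(per(E1) - per(E2))
    return d1, d1t, d2, d3
\end{verbatim}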
First of all, we observe that all of these are distances. We now investigate their relationships.
We begin with the first two, $d_1$ and $\tilde{d}_1$, and we notice that
\begin{equation}
\text{if }d_1\leq r/4,\text{ then }d_1\leq \tilde{d}_1.
\end{equation}
There exists a constant $c$, $0<c\leq 1$, depending on $L$ only, such that
$$d_1\geq c \min\{r,\tilde{d}_1\}.$$
Therefore,
$$
\text{if }\tilde{d}_1\leq r,\text{ then }\tilde{d}_1\leq C d_1,
$$
where $C=1/c$. Furthermore, if $d_1\leq (c/2) r$, then $\tilde{d}_1$ must be less than or equal to $r$, so
$$
\text{if }d_1\leq (c/2)r,\text{ then }\tilde{d}_1\leq C d_1.
$$
Moreover, we can find a constant $c_1$, $0<c_1\leq 1$, depending on $L$ only,
such that
$$
\text{if }\tilde{d}_1\leq c_1r,\text{ then }d_1\leq \tilde{d}_1\leq C d_1.
$$
We conclude that we can find a constant $c_1$, $0<c_1\leq 1$,
and a constant $C\geq 1$, both depending on $L$ only, such that
\begin{equation}\label{est1}
\text{if either }d_1\leq c_1r\text{ or }\tilde{d}_1\leq c_1r,\text{ then }d_1\leq \tilde{d}_1\leq C d_1.
\end{equation}
Since $d_1$ and $\tilde{d}_1$ are bounded by $2R$, we also have
\begin{equation}\label{est2}
\text{if both }d_1\geq c_1r\text{ and }\tilde{d}_1\geq c_1r,\text{ then }d_1\leq \frac{2R}{c_1r}\tilde{d}_1
\text{ and }\tilde{d}_1 \leq \frac{2R}{c_1r} d_1.
\end{equation}
We finally observe that the estimates \eqref{est1} and \eqref{est2} are essentially optimal.
Before comparing $d_1$ (or $\tilde{d}_1$) with $d_2$ and $d_3$, let us make the following remark on
the lengths of $\partial\Omega_1$ and $\partial\Omega_2$.
If $\Omega$ is an open set which is Lipschitz with constants $r$ and $L$, then
for any integer $n\geq 0$, we have
\begin{equation}\label{length1}
\mathcal{H}^1(\partial\Omega\cap(\overline{B}_{(n+1)r}\backslash B_{nr}))\leq C(L)r(n+1).
\end{equation}
Here, a simple computation shows that we may choose
$C(L)=48 \sqrt{1+L^2}$.
Therefore, if we assume that $\Omega\subset B_R$ and $R\geq 10r$,
we may conclude that
\begin{equation}\label{length}
P(\Omega)\leq C_1(L)R^2/r,
\end{equation}
where $C_1(L)=\frac{1}{2}\left(\frac{11}{9}\right)^2C(L)$.
Moreover, there exist two constants $c_2$, $0< c_2\leq c_1$, and $C_1>0$, depending on $L$ only, such that we have
\begin{equation}\label{neigh}
|\overline{B}_{d}(\partial\Omega)|\leq C_1\mathrm{length}(\partial\Omega)d\text{ for any }d\leq c_2r.
\end{equation}
Since
$$
\text{if }\tilde{d}_1\leq c_2r,\text{ then } d_2\leq \min\{|\overline{B}_{\tilde{d}_1}(\partial\Omega_1)|,|\overline{B}_{\tilde{d}_1}(\partial\Omega_2)| \},
$$
we obtain that
\begin{equation}\label{d2vsdtilde1}
\text{if }\tilde{d}_1\leq c_2r,\text{ then } d_2\leq C_1\min\{\mathrm{length}(\partial\Omega_1),\mathrm{length}(\partial\Omega_2)\}\tilde{d}_1.
\end{equation}
If $\tilde{d}_1\geq c_2r$, then
$d_2\leq \pi R^2\leq\frac{\pi R^2}{c_2r}\tilde{d}_1$. By \eqref{length},
we may conclude that
\begin{equation}\label{d2vsdtilde1bis}
d_2\leq \frac{C_2R^2}{r}\tilde{d}_1.
\end{equation}
Here $C_2$ depends on $L$ only. Moreover, up to changing the constants $c_2$, $C_1$ and $C_2$, \eqref{d2vsdtilde1} and
\eqref{d2vsdtilde1bis} still hold if we replace $\tilde{d}_1$ with $d_1$.
On the other hand, there exists a constant $c_3$, $0<c_3\leq \pi$, depending on $L$ only,
such that
$$d_2\geq c_3\min\{r^2,d_1^2\}.$$
We infer that either if $d_1\leq r$ or if $d_2\leq (c_3/2) r^2$, then
$d_1\leq C_3d_2^{1/2}$, where $C_3=1/c_3^{1/2}$. If $d_2\geq (c_3/2)r^2$,
then $d_1\leq 2R\leq \frac{4C_3^2}{r^2}Rd_2$
or, better,
$d_1\leq \frac{2\sqrt{2}C_3}{r}Rd_2^{1/2}$.
Summarizing, we have
\begin{equation}
\text{if }d_2\leq (c_3/2)r^2,\text{ then }d_1\leq C_3d_2^{1/2}
\end{equation}
and,
finally,
\begin{equation}
\text{if }d_2\geq (c_3/2)r^2,\text{ then }d_1\leq \frac{2\sqrt{2}C_3}{r}Rd_2^{1/2}.
\end{equation}
Clearly, up to suitably changing the constants $c_3$ and $C_3$, the last two estimates still hold if we
replace $d_1$ with $\tilde{d}_1$. We also remark that, as before, the estimates relating $d_1$, $\tilde{d}_1$ and $d_2$ are essentially optimal.
We have obtained that $d_1$, $\tilde{d}_1$ and $d_2$ are topologically equivalent distances.
Concerning $d_2$ and $d_3$, we obviously have $d_2\leq d_3$; however, the two distances are
not topologically equivalent. In fact, we can find $\Omega$ and $\Omega_i$, $i\in\mathbb{N}$, open sets belonging to $\mathcal{A}^{0,1}(r,L,R)$, such that
$d_2(\Omega,\Omega_i)$ goes to zero as $i\to\infty$, whereas
$d_3(\Omega,\Omega_i)\geq c>0$ for any $i\in\mathbb{N}$. Therefore $d_3$ induces
a strictly finer topology than the one induced by $d_2$.
The assumption that the mask is a bounded open set which is Lipschitz with
given constants $r$ and $L$ is reasonable from the manufacturing point of view as well as
from the mathematical point of view, by the
following compactness result.
\begin{prop}\label{compactprop}
The set $\mathcal{A}^{0,1}(r,L,R)$
\textnormal{(}respectively $\mathcal{A}^{k,\alpha}(r,L,R)$, $k=1,2,\ldots$,
$0<\alpha\leq 1$\textnormal{)}
is compact with respect to the distance $d_1$.
\end{prop}
We remark that the same result holds with respect to the distances $\tilde{d}_1$ and
$d_2$. Furthermore, we obtain as a corollary that the set $\mathcal{A}_{\gamma}$
is closed with respect to the $L^1$ norm, for any $\gamma>0$.
The previous example shows that
compactness fails with respect to the distance $d_3$, at least for the Lipschitz case. On the other hand,
if $\Omega_1$ and $\Omega_2$ belong to $\mathcal{A}^{1,\alpha}(r,L,R)$,
with $0<\alpha<1$, then, following
Lemma~2.1 in \cite{Ron99}, we can show that
\begin{equation}\label{d2vsd3}
|P(\Omega_1)-P(\Omega_2)|\leq C_4(\tilde{d}_1(\Omega_1,\Omega_2))^{\alpha/(2\alpha+2)},
\end{equation}
where $C_4$ depends on $r$, $L$, $R$ and $\alpha$ only.
We may conclude that in the $C^{k,\alpha}$ case, $k=1,2,\ldots$,
$0<\alpha\leq 1$, $d_3$ is topologically equivalent to the other three distances and that
Proposition~\ref{compactprop} holds also with respect to the distance $d_3$.
It is worthwhile to observe that, under some circumstances, the estimate
\eqref{d2vsd3} can be extended to the piecewise $C^{1,\alpha}$ case.
For example, typically we may assume that the desired circuit $\Omega_0$ belongs
to $\mathcal{A}^{0,1}(r,L,R)$.
Moreover, we assume that the boundary of $\Omega_0$ is composed of a finite number of closed segments
$I_i$, $i=1,\ldots,n$, which are pairwise internally disjoint and
whose lengths are greater than or equal to $2r$. Therefore, $\Omega_0$ is actually
a piecewise $C^{1,\alpha}$ open set.
We shall show in Section~\ref{dirsec} that, under suitable assumptions on the mask $D$,
the corresponding constructed
circuit $\Omega$ belongs to $\mathcal{A}^{1,\alpha}(r_1,L_1,\tilde{R})$, for some suitable
positive constants $r_1\leq r$, $L_1\geq L$, $\tilde{R}\geq R$ and $\alpha$, $0<\alpha<1$.
Then we can find positive constants $c_4$, $0<c_4\leq 1$, $C_5$ and $C_6$, depending on
$r_1$, $L_1$, $\tilde{R}$ and $\alpha$ only,
such that if $\tilde{d}_1(\Omega_0,\Omega)\leq c_4r_1$, then
we can subdivide $\partial \Omega$ into smooth curves $J_i$, $i=1,\ldots,n$,
which are pairwise internally disjoint, such that for any $i=1,\ldots,n$ we have
$$d_H(J_i,I_i)\leq C_5\tilde{d}_1(\Omega_0,\Omega)$$
and
$$\mathrm{length}(I_i)-2C_5\tilde{d}_1(\Omega_0,\Omega)\leq
\mathrm{length}(J_i)\leq \mathrm{length}(I_i)+C_6(\tilde{d}_1(\Omega_0,\Omega))^{\alpha/(2\alpha+2)}.$$
Therefore,
$$-2nC_5\tilde{d}_1(\Omega_0,\Omega)\leq P(\Omega)-P(\Omega_0)\leq
nC_6(\tilde{d}_1(\Omega_0,\Omega))^{\alpha/(2\alpha+2)}.$$
By these arguments it might seem that we may choose
to measure the distance between the desired circuit $\Omega_0$ and the reconstructed one $\Omega$ by using any of these distances. However, there are several reasons to prefer the distance $d_3$, which we actually choose. In fact,
it is easier to compute than $d_1$ and $\tilde{d}_1$, it can be extended in a natural way from characteristic functions to any $BV$ function by using
$d_{st}$, and it should provide a better approximation of the desired circuit than $d_2$, which seems to be too weak for this purpose.
\subsection{Convolutions of characteristic functions and Gaussian distributions}
We recall that
$G(x)=(2\pi)^{-1}\mathrm{e} ^{-|x|^2/2}$, $x\in \mathbb{R}^2$, and let us note that
$\hat{G}(\xi)=\mathrm{e} ^{-|\xi|^2/2}$, $\xi\in\mathbb{R}^2$. Moreover, $\|G\|_{L^1(\mathbb{R}^2)}=1$.
For any positive constant $s$ we set $G_s(x)=s^{-2}G(x/s)$, $x\in \mathbb{R}^2$.
We note that $\|G_s\|_{L^1(\mathbb{R}^2)}=1$ and $\widehat{G_s}(\xi)=\hat{G}(s\xi)$, $\xi\in\mathbb{R}^2$.
Let $D$ be a bounded open set which is Lipschitz with constants $R_0$ and $L$ and let $\chi_{D}$
be its characteristic function.
We investigate how $\chi_{D}$ is perturbed if we convolve it with $G$. We set $v=\chi_{D}\ast G$,
that is
$$v(x)=\int_{\mathbb{R}^2}\chi_{D}(x-y)G(y)\mathrm{d} y=
\int_{\mathbb{R}^2}\chi_{D}(y)G(x-y)\mathrm{d} y,\quad x\in\mathbb{R}^2.$$
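Although our use of this convolution is purely analytic, it is instructive to approximate it numerically. The following minimal Python sketch is an illustration of ours, not part of the analysis: the grid, the choice of $D$ as a disk, and the value of the scaling parameter $s$ are arbitrary. It samples $\chi_D$ on a uniform grid and approximates $\chi_D\ast G_s$ by FFT.
\begin{verbatim}
import numpy as np

# Grid on [-4, 4]^2; h is the mesh size.
n = 256
x = np.linspace(-4.0, 4.0, n)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")

# Characteristic function of a disk D (an arbitrary illustrative choice).
chi_D = ((X**2 + Y**2) <= 1.0).astype(float)

# Sampled G_s(x) = s^{-2} G(x/s), with G(x) = (2 pi)^{-1} exp(-|x|^2 / 2).
s = 0.3
G_s = np.exp(-(X**2 + Y**2) / (2.0 * s**2)) / (2.0 * np.pi * s**2)

# Discrete convolution via FFT; the factor h^2 approximates the integral,
# and ifftshift moves the kernel's center to the origin of the periodic grid.
v = np.real(np.fft.ifft2(np.fft.fft2(chi_D)
                         * np.fft.fft2(np.fft.ifftshift(G_s)))) * h**2

# v takes values in [0, 1]: close to 1 well inside D, close to 0 far outside,
# with a transition layer of width comparable to s around the boundary of D.
print(v.min(), v.max(), v[n // 2, n // 2])
\end{verbatim}
The transition layer around $\partial D$ visible in such an experiment is precisely what the estimates of this subsection quantify.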
We recall that we shall use the positive constants $\delta_1$, $\delta_2$ and $m_1$, and the sets $M_{\delta_1}$ and
$N_{\delta}$ introduced at the beginning of Subsection~\ref{geomsubsec}.
\begin{prop}\label{firstapproxprop}
Under the previous notation and assumptions, let
us fix $\delta$, $0<\delta\leq \delta_2/4$. Then
there exist constants $R_0\geq 1$, $\tilde{h}$, $0<\tilde{h}\leq 1/24$, and $a_1>0$,
depending on $L$ and $\delta$ only, such that the following estimates hold.
For any $x\in\mathbb{R}^2$,
\begin{equation}\label{a1}
\text{if }\tilde{h}<v(x)<1-\tilde{h},\text{ then }x\in \overline{B}_{m_1\delta R_0}(\partial D),
\end{equation}
and for any $x\in\partial D$,
\begin{equation}\label{a2}
\text{if }y\in N_{\delta}(x),\text{ then }\nabla v(y)\cdot (-e_2)\geq a_1.
\end{equation}
\end{prop}
\proof{.} If
$x\not \in \overline{B}_{m_1\delta R_0}(\partial D)$, then we have
$$v(x)\leq \mathrm{e}^{-m_1^2\delta^2R_0^2/2},\text{ if }x\not\in D,$$
and
$$v(x)\geq 1-\mathrm{e}^{-m_1^2\delta^2R_0^2/2},\text{ if }x\in D.$$
Consequently, provided $\tilde{h}=\mathrm{e}^{-m_1^2\delta^2R_0^2/2}\leq 1/24$, we may conclude that
\eqref{a1} holds.
Let us take $x\in \partial D$ and
$y\in N_{\delta}(x)$.
Then, denoting by $\nu$ the exterior unit normal vector to $ D$,
$$\nabla v(y)\cdot (-e_2)=
\int_{\partial D}G(y-z)\nu(z)\cdot e_2\mathrm{d} \mathcal{H}^1(z).$$
Therefore,
\begin{multline*}
\nabla v(y)\cdot (-e_2)=
\int_{\partial D\cap \overline{B}_{2\delta R_0}(y)}G(y-z)\nu(z)\cdot e_2\mathrm{d} \mathcal{H}^1(z)+\\
\int_{\partial D\cap(B_{R_0/2}(y)\backslash \overline{B}_{2\delta R_0}(y)) }G(y-z)\nu(z)\cdot e_2\mathrm{d} \mathcal{H}^1(z)+\\
\int_{\partial D\backslash B_{R_0/2}(y)}G(y-z)\nu(z)\cdot e_2\mathrm{d} \mathcal{H}^1(z)=A+B+C.
\end{multline*}
Since $B_{R_0/2}(y)$ is contained in $B_{R_0}(x)$,
for any $z\in \partial D\cap B_{R_0/2}(y)$,
we have $\nu(z)\cdot e_2\geq c_1>0$, where $c_1$ is a constant depending on $L$ only.
Moreover, the length of
$\partial D\cap \overline{B}_{2\delta R_0}(y)$ is also bounded from below by $c_2\delta R_0$, $c_2>0$ depending on $L$ only.
Therefore, we obtain that $A\geq c_1c_2\delta R_0\mathrm{e}^{-2\delta^2R_0^2}$ and $B\geq 0$.
For what concerns the term $C$, with the help of \eqref{length1}, we can find a constant $C_1$, depending on $L$ only,
such that, for any $R_0\geq 1$, we have
$$|C|\leq C_1R_0\mathrm{e}^{-R_0^2/8}.$$
Therefore, we can find $R_0\geq 1$, depending on $L$ and $\delta$ only, such that
$\tilde{h}=\mathrm{e}^{-m_1^2\delta^2R_0^2/2}\leq1/24$,
and $2C_1\mathrm{e}^{-R_0^2/8}\leq c_1c_2\delta \mathrm{e}^{-2\delta^2R_0^2}$.
We set
$a_1=(1/2)c_1c_2\delta R_0 \mathrm{e}^{-2\delta^2R_0^2}$ and the proof is concluded.\hfill$\square$
\begin{oss}\label{R0oss}
Without loss of generality, we may choose $R_0$ such that it also satisfies
\begin{equation}\label{R0choice}
\|\nabla G\|_{L^1(\mathbb{R}^2\backslash B_{R_0/2})}\leq (1/12)a_1.
\end{equation}
\end{oss}
In the sequel, we shall fix $\delta=\delta_2/4$ and $R_0$ as the corresponding constant in
Proposition~\ref{firstapproxprop} such that \eqref{R0choice} holds. We note that, in this case,
$\delta$ and $R_0$ depend on $L$ only. We shall also fix a constant $R\geq 10R_0$.
We recall that, with a slight abuse of notation, we identify $L^1(B_R)$ with the set of real valued $L^1(\mathbb{R}^2)$ functions that are
equal to zero almost everywhere outside $B_R$.
The same proof as that of Proposition~\ref{firstapproxprop} allows us to prove
the following corollary.
\begin{cor}\label{cors}
For any $s$, $0<s\leq 1$, let $r=sR_0$ and let $D$ be a bounded open set which is Lipschitz with constants
$r$ and $L$. Let $v_s=\chi_{D}\ast G_s$. Then, for any $x\in\mathbb{R}^2$,
$$\text{if }\tilde{h}<v_s(x)<1-\tilde{h},\text{ then }x\in \overline{B}_{m_1\delta r}(\partial D),$$
and for any $x\in\partial D$,
$$\text{if }y\in N_{\delta}(x),\text{ then }\nabla v_s(y)\cdot (-e_2)\geq a_1R_0/r=a_1/s.$$
\end{cor}
We conclude this part with the following perturbation argument. Let us consider a function $\psi$ such that either
$\psi
\in C^1(\mathbb{R}^2)\cap W^{1,1}(\mathbb{R}^2)$ or $\psi
\in W^{2,1}(\mathbb{R}^2)$ and that, for some $\tilde{\delta}>0$,
$$\|\psi\|_{W^{1,1}(\mathbb{R}^2)}\leq \tilde{\delta}.$$
Let $\tilde{G}=G+\psi$. Then the following result holds.
\begin{cor}\label{cors2}
Let us assume that $\tilde{\delta}\leq \min\{\tilde{h},a_1/2\}$.
For any $s$, $0<s\leq 1$, let $r=sR_0$ and let $D$ be a bounded open set which is Lipschitz with constants
$r$ and $L$. Let $v_s=\chi_{D}\ast \tilde{G}_s$. Then, for any $x\in\mathbb{R}^2$,
\begin{equation}\label{in}
\text{if }2\tilde{h}<v_s(x)<1-2\tilde{h},\text{ then }x\in \overline{B}_{m_1\delta r}(\partial D),
\end{equation}
and for any $x\in\partial D$,
\begin{equation}\label{gradlower}
\text{if }y\in N_{\delta}(x),\text{ then }\nabla v_s(y)\cdot (-e_2)\geq a_1R_0/2r=a_1/(2s).
\end{equation}
\end{cor}
\proof{.} It follows immediately from the previous corollary and Proposition~\ref{convprop}. We first notice that in either case $\chi_D\ast \tilde{G}_s\in C^1(\mathbb{R}^2)$. Moreover
we have
$$\|\chi_D\ast \tilde{G}_s-\chi_D\ast G_s\|_{L^{\infty}(\mathbb{R}^2)}\leq \|\chi_D\|_{L^{\infty}(\mathbb{R}^2)}\|\psi_s\|_{L^1}\leq \|\psi\|_{L^1(\mathbb{R}^2)}$$
and
\begin{multline*}
\|\nabla (\chi_D\ast \tilde{G}_s-\chi_D\ast G_s)\|_{L^{\infty}(\mathbb{R}^2)}=
\|\chi_D\ast (\nabla\psi_s)\|_{L^{\infty}(\mathbb{R}^2)}\leq\\ \|\chi_D\|_{L^{\infty}(\mathbb{R}^2)}\|\nabla \psi_s\|_{L^1}\leq \|\nabla \psi\|_{L^1(\mathbb{R}^2)}/s.
\end{multline*}
Thus the conclusion follows.\hfill$\square$
\section{Relationship between a mask and its image intensity}\label{dirsec}
In this section we study the relationship between a function representing a mask (not
necessarily a characteristic function of a domain) and its associated image intensity.
We recall the notation used.
We fix $\delta=\delta_2/4$ and $R_0$ as the corresponding constant in
Proposition~\ref{firstapproxprop} such that \eqref{R0choice} holds. We note that, in this case,
$\delta$ and $R_0$ depend on $L$ only. We shall also fix a constant $R\geq 10R_0$.
We recall that, with a slight abuse of notation, we identify $L^1(B_R)$ with the set of real valued $L^1(\mathbb{R}^2)$ functions that are
equal to zero almost everywhere outside $B_R$.
We recall that
$A=\{u\in L^1(B_R)\,:\ 0\leq u\leq 1\text{ a.e. in }B_R\}$.
Having fixed $\tilde{\delta}>0$,
we assume that
$\psi\in W^{2,1}(\mathbb{R}^2)$ and that
$$\|\psi\|_{W^{1,1}(\mathbb{R}^2)}\leq \tilde{\delta}.$$
We denote $\tilde{G}=G+\psi$ and, for any $s$, $0<s\leq 1$, we define the operator
$\mathcal{T}_s:L^1(B_R)\to W^{2,1}(\mathbb{R}^2)$ as follows
$$\mathcal{T}_s(u)=u\ast \tilde{G}_s,\quad\text{for any }u\in L^1(B_R).$$
The point spread function we use, $T$, can
be described in general by the function $\tilde{G}$. Therefore a
study of the properties of convolutions with $\tilde{G}$ will be useful.
We remark that the following continuity properties of the operator
$\mathcal{T}_s$ hold. For any $p$, $1\leq p\leq +\infty$, and any $u\in L^1(B_R)$, we have, for an absolute constant $C$,
\begin{align}\label{c_1}
&\|\mathcal{T}_s(u)\|_{L^p(\mathbb{R}^2)}\leq \|\tilde{G}\|_{L^1(\mathbb{R}^2)}\|u\|_{L^p(\mathbb{R}^2)},\\\label{c_2}
&\|\nabla \mathcal{T}_s(u)\|_{L^p(\mathbb{R}^2)}\leq (C/s)\|\nabla \tilde{G}\|_{L^1(\mathbb{R}^2)}\|u\|_{L^p(\mathbb{R}^2)},\\\label{c_3}
&\|D^2 \mathcal{T}_s(u)\|_{L^p(\mathbb{R}^2)}\leq (C/s^2)\|D^2 \tilde{G}\|_{L^1(\mathbb{R}^2)}\|u\|_{L^p(\mathbb{R}^2)}.
\end{align}
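For the reader's convenience, we note that, up to the constant $C$ accounting for the vector norms, these bounds follow from Young's inequality for convolutions combined with the scaling of $\tilde{G}_s$; for instance, for \eqref{c_2},
$$\|\nabla \mathcal{T}_s(u)\|_{L^p(\mathbb{R}^2)}=\|u\ast \nabla\tilde{G}_s\|_{L^p(\mathbb{R}^2)}\leq \|\nabla \tilde{G}_s\|_{L^1(\mathbb{R}^2)}\|u\|_{L^p(\mathbb{R}^2)}=\frac{1}{s}\|\nabla \tilde{G}\|_{L^1(\mathbb{R}^2)}\|u\|_{L^p(\mathbb{R}^2)},$$
since $\nabla \tilde{G}_s(x)=s^{-3}(\nabla \tilde{G})(x/s)$; \eqref{c_1} and \eqref{c_3} follow analogously.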
Let $J\in C^{0}(\mathbb{R}^2)$.
For any $u\in L^1(B_R)$, we define $U\in L^1(\mathbb{R}^4)$ as follows
$$U(x,y)=u(x)u(y)J(x-y),\quad\text{for any }x,y\in\mathbb{R}^2.$$
Then, for any $s$, $0<s\leq 1$, we define
$H_s\in W^{2,1}(\mathbb{R}^4)$ in the following way
$$H_s(x,y)=\tilde{G}_s(x)\tilde{G}_s(y),\quad\text{for any }x,y\in\mathbb{R}^2.$$
Therefore, for any $p$, $1\leq p\leq +\infty$, and any $u\in L^1(B_R)$, we have, for an absolute constant $C$,
\begin{align}\label{cc_1}
&\|U\ast H_s\|_{L^p(\mathbb{R}^4)}\leq \|\tilde{G}\|^2_{L^1(\mathbb{R}^2)}
\|J\|_{L^{\infty}(B_{2R})}
\|u\|^2_{L^p(\mathbb{R}^2)},\\\label{cc_2}
&\|\nabla(U\ast H_s)\|_{L^p(\mathbb{R}^4)}\leq (C/s)
\|\tilde{G}\|_{L^1(\mathbb{R}^2)}
\|\nabla \tilde{G}\|_{L^1(\mathbb{R}^2)}\|J\|_{L^{\infty}(B_{2R})}\|u\|^2_{L^p(\mathbb{R}^2)},\\\label{cc_3}
&\|D^2 (U\ast H_s)\|_{L^p(\mathbb{R}^4)}\leq (C/s^2)\|\tilde{G}\|^2_{W^{2,1}(\mathbb{R}^2)}\|J\|_{L^{\infty}(B_{2R})}\|u\|^2_{L^p(\mathbb{R}^2)}.
\end{align}
Let us fix $p>4$ and let $\alpha=1-4/p$, $0<\alpha<1$. Then, if $u\in A$ we have
$U\ast H_s\in C^{1,\alpha}(\mathbb{R}^4)$ and, for some constant $C$ depending on $p$ only,
$$\|U\ast H_s\|_{C^{1,\alpha}(\mathbb{R}^4)}\leq C\|U\ast H_s\|_{W^{2,p}(\mathbb{R}^4)}.$$
We define
$\mathcal{P}_{J,s}:A\to C^{1,\alpha}(\mathbb{R}^2)$ and $\mathcal{P}_{1,s}:A\to C^{1,\alpha}(\mathbb{R}^2)$ as follows.
For any $u\in A$
$$\mathcal{P}_{J,s}(u)(x)=\int_{\mathbb{R}^2} \int_{\mathbb{R}^2} u(\xi)\tilde{G}_s(x-\xi) J(\xi-\eta) \tilde{G}_s(x-\eta) u(\eta) \mathrm{d}\xi \mathrm{d}\eta,\quad x\in\mathbb{R}^2$$
and
$$\mathcal{P}_{1,s}(u)=(\mathcal{T}_s(u))^2.$$
We notice that the two definitions are consistent when $J\equiv 1$ and that
$$\mathcal{P}_{J,s}(u)(x)=(U\ast H_s)(x,x),\quad x\in\mathbb{R}^2.$$
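The bilinear structure of $\mathcal{P}_{J,s}$ can also be probed numerically by direct quadrature. The Python sketch below is a toy illustration under discretization choices of ours: a coarse grid, a rectangular mask, a Gaussian-shaped kernel $J$ with $J(0)=1$, and the pure Gaussian $G_s$ standing in for $\tilde{G}_s$. It also checks the consistency of the two definitions in the coherent case $J\equiv 1$.
\begin{verbatim}
import numpy as np

# Coarse quadrature grid on [-2, 2]^2 (sizes chosen only to keep the demo fast).
n = 32
x = np.linspace(-2.0, 2.0, n)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
pts = np.column_stack([X.ravel(), Y.ravel()])

# A rectangular mask u = chi_D (illustrative choice).
u = ((np.abs(pts[:, 0]) <= 0.8) & (np.abs(pts[:, 1]) <= 0.4)).astype(float)

s = 0.3
def G_s(p):
    # Stand-in for the kernel \tilde{G}_s (here: the pure Gaussian G_s).
    return np.exp(-(p**2).sum(axis=-1) / (2 * s**2)) / (2 * np.pi * s**2)

def J(p, sigma_J=0.2):
    # An arbitrary illustrative kernel with J(0) = 1.
    return np.exp(-(p**2).sum(axis=-1) / (2 * sigma_J**2))

def P_Js(x0):
    g = u * G_s(x0 - pts)                        # u(xi) G_s(x0 - xi) at the nodes
    Jmat = J(pts[:, None, :] - pts[None, :, :])  # J(xi - eta), an N x N matrix
    return float(g @ Jmat @ g) * h**4            # double quadrature sum

def P_1s(x0):
    # Coherent case J = 1: P_{1,s}(u)(x0) = ((T_s u)(x0))^2.
    return (float((u * G_s(x0 - pts)).sum()) * h**2) ** 2

x0 = np.array([0.0, 0.0])
# As sigma_J grows (J -> 1), P_Js approaches the coherent value P_1s.
print(P_Js(x0), P_1s(x0))
\end{verbatim}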
Putting together the previous estimates we obtain
the following result. We recall that we have fixed a number $p>4$ and that $\alpha=1-4/p$.
\begin{prop}\label{coherenceproposition}
Under the previous notation and assumptions,
let $\varepsilon=\|J-1\|_{L^{\infty}(B_{2R})}$. Then for any $u\in A$ and any $s$, $0<s\leq 1$, we have, for some constant $C$ depending on $p$ only,
\begin{multline*}
\|\mathcal{P}_{J,s}(u)\|_{C^{0,\alpha}(\mathbb{R}^2)}\leq ((1+\varepsilon)C/s)\|\tilde{G}\|^2_{W^{1,1}(\mathbb{R}^2)}\|u\|^2_{L^{p}(B_R)}\leq\\ ((1+\varepsilon)C/s)\|\tilde{G}\|^2_{W^{1,1}(\mathbb{R}^2)}\|u\|_{L^1(B_R)}^{2/p}.
\end{multline*}
The same estimate holds also for the gradient, namely
\begin{multline*}
\|\nabla \mathcal{P}_{J,s}(u)\|_{C^{0,\alpha}(\mathbb{R}^2)}\leq
((1+\varepsilon)C/s^2)\|\tilde{G}\|^2_{W^{2,1}(\mathbb{R}^2)}\|u\|^2_{L^p(B_R)}\leq\\
((1+\varepsilon)C/s^2)\|\tilde{G}\|^2_{W^{2,1}(\mathbb{R}^2)}\|u\|_{L^1(B_R)}^{2/p}.
\end{multline*}
Furthermore, we have
\begin{equation}\label{0coher}
\|\mathcal{P}_{J,s}(u)-\mathcal{P}_{1,s}(u)\|_{L^{\infty}(\mathbb{R}^2)}=\|\mathcal{P}_{J-1,s}(u)\|_{L^{\infty}(\mathbb{R}^2)}\leq
\|\tilde{G}\|^2_{L^1(\mathbb{R}^2)} \varepsilon
\end{equation}
and, for some absolute constant $C$,
\begin{equation}\label{1coher}
\|\nabla (\mathcal{P}_{J,s}(u)-\mathcal{P}_{1,s}(u))\|_{L^{\infty}(\mathbb{R}^2)}\leq C\|\tilde{G}\|_{L^1(\mathbb{R}^2)}
\|\nabla \tilde{G}\|_{L^1(\mathbb{R}^2)}\varepsilon/s.
\end{equation}
\end{prop}
Although $\mathcal{P}_{J,s}$ is nonlinear in its argument $u$, by a simple adaptation of the previous reasonings, we obtain that for any $u_1$, $u_2\in A$, and for some constant $C$ depending on $p$ only, we have the following corresponding estimates
\begin{multline*}
\|\mathcal{P}_{J,s}(u_1)-\mathcal{P}_{J,s}(u_2)\|_{C^{0,\alpha}(\mathbb{R}^2)}\leq ((1+\varepsilon)C/s)\|\tilde{G}\|^2_{W^{1,1}(\mathbb{R}^2)}R^{2/p}
\|u_1-u_2\|_{L^{p}(B_R)}
\leq\\ 2((1+\varepsilon)C/s)\|\tilde{G}\|^2_{W^{1,1}(\mathbb{R}^2)}R^{2/p}
\|u_1-u_2\|^{1/p}_{L^1(B_R)}
\end{multline*}
and
\begin{multline*}
\|\nabla (\mathcal{P}_{J,s}(u_1)-\mathcal{P}_{J,s}(u_2))\|_{C^{0,\alpha}(\mathbb{R}^2)}\leq
((1+\varepsilon)C/s^2)\|\tilde{G}\|^2_{W^{2,1}(\mathbb{R}^2)}R^{2/p}\|u_1-u_2\|_{L^{p}(B_R)}\leq\\
2((1+\varepsilon)C/s^2)\|\tilde{G}\|^2_{W^{2,1}(\mathbb{R}^2)}R^{2/p}\|u_1-u_2\|_{L^1(B_R)}^{1/p}.
\end{multline*}
Therefore, $\mathcal{P}_{J,s}:A\to C^{1,\alpha}(\mathbb{R}^2)$
is Lipschitz continuous with respect to the $L^p$ norm and H\"older continuous
with exponent $1/p$ with respect to the $L^1$ norm.
We fix $\tilde{\delta}$ such that $0<\tilde{\delta}\leq\min\{\tilde{h},a_1/2\}$, with $\tilde{h}$, $0<\tilde{h}\leq 1/24$, and $a_1>0$ as in Proposition~\ref{firstapproxprop}, thus depending on $L$ only. We define the corresponding $T$ and $s_0$ as in Lemma~\ref{approxlemma}.
We finally fix $s=(k\text{NA})^{-1}$, $0<s\leq 1/s_0$, and $\sigma$, $0<\sigma\leq s$.
Then we define
$$I_{s,\sigma}(u)=\mathcal{P}_{J,ss_0}(u),\quad\text{for any }u\in A,$$
where $J$ is given by \eqref{Jdef}. We recall that in \eqref{Kdef} we defined
$K=T_{ss_0}$, therefore for any open set $D\subset B_R$ we have that
$I_{s,\sigma}(\chi_D)$ is the light intensity on the image plane corresponding to the mask $D$, see \eqref{Hopkins}.
We denote by $\mathcal{H}:\mathbb{R}\to\mathbb{R}$ the Heaviside function
such that $\mathcal{H}(t)=0$ for any $t\leq 0$ and $\mathcal{H}(t)=1$ for any $t> 0$.
For any constant $h$ we set $\mathcal{H}_h(t)=\mathcal{H}(t-h)$
for any $t\in\mathbb{R}$.
Then, for any $h$, $0<h<1$, any $s$, $0<s\leq 1/s_0$, and any $\sigma$, $0<\sigma\leq s$,
we define the operator
$\mathcal{W}:A\to L^{\infty}(\mathbb{R}^2)$ as follows
\begin{equation}\label{Wdefinition}
\mathcal{W}(u)=\mathcal{H}_h(I_{s,\sigma}(u)),\quad \text{for any }u\in A.
\end{equation}
Clearly, for any $u\in A$, $\mathcal{W}(u)$ is the characteristic function of an open set,
which we shall call $\Omega(u)$. That is
\begin{equation}\label{Omegedefinition}
\Omega(u)=\{x\in\mathbb{R}^2\,:\ I_{s,\sigma}(u)(x)>h\},\quad \text{for any }u\in A.
\end{equation}
In other
words, $\chi_{\Omega(u)}=\mathcal{W}(u)=\mathcal{H}_h(I_{s,\sigma}(u))$.
Moreover, whenever $u=\chi_D$, where $D$ is an open set contained in $B_R$,
we shall denote $\Omega(D)=\Omega(\chi_D)$.
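In a discretized setting, passing from the image intensity to the printed set $\Omega(u)$ is a single thresholding step. The following Python sketch is only an illustration of ours: a synthetic smooth intensity stands in for $I_{s,\sigma}(u)$, and the area and perimeter surrogates are the crude quantities one would monitor numerically.
\begin{verbatim}
import numpy as np

# Synthetic smooth stand-in for the image intensity I_{s,sigma}(u) on a grid.
n = 256
x = np.linspace(-2.0, 2.0, n)
h_mesh = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
intensity = np.exp(-(X**2 + 2.0 * Y**2))

# The printed set Omega(u) = { I_{s,sigma}(u) > h }; here h = 1/2 in [1/3, 2/3].
h = 0.5
omega = intensity > h

# Area |Omega(u)| ~ (number of cells) * h_mesh^2.
area = omega.sum() * h_mesh**2

# Crude perimeter surrogate: count sign changes between neighboring cells
# (this overestimates the length of boundaries that are not axis-aligned).
o = omega.astype(int)
edges = np.abs(np.diff(o, axis=0)).sum() + np.abs(np.diff(o, axis=1)).sum()
perimeter = edges * h_mesh

print(area, perimeter)
\end{verbatim}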
The final, and crucial, result of this section is the following.
\begin{teo}\label{mainteo}
Let us fix a positive constant $L$. Let $\delta=\delta_2/4$ and let $R_0$ be as in Proposition~\textnormal{\ref{firstapproxprop}} and such that \eqref{R0choice} holds.
Let us also fix $R\geq 10 R_0$ and $p$, $p>4$, and $\alpha=1-4/p$.
We fix $\tilde{\delta}$ such that $0<\tilde{\delta}\leq\min\{\tilde{h},a_1/2\}$, with $\tilde{h}$, $0<\tilde{h}\leq 1/24$, and $a_1>0$ as in Proposition~\textnormal{\ref{firstapproxprop}}, thus depending on $L$ only. We define the corresponding $T$ and $s_0$ as in Lemma~\textnormal{\ref{approxlemma}}.
We finally fix $s=(k\text{NA})^{-1}$, $0<s\leq 1/s_0$, and $\sigma$, $0<\sigma\leq s$.
Then, for any $u\in A$ we define
$$I_{s,\sigma}(u)=\mathcal{P}_{J,ss_0}(u)$$
where $J$ is given by \eqref{Jdef}.
Then
for any $h$, $1/3\leq h\leq 2/3$, and any $s$, $0<s\leq 1/s_0$,
we can find positive constants $\tilde{\sigma}_0$, $0<\tilde{\sigma}_0\leq 1$, and $\gamma_0$,
depending on $L$, $R$, $\|D^2(T-G)\|_{L^1(\mathbb{R}^2)}$, $p$,
and $ss_0$ only,
such that for any
$\sigma$, $0<\sigma\leq \tilde{\sigma}_0s$, and any $\gamma$, $0<\gamma\leq \gamma_0$,
the following holds.
Let $\mathcal{A}=\mathcal{A}^{0,1}(r,L,R)$, where $r=ss_0R_0$. Let $\tilde{R}=R+2m_1\delta R_0$, where $m_1$, $0<m_1\leq 1$, depends on $L$ only.
Then, for any $u\in \mathcal{A}_{\gamma}$, we have that $\Omega(u)\Subset B_{\tilde{R}}$ and
$\Omega(u)\in \mathcal{A}^{1,\alpha}(r_1,L_1,\tilde{R})$.
Here $r_1=ss_0\tilde{R}_0\leq r$, where $\tilde{R}_0 \leq \delta_1R_0/8$ depends on $L$ only,
whereas $L_1\geq L$ depends on
$L$, $R$, $\|D^2(T-G)\|_{L^1(\mathbb{R}^2)}$, $p$ and $ss_0$ only.
Moreover, the map $\mathcal{W}:\mathcal{A}_{\gamma}\to BV(B_{\tilde{R}})$ is uniformly continuous with respect to the $L^1$ norm on $\mathcal{A}_{\gamma}$
and the distance $d_{st}$ on $BV(B_{\tilde{R}})$.
\end{teo}
\begin{oss}\label{ossimp}
We observe that the distance $d_{st}$ in $BV(B_{\tilde{R}})$ between $\mathcal{W}(u_1)$
and $\mathcal{W}(u_2)$
corresponds to the distance $d_3$ related to $B_{\tilde{R}}$ between $\Omega(u_1)$ and $\Omega(u_2)$.
\end{oss}
\proof{ of Theorem~\ref{mainteo}.}
The proof is a consequence of the previous analysis.
We fix $s$, $0<s\leq 1/s_0$,
and $h$, $1/3\leq h\leq 2/3$.
Let us begin with the following preliminary case. Let $u=\chi_D$, where $D\in\mathcal{A}$,
and let $v=\mathcal{T}_{ss_0}(u)$ and
$\tilde{W}=\mathcal{H}_h((\mathcal{T}_{ss_0}(u))^2)$.
We apply Corollary~\ref{cors2} and we obtain the following results.
If $\tilde{\Omega}$ is the open set such that $\tilde{W}=\chi_{\tilde{\Omega}}$, then, by \eqref{in}, we notice that $\partial
\tilde{\Omega}\subset \overline{B}_{m_1\delta r}(\partial D)$ and that $(D\backslash \overline{B}_{m_1\delta r}(\partial D))\subset \tilde{\Omega}$ and
$(\mathbb{R}^2\backslash \overline{B}_{m_1\delta r}(\overline{D}))\cap \tilde{\Omega}=\emptyset$. Therefore
$\tilde{\Omega}\Subset B_{\tilde{R}}$.
We take any
$x\in \partial D$ and any $y\in M_{\delta_1/2}(x)$, with respect to the coordinate system depending on $x$. Then we consider
the points $y^-=y-\delta re_2$ and $y^+=y+\delta re_2$. We have that
$y^{\pm}\in\partial N_{\delta}(x)\backslash \overline{B}_{m_1\delta r}(\partial D)$. Moreover,
$y^-\in D$ and $v(y^-)\geq 11/12$, whereas $y^+\not\in D$ and
$v(y^+)\leq 1/12$. Let us call $\tilde{y}^+=y+t_0\delta re_2$, where $t_0\in (-1,1]$ depends on $y$,
$v(\tilde{y}^+)=1/12$, and $v(y+t\delta re_2)< 1/12$ for any $t\in (t_0,1]$.
Then we use \eqref{gradlower} and we obtain that, for any $t\in [-1,t_0]$,
$v(y+t\delta re_2)\geq 1/12$ and
$$-\frac{\mathrm{d}}{\mathrm{d} t}(v(y+t\delta re_2))^2\geq \delta ra_1/(12ss_0).$$
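Indeed, by the chain rule and \eqref{gradlower}, applied with $s$ replaced by $ss_0$, for any such $t$ the left-hand side equals
$2\,v(y+t\delta re_2)\,\delta r\,\nabla v(y+t\delta re_2)\cdot(-e_2)\geq 2\cdot(1/12)\cdot\delta r\cdot a_1/(2ss_0)=\delta ra_1/(12ss_0)$.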
We may conclude that there exists a function $\varphi_{1}:[-\delta_1r/2,\delta_1r/2]\to \mathbb{R}$
such that, for any $y=(y_1,y_2)\in N_{\delta}(x)$ with $|y_1|\leq \delta_1r/2$,
$(v(y))^2=h$ if and only if
$y_2=\varphi_1(y_1)$.
We recall that
\begin{equation}\label{aa1}
\|v\|_{L^{\infty}(\mathbb{R}^2)}\leq C_1,\quad\|\nabla v\|_{L^{\infty}(\mathbb{R}^2)}\leq C_1/(ss_0),
\end{equation}
where $C_1$ is an absolute constant, and
\begin{equation}\label{aa2}
\|\nabla v\|_{C^{0,\alpha}(\mathbb{R}^2)}\leq C_2/(ss_0)^2,
\end{equation}
where $C_2$ depends on $R$, $\|D^2(T-G)\|_{L^1(\mathbb{R}^2)}$ and $p$ only.
We obtain that $v^2$ is a $C^{1,\alpha}$ function and, by the implicit function theorem, we conclude
that the function $\varphi_1$ is actually $C^{1,\alpha}$.
We observe that
$$\|\varphi'_1\|_{L^{\infty}[-\delta_1r/2,\delta_1r/2]}\leq C_3,$$
where $C_3$ depends on $L$ only. Without loss of generality, by a translation
we may assume that
$\varphi_1(0)=0$, thus
$\|\varphi_1\|_{L^{\infty}[-\delta_1r/2,\delta_1r/2]}\leq C_3\delta_1r/2$.
Finally, for any $t_1$, $t_2\in [-\delta_1r/2,\delta_1r/2]$,
$$|\varphi'_1(t_1)-\varphi'_1(t_2)|\leq (C_4/(ss_0))|t_1-t_2|^{\alpha},$$
where $C_4$ is a constant depending on $L$, $R$, $\|D^2(T-G)\|_{L^1(\mathbb{R}^2)}$ and $p$ only.
Then, it is not difficult to prove that for some $r_1=ss_0\tilde{R}_0\leq r$, with $\tilde{R}_0 \leq \delta_1R_0/8$ depending on $L$ only, we can find
$L_1\geq L$, depending on $L$, $R$, $\|D^2(T-G)\|_{L^1(\mathbb{R}^2)}$, $p$ and $ss_0$ only,
such that $\tilde{\Omega}\in \mathcal{A}^{1,\alpha}(r_1,L_1,\tilde{R})$.
Let us also remark that we have obtained that
$\tilde{d}_1(\tilde{\Omega},D)\leq\delta r$.
Let us call $\varepsilon=\varepsilon(s,\sigma)=
\varepsilon(\sigma/s)=
\|J-1\|_{L^{\infty}(B_{2R})}$.
We notice that, as $\sigma/s\to 0^+$, we have that $\varepsilon$ goes to $0$ as well.
We also assume, without loss of generality, that $\varepsilon$ is increasing with respect to the variable $\sigma/s$.
Let us recall that, for any $u\in A$, if $w=I_{s,\sigma}(u)$, with
$0<s\leq 1/s_0$ and $0<\sigma\leq s$,
then
\begin{equation}\label{b1}
\|w\|_{L^{\infty}(\mathbb{R}^2)}\leq C_5,\quad \|\nabla w\|_{L^{\infty}(\mathbb{R}^2)}\leq C_5/(ss_0),
\end{equation}
and
\begin{equation}\label{b2}
\|\nabla w\|_{C^{0,\alpha}(\mathbb{R}^2)}\leq C_6/(ss_0)^2,
\end{equation}
where $C_5$ is an absolute constant
and $C_6$ depends on $R$, $\|D^2(T-G)\|_{L^1(\mathbb{R}^2)}$ and $p$ only.
For positive constants $\tilde{\sigma}_0$, $0<\tilde{\sigma}_0\leq 1$, and $\gamma_0$, to be specified later,
let us fix $\sigma$, $0<\sigma\leq \tilde{\sigma}_0s$ and $\gamma$, $0<\gamma\leq\gamma_0$.
We take $u\in\mathcal{A}_{\gamma}$, $v=\mathcal{T}_{ss_0}(u)$,
$w=I_{s,\sigma}(u)$, and $D\in\mathcal{A}$ such that $\|u-\chi_D\|_{L^1(B_R)}\leq \gamma$. Then we use Proposition~\ref{coherenceproposition} to infer that
\begin{multline*}
\left\|I_{s,\sigma}(u)-(\mathcal{T}_{ss_0}(\chi_D))^2\right\|_{L^{\infty}(\mathbb{R}^2)}\leq
\|I_{s,\sigma}(u)-v^2\|_{L^{\infty}(\mathbb{R}^2)}+
\|(\mathcal{T}_{ss_0}(u))^2-(\mathcal{T}_{ss_0}(\chi_D))^2\|_{L^{\infty}(\mathbb{R}^2)}
\leq \\
\|\tilde{G}\|^2_{L^1(\mathbb{R}^2)}\varepsilon+(2C/(ss_0))\|\tilde{G}\|_{L^1(\mathbb{R}^2)}\|\tilde{G}\|_{W^{1,1}(\mathbb{R}^2)}\|u-\chi_D\|_{L^p(B_R)}\leq
(C_7/(ss_0))\left(\varepsilon+\gamma^{1/p}\right),
\end{multline*}
where $C$ is an absolute constant
and consequently $C_7$ depends on $p$ only.
Analogously, we can prove that
$$\left\|\nabla\left(I_{s,\sigma}(u)-(\mathcal{T}_{ss_0}(\chi_D))^2\right)\right\|_{L^{\infty}(\mathbb{R}^2)}\leq (C_8/(ss_0)^2)\left(\varepsilon+\gamma^{1/p}\right),$$
where the constant $C_8$ depends on $\|D^2(T-G)\|_{L^1(\mathbb{R}^2)}$ and $p$ only.
We now choose the positive constants $\tilde{\sigma}_0$ and $\gamma_0$ in such a way that
$$2(C_7/(ss_0))\left(\varepsilon(\tilde{\sigma}_0)+\gamma_0^{1/p}\right)\leq 1/6$$
and
$$(C_8/(ss_0))\left(\varepsilon(\tilde{\sigma}_0)+\gamma_0^{1/p}\right)\leq a_1/24.$$
Clearly, $\tilde{\sigma}_0$ and $\gamma_0$ depend on $L$, $\|D^2(T-G)\|_{L^1(\mathbb{R}^2)}$, $p$ and
$ss_0$ only.
Then we can apply to $w=I_{s,\sigma}(u)$ and $\Omega=\Omega(u)$
the same analysis we have used for $v$ and $\tilde{\Omega}$ in the first part of this proof. We may therefore conclude that if $u\in\mathcal{A}_{\gamma}$ and $D\in\mathcal{A}$ is such that $\|u-\chi_D\|_{L^1(B_R)}\leq\gamma$, then
$\Omega\Subset B_{\tilde{R}}$,
$\tilde{d}_1(\Omega,D)\leq \delta r$
and,
taken $r_1$ as before, possibly with a smaller $\tilde{R}_0$ still depending on $L$ only, we can find
$L_1\geq L$, depending on $L$, $R$, $\|D^2(T-G)\|_{L^1(\mathbb{R}^2)}$, $p$ and $ss_0$ only,
such that $\Omega\in \mathcal{A}^{1,\alpha}(r_1,L_1,\tilde{R})$.
This kind of argument also allows us to show that $\Omega$ shares the topological properties of $D$; that is, for example, $\Omega$ and $\partial \Omega$
have the same number of connected components as $D$ and $\partial D$, respectively.
It remains to show the uniform continuity property. We recall that the operator $\mathcal{P}_{J,ss_0}$ is H\"older continuous from $A$, with the $L^1(B_R)$ norm,
into $C^{1,\alpha}(\mathbb{R}^2)$, with its usual norm. This means that there exists a constant $\tilde{C}$ such that for any
$u_1$ and $u_2\in A$, if we call $w_1=I_{s,\sigma}(u_1)$
and $w_2=I_{s,\sigma}(u_2)$, then
$$\|w_1-w_2\|_{C^{1,\alpha}(\mathbb{R}^2)}\leq \tilde{C}\|u_1-u_2\|_{L^1(B_R)}^{1/p}.$$
A simple application of the previous analysis allows us to prove the following claim.
\begin{claim}\label{claim1}
There exists a function $g:[0,+\infty)\to[0,+\infty)$, which is continuous, increasing and such that $g(0)=0$,
satisfying the following property.
For any $u\in\mathcal{A}_{\gamma}$,
for any $\varepsilon>0$ and any $x\in\mathbb{R}^2$ we have
\begin{equation}\label{claim}
\text{if }x\not\in B_{\varepsilon}(\partial\Omega(u)),\text{ then }|I_{s,\sigma}(u)(x)-h|> g(\varepsilon).
\end{equation}
\end{claim}
\bigskip
Let us now assume that
$u_1$ and $u_2$ belong to $\mathcal{A}_{\gamma}$ and let us fix $\varepsilon>0$. We can find $\eta>0$
such that if $\|u_1-u_2\|_{L^1(B_R)}\leq \eta$, then $\|w_1-w_2\|_{L^{\infty}(\mathbb{R}^2)}\leq g(\varepsilon)$.
Let us now take $x\in\partial \Omega(u_1)$, that is $x\in\mathbb{R}^2$ such that
$w_1(x)=h$. We infer that $|w_2(x)-h|\leq g(\varepsilon)$, therefore by the claim
we deduce that $x\in B_{\varepsilon}(\partial\Omega(u_2))$. That is
$\partial \Omega(u_1)\subset B_{\varepsilon}(\partial\Omega(u_2))$. By symmetry,
we conclude that
$\tilde{d}_1(\Omega(u_1),\Omega(u_2))\leq \varepsilon$.
In other words, the map which to any $u\in\mathcal{A}_{\gamma}$ associates the open
set $\Omega(u)$ is uniformly continuous with respect to the $L^1$ norm on $\mathcal{A}_{\gamma}$ and the distance $\tilde{d}_1$. However, we have shown in Subsection~\ref{geomsubsec} that the distances $d_1$, $\tilde{d}_1$, $d_2$ and $d_3$
are topologically equivalent on $\mathcal{A}^{1,\alpha}(r_1,L_1,\tilde{R})$, to which all
$\Omega(u)$ belong, for any $u\in\mathcal{A}_{\gamma}$. Therefore the
map $\mathcal{A}_{\gamma}\ni u\to \Omega(u)$ is uniformly continuous with respect
to the $L^1$ norm on $\mathcal{A}_{\gamma}$ and any of the distances
$d_1$, $\tilde{d}_1$, $d_2$ and $d_3$ related to $B_{\tilde{R}}$.
We observe that
$$d_2(\Omega(u_1),\Omega(u_2))=
\|\mathcal{W}(u_1)-\mathcal{W}(u_2)\|_{L^1(B_{\tilde{R}+1})}=\|\mathcal{W}(u_1)-\mathcal{W}(u_2)\|_{L^1(B_{\tilde{R}})}$$
whereas
$$d_3(\Omega(u_1),\Omega(u_2))=
d_2(\Omega(u_1),\Omega(u_2))+|P(\Omega(u_1))-P(\Omega(u_2))|
=d_{st}(\mathcal{W}(u_1),\mathcal{W}(u_2))$$
where $d_{st}$ is here the distance inducing strict convergence in $BV(B_{\tilde{R}})$. Therefore we conclude that $\mathcal{W}:\mathcal{A}_{\gamma}\to
BV(B_{\tilde{R}})$ is uniformly continuous with respect to the
$L^1$ norm on $\mathcal{A}_{\gamma}$ and, on $BV(B_{\tilde{R}})$,
with respect either to the $L^1$ norm or to the $d_{st}$ distance.\hfill$\square$
\begin{oss} Let us finally remark that if, instead of taking $h\in [1/3,2/3]$, we simply assume
$0<h<1$, then the same analysis may still be carried out. Clearly, we need to change the values of $R_0$ and $\tilde{h}$ in Proposition~\ref{firstapproxprop}, so that they depend on $h$ as well. As a consequence, the quantities introduced in Theorem~\ref{mainteo} above would also depend on $h$.
\end{oss}
\section{Analysis of the inverse problem}\label{approxsec}
Throughout this section, we shall keep the notation of Theorem~\ref{mainteo} and we shall also assume that the hypotheses of Theorem~\ref{mainteo}
are satisfied. We shall fix $h$, $1/3\leq h\leq 2/3$, and $s$, $0<s\leq 1/s_0$,
and we shall take $\sigma$, $0<\sigma\leq \tilde{\sigma}_0s$, and $\gamma$, $0<\gamma\leq\gamma_0$,
with $\tilde{\sigma}_0$ and $\gamma_0$ as in Theorem~\ref{mainteo}.
We call $\Omega_0$
the circuit to be reconstructed and we shall assume that it belongs to
$\mathcal{A}=\mathcal{A}^{0,1}(r,L,R)$.
We recall that, by Proposition~\ref{compactprop},
$\mathcal{A}$ is compact with respect to the $d_2$ distance, which corresponds to the
distance induced by the $L^1$ norm for the corresponding
characteristic functions.
Then it is an immediate consequence of the last part of Theorem~\ref{mainteo}, see also
Remark~\ref{ossimp}, that the problem
$$\min_{D\in\mathcal{A}}d_3(\Omega(D),\Omega_0)$$
admits a solution. We note that $\Omega(D)=\Omega(\chi_D)$ and that here $d_3$ is the distance defined in \eqref{d3def} related to $B_{\tilde{R}}$.
From a numerical point of view, the class $\mathcal{A}$ is rather difficult to handle.
We try to reduce this difficulty by enlarging the class $\mathcal{A}$
to a class of characteristic functions of sets with finite perimeter. In order to keep the lower semicontinuity of the functional, we restrict ourselves to
characteristic functions of sets with finite perimeter which are contained in
$\mathcal{A}_{\gamma}$. Namely, we define the following functional
$F_0:A\to [0,+\infty]$ such that for any $u\in A$ we have
\begin{equation}\label{F0}
F_0(u)=
d_{st}(\mathcal{W}(u),\chi_{\Omega_0})+bP(u),
\end{equation}
where $P$ is the functional defined in \eqref{Pdef}
with $\mathcal{D}$ chosen to be $B_R$, $b$ is a positive parameter and
$d_{st}$ is the strict convergence distance in $BV(B_{\tilde{R}})$. We recall that,
whenever $u\in\{0,1\}$ almost everywhere in $B_R$ and $u\in BV(B_{R+1})$, then
$P(u)=P(u,B_{R+1})=|Du|(B_{R+1})$.
Otherwise, $P(u)$, and consequently also $F_0(u)$, is equal to $+\infty$.
Moreover, if $u\in \mathcal{A}_{\gamma}$, in particular if $u=\chi_D$ for some $D\in\mathcal{A}$, then
$d_{st}(\mathcal{W}(u),\chi_{\Omega_0})=d_3(\Omega(u),\Omega_0)$,
where again $d_3$ is the distance defined in \eqref{d3def} related to $B_{\tilde{R}}$.
We look for the solution to the following minimization problem
\begin{equation}\label{min0}
\min\{F_0(u)\,:\ u\in \mathcal{A}_{\gamma}\}.
\end{equation}
We notice that such a minimization problem admits a solution.
Even if the class $\mathcal{A}_{\gamma}$ might still not be easy to handle from a numerical point of view, since it somehow involves handling the class $\mathcal{A}$,
we believe that from a practical point of view such a restriction might be dropped and we might use the class $A\subset L^1(B_R)$ instead. In fact, we have a good initial guess,
given by the target circuit $\chi_{\Omega_0}$, and it is reasonable to assume that the
optimal mask will be a rather small perturbation of $\Omega_0$ itself. In fact, under our assumptions, by the arguments developed in the proof of Theorem~\ref{mainteo},
we can show that $\Omega(u)$ has the same topological properties of $D$, where $\chi_D$ is the element of $\mathcal{A}$ which is closest to $u$. Therefore if we look for
a set $\Omega(u)$ as close as possible to $\Omega_0$, then at least we need to require that the set $D$ has the same topological properties of $\Omega_0$.
For this reason and
since $\Omega_0\in \mathcal{A}$, it might be essentially the same to perform the minimization in a small neighbourhood of $\mathcal{A}$ or in the whole $A$. On the other hand, again by our assumptions, we notice that whenever the boundary of $\Omega_0$ presents a corner, and this is often the case, as $\partial\Omega_0$ is often
the union of a finite number of segments, then $\Omega_0$ cannot be reconstructed in
an exact way, since $\Omega(u)$, for any $u\in\mathcal{A}_{\gamma}$, is a $C^{1,\alpha}$ set,
thus its boundary cannot have any corner.
Besides dealing with the class $\mathcal{A}_{\gamma}$,
there are several other difficulties. In particular, computing $F_0(\chi_E)$ for some $E\subset B_R$ is not an easy task, since it involves at least the computation of the perimeters of $E$ and of $\Omega(\chi_E)$. Furthermore, solving a minimization problem in the class of sets of finite perimeter
is not a
straightforward task from the numerical point of view.
In order to solve these difficulties, we use the following strategy. We approximate, in the sense of $\Gamma$-convergence, the functional
$F_0$
with a family of functionals $\{F_{\varepsilon}\}_{\varepsilon>0}$ which are easier
to compute numerically and are
defined on a set of smooth functions.
As in Section 2.3, we take a $C^{\infty}$ function $\phi:\mathbb{R}\to\mathbb{R}$ such that $\phi$
is nondecreasing, $\phi(t)=0$ for any $t\leq -1/2$ and $\phi(t)=1$ for any $t\geq 1/2$.
For any $\eta>0$ and any $\tau\in\mathbb{R}$, let
$$\phi_{\eta, \tau}(t)=\phi\left(\frac{t-\tau}{\eta}\right),\quad\text{for any }t\in\mathbb{R}.$$
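A concrete choice of such a $\phi$ (one standard construction among many; the implementation below is only an illustration of ours) is obtained by gluing exponential bump factors, as in the following Python sketch.
\begin{verbatim}
import numpy as np

def f(t):
    # f(t) = exp(-1/t) for t > 0, extended by 0: C^infty on all of R.
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    pos = t > 0
    out[pos] = np.exp(-1.0 / t[pos])
    return out

def phi(t):
    # C^infty, nondecreasing, phi(t) = 0 for t <= -1/2 and phi(t) = 1
    # for t >= 1/2; the denominator never vanishes.
    return f(t + 0.5) / (f(t + 0.5) + f(0.5 - t))

def phi_eta_tau(t, eta, tau):
    return phi((np.asarray(t, dtype=float) - tau) / eta)

# Example: the smoothed threshold phi_{eta,h} applied to intensity values.
print(phi_eta_tau(np.linspace(0.0, 1.0, 11), eta=0.1, tau=0.5))
\end{verbatim}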
Then we have the following result.
\begin{prop}\label{uniformcontprop}
For any $\eta>0$, let
$\Phi_{\eta}:A\to C^{1,\alpha}(\mathbb{R}^2)$
be defined as
$$\Phi_{\eta}(u)=\phi_{\eta,h}(I_{s,\sigma}(u)),\quad\text{for any }u\in A.$$
Then, for any $\eta$, $0<\eta\leq h$,
$\Phi_{\eta}$ is H\"older continuous, with exponent $1/p$,
from $A$, with the $L^1(B_R)$ norm,
into $C^{1,\alpha}(\mathbb{R}^2)$, with its usual norm.
Furthermore, as $\eta\to 0^+$, $(\mathcal{W}-\Phi_{\eta}):\mathcal{A}_{\gamma}\to BV(B_{\tilde{R}})$ converges uniformly to zero on $\mathcal{A}_{\gamma}$
with respect to the distance $d_{st}$ on $BV(B_{\tilde{R}})$.
\end{prop}
\proof{.} The continuity property of $\Phi_{\eta}$ immediately follows
by the continuity of $\mathcal{P}_{J,ss_0}$ and by the properties of $\phi$. We just note that
the H\"older exponent is fixed, whereas the H\"older constant might depend upon
$\eta$.
About the convergence result,
we begin by recalling that $\mathcal{W}(u)=\mathcal{H}_h(I_{s,\sigma}(u))$,
$u\in\mathcal{A}_{\gamma}$.
We use Claim~\ref{claim1} introduced in the proof of Theorem~\ref{mainteo}.
We call $t_0=\min(1,\sup\{g(t)\,:\ t\in[0,+\infty)\})$ and $\tau_0$ the positive real number
such that $g(\tau_0)=t_0/2$ (we write $\tau_0$ to avoid confusion with the constant $s_0$ fixed above). We call $g^{-1}:[0,t_0/2]\to [0,\tau_0]$ the continuous, increasing function which is the inverse of $g$ on such intervals.
For any $\eta$, $0<\eta\leq t_0$,
we infer that $\Phi_{\eta}(u)(x)$ might be different from
$\mathcal{W}(u)(x)$ only if $x\in B_{g^{-1}(\eta/2)}(\partial\Omega(u))$.
By estimates like \eqref{length} and \eqref{neigh}, which are independent of $u\in\mathcal{A}_{\gamma}$, we obtain that
$\|(\mathcal{W}-\Phi_{\eta})(u)\|_{L^1(B_{\tilde{R}})}$ converges to zero, as
$\eta\to 0^+$, uniformly for $u\in\mathcal{A}_{\gamma}$.
For any $t\in\mathbb{R}$ and any $u\in\mathcal{A}_{\gamma}$,
we call
$$P(u,t)=P(\{x\in\mathbb{R}^2\,:\ I_{s,\sigma}(u)(x)>t\},B_{\tilde{R}}).$$
It remains to prove that, as $\eta\to 0^+$, $|D(\Phi_{\eta}(u))|(B_{\tilde{R}})=
\int_{B_{\tilde{R}}}|\nabla(\Phi_{\eta}(u))|$
converges to $|D(\mathcal{W}(u))|(B_{\tilde{R}})=P(u,h)$ uniformly for
$u\in\mathcal{A}_{\gamma}$.
We argue in the following way. We have that, for any $\eta$, $0<\eta\leq h$,
$$|D(\Phi_{\eta}(u))|(B_{\tilde{R}})=
\int_{B_{\tilde{R}}}|\nabla(\Phi_{\eta}(u))|=\int_{B_{\tilde{R}}}|\phi'_{\eta,h}(I_{s,\sigma}(u))||\nabla (I_{s,\sigma}(u))|.$$
Since $\phi'_{\eta,h}\geq 0$, and for $\eta$ small enough, uniformly with respect to $u\in\mathcal{A}_{\gamma}$,
$\phi'_{\eta,h}(I_{s,\sigma}(u))=0$ outside $B_{\tilde{R}}$,
without loss of generality, we have that
$$|D(\Phi_{\eta}(u))|(B_{\tilde{R}})=\int_{\mathbb{R}^2}\phi'_{\eta,h}(I_{s,\sigma}(u))|\nabla (I_{s,\sigma}(u))|.$$
By the coarea formula,
$$
|D(\Phi_{\eta}(u))|(B_{\tilde{R}})=
\int_{-\infty}^{+\infty}\left(\int_{\{I_{s,\sigma}(u)=t\}}
\phi'_{\eta,h}(t)
\mathrm{d} \mathcal{H}^1(y)\right)\mathrm{d} t .
$$
Therefore,
$$
|D(\Phi_{\eta}(u))|(B_{\tilde{R}})=
\frac{1}{\eta}\int_{-\infty}^{+\infty}
\phi'\left(\frac{t-h}{\eta}\right)P(u,t)
\mathrm{d} t =
\int_{-1/2}^{+1/2}
\phi'(s)P(u,s\eta+h)
\mathrm{d} s .
$$
Since $|D(\mathcal{W}(u))|(B_{\tilde{R}})=P(u,h)$ and
$\int_{-1/2}^{+1/2}
\phi'(s)\mathrm{d} s=1$, we obtain that
$$
\big||D(\Phi_{\eta}(u))|(B_{\tilde{R}})-|D(\mathcal{W}(u))|(B_{\tilde{R}})\big|\leq
\int_{-1/2}^{+1/2}
\phi'(s)|P(u,s\eta+h)-P(u,h)|
\mathrm{d} s.
$$
It remains to show that, as $\eta\to 0^+$,
$\sup\{|P(u,t+h)-P(u,h)|\,:\ t\in [-\eta/2,+\eta/2]\}$ goes to zero uniformly with respect to $u\in\mathcal{A}_{\gamma}$. Therefore the proof is concluded by using the following claim.
\begin{claim}\label{claim2}
There exist a positive constant $\eta_0$ and a continuous, increasing function $g_1:[0,\eta_0]\to[0,+\infty)$, with $g_1(0)=0$, such that
for any $\eta$, $0<\eta\leq \eta_0$, and any $u\in \mathcal{A}_{\gamma}$, we have that
$$\sup\{|P(u,t+h)-P(u,h)|\,:\ t\in [-\eta/2,+\eta/2]\}\leq g_1(\eta).$$
\end{claim}
\bigskip
The proof of Claim~\ref{claim2} is a straightforward, although maybe lengthy to describe,
consequence of the analysis developed in the proof of Theorem~\ref{mainteo}. We leave the details to the reader. We just notice that Claim~\ref{claim2} is a sort of generalization of Claim~\ref{claim1} and the arguments used to prove the two claims are essentially
analogous.\hfill$\square$
\bigskip
We are now in the position of describing the approximating functionals and
proving the $\Gamma$-convergence result. Let us fix a constant $p_1$, $1<p_1<+\infty$, and a continuous function $W:\mathbb{R}\to[0,+\infty)$ such that
$W(t)=0$ if and only if $t\in\{0,1\}$.
Let us denote by $P_{\varepsilon}$, $\varepsilon>0$, the functional defined in
\eqref{modmordef} with $p=p_1$, the function $W$ and $\mathcal{D}=B_R$. We recall that the functional $P$
is defined in \eqref{Pdef}, again with $\mathcal{D}=B_R$.
Then, for any $\varepsilon>0$, let us define
$F_{\varepsilon}:A\to [0,+\infty]$
such that for any $u\in A$ we have
\begin{equation}
F_{\varepsilon}(u)=d_{st}(\Phi_{\eta(\varepsilon)}(u),\chi_{\Omega_0})+
bP_{\varepsilon}(u)
\end{equation}
where $\eta:[0,+\infty)\to[0,+\infty)$ is a continuous, increasing function such that
$\eta(0)=0$.
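For illustration, a rough discrete evaluation of $F_{\varepsilon}$ may be sketched as follows in Python. We stress that this is only a sketch under assumptions of ours: we take the classical quadratic Modica--Mortola form for $P_{\varepsilon}$ with $W(t)=t^2(1-t)^2$, which may differ from \eqref{modmordef} in normalization and exponent, and we replace $d_{st}$ by a finite-difference surrogate.
\begin{verbatim}
import numpy as np

def W(t):
    # Double-well potential vanishing exactly at 0 and 1 (illustrative choice).
    return t**2 * (1.0 - t)**2

def grad_norm(a, h):
    gx, gy = np.gradient(a, h)
    return np.hypot(gx, gy)

def P_eps(u, h, eps):
    # Classical Modica-Mortola approximation of the perimeter (our form).
    g = grad_norm(u, h)
    return ((eps * g**2 + W(u) / eps) * h**2).sum()

def d_st(f, g, h):
    # Surrogate for the strict-convergence distance in BV: L^1 distance plus
    # the difference of the discretized total variations.
    l1 = np.abs(f - g).sum() * h**2
    return l1 + abs((grad_norm(f, h) * h**2).sum()
                    - (grad_norm(g, h) * h**2).sum())

def F_eps(u, Phi_u, chi_omega0, h, eps, b=0.1):
    # Phi_u: precomputed samples of Phi_{eta(eps)}(u), e.g. phi_eta_tau
    # applied to a discretized intensity as in the previous sketches.
    return d_st(Phi_u, chi_omega0, h) + b * P_eps(u, h, eps)

# Tiny synthetic demo (all inputs invented; Phi_u is replaced by u itself).
n = 64
x = np.linspace(-2.0, 2.0, n)
h, eps = x[1] - x[0], 0.05
X, Y = np.meshgrid(x, x, indexing="ij")
u = 0.5 * (1.0 + np.tanh((0.8 - np.hypot(X, Y)) / 0.2))
chi0 = (np.hypot(X, Y) < 0.8).astype(float)
print(F_eps(u, u, chi0, h, eps))
\end{verbatim}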
By the direct method, we can prove that each of the functionals
$F_{\varepsilon}$, $\varepsilon>0$, admits a minimum either over $A$ or over
$\mathcal{A}_{\gamma}$.
The $\Gamma$-convergence result is the following.
\begin{teo}\label{gammaconvteo}
Let us consider the metric space $(X,d)$ where $X=\mathcal{A}_{\gamma}$ and
$d$ is the metric induced by the $L^1$ norm. Then, as $\varepsilon\to 0^+$,
$F_{\varepsilon}$ $\Gamma$-converges to $F_0$
on $X$ with respect to the distance $d$.
\end{teo}
\proof{.} Let us fix a sequence $\{\varepsilon_n\}_{n=1}^{\infty}$ of positive numbers converging to zero as $n\to\infty$. Let, for any $n\in\mathbb{N}$, $F_n=F_{\varepsilon_n}$. We need to prove that $\Gamma\textrm{-}\!\lim_nF_n=F_0$.
Let us also remark that we may extend $F_n$ and $F_0$ over $L^1(\mathbb{R}^2)$
by setting them equal to $+\infty$ outside $\mathcal{A}_{\gamma}$.
Let us define $\tilde{P}_{\varepsilon}$, $\varepsilon>0$, and $\tilde{P}$
as the functionals which are equal to the functionals $P_{\varepsilon}$ and $P$, respectively,
on $\mathcal{A}_{\gamma}$ and $+\infty$ elsewhere.
We recall that $P_{\varepsilon}$, $\varepsilon>0$, and $P$ are defined in \eqref{modmordef} and in \eqref{Pdef}, respectively, with $p=p_1$ and $\mathcal{D}=B_R$.
We observe that, as a
consequence of Proposition~\ref{uniformcontprop} and of the stability of
$\Gamma$-convergence under uniformly converging continuous perturbations,
it is enough to show that $\Gamma\textrm{-}\!\lim_n\tilde{P}_n=\tilde{P}$,
where $\tilde{P}_n=\tilde{P}_{\varepsilon_n}$, $n\in\mathbb{N}$.
Let us prove this $\Gamma$-convergence result.
The $\Gamma$-liminf inequality is an
immediate consequence of Theorem~\ref{Mod-Morteo} and of
the fact that $\mathcal{A}_{\gamma}$ is a closed subset of $L^1(B_R)$.
As for the recovery sequence, we argue in the following way.
If $u\in A$ is such that $\|u-\chi_D\|_{L^1(B_R)}<\gamma$, for some $D\in\mathcal{A}$, then we again use Theorem~\ref{Mod-Morteo} to construct a recovery sequence for such
a function $u$, that is a sequence $\{u_n\}_{n=1}^{\infty}$ contained in $\mathcal{A}_{\gamma}$ such that, as $n\to\infty$, $u_n\to u$ in $L^1(B_R)$
and $\tilde{P}_n(u_n)\to \tilde{P}(u)$.
It remains to study the case when $u\in \partial\mathcal{A}_{\gamma}$ and $\tilde{P}(u)<+\infty$. In this case, we have
that $u=\chi_E$, where $E\subset B_R$ is a set of finite perimeter,
and we
pick $D\in\mathcal{A}$ such that
$\|\chi_E-\chi_D\|_{L^1(B_R)}=|E\Delta D|=\gamma$. Then at least one of the following two cases occurs. Either there exists $x\in B_R\backslash\overline{D}$
such that
$$\lim_{\rho\to 0^+}\frac{|E\cap B_{\rho}(x)|}{|B_{\rho}(x)|}=1$$
or there exists $x\in D$ such that
$$\lim_{\rho\to 0^+}\frac{|E\cap B_{\rho}(x)|}{|B_{\rho}(x)|}=0.$$
We choose an arbitrary sequence $\{\rho_j\}_{j=1}^{\infty}$ of positive numbers
such that $\lim_j\rho_j=0$.
In the first case, for any $j\in\mathbb{N}$, we choose $E_j$ such that
$\chi_{E_j}=\chi_E(1-\chi_{B_{\rho_j}(x)})$.
In the second case, we choose $E_j$ such that
$\chi_{E_j}=\chi_E(1-\chi_{B_{\rho_j}(x)})+\chi_{B_{\rho_j}(x)}$.
We notice that, in either case, for any $j\in\mathbb{N}$, $E_j$ is a set of finite perimeter
such that $\|\chi_{E_j}-\chi_D\|_{L^1(B_R)}<\gamma$. Furthermore,
as $j\to\infty$ we have that $\chi_{E_j}\to\chi_E$ in $L^1(B_R)$ and
$P(E_j)\to P(E)$, that is $\tilde{P}(\chi_{E_j})\to \tilde{P}(\chi_E)$. Then the proof may be concluded
by following the arguments of Section~4.2 in \cite{Bra} which we have briefly recalled in the proof of Theorem~\ref{Mod-Morteo}.\hfill$\square$
\bigskip
We remark that $\Omega_0\in\mathcal{A}$, therefore we may find a
family $\{\tilde{u}_{\varepsilon}\}_{\varepsilon>0}$ such that,
as $\varepsilon\to 0^+$, $\tilde{u}_{\varepsilon}\to \chi_{\Omega_0}$ in $L^1(B_R)$
and
$P_{\varepsilon}(\tilde{u}_{\varepsilon})\to P(\Omega_0)$.
Without loss of generality, we may assume that, for any $\varepsilon>0$,
$0\leq \tilde{u}_{\varepsilon}\leq 1$ almost everywhere in $B_R$ and that
$\tilde{u}_{\varepsilon}\in\mathcal{A}_{\gamma}$. By Proposition~\ref{uniformcontprop},
we conclude that $F_{\varepsilon}(\tilde{u}_{\varepsilon})\to F_0(\chi_{\Omega_0})<+\infty$.
We obtain that for any $\varepsilon_0>0$
there exists a constant $C_1$ such that
\begin{equation}\label{uniformbound}
\min_{\mathcal{A}_{\gamma}}F_{\varepsilon}\leq C_1\quad\text{for any }\varepsilon,\
0<\varepsilon\leq \varepsilon_0.
\end{equation}
Obviously, the same property is shared by the minimum values of $F_{\varepsilon}$
over $A$.
It remains to prove that the functionals $F_{\varepsilon}$ are equicoercive
over $\mathcal{A}_{\gamma}$, that is, that the following result holds.
\begin{prop}\label{equicoerciveprop}
For any $\varepsilon_0>0$,
there exists a compact subset $\mathcal{K}$ of $\mathcal{A}_{\gamma}$ such that for any $\varepsilon$, $0<\varepsilon\leq\varepsilon_0$, we have
$$\min_{\mathcal{K}}F_{\varepsilon}=\min_{\mathcal{A}_{\gamma}}F_{\varepsilon}.$$
\end{prop}
\proof{.}
Let us take the constant $C_1$ as in \eqref{uniformbound}.
Let $u_{\varepsilon}\in\mathcal{A}_{\gamma}$, $0<\varepsilon\leq \varepsilon_0$,
be such that $F_{\varepsilon}(u_{\varepsilon})=\min_{\mathcal{A}_{\gamma}}F_{\varepsilon}$.
Then we observe that the set $\{u_{\varepsilon}\}_{0<\varepsilon\leq\varepsilon_0}$
satisfies the properties of Remark~\ref{compactnessoss} for some constant $C$.
Therefore $\{u_{\varepsilon}\}_{0<\varepsilon\leq\varepsilon_0}$ is precompact in
$L^1(B_R)$ and the proof is concluded.\hfill$\square$
\begin{oss}
With an analogous proof, the same result of Proposition~\ref{equicoerciveprop} holds
if we replace $\mathcal{A}_{\gamma}$ with $A$.
\end{oss}
\bigskip
By Theorem~\ref{gammaconvteo} and Proposition~\ref{equicoerciveprop}, we can apply the Fundamental Theorem of $\Gamma$-convergence to conclude with the following result.
\begin{teo}\label{finalteo}
We have that $F_0$ admits a minimum over $\mathcal{A}_{\gamma}$ and
$$
\min_{\mathcal{A}_{\gamma}} F_0=
\lim_{\varepsilon\to 0^+}\inf_{\mathcal{A}_{\gamma}}
F_{\varepsilon}=\lim_{\varepsilon\to 0^+}\min_{\mathcal{A}_{\gamma}}
F_{\varepsilon}.
$$
Let $\varepsilon_n$, $n\in \mathbb{N}$, be a sequence of positive numbers converging to $0$. For any $n\in\mathbb{N}$, let $F_n=F_{\varepsilon_n}$.
If
$\{u_n\}_{n=1}^{\infty}$ is a sequence contained in $\mathcal{A}_{\gamma}$ which
converges, as $n\to\infty$, to $u\in \mathcal{A}_{\gamma}$ in $L^1(B_R)$ and
satisfies $\lim_n F_n(u_n)=\lim_n\inf_{\mathcal{A}_{\gamma}} F_n$, then $u$ is a minimizer
for $F_0$ on $\mathcal{A}_{\gamma}$, that is $u$
solves the minimization problem \eqref{min0}.
\end{teo}
We conclude with the following remark. With the notation of Theorem~\ref{finalteo}, if
$\{u_n\}_{n=1}^{\infty}$ is a sequence contained in $\mathcal{A}_{\gamma}$ which
satisfies $\lim_n F_n(u_n)=\lim_n\inf_{\mathcal{A}_{\gamma}} F_n$, then, by
Remark~\ref{compactnessoss}, we have that, up to passing to a subsequence,
$\{u_n\}_{n=1}^{\infty}$ actually
converges, as $n\to\infty$, to some function $u\in \mathcal{A}_{\gamma}$ in $L^1(B_R)$.
\section{Discussion}
We have provided a mathematical study of the inverse problem of photolithography. The approach we
propose is to seek an approximate solution by formulating the geometrical problem using a
phase-field method. We further relax the hard threshold involved in image exposure with an approximate
Heaviside function. We show that the variational problem for the approximate solution is well-posed.
This opens a way toward designing mathematically rigorous numerical methods. We further show that,
as the approximation parameter goes to zero, the original optimization problem
involving the geometry is recovered in the limit.
\subsubsection*{Acknowledgements}
The authors learned about the inverse problem of photolithography from Apo Sezginer who gave
a seminar on this topic at the Institute for Mathematics and its Applications in 2004. We
thank Dr.~Sezginer for helpful discussions.
Luca Rondi is partially supported by GNAMPA under 2008 and 2009 projects. Part of this work was done while Luca Rondi was visiting the School of Mathematics at the University of Minnesota, Minneapolis, USA, whose support and hospitality are gratefully
acknowledged. Fadil Santosa's research is supported in part by NSF award DMS0807856.
\section{Introduction}
\label{introduction}
Studies of stellar populations are
of high importance for understanding the formation and
evolution of the Milky Way. In this context, it has been discussed
whether the Galactic halo consists of more than one population. The
monolithic collapse model of Eggen et al.
(\cite{eggen62}) corresponds to a single
population, but from a study of globular clusters, Searle and Zinn
(\cite{searle78}) suggested that the halo
contains two populations: $i)$ an inner, old, flattened
population with a slight prograde rotation formed during a dissipative
collapse, and $ii)$ an outer, younger, spherical population
accreted from dwarf galaxies.
The presence of this dichotomy was supported by a study of
$\sim \! 20\,000$ stars in the SDSS survey performed
by Carollo et al. (\cite{carollo07}).
Elemental abundances of stars in the solar neighborhood may provide
additional information about the halo populations.
In this context, the ratio \alphafe , where $\alpha$ refers to the
average abundance of Mg, Si, Ca, and Ti, is of particular
interest. The $\alpha$-elements are produced mainly
during Type II supernovae (SNe) explosions on a short timescale
($\sim \! 10^7$ years), whereas iron is also produced by Type Ia SNe on a
much longer timescale ($\sim \! 10^9$ years). Hence, \alphafe\ can be
used as a `clock' to probe the star formation history of
a Galactic component.
Several previous studies have focused on the possible differences in
\alphafe\ for stars in the solar neighborhood.
Fulbright (\cite{fulbright02}), Stephens \& Boesgaard (\cite{stephens02}),
and Gratton et al. (\cite{gratton03}) all find evidence that stars
associated with the outer halo have lower
\alphafe\ than stars connected to the inner halo.
The differences in \alphafe\ found in these studies are, however,
not larger than 0.1\,dex, and it is unclear whether the distribution of \alphafe\
is continuous or bimodal. Nissen \& Schuster (\cite{nissen97})
achieved a higher-precision measurement of \alphafe\ and
found evidence of a bimodal distribution of
\alphafe\ for 13 halo stars with $-1.3 < \feh < -0.4$, but
a larger sample of these `metal-rich' halo stars needs to be
observed to confirm these findings and study possible correlations with kinematics.
In this Letter, we present the first results of such a study.
\section{Sample selection and observed spectra}
\label{sect:obs}
Stars were selected from the Schuster et al.
(\cite{schuster06}) $uvby$-$\beta$ catalogue of high-velocity and metal-poor
stars. To ensure that a star has a high probability of belonging to the halo
population, the total space velocity, \Vtotal , with respect to
the local standard of rest (LSR) was constrained to be larger
than 180\,\kmprs\ (Venn et al. \cite{venn04}).
Furthermore, the Str\"{o}mgren indices \mbox{($b\!-\!y)$} , $m_{\rm 1}$ and
$c_{\rm 1}$ were used to select dwarfs and subgiants with
$5200 < \teff < 6300$\,K and $\feh \lower.5ex\hbox{\gtsima} -1.6$.
This produced a list of about 200 stars, of which
37 have VLT/UVES spectra available in the ESO/ST-ECF Science Archive
(Table \ref{table:UVES.RV}).
Furthermore, 16 stars with thick-disk kinematics and UVES spectra were
included.
In addition, 53 randomly selected stars were observed with the FIbre fed
Echelle Spectrograph (FIES) at the Nordic Optical Telescope (NOT)
(Table \ref{table:FIES.RV}).
Six were found to be double-lined spectroscopic binaries
and excluded.
All FIES stars and most of the UVES stars are brighter than $V = 11.1$,
three having $V$ = 11.2, 12.2, and 12.8. The average distance is
115\,pc with $D_{\rm max} = 335$\,pc.
The UVES spectra cover the spectral region 4800 - 6800\,\AA\ and
have resolutions $R\simeq \! 55\,000$ and signal-to-noise ratios
($S/N$) from 250 to 500.
The FIES spectra range from 4000 to 7000\,\AA ,
but only the 4700 - 6400\,\AA\ region was employed,
with a resolution $R\simeq \! 40\,000$ and $S/N \simeq$ 140 - 200.
The majority of the UVES stars had reduced spectra
available in the archive, but for stars
observed with an image slicer, the raw data were reduced
using the echelle package in IRAF.
The FIES data were handled by {\tt FIEStool}, a data
reduction package developed by E. Stempels.
Equivalent widths (EWs) of 130 to 180 atomic lines were measured
for each star. The large majority of the lines have EWs
between 2 and 90\,m\AA . For six stars,
both UVES and FIES spectra are available. The average EW difference
(FIES -- UVES)
is 0.6\,m\AA\ with an rms deviation of 1.3\,m\AA .
\section{Stellar parameters and abundances}
\label{sect:par-abun}
Element abundances are derived from
EWs using the Uppsala
EQWIDTH program together with model atmospheres
interpolated from the new MARCS grid (Gustafsson et
al. \cite{gustafsson08}). Two sets of models are available with different
values of \alphafe , which makes it possible to
interpolate to a model having the same \alphafe\ as the star.
Local thermodynamic equilibrium (LTE) is assumed in the line
calculations, and line broadening caused by microturbulence, \mbox{$\xi_{\rm turb}$} , and
collisional damping is included.
The abundance analysis
is performed differentially with respect to two bright thick-disk stars,
\object{HD\,22879} and \object{HD\,76932}. Their effective temperatures
are determined from \mbox{($b\!-\!y)$}\ and \mbox{($V\!-\!K)$}\ using the calibrations of
Ram\'{\i}rez \& Mel\'{e}ndez (\cite{ramirez05}).
Surface gravities are derived from Hipparcos
parallaxes as described by Nissen et al. (\cite{nissen04}),
and chemical abundances
from a differential analysis with respect to the Sun,
using a subset of $\sim \! 80$ lines, which
are relatively unblended in the solar flux spectrum
of Kurucz et al. (\cite{kurucz84}).
In an `inverted' abundance analysis, the data
from the star-Sun analysis are then used to determine
$gf$-values for the whole set of $\sim \! 180$ lines.
These $gf$-values are applied for the abundance analysis
of all program stars.
We then determine \teff\ so that the
\feh\ derived from the \FeI\ lines is independent of
excitation potential. As the \FeI\ lines are also
used to determine \mbox{$\xi_{\rm turb}$}\ by minimizing the dependence of \feh\
on EW, iteration is needed
to obtain consistent values of \teff\ and \mbox{$\xi_{\rm turb}$} .
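Schematically, this determination of \teff\ and \mbox{$\xi_{\rm turb}$}\ is an alternating zero-slope search. The Python sketch below shows only the structure of the iteration: the function {\tt line\_abundances} is a hypothetical stand-in for the actual model-atmosphere analysis of the measured EWs, here replaced by an invented linear response so that the example runs.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
# For each Fe I line: excitation potential chi (eV) and equivalent width (mA).
lines = {"chi": rng.uniform(0.0, 5.0, 80), "ew": rng.uniform(2.0, 90.0, 80)}

def line_abundances(teff, xi):
    # Hypothetical stand-in for the real EW analysis with MARCS models:
    # an invented linear response whose zero-slope solution is, by
    # construction, teff = 5750 K and xi = 1.2 km/s.
    return (-1.0
            + 1e-4 * (teff - 5750.0) * (lines["chi"] - 2.5)
            - 5e-3 * (xi - 1.2) * (lines["ew"] - 45.0))

def slope(x, y):
    return np.polyfit(x, y, 1)[0]

teff, xi = 6000.0, 1.0
for _ in range(20):
    # Adjust teff until [Fe/H] is independent of excitation potential, then
    # xi until it is independent of EW; iterate to mutual consistency.
    teff -= 2e3 * slope(lines["chi"], line_abundances(teff, xi))
    xi += 1e2 * slope(lines["ew"], line_abundances(teff, xi))

print(round(teff), round(xi, 2))  # converges to ~5750, ~1.2 (ad hoc steps)
\end{verbatim}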
We estimate a differential error of $\sigma (\teff )$ = $\pm 30$\,K
by comparing \teff\ values derived from the \FeI\ excitation balance
with those inferred from \mbox{($b\!-\!y)$}\ and \mbox{($V\!-\!K)$}\ colors
for a subset of 44 nearby stars that appear to be
unreddened according to the absence of interstellar NaD lines.
The surface gravity is estimated by ensuring that
\FeI\ and \FeII\ lines provide consistent Fe abundances.
Comparison of these spectroscopic gravities
with values determined from Hipparcos parallaxes
for the nearby stars shows that \logg\ is determined differentially
to a precision of 0.05\,dex.
The derived abundance ratios of Na, Mg, Si, Ca, Ti, Cr, and Ni with
respect to Fe are given in Tables \ref{table:UVES} and \ref{table:FIES}.
All abundance ratios are based on neutral lines.
The numbers of lines used are
\NaI\ 2-5, \MgI\ 1-2, \SiI\ 5-10, \CaI\ 6-9, \TiI\ 9-14, \CrI\ 4-7, \FeI\ 70-92, \FeII\ 14-16, and \NiI\ 20-27, where the first number refers to
the most metal-poor stars, and the last to the most metal-rich.
The errors in the abundance
ratios were estimated by comparing results obtained from
UVES and FIES spectra for the six stars observed with
both instruments (see Tables \ref{table:UVES} and \ref{table:FIES}).
The spectra of these stars have typical $S/N$,
except \object{HD\,189558}, which has an unusually high-quality FIES spectrum
($S/N \simeq 350$).
This comparison shows that differential values of \feh , \nafe , \mgfe , and \sife\
are determined to a 1-$\sigma$ precision of 0.03 to 0.04\,dex, whereas
the precision of \cafe , \tife , \crfe , and \alphafe\ is about 0.02\,dex.
The error in \nife\ is as small as 0.01\,dex, because of the many
\FeI\ and \NiI\ lines available. We note that errors in
the abundance ratios caused by errors in \teff\ and \logg\ are
small compared to errors induced by the EW measurements, because all ratios
are derived from neutral atomic lines with similar sensitivity to \teff\ and \logg .
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{13877fig1.eps}}
\caption{\mgfe\ and \alphafe\ versus \feh . Crosses refer to thick-disk stars
and circles to halo stars observed with UVES. Triangles indicate halo
stars with FIES spectra. Halo stars above the long-dashed line in the
\mgfe\ diagram are defined as belonging to the high-$\alpha$ population
and are indicated by open (blue) symbols.
The stars below the long-dashed line are defined to be low-$\alpha$
stars and are shown with filled (red) symbols.
Based on \mgfe , this classification is maintained in all the following figures.
The components of a visual binary star,
\object{G\,112-43} and \object{G\,112-44}, are
connected by a straight line.}
\label{fig:mg.alpha-fe}
\end{figure}
Figure \ref{fig:mg.alpha-fe} shows \mgfe\ and \alphafe\
as a function of \feh . We note that there are no systematic offsets
between the UVES and the FIES data. The corresponding figure for
\sife , \cafe , and \tife\ is shown in the Online Section.
As can be seen, the halo stars consist of two distinct populations,
the `high-$\alpha$' stars with a nearly constant \alphafe\
and the `low-$\alpha$' stars with a declining
\alphafe\ as a function of increasing metallicity. A
classification into these two populations was performed
on the basis of \mgfe . In the range $-1.6 < \feh
< -1.4$, the two populations tend to merge, and the classification
is less clear. The high-$\alpha$ and low-$\alpha$ halo populations
also separate well in \nafe\ and \nife , with the exception of two Na-rich stars.
The abundance differences can be seen directly from the observed
spectra as shown in the Online Section.
The scatter in the abundance ratios for the high-$\alpha$
and thick-disk stars relative to the best-fit linear relations
is 0.032\,dex for \mgfe\ and 0.030\,dex for \alphafe .
This is similar to the estimated errors of the analysis.
For the low-$\alpha$ stars, there are, however, abundance differences
from the trends that cannot be explained by the errors alone,
especially in the case of \nafe\ and \nife . The clear correlation
between these ratios (Fig. \ref{fig:ni-na}) confirms that
cosmic variations in these ratios are present at a given \feh .
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{13877fig2.eps}}
\caption{\nife\ versus \nafe\ with the same symbols
as in Fig. \ref{fig:mg.alpha-fe}. The linear fit
does not include the two Na-rich stars.}
\label{fig:ni-na}
\end{figure}
\section{Kinematics}
\label{sect:kinematics}
To calculate the stellar space velocities, we acquired
proper motions from
the Tycho--2 catalogue (H{\o}g et al. \cite{hoeg00}, 88 stars),
the new reduction of the Hipparcos data (van Leeuwen \cite{leeuwen07}, 4 stars),
and the revised NLTT (Salim \& Gould \cite{salim03}, 2 stars).
Distances were calculated from the parallaxes
of van Leeuwen (\cite{leeuwen07}) when their errors are less than
10\%, and otherwise from the photometric absolute-magnitude calibration
by Schuster et al. (\cite{schuster04}, \cite{schuster06}).
The radial velocities of the stars were derived from
our own spectra and have errors of $\pm 0.3$\,\kmprs .
With these data as input, the formulae and matrix equations of
Johnson \& Soderblom (\cite{johnson87}) were used to calculate
the Galactic velocity components ($U, V, W$) and their errors.
Correction for a solar motion of (+7.5, +13.5, +6.8)\,\kmprs\
with respect to the LSR was adopted from
Francis \& Anderson (\cite{francis09}). The resulting values of
$U_{\rm LSR}, V_{\rm LSR}$, and $W_{\rm LSR}$ are given in
Tables \ref{table:UVES} and \ref{table:FIES}.
The average errors of these velocities for the halo stars
are $(\pm 12, \pm 16, \pm 9)$\,\kmprs\ with a major contribution
from the error in the distances.
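The Johnson \& Soderblom matrix algebra is nowadays conveniently reproduced with standard tools. A minimal sketch using astropy (with made-up input values, not one of the programme stars, and assuming that astropy's Galactic-frame axes, $x$ toward the Galactic centre, $y$ along the rotation, $z$ toward the north Galactic pole, match the $U,V,W$ sign convention adopted here) reads:
\begin{verbatim}
from astropy.coordinates import SkyCoord, Galactic
import astropy.units as u

star = SkyCoord(ra=229.3*u.deg, dec=-12.6*u.deg,      # invented values
                distance=85.0*u.pc,
                pm_ra_cosdec=-310.0*u.mas/u.yr,
                pm_dec=142.0*u.mas/u.yr,
                radial_velocity=-48.0*u.km/u.s)

# Cartesian Galactic velocity: d_x, d_y, d_z correspond to U, V, W
v = star.transform_to(Galactic()).velocity
U = v.d_x.to(u.km/u.s)
V = v.d_y.to(u.km/u.s)
W = v.d_z.to(u.km/u.s)

# Solar motion of Francis & Anderson (2009), as adopted in the text
U_lsr, V_lsr, W_lsr = U + 7.5*u.km/u.s, V + 13.5*u.km/u.s, W + 6.8*u.km/u.s

# Total space velocity used in the Toomre-diagram classification
v_total = (U_lsr**2 + V_lsr**2 + W_lsr**2)**0.5
print(U_lsr, V_lsr, W_lsr, v_total)          # halo-like if > ~180 km/s
\end{verbatim}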
Figure \ref{fig:Toomre} shows the Toomre diagram for the thick-disk and
halo stars that could be clearly classified as belonging to
either the high-$\alpha$ or the low-$\alpha$ population.
Assuming Gaussian velocity
distributions with canonical dispersions and asymmetric drifts
for the thin-disk, thick-disk, and halo populations, stars with
$\Vtotal > 180$\,\kmprs\ generally have a high probability of belonging
to the halo population (Venn et al. \cite{venn04}).
If, on the other hand, the velocity distribution of the thick disk is
non-Gaussian with an extended tail toward high velocities, as in
the model of the Galactic disks by Sch\"{o}nrich \& Binney
(\cite{schonrich09}), then the high-$\alpha$ stars with
$180 < \Vtotal < 210$\,\kmprs\ might belong to the thick-disk population.
Nevertheless, the remaining high-$\alpha$ halo stars exhibit a
well-defined trend that is clearly separated from that of
the low-$\alpha$ stars.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{13877fig3.eps}}
\caption{Toomre diagram for stars with $\feh > -1.4$.
{\rm High-$\alpha$ halo stars are shown with open (blue) circles,
low-$\alpha$ halo stars with filled (red) circles, and thick-disk stars
with crosses.} The long-dashed line corresponds to $V_{\rm total} = 180$\,\kmprs .
The short-dashed line indicates zero rotation in the Galaxy.}
\label{fig:Toomre}
\end{figure}
\section{Discussion}
\label{sect:discussion}
As discussed in detail by Gilmore \& Wyse
(\cite{gilmore98}), the near-constancy of \alphafe\
for the high-$\alpha$ and thick-disk stars suggests that they
formed in regions with such a high star formation rate that only
Type II SNe contributed to their chemical enrichment up to
$\feh \simeq -0.4$.
On the other hand, the low-$\alpha$ stars originate in
regions with a relatively slow chemical evolution so that
Type Ia SNe have started to contribute iron at
$\feh \simeq -1.5$ causing \alphafe\ to decrease toward higher
metallicities.
The distinction between the two halo populations is
greater for \mgfe\ than for both \cafe\ and \tife ,
probably because of different SNe Ia yields.
According to Tsujimoto et al. (\cite{tsujimoto95}),
the relative contributions of SNe Ia to the solar abundances
are negligible for Mg, 17\% for Si, 25\% for Ca, and
57\% for Fe.
As discussed by Venn et al. (\cite{venn04}), the yields of the
neutron-rich isotopes $^{23}$Na and $^{58}$Ni from massive stars
are controlled by the neutron excess, which depends on the initial
heavy-element abundance (Arnett \cite{arnett71}).
It would be interesting to investigate
in more detail whether these dependences could explain
the underabundances of Na and Ni in low-$\alpha$ stars
and the correlation seen in Fig. \ref{fig:ni-na}.
As seen in the Toomre diagram, the high-$\alpha$ stars show
evidence of being more bound to the Galaxy and
favoring prograde Galactic orbits, while the low-$\alpha$
stars are less bound with two-thirds of them being on
retrograde orbits. This suggests that the high-$\alpha$
population is connected to a dissipative component of the Galaxy
that experienced rapid chemical evolution similar to that of the thick disk,
whereas the low-$\alpha$ stars were accreted
from dwarf galaxies that had lower star formation rates.
Present-day dwarf spheroidal galaxies tend to have even
lower values of \alphafe , \nafe , and \nife\
than the low-$\alpha$ halo stars for the range $-1.6 < \feh < -0.8$
(Tolstoy et al. \cite{tolstoy09}). This offset agrees with
the predictions of the simulations of
a hierarchically formed stellar halo in a
$\Lambda$CDM Universe by Font et al. (\cite{font06}).
The bulk of halo stars originate from early accreted,
massive dwarf galaxies with efficient star formation, whereas
surviving satellite galaxies in the outer halo are on average
of lower mass and experience slower chemical evolution with a greater
contribution from Type Ia SNe at a given metallicity.
The predicted \mgfe\ versus \feh\ relation for the accreted halo stars agrees
remarkably well with the trend for the low-$\alpha$ halo stars. However,
Font et al. do not explain the existence of high-$\alpha$ halo stars.
Two $\Lambda$CDM simulations suggest a dual origin
of stars in the inner Galactic halo. Purcell et al. (\cite{purcell09})
propose that ancient stars formed in the Galactic disk may be ejected
into the halo by the merging of satellite galaxies, and
Zolotov et al. (\cite{zolotov09}) find that stars formed out of accreted
gas in the inner 1\,kpc of the Galaxy can be displaced into the halo through
a succession of mergers. Alternatively, the high-$\alpha$ population
might have formed as the first stars in a dissipative collapse
of a proto-Galactic gas cloud (Gilmore et al. \cite{gilmore89};
Schuster et al. \cite{schuster06}, Sect. 8.2).
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{13877fig4.eps}}
\caption{\nafe\ versus $U_{\rm LSR}$ for stars with $\feh > -1.4$.
The same symbols as in Fig. \ref{fig:Toomre} are used.}
\label{fig:na-U}
\end{figure}
The retrograde low-$\alpha$ stars in Fig. \ref{fig:Toomre}
have an average Galactic rotation velocity of
$V_{\rm LSR} \simeq -260$\,\kmprs , which is close to that of the
$\omega$\,Cen globular cluster (Dinescu et al. \cite{dinescu99}).
As often discussed (e.g., Bekki \& Freeman \cite{bekki03}), $\omega$\,Cen is
probably the nucleus of a captured satellite galaxy with its
own chemical enrichment history. Meza et al. (\cite{meza05})
simulated the orbital characteristics of the tidal debris of
such a satellite dragged into the
Galactic plane by dynamical friction. The captured stars
have rather small $W$-velocities but a wide, double-peaked
$U$-distribution, similar to the $W$-$U$ distribution
observed for the low-$\alpha$ halo (see Online Section).
As shown in Fig. \ref{fig:na-U}, stars with
extreme $U$ velocities tend to have the lowest \nafe\ values,
which corroborates their having a special origin.
In support of a connection between low-$\alpha$ stars and
$\omega$\,Cen, we note that stars in this globular cluster
exhibit a wide range of \feh\ values and a decline in \alphafe\
for metallicities above $\feh \sim -1$ (Origlia et al. \cite{origlia03}).
Johnson et al. (\cite{johnson09}), on the other hand,
find that \nafe\ in $\omega$\,Cen red giants increases from about $-0.2$\,dex
at $\feh \sim -1.7$ to +0.8\,dex at $\feh \sim -1.0$.
A similar increase is not seen for the low-$\alpha$ halo stars.
Enhancements of Na and a
Na-O anticorrelation are present in all well-studied globular clusters
(Carretta et al. \cite{carretta09}) and may be caused by the
chemical enrichment from intermediate-mass AGB stars undergoing
hot-bottom hydrogen burning. According to the hydrodynamical
simulations of D'Ercole et al. (\cite{dercole08}), the gas ejected
from these AGB stars collects in the cluster core via cooling flows,
which may explain the difference in \nafe\ between stars remaining in
$\omega$\,Cen itself and those originating in the progenitor galaxy.
We conclude that the derived abundance ratios provide clear evidence
of two distinct populations of stars that are among the most
metal-rich in the Galactic halo. The failure of previous studies
to detect this dichotomy may be ascribed to the lower precision
of the abundances for less homogeneous samples of stars
and to a greater focus on metal-poor stars.
The high-$\alpha$
stars may be ancient disk or bulge stars `heated' to halo kinematics
by merging satellite galaxies or they could be the first stars
formed in a dissipative
collapse of a proto-Galactic gas cloud. The low-$\alpha$ stars are probably
accreted from dwarf galaxies, and some are likely
to be associated with the $\omega$\,Cen progenitor galaxy.
Further studies of possible
correlations between the abundance ratios and orbital parameters of
the stars may help us to clarify the origin of the two populations.
\begin{acknowledgements}
We thank the anonymous referee for comments and suggestions, which
helped to improve this Letter significantly.
\end{acknowledgements}
\section{Introduction}
Supersymmetry provides a natural solution to
the gauge hierarchy problem in the Standard Model (SM). In
supersymmetric SMs with $R$ parity under which the SM particles
are even while their supersymmetric partners are odd,
the gauge couplings for $SU(3)_C$, $SU(2)_L$ and $U(1)_Y$
gauge symmetries are unified at about $2\times 10^{16}$~GeV~\cite{Ellis:1990zq},
the lightest supersymmetric particle (LSP) like the neutralino can
be the cold dark matter candidate~\cite{Ellis:1983wd, Goldberg:1983nd},
and the electroweak precision constraints
can be evaded, etc. In particular, gauge coupling unification~\cite{Ellis:1990zq}
strongly suggests Grand Unified Theories (GUTs), which explain the
quantum numbers of the SM fermions and charge quantization.
Thus, the great challenge is how to test the supersymmetric
GUTs at the Large Hadron Collider (LHC), the future International
Linear Collider (ILC), and other experiments.
In the supersymmetric SMs, supersymmetry is broken in
the hidden sector, and then its breaking effects
are mediated to the SM observable sector. However,
the relations between the supersymmetric particle
(sparticle) spectra and
the fundamental theories can be very complicated and model
dependent. Interestingly, compared to the supersymmetry
breaking soft masses for squarks and sleptons, the gaugino masses
have the simplest form and appear to be the least model
dependent. With gravity mediated supersymmetry breaking
in GUTs, we have a universal gaugino mass $M_{1/2}$ at the
GUT scale, which is called the minimal supergravity (mSUGRA)
scenario~\cite{mSUGRA}. Thus, we have the gauge coupling
relation and the gaugino mass
relation at the GUT scale $M_{\rm GUT}$:
\begin{eqnarray}
{{1}\over {\alpha_3}} ~=~ {{1}\over {\alpha_2}}
~=~ {{1}\over {\alpha_1}} ~,~\,
\label{mSUGRA-C}
\end{eqnarray}
\begin{eqnarray}
{{M_3}\over {\alpha_3}} ~=~ {{M_2}\over {\alpha_2}}
~=~ {{M_1}\over {\alpha_1}} ~,~\,
\label{mSUGRA}
\end{eqnarray}
where $\alpha_3$, $\alpha_2$, and $\alpha_1\equiv 5\alpha_{Y}/3$ are gauge
couplings respectively for $SU(3)_C$, $SU(2)_L$,
and $U(1)_Y$ gauge symmetries, and
$M_3$, $M_2$, and $M_1$ are the masses
for $SU(3)_C$, $SU(2)_L$, and $U(1)_Y$
gauginos, respectively. Interestingly,
$1/\alpha_i$ and $M_i/\alpha_i$ satisfy the same
equation $x_3=x_2=x_1$ at the GUT scale, which will be
proved as a general result.
Because $M_i/\alpha_i$ are
constant under one-loop renormalization group
equation (RGE) running, we obtain that
the above gaugino mass relation in Eq.~(\ref{mSUGRA}) is valid
from the GUT scale to the electroweak scale at one loop.
Note that the two-loop RGE running effects on gaugino masses
are very small, thus, we can test this gaugino mass relation
at the LHC and ILC where the gaugino masses can be
measured~\cite{Cho:2007qv, Barger:1999tn}.
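For completeness, this scale invariance follows directly from the standard one-loop RGEs for the gauge couplings and gaugino masses, a generic fact independent of the particular GUT:
\begin{eqnarray}
\mu {{d \alpha_i}\over {d \mu}} ~=~ {{b_i}\over {2\pi}}\, \alpha_i^2~,~~~
\mu {{d M_i}\over {d \mu}} ~=~ {{b_i}\over {2\pi}}\, \alpha_i M_i
~~~\Longrightarrow~~~
\mu {{d}\over {d \mu}} \left({{M_i}\over {\alpha_i}}\right) ~=~ 0~.~\,
\end{eqnarray}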
However, the SM gauge couplings in GUTs need not
be unified at the GUT scale after the GUT gauge symmetry
breaking since the high-dimensional
operators will contribute to the different gauge kinetic terms for
the $SU(3)_C$, $SU(2)_L$, and $U(1)_Y$ gauge
symmetries~\cite{Hill:1983xh, Shafi:1983gz, Ellis:1984bm, Ellis:1985jn, Drees:1985bx}.
Furthermore, we will have non-universal
gaugino masses at the GUT scale as
well~\cite{Ellis:1985jn, Drees:1985bx, Anderson:1999uia,
Chamoun:2001in, Chakrabortty:2008zk, Martin:2009ad, Bhattacharya:2009wv,
Feldman:2009zc, Chamoun:2009nd}. In particular,
in GUTs with a large number of fields, the renormalization
effects significantly decrease the scale at which quantum
gravity becomes strong, so these high-dimensional operators
are indeed important and need to be considered
seriously~\cite{Calmet:2008df}.
Therefore, the key question is whether we still have the
gaugino mass relations that can be tested at
the LHC and ILC. It is amusing to notice that the first systematic
studies for $SU(5)$ models
in the framework of $N=1$ supergravity for non-universal
gauge couplings and gaugino masses at the GUT scale were done
twenty-five years ago~\cite{Ellis:1985jn}.
On the other hand, in F-theory model building~\cite{Vafa:1996xn,
Donagi:2008ca, Beasley:2008dc, Beasley:2008kw, Donagi:2008kj,
Font:2008id, Jiang:2009zza, Blumenhagen:2008aw, Jiang:2009za,
Li:2009cy, Leontaris:2009wi, Li:2010mr}, the
GUT gauge fields are on the observable seven-branes which
wrap a del Pezzo $n$ surface $dP_n$ for the extra four space
dimensions. The
SM fermions and Higgs fields are on the complex codimension-one
curves (two-dimensional real subspaces) in $dP_n$, and
the SM fermion Yukawa couplings arise from the intersections
of the SM fermion and Higgs curves.
A brand new feature is that the $SU(5)$ gauge symmetry
can be broken down to the SM gauge symmetry
by turning on $U(1)_Y$
flux~\cite{Beasley:2008dc, Beasley:2008kw, Li:2009cy},
and the $SO(10)$ gauge
symmetry can be broken down to the $SU(5)\times U(1)_X$
and $SU(3)\times SU(2)_L\times SU(2)_R\times U(1)_{B-L}$
gauge symmetries by turning on the $U(1)_X$ and $U(1)_{B-L}$
fluxes, respectively~\cite{Beasley:2008dc, Beasley:2008kw,
Jiang:2009zza, Jiang:2009za, Font:2008id, Li:2009cy}.
It has been shown that the gauge kinetic functions receive
the corrections from $U(1)$ fluxes~\cite{Donagi:2008kj,
Blumenhagen:2008aw, Jiang:2009za, Li:2009cy}. Thus, whether we
can test F-theory GUT at the LHC and ILC is another
interesting question~\cite{Li:2010mr}.
In this paper, we consider the generalization of the mSUGRA (GmSUGRA).
In GUTs with gravity mediated supersymmetry
breaking, we study for the first time the generic gauge
coupling relations at the GUT scale, and the general
gaugino mass relations which are valid
from the GUT scale to the electroweak scale at one loop.
Interestingly, the gauge coupling relations and the
gaugino mass relations at the GUT scale are given by the same
equation. In other words,
$1/\alpha_i $ and $M_i/\alpha_i$ satisfy the same
equation at the GUT scale respectively for
the gauge coupling relations and the
gaugino mass relations. Thus, we define
the index $k$ for these relations, which can be calculated in
GUTs and can be determined at the LHC and ILC. Therefore,
we present a concrete definition of the GUT scale in these theories,
and suggest a new way to test general GUTs at the LHC and ILC.
Also, we discuss five special scenarios with interesting possibilities.
With our generic formulae,
we present all the GUT-scale gauge coupling relations and all
the gaugino mass relations in the $SU(5)$ and
$SO(10)$ models, and calculate the corresponding indices.
Especially, in the
traditional $SU(5)$ and $SO(10)$ models that have been
studied extensively thus far, the index $k$ is $5/3$,
which was first pointed out for $SU(5)$ models in Ref.~\cite{Ellis:1985jn}.
Moreover, we give the field theory realization of the $U(1)$ flux
effects on the SM gauge kinetic functions in F-theory GUTs. We find
that in the $SU(5)$ and $SO(10)$ models respectively
with $U(1)_Y$ and $U(1)_{B-L}$ fluxes, the index $k$ is $5/3$,
while in the $SO(10)$ models with $U(1)_X$ flux, the gauge coupling
relation and the gaugino
mass relation are the same as these in the mSUGRA. Furthermore,
in four-dimensional
GUTs, the GUT gauge symmetry breaking may also affect the supersymmetry
breaking scalar masses, trilinear soft terms as well as
the SM fermion Yukawa couplings, which
will be studied elsewhere~\cite{TLDN-P}.
\section{Gauge Coupling Relations and Gaugino Mass Relations}
After the GUT gauge symmetry breaking,
we can parametrize the gauge kinetic functions
$f_3$, $f_2$ and $f_1$ respectively
for $SU(3)_C$, $SU(2)_L$ and $U(1)_Y$ gauge symmetries
at the GUT scale as follows
\begin{eqnarray}
f_i &=& \sum_m a'_{m}\tau_m + \epsilon \left (\sum_{n} a_{in} S_n \right)~,~\,
\end{eqnarray}
where the first term is the original GUT gauge kinetic function,
and the second term arises from the GUT gauge symmetry
breaking. $\epsilon$ is
a small parameter close to the ratio between the GUT Higgs vacuum
expectation value (VEV) and the fundamental scale $M_*$.
$\tau_m$ and $S_n$ are the hidden sector fields whose $F$-terms
may break supersymmetry. In particular, for $a_{1n}=a_{2n}=a_{3n}$,
the gauge coupling relation at the GUT scale and the gaugino mass relation
are the same as those in the mSUGRA.
{\bf Theorem.} If there exist three real numbers $b_i$ such that
$\sum_{i=1}^3 b_i f_i = 0$, we have the following gauge coupling relation
at the GUT scale
\begin{eqnarray}
{{b_3}\over {\alpha_3}} + {{b_2}\over {\alpha_2}}
+ {{b_1}\over {\alpha_1}} =0~.~\,
\label{GaugeCR}
\end{eqnarray}
Using one-loop RGE running, we have the following gaugino mass
relation which is renormalization scale invariant from
the GUT scale to the electroweak scale at one loop
\begin{eqnarray}
{{b_3 M_3}\over {\alpha_3}} + {{b_2 M_2}\over {\alpha_2}}
+ {{b_1 M_1}\over {\alpha_1}} =0~.~\,
\label{GauginoMR}
\end{eqnarray}
{\bf Proof.} Because $f_i=1/(4\pi\alpha_i)$, the
gauge coupling relation in Eq.~(\ref{GaugeCR}) at the GUT scale
is obtained automatically.
From $\sum_{i=1}^3 b_i f_i = 0$, we have
\begin{eqnarray}
\sum_{i=1}^3 b_i ~=~0~,~~~\sum_{i=1}^3 b_i a_{in} ~=~0~.~\,
\label{Conditions}
\end{eqnarray}
Assuming that the F-terms of $\tau_m$ and $S_n$ break supersymmetry,
we obtain the ratios between the gaugino masses and gauge couplings
\begin{eqnarray}
{{M_i}\over {\alpha_i}}~=~ 4\pi \left[\sum_m a'_m F^{\tau_m}
+\epsilon \left(\sum_n a_{in} F^{S_n}\right) \right]~.~\,
\end{eqnarray}
Using Eq.~(\ref{Conditions}), we obtain the gaugino mass relation
given in Eq.~(\ref{GauginoMR}) at the GUT scale. Because
$M_i/\alpha_i$ are invariant under one-loop RGE running,
we prove the theorem. The gaugino mass relation will have very
small deviation due to the two-loop RGE running~\cite{Li:2010mr}.
Interestingly, the GUT-scale gauge coupling relation in Eq.~(\ref{GaugeCR})
and the gaugino mass
relation in Eq.~(\ref{GauginoMR}) give the same equation as follows
\begin{eqnarray}
b_3 x_3 + b_2 x_2 + b_1 x_1 ~=~0~.~\,
\end{eqnarray}
In other words, $1/\alpha_i$ and $M_i/\alpha_i$ at the GUT scale
satisfy the same equation respectively for the gauge coupling relation
and the gaugino mass relation.
Thus, we can define
the GUT scale in these theories:
{\bf Definition.} The GUT scale is the scale at which
$1/\alpha_i$ and $M_i/\alpha_i$
satisfy the same equation respectively for the gauge coupling
relation and the gaugino mass relation.
For simplicity, we consider two supersymmetry breaking
fields $\tau$ and $S$. The generic gauge kinetic
function can be parametrized as follows
\begin{eqnarray}
f_i &=& \tau + \epsilon a_i S~.~\,
\label{GKF-1}
\end{eqnarray}
If $a_1=a_2=a_3$, similar to the mSUGRA, we obtain the
GUT-scale gauge coupling
relation in Eq.~(\ref{mSUGRA-C}) and the
gaugino mass relation
in Eq.~(\ref{mSUGRA}).
If there exists at least one $a_i \not= a_j $ for $i\not= j$,
we obtain the generic solution for $b_i$ up to a scale factor
\begin{eqnarray}
b_1 ~=~ a_2-a_3~,~~ b_2~=~ a_3-a_1~,~~b_3~=~ a_1-a_2~.~\,
\end{eqnarray}
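One verifies at once that this solution satisfies the two conditions in Eq.~(\ref{Conditions}): $b_1+b_2+b_3=0$ trivially, and
\begin{eqnarray}
b_1 a_1 + b_2 a_2 + b_3 a_3 ~=~ (a_2-a_3)\,a_1 + (a_3-a_1)\,a_2 + (a_1-a_2)\,a_3 ~=~ 0~.~\,
\end{eqnarray}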
Using our theorem, we obtain the gauge coupling
relation at the GUT scale
\begin{eqnarray}
{{a_1-a_2}\over {\alpha_3}} + {{a_3-a_1}\over {\alpha_2}}
+ {{a_2-a_3}\over {\alpha_1}} =0~.~\,
\label{CouplingR}
\end{eqnarray}
In addition, we obtain the gaugino mass relation
which is valid from the GUT scale to the electroweak scale
under one-loop RGE running
\begin{eqnarray}
{{(a_1-a_2) M_3}\over {\alpha_3}} + {{(a_3-a_1) M_2}\over {\alpha_2}}
+ {{(a_2-a_3) M_1}\over {\alpha_1}} =0~.~\,
\label{MassR}
\end{eqnarray}
Except in the mSUGRA case, we always have $a_1 \not= a_2$ or $a_1\not=a_3$
in the following discussions of GmSUGRA. Thus, we can rewrite the
GUT-scale gauge coupling relation and
the gaugino
mass relation as follows
\begin{eqnarray}
{{1}\over {\alpha_2}} - {{1}\over {\alpha_3}}
~=~k \left( {{1}\over {\alpha_1}}
- {{1}\over {\alpha_3}} \right) ~,~\,
\label{GCRelation}
\end{eqnarray}
\begin{eqnarray}
{{M_2}\over {\alpha_2}} - {{M_3}\over {\alpha_3}}
~=~k \left( {{M_1}\over {\alpha_1}}
- {{M_3}\over {\alpha_3}} \right) ~,~\,
\label{GMRelation}
\end{eqnarray}
where $k$ is the index of these relations, and is defined as follows
\begin{eqnarray}
k ~\equiv~ {{a_2-a_3}\over {a_1-a_3}}~.~\,
\label{index}
\end{eqnarray}
Because $M_i/\alpha_i$ are renormalization scale invariant under one-loop RGE running
and can be calculated from the LHC and ILC experiments,
$k$ can be determined at low energies as well. Therefore, we can test
a given GUT since its index $k$ can be calculated.
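As a simple illustration of such a low-energy determination via Eq.~(\ref{GMRelation}) (all numbers below are invented, chosen to mimic a $k=5/3$ scenario; they are not predictions), one would extract the index directly from the measured gaugino masses and gauge couplings:
\begin{verbatim}
# Hypothetical low-scale measurements (invented, illustrative numbers)
alpha = {1: 0.0169, 2: 0.0338, 3: 0.118}   # GUT-normalized alpha_i
M     = {1: 152.1,  2: 326.7,  3: 944.0}   # gaugino masses in GeV

r = {i: M[i]/alpha[i] for i in (1, 2, 3)}  # one-loop RGE invariants M_i/alpha_i
k = (r[2] - r[3]) / (r[1] - r[3])
print(k)                                   # ~ 5/3 for these inputs
\end{verbatim}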
Although $k$ is not well defined in the
mSUGRA, we symbolically define the index $k$ for mSUGRA as $k=0/0$.
In other words, for $k=0/0$, we have the gauge coupling relation
at the GUT scale given by Eq.~(\ref{mSUGRA-C}),
and the gaugino mass relation given by Eq.~(\ref{mSUGRA}).
In addition, the concrete GUT scale can be redefined as follows:
{\bf Definition.} The GUT scale is the scale at which the gauge coupling
relation and the gaugino mass relation have the same index $k$.
Because the GUT gauge couplings should be positive and finite, we
obtain that ${\rm Re} \tau > 0$. Let us consider five
special cases in the following: \\
(1) ${\rm Re} S\not=0$, $F^{\tau}\not=0$, $F^S=0$. \\
In this case, the gauge coupling relation at the GUT scale
is still given by Eq.~(\ref{CouplingR})
or Eq.~(\ref{GCRelation}). However, the gaugino mass relation is given
by the mSUGRA gaugino mass relation in Eq.~(\ref{mSUGRA}).
This implies that even if we obtain the mSUGRA gaugino mass relation
at the LHC and ILC, we may still have the non-unified SM gauge couplings
at the GUT scale. Unfortunately, we cannot calculate $k$ in this case
at the LHC and ILC. \\
(2) ${\rm Re} S \not=0$, $F^{\tau}=0$, $F^S\not=0$. \\
This case has been studied carefully
in Refs.~\cite{Anderson:1999uia, Chamoun:2001in, Chakrabortty:2008zk,
Martin:2009ad, Bhattacharya:2009wv, Feldman:2009zc, Chamoun:2009nd}. In this case,
the gauge coupling relation at the GUT scale
is still given by Eq.~(\ref{CouplingR})
or Eq.~(\ref{GCRelation}), and the gaugino mass relation is
given by Eq.~(\ref{MassR}) or Eq.~(\ref{GMRelation}). In particular,
for $a_i\not= 0$ we obtain the gaugino mass relation
\begin{eqnarray}
{{M_3}\over {a_3\alpha_3}} ~=~ {{M_2}\over {a_2\alpha_2}}
~=~ {{M_1}\over {a_1\alpha_1}} ~.~\,
\label{Ftm=0}
\\ \nonumber
\end{eqnarray}
(3) ${\rm Re} S=0$, $F^{\tau}\not=0$, $F^S\not=0$. \\
In this case, the gauge coupling relation at the GUT scale is given by
the mSUGRA gauge coupling relation in Eq.~(\ref{mSUGRA-C}), while
the gaugino mass relation is given by Eq.~(\ref{MassR})
or Eq.~(\ref{GMRelation}). Thus,
even if we obtain the non-universal gaugino mass relation
from the LHC and ILC, we may still have the gauge coupling unification
at the GUT scale. \\
(4) ${\rm Re} S=0$, $F^{\tau}\not=0$, $F^S=0$. \\
This case is the same as the mSUGRA. \\
(5) ${\rm Re} S=0$, $F^{\tau}=0$, $F^S\not=0$. \\
In this case, the gauge coupling relation at the GUT scale is given
by Eq.~(\ref{mSUGRA-C}),
while the gaugino mass relation is
given by Eq.~(\ref{MassR}) or Eq.~(\ref{GMRelation}).
Also, the gaugino mass relation for $a_i\not= 0$ is given
by Eq.~(\ref{Ftm=0}) as well.
\section{Grand Unified Theories}
In four-dimensional GUTs, the non-universal SM
gauge kinetic function can be generated after GUT gauge
symmetry breaking by the high-dimensional
operators~\cite{Hill:1983xh, Shafi:1983gz, Ellis:1984bm, Ellis:1985jn,
Drees:1985bx, Anderson:1999uia, Chamoun:2001in, Chakrabortty:2008zk,
Martin:2009ad, Bhattacharya:2009wv, Feldman:2009zc, Chamoun:2009nd}.
The generic gauge kinetic function in the superpotential is
\begin{eqnarray}
W \supset {1\over 2} {\rm Tr} \left[ W^a W^b
\left(\tau \delta_{ab} + \lambda{{\Phi_{ab}}\over {M_*}}S\right) \right]~,~\,
\end{eqnarray}
where $\lambda$ is the Yukawa coupling constant, and
$\Phi_{ab}$ transforms as the symmetric product of
two adjoint representations. After $\Phi_{ab}$ obtains
a VEV, we obtain the gauge kinetic functions in
Eq.~(\ref{GKF-1}) where $\epsilon$ is the product of $\lambda$,
the VEV of $\Phi_{ab}$, and suitable normalization factors.
First, let us study the $SU(5)$ models. The symmetric
product of the adjoint representation ${\mathbf{24}}$
of $SU(5)$ can be decomposed into irreducible representations
of $SU(5)$ as follows
\begin{eqnarray}
({\mathbf{24}} \times {\mathbf{24}})_{\rm symmetric}
&=& {\mathbf{1}} \oplus {\mathbf{24}} \oplus {\mathbf{75}}
\oplus {\mathbf{200}}~.~\,
\end{eqnarray}
We present $a_i$ and index $k$ for each irreducible
representation in Table~\ref{SU(5)-I}.
Thus, using our general formulae in Section II, we have all the GUT-scale
gauge coupling relations and all the gaugino mass relations
in $SU(5)$ models. Especially,
in the traditional $SU(5)$ models that have been studied extensively
so far, the GUT Higgs field is in the representation ${\mathbf{24}}$,
and then the index $k$ is $5/3$.
By the way, the gaugino mass relations for the Higgs fields
in the representations ${\mathbf{24}}$ and ${\mathbf{75}}$ have
been studied previously~\cite{Ellis:1985jn}.
\begin{table}[htb]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
$SU(5)$ & $a_1$ & $a_2$ &
$a_3$ & $k$ \\
\hline
${\mathbf{1}}$ & $1$ & $1$ & $1$ & $0/0$ \\
\hline
${\mathbf{24}}$ & $-1/2$ & $-3/2$ & $1$ & $5/3$ \\
\hline
${\mathbf{75}}$ & $-5$ & $3$ & $1$ & $-1/3$ \\
\hline
${\mathbf{200}}$ & $10$ & $2$ & $1$ & $1/9$ \\
\hline
\end{tabular}
\end{center}
\caption{$a_i$ and $k$ for each irreducible
representation in $SU(5)$ models.}
\label{SU(5)-I}
\end{table}
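The indices in Table~\ref{SU(5)-I} follow from Eq.~(\ref{index}) by elementary arithmetic; a short cross-check in exact rational arithmetic reproduces them:
\begin{verbatim}
from fractions import Fraction as F

def index_k(a1, a2, a3):
    # Eq. (index): k = (a2 - a3)/(a1 - a3); undefined (0/0) for the singlet
    return (a2 - a3) / (a1 - a3)

reps = {"24":  (F(-1, 2), F(-3, 2), F(1)),
        "75":  (F(-5),    F(3),     F(1)),
        "200": (F(10),    F(2),     F(1))}
for name, (a1, a2, a3) in reps.items():
    print(name, index_k(a1, a2, a3))       # -> 5/3, -1/3, 1/9
\end{verbatim}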
Second, let us consider the $SO(10)$ models. The symmetric
product of the adjoint representation ${\mathbf{45}}$
of $SO(10)$ can be decomposed into irreducible representations
of $SO(10)$ as follows
\begin{eqnarray}
({\mathbf{45}} \times {\mathbf{45}})_{\rm symmetric}
&=& {\mathbf{1}} \oplus {\mathbf{54}} \oplus {\mathbf{210}}
\oplus {\mathbf{770}}~.~\,
\end{eqnarray}
The $SO(10)$ models can be broken down to the Georgi-Glashow
$SU(5)\times U(1)$ models, the flipped $SU(5)\times U(1)_X$ models,
and the Pati-Salam $SU(4)_C\times SU(2)_L\times SU(2)_R$ models.
We present $a_i$ and indices $k$ for each irreducible
representation in Table~\ref{SO(10)-GG}, Table~\ref{SO(10)-F}
and Table~\ref{SO(10)-PS} for the $SO(10)$ models
whose gauge symmetries are broken down to the Georgi-Glashow
$SU(5)\times U(1)$ gauge symmetries, the flipped $SU(5)\times U(1)_X$
gauge symmetries, and
the Pati-Salam $SU(4)_C\times SU(2)_L\times SU(2)_R$ gauge symmetries,
respectively. We emphasize that our numbers $a_i$ in
Table~\ref{SO(10)-GG}, Table~\ref{SO(10)-F} and Table~\ref{SO(10)-PS}
are the same as the results obtained in the corresponding
Tables in Ref.~\cite{Martin:2009ad}.
Thus, using our generic formulae in Section II,
we have all the GUT-scale
gauge coupling relations and all the gaugino mass relations
in $SO(10)$ models.
In the traditional $SO(10)$ models that have been studied extensively
so far, the GUT Higgs fields are in the representations ${\mathbf{45}}$
as well as $\mathbf{16}$ and $\mathbf{\overline{16}}$~\cite{Georgi:1979dq}.
Thus, the above
discussions can not be applied directly. In this case, the discussions
on the GUT-scale gauge coupling relation and
the gaugino mass relation are similar to these in the field theory
realization of the F-theory $SO(10)$ models with $U(1)_{B-L}$ flux.
As discussed in the following, the index $k$ in the traditional
$SO(10)$ models is $5/3$ as well.
\begin{table}[htb]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$SO(10)$ & $SU(5)\times U(1)$ & $a_1$ & $a_2$ &
$a_3$ & $k$ \\
\hline
${\mathbf{1}}$ & $(\mathbf{1}, \mathbf{0})$ & $1$ & $1$ & $1$ & $0/0$ \\
\hline
${\mathbf{54}}$ & $(\mathbf{24}, \mathbf{0})$ & $-1/2$ & $-3/2$ & $1$ & $5/3$ \\
\hline
& $(\mathbf{1}, \mathbf{0})$ & $1$ & $1$ & $1$ & $0/0$ \\
${\mathbf{210}}$ & $(\mathbf{24}, \mathbf{0})$ & $-1/2$ & $-3/2$ & $1$ & $5/3$ \\
& $(\mathbf{75}, \mathbf{0})$ & $-5$ & $3$ & $1$ & $-1/3$ \\
\hline
& $(\mathbf{1}, \mathbf{0})$ & $1$ & $1$ & $1$ & $0/0$ \\
${\mathbf{770}}$ & $(\mathbf{24}, \mathbf{0})$ & $-1/2$ & $-3/2$ & $1$ & $5/3$ \\
& $(\mathbf{75}, \mathbf{0})$ & $-5$ & $3$ & $1$ & $-1/3$ \\
& $(\mathbf{200}, \mathbf{0})$ & $10$ & $2$ & $1$ & $1/9$ \\
\hline
\end{tabular}
\end{center}
\caption{$a_i$ and $k$ for each irreducible
representation in $SO(10)$ models whose gauge symmetry
is broken down to the Georgi-Glashow $SU(5)\times U(1)$
gauge symmetries.}
\label{SO(10)-GG}
\end{table}
\begin{table}[htb]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$SO(10)$ & $SU(5)\times U(1)_X$ & $a_1$ & $a_2$ &
$a_3$ & $k$ \\
\hline
${\mathbf{1}}$ & $(\mathbf{1}, \mathbf{0})$ & $1$ & $1$ & $1$ & $0/0$ \\
\hline
${\mathbf{54}}$ & $(\mathbf{24}, \mathbf{0})$ & $-1/2$ & $-3/2$ & $1$ & $5/3$ \\
\hline
& $(\mathbf{1}, \mathbf{0})$ & $-19/5$ & $1$ & $1$ & $0$ \\
${\mathbf{210}}$ & $(\mathbf{24}, \mathbf{0})$ & $7/10$ & $-3/2$ & $1$ & $25/3$ \\
& $(\mathbf{75}, \mathbf{0})$ & $-1/5$ & $3$ & $1$ & $-5/3$ \\
\hline
& $(\mathbf{1}, \mathbf{0})$ & $77/5$ & $1$ & $1$ & $0$ \\
${\mathbf{770}}$ & $(\mathbf{24}, \mathbf{0})$ & $-101/10$ & $-3/2$ & $1$ & $25/111$ \\
& $(\mathbf{75}, \mathbf{0})$ & $-1/5$ & $3$ & $1$ & $-5/3$ \\
& $(\mathbf{200}, \mathbf{0})$ & $2/5$ & $2$ & $1$ & $-5/3$ \\
\hline
\end{tabular}
\end{center}
\caption{$a_i$ and $k$ for each irreducible
representation in $SO(10)$ models whose gauge symmetry
is broken down to the flipped $SU(5)\times U(1)_X$
gauge symmetries.}
\label{SO(10)-F}
\end{table}
\begin{table}[htb]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$SO(10)$ & $SU(4)_C\times SU(2)_L \times SU(2)_R$ & $a_1$ & $a_2$ &
$a_3$ & $k$ \\
\hline
${\mathbf{1}}$ & $(\mathbf{1}, \mathbf{1}, \mathbf{1})$ & $1$ & $1$ & $1$ & $0/0$ \\
\hline
${\mathbf{54}}$ & $(\mathbf{1}, \mathbf{1}, \mathbf{1})$ & $-1/2$ & $-3/2$ & $1$ & $5/3$ \\
\hline
& $(\mathbf{1}, \mathbf{1}, \mathbf{1})$ & $-3/5$ & $1$ & $0$ & $-5/3$ \\
${\mathbf{210}}$ & $(\mathbf{15}, \mathbf{1}, \mathbf{1})$ & $-4/5$ & $0$ & $1$ & $5/9$ \\
& $(\mathbf{15}, \mathbf{1}, \mathbf{3})$ & $1$ & $0$ & $0$ & $0$ \\
\hline
& $(\mathbf{1}, \mathbf{1}, \mathbf{1})$ & $19/10$ & $5/2$ & $1$ & $5/3$ \\
${\mathbf{770}}$ & $(\mathbf{1}, \mathbf{1}, \mathbf{5})$ & $1$ & $0$ & $0$ & $0$ \\
& $(\mathbf{15}, \mathbf{1}, \mathbf{3})$ & $1$ & $0$ & $0$ & $0$ \\
& $(\mathbf{84}, \mathbf{1}, \mathbf{1})$ & $32/5$ & $0$ & $1$ & $-5/27$ \\
\hline
\end{tabular}
\end{center}
\caption{$a_i$ and $k$ for each irreducible
representation in $SO(10)$ models whose gauge symmetry
is broken down to the Pati-Salam $SU(4)_C\times SU(2)_L \times SU(2)_R$
gauge symmetries.}
\label{SO(10)-PS}
\end{table}
\section{F-Theory GUTs}
We consider the field theory realization
of the $U(1)$ flux
effects on the SM gauge kinetic functions in F-theory GUTs, and study their
GUT-scale gauge coupling relations and their
gaugino mass relations~\cite{Donagi:2008kj, Blumenhagen:2008aw,
Jiang:2009za, Li:2009cy}. In the F-theory $SU(5)$ models,
the $SU(5)$ gauge symmetry
is broken down to the SM gauge symmetry by turning on
the $U(1)_Y$ flux. To realize the $U(1)_Y$
flux corrections to the SM gauge kinetic functions in
the four-dimensional $SU(5)$ models,
we consider the following superpotential term
for the $SU(5)$ gauge kinetic function
\begin{eqnarray}
W\supset {1\over 2} {\rm Tr} \left[ W^a W^b
\left(\tau \delta_{ab} +
\left({Z^2\delta_{ab}+ \lambda{\Phi_{a} \Phi_{b}}\over {M^2_*}}\right) S
\right) \right],~\,
\end{eqnarray}
where $Z$ is a SM singlet Higgs field, and $\Phi_a$ and $\Phi_b$
are the Higgs
fields in the adjoint representation of $SU(5)$ which have
VEVs along the $U(1)_Y$ direction.
Five stacks of seven-branes give rise to a $U(5)$ gauge symmetry;
thus, the $Z^2$ term is similar to the flux for the
global $U(1)$ of $U(5)$,
and the $\Phi_{a} \Phi_{b}$ term is similar to the
$U(1)_Y$ flux~\cite{Donagi:2008kj, Blumenhagen:2008aw}.
After $SU(5)$ gauge symmetry is broken down to the SM
gauge symmetry, with suitable definition of $\epsilon$,
we obtain~\cite{Donagi:2008kj, Blumenhagen:2008aw}
\begin{eqnarray}
a_1 = {1\over 2}\left(\alpha+{6\over 5}\right)~,~
a_2={1\over 2}\left( \alpha + 2 \right)~,~ a_3= {1\over 2}
\alpha ~,~\,
\end{eqnarray}
where $\alpha$ is a real number. In F-theory models,
$\alpha$ should be quantized due to flux quantization.
Thus, using Eqs.~(\ref{CouplingR}) and (\ref{MassR})
or Eqs.~(\ref{GCRelation}) and (\ref{GMRelation}),
we can easily obtain the gauge coupling relation
at the GUT scale and the gaugino mass relation whose
index $k$ is $5/3$~\cite{Li:2010mr}.
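Indeed, inserting these $a_i$ into Eq.~(\ref{index}), the dependence on $\alpha$ cancels:
\begin{eqnarray}
k ~=~ {{a_2-a_3}\over {a_1-a_3}} ~=~ {{1}\over {3/5}} ~=~ {{5}\over {3}}~.~\,
\end{eqnarray}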
In the F-theory $SO(10)$ models where the $SO(10)$ gauge symmetry
is broken down to the flipped $SU(5)\times U(1)_X$ gauge symmetry
by turning on the $U(1)_X$ flux~\cite{Beasley:2008dc, Beasley:2008kw,
Jiang:2009zza, Jiang:2009za}, we can show that the gauge kinetic
functions for $SU(5)$ and $U(1)_X$ are exactly the
same at the unification scale~\cite{Jiang:2009za}.
In the field theory realization, we consider
the following superpotential term for
the $SO(10)$ gauge kinetic function
\begin{eqnarray}
W\supset {1\over 2} {\rm Tr} \left[ W^a W^b
\left(\tau \delta_{ab} +
\lambda {{\Phi_{a} \Phi_{b}}\over {M^2_*}} S
\right) \right]~,~\,
\label{SO(10)-GKF}
\end{eqnarray}
where $\Phi_a$ and $\Phi_b$ are the Higgs fields
in the adjoint representation of $SO(10)$. To have similar effects
as the $U(1)_X$ flux, $\Phi_a$ and $\Phi_b$ obtain VEVs along the
$U(1)_X$ direction. Thus, with suitable definition of $\epsilon$,
we get~\cite{Jiang:2009za}
\begin{eqnarray}
a_1~=~a_2~=~a_3~=~1 ~.~\,
\end{eqnarray}
Therefore,
similar to the mSUGRA, we obtain the gauge coupling unification at the GUT scale
in Eq.~(\ref{mSUGRA-C}) and the gaugino mass relation
in Eq.~(\ref{mSUGRA}), {\it i.e.}, the index takes the symbolic value $k=0/0$ of the mSUGRA case.
In the F-theory $SO(10)$ models, the $SO(10)$ gauge symmetry can also
be broken down to the
$SU(3)_C\times SU(2)_L\times SU(2)_R\times U(1)_{B-L}$
gauge symmetry by turning on the $U(1)_{B-L}$
flux~\cite{Font:2008id, Li:2009cy}. To realize the $U(1)_{B-L}$
flux corrections
to the SM gauge kinetic functions in four-dimensional $SO(10)$ models,
we still consider the superpotential term in Eq.~(\ref{SO(10)-GKF}),
where $\Phi_a$ and $\Phi_b$ obtain VEVs along the
$U(1)_{B-L}$ direction. Thus, with suitable definition of $\epsilon$,
we get~\cite{Li:2009cy}
\begin{eqnarray}
a_1~=~{2\over 5}~,~~
a_2~=~0~,~~ a_3~=~1 ~.~\,
\end{eqnarray}
Therefore, using Eqs.~(\ref{CouplingR}) and (\ref{MassR})
or Eqs.~(\ref{GCRelation}) and (\ref{GMRelation}),
we can easily obtain the gauge coupling relation
at the GUT scale and the gaugino mass relation whose
index $k$ is $5/3$~\cite{Li:2010mr}.
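Again, the arithmetic is immediate: $k=(a_2-a_3)/(a_1-a_3)=(0-1)/(2/5-1)=5/3$, independently of the overall normalization of the $a_i$.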
\section{Conclusions}
In GUTs with gravity mediated
supersymmetry breaking,
we considered the generic gauge coupling relations at
the GUT scale, and the general gaugino mass relations
which are valid from the GUT scale to the electroweak scale
at one loop.
Interestingly, the gauge coupling relations and the
gaugino mass relations at the GUT-scale are given by the same
equation, {\it i.e.},
$1/\alpha_i $ and $M_i/\alpha_i$ satisfy the same
equation respectively for the gauge coupling relations
and the gaugino mass relations. Thus, we define
the index $k$ for these relations.
Because the index $k$ can be calculated in GUTs and
can be determined at the LHC and future ILC,
we gave a concrete definition of the GUT scale in these theories,
and suggested a new way to test general GUTs at the future experiments.
We also discussed five special scenarios with interesting possibilities.
With our generic formulae, we presented all the GUT-scale gauge coupling
relations and all the gaugino mass relations in the $SU(5)$ and
$SO(10)$ models, and calculated the corresponding indices.
In particular, the index $k$ is $5/3$~\cite{Ellis:1985jn} in the
traditional $SU(5)$ and $SO(10)$ models that have been
studied extensively so far.
Moreover, we studied the field theory realization of the $U(1)$ flux
effects on the SM gauge kinetic functions in F-theory GUTs. We found
that in the $SU(5)$ and $SO(10)$ models respectively
with $U(1)_Y$ and $U(1)_{B-L}$ fluxes, the index $k$ is $5/3$,
while in the $SO(10)$ models with $U(1)_X$ flux,
the GUT-scale gauge coupling relation and gaugino mass relation
are the same as those in the mSUGRA.
In short, the gaugino mass relation with index $k=5/3$~\cite{Ellis:1985jn}
definitely deserves further detailed study.
\begin{acknowledgments}
This research was supported in part
by the DOE grant DE-FG03-95-Er-40917 (TL and DVN),
by the Natural Science Foundation of China
under grant No. 10821504 (TL),
and by the Mitchell-Heep Chair in High Energy Physics (TL).
\end{acknowledgments}
\section{Introduction}
In recent years, there has been a growing interest in jet studies aimed at identifying a boosted massive particle decaying hadronically, for instance the $W$ boson \cite{Seymour:1993mx,Butterworth:2002tt,Skiba:2007fw,Holdom:2007nw}, top quarks \cite{Almeida:2008tp,Kaplan:2008ie,Thaler:2008ju,Plehn:2009rk}, supersymmetric particles \cite{Butterworth:2007ke,Butterworth:2009qa} and heavy resonances \cite{Baur:2008uv,FileviezPerez:2008ib,Bai:2008sk} (see also \cite{Ellis:2009me} for related work on general massive jets). Some of these studies proved successful in the search for a boosted light Higgs boson decaying into $b\bar{b}$ at the LHC \cite{Plehn:2009rk,MyFirstPaper,ATL-PHYS-PUB-2009-088,Kribs:2009yh}. The procedure of \cite{MyFirstPaper,ATL-PHYS-PUB-2009-088} can be briefly summed up as follows: after clustering the event with a radius $R$ large enough to catch the $b$ and $\bar{b}$ from the Higgs decay into a single jet,\footnote{The value chosen was $R=1.2$.} this jet can be analysed in $2$ steps:
\begin{itemize}
\item A Mass Drop (MD) analysis that allows one to identify the splitting responsible for the large jet mass, i.e. separate the $b$ and $\bar{b}$ and thus measuring the angular distance $R_{bb}$ between them, while suppressing as much QCD background as possible.
\item A Filtering analysis where one reclusters the $2$ resulting subjets with a smaller radius and takes the $3$ highest-$p_t$ subjets\footnote{The value of $3$ was found to work well in \cite{MyFirstPaper}.} obtained in order to keep the major part of the perturbative radiation while getting rid of as many underlying event (UE) and pile-up (PU) particles as possible (used also in \cite{Plehn:2009rk,Kribs:2009yh,Cacciari:2008gd}, and a variant is proposed in \cite{Krohn:2009th}). A toy sketch of this reclustering step is given just after this list.
\end{itemize}
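To fix ideas, the Filtering step can be phrased as a few lines of pseudo-analysis code. The sketch below is a toy implementation (massless $(p_t,y,\phi)$ pseudo-particles and a simplified $p_t$-weighted recombination instead of the exact $E$-scheme); it is meant to illustrate the logic, not to replace FastJet:
\begin{verbatim}
import math

def delta_r2(p, q):
    """Squared (y, phi) distance between (pt, y, phi) pseudo-particles."""
    dphi = (p[2] - q[2] + math.pi) % (2.0*math.pi) - math.pi
    return (p[1] - q[1])**2 + dphi**2

def merge(p, q):
    """Toy recombination: pt-weighted axis (not the exact E-scheme)."""
    pt = p[0] + q[0]
    dphi = (q[2] - p[2] + math.pi) % (2.0*math.pi) - math.pi
    return (pt, (p[0]*p[1] + q[0]*q[1])/pt, p[2] + q[0]/pt*dphi)

def cluster_ca(particles, R):
    """Minimal Cambridge/Aachen: merge the closest pair in (y, phi) until
    all pairwise distances exceed R; the survivors are the jets."""
    jets = list(particles)
    while len(jets) > 1:
        i, j = min(((a, b) for a in range(len(jets))
                           for b in range(a + 1, len(jets))),
                   key=lambda ab: delta_r2(jets[ab[0]], jets[ab[1]]))
        if delta_r2(jets[i], jets[j]) > R*R:
            break
        jets[i] = merge(jets[i], jets[j])
        del jets[j]
    return jets

def filter_jet(constituents, r_filt, n_filt):
    """Recluster with radius r_filt and keep the n_filt hardest subjets."""
    subjets = sorted(cluster_ca(constituents, r_filt), reverse=True)
    return subjets[:n_filt]
\end{verbatim}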
Concerning the MD analysis, the only thing we need to know is that we end up with $2$ $b$-tagged jets, each with a radius roughly equal to $R_{bb}$. Notice that due to angular ordering \cite{Fadin:1983aw,Ermolaev:1981cm,Mueller:1981ex,Dokshitzer:1982xr,Bassetto:1984ik}, these $2$ jets should capture the major part of the perturbative radiation from the $b\bar{b}$ dipole. The whole procedure is depicted in figure~\ref{MD_and_F_analysis} (taken from \cite{MyFirstPaper}).
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.7]{figures/2cones.eps}
\caption{The Mass Drop and Filtering analysis in the procedure used to enhance the signal from a light Higgs decaying into $b\bar{b}$ at the LHC.}
\label{MD_and_F_analysis}
\end{figure}
In this paper, we are going to focus on the Filtering analysis. One can generalize it with respect to its original definition using $2$ parameters, $\filt{n}$ and $\filt{\eta}$ (as discussed also in \cite{Krohn:2009th}): after the MD analysis has been carried out, one reclusters the $2$ resulting subjets with a radius $\filt{R} = \filt{\eta}R_{bb}$ and takes the $\filt{n}$ hardest jets obtained. Obviously, the larger the value of $\filt{\eta}$ the more perturbative radiation we keep, but also the more the UE/PU degrades the Higgs peak. The same holds for $\filt{n}$. So there is a compromise to make between losing too much perturbative radiation and being contaminated by soft particles from UE/PU. In \cite{MyFirstPaper}, the values found to give good results were $\filt{n} = 3$ and $\filt{\eta} = \min(0.3/R_{bb},1/2)$. However, these values had been chosen on the basis of a brief Monte-Carlo event generator study and one would like to gain a little more analytical control over them. One question would be for instance to understand, even approximately, how the optimal ($\filt{n}$,$\filt{\eta}$) values change when one increases the Higgs $p_{t_H}$ cut, or when the PU becomes more and more important during the high-luminosity running of the LHC. Though the MD and Filtering analyses were originally designed to identify a light Higgs boson, one should be aware that similar calculations may apply in other uses of filtering, for instance to study any boosted colorless resonance decaying hadronically, including $W$ and $Z$ bosons.
The article will be devoted in large part to the study of the dependence of the perturbative radiation loss on the filtering parameters. As usual in this kind of work, large logarithms arise due to soft or collinear gluon emissions, and one is forced to deal with them in order to obtain reliable results in the region where the observable is sensitive to this kind of emission. We will thus compute analytically the first two orders in the leading soft logarithmic (LL) approximation when $\filt{n}=2$, and the series to all orders in the large-$N_c$ limit\footnote{$N_c=3$ denotes as usual the number of QCD colors.} when $\filt{n}=2$ or $3$ for small enough values of $\filt{\eta}$ (section~\ref{NG_structure_analytical_insights}). With these in hand, and using a program that allows one to make all-orders leading-log calculations in the large-$N_c$ approximation, we check the analytical results and examine whether the small-$\filt{\eta}$ limit and/or the truncation of the LL expansion can be trusted to estimate the loss of perturbative radiation in practice (section~\ref{NG_structure_numerical_results}). Finally, in section~\ref{choice_of_the_filtering_parameters}, we will analyse the Higgs mass peak width due respectively to the loss of perturbative radiation and to the presence of UE/PU, before combining them in a simple and approximate but physically reasonable way in order to be able to conclude about the optimal parameter choices.
\section{Non-Global structure: analytical insights \label{NG_structure_analytical_insights}}
\subsection{The filtered Higgs mass: a Non-Global observable \label{the_filtered_higgs_mass_a_non_global_observable}}
It is now very well known \cite{Catani:1991kz,Catani:1991pm,Catani:1992ua,Catani:1992jc,Catani:1998sf,Dokshitzer:1998kz,Antonelli:1999kx,Burby:1999yb,Burby:2001uz,Banfi:2000si,Dasgupta:2001sh,Banfi:2003jj,Banfi:2004nk,Banfi:2004yd,Becher:2008cf,Dissertori:2009ik} that soft or collinear gluons can give rise, in multiscale problems, to the appearance of large logarithms in the perturbative expansion of an observable, and more precisely in a region of phase space where it is sensitive to the soft or collinear divergences of QCD. In this article, the observable considered is $\Delta M = M_H-M_{\mbox{\scriptsize filtered jet}}$, where $M_{\mbox{\scriptsize filtered jet}}$ is the reconstructed Higgs-jet mass and $M_H$ is its true mass. $\Delta M$ has the property that it is $0$ when no gluon is emitted. We are interested in $\Sigma(\Delta M)$, the probability for the difference between the reconstructed and true Higgs masses to be less than a given $\Delta M$. In this case, large soft logarithms have to be resummed at all-orders to obtain a reliable description of the small $\Delta M$ distribution.
For this observable, soft gluon emissions lead to powers of $\ln\frac{M_H}{\Delta M}$, whereas collinear gluon emissions lead to powers of $\ln\frac{R_{bb}}{\filt{R}}$. In this study, gluons are strongly ordered in energy (the first emitted gluon being the most energetic one, and so on), and we aim to control the $\left(\alpha_s\ln\frac{M_H}{\Delta M}\right)^k$ series, in a region where
\begin{equation}
\ln\frac{M_H}{\Delta M}\gg \ln\frac{R_{bb}}{\filt{R}}\,.
\end{equation}
Therefore, at leading-log accuracy, one has to resum terms like
\begin{equation}
I_k(\Delta M) = f_k\left(\frac{R_{bb}}{\filt{R}}\right)\left(\alpha_s\ln\frac{M_H}{\Delta M}\right)^k\,,
\end{equation}
where all the $f_k$ are functions to be computed. We thus disregard all the subleading terms, i.e. those suppressed by at least one power of $\ln\frac{M_H}{\Delta M}$. Unfortunately, such a calculation is highly non-trivial due to the fact that the observable is {\it non-global}. This property, first studied in \cite{Dasgupta:2001sh}, means that it is sensitive to radiation in only a part of the phase space. In the case of $\Delta M$, only emissions of gluons outside the filtered jet region contribute to the observable (cf.\ figure~\ref{non-global_configuration}).
\begin{figure}[htb]
\centering
\subfigure[]{
\includegraphics[scale=0.4]{figures/NG-nfilt2.eps}
\label{NG_nfilt2}
}~
\subfigure[]{
\includegraphics[scale=0.4]{figures/NG-nfilt3.eps}
\label{NG_nfilt3}
}
\caption{Configurations leading to non-global logarithms when \subref{NG_nfilt2} $\filt{n}=2$ and \subref{NG_nfilt3} $\filt{n}=3$. In each case, the hardest gluon $1$, which is inside the filtered jet region, emits a softer gluon $2$ outside the filtered jet region.}
\label{non-global_configuration}
\end{figure}
As a consequence of this property, one must consider soft gluon emissions not just from the $b\bar{b}$ dipole (usually called primary emissions, the only ones that would be present in QED) but also from the whole ensemble of already emitted gluons \cite{Dasgupta:2001sh,Dasgupta:2002bw}. As the number of gluons increases, the geometry and the colour structure of the ensemble rapidly become too complex for an analytical calculation. Therefore, to deal with this, one is forced to apply numerical Monte-Carlo calculations that can easily take care of the geometry. But the colour structure remains prohibitive, and one must usually also resort to the large-$N_c$ approximation in order to go beyond the first $2$ orders in perturbation theory~\cite{Dasgupta:2001sh,Dasgupta:2002bw,Delenda:2006nf,Appleby:2002ke} (though some authors have derived analytical results in special cases \cite{Banfi:2002hw,Hatta:2009nd} and others have examined contributions beyond the leading large-$N_c$ approximation \cite{Weigert:2003mm,Forshaw:2006fk}).
However, before considering numerical calculations, some results can be derived analytically at $2^{nd}$ order for $\filt{n}=2$ (where $f_1$ and $f_2$ are computed exactly) and $\filt{n}=3$ (where only the leading behaviour of the $f_k$ in $\ln\frac{R_{bb}}{\filt{R}}$ and $N_c$ is looked for).
\subsection{Some results for $\filt{n}=2$ \label{some_results_for_nf2}}
Perturbatively, one can write $\Sigma(\Delta M)$ as
\begin{equation}
\Sigma(\Delta M) = 1+\sum_{k=1}^{\infty}I_k(\Delta M)\,,
\label{perturbative_expansion_of_Sigma_DeltaM}
\end{equation}
where $I_k(\Delta M)$ is the ${\cal O}\left(\alpha_s^k\right)$ contribution to the observable. To simplify the calculation, $\Sigma(\Delta M)$ will be computed using the anti-$k_t$ algorithm \cite{Cacciari:2008gp}, even though the numerical study will be done using the $C/A$ algorithm \cite{Dokshitzer:1997in,Wobisch:1998wt} to be in accordance with the choice in \cite{MyFirstPaper}. However, the anti-$k_t$ algorithm is enough to catch the dominant behaviour of the leading-log series, in the sense that it does not affect the leading large collinear logarithm in the function $f_k$ at small $\filt{R}$:
\begin{equation}
f_k\left(\frac{R_{bb}}{\filt{R}}\right) = a_k\ln^k\left(\frac{R_{bb}}{\filt{R}}\right) + {\cal O}\left(\ln^{k-1}\left(\frac{R_{bb}}{\filt{R}}\right)\right)\,,\label{function_f_k}
\end{equation}
i.e. $a_k$ is unchanged when moving from $C/A$ to anti-$k_t$.\footnote{When $\filt{R}\sim \frac{R_{bb}}{2}$, the discarding of the ${\cal O}\left(\ln^{k-1}\left(\frac{R_{bb}}{\filt{R}}\right)\right)$ terms is not a priori justified, but fig.~\ref{comparison_with_theory}, which compares numerical results obtained using $C/A$ with analytical estimates using anti-$k_t$, supports the dominance of the leading collinear logarithms.} This jet algorithm gives simpler results because the gluons outside the filtered jet region tend not to cluster with the ones inside. It is this property which ensures that the hardest jets in an event are generally perfect cones, as particles usually cluster with the hardest ones in their neighbourhood first \cite{Cacciari:2008gp}.
As a first step, primary emissions are considered, defined to be those one would obtain if gluons were only emitted from the $b\bar{b}$ dipole (as for photons in QED).
\subsubsection{Primary emissions}
Due to the use of the anti-$k_t$ algorithm, the result of the primary emissions can be easily shown to exponentiate, as will be roughly seen in the next section with the ${\cal O}\left(\alpha_s^2\right)$ analysis. Here, we just review the very well known result that the contribution to $\Sigma(\Delta M)$ from primary emissions, denoted $\Sigma^{(P)}(\Delta M)$, can be written as:\footnote{The superfix $(P)$ serves as a reminder that only primary emissions are being accounted for.}
\begin{equation}
\Sigma^{(P)}(\Delta M) = e^{I_1(\Delta M)}\,,
\label{primary_exponentiation}
\end{equation}
with:
\begin{equation}
I_1(\Delta M) = \int \frac{d^3\vec{k}_1}{(2\pi)^32|\vec{k}_1|}M(k_1)\left(\Theta\left(\vec{k}_1\in J_{b\bar{b}}\right) + \Theta\left(\vec{k}_1\notin J_{b\bar{b}}\right)\Theta\left(\Delta M - \Delta M(\vec{k}_1)\right) - 1\right)\,.\label{definition_of_I_1}
\end{equation}
$M(k_1)$ is the matrix element squared for emitting one soft gluon from the $b\bar{b}$ dipole (the $b$ quark is taken to be massless):
\begin{equation}
M(k_1) = 4\pi\alpha_sC_F\frac{2(p_b.p_{\bar{b}})}{(p_b.k_1)(k_1.p_{\bar{b}})}\,.
\end{equation}
Concerning the notations, $\Theta\left(\vec{k}_1\in J_{b\bar{b}}\right)$ equals $1$ when gluon $1$ is emitted inside the jet regions around $b$ and $\bar{b}$, denoted by $J_{b\bar{b}}$ (and is $0$ otherwise), which, for $\filt{R}<R_{bb}$, is just $2$ cones of radius $\filt{R}$ centered on $b$ and $\bar{b}$ (figure~\ref{NG_nfilt2}). Then, concerning the expression in brackets in eq.~(\ref{definition_of_I_1}), we separate the $2$ different regions where the gluon can be: either inside or outside the filtered Higgs jet. The first term $\Theta(\vec{k}_1\in J_{b\bar{b}})$ means that the gluon does not contribute to the observable (as it is kept in the Higgs jet, the reconstructed Higgs mass is the true Higgs mass: $\Delta M(k_1)=0$). If the gluon is outside the filtered jet region (second term), then it does contribute to the observable:
\begin{equation}
\Delta M(k)\sim k_t\frac{M_H}{p_{t_H}}\,,
\end{equation}
up to prefactors that can be neglected in the leading-log approximation, see appendix~\ref{app:analytical_considerations}. Finally, the $-1$ stands for the virtual corrections, for which there is obviously no loss of mass for the Higgs, and whose matrix element is just the opposite of the soft real one.\footnote{Even if the result seems obvious here, this way of doing the calculation can be easily generalised to higher orders and other kinds of jet algorithms.} One thus obtains:
\begin{equation}
I_1(\Delta M) = -\int_{\vec{k}_1\notin J_{b\bar{b}}}\frac{d^3\vec{k}_1}{(2\pi)^32|\vec{k}_1|}M(k_1)\Theta\left(\Delta M(k_1) - \Delta M\right)\,.
\label{PrimaryIntegral}
\end{equation}
The computation of this integral in the boosted regime, where $p_{tH}\gg M_H$, or equivalently $R_{bb}\ll 1$, is done in appendix~\ref{app:analytical_considerations}. From now on, we will essentially use $\filt{\eta} = \filt{R}/R_{bb}$ instead of $\filt{R}$ and we define $n\equiv \filt{n}$ and $\eta\equiv\filt{\eta}$ for more clarity in mathematical formulae. In order to keep in mind that it depends on the $2$ parameters of the Filtering analysis, the distribution $\Sigma(\Delta M)$ is renamed $\Sigma^{(n)}(\eta,\Delta M)$. What we obtain at fixed coupling is the following:\footnote{To obtain the result at running coupling, one simply makes the replacement (see eqs.~(\ref{t_running_coupling},\ref{t_fixed_coupling}) later in the article):
\begin{equation*}
\alpha_s\ln\frac{M_H}{\Delta M} \rightarrow \frac{1}{2\beta_0}\ln\left(\frac{1}{1-2\beta_0\alpha_s(M_H)\ln\frac{M_H}{\Delta M}}\right)\,.
\end{equation*}}
\begin{equation}
\Sigma^{(2),(P)}(\eta,\Delta M) = e^{-\frac{\alpha_sC_F}{\pi}J(\eta)\ln\frac{M_H}{\Delta M}}\,,
\label{exponentiated_primary_result}
\end{equation}
with
\begin{align}
J(\eta) & = 2\ln\left(\frac{1-\eta^2}{\eta^2}\right) \hspace{5.02cm} \mbox{ if } \eta < \frac12 \label{primary_emission_result_eta_small}\,,\\
& = \frac{8}{\pi}\int_{\eta}^{+\infty}\frac{du}{u(u^2-1)}\arctan\left(\frac{u-1}{u+1}\sqrt{\frac{2u+1}{2u-1}}\right) \mbox{ if } \frac12 < \eta < 1\,. \label{primary_emission_result_eta_large}
\end{align}
We give the value of $J(1)$, a quantity that will be important in the discussion of some aspects of the results obtained in the following sections:
\begin{equation}
J(1)\simeq 0.646\,.
\end{equation}
Notice that the case $\eta>1$ will not be used in this study, but is mentioned in appendix~\ref{app:analytical_considerations}. The function $J(\eta)$ is plotted in figure~\ref{J_eta}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.32]{figures/J_eta.ps}
\end{center}
\vspace{-0.8cm}
\caption{The coefficient $J(\eta)$ in front of the primary soft logarithm $\ln\frac{M_H}{\Delta M}$.}
\label{J_eta}
\end{figure}
Two remarks can be made:
\begin{enumerate}
\item The result does not depend on the energy fraction $z$ of the Higgs splitting into $b\bar{b}$.
\item When $\eta\ll 1$, another large logarithm $\ln\frac{1}{\eta}$ appears due to collinear enhancement.
\end{enumerate}
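As a numerical cross-check of eqs.~(\ref{primary_emission_result_eta_small}) and (\ref{primary_emission_result_eta_large}), the following sketch evaluates $J(\eta)$ with scipy (the integration is split at $u=1$, where the integrand has a removable $0/0$):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def J(eta):
    if eta < 0.5:
        return 2.0*np.log((1.0 - eta**2)/eta**2)
    f = lambda u: 8.0/np.pi/(u*(u*u - 1.0)) * np.arctan(
            (u - 1.0)/(u + 1.0)*np.sqrt((2.0*u + 1.0)/(2.0*u - 1.0)))
    v1, _ = quad(f, eta, 1.0)     # endpoints are not evaluated by quad,
    v2, _ = quad(f, 1.0, np.inf)  # so splitting at u = 1 is safe
    return v1 + v2

print(J(1.0))              # ~ 0.646, as quoted above
print(J(0.499), J(0.501))  # the two branches join at eta = 1/2
\end{verbatim}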
\subsubsection{Non-global contributions}
Now, we turn to the ${\cal O}\left(\alpha_s^2\right)$ term, and more precisely to the contribution of the non-global terms that have to be added to the primary logarithms computed in the previous section. That corresponds to the analysis of $I_2(\Delta M)$ in the perturbative expansion of $\Sigma(\Delta M)$ from eq.~(\ref{perturbative_expansion_of_Sigma_DeltaM}). The matrix element squared for the emission of $2$ real gluons is expressed as \cite{Bassetto:1984ik,Dasgupta:2001sh,Fiorani:1988by,Dokshitzer:1992ip}:
\begin{align}
M(k_1\mbox{ real},k_2\mbox{ real}) & = (4\pi\alpha_s)^2(W_1+W_2)\,,\\
\mbox{with } W_1 & = 4C_F^2\frac{(p_b.p_{\bar{b}})}{(p_b.k_1)(k_1.p_{\bar{b}})}\frac{(p_b.p_{\bar{b}})}{(p_b.k_2)(k_2.p_{\bar{b}})}\,, \\
W_2 & = 2C_FC_A\frac{(p_b.p_{\bar{b}})}{(p_b.k_1)(k_1.p_{\bar{b}})}\left(\frac{(p_b.k_1)}{(p_b.k_2)(k_2.k_1)}+\frac{(p_{\bar{b}}.k_1)}{(p_{\bar{b}}.k_2)(k_2.k_1)}-\frac{(p_b.p_{\bar{b}})}{(p_b.k_2)(k_2.p_{\bar{b}})}\right)\,.
\end{align}
This expression is valid when there is a strong energy ordering between the two real gluons $1$ and $2$, either $E_1\gg E_2$ or $E_2\gg E_1$ (the formula is completely symmetric under the interchange $k_1\leftrightarrow k_2$). For the cases with one or both gluons being virtual, the following matrix elements are obtained, valid only when $E_1\gg E_2$~\cite{Dokshitzer:1992ip}:
\begin{align}
M(k_1\mbox{ real},k_2\mbox{ virt}) & = -(4\pi\alpha_s)^2(W_1+W_2)\,, \nonumber\\
M(k_1\mbox{ virt},k_2\mbox{ real}) & = -(4\pi\alpha_s)^2W_1\,, \nonumber \\
M(k_1\mbox{ virt},k_2\mbox{ virt}) & = (4\pi\alpha_s)^2W_1\,.
\end{align}
Using these properties, separating the $4$ phase space regions depending on whether the gluons are inside or outside the filtered jet region in the same way as was done for $I_1$, and defining $dk$ as:
\begin{equation}
dk = \frac{d^3\vec{k}}{(2\pi)^32|\vec{k}|}\,,
\end{equation}
we can then write $I_2$ in the following form:
\begin{align}
I_2 = & \int dk_1dk_2 (4\pi\alpha_s)^2\Theta(E_1-E_2) \big\{ \nonumber \\
& \quad\hspace{0.08cm} \Theta(k_1\in J_{b\bar{b}})\Theta(k_2\in J_{b\bar{b}})\big((W_1+W_2) - (W_1+W_2) - W_1 +W_1\big)\nonumber \\
& {} + \Theta(k_1\in J_{b\bar{b}})\Theta(k_2\notin J_{b\bar{b}})\big((W_1+W_2)\Theta(\Delta M - \Delta M(k_2))-(W_1+W_2)-W_1\Theta(\Delta M - \Delta M(k_2))+W_1\big)\nonumber \\
& {} + \Theta(k_1\notin J_{b\bar{b}})\Theta(k_2\in J_{b\bar{b}})\big((W_1+W_2)\Theta(\Delta M - \Delta M(k_1))-(W_1+W_2)\Theta(\Delta M - \Delta M(k_1))-W_1+W_1 \big)\nonumber \\
& {} + \Theta(k_1\notin J_{b\bar{b}})\Theta(k_2\notin J_{b\bar{b}})\big((W_1+W_2)\Theta(\Delta M - \Delta M(k_1,k_2)) - (W_1+W_2)\Theta(\Delta M - \Delta M(k_1))\nonumber\\
& {} \hspace{4.5cm} -W_1\Theta(\Delta M - \Delta M(k_2))+W_1 \big) \big\}\,.
\label{SystematicApproach}
\end{align}
For each phase space region, the $4$ terms $(k_1,k_2)=$ (real,real) $-$ (real,virt) $-$ (virt,real) $+$ (virt,virt) are considered. The strong energy ordering $E_1\gg E_2$ implies that $\Delta M(k_1,k_2) = \Delta M(k_1)$, and one immediately gets:
\begin{align}
I_2 = & \quad \int dk_1dk_2 (4\pi\alpha_s)^2\Theta(E_1-E_2)\Theta(k_1\notin J_{b\bar{b}})\Theta(k_2\notin J_{b\bar{b}})W_1\Theta(\Delta M(k_2) - \Delta M) \nonumber\\
& {} - \int dk_1dk_2 (4\pi\alpha_s)^2\Theta(E_1-E_2)\Theta(k_1\in J_{b\bar{b}})\Theta(k_2\notin J_{b\bar{b}})W_2\Theta(\Delta M(k_2) - \Delta M)\,, \nonumber\\
= & \quad I_2^{(P)}(\Delta M) + I_2^{(NG)}(\Delta M)\,, \label{DefinitionOfI_2}
\end{align}
where $I_2^{(P)}(\Delta M)$ corresponds to the first integral, containing the function $W_1$, whereas $I_2^{(NG)}(\Delta M)$ corresponds to the second integral, with the function $W_2$. $I_2^{(P)}$ is just the second-order contribution from the primary emissions, already computed above. To see this, note that $(4\pi\alpha_s)^2W_1$ can be expressed as the product of two one-gluon matrix elements $M(k_1)M(k_2)$ and that, when $E_1\gg E_2$,
\begin{equation}
\Theta\left(\Delta M(k_2)-\Delta M\right) = \Theta\left(\Delta M(k_2)-\Delta M\right)\Theta\left(\Delta M(k_1)-\Delta M\right)\,,
\end{equation}
if $k_1$ and $k_2$ belong to the same phase space region. Therefore $I_2^{(P)}$ can be written in a more symmetric way:
\begin{align}
I_2^{(P)}(\Delta M) & = \frac12\int dk_1dk_2 (4\pi\alpha_s)^2\Theta(k_1\notin J_{b\bar{b}})\Theta(k_2\notin J_{b\bar{b}})W_1\Theta(\Delta M(k_1) - \Delta M)\Theta(\Delta M(k_2) - \Delta M)\,, \nonumber \\
& = \frac12\left(\int dk \Theta(k\notin J_{b\bar{b}})M(k)\Theta(\Delta M(k) - \Delta M)\right)^2\,, \nonumber \\
& = \frac12 \left(I_1(\Delta M)\right)^2\,,
\end{align}
so that it corresponds to the second order perturbative expansion of the result eq.~(\ref{primary_exponentiation}), obtained with primary emissions only.
The important term for this section is the one containing $W_2$, denoted by $I_2^{(NG)}$. As mentioned in section~\ref{the_filtered_higgs_mass_a_non_global_observable}, it receives a non-zero contribution when the hardest gluon $1$ is emitted inside the filtered jet region whereas the softest gluon $2$ is emitted outside. For the opposite configuration, there is an exact cancellation between gluon $2$ being real and virtual. Here again the computation of $I_2^{(NG)}$ is postponed to appendix~\ref{app:analytical_considerations}, and we quote directly the result that will help to interpret what follows. $S_2$ is defined such that
\begin{equation}
I_2^{(NG)}(\eta,\Delta M) = \frac{1}{2}C_FC_A\left(\frac{\alpha_s}{\pi}\ln\left(\frac{M_H}{\Delta M}\right)\right)^2S_2(\eta)\,,
\end{equation}
where we make the dependence on $\eta$ explicit and factor out the soft divergence, which again shows up as the large logarithm $\ln\frac{M_H}{\Delta M}$. When $\eta<1/2$, the result for $S_2$ can be written as:
\begin{align}
S_2(\eta) & = -\frac{\pi^2}{3}+8\int_0^1\frac{du_1}{u_1}\int_0^1\frac{du_2}{u_2}\left(\frac{1}{\sqrt{(1-\eta^2(u_1^2+u_2^2))^2-4\eta^4u_1^2u_2^2}}-\frac{1}{1-\eta^2(u_1^2+u_2^2)}\right) \,,\nonumber \\
& = -\frac{\pi^2}{3} + 4\eta^4 + 12\eta^6 + {\cal O}\left(\eta^8\right)\,.
\label{S_2_computed}
\end{align}
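The small-$\eta$ expansion in eq.~(\ref{S_2_computed}) can be checked numerically against the double integral; a minimal sketch (for illustration only, valid for $\eta<1/2$) is:
\begin{verbatim}
import numpy as np
from scipy.integrate import dblquad

def S2(eta):
    # Double-integral form of eq. (S_2_computed), valid for eta < 1/2.
    def integrand(u2, u1):
        A = 1.0 - eta**2 * (u1**2 + u2**2)
        B = 4.0 * eta**4 * u1**2 * u2**2
        # The bracket vanishes like u1^2 u2^2, so dividing by u1*u2 is safe.
        return (1.0 / np.sqrt(A**2 - B) - 1.0 / A) / (u1 * u2)
    val, err = dblquad(integrand, 0.0, 1.0, lambda u1: 0.0, lambda u1: 1.0)
    return -np.pi**2 / 3.0 + 8.0 * val

eta = 0.3
series = -np.pi**2 / 3.0 + 4.0 * eta**4 + 12.0 * eta**6
print(S2(eta), series)   # the two should agree up to O(eta^8)
\end{verbatim}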
The important point to notice in this result is the absence of collinear logarithms, which would appear as $\ln\frac{1}{\eta}$, contrary to the primary emission case (eq.~(\ref{primary_emission_result_eta_small})). As a consequence, the primary emissions dominate this observable, at least for $\eta$ sufficiently small.
As mentioned in previous studies \cite{Dasgupta:2001sh,Dokshitzer_et_al}, one notices the presence of ``$\pi^2$ terms'' in non-global results at second order.
\subsection{Some results for $\filt{n}=3$ \label{Some_results_for_nfilt_3}}
The goal of this section is to estimate the analytical behaviour, in the large-$N_c$ limit, of $\Sigma^{(n)}(\eta,\Delta M)$ for $n=3$, i.e. the probability that no second gluon emission leads to a $\Delta M'$ greater than $\Delta M$. Notice that, contrary to the previous part, where we obtained the full function $\Sigma^{(2)}$, only the leading behaviour in $L=\ln\frac{1}{\eta}$ and $N_c$ will be derived, so that in this context $\Sigma^{(2)}$ can simply be written:\footnote{This simply results from combining equations~(\ref{exponentiated_primary_result}) and~(\ref{primary_emission_result_eta_small}) with $\eta \ll 1$, and $C_F=\frac{N_c}{2}$ in the large-$N_c$ limit.}
\begin{equation}
\Sigma^{(2)}(L,t) = e^{-4N_cLt} \label{Sigma2LargeLLimit}
\end{equation}
where for further convenience we introduce the parameter $t = \frac{\alpha_s}{2\pi}\ln\frac{M_H}{\Delta M}$, and we change the arguments of $\Sigma$, which now becomes a function of $L$ and $t$. In this formula, $2L = 2\ln\frac{R_{bb}}{\filt{R}}$ can be interpreted as the ``logarithmic size'' of the $b\bar{b}$ dipole, i.e. the phase space allowed in rapidity for an emission from this dipole (in its centre-of-mass frame) outside the jet region. The exponent expresses the fact that such an emission must not occur at any scale $t'$ between $0$ and $t$.
Now we turn to $\Sigma^{(3)}(L,t)$. To have no second gluon emission in $[0,t]$, either there is no first gluon emission in $[0,t]$ outside the jet region (which corresponds to $\Sigma^{(2)}(L,t)$), or there is such an emission but the new dipole configuration is prohibited from emitting a second gluon in $[0,t]$ outside the jet region. This is depicted in figure~\ref{PictureOfSigma3}.
\begin{figure}[htbp]
\begin{psfrags}
\psfrag{S}{\large $\Sigma^{(3)}(L,t)\hspace{0.3cm} \simeq$}
\psfrag{A}{$2L$}
\psfrag{C}{$2l$}
\psfrag{I}{\huge $\int$}
\psfrag{J}{$dl dt'$}
\psfrag{D}{$g(t')$}
\begin{center}
\includegraphics[scale=0.55]{figures/Sigma_3_psfrag.eps}
\end{center}
\end{psfrags}
\caption{How to compute the leading behavior of $\Sigma^{(3)}(L,t)$ from $\Sigma^{(2)}(L,t)$ when $L\gg 1$ and $N_c\gg 1$. In the second term, $t'$ is the gluon's emission scale.}
\label{PictureOfSigma3}
\end{figure}
As the calculation is done in the large-$N_c$ limit, after the emission of a first gluon, the second one cannot be emitted from the $b\bar{b}$ dipole, but only from the $bg$ and $\bar{b} g$ ones. Fig.~\ref{PictureOfSigma3} can be translated mathematically as:
\begin{equation}
\Sigma^{(3)}(L,t) \simeq \Sigma^{(2)}(L,t) + \int_0^{t}dt'4N_c\Sigma^{(2)}(L,t')\int_0^Ldl\hspace{0.1cm}\Sigma^{(2)}(L,t-t')\Sigma^{(2)}(l,t-t')\,. \label{Sigma3_not_yet_integrated}
\end{equation}
Notice that $L_{bg}$, the logarithmic size of the $bg$ dipole in figure~\ref{PictureOfSigma3}, does not depend on $l$ in the leading collinear log approximation.\footnote{One can easily show the following relation:
\begin{equation*}
L_{bg} = 2L + {\cal O}\left(e^{l-L}\right)\,.
\end{equation*}
If we introduce the neglected component of $L_{bg}$ in the $\Sigma^{(3)}$ calculation eq.~(\ref{Sigma3_not_yet_integrated}), then we have to compute an integral of the form
\begin{equation*}
\int_0^Ldl\, \frac{1-e^{(l+{\cal O}(e^{l-L}))t}}{l+{\cal O}(e^{l-L})}\,.
\end{equation*}
Expanding the exponential and keeping the term of order $k$ gives
\begin{equation*}
\int_0^Ldl\, \left(l+{\cal O}(e^{l-L})\right)^{k-1}t^k = \frac{(Lt)^k}{k}+{\cal O}(L^{k-2}t^k)\,.
\end{equation*}
The leading ${\cal O}\left((Lt)^k\right)$ term is already taken into account in eq.~(\ref{Sigma3LargeLLimit}). Therefore, including the $l$-dependent component of $L_{bg}$ gives rise to terms of the form $N_c^kL^{k-2}t^k$ at order $k$, suppressed by $2$ powers of $L$ with respect to the leading one.} In this expression, $4N_cL\Sigma^{(2)}(L,t')dt'$ is the probability that the first gluon is not emitted in $[0,t']$ and is emitted in $[t',t'+dt']$. The remaining part $\frac{1}{L}\int_0^Ldl\,\Sigma^{(2)}(L,t-t')\Sigma^{(2)}(l,t-t')$ is the probability to emit no second gluon from the $bg$ and $\bar{b} g$ dipoles in $[t',t]$. Using eq.~(\ref{Sigma2LargeLLimit}) for $\Sigma^{(2)}$, $\Sigma^{(3)}$ is then given by:
\begin{equation}
\Sigma^{(3)}(L,t) \simeq e^{-4N_cLt}\left(1+\int_0^{4N_cLt}dt'\frac{1-e^{-t'}}{t'}\right)\,.
\label{Sigma3LargeLLimit}
\end{equation}
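As a consistency check, this closed form can be compared numerically with the defining integral eq.~(\ref{Sigma3_not_yet_integrated}); a minimal sketch (the values of $N_c$, $L$ and $t$ below are illustrative choices of ours) is:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad, dblquad

Nc = 3.0

def Sigma2(L, t):
    return np.exp(-4.0 * Nc * L * t)          # eq. (Sigma2LargeLLimit)

def Sigma3_direct(L, t):
    # Direct evaluation of eq. (Sigma3_not_yet_integrated).
    inner = lambda l, tp: (4.0 * Nc * Sigma2(L, tp)
                           * Sigma2(L, t - tp) * Sigma2(l, t - tp))
    val, err = dblquad(inner, 0.0, t, lambda tp: 0.0, lambda tp: L)
    return Sigma2(L, t) + val

def Sigma3_closed(L, t):
    # Closed form eq. (Sigma3LargeLLimit).
    x = 4.0 * Nc * L * t
    integ, err = quad(lambda s: (1.0 - np.exp(-s)) / s, 0.0, x)
    return np.exp(-x) * (1.0 + integ)

L, t = np.log(1.0 / 0.1), 0.05                # e.g. eta = 0.1
print(Sigma3_direct(L, t), Sigma3_closed(L, t))  # should agree
\end{verbatim}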
Two limits can be considered:
\begin{equation}
\Sigma^{(3)}(L,t) \simeq \left\{\begin{array}{ll}
1-\frac{3}{4}(4N_cLt)^2+{\cal O}\left((4N_cLt)^3+N_ct\right) & \mbox{ if } 4N_cLt\ll 1\,,\\
& \\
e^{-4N_cLt}\left(\ln\left(4N_cLt\right)+{\cal O}(1)\right) & \mbox{ if } 4N_cLt\gg 1\,. \end{array} \right.
\label{Sigma3allTLimit}
\end{equation}
The limit $4N_cLt\ll 1$ reveals two important aspects:
\begin{enumerate}
\item One can notice the absence of the ${\cal O}(Lt)$ term, which is indeed the goal of the filtering analysis as it was presented in its original version \cite{MyFirstPaper}: it is intended to catch the major part of the ${\cal O}(\alpha_s)$ perturbative radiation. It cannot catch {\it all} the ${\cal O}(\alpha_s)$ contribution because a hard gluon emitted at an angle $\theta>R_{bb}$ from the $b$ and $\bar{b}$ escapes the filtering process as it is rejected by the Mass Drop analysis. Therefore, when $4N_cLt\ll 1$, the expansion eq.~(\ref{Sigma3allTLimit}) misses a term ${\cal O}(N_ct)$, but this is legitimate in a leading collinear log estimate. Notice that the missing term is simply $-J(1)N_ct$ where $J(\eta)$ was given in eq.~(\ref{primary_emission_result_eta_large}).
\item It shows that the purely non-global result for $n=3$ contains large collinear logarithms $L$, contrary to the case $n=2$ (eq.~(\ref{S_2_computed})). Indeed, the primary result for $n=3$ at second order can be proven to behave as\footnote{In fact, one can show the following general estimate for the primary emissions in the leading soft and collinear approximations: $$\Sigma^{(n)}(L,t) = e^{-8C_FLt}\sum\limits_{k=0}^{n-2}\frac{(8C_FLt)^k}{k!}\,.$$} $-32C_F^2(Lt)^2$ at order $\alpha_s^2$, so that the $S_2$ term for $n=3$ should be equivalent to $-8C_FC_A(Lt)^2$ at large $L$.
\end{enumerate}
Having understood some analytical features of the Filtering analysis, we now examine what can be learnt from a numerical calculation of the reconstructed Higgs mass observable.
\section{Non-Global structure: numerical results \label{NG_structure_numerical_results}}
In all that follows $t$ is defined so as to gather all the information about the soft logarithms in a running coupling framework:
\begin{align}
t & = \frac{1}{2\pi}\int_0^{p_{t_H}}\frac{dk_t}{k_t}\alpha_s\left(k_t\frac{M_H}{p_{t_H}}\right)\Theta\left(\Delta M(k)-\Delta M\right)\,,\nonumber\\
& = \frac{1}{2\pi}\int_{p_{t_H}\frac{\Delta M}{M_H}}^{p_{t_H}}\frac{dk_t}{k_t}\alpha_s\left(k_t\frac{M_H}{p_{t_H}}\right)\,,\nonumber\\
& = \frac{1}{4\pi\beta_0}\ln\left(\frac{1}{1-2\beta_0\alpha_s(M_H)\ln\frac{M_H}{\Delta M}}\right)\,, \label{t_running_coupling}
\end{align}
where the last equality holds at the one-loop level and $\beta_0=\frac{11C_A-2n_f}{12\pi}$. The argument of $\alpha_s$ was taken as the gluon's transverse momentum with respect to the Higgs boson direction, of order $k_t\frac{M_H}{p_{t_H}}$, $k_t$ being its transverse momentum with respect to the beam. In the case of a fixed coupling constant $\alpha_s$, the definition for $t$ here coincides with that of section~\ref{Some_results_for_nfilt_3}:
\begin{equation}
t = \frac{\alpha_s}{2\pi}\ln\frac{M_H}{\Delta M}\,.
\label{t_fixed_coupling}
\end{equation}
But from now on, and unless stated otherwise, $t$ is given in the running coupling framework, eq.~(\ref{t_running_coupling}), and the function $\Sigma(\eta,\Delta M)$ is rewritten as $\Sigma(\eta,t)$.
To get an idea of the range of values covered by $t$, table~\ref{some_particular_t_values} presents a few $t$ values corresponding to a given $\Delta M$ for a Higgs mass of $115$ GeV ($\alpha_s(M_H)=0.114$). It reveals that the physical values for $t$ are below $0.15$.
\begin{table}[htb]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
$\Delta M$ (GeV) & 1 & 2 & 5 & 10 & 20 & 50 & 115 \\
\hline
$t$ & 0.141 & 0.108 & 0.075 & 0.054 & 0.036 & 0.016 & 0 \\
\hline
\end{tabular}
\caption{Correspondence between $\Delta M$ and $t$ for some particular values.}
\label{some_particular_t_values}
\end{table}
\vspace{0.2cm}
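The entries of table~\ref{some_particular_t_values} follow directly from eq.~(\ref{t_running_coupling}); the sketch below (which assumes $n_f=5$ light flavours, a choice of ours since the text leaves it implicit) reproduces them and also implements the inverse relation $\Delta M(t)$ used in the all-orders approach described next:
\begin{verbatim}
import numpy as np

MH, alphas_MH = 115.0, 0.114      # GeV and alpha_s(M_H), as in the text
CA, nf = 3.0, 5.0                 # nf = 5 is our assumption here
beta0 = (11.0 * CA - 2.0 * nf) / (12.0 * np.pi)

def t_of_dM(dM):
    # Running-coupling definition, eq. (t_running_coupling).
    return np.log(1.0 / (1.0 - 2.0 * beta0 * alphas_MH
                         * np.log(MH / dM))) / (4.0 * np.pi * beta0)

def dM_of_t(t):
    # Inverse relation, as used by the all-orders approach below.
    return MH * np.exp(-(1.0 - np.exp(-4.0 * np.pi * beta0 * t))
                       / (2.0 * beta0 * alphas_MH))

for dM in (1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 115.0):
    print(dM, round(t_of_dM(dM), 3))   # reproduces the table entries
\end{verbatim}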
To numerically investigate non-global observables, two approaches can be followed:
\begin{itemize}
\item an all-orders approach, where one resums the leading logs at all orders in the large-$N_c$ limit, the output being the function $\Sigma(t)$, i.e. the probability that the loss of perturbative emission results in a Higgs mass in the range $[M_H-\Delta M(t),M_H]$, with
\begin{equation}
\Delta M(t) = M_He^{-\frac{1}{2\beta_0\alpha_s}\left(1-e^{-4\pi\beta_0t}\right)}\,,
\end{equation}
simply obtained by inverting the relation eq.~(\ref{t_running_coupling}).
\item a fixed-order approach where the first few coefficients from the expansion of $\Sigma(t)$ are computed in the large-$N_c$ limit. More precisely, if $\Sigma(t) = \sum\limits_{k=0}^{\infty}\frac{c_k}{k!}\left(N_ct\right)^k$, then the program returns the first few coefficients $c_k$.
\end{itemize}
From a numerical point of view, the way to write an all-orders program was explained in \cite{Dasgupta:2001sh}. A fixed-order result, on the other hand, may be obtained by developing a systematic approach like the one presented at second order in eq.~(\ref{SystematicApproach}). For the filtered Higgs jet mass observable, we used the FastJet package \cite{Cacciari:2005hq} to perform the clustering (and mass-drop $+$ filtering) with the $C/A$ algorithm, consistently with the choice made in \cite{MyFirstPaper}.
As the all-orders program directly provides what we are looking for, namely $\Sigma(t)$, we will use it (section~\ref{study_of_the_Higgs_perturbative_width}) to compute the perturbative Higgs width. But in order to check it and gain confidence in the results, we first compare them with the previous analytical estimates and see how well the perturbative leading-log series fits them. This leads us to study the behaviour of the higher-order terms and to better understand the convergence and structure of the non-global series. Though treated in more detail in appendix~\ref{app:convergence_of_the_non-global_series}, the main points are mentioned in this section.
\subsection{Comparison with analytics \label{comparison_with_analytics}}
Using the all-orders Monte-Carlo program, we can compare the numerical curves obtained with the $C/A$ algorithm to the corresponding analytical estimates derived previously with anti-$k_t$, eqs.~(\ref{Sigma2LargeLLimit},\ref{Sigma3LargeLLimit}). The results are presented in figure~\ref{comparison_with_theory} and show good agreement, at least in the region of physical $t$ values.
\begin{figure}[htb]
\centering
\subfigure[]{
\includegraphics[scale=0.275]{figures/ao_vs_an_nfilt2.ps}
\label{ao_vs_an_nf2}
}~
\subfigure[]{
\includegraphics[scale=0.275]{figures/ao_vs_an_nfilt3.ps}
\label{ao_vs_an_nf3}
}
\caption{Comparison between numerical all-orders result (obtained using $C/A$ algorithm) and leading collinear logarithm estimate of $\Sigma(t)$ derived with anti-$k_t$ for \subref{ao_vs_an_nf2} $n=2$ and \subref{ao_vs_an_nf3} $n=3$ for $2$ values of $\eta$: $0.1$ and $0.5$.}
\label{comparison_with_theory}
\end{figure}
Notice that the slight discrepancy between the analytical estimates and the numerics only sets in for $t>0.1$, at the edge of the physical region (cf. table~\ref{some_particular_t_values}), beyond which $\Delta M$ would fall below the perturbative scale of around $1$ GeV. This agreement shows that:
\begin{itemize}
\item In the physical region, the leading terms in $(N_cLt)^k$, with $L=\ln\frac{1}{\eta}$, seem to completely dominate, and we do not need to compute the subleading corrections.
\item One can use these analytical expressions to get an accurate estimate of the reconstructed Higgs peak width.
\end{itemize}
\subsection{Comparison with fixed-order results \label{comparison_with_fixed_order_results}}
The structure of the non-global series at fixed-order is now examined so as to independently cross-check the all-orders program and to understand if the perturbative leading-log series can be usefully truncated.
As an example, figure~\ref{ao_vs_fo_nf2} compares the all-orders result to the fixed-order ones up to $\alpha_s^5$ for $n=2$ and two different values of $\eta$ (only the coefficients with an uncertainty of at most a few percent are plotted\footnote{This uncertainty obviously increases with the perturbative order, but also with $\eta$, because at small $\eta$ the coefficients are dominated by the large logarithm $\ln\frac{1}{\eta}$, which is easy to compute accurately.}). The curves are shown up to $t=0.3$, which is far beyond the physical region but instructive for studying the convergence of the series.
\begin{figure}[htb]
\centering
\subfigure[]{
\includegraphics[scale=0.275]{figures/ao_vs_fo_nf2_eta0.3.ps}
\label{ao_vs_fo_nf2_eta0.3}
}~
\subfigure[]{
\includegraphics[scale=0.275]{figures/ao_vs_fo_nf2_eta0.9.ps}
\label{ao_vs_fo_nf2_eta0.9}
}
\caption{Comparison between fixed-order (FO) expansion and all-orders result when $n=2$ for \subref{ao_vs_fo_nf2_eta0.3} $\eta=0.3$ and \subref{ao_vs_fo_nf2_eta0.9} $\eta=0.9$. Both fixed-order and all-orders results were obtained using the $C/A$ algorithm.}
\label{ao_vs_fo_nf2}
\end{figure}
The left plot for $\eta=0.3$ shows a nice convergence of the perturbative series eq.~(\ref{perturbative_expansion_of_Sigma_DeltaM}), as the $t$ range for which the all-orders and fixed-order curves coincide grows with $k$. However, the second plot for $\eta=0.9$ gives an unexpected result: the fourth order diverges with respect to the third one, in the sense that the point of disagreement is shifted to smaller $t$. The question arises whether this divergence will remain at higher orders. To answer it, one needs to go further in perturbation theory. In appendix~\ref{app:convergence_of_the_non-global_series}, a parallel is made between the filtered Higgs jet observable and the slice observable, studied for instance in \cite{Dasgupta:2002bw}, for which, thanks to greater computational speed, it is possible to obtain reliable coefficients up to order $6$. The same effect is observed, and is even enhanced at orders $5$ and $6$. Therefore, it seems that the fixed-order information cannot be safely used in general: one has to be aware that the leading-log large-$N_c$ non-global series may be divergent for any value of $t$.
\section{Choice of the filtering parameters \label{choice_of_the_filtering_parameters}}
In the previous sections we examined the structure and convergence of the perturbative leading-log series, analytically and numerically. We could then cross-check the analytical expressions and the fixed-order approach with the all-orders program, which we are going to use throughout this part.
We would like to decide how one should choose the filtering parameters ($n$,$\eta$) depending on the level of UE and PU as well as the $p_t$ of the Higgs boson. Here, we do not claim to make an exact and complete analysis, but we want to obtain some estimates. First, we consider the width of the Higgs mass distribution separately in presence of perturbative radiation (using the all-orders results) and UE/PU (using a simple model for it). Then, we try to minimize the Higgs width in presence of both of these effects. Finally, we will estimate hadronisation corrections.
In all this part, we set the Higgs mass $M_H$ at $115$ GeV, as in \cite{MyFirstPaper}.
\subsection{Study of the Higgs perturbative width \label{study_of_the_Higgs_perturbative_width}}
As we could see in the previous sections, even without considering additional particles from UE/PU, $\Delta M \equiv M_H-M_{\mbox{\tiny filtered jet}}\neq 0$ because of the loss of perturbative radiation. The Higgs boson thus acquires a perturbative width, denoted $\delta M_{PT}$. At first sight, knowing the distributions $\Sigma^{(n)}(\eta,\Delta M)$, one might simply define it as:
\begin{equation}
\delta M_{PT} = 2\sqrt{\langle \Delta M^2\rangle - \langle \Delta M\rangle^2}\,,\label{simple_definition_for_delta_M}
\end{equation}
as we do for gaussian distributions for instance. Unfortunately, if we simply take $n=2$ as an example and if we consider the primary emission result eq.~(\ref{exponentiated_primary_result}), we can deduce the following distribution for $\Delta M$:
\begin{equation}
\frac{d\Sigma^{(2)}(\eta,\Delta M)}{d\Delta M} = \frac{\alpha_s C(\eta)}{M_H^{\alpha_s C(\eta)}}\frac{1}{\Delta M^{1-\alpha_s C(\eta)}}\,,\label{differential_Sigma_2}
\end{equation}
with
\begin{equation}
C(\eta) = \frac{C_F J(\eta)}{\pi}\,.
\end{equation}
Computing $\langle \Delta M \rangle$ and $\langle \Delta M^2 \rangle$ implies dealing with integrals of the form
\begin{align}
\int_0^{M_H}\frac{d\Delta M}{\Delta M^{1-\alpha_s C(\eta)}}\Delta M & = \int_0^{M_H} d\Delta M \, \Delta M^{\alpha_s C(\eta)}\,,\\
\int_0^{M_H}\frac{d\Delta M}{\Delta M^{1-\alpha_s C(\eta)}}\Delta M^2 & = \int_0^{M_H} d\Delta M \, \Delta M^{1+\alpha_s C(\eta)}\,.
\end{align}
Such integrals give undue weight to the $\Delta M \sim M_H/2$ region, where there should be very few events, and do not describe what happens in the neighbourhood of the peak near $\Delta M = 0$. The definition of eq.~(\ref{simple_definition_for_delta_M}) therefore does not seem adequate for the perturbative width, and we shall adopt another one, adapted from \cite{Buttar:2008jx}: the Higgs perturbative width is defined as the size $\delta M_{PT}$ for which a given fraction $f$ of events satisfies $0<\Delta M<\delta M_{PT}$. Using the all-orders function computed previously, this is equivalent to solving the equation $\Sigma^{(n)}(\eta,\Delta M) = f$ for $\Delta M$. This leads to the width function $\delta M_{PT}(n,\eta,f)$.
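As a simple illustration of this definition, note that with the fixed-coupling primary result eq.~(\ref{exponentiated_primary_result}) the width equation can be solved in closed form, $\delta M_{PT} = M_H\, f^{\pi/(\alpha_s C_F J(\eta))}$. A small sketch (fixed coupling and primary emissions only, so merely indicative of the orders of magnitude, not a substitute for the all-orders determination used below) is:
\begin{verbatim}
import numpy as np

MH, alphas, CF = 115.0, 0.114, 4.0 / 3.0

def J_small_eta(eta):
    # eq. (primary_emission_result_eta_small), valid for eta < 1/2.
    return 2.0 * np.log((1.0 - eta**2) / eta**2)

def dM_PT_primary(eta, f):
    # Solving Sigma^(2) = f with eq. (exponentiated_primary_result):
    # (dM/MH)^(alphas*CF*J/pi) = f  =>  dM = MH * f**(pi/(alphas*CF*J)).
    return MH * f ** (np.pi / (alphas * CF * J_small_eta(eta)))

print(dM_PT_primary(0.3, 0.68))   # fixed-coupling, primary-only estimate
\end{verbatim}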
\begin{figure}[hbt]
\centering
\subfigure[]{
\includegraphics[scale=0.275]{figures/dM_f0.68.ps}
\label{Higgs_PT_width_GeV_f0.68}
}~
\subfigure[]{
\includegraphics[scale=0.275]{figures/dM_f0.95.ps}
\label{Higgs_PT_width_GeV_f0.95}
}
\caption{Perturbative width of the Higgs boson (in GeV) as a function of $\eta$ for several values of $n$ when \subref{Higgs_PT_width_GeV_f0.68} $f=0.68$ and \subref{Higgs_PT_width_GeV_f0.95} $f=0.95$.}
\label{Higgs_PT_width_GeV}
\end{figure}
Fig.~\ref{Higgs_PT_width_GeV} shows $\delta M_{PT}$ as a function of $\eta$ for $n=2\mathellipsis 6$. When $\Delta M\sim 50$ GeV (i.e. $\sim M_H/2$), one should be aware that the soft approximation breaks down, and the results on these plots should no longer be taken seriously. We chose the values $f=0.68$ and $f=0.95$, corresponding respectively to $2\sigma$ and $4\sigma$ for gaussian distributions, to show that the Higgs mass perturbative distribution is not gaussian (otherwise, going from $2\sigma$ to $4\sigma$ would have multiplied the width by a factor of $2$; see also eq.~(\ref{differential_Sigma_2})). One important feature is a kind of ``saturation'' effect, observed for every fraction $f$ when $\eta$ is close enough to $1$. It manifests itself as a flat curve at a value $\delta M_{PT} = \delta M_{sat}(f)$, independent of $n$; for instance, $\delta M_{sat}(f=0.68) \simeq 1$ GeV and $\delta M_{sat}(f=0.95) \simeq 33$ GeV. This can be understood simply: when the filtering radius is large enough, say $\eta>\eta_{sat}(n)$, it captures (almost) all the particles resulting from the Mass Drop analysis, i.e. all those within angular distance $R_{bb}$ of $b$ or $\bar{b}$, but it still fails to capture particles outside the Mass Drop region.\footnote{The probability to emit a gluon outside the MD region in $[0,t]$ is roughly given by $1-e^{-J(1)N_ct}$.} Of course, the larger $n$, the smaller $\eta_{sat}(n)$, as we keep more jets. This saturation property is equivalent to saying that all the functions $\Sigma^{(n)}(\eta,\Delta M)$ become independent of $n$ and $\eta$ when $\eta>\eta_{sat}(n)$.
For the rest of this analysis we keep the value $f=0.68$, even if it is not clear which value should be chosen, and more generally what should be the relevant definition of the Higgs perturbative width. However, we will mention in section~\ref{variations_of_the_results_with_z_and_f} what happens if we vary $f$ between $f=0.5$ and $f=0.8$, so as to obtain a measure of the uncertainty of the calculations.
The curves in figure~\ref{Higgs_PT_width_GeV} only give us an overview of the scales involved in the Higgs boson width. But one can go a little further. At small $\eta$, we should get a large collinear enhancement revealing itself as a large logarithm $L=\ln\frac{1}{\eta}$ multiplying $t$. The perturbative expansion is thus a series in $\left(N_cLt\right)^k$. As a direct consequence, at small $\eta$, the all-orders function $\Sigma^{(n)}(\eta,t)$ can be written as a function of a single variable $\Sigma^{(n)}(N_cLt)$. Solving the ``width equation''
\begin{equation}
\Sigma^{(n)}(N_cLt) = f\,,
\end{equation}
gives
\begin{equation}
t_{PT} = \frac{C_{PT}(n,f)}{L}\,,
\end{equation}
where $t_{PT}$ is simply related to $\delta M_{PT}$ by
\begin{equation}
t_{PT} = \frac{1}{4\pi\beta_0}\ln\left(\frac{1}{1-2\beta_0\alpha_s(M_H)\ln\frac{M_H}{\delta M_{PT}}}\right)\,,
\label{Definition_of_t_PT}
\end{equation}
and where $C_{PT}(n,f)$ is a function, independent of $\eta$, which increases with $n$ and decreases when $f$ increases. This is confirmed by figure~\ref{Higgs_PT_width_t} which shows that $t_{PT}L$ is indeed independent of $\eta$ as long as $\eta$ and $n$ are not too large.
As an example, for $n=2$, let us take the simple result $\Sigma^{(2)}(L,t) = e^{-4N_cLt}$ from eq.~(\ref{Sigma2LargeLLimit}) in the small $\eta$ limit. It was shown in section~\ref{comparison_with_analytics} that this result is very close to the all-orders one in the physical $t$ region. Solving $\Sigma^{(2)}(L,t)=f$ immediately implies
\begin{equation}
C_{PT}(2,f) = \frac{\ln\frac{1}{f}}{4N_c}\,, \label{C_PT_2_f}
\end{equation}
which, for $f=0.68$, gives $C_{PT} \simeq 0.032$ in accordance with figure~\ref{Higgs_PT_width_t}.
\begin{figure}[htb]
\centering
\subfigure[]{
\includegraphics[scale=0.275]{figures/t_f0.68_rescaled.ps}
\label{Higgs_PT_width_t}
}~
\subfigure[]{
\includegraphics[scale=0.275]{figures/dM_PT_f0.68.ps}
\label{Higgs_PT_width_dM_f0.68}
}
\caption{\subref{Higgs_PT_width_t} $t_{PT}L$ as a function of $\eta$ for $f=0.68$ and different values of $n$. The saturation curve simply comes from the fact that all widths saturate to the same constant $\delta M_{sat}$ for $\eta$ large enough, and its equation is therefore given by $t_{sat}L$ (with $t_{sat}(f=0.68)\simeq 0.136$). \subref{Higgs_PT_width_dM_f0.68} $\delta M_{PT}$ as a function of $\eta$ for $f=0.68$ and different values of $n$ (curves with points). For each $n$ is also represented the corresponding approximate width (lines) given by eq.~(\ref{parametrisation_of_PT_width}).}
\label{Higgs_PT_width_t_dM}
\end{figure}
One observes that $t_{PT}L$ is not strictly speaking a constant for higher $n$ values. This may be due to the saturation effects discussed above. Indeed, even at large $L$, the perturbative expansion is not only a function of $Lt$ but also of $t$ for the lowest orders, as mentioned at the end of section~\ref{Some_results_for_nfilt_3}:
\begin{equation}
\Sigma^{(n)}(L,t) = 1+\sum_{k=1}^{n-2}a_kt^k+\sum_{k=n-1}^{+\infty}\left(a_k(Lt)^k+{\cal O}\left(L^{k-1}t^k\right)\right)\,.
\end{equation}
If we only had QED-like emissions, i.e. primary ones, then with the anti-$k_t$ jet algorithm we would obtain $a_k = \frac{(-J(1)N_c)^k}{k!}$ for $k\leq n-2$, where $J(\eta)$ was derived in section~\ref{some_results_for_nf2}. As $n$ increases, the term $a_1t$ becomes more and more important with respect to $a_{n-1}(Lt)^{n-1}$, leading to larger and larger deviations from the simple law $t_{PT}L$ $=$ constant. However, up to $n=5$, assuming that $t_{PT}L$ is constant at small $\eta$ remains a good approximation. Therefore, using eq.~(\ref{Definition_of_t_PT}), one can model the Higgs perturbative width in the following form:
\begin{equation}
\delta M_{PT}(n,L,f) = \left\{\begin{array}{ll}
M_He^{-\frac{1}{2\beta_0\alpha_s}\left(1-e^{-4\pi\beta_0\frac{C_{PT}(n,f)}{L}}\right)} & \mbox{ if } \eta<\eta_{sat}(n,f)\,,\\
& \\
\delta M_{sat}(f) & \mbox{ if } \eta>\eta_{sat}(n,f)\,.
\end{array}\right.
\label{parametrisation_of_PT_width}
\end{equation}
$\eta_{sat}(n,f)$ is given by the intersection between the curve $t_{PT}=C_{PT}/L$ and $t_{PT}=t_{sat}$. Therefore:
\begin{equation}
\eta_{sat}(n,f) = e^{-\frac{C_{PT}(n,f)}{t_{sat}}}\,.\label{eta_sat}
\end{equation}
Table~\ref{Cpt_and_eta_sat_values} shows $C_{PT}$ and $\eta_{sat}$ for $f=0.68$ and different $n$ values.
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
$n$ & 2 & 3 & 4 & 5\\
\hline
$C_{PT}$ & 0.032 & 0.078 & 0.117 & 0.149\\
\hline
$\eta_{sat}$ & 0.79 & 0.56 & 0.42 & 0.34\\
\hline
\end{tabular}
\caption{$C_{PT}$ and $\eta_{sat}$ as a function of $n$ when $f=0.68$.}
\label{Cpt_and_eta_sat_values}
\end{table}
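These numbers are easy to reproduce: combining eq.~(\ref{eta_sat}) with $t_{sat}(f=0.68)\simeq 0.136$ (cf. the caption of figure~\ref{Higgs_PT_width_t_dM}) and the $C_{PT}$ values above gives the $\eta_{sat}$ row of table~\ref{Cpt_and_eta_sat_values}, up to the rounding of the quoted $t_{sat}$. The sketch below (again an illustration of ours, assuming $n_f=5$) also implements the width model eq.~(\ref{parametrisation_of_PT_width}):
\begin{verbatim}
import numpy as np

alphas, MH = 0.114, 115.0
beta0 = (11.0 * 3.0 - 2.0 * 5.0) / (12.0 * np.pi)   # nf = 5 assumed
t_sat = 0.136                                       # t_sat(f = 0.68)
C_PT = {2: 0.032, 3: 0.078, 4: 0.117, 5: 0.149}

def dM_PT(n, eta):
    # eq. (parametrisation_of_PT_width): for eta above the saturation
    # point, freeze L at its saturation value C_PT/t_sat.
    L = max(np.log(1.0 / eta), C_PT[n] / t_sat)
    return MH * np.exp(-(1.0 - np.exp(-4.0 * np.pi * beta0 * C_PT[n] / L))
                       / (2.0 * beta0 * alphas))

for n, c in C_PT.items():
    print(n, round(np.exp(-c / t_sat), 2))   # eta_sat, cf. the table above
\end{verbatim}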
Figure~\ref{Higgs_PT_width_dM_f0.68} shows the curves corresponding to the parametrisation eq.~(\ref{parametrisation_of_PT_width}). We can see that it works rather well for all values of $n$ except $n=2$ in the region $\eta\sim 0.4-0.6$. This can be improved using the relation $J(\eta)t_{PT} = $ constant, which works better for $n=2$ because it is exact for primary emissions with anti-$k_t$. But implementing it would not change the main conclusions presented in sections~\ref{sec:higgs-width}-\ref{hadronisation_corrections}. Therefore, for the sake of simplicity, we will not use it here: we keep eq.~(\ref{parametrisation_of_PT_width}) as the expression for $\delta M_{PT}$ for the rest of this study.
Of course, if perturbative radiation were the only effect, it would be best to choose $\eta\ge\eta_{sat}$ in order to catch as many gluons as possible, leading to $\delta M_{PT}\rightarrow \delta M_{sat}$. But we also have to take into account Initial-State Radiation (ISR) from the incoming $q\bar{q}$ pair, as well as non-perturbative effects like PU and UE, which can contaminate the Higgs neighbourhood and thus increase the jet mass.
For the purpose of this article we will only add UE and PU to the Final-State Radiation (FSR) effect studied above. We will thus ignore ISR, partly for simplicity of the analysis, but also because the results of work such as \cite{Cacciari:2008gd,Dasgupta:2007wa} suggest that, for LHC processes whose hard scales are a few hundred GeV, the crucial interplay is that between FSR and UE/PU. This is evident in the preference for small $R$ values in the dijet mass reconstructions of those references, where ISR does not play a major role. Similarly, we believe that the optimal values of $\eta$ determined here will be affected only to a limited extent by ISR, though we shall not check this explicitly.
\subsection{Study of the Higgs width due to underlying event and pile-up}
This simple analysis does not aim to give precise numbers, but only an estimate of the influence of the UE/PU on the mass of the Higgs jet as we vary, for instance, $\eta$ or $n$, or as the Higgs boson becomes more and more boosted. We therefore model the UE and PU as soft particles uniformly distributed in the $(y,\phi)$ plane \cite{Cacciari:2008gn,Cacciari:2009dp}, with transverse momentum per unit area denoted by $\rho$, and we consider the simple case of a symmetric ($z=1/2$) Higgs decay along the $x$ axis. In the limit $M_H\ll p_{t_H}$, the Higgs momentum $p_H$ is given by:
\begin{equation}
p_H = \left(p_{t_H}+\frac{M_H^2}{2p_{t_H}},p_{t_H},0,0\right)\,.
\end{equation}
The UE/PU momentum, denoted $p_{UE}$,\footnote{For brevity, we define $p_{UE}$ to be the sum of the momenta of the UE and/or PU particles, without referencing the PU dependence, which will always be implicit.} is simply the sum over all the UE/PU particles $g$ belonging to the filtered jet $J$. Still in the limit $M_H\ll p_{t_H}$, we recall the following formula \cite{MyFirstPaper}:
\begin{equation}
R_{bb}\simeq\frac{1}{\sqrt{z(1-z)}}\frac{M_H}{p_{t_H}}\,.\label{Rbb_boosted_limit}
\end{equation}
Throughout this section, we will apply it with $z=1/2$. We can now write $\Delta M = M_{\mbox{\scriptsize filtered jet}} - M_H$ (note the sign change with respect to the perturbative study: UE/PU can only add mass to the jet) as:
\begin{align}
\Delta M & = \frac{1}{2M_H}\left((p_H+p_{UE})^2-M_H^2\right)\,, \nonumber\\
& \simeq \frac{1}{M_H}\sum_{g\in J}p_{t_g}p_{t_H}\left(\frac{\theta_{gH}^2}{2}+\frac{M_H^2}{2p_{t_H}^2}\right)\,,\nonumber\\
& \simeq \frac{M_H}{p_{t_H}}\sum_{g\in J}p_{t_g}\,.\label{average_delta_M_UE}
\end{align}
In the last line we used the approximation:
\begin{equation}
\theta_{gH}\sim \theta_{bH} = \frac{R_{bb}}{2}\,, \label{theta_gh_approx}
\end{equation}
which comes from the fact that the UE and PU particles tend to cluster around the perturbative radiation, which is usually close to the $b$ and $\bar{b}$ because of the collinear logarithmic divergence of QCD. As all the filtered UE/PU particles flow approximately in the same direction, the remaining sum is just the total transverse momentum of the UE which, by definition of $\rho$, is equal to $\rho A$, $A$ being the total area of the filtered jets.\footnote{In the active sense; see \cite{Cacciari:2008gn}.} We thus obtain
\begin{equation}
\Delta M \simeq \frac{\rho A M_H}{p_{t_H}}\,,
\label{AverageDeltaM}
\end{equation}
with
\begin{equation}
\langle A\rangle \simeq n\pi\eta^2R_{bb}^2\,, \label{A_average_for_C_A}
\end{equation}
for the $C/A$ jet algorithm, taking into account the anomalous dimension that arises because there should be some perturbative radiation in the jets (cf. figure~$14$ in \cite{Cacciari:2008gn}). Notice that eq.~(\ref{A_average_for_C_A}) only holds when the jets do not overlap, i.e. usually when $\eta$ is small enough; this is however sufficient for the purpose of our study, and we shall use this formula in all the following calculations. The correction eq.~(\ref{AverageDeltaM}) to $\Delta M$ merely shifts the Higgs mass peak towards higher masses. However, there are three sources of fluctuations that give a width to this peak:
\begin{enumerate}
\item $\rho$ is not strictly uniform in the $(y,\phi)$ plane in a given event.
\item $\rho$ is not the same from one event to the next.
\item The jets' area fluctuates.
\end{enumerate}
Following \cite{Cacciari:2008gn}, we can write the total UE/PU transverse momentum contributing to the Higgs $p_t$ as
\begin{equation}
p_{t_{UE}} = \rho A \pm \left(\sqrt{A}\sigma + A\delta\rho + \rho\Sigma \right)\,,
\end{equation}
where
\begin{align}
\sigma & = \sqrt{\langle \rho^2\rangle -\langle \rho\rangle ^2} \quad \mbox{ with $\langle ...\rangle $ a spatial average in a given event}\,, \\
\delta\rho & = \sqrt{\langle \rho^2\rangle -\langle \rho\rangle ^2} \quad \mbox{ with $\langle ...\rangle $ an average over events}\,, \\
\Sigma & = \sqrt{\langle A^2\rangle -\langle A\rangle ^2} \quad \mbox{ with $\langle A\rangle $ the average over events of the filtered jets' area}\,.
\end{align}
For pure UE events, i.e. without PU, these terms can be estimated \cite{Cacciari:2008gn,Cacciari:2009dp}:
\begin{align}
\rho_{UE} & \simeq 2-3 \mbox{ GeV/area} \,, \label{rho_UE}\\
\sigma_{UE} & \simeq 0.6\rho_{UE} \,, \label{sigma_UE}\\
\delta\rho_{UE} & \simeq 0.8\rho_{UE} \,, \label{delta_rho_UE}\\
\Sigma & \simeq 0.26\sqrt{n}\pi\eta^2R_{bb}^2 \,. \label {Sigma_UE}
\end{align}
Though $\rho_{UE}$ seems to be around $2$ GeV/area, the tuning used in \cite{MyFirstPaper} was closer to $3$ GeV/area, the value that we choose here. In the presence of PU, i.e. when there is more than one $pp$ collision per bunch crossing at the LHC (thus leading to the emission of additional soft particles), $\rho$, $\sigma$ and $\delta\rho$ have to be modified. We define $N_{PU}$ as the number of $pp$ collisions in a bunch crossing, excluding the one at the origin of the hard interaction, and use the following simple model for the UE/PU parameters:
\begin{align}
\rho & \simeq \left(1+\frac{N_{PU}}{4}\right)\rho_{UE} \,, \label{rho_PU}\\
\sigma & \simeq \sqrt{1+\frac{N_{PU}}{4}}\sigma_{UE} \,, \label{sigma_PU}\\
\delta\rho & \simeq \sqrt{1+\frac{N_{PU}}{4}}\delta\rho_{UE} \,. \label{delta_rho_PU}
\end{align}
Some comments are in order. Since $\rho$ measures the level of noise, it should grow like $N_{PU}$: in the expression $1+N_{PU}/4$, the $1$ corresponds to the $pp$ collision that leads to the UE and to the hard interaction, whereas the $N_{PU}/4$ term corresponds to the other $pp$ interactions and could be derived from the numbers given in \cite{Cacciari:2007fd}. The intra- and inter-event fluctuations of $\rho$ are modelled as growing like $\sqrt{\rho}$: we thus simply assign $\sigma$ and $\delta\rho$ the factor $\sqrt{1+N_{PU}/4}$, though further studies might be of value to parametrize these terms more adequately. Notice that the value given for $\delta\rho$ ignores the fluctuations in the number of PU events from one bunch crossing to the next, but this is beyond the accuracy of our model. At high luminosity at the LHC, $N_{PU}$ is expected to be $\sim 20$, which implies $\rho\sim10-20$ GeV/area \cite{Cacciari:2007fd,Sjostrand:2000wi,Sjostrand:2003wg}.
Assuming gaussian distributions for these three kinds of fluctuations, one can deduce the Higgs width due to the presence of UE/PU,\footnote{Here again, for brevity, we define $\delta M_{UE}$ to be the Higgs width in the presence of UE and/or PU, without referencing the PU dependence; it serves only to distinguish the width due to UE/PU from the perturbative width $\delta M_{PT}$.} $\delta M_{UE} = 2\sqrt{\langle \Delta M^2\rangle - \langle \Delta M\rangle ^2}$:
\begin{equation}
\delta M_{UE} = 2\sqrt{A\sigma^2+A^2\delta\rho^2+\rho^2\Sigma^2}\frac{M_H}{p_{t_H}}\,.\label{WidthUE}
\end{equation}
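Collecting eqs.~(\ref{Rbb_boosted_limit},\ref{A_average_for_C_A}) and (\ref{rho_UE}-\ref{delta_rho_PU}), the model width eq.~(\ref{WidthUE}) is simple to evaluate; the following sketch (function and variable names are ours, and $z=1/2$ is assumed as above) makes the scaling with $n$, $\eta$, $p_{t_H}$ and $N_{PU}$ easy to explore:
\begin{verbatim}
import numpy as np

def dM_UE(n, eta, ptH, NPU, MH=115.0, rho_UE=3.0):
    # Estimate of the UE/PU width, eq. (WidthUE), with the simple model
    # of eqs. (rho_UE)-(delta_rho_PU); z = 1/2 is assumed throughout.
    Rbb = 2.0 * MH / ptH                  # eq. (Rbb_boosted_limit), z = 1/2
    A = n * np.pi * eta**2 * Rbb**2       # mean area, eq. (A_average_for_C_A)
    rho   = (1.0 + NPU / 4.0) * rho_UE
    sigma = np.sqrt(1.0 + NPU / 4.0) * 0.6 * rho_UE
    drho  = np.sqrt(1.0 + NPU / 4.0) * 0.8 * rho_UE
    Sig   = 0.26 * np.sqrt(n) * np.pi * eta**2 * Rbb**2
    return 2.0 * np.sqrt(A * sigma**2 + A**2 * drho**2
                         + rho**2 * Sig**2) * MH / ptH

print(dM_UE(n=3, eta=0.5, ptH=200.0, NPU=0))
\end{verbatim}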
For a gaussian peak, defining a $2\sigma$ width means that we keep roughly $68\%$ of the events around the average, consistently with the value $f=0.68$ chosen for the perturbative calculation.
We now have all the important results in hand to consider both UE/PU and FSR simultaneously.
\subsection{Study of the Higgs width in presence of both UE/PU and perturbative radiation}
\label{sec:higgs-width}
The purpose of this part is to estimate how one should choose the pair of filtering parameters ($n$,$\eta$). For that, one has to convolve the effects of UE/PU and perturbative radiation, compute the resulting reconstructed Higgs peak width, and then minimize it with respect to the filtering parameters. This is highly non-trivial to do analytically and we leave it for future work. The simple choice made here is to say that, for a given $n$, the optimal $\eta$, denoted $\eta_{opt}$, is the one for which the two widths are equal. This is obviously not exact in general, but it seems reasonable for obtaining an estimate (figure~\ref{Higgs_total_width}) and for understanding how $\eta_{opt}$ changes when we vary $p_{t_H}$ and $N_{PU}$. Notice that, using this method, we have to impose $\eta_{opt}<\eta_{sat}$, where $\eta_{sat}$ is the saturation point (eq.~(\ref{eta_sat})): beyond $\eta_{sat}$, increasing $\eta$ makes $\delta M_{UE}$ larger without decreasing $\delta M_{PT}$, so that solving the equation $\delta M_{PT}=\delta M_{UE}$ makes no sense in this region. Finally, we numerically minimize $\sqrt{\delta M_{PT}^2+\delta M_{UE}^2}$, calculated at $\eta=\eta_{opt}(n)$, with respect to $n$ in order to find $n_{opt}$.
\begin{figure}[htb]
\centering
\includegraphics[scale=0.275]{figures/Combined_Higgs_width.ps}
\caption{The Higgs width due to UE/PU and loss of perturbative radiation, combined as if the $2$ distributions were gaussians, i.e. $\delta M_{tot} = \sqrt{\delta M_{PT}^2+\delta M_{UE}^2}$ when $n=3$. In this case, $\eta_{opt}$, though slightly larger, is approximately given by the intersection of the $2$ curves, at least as long as $\eta$ is not in the saturation region.}
\label{Higgs_total_width}
\end{figure}
First, we would like to understand how $\eta_{opt}$ evolves with respect to the physical parameters. The equality $\delta M_{PT}=\delta M_{UE}$ gives an equation in $L=\ln\frac{1}{\eta}$:
\begin{equation}
M_He^{-\frac{1}{2\beta_0\alpha_s}\left(1-e^{-4\pi\beta_0\frac{C_{PT}}{L}}\right)} = 2\sqrt{c_{\sigma}^2e^{-2L}+c_{\delta\rho}^2e^{-4L}+c_{\Sigma}^2e^{-4L}}\rho_{UE}\frac{M_H}{p_{t_H}}\,, \label{exact_equation_for_L}
\end{equation}
where the coefficients $c_{\sigma}$, $c_{\delta\rho}$ and $c_{\Sigma}$ can be easily calculated using eqs.~(\ref{A_average_for_C_A},\ref{rho_UE}-\ref{delta_rho_PU},\ref{WidthUE}):
\begin{align}
c_{\sigma}(n,N_{PU},R_{bb}) & \simeq 0.6\sqrt{\pi}\sqrt{n}R_{bb}\sqrt{1+\frac{N_{PU}}{4}}\,,\label{c_sigma_with_PU}\\
c_{\delta\rho}(n,N_{PU},R_{bb}) & \simeq 0.8\pi n R_{bb}^2\sqrt{1+\frac{N_{PU}}{4}}\,,\label{c_delta_rho_with_PU}\\
c_{\Sigma}(n,N_{PU},R_{bb}) & \simeq 0.26\pi\sqrt{n}R_{bb}^2\left(1+\frac{N_{PU}}{4}\right)\,.\label{c_Sigma_with_PU}
\end{align}
If the solution of eq.~(\ref{exact_equation_for_L}) for a given $n$ is found to be above $\eta_{sat}(n,f)$, then $\eta_{opt}=\eta_{sat}(n,f)$, in order to take the saturation of $\delta M_{PT}$ into account. We start by solving this equation numerically. In figure~\ref{solutions_of_the_equation_for_L} we show $\eta_{opt}$ as a function of $p_{t_H}$ and $N_{PU}$ for different values of $n$. As it should, $\eta_{opt}$ increases with $p_{t_H}$ at fixed $N_{PU}$: if $p_{t_H}$ grows at fixed $\eta$, $R_{bb}$ decreases and so does the effect of UE/PU, whereas the perturbative radiation is kept fixed (no dependence on $R_{bb}$). Notice also, for $n=3$, that the values obtained for $\eta_{opt}$ are roughly consistent with the choice in \cite{MyFirstPaper}, where we had $\eta = \min(0.3/R_{bb},1/2)$. The saturation comes into effect at relatively low $p_{t_H}$, around $400-500$ GeV. Above this value the total width is small and hadronisation corrections start to become relevant, so that the results presented on these plots become less reliable. However, for $p_{t_H}\gtrsim 500$ GeV and $\eta>\eta_{sat}$, the Higgs widths due to perturbative radiation and UE/PU vary slowly with $\eta$, and we thus believe that the precise value chosen for $\eta$ is not so important: one can take any value above $\eta_{sat}$ without changing the result too much. The decrease of $\eta_{opt}$ with $N_{PU}$ seems weaker than one might have expected {\it a priori}. However, in fig.~\ref{Higgs_total_width} we can see that the negative slope of the perturbative width is very large, so that increasing the noise from PU does not change the value of $\eta_{opt}$ much.
\begin{figure}[htb]
\centering
\subfigure[]{
\includegraphics[scale=0.275]{figures/eta_optimal_ptH_NPU0_f0.68.ps}
\label{solution_for_L_ptH_curve}
}~
\subfigure[]{
\includegraphics[scale=0.275]{figures/eta_optimal_NPU_ptH200_f0.68.ps}
\label{solution_for_L_NPU_curve}
}
\caption{The numerical solutions (points) of eq.~(\ref{exact_equation_for_L}) shown for different values of $n$: \subref{solution_for_L_ptH_curve} as a function of $p_{t_H}$ when $N_{PU}=0$, and \subref{solution_for_L_NPU_curve} as a function of $N_{PU}$ when $p_{t_H}=200$ GeV. We also show the corresponding approximate analytical solutions (lines) derived in eq.~(\ref{eta_approximate_solution}).}
\label{solutions_of_the_equation_for_L}
\end{figure}
It would be interesting to understand analytically the evolution of $\eta_{opt}$ with the physical parameters $p_{t_H}$ and $N_{PU}$. Unfortunately, eq.~(\ref{exact_equation_for_L}) cannot easily be dealt with, so we have to make an approximation: in this equation, one of the three terms under the square root may dominate when $\eta = \eta_{opt}$. At first sight, one would expect the $c_{\sigma}^2e^{-2L}$ term, which scales like $\eta^2$, to be the largest at low $\eta_{opt}$, and the $c_{\Sigma}^2e^{-4L}$ term, which scales like $N_{PU}^2$, to be the largest at large $N_{PU}$. But figure~\ref{which_UE_term_is_dominant}, shown for $n=3$, reveals that it is surprisingly the $c_{\delta\rho}$ term that brings the largest contribution to $\delta M_{UE}$ for physical values of the parameters (the same holds for other values of $n$).
\begin{figure}[bht]
\centering
\subfigure[]{
\includegraphics[scale=0.275]{figures/dMUE_nf3_ptH.ps}
\label{dMUE_wrt_ptH}
}~
\subfigure[]{
\includegraphics[scale=0.275]{figures/dMUE_nf3_NPU.ps}
\label{dMUE_wrt_NPU}
}
\caption{$\delta M_{UE}$ computed at $\eta = \eta_{opt}$ with respect to \subref{dMUE_wrt_ptH} $p_{t_H}$ when $N_{PU}=0$ and \subref{dMUE_wrt_NPU} $N_{PU}$ when $p_{t_H}=200$ GeV. On these plots is also represented the contribution to $\delta M_{UE}$ of each term separately. When the UE/PU width falls below the saturation line $\delta M_{UE}=\delta M_{sat}$, then $\eta_{opt}=\eta_{sat}$.}
\label{which_UE_term_is_dominant}
\end{figure}
Therefore, to simplify things a little, one can consider eq.~(\ref{exact_equation_for_L}) and put $c_{\sigma}=c_{\Sigma}=0$. However, to be more general, and to consider the possible situation where one of the other terms might be dominant,\footnote{The subtraction procedure proposed in \cite{Cacciari:2007fd} seems to eliminate most of the fluctuations from the $c_{\delta\rho}$ and $c_{\Sigma}$ terms, so that the remaining $c_{\sigma}$ term would be dominant in this case.} we rewrite eq.~(\ref{exact_equation_for_L}) in the following approximate form:
\begin{equation}
M_He^{-\frac{1}{2\beta_0\alpha_s}\left(1-e^{-4\pi\beta_0\frac{C_{PT}}{L}}\right)} = C_{UE}\rho_{UE}e^{-pL}R_{bb}^p\frac{M_H}{p_{t_H}}\,,\label{approximate_equation_for_L_1}
\end{equation}
where $p=1$ if the $c_{\sigma}$ term dominates and $p=2$ otherwise. Moreover:
\begin{equation}
C_{UE}(n,N_{PU}) = \left\{\begin{array}{ll}
1.2\sqrt{\pi}\sqrt{n}\sqrt{1+\frac{N_{PU}}{4}}\,, & \mbox{ if the $c_{\sigma}$ term is dominant,}\\
1.6\pi n \sqrt{1+\frac{N_{PU}}{4}}\,, & \mbox{ if the $c_{\delta\rho}$ term is dominant,}\\
0.52\pi\sqrt{n}\left(1+\frac{N_{PU}}{4}\right)\,, & \mbox{ if the $c_{\Sigma}$ term is dominant.}\end{array}\right.
\label{definition_of_C_UE}
\end{equation}
Eq.~(\ref{approximate_equation_for_L_1}) can be written in a slightly different way:
\begin{equation}
\frac{B_{PT}}{L} = \ln\left(\frac{1}{B_{UE}-2\beta_0\alpha_s pL}\right)\,,\label{approximate_equation_for_L_2}
\end{equation}
with:
\begin{align}
B_{PT} & = 4\pi\beta_0C_{PT}\,, \label{value_of_B_PT}\\
B_{UE} & = 1-2\beta_0\alpha_s\ln\left(\frac{p_{t_H}}{C_{UE}\rho_{UE}R_{bb}^p}\right)\,.
\end{align}
Despite its simpler form, eq.~(\ref{approximate_equation_for_L_2}) for $L$ still cannot be solved analytically. We therefore make a second approximation, expanding perturbatively:
\begin{equation}
\frac{B_{PT}}{L} = \ln\frac{1}{B_{UE}}+\frac{2\beta_0\alpha_s p}{B_{UE}}L+{\cal O}\left((\alpha_s L)^2\right)\,.
\end{equation}
Neglecting the ${\cal O}\left((\alpha_s L)^2\right)$ term, the resulting quadratic equation immediately implies
\begin{equation}
L_{opt} = \frac{-B_{UE}\ln\frac{1}{B_{UE}}+\sqrt{B_{UE}^2\ln^2\frac{1}{B_{UE}}+8\beta_0\alpha_s pB_{UE}B_{PT}}}{4\beta_0\alpha_s p}\,.\label{solution_for_L_opt}
\end{equation}
Taking into account the saturation effect, $\eta_{opt}$ is then given by:
\begin{equation}
\eta_{opt} = \left\{\begin{array}{ll}
e^{-L_{opt}}\,, & \mbox{ if } L_{opt}>-\ln\eta_{sat}\,,\\
\eta_{sat}\,, & \mbox{ otherwise}\,. \end{array}\right.
\label{eta_approximate_solution}
\end{equation}
We used this expression with $C_{UE}$ corresponding to the $\delta\rho$ term in eq.~(\ref{definition_of_C_UE}) and $p=2$ in order to plot the approximate solutions in figure~\ref{solutions_of_the_equation_for_L}. This reveals that the above relation for $\eta_{opt}$ (eq.~(\ref{eta_approximate_solution})) works rather well, to within a few percent.
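For completeness, the approximate determination of $\eta_{opt}$ used for these curves can be written in a few lines; the sketch below (with the $C_{PT}$ values of table~\ref{Cpt_and_eta_sat_values}, the $\delta\rho$ line of eq.~(\ref{definition_of_C_UE}) and $p=2$) remains a rough estimate rather than a precise prediction:
\begin{verbatim}
import numpy as np

alphas, MH = 0.114, 115.0
beta0 = (11.0 * 3.0 - 2.0 * 5.0) / (12.0 * np.pi)    # nf = 5 assumed
C_PT = {2: 0.032, 3: 0.078, 4: 0.117, 5: 0.149}      # f = 0.68

def eta_opt(n, ptH, NPU, rho_UE=3.0, p=2, t_sat=0.136):
    # eqs. (solution_for_L_opt) and (eta_approximate_solution), with
    # C_UE from the dominant c_deltarho term of eq. (definition_of_C_UE).
    Rbb = 2.0 * MH / ptH                              # z = 1/2
    C_UE = 1.6 * np.pi * n * np.sqrt(1.0 + NPU / 4.0)
    B_PT = 4.0 * np.pi * beta0 * C_PT[n]              # eq. (value_of_B_PT)
    B_UE = 1.0 - 2.0 * beta0 * alphas * np.log(ptH / (C_UE * rho_UE * Rbb**p))
    lnB = np.log(1.0 / B_UE)
    L_opt = (-B_UE * lnB + np.sqrt((B_UE * lnB)**2
             + 8.0 * beta0 * alphas * p * B_UE * B_PT)) \
            / (4.0 * beta0 * alphas * p)
    return min(np.exp(-L_opt), np.exp(-C_PT[n] / t_sat))  # saturation cap

print(eta_opt(n=3, ptH=200.0, NPU=0))
\end{verbatim}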
As a second step, we would like to find the optimal $n$, denoted $n_{opt}$. This should also depend on the way UE/PU and perturbative radiation are combined. However, as a simple approximation, one can combine them as if they were both gaussian distributions. Therefore, one should minimize
\begin{equation}
\delta M_{tot}(n) = \sqrt{\delta M_{PT}^2(n)+\delta M_{UE}^2(n)}\,,
\end{equation}
computed at $\eta = \eta_{opt}(n)$ for a given $p_{t_H}$ and $N_{PU}$.
\begin{figure}[htb]
\centering
\subfigure[]{
\includegraphics[scale=0.275]{figures/dMtot_ptH_f0.68.ps}
\label{dMtot_wrt_ptH}
}~
\subfigure[]{
\includegraphics[scale=0.275]{figures/dMtot_NPU_f0.68.ps}
\label{dMtot_wrt_NPU}
}
\caption{$\delta M_{tot}$ computed at $\eta = \eta_{opt}$ as a function of \subref{dMtot_wrt_ptH} $p_{t_H}$ and \subref{dMtot_wrt_NPU} $N_{PU}$ for different values of $n$.}
\label{solution_for_n_opt}
\end{figure}
The results are plotted in figure~\ref{solution_for_n_opt}. We notice that the larger $n$, the narrower the peak, and thus the better the result. However, one should keep in mind that, as $n$ increases, the optimal $\filt{R}=\eta R_{bb}$ becomes small, and we then have to deal with hadronisation corrections, which grow as $1/\filt{R}$ \cite{Dasgupta:2007wa}, as well as with the detector resolution and granularity ($\delta\eta\times\delta\phi = 0.1\times0.1$), both of which start to have an important impact on the reconstructed Higgs width and thus degrade the results presented here. In section~\ref{hadronisation_corrections} we will examine what happens when we include a very rough estimate of the hadronisation corrections. However, at first sight, it seems that one should definitely not take $n=2$. The value $n=3$ chosen in \cite{MyFirstPaper} is good, but it may be possible to do better with $n=4$. Beyond this value, the optimal $\filt{R}$ falls below $\sim$ $0.2$ (cf. figure~\ref{solutions_of_the_equation_for_L}), which is too small for this study to be fully reliable, as we shall see in section~\ref{hadronisation_corrections}.
\subsection{Variations of the results with $z$ and $f$ \label{variations_of_the_results_with_z_and_f}}
Until now, we have only presented some results for $f=0.68$ and $z=1/2$, $z$ being defined as
\begin{equation}
z=\min\left(\frac{E_b}{E_H},\frac{E_{\bar{b}}}{E_H}\right)\,,
\end{equation}
with $E_i$ the energy of particle $i$ in the Higgs splitting into $b\bar{b}$. What happens if we change these values?
Let us start with $z$. Though the Higgs splitting into $b\bar{b}$ is more often symmetric than in QCD events (and this is what was used in \cite{MyFirstPaper} to distinguish it from pure QCD splittings), it still has a distribution in $z$ that is uniform in the range:
\begin{equation}
\frac12\left(1-\frac{1}{\sqrt{1+\frac{M_H^2}{p_{t_H}^2}}}\right) < z <\frac12\,,\label{kinematical_limit_for_z}
\end{equation}
which follows from simple kinematics in the limit $m_b=0$. But in order to reduce the large QCD background, one usually cuts on small $z$, so that
\begin{equation}
z_{cut}<z<\frac12\,, \label{z_acceptance}
\end{equation}
with $z_{cut}\sim 0.1$. As an example, assume the $b$ quark carries the energy fraction $z$ in the Higgs splitting. In that case, $b$ and $\bar{b}$ are not equidistant from the Higgs direction: they lie at angular distances $(1-z)R_{bb}$ and $zR_{bb}$ from $H$ respectively (see for instance figure~\ref{variables} in appendix~\ref{app:analytical_considerations}). Therefore, as UE/PU particles tend to cluster around the perturbative radiation, eq.~(\ref{theta_gh_approx}) has to be modified:
\begin{equation}
\theta_{gH}\sim zR_{bb} \mbox{ or } \theta_{gH}\sim (1-z)R_{bb}\,,
\end{equation}
for a given UE/PU particle $g$ in the filtered jet. This leads to the modification of eq.~(\ref{average_delta_M_UE}) according to whether $g$ is closer to $b$ (region called ``$J_1$'') or to $\bar{b}$ (region called ``$J_2$''):
\begin{align}
\Delta M & \simeq \frac{p_{t_H}}{M_H}\left(\sum_{g\in J_1}p_{t_g}\left(\frac{(1-z)^2R_{bb}^2}{2}+\frac{M_H^2}{2p_{t_H}^2}\right)+\sum_{g\in J_2}p_{t_g}\left(\frac{z^2R_{bb}^2}{2}+\frac{M_H^2}{2p_{t_H}^2}\right)\right)\,,\nonumber\\
& \simeq \frac{M_H}{2p_{t_H}}\left(\frac1z\sum_{g\in J_1}p_{t_g}+\frac{1}{1-z}\sum_{g\in J_2}p_{t_g}\right)\,,\nonumber\\
& = \frac{M_H}{2p_{t_H}}\left(\frac1z\rho A_1+\frac{1}{1-z}\rho A_2\right)\,. \label{average_DeltaM_for_any_z_and_n_2}
\end{align}
In this calculation we used eq.~(\ref{Rbb_boosted_limit}). To compute the dependence of the fluctuations on $z$, we take the simplest case $n=2$. For the $\sigma$ and $\Sigma$ fluctuations, the terms $\rho A_1$ and $\rho A_2$ vary independently, leading to the following contribution to $\delta M_{UE}$:\footnote{The factor of $4$ comes from the fact that we compute the width at $2\sigma$.}
\begin{equation}
\delta M_{UE,\sigma,\Sigma}^2=4\left(\frac{M_H\rho_{UE}}{2p_{t_H}}\right)^2\left(\frac{1}{z^2}\delta_{1,\sigma,\Sigma}^2+\frac{1}{(1-z)^2}\delta_{2,\sigma,\Sigma}^2\right)\,, \label{z_dependence_n_2_for_s_and_drho}
\end{equation}
where
\begin{equation}
\delta_{1,\sigma,\Sigma}^2 = \delta_{2,\sigma,\Sigma}^2 = c_{\sigma}^2e^{-2L}+c_{\Sigma}^2e^{-4L}\,,\label{delta_sigma_Sigma}
\end{equation}
with $c_{\sigma}$ and $c_{\Sigma}$ given by eqs.~(\ref{c_sigma_with_PU},\ref{c_Sigma_with_PU}) for $n=1$. Concerning the $\delta\rho$ fluctuations, the two terms $\rho A_1$ and $\rho A_2$ vary in the same way from one event to the next; their fluctuations are thus fully correlated, and if it were only for the $\delta\rho$ term we could write $\rho A_1 = \rho A_2$, leading to:
\begin{equation}
\delta M_{UE,\delta\rho}^2 = 4\left(\frac{M_H\rho_{UE}}{2p_{t_H}}\right)^2\frac{1}{z^2(1-z)^2}\delta_{\delta\rho}^2\,,
\end{equation}
where
\begin{equation}
\delta_{\delta\rho}^2 = c_{\delta\rho}^2e^{-4L}\,,\label{delta_delta_rho}
\end{equation}
with $c_{\delta\rho}$ given by eq.~(\ref{c_delta_rho_with_PU}) for $n=1$. Adding all these contributions,
\begin{equation}
\delta M_{UE}^2 = \delta M_{UE,\sigma,\Sigma}^2 + \delta M_{UE,\delta\rho}^2\,,
\end{equation}
this apparently leads to an enhancement of the width by a factor of $1/z$. But we have to take into account that the coefficients $c_{\delta\rho}$ and $c_{\Sigma}$ also contain a factor $R_{bb}^2$ (eqs.~(\ref{c_delta_rho_with_PU},\ref{c_Sigma_with_PU})), leading to another factor $1/z$, and thus to an enhancement $1/z^2$ at small $z$.\footnote{This is valid when $p_{t_H}>\frac{1}{\sqrt{z(1-z)}}\frac{M_H}{R_0}$, with $R_0$ the radius of the initial clustering of the event, in order for the $b$ and $\bar{b}$ to be clustered together. For lower $p_{t_H}$, there is a kinematic cut on $z$ and the enhancement is less strong.} Therefore, we can conclude that the effect of $z\neq 1/2$ is to broaden the reconstructed Higgs peak. Such a factor may partly explain the width of $\sim$ $14$ GeV observed in \cite{Rubin:2009ft}, to be compared with the various widths found in the previous subsection (see for instance figure~\ref{solution_for_n_opt}), and it should also lead to a decrease of $\eta_{opt}$. This is illustrated in fig.~\ref{eta_optimal_and_dMtot_for_z_0.2}, which was obtained with the results derived in appendix~\ref{app:some_analytical_results_for_the_dependence_on_z_and_f}, where we carry out the above analysis for a general $n$.
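To make the $z$ dependence concrete, the $n=2$ estimate of eqs.~(\ref{z_dependence_n_2_for_s_and_drho}-\ref{delta_delta_rho}) can be collected into a short sketch (our own helper, with the same illustrative model parameters as before):
\begin{verbatim}
import numpy as np

def dM_UE_z(z, eta, ptH, NPU=0.0, MH=115.0, rho_UE=3.0):
    # n = 2 case: two filtered jets of radius eta*Rbb around b and bbar.
    Rbb = MH / (ptH * np.sqrt(z * (1.0 - z)))   # eq. (Rbb_boosted_limit)
    L = np.log(1.0 / eta)
    k = np.sqrt(1.0 + NPU / 4.0)
    c_sig  = 0.6 * np.sqrt(np.pi) * Rbb * k     # n = 1 coefficients
    c_drho = 0.8 * np.pi * Rbb**2 * k
    c_Sig  = 0.26 * np.pi * Rbb**2 * (1.0 + NPU / 4.0)
    d2 = c_sig**2 * np.exp(-2.0 * L) + c_Sig**2 * np.exp(-4.0 * L)
    pref = (MH * rho_UE / (2.0 * ptH))**2
    w2 = 4.0 * pref * ((1.0 / z**2 + 1.0 / (1.0 - z)**2) * d2
                       + c_drho**2 * np.exp(-4.0 * L) / (z * (1.0 - z))**2)
    return np.sqrt(w2)

print(dM_UE_z(0.2, 0.5, 200.0), dM_UE_z(0.5, 0.5, 200.0))  # broader at small z
\end{verbatim}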
\begin{figure}[hbt]
\centering
\subfigure[]{
\includegraphics[scale=0.275]{figures/eta_optimal_ptH_z0.2.ps}
\label{eta_optimal_ptH_z0.2}
}~
\subfigure[]{
\includegraphics[scale=0.275]{figures/dMtot_ptH_z0.2.ps}
\label{dMtot_ptH_z0.2}
}
\caption{\subref{eta_optimal_ptH_z0.2} $\eta_{opt}$ as a function of $p_{t_H}$ when $f=0.68$ and $z=0.2$ for different values of $n$. The points correspond to the numerical determination of $\eta_{opt}$, found solving eq.~(\ref{exact_equation_for_L}) whose $z$ dependence is derived in appendix~\ref{app:some_analytical_results_for_the_dependence_on_z_and_f}, whereas the curves correspond to its approximate analytical solutions. \subref{dMtot_ptH_z0.2} $\delta M_{tot}=\sqrt{\delta M_{PT}^2+\delta M_{UE}^2}$ computed at $\eta=\eta_{opt}$ as a function of $p_{t_H}$ for $f=0.68$ and $z=0.2$.}
\label{eta_optimal_and_dMtot_for_z_0.2}
\end{figure}
Now, we turn to the value of $f$. As we explained in section~\ref{study_of_the_Higgs_perturbative_width}, the choice $f=0.68$ was made to correspond to a $2\sigma$ gaussian width, as we did for $\delta M_{UE}$, which is somewhat arbitrary. We would like to estimate how the results change when $f$ is modified, and we thus consider a range of values for $f$ between $0.5$ and $0.8$. In this case the $C_{PT}(n,f)$ constants characterizing $\delta M_{PT}$ are changed (see for instance eq.~(\ref{C_PT_2_f})), and $\delta M_{UE}$ is also changed, i.e.\ eqs.~(\ref{WidthUE},\ref{exact_equation_for_L},\ref{definition_of_C_UE}) have to be slightly modified:
\begin{align}
\delta M_{UE} & = 2\sqrt2\,\mbox{erf}^{-1}(f)\,\sqrt{A\sigma^2+A^2\delta\rho^2+\rho^2\Sigma^2}\frac{M_H}{p_{t_H}}\,,\\
& = 2\sqrt2\,\mbox{erf}^{-1}(f)\,\sqrt{c_{\sigma}^2e^{-2L}+c_{\delta\rho}^2e^{-4L}+c_{\Sigma}^2e^{-4L}}\rho_{UE}\frac{M_H}{p_{t_H}}\,,
\end{align}
where $\mbox{erf}(x)$ is the usual error function:
\begin{equation}
\mbox{erf}(x) = \frac{2}{\sqrt{\pi}}\int_0^{x}e^{-u^2}du\,.
\end{equation}
Notice that the constants $c_{\sigma}$, $c_{\delta\rho}$ and $c_{\Sigma}$ are left unchanged with this convention. However $C_{UE}$ becomes:
\begin{equation}
C_{UE}(n,f,N_{PU}) = \left\{\begin{array}{ll}
2\sqrt2\,\mbox{erf}^{-1}(f)\,0.6\sqrt{\pi}\sqrt{n}\sqrt{1+\frac{N_{PU}}{4}}\,, & \mbox{ if the $c_{\sigma}$ term is dominant,}\\
2\sqrt2\,\mbox{erf}^{-1}(f)\,0.8\pi n \sqrt{1+\frac{N_{PU}}{4}}\,, & \mbox{ if the $c_{\delta\rho}$ term is dominant,}\\
2\sqrt2\,\mbox{erf}^{-1}(f)\,0.26\pi\sqrt{n}\left(1+\frac{N_{PU}}{4}\right)\,, & \mbox{ if the $c_{\Sigma}$ term is dominant.} \end{array}\right.\label{C_UE_wrt_f}
\end{equation}
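As a quick numerical illustration of eq.~(\ref{C_UE_wrt_f}) (a minimal sketch, not part of the analytical study; the coefficients are simply copied from the equation above):
\begin{verbatim}
# Tabulate the f-dependence of C_UE(n, f, N_PU), eq. (C_UE_wrt_f).
import numpy as np
from scipy.special import erfinv

def C_UE(n, f, N_PU, dominant="sigma"):
    pref = 2 * np.sqrt(2) * erfinv(f)
    pu = 1 + N_PU / 4.0
    if dominant == "sigma":                       # c_sigma term dominant
        return pref * 0.6 * np.sqrt(np.pi) * np.sqrt(n) * np.sqrt(pu)
    if dominant == "drho":                        # c_deltarho term dominant
        return pref * 0.8 * np.pi * n * np.sqrt(pu)
    return pref * 0.26 * np.pi * np.sqrt(n) * pu  # c_Sigma term dominant

print(2 * np.sqrt(2) * erfinv(0.68))              # ~ 2.0
print([round(C_UE(3, f, 0), 2) for f in (0.5, 0.68, 0.8)])
\end{verbatim}
In particular $2\sqrt2\,\mbox{erf}^{-1}(0.68)\simeq 2$, which is why the choice $f=0.68$ reproduces the $2\sigma$ width convention adopted earlier.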
\begin{figure}[bth]
\centering
\subfigure[]{
\includegraphics[scale=0.275]{figures/eta_optimal_ptH_with_bands.ps}
\label{eta_optimal_ptH_with_bands}
}~
\subfigure[]{
\includegraphics[scale=0.275]{figures/eta_optimal_NPU_with_bands.ps}
\label{eta_optimal_NPU_with_bands}
}
\caption{Uncertainty on $\eta_{opt}$ when $f$ varies from $0.5$ to $0.8$, for different values of $n$ as a function of \subref{eta_optimal_ptH_with_bands} $p_{t_H}$ when $N_{PU}=0$ and \subref{eta_optimal_NPU_with_bands} $N_{PU}$ when $p_{t_H}=200$ GeV. The results for $f=0.68$ are also plotted as a reference.}
\label{eta_optimal_with_bands}
\end{figure}
The bands for the uncertainty on $\eta_{opt}$ obtained when including these modifications are presented in figure~\ref{eta_optimal_with_bands}. The resulting uncertainty, $\sim 20{-}30\%$, does not exceed the precision of the whole study in this paper, which is limited to a large-$N_c$ leading-log calculation. Notice that the variation with $N_{PU}$ remains small.
One finally observes that $\eta_{sat}(n,f)$ is almost independent of $f$ for $n=3$. In appendix~\ref{app:some_analytical_results_for_the_dependence_on_z_and_f}, we will show that it can be approximately written as:
\begin{equation}
\eta_{sat} \simeq e^{-0.58}\left(1+0.044\left(f-\frac12\right)+{\cal O}\left(\left(f-\frac12\right)^2\right)\right)\,.\label{analytical_eta_sat_for_nf3}
\end{equation}
Because of the small coefficient of its first-order correction, $\eta_{sat}=e^{-0.58}\simeq 0.56$ is a good approximation, to better than $1\%$, over a large range of $f$ values. But this seems to be a coincidence with no deep physical reason.
\subsection{Hadronisation corrections\label{hadronisation_corrections}}
It is difficult to calculate what happens during the process of hadronisation, though some analytical results exist for jet studies, see for instance \cite{Dasgupta:2007wa,Korchemsky:1994is,Dasgupta:2009tm}. In particular, it was shown in \cite{Dasgupta:2007wa} that such non-perturbative corrections lead to a $p_t$ shift for QCD jets equal on average to $\sim -C_i\,\Lambda/\filt{R}$, where $\Lambda = 0.4$ GeV and $C_i = C_F$ or $C_A$ depending on whether it is a quark jet or a gluon jet. This translates, in our study, into the following average $p_t$ shift for the filtered jet:
\begin{align}
\langle \delta p_t \rangle_{had} & = -\left(2C_F+(n-2)C_A\right)\frac{\Lambda}{\filt{R}}\,,\nonumber\\
& \simeq -\frac{(n-1)N_c\Lambda}{\filt{R}}\,,
\end{align}
where the second equality holds in the large-$N_c$ limit. Unfortunately, there is no result concerning the dispersion of the $p_t$ distribution, which is the relevant quantity in our case. We therefore assume that the spread is of the same order of magnitude as the shift. This is in principle a crude approximation, but the only aim here is to illustrate the consequences of including hadronisation corrections, and to emphasize that taking $n$ too large is certainly not a good choice. We thus use eq.~(\ref{average_delta_M_UE}) to estimate very roughly the hadronisation corrections to the reconstructed Higgs mass peak width:
\begin{equation}
\delta M_{had} \sim \frac{(n-1)N_c\Lambda}{\filt{R}}\frac{M_H}{p_{t_H}} = \frac{(n-1)N_c\Lambda}{2\eta}\,,\label{dM_had}
\end{equation}
when $z=1/2$. As before, one would ideally know how to combine perturbative radiation with UE/PU and hadronisation corrections in order to minimize the resulting combined width. Instead, we simply choose to minimize the quantity
\begin{equation}
\delta M_{tot} = \sqrt{\delta M_{PT}^2+\delta M_{UE}^2+\delta M_{had}^2}\,,
\end{equation}
with respect to $\eta$ and plot the resulting minimal $\delta M_{tot}$ for different values of $n$ (fig.~\ref{dMtot_with_had}).
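For concreteness, the minimisation over $\eta$ can be carried out numerically along the following lines. This is a minimal sketch: the shapes chosen for \texttt{dM\_PT} and \texttt{dM\_UE} below are purely illustrative placeholders (the actual inputs are the parametrisations of the previous sections), while \texttt{dM\_had} implements eq.~(\ref{dM_had}).
\begin{verbatim}
# Sketch: minimise dM_tot over eta for several n, combining placeholder
# shapes for dM_PT and dM_UE with the hadronisation estimate, eq. (dM_had).
import numpy as np
from scipy.optimize import minimize_scalar

N_c, Lam = 3, 0.4                                 # Lambda = 0.4 GeV

def dM_had(eta, n):                               # eq. (dM_had), z = 1/2
    return (n - 1) * N_c * Lam / (2 * eta)

# purely illustrative shapes: dM_PT grows as eta -> 0 (more perturbative
# radiation lost), dM_UE grows with eta (larger catchment area)
dM_PT = lambda eta: 4.0 * np.log(1.0 / eta)       # GeV, hypothetical
dM_UE = lambda eta: 10.0 * eta**2                 # GeV, hypothetical

for n in (2, 3, 4, 5):
    res = minimize_scalar(
        lambda eta: np.sqrt(dM_PT(eta)**2 + dM_UE(eta)**2
                            + dM_had(eta, n)**2),
        bounds=(0.05, 1.0), method="bounded")
    print(n, round(res.x, 3), round(res.fun, 2))  # eta_opt, minimal dM_tot
\end{verbatim}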
\begin{figure}[htb]
\centering
\subfigure[]{
\includegraphics[scale=0.275]{figures/dMtot_ptH_whad.ps}
\label{dMtot_wrt_ptH_whad}
}~
\subfigure[]{
\includegraphics[scale=0.275]{figures/dMtot_NPU_whad.ps}
\label{dMtot_wrt_NPU_whad}
}
\caption{$\delta M_{tot}$ including hadronisation corrections computed at $\eta = \eta_{opt}$ as a function of \subref{dMtot_wrt_ptH_whad} $p_{t_H}$ and \subref{dMtot_wrt_NPU_whad} $N_{PU}$ for different values of $n$.}
\label{dMtot_with_had}
\end{figure}
The first thing one notices in these plots is that increasing $n$ also increases the hadronisation corrections. For $n=5$ they become so large that this value is clearly no longer an optimal filtering parameter, contrary to what could be deduced from figure~\ref{solution_for_n_opt}. The relevant $p_t$ region in our study is roughly $200-400$ GeV, which contains the major part of the Higgs cross-section above $200$ GeV and where our results are more reliable (see section~\ref{sec:higgs-width}). In this region $n=3$ gives the best result. At high PU, $n=3$ and $n=4$ both seem optimal, whereas $n=2$ is far from being as good.
To conclude, our estimates indicate, within the accuracy of our calculations, that neither $n=2$ nor $n\ge 5$ is a good choice, whereas $n=3$ or $n=4$ give a mass peak of equally good quality. Increasing the hadronisation effects with respect to eq.~(\ref{dM_had}) would lead to $n_{opt}=3$, whereas lowering them would give $n_{opt}=4$. All we can safely say is that $n=3$ and $n=4$ both seem to work rather well.
One way to go beyond these results would be to use event generators like Herwig \cite{Corcella:2000bw,Corcella:2002jc} or Pythia \cite{Sjostrand:2006za} to compute directly the Higgs width in the presence of UE/PU, perturbative radiation, ISR and hadronisation, and to find for which value of the pair $(n,\eta)$ the reconstructed Higgs mass peak width $\delta M_H$ is minimal (the answer would still depend on $p_{t_H}$ and the level of UE/PU). But our study here aimed to understand as much as possible the physical aspects behind such an optimisation, the price to pay being larger uncertainties on the result because of the necessary simplifications that were made.
\section{Conclusions}
\label{sec:conclusions}
This work has investigated the effect of QCD radiation on the
reconstruction of hadronically decaying boosted heavy particles,
motivated in part by the proposal of \cite{MyFirstPaper} to use a
boosted search channel for the $H\to b\bar b$ decay. Though this
article took the Higgs boson as an example, all the results presented
here can be applied to the $W$ and $Z$ bosons, as well as any new
colourless resonance decaying hadronically that might be observed
at the LHC.
The main effect of the QCD radiation is to distort and spread out the
boosted heavy resonance shape well beyond the intrinsic width of the
resonance.
The aim therefore is to calculate the resulting resonance lineshape.
This is a function of the parameters of the reconstruction method,
notably of the ``filtering'' procedure, which aims to limit
contamination from underlying event and pile-up, but which causes more
perturbative radiation to be lost than would otherwise be the case.
Calculations were performed in a leading (single) logarithmic and
leading colour approximation, which is the state of the art for this
kind of problem.
Analytic results were provided up to $\alpha_s^2 \ln^2 \frac{M_H}{\Delta M}$ for
$n=2$, and all-orders analytic results for the cases $n=2$ and $n=3$
were given for the terms that dominate in the small $\eta$
limit.
Numerical fixed-order results up to $\alpha_s^5 \ln^5 \frac{M_H}{\Delta M}$ and all-orders resummed results were also given; they are treated in more detail in appendix~\ref{app:convergence_of_the_non-global_series} for a range of $n$ and $\eta$.
For the $n=2$ and $n=3$ cases there is quite acceptable
agreement between the small-$\eta$ analytic results and the full
numerical results, even for values of $\eta\simeq 0.5$.
One unexpected feature that was observed was the behaviour of the
order-by-order expansion as compared to the resummed result: indeed
there are indications that the series in $\alpha_s \ln \frac{M_H}{\Delta M}$ has a vanishing radius of convergence (but in a way that is unrelated to the renormalon divergence of the perturbative QCD series).
This seems to be a general feature of the non-global logarithm series.
Its practical impact seems to be greater for large $\eta$, or
equivalently when the coefficients of the ``primary'' logarithms are
small.
With these results in hand, it was then possible to examine how the
perturbative width of the resonance peak depends on the parameters of
the filtering.
Though this was accessible only numerically for the full range of
filtering parameters, figure~\ref{Higgs_PT_width_t} lends itself to a
simple parametrisation for practically interesting parameter-ranges.
This parametrisation was then used in section~\ref{sec:higgs-width}
together with a parametrisation for the effect of UE and PU, so as
to examine how to minimize the overall resonance width as a function
of the filtering parameters and of the physical parameters of the
problem such as the resonance $p_t$ and the level of UE/PU.
The approximations used might be described as overly simple, yet they
do suggest interesting relations between optimal choices of
the filtering parameters and the physical parameters of the problem.
Though it is beyond the scope of this article to test these relations
in full Monte Carlo simulation, we believe that investigation of their
applicability in realistic conditions would be an interesting subject
for future work. It should also be noticed that the methods used in
this paper may be adapted to other reconstruction procedures like
jet pruning \cite{Ellis:2009me} and jet trimming \cite{Krohn:2009th}
as well as filtering as applied to jets without explicit substructure
\cite{Cacciari:2008gd}.
\section*{Acknowledgements}
\label{sec:acknowledgements}
I am very grateful to Gavin Salam for suggesting this work and for helpful
discussions while it was being carried out. I also wish to thank him as well as Sebastian Sapeta for comments on the manuscript.
This work was supported in part by the French ANR under contract
ANR-09-BLAN-0060.
\section{Introduction}
\subsection {Motivation}
One of the difficulties in optimal design of decentralized control systems is
handling the increase of data at the control stations with time. This increase
in data means that the domain of control laws increases with time which, in
turn, creates two difficulties. Firstly, the number of control strategies
increases doubly exponentially with time; this makes it harder to search for an
optimal strategy. Secondly, even if an optimal strategy is found, implementing
functions with time increasing domain is difficult.
In centralized stochastic control~\cite{KumarVaraiya:1986}, these difficulties
can be circumvented by using the conditional probability of the state given the
data available at the control station as a sufficient statistic (where the data
available to a control station comprises all observations and control actions
till the current time). This conditional probability, called \emph{information
state}, takes values in a time-invariant space. Consequently, we can restrict
attention to control laws with time-invariant domain. Such results, in which data
that is increasing with time is ``compressed'' to a sufficient statistic taking
values in a time-invariant space, are called \emph{structural results}. While
the information state and structural result for centralized stochastic control
problems are well known, no general methodology to find such information states
or structural results exists for decentralized stochastic control problems.
The structural results in centralized stochastic control are related to the
concept of separation. In centralized stochastic control, the information state,
which is conditional probability of the state given all the available data, does
not depend on the control strategy (which is the collection of control laws used
at different time instants). This has been called a one-way separation between
estimation and control. An important consequence of this separation is that for
any given choice of control laws till time $t-1$ and a given realization of the
system variables till time $t$, the information states at future times do not
depend on the choice of the control law at time $t$ but only on the realization
of control action at time $t$. Thus, the future information states are
\emph{separated} from the choice of the current control law. This fact is
crucial for the formulation of the classical dynamic program where at each step
the optimization problem is to find the best control action for a given
realization of the information state. No analogous separation results are known
for general decentralized systems.
In this paper, we find structural results for decentralized control systems with
delayed sharing information structures. In a system with $n$-step delayed
sharing, every control station knows the $n$-step prior observations and control
actions of all other control stations. This information structure, proposed by
Witsenhausen in~\cite{Witsenhausen:1971}, is a link between the classical
information structures, where information is shared perfectly among the
controllers, and the non-classical information structures, where there is no
``lateral'' sharing of information among the controllers. In his seminal
paper~\cite{Witsenhausen:1971}, Witsenhausen asserted a structural result for
this model without any proof. Varaiya and Walrand~\cite{WalrandVaraiya:1978}
proved that Witsenhausen's assertion was true for $n=1$ but false for $n>1$. For
$n>1$, Kurtaran~\cite{Kurtaran:1979} proposed another structural result.
However, Kurtaran proved his result only for the terminal time step (that is,
the last time step in a finite horizon problem); for non-terminal time steps, he
gave an abbreviated argument, which we believe is incomplete. (The details are
given in Section~\ref{sec:kurtaran} of the paper).
We prove two structural results of the optimal control laws for the delayed
sharing information structure. We compare our results to those conjectured by
Witsenhausen and show that our structural results for $n$-step delay sharing
information structure simplify to that of Witsenhausen for $n=1$; for $n>1$, our
results are different from the result proposed by Kurtaran.
Our structural results do not have the separated nature of
centralized stochastic control: for any given realization of the system
variables till time $t$, the realization of information states at future times
depends on the choice of the control law at time $t$.
However, our second structural result shows that this dependence only propagates
to the next $n-1$ time steps. Thus, the information states from time $t+n-1$
onwards are separated from the choice of control laws before time $t$; they only
depend on the realization of control actions at time $t$. We call this a
\emph{delayed} separation between information states and control laws.
The absence of classical separation rules out the possibility of a classical
dynamic program to find the optimum control laws. However, optimal control laws
can still be found in a sequential manner. Based on the two structural results,
we present two sequential methodologies to find optimal control laws. Unlike
classical dynamic programs, each step in our sequential decomposition involves
optimization over a space of functions instead of the space of control actions.
\subsection {Notation}
Random variables are denoted by upper case letters; their realization by the
corresponding lower case letter. $X_{a:b}$ is a short hand for the vector $(X_a,
X_{a+1}, \dots, X_b)$ while $X^{c:d}$ is a short hand for the vector $(X^c,
X^{c+1}, \dots, X^{d})$. The combined notation $X^{c:d}_{a:b}$ is a short hand
for the vector $(X^j_i : i = a, a+1, \dots, b$, $j = c, c+1, \dots, d)$.
$\PR{\cdot}$ is the probability of an event, $\EXP{\cdot}$ is the expectation of
a random variable. For a collection of functions $\boldsymbol{g}$, we use
$\PR^{\boldsymbol{g}}{\cdot}$ and $\EXP^{\boldsymbol{g}}{\cdot}$ to denote that
the probability measure/expectation depends on the choice of functions in
$\boldsymbol{g}$. $\IND_A(\cdot)$ is the indicator function of a set $A$. For
singleton sets $\{a\}$, we also denote $\IND_{\{a\}}(\cdot)$ by $\IND_a(\cdot)$.
For a finite set $A$, $\PSP{A}$ denotes the space of probability mass functions
on $A$. For convenience of exposition, we will assume all sets have finite
cardinality.
\subsection {Model} \label{sec:PF}
Consider a system consisting of a plant and $K$ controllers with decentralized
information. At time $t$, $t=1,\dots,T$, the state of the plant $X_t$ takes
values in $\ALPHABET X$; the control action $U^k_t$ at station $k$,
$k=1,\dots,K$, takes values in $\ALPHABET U^k$. The initial state $X_0$ of the
plant is a random variable. With time, the plant evolves according to
\begin{equation}\label{eq:dynamics}
X_t = f_t(X_{t-1}, U^{1:K}_t, V_t)
\end{equation}
where $V_t$ is a random variable taking values in $\ALPHABET
V$. $\{V_t; \allowbreak t = 1,\dots, T\}$ is a sequence of independent
random variables that are also independent of $X_0$.
The system has $K$ observation posts. At time $t$, $t=1,\dots, T$, the
observation $Y^k_t$ of post $k$, $k=1,\dots,K$, takes values in $\ALPHABET Y^k$.
These observations are generated according to
\begin{equation}
\label{eq:obs}
Y^k_t = h^k_t(X_{t-1}, W^k_t)
\end{equation}
where $W^k_t$ are random variables taking values in $\ALPHABET W^k$. $\{W^k_t;
\allowbreak t =1, \dots, T; \allowbreak k =1,\dots, K\}$ are independent random
variables that are also independent of $X_0$ and $\{V_t; \allowbreak
t=1,\dots,T\}$.
The system has $n$-step delayed sharing. This means that at time $t$, control
station $k$ observes the current observation $Y^k_t$ of observation post $k$,
the $n$ steps old observations $Y^{1:K}_{t-n}$ of all posts, and the $n$ steps
old actions $U^{1:K}_{t-n}$ of all stations. Each station has perfect recall;
so, it remembers everything that it has seen and done in the past. Thus, at time
$t$, data available at station $k$ can be written as $(\Delta_t, \Lambda^k_t)$,
where \[\Delta_t \DEFINED (Y^{1:K}_{1:t-n}, U^{1:K}_{1:t-n})\] is the data known
to all stations and \[\Lambda^k_t \DEFINED (Y^k_{t-n+1:t}, U^k_{t-n+1:t-1})\] is
the additional data known at station $k$, $k=1,\dots,K$. Let \(\mathcal{D}_t\)
be the space of all possible realizations of \(\Delta_t\); and \(\mathcal{L}^k\)
be the space of all possible realizations of \(\Lambda^k_t\). Station $k$
chooses action $U^k_t$ according to a control law $g^k_t$, i.e.,
\begin{equation}
\label{eq:control}
U^k_t = g^k_t(\Lambda^k_t, \Delta_t).
\end{equation}
The choice of $\boldsymbol{g} = \{g^k_t; \allowbreak k=1,\dots, K; \allowbreak
t=1,\dots,T\}$ is called a \emph{design} or a \emph{control strategy}.
$\ALPHABET G$ denotes the class of all possible designs. At time $t$, a cost
$c_t(X_t, U^1_t, \dots, U^K_t)$ is incurred. The performance
$\mathcal{J}(\boldsymbol g)$ of a design is given by the expected total cost
under it, i.e.,
\begin{equation} \label{eq:cost}
\mathcal{J}(\boldsymbol g)
= \EXP^{\boldsymbol{g}}{\sum_{t=1}^T c_t(X_t, U^{1:K}_t)}
\end{equation}
where the expectation is with respect to the joint measure on all the system
variables induced by the choice of $\boldsymbol{g}$. We consider the following
problem.
\begin{problem}\label{prob:main}
Given the statistics of the primitive random variables $X_0$, $\{V_t;
\allowbreak t=1,\dots,T\}$, $\{W^k_t; \allowbreak k=1,\dots, K; \allowbreak
t=1,\dots,T\}$, the plant functions $\{f_t; \allowbreak t=1,\dots, T\}$, the
observation functions $\{h^k_t; \allowbreak k =1,\dots,K; \allowbreak
t=1,\dots,T\}$, and the cost functions $\{c_t; \allowbreak t=1,\dots,T\}$
choose a design $\boldsymbol g^*$ from $\ALPHABET G$ that minimizes the
expected cost given by~\eqref{eq:cost}.
\end{problem}
\subsection {The structural results} \label{sec:results}
Witsenhausen~\cite{Witsenhausen:1971} asserted the following structural result
for Problem~\ref{prob:main}.
\begin{structure}[Witsenhausen~\cite{Witsenhausen:1971}]
In Problem~\ref{prob:main}, without loss of optimality we can restrict
attention to control strategies of the form
\begin{equation} \label{eq:Wit}
U^k_t = g^k_t(\Lambda^k_t, \PR{X_{t-n} | \Delta_t}).
\end{equation}
\end{structure}
Witsenhausen's result claims that all control stations can ``compress'' the
common information $\Delta_t$ to a sufficient statistic $\PR{X_{t-n} |
\Delta_t}$. Unlike $\Delta_t$, the size of $\PR{X_{t-n} | \Delta_t}$ does not
increase with time.
As mentioned earlier, Witsenhausen asserted this result without a proof. Varaiya
and Walrand~\cite{WalrandVaraiya:1978} proved that the above separation result
is true for $n=1$ but false for $n>1$. Kurtaran~\cite{Kurtaran:1979} proposed
an alternate structural result for $n>1$.
\begin{structure}[Kurtaran~\cite{Kurtaran:1979}]
In Problem~\ref{prob:main}, without loss of optimality we can restrict
attention to control strategies of the form
\begin{equation}
U^k_t = g^k_t\big(Y^k_{t-n+1:t},
\PR^{g^{1:K}_{1:t-1}}{X_{t-n}, U^{1:K}_{t-n+1:t-1} |
\Delta_t}\big).
\end{equation}
\end{structure}
Kurtaran used a different labeling of the time indices, so the
statement of the result in his paper is slightly different from what we have
stated above.
Kurtaran's result claims that all control stations can ``compress'' the common
information $\Delta_t$ to a sufficient statistic $\PR^{g^{1:K}_{1:t-1}}{X_{t-n},
U^{1:K}_{t-n+1:t-1}|\Delta_t}$, whose size does not increase with time.
Kurtaran proved his result for only the terminal time-step and gave an
abbreviated argument for non-terminal time-steps. We believe that his proof is
incomplete for reasons that we will point out in Section~\ref{sec:kurtaran}. In
this paper, we prove two alternative structural results.
\begin{first-structure}[this paper]
In Problem~\ref{prob:main}, without loss of optimality we can restrict
attention to control strategies of the form
\begin{equation} \label{eq:our_result}
U^k_t = g^k_t\big(\Lambda^k_t, \PR^{g^{1:K}_{1:t-1}}{X_{t-1},
\Lambda^{1:K}_t | \Delta_t}\big).
\end{equation}
\end{first-structure}
This result claims that all control stations can ``compress'' the common
information $\Delta_t$ to a sufficient statistic $\PR^{g^{1:K}_{1:t-1}}{X_{t-1},
\Lambda^{1:K}_t | \Delta_t}$, whose size does not increase with time.
\begin{second-structure}[this paper]
In Problem~\ref{prob:main}, without loss of optimality we can restrict
attention to control strategies of the form
\begin{equation} \label{eq:our_result_2}
U^k_t = g^k_t\big(\Lambda^k_t, \PR{X_{t-n}|\Delta_t}, r^{1:K}_t \big).
\end{equation}
where $r^{1:K}_t$ is a collection of partial functions of the previous $n-1$
control laws of each controller,
\begin{equation*}
r^k_t \DEFINED
\{g^k_m(\cdot, Y^k_{m-n+1:t-n}, U^k_{m-n+1:t-n},\Delta_m),
t-n+1\leq m \leq t-1 \},
\end{equation*}
for $k=1,2,\ldots,K$. Observe that $r^k_t$ depends only on the previous $n-1$
control laws ($g^k_{t-n+1:t-1}$) and the realization of $\Delta_t$ (which
consists of $Y^{1:K}_{1:t-n},U^{1:K}_{1:t-n}$). This result claims that the
belief $\PR{X_{t-n}|\Delta_t}$ and the realization of the partial functions
$r^{1:K}_t$ form a sufficient representation of $\Delta_t$ in order to
optimally select the control action at time $t$.
\end{second-structure}
Our structural results cannot be derived from Kurtaran's result and vice-versa.
At present, we are not sure of the correctness of Kurtaran's result. As we
mentioned before, we believe that the proof given by Kurtaran is incomplete. We
have not been able to complete Kurtaran's proof; neither have we been able to
find a counterexample to his result.
Kurtaran's and our structural results differ from those asserted by Witsenhausen
in a fundamental way. The sufficient statistic (also called information state)
$\PR{X_{t-n} | \Delta_t}$ of Witsenhausen's assertion does not depend on the
control strategy. The sufficient statistics
$\PR^{g^{1:K}_{1:t-1}}{X_{t-n}, U^{1:K}_{t-n+1:t-1}|\Delta_t}$ of Kurtaran's
result and $\PR^{g^{1:K}_{1:t-1}}{X_{t-1}, \Lambda^{1:K}_{t}|\Delta_t}$ of our
first result \emph{depend on the control laws used before time $t$}. Thus, for a
given realization of the primitive random variables till time $t$, the
realization of future information states depends on the choice of control laws at
time $t$. On the other hand, in our second structural result, the belief
$\PR{X_{t-n} | \Delta_t}$ is indeed independent of the control strategy, however
information about the previous $n-1$ control laws is still needed in the form of
the partial functions $r^{1:K}_t$. Since the partial functions $r^{1:K}_t$ do
not depend on control laws used before time $t-n+1$, we conclude that the
information state at time $t$ is separated from the choice of control laws
before time $t-n+1$. We call this a delayed separation between information
states and control laws.
The rest of this paper is organized as follows. We prove our first structural
result in Section~\ref{sec:structural_result}. Then, in
Section~\ref{sec:second_str} we derive our second structural result. We discuss
a special case of delayed sharing information structures in
Section~\ref{sec:aicardi}. We discuss Kurtaran's structural result in
Section~\ref{sec:kurtaran} and conclude in Section~\ref{sec:conclusion}.
\section{Proof of the first structural result} \label{sec:structural_result}
In this section, we prove the structural result~\eqref{eq:our_result} for
optimal strategies of the $K$ control stations. For the ease of notation, we
first prove the result for $K=2$, and then show how to extend it for general
$K$.
\subsection{Two Controller system ($K=2$)}
The proof for $K=2$ proceeds as follows:
\begin{enumerate}
\item First, we formulate a centralized stochastic control problem from the
point of view of a coordinator who observes the shared information
\(\Delta_t\), but does not observe the private information $(\Lambda^1_t,
\Lambda^2_t)$ of the two controllers.
\item Next, we argue that any strategy for the coordinator's problem can be
implemented in the original problem and vice versa. Hence, the two problems
are equivalent.
\item Then, we identify states sufficient for input-output mapping for the
coordinator's problem.
\item Finally, we transform the coordinator's problem into a MDP (Markov
decision process), and obtain a structural result for the coordinator's
problem. This structural result is also a structural result for the delayed
sharing information structures due to the equivalence between the two
problems.
\end{enumerate}
Below, we elaborate on each of these stages.
\subsection*{Stage 1}
We consider the following modified problem. In the model described in
Section~\ref{sec:PF}, in addition to the two controllers, there is a coordinator
that knows the common (shared) information $\Delta_t$
available to both controllers at time $t$. At time $t$, the coordinator decides the \emph{partial
functions}
\begin{equation*}
\gamma^k_t : \ALPHABET L^k \mapsto \ALPHABET U^k
\end{equation*}
for each controller $k$, $k=1,2$. The choice of the partial functions at time
$t$ is based on the realization of the common (shared) information and the
partial functions selected before time $t$. These functions map each
controller's \emph{private information} $\Lambda^k_t$ to its control action
$U^k_t$ at time $t$. The coordinator then informs all controllers of all the
partial functions it selected at time $t$. Each controller then uses its
assigned partial function to generate a control action as follows.
\begin{equation}
\label{eq:Control1}
U^k_t = \gamma^k_t(\Lambda^k_t).
\end{equation}
The system dynamics and the cost are same as in the original problem. At next
time step, the coordinator observes the new common observation
\begin{equation} \label{eq:CoordObs}
Z_{t+1} \DEFINED
\{Y^1_{t-n+1}, Y^2_{t-n+1}, U^1_{t-n+1}, U^2_{t-n+1}\}.
\end{equation}
Thus at the next time, the coordinator knows
$\Delta_{t+1} = Z_{t+1} \cup \Delta_t$ together with its choice of all past partial functions, and it selects the next partial
functions for each controller. The system proceeds sequentially in this
manner until time horizon $T$.
In the above formulation, the only decision maker is the coordinator: the
individual controllers simply carry out the necessary evaluations prescribed
by~\eqref{eq:Control1}. At time $t$, the coordinator knows the common (shared)
information $\Delta_t$ and all past partial functions $\gamma^{1}_{1:t-1}$ and
$\gamma^{2}_{1:t-1}$. The coordinator uses a decision rule $\psi_t$ to map this
information to its decision, that is,
\begin{gather}
(\gamma^1_t, \gamma^2_t)
= \psi_t(\Delta_t,\gamma^{1}_{1:t-1},\gamma^{2}_{1:t-1}), \\
\shortintertext{or equivalently,}
\gamma^k_t = \psi^k_t(\Delta_t, \gamma^{1}_{1:t-1},\gamma^{2}_{1:t-1}), \quad
k=1,2.
\end{gather}
The choice of $\boldsymbol\psi = \{\psi_t; \allowbreak t = 1,\dots, T\}$
is called a \emph{coordination strategy}. $\Psi$ denotes the class of
all possible coordination strategies. The performance of a coordinating
strategy is given by the expected total cost under that strategy, that is,
\begin{equation}
\label{eq:cost-coordinator}
\hat{\mathcal{J}}(\boldsymbol\psi) =
\EXP^{\boldsymbol\psi}{\sum_{t=1}^T c_t(X_t, U^1_t, U^2_t) }
\end{equation}
where the expectation is with respect to the joint measure on all the
system variables induced by the choice of $\boldsymbol\psi$. The coordinator has
to solve the following optimization problem.
\begin{problem}[The Coordinator's Optimization Problem]\label{prob:coordinator}
Given the system model of Problem~\ref{prob:main}, choose a
coordination strategy $\boldsymbol\psi^*$ from $\Psi$ that minimizes the
expected cost given by~\eqref{eq:cost-coordinator}.
\end{problem}
\subsection*{Stage 2}
We now show that Problem~\ref{prob:coordinator} is equivalent to
Problem~\ref{prob:main}. Specifically, we will show that any design
$\boldsymbol{g}$ for Problem~\ref{prob:main} can be implemented by the
coordinator in Problem~\ref{prob:coordinator} with the same value of the problem
objective. Conversely, any coordination strategy $\boldsymbol\psi$ in
Problem~\ref{prob:coordinator} can be implemented in Problem~\ref{prob:main}
with the same value of the performance objective.
Any design $\boldsymbol{g}$ for Problem~\ref{prob:main} can be implemented by
the coordinator in Problem~\ref{prob:coordinator} as follows. At time $t$ the
coordinator selects partial functions $(\gamma^1_t, \gamma^2_t)$ using the
common (shared) information $\delta_t$ as follows.
\begin{equation} \label{eq:equiv1}
\gamma^k_t(\cdot) = g^k_t(\cdot, \delta_t)
\BYDEFINITION \psi^k_t(\delta_t) ,
\quad k = 1,2.
\end{equation}
Consider Problems~\ref{prob:main} and~\ref{prob:coordinator}. Use design
$\boldsymbol{g}$ in Problem~\ref{prob:main} and coordination strategy
$\boldsymbol{\psi}$ given by~\eqref{eq:equiv1} in
Problem~\ref{prob:coordinator}. Fix a specific realization of the initial state
$X_0$, the plant disturbance $\{V_t; \allowbreak t=1,\dots,T\}$, and the
observation noise $\{W^1_t,W^2_t; \allowbreak t=1,\dots,T\}$. Then, the choice
of $\boldsymbol{\psi}$ according to~\eqref{eq:equiv1} implies that the
realization of the state $\{X_t; \allowbreak t=1,\dots,T\}$, the observations
$\{Y^1_t, Y^2_t; \allowbreak t=1,\dots,T\}$, and the control actions $\{U^1_t,
U^2_t; \allowbreak t=1,\dots,T\}$ are identical in Problem~\ref{prob:main}
and~\ref{prob:coordinator}. Thus, any design $\boldsymbol{g}$ for
Problem~\ref{prob:main} can be implemented by the coordinator in
Problem~\ref{prob:coordinator} by using a coordination strategy given
by~\eqref{eq:equiv1} and the total expected cost under $\boldsymbol{g}$ in
Problem~\ref{prob:main} is same as the total expected cost under the
coordination strategy given by~\eqref{eq:equiv1} in
Problem~\ref{prob:coordinator}.
\begin{subequations}\label{eq:equiv2}
By a similar argument, any coordination strategy $\boldsymbol{\psi}$
for Problem~\ref{prob:coordinator} can be implemented by the control
stations in Problem~\ref{prob:main} as follows. At time $1$, both
stations know $\delta_1$; so, all of them can compute $\gamma^1_1 =
\psi^1_1(\delta_1)$, $\gamma^2_1 =
\psi^2_1(\delta_1)$. Then station $k$ chooses action
$u^k_1 = \gamma^k_1(\lambda^k_1)$. Thus,
\begin{equation}
g^k_1(\lambda^k_1, \delta_1) = \psi^k_1(\delta_1)(\lambda^k_1),
\quad k = 1,2.
\end{equation}
At time $2$, both stations know $\delta_2$ and $\gamma^1_1, \gamma^2_1$, so both of them can
compute $\gamma^k_2 = \psi^k_2(\delta_2, \gamma^1_1, \gamma^2_1 )$,
$k=1,2$. Then station $k$ chooses action $u^k_2 =
\gamma^k_2(\lambda^k_2)$. Thus,
\begin{equation}
g^k_2(\lambda^k_2, \delta_2) = \psi^k_2(\delta_2,\gamma^1_1,
\gamma^2_1)(\lambda^k_2), \quad k = 1,2.
\end{equation}
Proceeding this way, at time $t$ both stations know $\delta_t$ and $\gamma^1_{1:t-1}$ and $\gamma^2_{1:t-1}$, so both of them can compute
$(\gamma^1_{t}, \gamma^2_{t}) = \psi_t(\delta_t,
\gamma^1_{1:t-1}, \gamma^2_{1:t-1}) $.
Then, station $k$ chooses action $u^k_t = \gamma^k_t(\lambda^k_t)$.
Thus,
\begin{equation}
g^k_t(\lambda^k_t, \delta_t) = \psi^k_t(\delta_t,
\gamma^1_{1:t-1}, \gamma^2_{1:t-1})(\lambda^k_t),
\quad k = 1,2.
\end{equation}
\end{subequations}
Now consider Problems~\ref{prob:coordinator} and~\ref{prob:main}. Use
coordinator strategy $\boldsymbol{\psi}$ in Problem~\ref{prob:coordinator} and
design $\boldsymbol{g}$ given by~\eqref{eq:equiv2} in Problem~\ref{prob:main}.
Fix a specific realization of the initial state $X_0$, the plant disturbance
$\{V_t; \allowbreak t=1,\dots,T\}$, and the observation noise $\{W^1_t, W^2_t;
\allowbreak t=1,\dots,T\}$. Then, the choice of $\boldsymbol{g}$ according
to~\eqref{eq:equiv2} implies that the realization of the state $\{X_t;
\allowbreak t=1,\dots,T\}$, the observations $\{Y^1_t, Y^2_t;\allowbreak
t=1,\dots,T\}$, and the control actions $\{U^1_t, U^2_t; \allowbreak
t=1,\dots,T\}$ are identical in Problem~\ref{prob:coordinator}
and~\ref{prob:main}. Hence, any coordination strategy $\boldsymbol\psi$ for
Problem~\ref{prob:coordinator} can be implemented by the stations in
Problem~\ref{prob:main} by using a design given by~\eqref{eq:equiv2} and the
total expected cost under $\boldsymbol\psi$ in Problem~\ref{prob:coordinator} is
same as the total expected cost under the design given by~\eqref{eq:equiv2} in
Problem~\ref{prob:main}.
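The recursion~\eqref{eq:equiv2} is purely mechanical, as the following minimal sketch illustrates (all names are hypothetical placeholders): since both stations know $\delta_t$ and can replay all past partial functions, each of them computes the same pair $(\gamma^1_t,\gamma^2_t)$ and applies its own component to its private information.
\begin{verbatim}
# Sketch of eq. (eq:equiv2): station k implements a coordination
# strategy psi using only the shared information and its own private
# information. psi[t](delta_t, past_gammas) -> (gamma^1_t, gamma^2_t).
def run_station(k, psi, delta, lam, T):
    gammas, actions = [], []
    for t in range(T):
        gamma = psi[t](delta[t], gammas)      # computable at both stations
        gammas.append(gamma)
        actions.append(gamma[k](lam[k][t]))   # u^k_t = gamma^k_t(lambda^k_t)
    return actions
\end{verbatim}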
Since Problems~\ref{prob:main} and~\ref{prob:coordinator} are equivalent, we
derive structural results for the latter problem. Unlike
Problem~\ref{prob:main}, where we have multiple control stations, the
coordinator is the only decision maker in Problem~\ref{prob:coordinator}.
\subsection*{Stage 3}
We now look at Problem~\ref{prob:coordinator} as a controlled input-output
system from the point of view of the coordinator and identify a state sufficient
for input-output mapping. From the coordinator's viewpoint, the input at time
$t$ has two components: a stochastic input that consists of the plant
disturbance $V_t$ and observation noises $W^1_t,W^2_t$; and a controlled input
that consists of the partial functions $\gamma^1_t, \gamma^2_t$. The output is
the observations $Z_{t+1}$ given by~\eqref{eq:CoordObs}. The cost is given by
$c_t(X_t, U^1_t, U^2_t)$. We want to identify a state sufficient for
input-output mapping for this system.
A variable is a state sufficient for input-output mapping of a control system if
it satisfies the following properties (see~\cite{Witsenhausen:1976}).
\begin{itemize}
\item[P1)] The next state is a function of the current state and the
current inputs.
\item[P2)] The current output is a function of the current state and
the current inputs.
\item[P3)] The instantaneous cost is a function of the current state,
the current control inputs, and the next state.
\end{itemize}
We claim that such a state for Problem~\ref{prob:coordinator} is the
following.
\begin{definition}
For each $t$ define
\begin{equation} \label{eq:state}
S_t \DEFINED (X_{t-1}, \Lambda^1_t, \Lambda^2_t)
\end{equation}
\end{definition}
Next we show that $S_t$, $t=1,2,\ldots,T+1$, satisfy properties (P1)--(P3).
Specifically, we have the following.
\begin{proposition}\label{prop:state}
\strut
\begin{enumerate}
\item There exist functions $\hat f_t$, $t=2,\dots,T$ such that
\begin{equation}
S_{t+1} = \hat f_{t+1}(S_t, V_t, W^1_{t+1}, W^2_{t+1}, \gamma^1_t,
\gamma^2_t).
\end{equation}
\item There exist functions $\hat h_t$, $t=2,\dots,T$ such that
\begin{equation}\label{eq:coordinator-observation}
Z_t = \hat h_t(S_{t-1}).
\end{equation}
\item There exist functions $\hat c_t$, $t=1,\dots,T$ such that
\begin{equation}
c_t(X_t, U^1_t, U^2_t) = \hat c_t(S_t, \gamma^1_t, \gamma^2_t, S_{t+1}).
\end{equation}
\end{enumerate}
\end{proposition}
\begin{proof}
Part~1 is an immediate consequence of the definitions of $S_t$ and
$\Lambda^k_t$, the dynamics of the system given by~\eqref{eq:dynamics}, and
the evaluations carried out by the control stations according
to~\eqref{eq:Control1}. Part~2 is an immediate consequence of the definitions
of state $S_t$, observation $Z_t$, and private information $\Lambda^k_t$.
Part~3 is an immediate consequence of the definition of state and the
evaluations carried out by the control stations according
to~\eqref{eq:Control1}.
\end{proof}
\subsection*{Stage 4}
Proposition~\ref{prop:state} establishes $S_t$ as the state sufficient for
input-output mapping for the coordinator's problem. We now define information
states for the coordinator.
\begin{definition}[Information States]\label{def:info}
For a coordination strategy $\boldsymbol\psi$, define \emph{information states}
$\Pi_t$ as
\begin{equation} \label{eq:define_pi}
\Pi_t(s_t) \DEFINED
\PR^{\boldsymbol\psi}{S_t = s_t | \Delta_t, \gamma^1_{1:t-1},\gamma^2_{1:t-1}}.
\end{equation}
\end{definition}
As shown in Proposition~\ref{prop:state}, the state evolution of $S_t$ depends
on the controlled inputs $(\gamma^1_t, \gamma^2_t)$ and the random noise $(V_t,
W^1_{t+1}, W^2_{t+1})$. This random noise is independent across time.
Consequently, $\Pi_t$ evolves in a controlled Markovian manner as below.
\begin{proposition}\label{prop:info}
For $t=1,\dots, T-1$, there exist functions $F_t$ (which do not depend on the
coordinator's strategy) such that
\begin{equation} \label{eq:info_state_update}
\Pi_{t+1} = F_{t+1}(\Pi_{t}, \gamma^1_{t}, \gamma^2_{t}, Z_{t+1}).
\end{equation}
\end{proposition}
\begin{proof}
See Appendix~\ref{app:info}.
\end{proof}
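For a finite state space, the update $F_{t+1}$ of Proposition~\ref{prop:info} is a standard two-step Bayes filter: first condition $\Pi_t$ on the new common observation $Z_{t+1} = \hat h_{t+1}(S_t)$, then propagate through the controlled transition kernel. A minimal sketch, assuming callables \texttt{kernel} and \texttt{h} derived from the system model (and hence independent of the coordinator's strategy):
\begin{verbatim}
# Sketch of Pi_{t+1} = F_{t+1}(Pi_t, gamma1, gamma2, z).
# kernel(s1, s, gamma1, gamma2) = P(S_{t+1}=s1 | S_t=s, prescriptions);
# h(s) = value of the common observation Z_{t+1} when S_t = s.
def update(pi, gamma1, gamma2, z, states, kernel, h):
    cond = {s: pi[s] * (h(s) == z) for s in states}   # condition on z
    norm = sum(cond.values())
    cond = {s: p / norm for s, p in cond.items()}
    return {s1: sum(kernel(s1, s, gamma1, gamma2) * cond[s] for s in states)
            for s1 in states}                          # propagate
\end{verbatim}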
At $t=1$, since there is no shared information, $\Pi_1$ is simply the
unconditional probability $\PR{S_1} = \PR{X_0,Y^1_1,Y^2_1}$. Thus, $\Pi_1$ is
fixed a priori from the joint distribution of the primitive random variables and
does not depend on the choice of coordinator's strategy $\psi$.
Proposition~\ref{prop:info} shows that at $t=2,\dots,T$, $\Pi_{t}$ depends on
the strategy $\boldsymbol\psi$ only through the choices of $\gamma^1_{1:t-1}$
and $\gamma^2_{1:t-1}$. Moreover, as shown in Proposition~\ref{prop:state}, the
instantaneous cost at time $t$ can be written in terms of the current and next
states $(S_t, S_{t+1})$ and the control inputs $(\gamma^1_t, \gamma^2_t)$.
Combining the above two properties, we get the following:
\begin{proposition}\label{prop:MDP}
The process $\Pi_t$, $t=1,2,\ldots,T$ is a controlled Markov chain with
\(\gamma^1_t,\gamma^2_t\) as the control actions at time $t$, i.e.,
\begin{equation}
\PR{\Pi_{t+1}|\Delta_t, \Pi_{1:t}, \gamma^1_{1:t},\gamma^2_{1:t}} =\PR{\Pi_{t+1}|\Pi_{1:t}, \gamma^1_{1:t},\gamma^2_{1:t}} = \PR{\Pi_{t+1}|\Pi_{t},\gamma^1_{t},\gamma^2_{t}}. \label{eq:Markov_State}
\end{equation}
Furthermore, there exists a deterministic function $C_t$ such that
\begin{equation}\label{eq:MDP_Cost}
\EXP{\hat{c}_t(S_t,\gamma^1_t,\gamma^2_t,S_{t+1})|
\Delta_t,\Pi_{1:t},\gamma^{1}_{1:t},\gamma^{2}_{1:t}}
= C_t(\Pi_t, \gamma^1_t,\gamma^2_t). \label{eq:MDP_Cost_rhs}
\end{equation}
\end{proposition}
\begin{proof}
See Appendix~\ref{app:MDP}.
\end{proof}
The controlled Markov property of the process $\{\Pi_t, \allowbreak
t=1,\dots,T\}$ immediately gives rise to the following structural result.
\begin{theorem}\label{thm:coordinator}
In Problem~\ref{prob:coordinator}, without loss of optimality we can restrict
attention to coordination strategies of the form
\begin{equation}
(\gamma^1_t, \gamma^2_t) = \psi_t(\Pi_t), \quad t = 1, \dots, T.
\end{equation}
\end{theorem}
\begin{proof}
From Proposition \ref{prop:MDP}, we conclude that the optimization problem for
the coordinator is to control the evolution of the controlled Markov process
$\{\Pi_t$, $t=1,2,\ldots,T\}$ by selecting the partial functions $\{\gamma^1_t,
\gamma^2_t$, $t=1,2,\ldots,T\}$ in order to minimize $\sum_{t=1}^{T}\EXP
{C_t(\Pi_t,\gamma^1_t,\gamma^2_t)}$. This is an instance of the well-known
Markov decision problem, for which it is known that the optimal strategy is a
function of the current state. Thus, the structural result follows from Markov
decision theory~\cite{KumarVaraiya:1986}.
\end{proof}
The above result can also be stated in terms of the original problem.
\begin{theorem}[Structural Result] \label{thm:structural_result}
In Problem~\ref{prob:main} with $K=2$, without loss of optimality we can
restrict attention to control strategies of the form
\begin{equation}
U^k_t = g^k_t(\Lambda^k_t, \Pi_t), \quad k=1,2.
\end{equation}
where
\begin{equation}
\Pi_t = \PR^{(g^1_{1:t-1}, g^2_{1:t-1})}
{X_{t-1}, \Lambda^1_t, \Lambda^2_t | \Delta_t}
\end{equation}
where $\Pi_1 = \PR{X_0,Y^1_1,Y^2_1}$ and for
$t=2,\ldots,T$, $\Pi_t$ is evaluated as follows:
\begin{equation}
\Pi_{t+1} = F_{t+1}(\Pi_{t}, g^1_t(\cdot, \Pi_t), g^2_{t}(\cdot, \Pi_t), Z_{t+1})
\end{equation}
\end{theorem}
\begin{proof}
Theorem~\ref{thm:coordinator} established the structure of the optimal
coordination strategy. As we argued in Stage~2, this optimal coordination
strategy can be implemented in Problem~\ref{prob:main} and is optimal for the
objective~\eqref{eq:cost}. At $t=1$, $\Pi_1 =
\PR{X_0,Y^1_1,Y^2_1}$ is known to both controllers and they can use the
optimal coordination strategy to select partial functions according to:
\begin{equation*}
(\gamma^1_1, \gamma^2_1) = \psi_1(\Pi_1)
\end{equation*}
Thus,
\begin{equation}
U^k_1 = \gamma^k_1(\Lambda^k_1)
= \psi^k_1(\Pi_1)(\Lambda^k_1)
\BYDEFINITION g^k_1(\Lambda^k_1,\Pi_1), \quad k=1,2.
\end{equation}
At time instant $t+1$, both controllers know $\Pi_t$ and the common
observations $Z_{t+1} = (Y^1_{t-n+1}, Y^2_{t-n+1}, \allowbreak U^1_{t-n+1},
U^2_{t-n+1})$; they use the partial functions ($g^1_t(\cdot,\Pi_t),
g^2_t(\cdot, \Pi_t)$) in equation \eqref{eq:info_state_update} to evaluate
$\Pi_{t+1}$. The control actions at time $t+1$ are given as:
\begin{align}
U^k_{t+1} = \gamma^k_{t+1}(\Lambda^k_{t+1})
&= \psi_{t+1}(\Pi_{t+1})(\Lambda^k_{t+1}) \notag \\
&\BYDEFINITION g^k_{t+1}(\Lambda^k_{t+1},\Pi_{t+1}), \label{eq:thm2_eq2}
\quad k=1,2.
\end{align}
Moreover, using the design $\boldsymbol g$ defined according to
\eqref{eq:thm2_eq2}, the coordinator's information state $\Pi_t$ can also be
written as:
\begin{align}
\Pi_t &=\PR^{\boldsymbol\psi}{X_{t-1}, \Lambda^1_t, \Lambda^2_t | \Delta_t, \gamma^1_{1:t-1},\gamma^2_{1:t-1}} \nonumber \\
&= \PR^{\boldsymbol g}{X_{t-1}, \Lambda^1_t, \Lambda^2_t |\Delta_t,
g^{1:2}_1(\cdot,\Pi_1),\ldots,g^{1:2}_{t-1}(\cdot,\Pi_{t-1})} \nonumber \\
&= \PR^{(g^1_{1:t-1}, g^2_{1:t-1})}{X_{t-1}, \Lambda^1_t, \Lambda^2_t |
\Delta_t} \label{eq:thm2_eq3}
\end{align}
where we dropped the partial functions from the conditioning terms in
\eqref{eq:thm2_eq3} because under the given control laws $(g^1_{1:t-1},
g^2_{1:t-1})$, the partial functions used from time $1$ to $t-1$ can be
evaluated from $\Delta_t$ (by using Proposition~\ref{prop:info} to evaluate
$\Pi_{1:t-1}$).
\end{proof}
Theorem~\ref{thm:structural_result} establishes the first structural result stated in Section~\ref{sec:results} for $K=2$. In the next subsection, we show how to extend the result to general $K$.
\subsection{Extension to General $K$} \label{sec:extension}
Theorem~\ref{thm:structural_result} for two controllers ($K=2$) can be easily
extended to general $K$ by following the same sequence of arguments as in stages
1 to 4 above. Thus, at time $t$, the coordinator introduced in Stage~1 now
selects partial functions $\gamma^k_t: \mathcal{L}^k \mapsto \mathcal{U}^k$, for
$k=1,2,\ldots,K$. The state sufficient for input-output mapping from the
coordinator's perspective is given as $S_t \DEFINED
(X_{t-1},\Lambda^{1:K}_t)$ and the information state $\Pi_t$ for
the coordinator is
\begin{equation}
\Pi_t(s_t) \DEFINED
\PR^{\boldsymbol\psi}{S_t = s_t | \Delta_t, \gamma^{1:K}_{1:t-1}}.
\end{equation}
Results analogous to Propositions~\ref{prop:state}--\ref{prop:MDP} can now be
used to conclude the structural result of Theorem~\ref{thm:structural_result}
for general~$K$.
\subsection{Sequential Decomposition}
In addition to obtaining the structural result of Theorem~\ref{thm:structural_result},
the coordinator's problem also allows us to write a dynamic program for finding the optimal control
strategies as shown below. We first focus on the two controller case ($K=2$) and then extend the result to general $K$.
\begin{theorem}\label{thm:seq_decomposition}
The optimal coordination strategy can be found by the following dynamic program:
For $t=1,\dots,T$, define the functions $J_{t} : \PSP{\ALPHABET S}
\mapsto \reals$ as follows. For $\pi \in \PSP{\ALPHABET S}$ let
\begin{equation}
J_{T}(\pi) = \inf _{\tilde\gamma^1,\tilde\gamma^2}
\EXP{C_T(\Pi_T,\gamma^1_T,\gamma^2_T) |
\Pi_T=\pi,\gamma^1_T=\tilde\gamma^1, \gamma^2_T=\tilde\gamma^2}.
\end{equation}
For $t=1,\dots,T-1$, and $\pi \in \PSP{\ALPHABET S}$ let
\begin{equation}
J_{t}(\pi) = \inf_{\tilde\gamma^1,\tilde\gamma^2}
\EXP{C_t(\Pi_t,\gamma^1_t,\gamma^2_t)+ J_{t+1}(\Pi_{t+1}) |
\Pi_t=\pi, \gamma^1_t=\tilde\gamma^1, \gamma^2_t=\tilde\gamma^2 }.
\end{equation}
The arg inf $(\gamma^{*,1}_t, \gamma^{*,2}_t)$ in the RHS of $J_t(\pi)$
is the optimal action for the coordinator at time $t$ when $\Pi_t = \pi$.
Thus,
\begin{equation*}
(\gamma^{*,1}_t, \gamma^{*,2}_t) = \psi^*_t(\pi_t)
\end{equation*}
The corresponding control strategy for Problem~\ref{prob:main}, given
by~\eqref{eq:equiv2} is optimal for Problem~\ref{prob:main}.
\end{theorem}
\begin{proof}
As in Theorem~\ref{thm:coordinator}, we use the fact that the coordinator's
optimization problem can be viewed as a Markov decision problem with $\Pi_t$
as the state of the Markov process. The dynamic program follows from standard
results in Markov decision theory~\cite{KumarVaraiya:1986}. The optimality of
the corresponding control strategy for Problem~\ref{prob:main} follows from
the equivalence between the two problems.
\end{proof}
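To emphasise that each step of this dynamic program optimises over a space of functions rather than over control actions, the following minimal sketch enumerates all prescription pairs by brute force. The callables \texttt{C} and \texttt{next\_beliefs} are hypothetical stand-ins for $C_t$ and for the update~\eqref{eq:info_state_update} together with the distribution of $Z_{t+1}$; the finite grid \texttt{beliefs} is assumed closed under the update (in practice, a discretisation).
\begin{verbatim}
# Sketch of the backward induction above. Each "action" is a pair of
# prescriptions, i.e. functions from private-information realisations
# (L1, L2) to control actions (U1, U2).
import itertools

def all_prescriptions(L, U):
    for values in itertools.product(U, repeat=len(L)):
        yield dict(zip(L, values))        # one function L -> U

def solve(T, beliefs, L1, L2, U1, U2, C, next_beliefs):
    J = {pi: 0.0 for pi in beliefs}       # J_{T+1} = 0
    policy = {}
    for t in reversed(range(T)):
        Jt = {}
        for pi in beliefs:
            best = None
            for g1 in all_prescriptions(L1, U1):
                for g2 in all_prescriptions(L2, U2):
                    q = C(t, pi, g1, g2) + sum(
                        p * J[pn] for p, pn in next_beliefs(pi, g1, g2))
                    if best is None or q < best[0]:
                        best = (q, (g1, g2))
            Jt[pi], policy[t, pi] = best
        J = Jt
    return J, policy
\end{verbatim}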
The dynamic program of Theorem~\ref{thm:seq_decomposition} can be extended to
general $K$ in a manner similar to Section~\ref{sec:extension}.
\subsection{Computational Aspects}
In the dynamic program for the coordinator in
Theorem~\ref{thm:seq_decomposition}, the value functions at each time are
functions defined on the continuous space $\PSP{\ALPHABET S}$, whereas the
minimization at each time step is over the finite set of functions from the
space of realizations of the private information of controllers
($\mathcal{L}^k$, $k=1,2$) to the space of control actions ($\mathcal{U}^k$,
$k=1,2$). While dynamic programs with continuous state space can be hard to
solve, we note that our dynamic program resembles the dynamic program for
partially observable Markov decision problems (POMDPs). In particular, just as
in a POMDP, the value function at time $T$ is piecewise linear in $\Pi_{T}$,
and by standard backward recursion it can be shown that the value function at
time $t$ is a piecewise linear and concave function of $\Pi_t$. (See
Appendix~\ref{app:convex}). Indeed, the coordinator's problem can be viewed as a
POMDP, with $S_t$ as the underlying partially observed state and the belief
$\Pi_t$ as the information state of the POMDP. The characterization of value
functions as piecewise linear and concave is utilized to find computationally
efficient algorithms for POMDPs. Such algorithmic solutions to general POMDPs
are well-studied and can be employed here. We refer the reader to
\cite{Zhang:2009} and references therein for a review of algorithms to solve
POMDPs.
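To make the piecewise-linear, concave structure concrete, a generic enumeration-based backup for a finite-state POMDP with cost minimisation can be sketched as follows; mapping the coordinator's problem onto it amounts to taking prescription pairs as the ``actions'' and $S_t$ as the hidden state. The arrays \texttt{Tmat}, \texttt{Omat} and \texttt{cost} are hypothetical model inputs, and no pruning of dominated vectors is attempted.
\begin{verbatim}
# Exact one-step POMDP backup (enumeration, no pruning): the value
# function is represented as J_t(pi) = min over alpha of <alpha, pi>,
# a piecewise linear and concave function of the belief pi.
import itertools
import numpy as np

def backup(alphas, Tmat, Omat, cost):
    # Tmat[a][s, s1] = P(s1|s, a); Omat[a][s1, z] = P(z|s1, a);
    # cost[a][s] = expected stage cost; alphas represent J_{t+1}.
    new = []
    for a in range(len(Tmat)):
        G = [[Tmat[a] @ (Omat[a][:, z] * al) for al in alphas]
             for z in range(Omat[a].shape[1])]
        for choice in itertools.product(*G):   # one alpha vector per z
            new.append(cost[a] + sum(choice))
    return new

def J(pi, alphas):
    return min(float(al @ pi) for al in alphas)
\end{verbatim}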
\subsection{One-step Delay}
We now focus on the one-step delayed sharing information structure, i.e., when
$n=1$. For this case, the structural result~\eqref{eq:Wit} asserted by
Witsenhausen is correct~\cite{WalrandVaraiya:1978}. At first glance, that
structural result looks different from our structural
result~\eqref{eq:our_result} for $n=1$. In this section, we show that for $n=1$,
these two structural results are equivalent.
As before, we consider the two-controller system ($K=2$). When delay $n=1$, we
have
\begin{gather*}
\Delta_t = (Y^{1}_{1:t-1},Y^{2}_{1:t-1},U^{1}_{1:t-1},U^{2}_{1:t-1}), \\
\Lambda^1_t = (Y^1_t), \quad \Lambda^2_t = (Y^2_t), \\
\shortintertext{and}
Z_{t+1} = (Y^1_t,Y^2_t,U^1_t,U^2_t).
\end{gather*}
The result of Theorem~\ref{thm:structural_result} can now be restated for this
case as follows:
\begin{corollary}
In Problem~\ref{prob:main} with $K=2$ and $n=1$, without loss of optimality we
can restrict attention to control strategies of the form:
\begin{equation} \label{eq:one_step_result_1}
U^k_t = g^k_t(Y^k_t, \Pi_t), \quad k=1,2.
\end{equation}
where
\begin{equation} \label{eq:first_one_step_result}
\Pi_t \DEFINED
\PR^{(g^1_{1:t-1}, g^2_{1:t-1})}{X_{t-1}, Y^1_t, Y^2_t| \Delta_t}
\end{equation}
\end{corollary}
We can now compare our result for one-step delay with the structural
result~\eqref{eq:Wit}, asserted in~\cite{Witsenhausen:1971} and
proved in \cite{WalrandVaraiya:1978}. For $n=1$, this result states that
without loss of optimality, we can restrict attention to control laws of the
form:
\begin{equation} \label{eq:WV}
U^k_t = g^k_t(Y^k_t, \PR{X_{t-1}|\Delta_t}), \quad k = 1,2.
\end{equation}
The above structural result can be recovered
from~\eqref{eq:first_one_step_result} by observing that there is a one-to-one
correspondence between $\Pi_t$ and the belief $\PR{X_{t-1}|\Delta_t}$. We first
note that
\begin{align}
\Pi_t &= \PR^{(g^1_{1:t-1}, g^2_{1:t-1})}{X_{t-1}, Y^1_t, Y^2_t| \Delta_t} \notag \\
&= \PR{Y^1_t|X_{t-1}}\cdot\PR{Y^2_t|X_{t-1}}
\cdot
\PR^{(g^1_{1:t-1}, g^2_{1:t-1})}{X_{t-1}| \Delta_t}
\end{align}
As pointed out in~\cite{Witsenhausen:1971, WalrandVaraiya:1978} (and
proved later in this paper in Proposition~\ref{prop:equiv_info}), the last
probability does not depend on the functions $(g^1_{1:t-1}, g^2_{1:t-1})$.
Therefore,
\begin{equation} \label{eq:one_step_eqn}
\Pi_t = \PR{Y^1_t|X_{t-1}}\cdot\PR{Y^2_t|X_{t-1}}\cdot\PR{X_{t-1}| \Delta_t}
\end{equation}
Clearly, the belief $\PR{X_{t-1}|\Delta_t}$ is a marginal of $\Pi_t$ and
therefore can be evaluated from $\Pi_t$. Moreover, given the belief
$\PR{X_{t-1}|\Delta_t}$, one can evaluate $\Pi_t$ using equation
\eqref{eq:one_step_eqn}. This one-to-one correspondence between $\Pi_t$ and
$\PR{X_{t-1} | \Delta_t}$ means that the structural result proposed in this
paper for $n=1$ is effectively equivalent to the one proved
in~\cite{WalrandVaraiya:1978}.
\section{Proof of the second structural result} \label{sec:second_str}
In this section we prove the second structural result~\eqref{eq:our_result_2}.
As in Section~\ref{sec:structural_result}, we prove the result for $K=2$ and
then show how to extend it for general $K$. To prove the result, we reconsider
the coordinator's problem at Stage~3 of Section~\ref{sec:structural_result} and
present an alternative characterization for the coordinator's optimal strategy
in Problem~\ref{prob:coordinator}. The main idea in this section is to use the
dynamics of the system evolution and the observation equations (equations
\eqref{eq:dynamics} and \eqref{eq:obs}) to find an equivalent representation of
the coordinator's information state.
We also contrast this information state with that proposed by Witsenhausen.
\subsection{Two controller system ($K=2$)}
Consider the coordinator's problem with $K=2$. Recall that $\gamma^1_t$ and
$\gamma^2_t$ are the coordinator's actions at time $t$. $\gamma^k_t$ maps the
private information of the $k^{th}$ controller ($Y^k_{t-n+1:t},U^k_{t-n+1:t-1}$)
to its action $U^k_t$. In order to find an alternate characterization of
coordinator's optimal strategy, we need the following definitions:
\begin{definition}\label{def:equiv_info}
For a coordination strategy $\boldsymbol\psi$, and for $t=1,2,\ldots,T$ we
define the following:
\begin{enumerate}
\item $\Theta_t \DEFINED \PR{X_{t-n}|\Delta_t}$
\item For $k=1,2$, define the following partial functions of $\gamma^k_m$
\begin{equation}
r^k_{m,t}(\cdot) \DEFINED
\gamma^k_m(\cdot, Y^{k}_{m-n+1:t-n}, U^{k}_{m-n+1:t-n}),
\quad m= t-n+1,t-n+2, \ldots, t-1 \label{eq:define_r_1}
\end{equation}
Since $\gamma^k_m$ is a function that maps
($Y^k_{m-n+1:m},U^k_{m-n+1:m-1}$) to $U^k_m$, $r^k_{m,t}(\cdot)$ is a
function that maps ($Y^k_{t-n+1:m},U^k_{t-n+1:m-1}$) to $U^k_m$. We
define a collection of these partial functions as follows:
\begin{equation}
r^{k}_t \DEFINED (r^k_{m,t}, m=t-n+1,t-n+2,\ldots,t-1)
\label{eq:define_r_2}
\end{equation}
Note that for $n=1$, $r^k_t$ is empty.
\end{enumerate}
\end{definition}
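In programming terms, $r^k_{m,t}$ is simply $\gamma^k_m$ with the arguments already contained in $\Delta_t$ frozen at their realised values, i.e.\ a curried function. A minimal sketch, with a hypothetical argument layout:
\begin{verbatim}
# Sketch of eq. (eq:define_r_1): freeze the stale arguments of gamma^k_m
# (known from Delta_t) so that r^k_{m,t} maps only the fresh private
# information to the action U^k_m.
def make_r(gamma_km, y_stale, u_stale):
    # gamma_km(y_fresh, u_fresh, y_stale, u_stale) -> U^k_m
    return lambda y_fresh, u_fresh: gamma_km(y_fresh, u_fresh,
                                             y_stale, u_stale)
\end{verbatim}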
We need the following results to address the coordinator's problem:
\begin{proposition}\label{prop:equiv_info}
\begin{enumerate}
\item For $t=1,\dots, T-1$, there exist functions $Q_t,Q^k_t$, $k=1,2$,
(which do not depend on the coordinator's strategy) such that
\begin{align} \label{eq:info_state_update_2}
\Theta_{t+1} & = Q_t(\Theta_t,Z_{t+1}) \notag\\
r^{k}_{t+1} &= Q^k_t(r^{k}_t,Z_{t+1},\gamma^k_t)
\end{align}
\item The coordinator's information state $\Pi_t$ is a function of
$(\Theta_t,r^1_t,r^2_t)$. Consequently, for $t=1,\dots, T$, there exist
functions $\hat{C}_t$ (which do not depend on the coordinator's strategy)
such that
\begin{equation} \label{eq:info_state_update_3}
\EXP{\hat{c}_t(S_t,\gamma^1_t,\gamma^2_t,S_{t+1})|
\Delta_t,\Pi_{1:t},\gamma^{1}_{1:t},\gamma^{2}_{1:t}}
= \hat{C}_t(\Theta_t,r^1_t,r^2_t,\gamma^1_t,\gamma^2_t)
\end{equation}
\item The process $(\Theta_t,r^1_t,r^2_t)$, $t=1,2,\ldots,T$ is a controlled
Markov chain with \(\gamma^1_t,\gamma^2_t\) as the control actions at time
$t$, i.e.,
\begin{align}
&\PR{\Theta_{t+1},r^1_{t+1},r^2_{t+1}|
\Delta_t,\Theta_{1:t},r^1_{1:t},r^2_{1:t},
\gamma^1_{1:t},\gamma^2_{1:t}} \notag \\
&\quad=
\PR{\Theta_{t+1},r^1_{t+1},r^2_{t+1}|\Theta_{1:t},r^1_{1:t},r^2_{1:t},
\gamma^1_{1:t},\gamma^2_{1:t}} \notag \\
&\quad=
\PR{\Theta_{t+1},r^1_{t+1},r^2_{t+1}|
\Theta_{t},r^1_{t},r^2_{t},\gamma^1_{t},\gamma^2_{t}}.
\label{eq:Markov_State_2}
\end{align}
\end{enumerate}
\end{proposition}
\begin{proof}
See Appendix~\ref{proof:equiv_info}.
\end{proof}
At $t=1$, since there is no sharing of information, $\Theta_1$ is simply the
unconditioned probability $\PR{X_0}$. Thus, $\Theta_1$ is fixed a priori from
the joint distribution of the primitive random variables and does not depend on
the choice of the coordinator's strategy $\psi$. Proposition~\ref{prop:equiv_info} shows that the
update of $\Theta_t$ depends only on $Z_{t+1}$ and not on the coordinator's
strategy. Consequently, the belief $\Theta_t$ depends only on the distribution
of the primitive random variables and the realizations of $Z_{1:t}$. We can now
show that the coordinator's optimization problem can be viewed as an MDP with
$(\Theta_t,r^1_t,r^2_t)$, $t=1,2,\ldots,T$ as the underlying Markov process.
\begin{theorem} \label{thm:equiv_info}
$(\Theta_t,r^1_t,r^2_t)$ is an information state for the coordinator. That is,
there is an optimal coordination strategy of the form:
\begin{equation}
(\gamma^1_t, \gamma^2_t) =
\psi_t(\Theta_t,r^1_t,r^2_t), \quad t = 1, \dots, T.
\end{equation}
Moreover, this optimal coordination strategy can be found by the following
dynamic program:
\begin{equation}
J_{T}(\theta, \tilde{r}^1,\tilde{r}^2) =
\inf _{\tilde\gamma^1,\tilde\gamma^2}
\EXP{\hat{C}_T(\Theta_T,r^1_T,r^2_T,\gamma^1_T,\gamma^2_T)|
\Theta_T=\theta, r^1_T=\tilde{r}^1,r^2_T=\tilde{r}^2,
\gamma^1_T=\tilde\gamma^1,\gamma^2_T=\tilde\gamma^2}.
\end{equation}
For $t=1,\dots,T-1$, let
\begin{equation} \label{eq:equiv_info_dp}
J_{t}(\theta, \tilde{r}^1,\tilde{r}^2) =
\inf_{\tilde\gamma^1,\tilde\gamma^2}
\EXP{\hat{C}_t(\Theta_t,r^1_t,r^2_t,\gamma^1_t,\gamma^2_t) +
J_{t+1}(\Theta_{t+1}, r^1_{t+1},r^2_{t+1}) |
\Theta_t=\theta,
\begin{array}{l}
r^1_t=\tilde{r}^1,r^2_t=\tilde{r}^2,\\
\gamma^1_t=\tilde\gamma^1,\gamma^2_t=\tilde\gamma^2
\end{array}}.
\end{equation}
where $\theta \in \PSP{\mathcal{X}}$, and $\tilde{r}^1,\tilde{r}^2$ are
realizations of partial functions defined in~\eqref{eq:define_r_1}
and~\eqref{eq:define_r_2}. The arg inf $(\gamma^{*,1}_t, \gamma^{*,2}_t)$ in
the RHS of \eqref{eq:equiv_info_dp} is the optimal action for the coordinator
at time $t$ when $(\Theta_t,r^1_t,r^2_t) = (\theta, \tilde{r}^1,\tilde{r}^2)$.
Thus,
\begin{equation*}
(\gamma^{*,1}_t, \gamma^{*,2}_t) = \psi^*_t(\Theta_t,r^1_t,r^2_t)
\end{equation*}
The corresponding control strategy for Problem~\ref{prob:main}, given
by~\eqref{eq:equiv2} is optimal for Problem~\ref{prob:main}.
\end{theorem}
\begin{proof}
Proposition~\ref{prop:equiv_info} implies that the coordinator's optimization
problem can be viewed as an MDP with $(\Theta_t,r^1_t,r^2_t)$,
$t=1,2,\ldots,T$ as the underlying Markov process and
$\hat{C}_t(\Theta_t,r^1_t,r^2_t,\gamma^1_t,\gamma^2_t)$ as the instantaneous
cost. The MDP formulation implies the result of the theorem.
\end{proof}
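To make the backward induction concrete, the following minimal sketch (in Python, assuming all alphabets are finite so that information states and prescriptions can be enumerated) implements the dynamic program of Theorem~\ref{thm:equiv_info}. The names \texttt{states}, \texttt{gammas}, \texttt{P}, and \texttt{C\_hat} are illustrative placeholders, standing for the realizations of $(\Theta_t,r^1_t,r^2_t)$, the candidate prescription pairs, the controlled transition kernel of~\eqref{eq:Markov_State_2}, and the expected instantaneous cost of~\eqref{eq:info_state_update_3}, respectively.
\begin{verbatim}
# A minimal sketch of the coordinator's backward dynamic program,
# assuming finite, time-invariant alphabets; names are placeholders.
def solve_coordinator_dp(T, states, gammas, P, C_hat):
    # states: hashable realizations of (theta, r1, r2)
    # gammas: candidate prescription pairs (gamma1, gamma2)
    # P(s, g): dict {next_state: probability}
    # C_hat(s, g): expected instantaneous cost
    J = {s: 0.0 for s in states}      # terminal condition J_{T+1} = 0
    policy = {}
    for t in range(T, 0, -1):         # backward induction t = T, ..., 1
        J_new = {}
        for s in states:
            best_val, best_g = float("inf"), None
            for g in gammas:
                val = C_hat(s, g) + sum(p * J[sp]
                                        for sp, p in P(s, g).items())
                if val < best_val:
                    best_val, best_g = val, g
            J_new[s] = best_val
            policy[(t, s)] = best_g   # optimal (gamma^1_t, gamma^2_t)
        J = J_new
    return policy
\end{verbatim}
The returned map plays the role of $\psi^*_t$; a control strategy for Problem~\ref{prob:main} is then recovered via~\eqref{eq:equiv2}.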
The following result follows from Theorem~\ref{thm:equiv_info}.
\begin{theorem}[Second Structural Result] \label{thm:second_structural_result}
In Problem~\ref{prob:main} with $K=2$, without loss of optimality we can
restrict attention to coordination strategies of the form
\begin{equation}
U^k_t = g^k_t(\Lambda^k_t, \Theta_t,r^1_t,r^2_t), \quad k=1,2.
\end{equation}
where
\begin{equation}
\Theta_t = \PR{X_{t-n}| \Delta_t}
\end{equation}
and
\begin{equation}
r^k_t = \{g^k_m(\cdot, Y^k_{m-n+1:t-n}, U^k_{m-n+1:t-n},\Delta_m) : t-n+1\leq m \leq t-1 \}
\end{equation}
\end{theorem}
\begin{proof}
As in Theorem~\ref{thm:structural_result}, equations~\eqref{eq:equiv2} can be
used to identify an optimal control strategy for each controller from the
optimal coordination strategy given in Theorem~\ref{thm:equiv_info}.
\end{proof}
Theorem~\ref{thm:equiv_info} and Theorem~\ref{thm:second_structural_result} can
be easily extended for $K$ controllers by identifying $(\Theta_t,r^{1:K}_t)$ as
the information state for the coordinator.
\subsection{Comparison to Witsenhausen's Result}
We now compare the result of Theorem~\ref{thm:equiv_info} to Witsenhausen's
conjecture which states that there exist optimal control strategies of the form:
\begin{equation} \label{eq:Wit_2}
U^k_t = g^k_t(\Lambda^k_t, \PR{X_{t-n} | \Delta_t}).
\end{equation}
Recall that Witsenhausen's conjecture is true for $n=1$ but false for $n>1$.
Therefore, we consider the cases $n=1$ and $n>1$ separately:
\subsubsection*{Delay $n=1$}
For a two-controller system with $n=1$, we have
\begin{gather*}
\Delta_t = (Y^{1}_{1:t-1},Y^{2}_{1:t-1},U^{1}_{1:t-1},U^{2}_{1:t-1}), \\
\Lambda^1_t = (Y^1_t), \quad \Lambda^2_t = (Y^2_t), \\
\shortintertext{and}
r^1_t = \emptyset, \quad r^2_t = \emptyset
\end{gather*}
Therefore, for $n=1$, Theorem~\ref{thm:second_structural_result} implies that
there exist optimal control strategies of the form:
\begin{equation}
U^k_t = g^k_t(\Lambda^k_t, \PR{X_{t-n}|\Delta_t}), \quad k=1,2.
\label{eq:one_step_result}
\end{equation}
Equation \eqref{eq:one_step_result} is the same as equation~\eqref{eq:Wit_2} for
$n=1$. Thus, for $n=1$, the result of Theorem~\ref{thm:equiv_info} coincides
with Witsenhausen's conjecture which was proved in~\cite{WalrandVaraiya:1978}.
\subsubsection*{Delay $n>1$}
Witsenhausen's conjecture implied that the controller $k$ at time $t$ can choose
its action based only on the knowledge of $\Lambda^k_t$ and
$\PR{X_{t-n}|\Delta_t}$, without any dependence on the choice of previous
control laws ($g^{1:2}_{1:t-1}$). In other words, the argument of the control
law $g^k_t$ (that is, the information state at time $t$) is separated from
$g^{1:2}_{1:t-1}$. However, as Theorem~\ref{thm:second_structural_result} shows,
such a separation is not true because of the presence of the collection of
partial functions $r^1_t,r^2_t$ in the argument of the optimal control law at
time $t$. These partial functions depend on the choice of previous $n-1$ control
laws. Thus, the argument of $g^k_t$ depends on the choice of
$g^{1:2}_{t-n+1:t-1}$. One may argue that
Theorem~\ref{thm:second_structural_result} can be viewed as a \emph{delayed or
partial} separation since the information state for the control law $g^k_t$ is
separated from the choice of control laws before time $t-n+1$.
Witsenhausen's conjecture implied that controllers employ common information
only to form a belief on the state $X_{t-n}$; the controllers do not need to use
the common information to guess each other's behavior from $t-n+1$ to the
current time $t$. Our result disproves this statement. We show that in addition
to forming the belief on $X_{t-n}$, each agent should use the common information
to predict the actions of other agents by means of the partial functions
$r^1_t,r^2_t$.
\section{A Special Case of Delayed Sharing Information Structure}
\label{sec:aicardi}
Many decentralized systems consist of coupled subsystems, where each subsystem
has a controller that perfectly observes the state of the subsystem. If all
controllers can exchange their observations and actions with a delay of $n$
steps, then the system is a special case of the $n$-step delayed sharing
information structure with the following assumptions:
\begin{enumerate}
\item \emph{Assumption 1:} At time $t=1,\dots,T$, the state of the system is
given as the vector $X_t \DEFINED (X_t^{1:K})$, where $X^i_t$ is the state
of subsystem $i$.
\item \emph{Assumption 2:} The observation equation of the $k^{th}$
controller is given as:
\begin{equation}
Y^k_t = X^k_t
\end{equation}
\end{enumerate}
This model is the same as the model considered in \cite{Aicardi:1987}. Clearly,
the first structural result and the sequential decomposition of
Section~\ref{sec:structural_result} apply here as well with the observations
$Y^k_t$ being replaced by $X^k_t$. Our second structural result simplifies when
specialized to this model. Observe that in this model
\begin{align}
\Delta_t = (Y^{1:K}_{1:t-n},U^{1:K}_{1:t-n}) = (X_{1:t-n},U^{1:K}_{1:t-n})
\end{align}
and therefore the belief,
\begin{align}
\Theta_t = \PR{X_{t-n}|\Delta_t}
\end{align}
is $1$ for the true realization of $X_{t-n}$ and $0$ otherwise. The result of
Theorem~\ref{thm:equiv_info} can now be restated for this case as follows:
\begin{corollary}
In Problem~\ref{prob:main} with assumptions~1 and~2, there is an optimal
coordination strategy of the form:
\begin{equation}
(\gamma^1_t, \gamma^2_t) = \psi_t(X_{t-n},r^1_t,r^2_t),
\quad t = 1, \dots, T.
\end{equation}
Moreover, this optimal coordination strategy can be found by the following
dynamic program:
\begin{equation}
J_{T}(x, \tilde{r}^1,\tilde{r}^2) =
\inf _{\tilde\gamma^1,\tilde\gamma^2}
\EXP{\hat{C}_T(X_{T-n},r^1_T,r^2_T,\gamma^1_T,\gamma^2_T)|
X_{T-n}=x, r^1_T=\tilde{r}^1,r^2_T=\tilde{r}^2,
\gamma^1_T=\tilde\gamma^1,\gamma^2_T=\tilde\gamma^2}.
\end{equation}
For $t=1,\dots,T-1$, let
\begin{equation}
J_{t}(x, \tilde{r}^1,\tilde{r}^2) =
\inf_{\tilde\gamma^1,\tilde\gamma^2}
\EXP{\hat{C}_t(X_{t-n},r^1_t,r^2_t,\gamma^1_t,\gamma^2_t) +
J_{t+1}(X_{t-n+1}, r^1_{t+1},r^2_{t+1}) |
\begin{array}{l}
X_{t-n}=x,\\r^1_t=\tilde{r}^1,r^2_t=\tilde{r}^2,\\
\gamma^1_t=\tilde\gamma^1,\gamma^2_t=\tilde\gamma^2
\end{array}}.
\end{equation}
\end{corollary}
We note that the structural result and the sequential decomposition in the
corollary above are analogous to Theorem 1 of \cite{Aicardi:1987}.
\section{Kurtaran's Separation Result} \label{sec:kurtaran}
In this section, we focus on the structural result proposed by
Kurtaran~\cite{Kurtaran:1979}. We restrict to the two controller system ($K=2$)
and delay $n=2$. For this case, we have
\begin{gather*}
\Delta_t = (Y^{1}_{1:t-2},Y^{2}_{1:t-2},U^{1}_{1:t-2},U^{2}_{1:t-2}), \\
\Lambda^1_t = (Y^1_t, Y^1_{t-1}, U^1_{t-1}), \quad \Lambda^2_t = (Y^2_t,
Y^2_{t-1}, U^2_{t-1}), \\
\shortintertext{and}
Z_{t+1} = (Y^1_{t-1},Y^2_{t-1},U^1_{t-1},U^2_{t-1}).
\end{gather*}
Kurtaran's structural result for this case states that without loss of
optimality we can restrict attention to control strategies of the form:
\begin{equation}
U^k_t = g^k_t(\Lambda^k_t,\Phi_t), \quad k=1,2,
\end{equation}
where
\[ \Phi_t \DEFINED \PR^{\boldsymbol g}{X_{t-2}, U^1_{t-1},U^2_{t-1}|\Delta_t}.
\]
Kurtaran~\cite{Kurtaran:1979} proved this result for the terminal time-step~$T$
and simply stated that the result for $t=1,\dots,T-1$ can be established by the
dynamic programming argument given in~\cite{Kurtaran:1976}. We believe that this
is not the case.
In the dynamic programming argument in~\cite{Kurtaran:1976}, a critical step is
the update of the information state $\Phi_t$, which is given
by~\cite[Eq~(30)]{Kurtaran:1976}. For the result presented
in~\cite{Kurtaran:1979}, the corresponding equation is
\begin{equation}\label{eq:wrong_update}
\Phi_{t+1} = F_t(\Phi_t,Y^1_{t-1},Y^2_{t-1},U^1_{t-1},U^2_{t-1}).
\end{equation}
We believe that such an update equation cannot be established.
To see the difficulty in establishing~\eqref{eq:wrong_update}, let us follow an
argument similar to the proof of~\cite[Eq~(30)]{Kurtaran:1976} given
in~\cite[Appendix~B]{Kurtaran:1976}. For a fixed strategy $\boldsymbol{g}$, and
a realization $\delta_{t+1}$ of $\Delta_{t+1}$, the realization $\varphi_{t+1}$
of $\Phi_{t+1}$ is given by
\begin{align}
\varphi_{t+1} &= \PR{x_{t-1}, u^1_{t},u^2_{t}|\delta_{t+1}} \notag \\
&= \PR{x_{t-1},u^1_{t},u^2_{t}|\delta_t, y^1_{t-1},y^2_{t-1},u^1_{t-1},u^2_{t-1}} \notag \\
  &= \frac {\PR{x_{t-1},u^1_{t},u^2_{t},y^1_{t-1},y^2_{t-1},u^1_{t-1},u^2_{t-1}|\delta_t}}
     {\sum\limits_{(x',a^1, a^2) \in \ALPHABET X \times
         \ALPHABET U^1 \times \ALPHABET U^2}
      \PR{X_{t-1} = x', U^1_t = a^1, U^2_t = a^2,
          y^1_{t-1}, y^2_{t-1}, u^1_{t-1}, u^2_{t-1} | \delta_t}}
\end{align}
The numerator can be expressed as:
\begin{align}
\hskip 2em & \hskip -2em
\PR{x_{t-1},u^1_{t},u^2_{t},y^1_{t-1},y^2_{t-1},u^1_{t-1},u^2_{t-1}|\delta_t} \notag \\
\displaybreak[2]
&= \smashoperator[l]{\sum_{(x_{t-2},y^1_t,y^2_t) \in \ALPHABET X \times
\ALPHABET Y^1 \times \ALPHABET Y^2}}
      \PR{x_{t-1},u^1_{t},u^2_{t},y^1_{t-1},y^2_{t-1},
      u^1_{t-1},u^2_{t-1},x_{t-2},y^1_t,y^2_t|\delta_t}
\notag \\
&= \smashoperator[l]{\sum_{(x_{t-2},y^1_t,y^2_t) \in \ALPHABET X \times
\ALPHABET Y^1 \times \ALPHABET Y^2}}
\IND_{g^1_t(\delta_t,u^1_{t-1},y^1_{t-1},y^1_t)}[u^1_t]
\cdot
\IND_{g^2_t(\delta_t,u^2_{t-1},y^2_{t-1},y^2_t)}[u^2_t]
\cdot
\PR{y^1_t|x_{t-1}} \cdot \PR{y^2_t|x_{t-1}}\notag \\
& \quad \cdot \PR{x_{t-1}|x_{t-2},u^1_{t-1},u^2_{t-1}}
\cdot
\IND_{g^1_{t-1}(\delta_{t-1},u^1_{t-2},y^1_{t-2},y^1_{t-1})}[u^1_{t-1}]
\cdot
      \IND_{g^2_{t-1}(\delta_{t-1},u^2_{t-2},y^2_{t-2},y^2_{t-1})}[u^2_{t-1}] \notag
\\
& \quad \cdot \PR{y^1_{t-1}|x_{t-2}}\cdot \PR{y^2_{t-1}|x_{t-2}}
\cdot
\PR{x_{t-2}|\delta_t} \label{eq:kurtaran_update}
\end{align}
If, in addition to $\varphi_t$, $y^1_{t-1}$, $y^2_{t-1}$, $u^1_{t-1}$, and
$u^2_{t-1}$, each term of~\eqref{eq:kurtaran_update} depended only on terms that
are being summed over ($x_{t-2}$, $y^1_{t}$, $y^2_{t}$),
then~\eqref{eq:kurtaran_update} would prove~\eqref{eq:wrong_update}. However,
this is not the case: the first two terms also depend on $\delta_t$. Therefore,
the above calculation shows that $\varphi_{t+1}$ is a function of
$\varphi_{t},Y^1_{t-1},Y^2_{t-1},U^1_{t-1},U^2_{t-1}$ \emph{and} $\delta_t$.
This dependence on $\delta_t$ is not an artifact of the order in which we
decided to use the chain rule in~\eqref{eq:kurtaran_update} (we choose the
natural sequential order in the system). No matter how we try to write
$\varphi_{t+1}$ in terms of $\varphi_t$, there will be a dependence on
$\delta_t$.
The above argument shows that it is not possible to
establish~\eqref{eq:wrong_update}. Consequently, the dynamic programming
argument presented in~\cite{Kurtaran:1976} breaks down when working with the
information state of~\cite{Kurtaran:1979}, and, hence, the proof
in~\cite{Kurtaran:1979} is incomplete. So far, we have not been able to correct
the proof or find a counterexample to it.
\section {Conclusion} \label{sec:conclusion}
We studied the stochastic control problem with the $n$-step delayed sharing
information structure and established two structural results for it. Both
results characterize optimal control laws with time-invariant domains. Our
second result also establishes a partial separation result: it shows that the
information state at time $t$ is separated from the choice of control laws
before time $t-n+1$. Both results agree with Witsenhausen's conjecture for $n=1$.
To derive our structural results, we formulated an alternative problem from the
point of view of a coordinator of the system. We believe that this idea of formulating
an alternative problem from the point of view of a coordinator which has access
to information common to all controllers is also useful for general
decentralized control problems, as is illustrated
by~\cite{NayyarTeneketzis:2008} and~\cite{MahajanNayyarTeneketzis:2008}.
\section{Introduction}
Out of the almost infinite chemical design space of inorganic materials, only 262,242 experimentally synthesized crystal structures have been deposited in the ICSD database \cite{zagorac2019recent} as of April 2022. Intelligent computational algorithms are therefore strongly needed to navigate the huge uncharted chemical space for discovering novel materials. Currently, there are three major strategies for new materials discovery: the first is experimental tinkering, in which researchers manipulate a given composition, synthesize it, and characterize its structure or function \cite{zunger2021understanding}; the second uses computational models to generate new compositions, crystal structure prediction algorithms to predict their structures, and DFT calculations to characterize their properties \cite{dan2020generative}; the third directly trains generative models that create crystal structures for downstream property prediction or simulation \cite{zhao2021high}. The first approach is too costly to explore the huge design space, while the third is currently limited by the capability of existing crystal structure generation algorithms to generate stable structures. Considering that most existing materials can be assigned to a limited number of prototypes, the emergence of template-based crystal structure prediction algorithms \cite{wei2021tcsp} has made it promising to explore new materials discovery via composition generation followed by template-based crystal structure prediction (TCSP).
Here we propose to use deep learning language models for generative materials composition discovery. Our work is inspired by two observations: a material composition or formula can be conveniently converted into a unique sequence of elements by assuming a specific element order (e.g., SrTiO3 --> Sr Ti O O O, ordering elements by ascending electronegativity; see the sketch below), and pretrained language models have been widely used to generate natural language texts, molecules, and protein sequences. Pretrained self-supervised learning models such as BERT \cite{devlin2018bert} and GPT-3 \cite{brown2020language} are able to learn language/chemical grammars \cite{wei2021frequency} for text/molecule/protein generation \cite{rothe2020leveraging,li2021pretrained}. However, no such language models have been used for the generation of inorganic materials.
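As an illustration of this sequence representation, the following minimal sketch converts a formula into an expanded element sequence ordered by ascending electronegativity; it assumes the \texttt{pymatgen} library is available, and the exact tokenization used in our pipeline may differ in details.
\begin{verbatim}
# A minimal sketch: expand a formula into an element sequence ordered
# by ascending Pauling electronegativity (assumes pymatgen).
from pymatgen.core import Composition

def formula_to_sequence(formula: str) -> str:
    comp = Composition(formula)
    # sort constituent elements by ascending electronegativity
    elements = sorted(comp.elements, key=lambda el: el.X)
    tokens = []
    for el in elements:
        tokens.extend([el.symbol] * int(comp[el]))
    return " ".join(tokens)

print(formula_to_sequence("SrTiO3"))   # -> Sr Ti O O O
\end{verbatim}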
There are several categories of pretrained language models for text generation, as reviewed in \cite{li2022learning}, including masked language models, causal language models, prefix language models, and encoder-decoder language models. Masked language models such as BERT are trained by predicting masked tokens from contextualized information, which is not directly aligned with the text generation task; they can, however, serve as the encoder or decoder of text generation models owing to their excellent bidirectional encoding capacities. Causal language models such as GPT \cite{radford2018improving}, GPT-2 \cite{radford2019language}, and GPT-3 are trained to predict each token given all preceding tokens, making them ideal for text generation, although they have the weakness of neglecting bidirectional context. Prefix language models such as UniLM \cite{dong2019unified} and XLNet \cite{yang2019xlnet} aim to combine the advantages of the bidirectional masked language models (LMs) and the unidirectional causal LMs for text generation. A large class of text generators, such as T5 \cite{raffel2019exploring} and BART \cite{lewis2019bart}, belongs to the encoder-decoder language models, which consist of stacks of both encoder and decoder layers. All of these models have been used for molecule or protein sequence generation, but their performance for inorganic composition generation is unknown.
Although crystalline inorganic materials, organic molecules, and proteins are all composed of atoms, they differ fundamentally in their building blocks and topology: organic molecules consist of atoms connected by bonds, and protein sequences are chains of amino acids, whereas crystal materials are periodic structures whose unit cells contain a repeating structural pattern, with neither the chain topology of proteins nor the bonded connectivity of organic molecules. On the other hand, all three can be represented as sequences, so models for the generative design of proteins and molecules can serve as references for developing generative models for material composition generation.
Deep language models have been used for the generation of molecules \cite{bagal2021molgpt,rothchild2021c5t5}. Inspired by the GPT model, Bagal et al. \cite{bagal2021molgpt} trained a transformer decoder with masked self-attention on the next-token prediction task to generate drug-like molecules. Rothchild et al. \cite{rothchild2021c5t5} proposed a novel self-supervised pretraining method that enables transformers to make zero-shot select-and-replace edits, altering organic substances toward a desired property, and showed better performance and potential than graph-based methods. In \cite{kim2021generative}, Kim et al. combined a transformer encoder with a conditional variational autoencoder (cVAE) to achieve high-performance molecule generation. A similar generative VAE was also proposed in \cite{dollar2021attention}. However, all of these generative language models do not explicitly model the generation process and work more like black-box generators. In addition to VAE-based models, researchers have also used GAN-based and RNN-based models to generate molecules. Guimaraes et al. \cite{guimaraes2017objective} proposed a method that combines GANs and reinforcement learning (RL) so that RL biases the data generation process towards arbitrary metrics while the GAN component of the reward function ensures that the model still retains information learned from data. De Cao et al. \cite{de2018molgan} adapted GANs to operate directly on graph-structured data and validated the performance of their model with experiments on the QM9 chemical database.
Language models have also been applied to protein sequence generation \cite{madani2020progen,wu2020signal}. Madani et al. proposed an autoregressive transformer model named ProGen \cite{madani2020progen}, a 1.2-billion-parameter conditional language model trained on a dataset of 280 million protein sequences; they also incorporated conditioning tags for taxonomic, functional, and locational information to enable generation for targeted properties. Hesslow et al. \cite{hesslow2022rita} trained decoder-only transformer models without any conditioning information for protein sequence generation. Ingraham et al. \cite{ingraham2019generative} proposed a conditional generative model, based on graph representations, for protein sequence generation given 3D structures. More recently, Ram and Bepler developed the MSA-to-protein transformer, a generative model of protein sequences conditioned on protein families represented by multiple sequence alignments (MSAs). In contrast to earlier generative studies, which often lacked rigorous performance evaluation, Ferruz et al. \cite{ferruz2022deep} developed GPT-X-based transformer models (ProtGPT2) for generating de novo protein sequences and found that the generated sequences share similarities with natural ones, such as amino acid propensities and disorder. Linder et al. \cite{linder2020generative} developed a language model that can maximize the fitness and diversity of synthetic DNA and protein sequences. Overall, except for normalizing flow models, most classes of generative models, including autoregressive models (RNN/LSTM/transformers), VAEs, and GANs, have been applied to protein generation \cite{osadchy2021deep}. However, compared to studies of deep generative models for molecules, there is a lack of standard benchmark datasets and performance evaluation metrics, which significantly hinders the development of generative protein sequence models.
Despite the success of deep language models in protein and molecule sequence generation, no studies have reported successfully applying deep language models to inorganic materials composition generation, except for our recent work on generative transformers \cite{wei2022crystal}. Here, we develop and evaluate six materials transformers based on different language models for materials composition generation. We train all the models on materials composition/formula data, in the form of unlabeled expanded element-symbol sequences, from selected samples of the ICSD, OQMD, and Materials Project (MP) databases. Compared to natural language texts, inorganic materials composition sequences have strong constraints among the elements due to the requirement to form chemically valid and structurally stable periodic structures; this involves complex atomic interactions arising from ionic or covalent bonds and/or the oxidation states of constituent elements. Effective materials composition generation models have to learn complex local and long-range dependencies and generation contexts, which transformer neural networks excel at detecting and modeling. Our language-model-based composition generators have an advantage over heuristic or data-mining element-substitution models \cite{hautier2011data,sun2019map} as they can consider the chemical context within the formulas rather than only element property compatibility. Our extensive generative composition design experiments show that the transformer-based materials generators can learn chemical grammars and achieve a high composition generation performance. Our additional experiments also show that materials transformers have different generation preferences or biases, such as a tendency to generate materials compositions with more than four elements and with low numbers of atoms per element.
\section{Results}
\label{sec:headings}
\subsection{Pretrain Transformer language models for material composition generation }
We select six different transformer-based language models, as implemented in the Huggingface package \cite{shen2020blank}, to build a series of Material Transformer (MT) generators. The group includes four GPT-series language models, BART \cite{lewis2019bart}, and RoBERTa \cite{liu2019roberta}. In addition, we include our previous model, BLMM \cite{wei2022crystal}, in our experiments for performance comparison. A sketch of how such a generator can be assembled and sampled follows the list below.
\begin{itemize}
\item \textit{GPT (Generative Pre-trained Transformer) model} \cite{radford2018improving}: GPT is a transformer-based language model (LM) that works on various NLP tasks with unsupervised training. The original GPT uses a 12-layer decoder-only transformer with masked self-attention and is therefore powerful at predicting the next token in a sequence. Considering this property, we use GPT to generate crystal formulas and call the resulting GPT-based materials composition generation model MT-GPT.
\item \textit{GPT-2 model} \cite{radford2019language}: GPT-2 is a large transformer-based causal language model derived from GPT that is trained simply to predict the next word in large text corpora given all of the previous words within some text. GPT-2 models are trained with much more diverse text data and over an order of magnitude more parameters than GPT. GPT-2 uses a modified byte pair encoding as its input representation, combining the benefits of word-level LMs with the generality of byte-level approaches. It also includes a few architectural changes, such as moving layer normalization to the input of each sub-block and adding a layer normalization after the final self-attention block. This model maps naturally to the crystal formula generation task. We call this GPT-2-based materials composition generation model MT-GPT2.
\item \textit{GPT-J model} \cite{wang2021gpt}: GPT-J is an open-source version of the multi-head GPT-3 model \cite{brown2020language}. As an autoregressive causal language model, it was originally used for text generation tasks. GPT-3 models use the same network architecture as GPT-2 except for their use of alternating dense and locally banded sparse attention patterns in the transformer layers. Since the core ability of GPT-J is to take a string of text and predict the next token, it is good at generating texts from a prompt. We call the GPT-J-based materials composition generation model MT-GPTJ.
\item \textit{GPT-NEO model} \cite{gpt-neo,gao2020pile}: GPT-Neo is an implementation of a GPT-3-like causal language model using the Mesh-TensorFlow library. The architecture of GPT-Neo is similar to GPT-3 except that GPT-Neo uses local attention in every other layer with a window size of 256 tokens. GPT-Neo is trained as an autoregressive language model, which means that it can also predict the next token as the previous models do. We call this GPT-Neo-based materials composition generation model MT-GPTNeo.
\item \textit{RoBERTa model} \cite{liu2019roberta}: RoBERTa is a dynamic-masking pretrained language model based on the BERT model \cite{devlin2018bert}. It achieves higher performance by applying an improved pretraining procedure over the original BERT, which includes training with more epochs and larger mini-batches, removing the next-sentence prediction objective, training on longer sequences, and dynamically changing the masking pattern applied to the training data. We call this RoBERTa-based materials composition generation model MT-RoBERTa.
\item \textit{BART model} \cite{lewis2019bart}: BART combines a bidirectional transformer encoder (like BERT) with a left-to-right decoder (like GPT) to form a denoising autoencoder. BART models are trained by corrupting text with a noising function and learning to reconstruct the original text. BART is particularly effective for text generation while still performing well on comprehension tasks. We call this BART-based materials composition generation model MT-BART.
\item BLMM \cite{wei2022crystal} is a transformer-based generative language model for materials composition generation, based on the blank filling language model BLM \cite{shen2020blank}. It formulates composition generation as a sequential probabilistic sequence-rewriting problem, which allows it to model the generation process directly, giving it high interpretability and high generation efficiency.
\end{itemize}
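As a concrete illustration of how such a materials transformer can be assembled, the sketch below trains a small MT-GPT2-style model with the HuggingFace \texttt{transformers} API and samples a new element sequence from it. The model size, file names, toy corpus, and sampling settings are illustrative assumptions, not the hyper-parameters used in this paper.
\begin{verbatim}
# A minimal sketch of an MT-GPT2-style generator (assumes HuggingFace
# transformers and datasets; sizes/paths are illustrative only).
from transformers import (GPT2Config, GPT2LMHeadModel,
                          PreTrainedTokenizerFast, Trainer,
                          TrainingArguments,
                          DataCollatorForLanguageModeling)
from datasets import Dataset

# word-level tokenizer over element symbols, built beforehand and
# saved to element_tokenizer.json (a hypothetical file name)
tok = PreTrainedTokenizerFast(tokenizer_file="element_tokenizer.json")
tok.pad_token = "[PAD]"

config = GPT2Config(vocab_size=len(tok), n_positions=256,
                    n_embd=256, n_layer=6, n_head=8)
model = GPT2LMHeadModel(config)

# toy corpus of expanded element sequences (one formula per line)
seqs = ["Sr Ti O O O", "Na Cl", "Li Fe P O O O O"]
train_ds = Dataset.from_dict({"text": seqs}).map(
    lambda ex: tok(ex["text"]), remove_columns=["text"])

collator = DataCollatorForLanguageModeling(tok, mlm=False)  # causal LM
args = TrainingArguments(output_dir="mt_gpt2", num_train_epochs=3,
                         per_device_train_batch_size=2)
Trainer(model=model, args=args, train_dataset=train_ds,
        data_collator=collator).train()

# sample a new element sequence from a one-token prompt
seed = tok("Sr", return_tensors="pt").input_ids
out = model.generate(seed, max_length=64, do_sample=True, top_k=50,
                     pad_token_id=tok.pad_token_id)
print(tok.decode(out[0]))
\end{verbatim}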
\subsection{De novo generative design of materials composition }
\paragraph{Model Training and hyper-parameters}
We prepare two sets of training datasets to train the MT models for materials composition generation. The first set includes two datasets, the ICSD-mix dataset and the ICSD-pure dataset, which contain selected compositions from the ICSD database. The former includes samples that do not satisfy charge neutrality (CN) or balanced electronegativity (EB), while the latter contains only samples that satisfy both chemical criteria. To evaluate whether increasing the number of training samples can improve the generation performance, we also prepare a second set of datasets, Hybrid-mix, Hybrid-strict, and Hybrid-pure, with selected compositions from the ICSD, Materials Project (MP), and OQMD databases. The Hybrid-mix dataset includes all formulas from the three databases. The Hybrid-strict dataset is selected from the Hybrid-mix dataset and contains only samples that satisfy charge neutrality and balanced electronegativity under the ICSD oxidation-state assignments for all elements. The Hybrid-pure dataset has the same requirement except that CN and EB are evaluated using the more relaxed oxidation-state assignments implemented as the defaults in SMACT \cite{davies2019smact}. Details of each dataset are given in Section~\ref{subsec:dataset}, and the sample counts are listed in Table~\ref{tab:datasets}.
For each dataset, we conduct hyper-parameter tuning to find good hyper-parameters within a reasonable tuning budget (see Section~\ref{subsec:para-tuning}). We determine the number of training epochs by inspecting the learning curves of the training and validation losses to avoid overfitting. For each trained transformer model, we generate 50,000 candidate compositions that contain more than one and fewer than nine elements and at most thirty atoms in total. We then check the percentages of these samples that satisfy CN and/or EB; a simplified version of this check is sketched below.
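The CN and EB checks can be implemented with the SMACT package. The sketch below is a simplified version of such a screen (integer stoichiometries, default SMACT oxidation states), so its details may differ from the exact filters described in Section~\ref{subsec:criteria}.
\begin{verbatim}
# A simplified CN/EB screen using SMACT primitives (assumes smact
# and pymatgen); the exact production filter may differ.
import itertools
import smact
from smact.screening import pauling_test
from pymatgen.core import Composition

def cn_eb_valid(formula: str) -> bool:
    comp = Composition(formula)
    symbols = [el.symbol for el in comp.elements]
    counts = [int(comp[el]) for el in comp.elements]
    elems = [smact.Element(s) for s in symbols]
    electronegs = [e.pauling_eneg for e in elems]
    ox_combos = [e.oxidation_states for e in elems]
    for ox_states in itertools.product(*ox_combos):
        # charge neutrality for this oxidation-state assignment
        cn_ok, _ = smact.neutral_ratios(
            ox_states, stoichs=[[c] for c in counts])
        if cn_ok and pauling_test(ox_states, electronegs):
            return True            # CN and EB both satisfied
    return False

print(cn_eb_valid("SrTiO3"))       # -> True
\end{verbatim}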
\paragraph{Generation of hypothetical material compositions:}
\begin{figure}[ht!]
\begin{subfigure}[t]{0.33\textwidth}
\centering
\includegraphics[height=0.8\textwidth]{train.png}
\caption{}
\vspace{-3pt}
\label{fig:train_tsne}
\end{subfigure}
\begin{subfigure}[t]{0.33\textwidth}
\centering
\includegraphics[height=0.8\textwidth]{train_GPT_BLM_generate.png}
\caption{}
\vspace{-3pt}
\label{fig:mix_tsne}
\end{subfigure}
\begin{subfigure}[t]{0.34\textwidth}
\raisebox{-0.085\height}{\includegraphics[height=0.81\textwidth]{matgan_distribution.png}}
\caption{}
\vspace{-3pt}
\label{fig:matgan_tsne}
\end{subfigure}
\caption{The distributions of existing materials and hypothetical materials generated by BLMM, MT-GPT2, and MATGAN. The distributions are generated by computing one-hot representations of the compositions and using t-SNE to project them into a two-dimensional space. (a) Distribution of the training set. (b) Distribution of the training set and of the samples generated by our BLMM and MT-GPT2. (c) Distribution of the training set, the test set, and the generated samples of MATGAN \cite{dan2020generative}. }
\label{fig:distribution}
\end{figure}
To evaluate whether our language MT models can learn the chemistry of inorganic materials (compositions) and use it to generate valid hypothetical formulas, we first compare the distributions of the samples generated by BLMM, MT-GPT2, and MATGAN \cite{dan2019generative} with respect to the training set. We represent each formula using the one-hot encoding described in \cite{dan2020generative} and map all sample matrix representations into a two-dimensional space using the t-SNE algorithm (a minimal sketch of this projection is given after this paragraph). The results are shown in Figure~\ref{fig:distribution}. As shown in Figure~\ref{fig:train_tsne}, the compositions of the ICSD materials in the training set are not evenly distributed, with a shift towards the left half of the space. Figure~\ref{fig:mix_tsne} shows the distributions of the training set and of the samples generated by MT-GPT2 and BLMM; the BLMM-generated samples overlap more with the training set than the MT-GPT2-generated ones do. Interestingly, both distributions differ markedly from that of the MATGAN-generated samples shown in Figure~\ref{fig:matgan_tsne}, in which the training samples are organized into several clusters corresponding to materials families, the known materials (training and testing samples) occupy only a tiny portion of the whole composition space, and MATGAN tends to generate very different samples.
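The projection can be reproduced along the following lines; the fractional-composition encoding used here is a simplified stand-in for the one-hot matrix representation of \cite{dan2020generative}, and the formula list is a toy placeholder.
\begin{verbatim}
# A minimal sketch of the t-SNE composition map (assumes scikit-learn
# and pymatgen); the encoding is a simplified stand-in for the
# one-hot representation of dan2020generative.
import numpy as np
from sklearn.manifold import TSNE
from pymatgen.core import Composition, Element

SYMBOLS = [Element.from_Z(z).symbol for z in range(1, 95)]  # H..Pu

def encode(formula: str) -> np.ndarray:
    vec = np.zeros(len(SYMBOLS))
    comp = Composition(formula).fractional_composition
    for el, frac in comp.items():
        vec[SYMBOLS.index(el.symbol)] = frac
    return vec

formulas = ["SrTiO3", "NaCl", "LiFePO4"]   # training + generated sets
X = np.stack([encode(f) for f in formulas])
coords = TSNE(n_components=2, perplexity=2).fit_transform(X)
print(coords.shape)                         # (n_samples, 2)
\end{verbatim}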
\begin{figure}[hbt!]
\centering
\includegraphics[width=0.9\linewidth]{tsne_distribution.png}
\vspace{3pt}
\caption{The distributions of the training set and of hypothetical materials generated by different materials transformers. The distributions are generated by computing one-hot representations of the compositions and using t-SNE to project them into a two-dimensional space. (a) Training samples; (b) BLMM; (c) Random algorithm; (d) MT-GPT; (e) MT-GPT2; (f) MT-GPTNeo; (g) MT-GPTJ; (h) MT-BART; (i) MT-RoBERTa. }
\label{fig:dist_all}
\end{figure}
To further illustrate the composition patterns of the materials generators, we combine the training set with the samples generated by BLMM, the pseudo-random algorithm, MT-GPT, MT-GPT2, MT-GPTNeo, MT-GPTJ, MT-BART, and MT-RoBERTa, conduct t-SNE dimension reduction to map the nine sets of samples into a 2D space, and then plot each set of compositions separately in the same coordinate system, as shown in Figure~\ref{fig:dist_all}. First, we find that the BLMM-generated samples in (b) show the highest similarity to the training samples in (a) among all generators. The next most similar distribution is that of MT-BART, which has a similar rectangular shape but fewer samples in the upper-left area. The GPT-series transformers tend to generate compositions with similar distributions, with variation in several local areas of the chemical space; for example, they all have sparse samples in the upper-left area, which may be because they have difficulty generating compositions with fewer than five elements (see Figure~\ref{fig:elemenet_dist}). The pseudo-random generator also shows a certain degree of distribution similarity to the training set because it uses composition prototypes as templates for generation, which means that it has the same numbers of binary, ternary, and quaternary samples as the training set. Finally, the samples generated by MT-RoBERTa have a very different distribution; this model also has the most difficulty generating compositions with <=30 atoms and <=8 elements.
\subsection{Evaluations of materials transformer generation performance using validity, uniqueness, recovery rate, and novelty}
We evaluate the performance of our generative models for materials composition design and compare with the baseline random formula generator using four evaluation criteria including validity, uniqueness, recovery rate, and novelty as described in the Section \ref{subsec:criteria}.
\paragraph{The CN and EB performance}
Figure~\ref{fig:validity} shows the composition generation validity performance of the seven transformer-based models and compares them with the pseudo-random generator, as evaluated on the ICSD-pure (37459 samples, with 95.94\% CN and 93.12\% CN+EB) and ICSD-mix (50755 samples, with 78.40\% CN and 72.36\% CN+EB) datasets. Note that all percentages are calculated on the generated samples with no more than 9 elements and 30 atoms. First, the CN and EB percentages of the samples generated by all seven transformer models range from 65.87\% to 97.54\%, more than six times higher than those of the random generator (max CN: 10.13\% and max CN+EB: 5.91\%). Out of the seven models, we find that MT-GPTNeo and MT-GPTJ have the best overall validity performance, followed by MT-GPT2, MT-BART, and MT-RoBERTa. The BLMM and MT-GPT models tend to have the lowest validity, even though their gaps with MT-BART and MT-RoBERTa are small. However, a close investigation of the filtering process shows that more than 45\% of the generated compositions are filtered out for MT-BART, MT-RoBERTa, and the GPT-series models, while BLMM has less than 0.3\% of its samples filtered out: as shown in Figure~\ref{fig:elemenet_dist}, the MT models tend to generate formulas with more than five elements, which are removed by the filtering step. We note in particular that MT-RoBERTa's performance is evaluated on fewer than 10,000 generated samples, as it has difficulty generating 50,000 candidate samples with at most 30 atoms and fewer than 9 elements within a reasonable amount of time (most of its samples are filtered out). The non-BLMM models tend to generate compositions with more than 8 elements. In addition, these seven models differ greatly in sample generation speed, as shown in Figure~\ref{fig:time}.
Another trend we find is that the CN/EB validity percentages of samples generated by models trained on the ICSD-pure dataset are in general higher than those of the models trained on ICSD-mix, except for the CN+EB percentages of MT-GPT2, MT-GPTJ, and MT-RoBERTa. On close examination, we find these exceptions are mainly due to the filtering process discussed above.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.8\textwidth]{ICSD_performance.png}
\caption{The comparison of the validity of materials composition generators on the ICSD-mix and ICSD-pure datasets. The percentages of CN and CN+EB samples out of the training set and all generated samples by the generator models are used to represent validity.}
\label{fig:validity}
\end{figure}
We also compare the validity performance of our MT-GPTNeo model with that of our previously developed MATGAN models, which are based on generative adversarial networks \cite{dan2020generative}. Our MT-GPTNeo model achieves 97.54\% CN and 84.64\% EB (for the filtered samples with fewer than 31 atoms and 9 elements), compared to 80.3\% CN and 70.3\% EB achieved by the GAN model trained on the ICSD-mix dataset, showing that our models have an advantage of about 14-17\%. Similar performance advantages are also observed for our models trained on the ICSD-pure dataset. However, we note that this performance advantage is evaluated over the filtered samples.
To check whether increasing the number of training samples can improve generator performance, we train our models on the three Hybrid datasets, Hybrid-mix, Hybrid-pure, and Hybrid-strict, which have 398033, 244281, and 202139 training samples respectively. The model hyper-parameters are also tuned on the new datasets. The final performances are shown in Table~\ref{tab:hybrid-performance}. First, we find that the CN and EB percentages of the Hybrid-mix training set are much lower than those of ICSD-mix, at only 61.67\% CN and 50.61\% CN+EB. However, the validity percentages of all models trained on the Hybrid-mix dataset are much higher than those of the training set; in particular, the GPT-series models achieve CN percentages above 92\% and EB percentages above 78\%. These unexpectedly high validity percentages are due to the filtering process: the models are good at generating chemically valid compositions with fewer than 9 elements and 31 atoms, but they also generate a large percentage of formulas with many atoms (>30) and elements (>8). The Hybrid-pure dataset has higher CN (83.48\%) and EB (75.94\%) than the Hybrid-mix dataset, which helps to improve the CN percentages of MT-GPTNeo, MT-GPTJ, MT-BART, and MT-RoBERTa and the EB percentages of MT-GPTNeo and MT-BART. Finally, the training set of the Hybrid-strict dataset has the highest CN (99.54\%) and EB (99.25\%); almost all models except MT-GPTNeo achieve their highest CN and EB on it, indicating that high-quality training samples contribute to the high validity performance of the trained models. Overall, we find that MT-GPTJ has the best performance, with CN of 96.98\% and EB of 90.33\% on Hybrid-mix, CN of 97.54\% on Hybrid-pure, and CN of 97.61\% on the Hybrid-strict dataset. We also find that MT-BART and MT-RoBERTa have the lowest validity performance on the Hybrid-mix dataset compared to the GPT-series models.
As mentioned earlier, the Hybrid datasets contain between roughly 200,000 and 400,000 samples, while the ICSD datasets contain between roughly 37,000 and 51,000 samples. We therefore also compare the performance of models trained on the Hybrid datasets with those trained on the ICSD datasets to investigate the effect of training set size on the generation performance. The CN and EB percentages of MT-GPT increase not only from 81.01\% and 66.54\% to 92.24\% and 78.29\% on Hybrid-mix, but also from 81.47\% and 71.96\% to 91.51\% and 77.76\% on Hybrid-pure. For MT-GPT2 and MT-RoBERTa, the CN and EB percentages improve on the Hybrid-mix dataset compared to the ICSD-mix dataset. As for MT-GPTJ and MT-BART, they achieve better CN and EB percentages on the Hybrid-pure dataset than on the ICSD-pure dataset.
\begin{table}[]
\centering
\caption{Comparison of generator performances of models trained with Hybrid datasets.}
\label{tab:hybrid-performance}
\begin{tabular}{
>{\columncolor[HTML]{FFE599}}l |cc|cc|cc}
\hline
& \multicolumn{2}{c|}{\cellcolor[HTML]{FFE599}Hybrid-mix} & \multicolumn{2}{c|}{\cellcolor[HTML]{FFE599}Hybrid-pure} & \multicolumn{2}{c}{\cellcolor[HTML]{FFE599}Hybrid-strict} \\ \hline
Model & \multicolumn{1}{c|}{CN} & CN + EB & \multicolumn{1}{c|}{CN} & CN+EB & \multicolumn{1}{c|}{CN} & CN+EB \\ \hline
TrainingSet & \multicolumn{1}{c|}{61.67\%} & 50.61\% & \multicolumn{1}{c|}{83.48\%} & 75.94\% & \multicolumn{1}{c|}{99.54\%} & 99.25\% \\ \hline
MT-GPT & \multicolumn{1}{c|}{92.24\%} & 78.29\% & \multicolumn{1}{c|}{91.51\%} & 77.76\% & \multicolumn{1}{c|}{92.21\%} & 84.57\% \\ \hline
MT-GPT2 & \multicolumn{1}{c|}{92.96\%} & 79.79\% & \multicolumn{1}{c|}{87.82\%} & 70.83\% & \multicolumn{1}{c|}{96.99\%} & { \textbf{92.81\%}} \\ \hline
MT-GPTNeo & \multicolumn{1}{c|}{93.84\%} & 84.37\% & \multicolumn{1}{c|}{97.29\%} & { \textbf{91.40\%}} & \multicolumn{1}{c|}{94.69\%} & 77.39\% \\ \hline
MT-GPTJ & \multicolumn{1}{c|}{{ \textbf{96.98\%}}} & { \textbf{90.33\%}} & \multicolumn{1}{c|}{{ \textbf{97.54\%}}} & 87.05\% & \multicolumn{1}{c|}{{ \textbf{97.61\%}}} & 91.22\% \\ \hline
MT-BART & \multicolumn{1}{c|}{81.10\%} & 62.83\% & \multicolumn{1}{c|}{85.23\%} & 70.07\% & \multicolumn{1}{c|}{88.01\%} & 80.47\% \\ \hline
MT-RoBERTa & \multicolumn{1}{c|}{71.16\%} & 61.00\% & \multicolumn{1}{c|}{84.66\%} & 60.26\% & \multicolumn{1}{c|}{93.68\%} & 80.81\% \\ \hline
\end{tabular}
\end{table}
\FloatBarrier
\paragraph{The stability performance} We also evaluate the validity of the compositions generated by all MT models and the pseudo-random generator by checking their stability, as partially represented by their predicted formation energies. We train a Roost-based \cite{goodall2020predicting} formation energy machine learning prediction model (see Section~\ref{subsec:formation_energy}) using all the MP samples. The formation energy distributions of the ICSD-pure training set, the generated samples of all MT models, and the random samples are shown in Figure~\ref{fig:formenergy_dist}. First, we find that the formation energies of the training set are mostly below zero eV and the shape of the distribution looks like a standard violin, while the distribution of the random samples is very different, with a large percentage of samples located near zero eV. Out of the seven MT models, the generated samples of MT-GPT and MT-RoBERTa are of lower quality, as more of them are located in the high-energy region. While the BLMM model's samples show a distribution shape more similar to the training samples, the samples generated by MT-GPT2, MT-GPTNeo, and MT-BART tend to have lower formation energies. The best quality of generated samples, as represented by the predicted formation energy, comes from the MT-GPTJ model, whose sample density peaks around -200 eV. Overall, Figure~\ref{fig:formenergy_dist} shows that our MT models can generate chemically valid samples with negative formation energies; an illustrative composition-based formation-energy predictor is sketched below.
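The screen above uses a Roost-based model (Section~\ref{subsec:formation_energy}). Purely as an illustration of composition-only property prediction, the sketch below fits a simple Magpie-feature regressor with \texttt{matminer} and \texttt{scikit-learn}; it is a stand-in for, not a reproduction of, the Roost model used here, and the training data are toy values.
\begin{verbatim}
# An illustrative composition-only formation-energy regressor
# (assumes matminer and scikit-learn); a stand-in for the Roost
# model actually used in this work. Training values are toy data.
from matminer.featurizers.composition import ElementProperty
from sklearn.ensemble import RandomForestRegressor
from pymatgen.core import Composition

featurizer = ElementProperty.from_preset("magpie")

def featurize(formula: str):
    return featurizer.featurize(Composition(formula))

# toy training data: (formula, formation energy in eV/atom)
train = [("NaCl", -2.10), ("SrTiO3", -3.55), ("Fe2O3", -1.70)]
X = [featurize(f) for f, _ in train]
y = [e for _, e in train]
model = RandomForestRegressor(n_estimators=200).fit(X, y)

print(model.predict([featurize("KBr")]))  # predicted E_form (eV/atom)
\end{verbatim}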
\paragraph{The recovery rate performance} Another way to check the validity of generators is to count the generated samples that do not exist in the training set but do exist as known materials in the leave-out dataset or in third-party experimental or computational databases; this criterion is often reported as the recovery rate. We check the 56,162 generated samples of MT-GPTJ and find that 196 samples exist in one of the three databases ICSD/OQMD/MP. In terms of the holdout recovery rate, our model's performance (0.81\% overall) is much lower than that of the BLMM model, which achieves 100\% for binary materials, 63.37\% for ternary materials, and 29.17\% for quaternary ones; it is also lower than that of the MATGAN model, whose binary, ternary, and quaternary recovery rates are 82.7\%, 31.2\%, and 5.2\% respectively. The reason is that our transformer-based language models tend to generate compositions with more than five elements (see Figure~\ref{fig:elemenet_dist}).
\begin{figure}[ht!]
\centering
\includegraphics[width=\linewidth]{FE_9_models.pdf}
\caption{The comparison of formation energy distributions of the training samples, samples from the pseudo-random generator, and generated samples by different models trained on the ICSD-pure dataset.}
\label{fig:formenergy_dist}
\end{figure}
\paragraph{The uniqueness performance} Another important performance measure of generative models is the uniqueness, i.e., the percentage of unique samples out of all generated samples \cite{polykovskiy2020molecular}. For the five MT-GPTNeo models trained on the five datasets, we calculate the uniqueness percentages for every 200 generated samples, up to 10,800 samples with <=30 atoms and <=8 elements. The results are shown in Figure~\ref{fig:uniq}. First, we find that the MT-GPTNeo models show high uniqueness: after generating 10,800 filtered samples, the uniqueness percentages remain around or above 50\% for four of the five datasets: ICSD-mix (59.91\%), ICSD-pure (61.38\%), Hybrid-mix (55.06\%), and Hybrid-strict (47.69\%); the Hybrid-pure model is lower at 28.27\%. Another interesting observation is that the models trained on the ICSD-mix and ICSD-pure datasets dominate in terms of uniqueness, probably because these two datasets have far fewer training samples (50755 and 37459, versus more than 200,000 for the Hybrid datasets): the more training samples, the stricter the compositional constraints the models learn, and thus the lower the diversity/uniqueness of the generated samples. Among the three Hybrid models, the model trained on Hybrid-mix has the highest uniqueness, since that dataset contains diverse samples that do not satisfy the CN/EB chemical rules. The uniqueness differences among these three models can be attributed to the different distributions of their training sets.
\paragraph{The novelty performance} We also check the novelty of the composition generators, i.e., the percentage of generated samples that do not exist in the training set. All six of our models achieve more than 97\% overall novelty when trained on the ICSD-pure dataset. In comparison, the BLMM model achieves 97.66\%, 96.11\%, and 95.55\% novelty for binary, ternary, and quaternary compounds respectively, indicating a comparable or slightly better capability to generate new materials. A minimal sketch of both metrics is given below.
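For reference, the two metrics over canonicalized formula strings can be computed as follows (toy inputs shown).
\begin{verbatim}
# Minimal uniqueness/novelty metrics over canonical formula strings.
def uniqueness(generated):
    # fraction of generated samples that are distinct
    return len(set(generated)) / len(generated)

def novelty(generated, train_set):
    # fraction of distinct generated samples absent from training set
    unique = set(generated)
    return len(unique - set(train_set)) / len(unique)

gen = ["SrTiO3", "NaCl", "SrTiO3"]
print(uniqueness(gen))                  # 0.667
print(novelty(gen, ["NaCl"]))           # 0.5
\end{verbatim}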
\begin{figure}[ht!]
\centering
\includegraphics[width=0.6\linewidth]{uniq.pdf}
\caption{The uniqueness of the MT-GPTNeo models trained on the five datasets.}
\label{fig:uniq}
\end{figure}
\FloatBarrier
\subsection{Conditional generative design of materials compositions with high bandgap}
\label{subsec:bandgap}
Conditional generation capability is highly desirable for function-oriented computational materials design; \cite{flam2022language} used this type of evaluation for benchmarking molecule generator models. To evaluate whether our language models can capture the composition rules of high-bandgap materials for directed generation, we collect 30,000 formulas with band gaps above 1.98 eV from the Materials Project database (for formulas with multiple phases, we include the formula if at least one phase has a band gap greater than 2.0 eV). We call this the Bandgap-30K dataset. We train an MT-GPT2 composition generator on it and use it to generate 100,000 formulas, of which 87,233 satisfy the charge neutrality and balanced electronegativity requirements. We then use the composition-based band gap prediction model (see Section~\ref{subsec:formation_energy}) to predict the band gaps of these filtered hypothetical compositions and plot their distribution against the band gap distributions of the training set and of all MP samples. As shown in Figure~\ref{fig:bandgap}, the band gap distribution of our hypothetical materials is much closer to that of the high-bandgap training set than to that of all MP samples, which indicates that the MT-GPT2 model has learned the implicit rules for generating high-bandgap materials.
\vspace{-4mm}
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\linewidth]{figures/BD_MP_high_gene.pdf}
\vspace{-5mm}
\caption{Band gap distributions of (1) all Materials Project (MP) materials, (2) the training set of high-bandgap MP materials for the MT-GPT2 model, and (3) the samples generated by the MT-GPT2 model trained on this dataset. The band gap distribution of the generated samples is much closer to that of the training set than to that of the whole MP dataset; the generated samples tend to have higher band gaps.}
\label{fig:bandgap}
\end{figure}
\subsection{New materials predicted by MT-GPT2 and validated using DFT}
We use the Bandgap-30K dataset to train an MT-GPT2 model and use it to generate more than 100,000 material compositions. We then predict their formation energies using the composition-based formation energy prediction model (see Section~\ref{subsec:formation_energy}). From the predicted formation energies, we compute total energies and estimate e-above-hull energies to rank the candidates (the e-above-hull computation is sketched below). We then pick the top 100 formulas with the lowest predicted e-above-hull energies and apply TCSP, our template-based crystal structure prediction algorithm \cite{wei2021tcsp}, to obtain their structures. For the predicted structures with the best quality scores, we run DFT relaxation to obtain the final structures and to calculate their formation energies and e-above-hull energies (see Section~\ref{subsec:DFT}). Table~\ref{tab:finding} shows the top 20 discovered new materials along with their formation energies. Among the predicted structures, we identify two new crystal materials with an e-above-hull energy of 0 eV; their structures are shown in Figure~\ref{fig:structuresfound}.
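The e-above-hull ranking can be reproduced with \texttt{pymatgen}'s phase-diagram tools. In the sketch below, the API key and the candidate's predicted total energy are placeholders, and the legacy \texttt{MPRester} interface is assumed.
\begin{verbatim}
# A minimal e-above-hull check with pymatgen (API key and predicted
# total energy are placeholders; assumes the legacy MPRester API).
from pymatgen.ext.matproj import MPRester
from pymatgen.analysis.phase_diagram import PhaseDiagram, PDEntry
from pymatgen.core import Composition

with MPRester("YOUR_API_KEY") as mpr:
    refs = mpr.get_entries_in_chemsys(["Sr", "Al", "Cl", "O"])

# candidate with its predicted total energy (eV per formula unit)
cand = PDEntry(Composition("SrAlClO2"), -30.0)
pd = PhaseDiagram(refs + [cand])
print(pd.get_e_above_hull(cand))   # 0.0 => on the convex hull
\end{verbatim}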
\begin{table}[]
\centering
\caption{Twenty materials found with negative formation energy ($E_\mathrm{form}$) using DFT}
\label{tab:finding}
\vspace{5pt}
\begin{tabular}{|l|l|l|l|}
\hline
Formula & $E_\mathrm{form}$ & Formula & $E_\mathrm{form}$ \\ \hline
SrAlClO2 & -3.0077 & BaSrTiO3 & -2.5268 \\ \hline
LiMgBrF2 & -2.9039 & LiScNiF3 & -2.5214 \\ \hline
BaScSeF2 & -2.8662 & ScBeOF & -2.5074 \\ \hline
AlBrF2 & -2.8367 & KSrAlO3 & -2.5024 \\ \hline
LiMg2IF4 & -2.8118 & AlFeOF3 & -2.5015 \\ \hline
KSrScO3 & -2.7351 & Sc2AlZnO4 & -2.4719 \\ \hline
BaScO2 & -2.6801 & LiBeOF & -2.4639 \\ \hline
KBaScO3 & -2.6221 & SrBeSeF2 & -2.4584 \\ \hline
KBaAlO3 & -2.5924 & MgVHF4 & -2.4521 \\ \hline
RbBeOF & -2.5392 & KSrZrO3 & -2.4300 \\ \hline
\end{tabular}
\end{table}
\begin{figure}[ht]
\centering
\includegraphics[width=0.6\linewidth]{figures/structures.eps}
\caption{Candidate new structures discovered by our MT-GPT2 model with zero e-above-hull.}
\label{fig:structuresfound}
\end{figure}
\FloatBarrier
\section{Discussion}
We have applied a series of transformer-based pretrained language models for the generative design of inorganic materials compositions. Our extensive experiments over five datasets with diverse sample sizes and quality (in terms of charge neutrality and balanced electronegativity) show that these materials transformer models are competent in generating chemically valid compositions or formulas while most of them have their own generation preference or bias. We find that MT-GPTJ overall has the best generation performance (after simple filtering) in terms of validity and generation speed.
We also examine the generation speed of the generators, as shown in Figure~\ref{fig:time}. We count the amount of time (in seconds) that each model needs to generate 1,000 compositions without any filtering. The BLMM model is the fastest, requiring 145 seconds. The next fastest models are MT-GPT2, MT-GPTNeo, and MT-GPTJ, which use 219, 366, and 449 seconds respectively. The slowest generators are MT-GPT, MT-BART, and MT-RoBERTa, which are almost 7.6 to 13.6 times slower than BLMM. We also find that the models vary greatly in their ability to generate qualified candidates with <=30 atoms and <=8 elements. Figure S9 in the supplementary file shows the number of qualified candidate compositions generated by different models within 25,000 loops, in each of which a sequence of 256 tokens is generated and then partitioned into multiple compositions. We find that MT-RoBERTa has an extremely low yield rate of qualified compositions compared to the other transformer models. It is surprising that the MT-GPT model trained on the ICSD-pure dataset also has difficulty generating such qualified candidates. Both tend to generate formulas with >8 elements.
Another potential factor affecting generator performance is the training set size. To examine this, we train 15 MT-GPT2 models with training set sizes ranging from 1,000 to 377,084. Their generation performance in terms of validity, as represented by the CN and EB percentages, is shown in Figure S7 of the supplementary file. A general trend is that increasing the training set size leads to better generator performance. Another major decision in materials transformer training is determining the optimal number of training epochs for each model and dataset so that overfitting is avoided. Figure S8 (b) in the supplementary file shows the learning curves of the training and validation losses of MT-GPT2. We find that after around 700 epochs the training process starts to overfit, as the validation loss begins to rise above the training loss. However, we observe that the corresponding CN/EB percentages do not degrade much despite the overfitting.
\begin{figure}[ht]
\centering
\includegraphics[width=0.6\linewidth]{time_chart.pdf}
\caption{Time for generating 1,000 compositions by different models. The fastest model is BLMM while the slowest one is MT-RoBERTa.}
\label{fig:time}
\end{figure}
To further examine the generation capabilities and potential biases of different materials transformers, we train the BLMM and MT-GPT2 models using the Bandgap-30K dataset defined in Section \ref{subsec:bandgap} (Conditional generation) and generate 89,321 and 87,232 samples, respectively. We then plot the distributions of the number of elements per composition, the number of atoms per element, and the total number of atoms per composition, as shown in Figure \ref{fig:elment_atom_dist}. First, we find that the distribution of the number of elements in the training compositions is highly imbalanced (Figure \ref{fig:elemenet_dist}), with ternary and quaternary materials being most common, followed by quinary and binary materials; there are very few samples with more than 6 elements. Interestingly, the element-number distribution of the BLMM-generated samples closely follows that of the training set, indicating that the BLMM model learns the chemical patterns of the training set well. In contrast, the MT-GPT2-generated samples have a very different element-number distribution: MT-GPT2 tends to generate a large proportion of samples with more than four elements.
Next, Figure \ref{fig:atom_dist} shows the distribution of the number of atoms per element within the samples. The training samples have a variety of atom counts per element, ranging from 1 to more than 10, with a relatively small percentage of ones. In contrast, the samples generated by BLMM contain many more compositions with single-atom elements. The MT-GPT2 model is even more biased: it generates mostly single-atom elements (65\%), followed by 2-atom elements (16\%) (yellow bars), and it has a low probability of generating elements with more than four atoms. These generation preferences of MT-GPT2 also apply to all the other transformer-based models except BLMM.
We further check the distribution of the total number of atoms per composition, shown in Figure \ref{fig:total_atom_dist}. First, the training set contains many samples with a large number of atoms, with peaks at 9, 20, 24, and 28. In contrast, the total atom counts of the samples generated by MT-GPT2 peak at 4, 5, 6, 7, 8, and 9, much smaller than in the training set; MT-GPT2 can barely generate samples with more than 20 atoms in total. The BLMM model has a more balanced generation capability: while its distribution also peaks at 9, 4, 5, 6, 7, 8, 10, and 11 in terms of total atom count, it can still generate large percentages of samples with more than 20 atoms. It is also interesting to notice that there are no materials with 15 atoms in the formula.
\begin{figure}[ht!]
\begin{subfigure}[t]{0.5\textwidth}
\includegraphics[width=1.0\textwidth]{element_distribution.pdf}
\caption{}
\vspace{-3pt}
\label{fig:elemenet_dist}
\end{subfigure}
\begin{subfigure}[t]{0.5\textwidth}
\includegraphics[width=1.0\textwidth]{atom_distribution.pdf}
\caption{}
\vspace{-3pt}
\label{fig:atom_dist}
\end{subfigure}
\begin{subfigure}[t]{1.0\textwidth}
\includegraphics[width=1.0\textwidth]{total_atom_distribution.pdf}
\caption{}
\vspace{3pt}
\label{fig:total_atom_dist}
\end{subfigure}
\caption{Comparison of the generation preferences of the training samples, BLMM, and MT-GPT2 in terms of composition properties. (a) Distribution of the number of elements per composition; (b) distribution of the number of atoms per element over all compositions (for the training set, elements with more than 28 atoms are not counted in this plot); (c) distribution of the total number of atoms per composition (for the training set, compositions with more than 30 atoms are not plotted in this figure).}
\label{fig:elment_atom_dist}
\end{figure}
\FloatBarrier
Our extensive experiments show that transformer-based language models can generate chemically valid hypothetical material compositions, as shown by the high percentages of generated charge-neutral and electronegativity-balanced samples. However, it is not clear whether these compositions can be synthesized into stable structures; this is especially a concern when a generative model tends to produce compositions with a large number of elements. While several machine learning models have been developed for synthesizability prediction \cite{jang2020structure}, formation energy prediction \cite{omee2022scalable}, and e-above-hull calculation, these models and algorithms usually require crystal structures, which are not available for the composition generators proposed here. To address this, one can use recently developed template-based crystal structure prediction algorithms \cite{kusaba2022crystal,wei2021tcsp}, deep learning-based methods \cite{hu2021alphacrystal}, or global optimization-based crystal structure prediction tools \cite{oganov2012crystal,shao2022symmetry} to predict crystal structures for the hypothetical compositions generated by our materials transformer models. Together, these tools enable the exploration and discovery of new materials in a much larger area of the almost infinite chemical design space.
\section{Materials and Methods}
\label{sec:others}
\subsection{Dataset}
\label{subsec:dataset}
To evaluate the performance of our language model-based generators, we prepare six different datasets: Hybrid-mix, Hybrid-pure, Hybrid-strict, ICSD-mix, ICSD-pure, and Bandgap-30K.
The formulas in the Hybrid-mix dataset are selected from the ICSD/MP/OQMD databases, requiring fewer than 9 elements, fewer than 100 atoms in the unit cell, and no fractional atom numbers for any element in the formula. While the Hybrid-mix dataset may contain a certain number of materials that are not charge neutral or do not have balanced electronegativity, the Hybrid-pure dataset contains the samples of the Hybrid-mix dataset that are charge neutral and have balanced electronegativity. The Hybrid-strict dataset is obtained in a similar way to the Hybrid-pure dataset, but we use the strict ICSD oxidation states of the elements to calculate CN and EB, which imposes stricter constraints than the Hybrid-pure dataset and the dataset used in our previous study \cite{wei2022crystal}.
For the two ICSD datasets, the formulas in the ICSD-mix dataset are sampled from the ICSD database with fewer than 9 elements, fewer than 100 atoms in the unit cell, and no fractional coordinates; it may contain formulas that do not meet the CN and EB criteria. The samples in the ICSD-pure dataset are those of the ICSD-mix dataset that satisfy the CN and EB rules.
In addition, for the band gap experiment, we prepare a Bandgap-30K dataset, which contains 30,000 formulas from the MP database with band gaps above 1.98 eV. For formulas with multiple phases, we include a formula if at least one of its phases has a band gap greater than 2.0 eV.
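A minimal sketch of this selection rule, assuming MP-style records reduced to (formula, band gap) pairs (the toy records below are illustrative, not actual MP entries):
\begin{verbatim}
# Sketch of the Bandgap-30K selection: keep a formula if at least one
# of its phases has a band gap above the threshold.
from collections import defaultdict

records = [("SrTiO3", 3.2), ("SrTiO3", 1.8), ("Si", 0.6), ("GaN", 2.1)]

best_gap = defaultdict(float)
for formula, gap in records:
    best_gap[formula] = max(best_gap[formula], gap)

selected = [f for f, g in best_gap.items() if g > 2.0]
print(selected)  # ['SrTiO3', 'GaN']
\end{verbatim}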
Overall, the Hybrid-mix, Hybrid-pure, Hybrid-strict, ICSD-mix, ICSD-pure, and Bandgap-30K datasets contain a total of 418,983, 257,138, 212,778, 52,317, 39,431, and 30,000 samples, respectively. We divide each dataset into training, validation, and test sets in a ratio of 9/0.5/0.5; the detailed counts of each set are shown in Table \ref{tab:datasets}.
\begin{table}[]
\centering
\caption{Six datasets used in the experiments. For the Hybrid and ICSD datasets, the -pure datasets only include samples with neutral charge and balanced electronegativity; the -mix datasets do not impose such limits.}
\label{tab:datasets}
\vspace{4pt}
\begin{tabular}{|c|ccc|cc|c|}
\hline
\rowcolor[HTML]{FFCC67}
\cellcolor[HTML]{FFFFFF} & \multicolumn{3}{c|}{\cellcolor[HTML]{FFCC67}Hybrid datasets (ICSD+MP+OQMD)} & \multicolumn{2}{c|}{\cellcolor[HTML]{FFCC67}ICSD datasets} & MP dataset \\ \cline{2-7}
\multirow{-2}{*}{\cellcolor[HTML]{FFFFFF}} & \multicolumn{1}{c|}{Hybrid-mix} & \multicolumn{1}{c|}{Hybrid-pure} & Hybrid-strict & \multicolumn{1}{c|}{ICSD-mix} & ICSD-pure & Bandgap-30K \\ \hline
Total & \multicolumn{1}{c|}{418983} & \multicolumn{1}{c|}{257138} & 212778 & \multicolumn{1}{c|}{52317} & 39431 & 30000 \\ \hline
Train & \multicolumn{1}{c|}{398033} & \multicolumn{1}{c|}{244281} & 202139 & \multicolumn{1}{c|}{50755} & 37459 & 28500 \\ \hline
Valid & \multicolumn{1}{c|}{10475} & \multicolumn{1}{c|}{6428} & 5319 & \multicolumn{1}{c|}{1336} & 986 & 750 \\ \hline
Test & \multicolumn{1}{c|}{10475} & \multicolumn{1}{c|}{6429} & 5320 & \multicolumn{1}{c|}{1336} & 986 & 750 \\ \hline
\end{tabular}
\end{table}
\subsection{Pseudo-random composition generator}
We build a pseudo-random composition generator as the baseline generation model. For all model-generated samples, we count the number of samples with each number of elements from 2 to 8. Then, for each sample with $K$ elements, we generate a composition sample with the same number $K$ of elements, randomly picking an atom count between 1 and 20 for each of the $K$ elements. This process ensures that the baseline matches the distribution of binary, ternary, and higher-order compositions of the generated samples, as in the sketch below.
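A minimal sketch of this baseline, with an illustrative (not exhaustive) element list:
\begin{verbatim}
# Pseudo-random baseline: match the element-count distribution of the
# model-generated samples; atom counts are drawn uniformly from [1, 20].
import random
from collections import Counter

ELEMENTS = ["Li", "Na", "K", "Mg", "Ca", "Sr", "Ba", "Al", "Sc", "Ti",
            "Fe", "Ni", "Zn", "O", "F", "Cl", "Br", "S", "Se", "N"]

def random_baseline(element_counts, seed=0):
    """element_counts: Counter mapping K (number of elements) to the
    number of samples with K elements in the model-generated set."""
    rng = random.Random(seed)
    samples = []
    for k, n in element_counts.items():
        for _ in range(n):
            elems = rng.sample(ELEMENTS, k)   # K distinct elements
            samples.append("".join(f"{e}{rng.randint(1, 20)}" for e in elems))
    return samples

print(random_baseline(Counter({2: 3, 3: 2})))  # toy element-count counts
\end{verbatim}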
\subsection{DFT calculations}
\label{subsec:DFT}
We use first-principles calculations based on density functional theory (DFT) with the Vienna \textit{ab initio} simulation package (VASP) to check the structural stability of the predicted materials \cite{Vasp1,Vasp2,Vasp3,Vasp4}. Projected augmented wave (PAW) pseudo-potentials with a 520 eV plane-wave cutoff energy are used to treat the electron-ion interactions \cite{PAW1, PAW2}. The exchange-correlation functional is treated within the generalized gradient approximation (GGA) using the Perdew-Burke-Ernzerhof (PBE) method \cite{GGA1, GGA2}. The energy convergence criterion is set to 10$^{-5}$ eV, while the atomic positions are optimized with a force convergence criterion of 10$^{-2}$ eV/{\AA}. The Brillouin zone integration for the unit cells is computed using $\Gamma$-centered Monkhorst-Pack $k$-meshes. The formation energies (in eV/atom) of several materials are determined from the expression in Eq.~\ref{eq:form}, where $E[\mathrm{Material}]$ is the total energy per unit formula of the considered structure, $E[\textrm{A}_i]$ is the energy of the $i^\mathrm{th}$ element of the material, $x_i$ is the number of A$_i$ atoms in a unit formula, and $n$ is the total number of atoms in a unit formula ($n=\sum_i x_i$).
\begin{equation}
E_{\mathrm{form}} =\frac{1}{n}(E[\mathrm{Material}] - \sum_i x_i E[\textrm{A}_i])
\label{eq:form}
\end{equation}
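For concreteness, Eq.~\ref{eq:form} translates directly into code; the energies below are placeholders, not actual VASP outputs:
\begin{verbatim}
# Formation energy per atom from total energies, following Eq. (eq:form).
def formation_energy_per_atom(e_material, elem_energies, stoich):
    """e_material: total energy per formula unit; elem_energies[i] and
    stoich[i]: reference energy and atom count of the i-th element."""
    n = sum(stoich)
    return (e_material
            - sum(x * e for x, e in zip(stoich, elem_energies))) / n

# toy example: a hypothetical AB2 compound
print(formation_energy_per_atom(-20.0, [-3.0, -5.0], [1, 2]))  # ~ -2.33
\end{verbatim}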
\subsection{Evaluation criteria}
\label{subsec:criteria}
We use four main evaluation criteria in this paper: validity, uniqueness, recovery rate, and novelty \cite{dan2020generative}.
\textbf{Validity.} First, the percentages of generated formulas satisfying two basic chemical rules of crystals, charge neutrality (CN) and balanced electronegativity (EB), are used to evaluate validity. We use the method of \cite{davies2019smact} to calculate CN and EB and obtain the percentages of generated samples that conform to these two rules. In addition, we check the stability of the generated samples via their predicted formation energies: the higher the predicted formation energy, the lower the quality of the sample. A self-contained sketch of the CN/EB logic is given below.
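The sketch below illustrates the logic with a tiny hand-written table of oxidation states and electronegativities; the actual computation uses the full SMACT tables of \cite{davies2019smact}:
\begin{verbatim}
# Illustrative CN/EB check: a composition is charge neutral (CN) if some
# assignment of oxidation states sums to zero, and electronegativity
# balanced (EB) if, in such an assignment, every cation is less
# electronegative than every anion. Tables below are toy excerpts.
from itertools import product

OX_STATES = {"Sr": [2], "Ti": [2, 3, 4], "O": [-2]}
ELECTRONEG = {"Sr": 0.95, "Ti": 1.54, "O": 3.44}

def cn_eb(elements, counts):
    neutral = False
    for states in product(*(OX_STATES[e] for e in elements)):
        if sum(s * c for s, c in zip(states, counts)) != 0:
            continue                      # not neutral for this assignment
        neutral = True
        cations = [ELECTRONEG[e] for e, s in zip(elements, states) if s > 0]
        anions = [ELECTRONEG[e] for e, s in zip(elements, states) if s < 0]
        if max(cations, default=0.0) < min(anions, default=99.0):
            return True, True
    return neutral, False

print(cn_eb(["Sr", "Ti", "O"], [1, 1, 3]))  # (True, True) for SrTiO3
\end{verbatim}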
\textbf{Uniqueness.} The uniqueness measures the ability of a generative model to produce distinct formulas; it is the percentage of unique samples among all generated samples. The higher the uniqueness, the more diverse the samples the model can produce.
\textbf{Recovery Rate and Novelty.} The recovery rate is the percentage of generated formulas that already exist in the training or test set. Since the samples in the training and test sets are known valid materials, a high recovery rate indicates that the generative model can rediscover known materials, which is evidence of good performance for materials discovery. A related criterion is the novelty, which measures the percentage of generated formulas that do not exist in the training or test sets.
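These quantities reduce to simple set operations over the generated and known formulas, as in the following sketch:
\begin{verbatim}
# Uniqueness, recovery rate, and novelty from sets of formula strings.
def generation_metrics(generated, train_set, test_set):
    unique = set(generated)
    known = set(train_set) | set(test_set)
    uniqueness = len(unique) / len(generated)
    recovery = len(unique & known) / len(unique)  # fraction rediscovered
    novelty = len(unique - known) / len(unique)   # fraction never seen
    return uniqueness, recovery, novelty

# toy example: four generated samples, one duplicate, one known formula
print(generation_metrics(["AB", "AB", "CD", "EF"], {"AB"}, {"GH"}))
\end{verbatim}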
\subsection{Hyper-parameters Tuning}
\label{subsec:para-tuning}
Since the content and size of the Hybrid datasets and the ICSD datasets are very different, we tuned hyper-parameters on the Hybrid-mix dataset and the ICSD-pure dataset to choose appropriate hyper-parameters for the Hybrid and ICSD datasets, respectively. Using these two datasets, we evaluated how the key hyper-parameters affect the generation performance of the MT-GPT2, MT-BART, and MT-RoBERTa models. We trained these models with different hyper-parameters, then generated about 10,000 formulas and evaluated their performance using the two validity criteria, CN (charge neutrality) and EB (balanced electronegativity). To find suitable values, we set default parameters for each model and then changed one parameter at a time, retraining the model and evaluating its performance. In this section we only detail the hyper-parameter tuning of the MT-GPT2 model on the Hybrid-mix dataset (details on the hyper-parameter tuning of the other models are given in supplementary Figures S1-S5). For the MT-GPT2 model, the tuned hyper-parameters are the maximum length of the formula token sequences the model can handle, the dimensionality of the embeddings and hidden states, the number of hidden layers in the transformer encoder, and the number of attention heads in each attention layer of the transformer encoder.
\textbf{The maximum length of the formula tokens.} The default value is 256. As shown in Figure \ref{fig:gpt2_hy_po}, we evaluated lengths of 128, 512, 1024, and 2048. Taking both the CN and EB percentages into account, models on the Hybrid-mix dataset perform best with a maximum length of 128, achieving a CN percentage of 89.81\% and an EB percentage of 81.95\%. As the maximum length increases, the CN and EB percentages of the generated formulas gradually decrease.
\textbf{Embedding dimension.} The default embedding dimension is 180. As shown in Figure \ref{fig:gpt2_hy_emb}, we additionally evaluated dimensions of 100, 256, 512, and 1024. The models on the Hybrid-mix dataset achieved CN percentages of 87.47\%, 87.55\%, 90.49\%, 90.22\%, and 85.88\%, and EB percentages of 71.51\%, 71.95\%, 74.88\%, 71.29\%, and 68.12\%, for embedding dimensions of 100, 180, 256, 512, and 1024, respectively. With an embedding dimension of 256, the CN and EB percentages reach 90.49\% and 74.88\%, respectively.
\textbf{The number of hidden layers.} We set the default to 8 and evaluated models with 4, 6, 10, and 12 hidden layers. As shown in Figure \ref{fig:gpt2_hy_layer}, as the number of hidden layers increases, the model learns more about the formulas in the training set and generates formulas with higher CN and EB percentages. The best CN and EB percentages, 90.41\% and 80.28\%, are obtained with 12 hidden layers.
\textbf{The number of attention heads.} We varied the number of attention heads from 2 to 12, with a default value of 4. Figure \ref{fig:gpt2_hy_head} shows that the best result is achieved with 6 heads. It is worth mentioning that, when setting the number of attention heads, the embedding dimension must be a multiple of the number of attention heads.
For each model, we perform a similar tuning procedure, and the final hyper-parameters of each model are shown in Table S1 of the supplementary file.
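As an indication of how such a configuration could be assembled with the HuggingFace transformers library (the vocabulary size is an assumption, and the final per-model values are those of Table S1, not necessarily the single-factor optima quoted above):
\begin{verbatim}
# Hedged sketch of an MT-GPT2-style configuration with transformers.
from transformers import GPT2Config, GPT2LMHeadModel

config = GPT2Config(
    vocab_size=130,   # assumption: small element/count-level vocabulary
    n_positions=128,  # best maximum token length in the sweep above
    n_embd=180,       # default embedding dimension (divisible by n_head)
    n_layer=12,       # best number of hidden layers in the sweep
    n_head=6,         # best head count; n_embd must be a multiple of it
)
model = GPT2LMHeadModel(config)
print(sum(p.numel() for p in model.parameters()))  # parameter count
\end{verbatim}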
In addition, we investigated whether the training set size affects the quality of the generated formulas. Besides the hyper-parameter tuning, we varied the size of the Hybrid-mix training set from 1,000 to 377,083 (90\% of the Hybrid-mix dataset), trained an MT-GPT2 model on each subset, and used each trained model to generate 10,000 formulas. Figure S6 in the supplementary file shows that, although the curve is somewhat bumpy, the proportion of eligible formulas among the generated ones shows an overall upward trend.
\begin{figure}[hb!]
\begin{subfigure}[t]{0.50\textwidth}
\includegraphics[width=\textwidth]{gpt2_hy_position.png}
\caption{}
\vspace{-3pt}
\label{fig:gpt2_hy_po}
\end{subfigure}
\begin{subfigure}[t]{0.50\textwidth}
\includegraphics[width=\textwidth]{gpt2_hy_embedding.png}
\caption{}
\vspace{-3pt}
\label{fig:gpt2_hy_emb}
\end{subfigure}
\begin{subfigure}[t]{0.50\textwidth}
\includegraphics[width=\textwidth]{gpt2_hy_layer.png}
\caption{}
\vspace{-3pt}
\label{fig:gpt2_hy_layer}
\end{subfigure}
\begin{subfigure}[t]{0.50\textwidth}
\includegraphics[width=\textwidth]{gpt2_hy_head.png}
\caption{}
\vspace{-3pt}
\label{fig:gpt2_hy_head}
\end{subfigure}
\caption{Hyper-parameter tuning of the MT-GPT2 model on the Hybrid-mix dataset. (a)-(d) show the percentages of charge-neutral samples (CN in the figures) and of samples that are both charge neutral and electronegativity balanced (CN+EB in the figures) among all samples generated by the MT-GPT2 models trained on the Hybrid-mix dataset with different maximum sequence lengths, different embedding dimensionalities, different numbers of hidden layers in the Transformer encoder, and different numbers of attention heads per attention layer in the Transformer encoder.}
\label{fig:hyperparameter}
\end{figure}
\subsection{Formation energy and band gap prediction models based on Roost}
\label{subsec:formation_energy}
To evaluate generator performance, we train two composition-based machine learning models for formation energy and band gap prediction, both using data downloaded from the MP database \cite{jain2013commentary}. The machine learning model we use is based on Roost, a graph message-passing neural network described in \cite{goodall2020predicting}. The training set of Roost-FE (formation energy) contains 125,613 unique compositions; for compositions with multiple phases, we keep only the record with the lowest formation energy. The Roost-Bandgap model is trained with 113,501 samples. On the 10\% hold-out test sets, the formation energy model achieves an MAE of 70.181 meV/atom and the band gap predictor achieves an MAE of 0.6645 eV.
\FloatBarrier
\bibliographystyle{unsrt}
\section{Introduction}
In recent years, the understanding of the dynamics underlying pedestrian crowd motion has attracted wide attention, both in public debate and in the scientific community. The complete comprehension of such phenomena remains a subject of active research. Nevertheless, important progress has been made in the past two decades and several mathematical models have been proposed for its description. \\
\noindent The different approaches to tackle the problem can be classified according to the scale of the model: they range from microscopic systems, where each trajectory is described individually, to macroscopic systems, where the dynamics of the crowd is modelled as a time-evolving density distribution of indistinguishable agents.
For a detailed overview we refer e.g. to the work by \cite{bellomo:2011} and the monograph by \cite{cristiani2014multiscale}.
\\
In the article by \cite{h00}, the author introduced a by now classical fluid-dynamical model for the motion of a large human crowd. The crowd is treated as a ``thinking fluid'' that moves at maximum speed towards a common target, while taking into account environmental factors such as the congestion of the crowd. In this model people prefer to avoid crowded regions, and this assumption is incorporated in a potential function which determines the velocity field driving the crowd. Indeed, the potential is the solution of an eikonal equation whose right-hand side depends on the current distribution of the crowd, and the velocity field is directed along its gradient.
Hence, given a time horizon $T>0$, at each time instant $t\in [0,T]$ each microscopic individual looks at the global configuration of the crowd and updates his/her velocity so as to avoid the currently crowded regions. \\
\noindent In a two-dimensional space setting, given $f: [0,+\infty[ \to \mathcal{R}$, a bounded domain $\Omega\subset \mathcal{R}^2$ and a target $\mathcal{T}\subseteq \Omega$, the model is given by
\begin{equation}\label{hughes}
\left\{
\begin{array}{ll}
\partial_t m(x,t)-\hbox{div}(f^2(m(x,t))\nabla u(x,t)\,m(x,t))=0, \\[5pt]
\hspace{3.9cm}\mbox{in $ \Omega \times ]0,T[$,}\\[4pt]
|\nabla u(x,t)|=\displaystyle\frac{1}{f(m(x,t))}, \; \; \mbox{in } \left(\Omega \setminus \mathcal{T}\right) \times ]0,T[,
\end{array}\right.
\end{equation}
complemented with the boundary conditions
\begin{equation}\label{BC}
\hspace{-0.18cm}\left\{
\begin{array}{ll}
m(x,0)= m_0(x), &\text{ on } \Omega\times\{0\},\\[4pt]
u(x,t) = 0, & \text{ on } \mathcal{T} \times (0,T),\\[4pt]
u(x,t)= g(x) &\text{ on } { \partial \Omega } \times (0,T), \\[4pt]
(f^2(m)\nabla u \,m)(x,t)\cdot \hat{n} (x)= 0, &\text{ on } \partial \Omega \times (0,T).
\end{array}\right.
\end{equation}
In \eqref{hughes} the gradient operator $\nabla$ acts on the space variable $x$, the unknown $m$ is the pedestrian density and $u$ is the potential, i.e. the weighted shortest distance to the target $\mathcal{T}$. Concerning the boundary conditions \eqref{BC}, we assume that $g$ is a continuous function, large enough to satisfy suitable compatibility conditions (see e.g. \cite[Chapter IV]{BardiCapuz@incollection}), and $\hat n$ denotes the outward normal to the boundary $\partial \Omega$.
System \eqref{hughes} is a highly non-linear coupled system of PDEs. Few analytical results are available, all of them restricted to spatial dimension one and to particular choices of the function $f$ (see the works by \cite{di2011hughes} and \cite{amadori2014existence}). The main difficulty comes from the low regularity of the potential $u$, which is only Lipschitz continuous.
Hughes proposed a few penalty functions $f$ for regions of high density, the simplest choice being $f(m)=1-m$, where $1$ corresponds to the maximum scaled pedestrian density. In this work we focus on numerical methods to solve \eqref{hughes}-\eqref{BC} for several choices of the penalty function $f$. We must underline that, since well-posedness results for \eqref{hughes}-\eqref{BC} have not yet been proved in full generality, convergence results for numerical algorithms for \eqref{hughes}-\eqref{BC} seem currently out of reach. Thus, we consider some heuristics to solve \eqref{hughes}-\eqref{BC} based on recent techniques introduced in \cite{CS12}, \cite{CS15} and \cite{carlini2016PREPRINT}. We point out that the article by \cite{carlini2016PREPRINT} concerns the approximation of a regularized version of \eqref{hughes}-\eqref{BC}. Our strategy is to look at \eqref{hughes}-\eqref{BC} as a single non-linear continuity equation, which can be approximated by discretizing the characteristics governing the equation with an Euler scheme. However, the velocity field describing the characteristics depends at each time step non-locally on the current distribution of the agents and has to be computed with the help of the eikonal equation, for which several efficient methods exist.
\section{A semi-Lagrangian scheme for the approximation of the system}\label{discretization}
\subsection{Modeling aspects}\label{mod} As in the theory of Mean Field Games (MFGs), system \eqref{hughes} can, {\it at least formally}, be interpreted as a sort of Nash equilibrium of a dynamic game involving a continuum of agents. Indeed, given a space-time distribution density of the agents $(x,t) \in \Omega \times [0,T] \mapsto m(x,t) \in \mathcal{R}$, such that $m \geq 0$, the solution $u[m]$ of the second equation in \eqref{hughes} can be formally represented as the value function of the \emph{optimal control problem}
\begin{equation}\label{value_function_depends_on_m}
u[m](x,t)= \inf_{ \alpha \in \mathcal{A}} \int_{t}^{\tau^{x,t}[\alpha]} F(m(X^{x,t}[\alpha](s),t)) d s,
\end{equation}
where
$$\begin{array}{l}
\mathcal{A} := \left\{ \alpha :[0,T] \mapsto \mathcal{R}^{d} \; ; \; |\alpha(t)| \leq 1, \; \mbox{a.e. in $[0,T]$} \right\},\\[6pt]
X^{x,t}[\alpha](s) := x+ \int_{t}^{s} \alpha(r) dr \hspace{0.4cm} \forall \; s\in [t,T],\\[7pt]
\tau^{x,t}[\alpha] := \inf\left\{ s\in [t,T] \; ; \; X^{x,t}[\alpha](s) \in \mathcal{T}\right\},\\[6pt]
\mbox{and } \; F(m):= \frac{1}{f(m)}.
\end{array}$$
Thus, $u[m](x,t)$ corresponds to the {\it weighted minimal time} to reach the target set $\mathcal{T}$
for a {\it typical player} positioned at $x$ at time $t$. From the modelling point of view, it is fundamental to notice that, in its individual cost, the typical agent {\it freezes} the global distribution $m$ at time $t$, and thus s/he does not forecast the future behaviour of the population in order to determine his/her optimal policy. This modelling aspect marks an important difference with MFG models, where, at the equilibrium, the agents take into account the future distribution of the population in order to design their strategies.
Then, after the optimal feedback law
\begin{equation}\label{feedbacklaw} s\in [t,T] \mapsto \bar{\alpha}[t] (s):= -\frac{\nabla u[m](X^{x,t}[\bar{\alpha}](s),t)}{\left|\nabla u[m](X^{x,t}[\bar{\alpha}](s),t)\right|}\end{equation}
is computed, the agents {\it actually} move according to the dynamics defined by the solution of the ODE
\begin{equation}\label{evolucionconfeedback}
\frac{d \hat{X}^{x,t}}{d s}(s)= -\frac{\nabla u[m](\hat{X}^{x,t}(s),s)}{\left|\nabla u[m](\hat{X}^{x,t}(s),s)\right|} f(m(\hat{X}^{x,t}(s),s)),
\end{equation}
for $s\in [t,T]$.
Note that at each time $s \in [t,T]$ the agents must re-optimize their cost in terms of $m(\cdot,s)$ since, by \eqref{evolucionconfeedback}, the agents move according to the feedback law
$$(x,s) \in \Omega \times [t,T] \mapsto -\frac{\nabla u[m](x,s)}{\left|\nabla u[m](x,s)\right|}$$ rather than
$$(x,s) \in \Omega \times [t,T] \mapsto -\frac{\nabla u[m](x,t)}{\left|\nabla u[m](x,t)\right|}$$ (see \eqref{feedbacklaw}). In addition, their desired velocity field is re-scaled by $f(m(\cdot,\cdot))$, modelling the fact that congestion also {\it directly} affects the velocity of each individual agent. Based on \eqref{evolucionconfeedback}, we get that the evolution $m$ of the initial distribution
leads, at least heuristically, to the non-linear continuity equation
\begin{equation}\label{continuityequationcoupled}
\begin{array}{rcl}
\partial_t m -\hbox{div}(f^2(m )\nabla u[m] \,m )&=&0, \\[6pt]
m(x,0) &=&m_{0}(x),
\end{array}
\end{equation}
which is, of course, equivalent to \eqref{hughes}-\eqref{BC} (omitting the Neumann boundary condition for $m$ in \eqref{BC}, which amounts to saying that the trajectories followed by the agents are reflected at $\partial \Omega \setminus \mathcal{T}$). Natural fixed-point strategies to prove the existence of solutions of \eqref{hughes} (or \eqref{continuityequationcoupled}) usually fail because of the lack of sufficient regularity of the solutions of the two equations taken separately. We refer the reader to the article by \cite{di2011hughes}, where an existence result is proved in the one-dimensional case $d=1$ by approximating system \eqref{hughes} with analogous systems involving small diffusion parameters, for which well-posedness can be shown with classical arguments in PDE theory. Other existence results are described in \cite{amadori2012one,amadori2014existence}.
Based on the trajectorial description presented above for both equations in \eqref{hughes}, in the next section we consider a natural discretization of \eqref{continuityequationcoupled}, based on an Euler discretization of equation \eqref{evolucionconfeedback} and on the fact that solutions of \eqref{continuityequationcoupled} can be interpreted as the push-forward of $m_0$ under the flow induced by \eqref{evolucionconfeedback} (cf. \cite{CS12,piccoli2011time}).
\subsection{A semi-Lagrangian scheme for a non-linear conservation law}
Equation \eqref{continuityequationcoupled} shows that \eqref{hughes} can be interpreted as a non-linear continuity equation. Note that \eqref{value_function_depends_on_m} implies that the non-linear term $\nabla u[m]$ in \eqref{continuityequationcoupled} depends {\it non-locally} on $m$. In view of the previous remarks, let us describe an SL scheme designed to numerically solve general non-linear continuity equations. The scheme presented here was first proposed in \cite{CS12} and \cite{CS15} to approximate first and second order MFGs, respectively. An extension of the scheme, designed for a regularized version of \eqref{hughes}, has been implemented in \cite{carlini2016PREPRINT}. We also refer the reader to \cite{FestaTosinWolfram}, where the scheme has been applied to a non-linear continuity equation modelling a kinetic pedestrian model. We recall here the scheme for a two-dimensional non-linear continuity equation on a bounded domain $\Omega$ with Neumann condition on the boundary $\partial \Omega$:
\begin{equation}\label{FP}
\left\{
\begin{array}{ll}
\partial_t m + \mbox{div}( b[m](x,t)\, m ) = 0&\hspace{0.3cm} \mbox{in }\Omega \times (0,T), \\[6pt]
b[m](x,t) m \cdot \hat{n} = 0 \; \; &\hspace{0.3cm} \mbox{on $\partial \Omega\times (0,T)$,} \\[6pt]
m(\cdot,0)= m_0(\cdot) &\hspace{0.3cm} \mbox{in $\Omega$}.
\end{array} \right.
\end{equation}
Here,
$b[m]: \Omega \times[0,T]\to\mathcal{R}^2$ is a given smooth vector field, depending on $m$, $m_0$ is a smooth initial datum defined on $\Omega$ and
$ \hat{n}$ the unit outer normal vector to the boundary $\partial \Omega$. Formally, at time $t\in [0,T]$ the solution of \eqref{FP} is given, implicitly, by the image of the measure $m_0 d x $ induced by the flow $x\in \Omega \mapsto \Phi(x,0,t)$, where, given $0\leq s \leq t \leq T$, $\Phi(x,s,t)$ denotes the solution of
\begin{equation}\label{caracteristicas_continuas}
\left\{\begin{array}{rcl}
\dot{x}(r)&=& b[m](x,r) \hspace{0.2cm} \forall \; r\in [s,T],\\[4pt]
x(s)&=& x,
\end{array}\right.
\end{equation}
at time $t$, where the trajectory is reflected when it touches the boundary $\partial\Omega$.
Given $M\in \mathbb{N}$, we construct a mesh on $\Omega$ defined by a set of vertices $\mathcal{G}_{\Delta x}=\{x_i\in \Omega, i=1,...,M\}$ and by a set $\mathcal{T}_{\Delta x}$ of triangles (not to be confused with the target $\mathcal{T}$), whose vertices belong to $\mathcal{G}_{\Delta x}$, whose maximum diameter is $\Delta x>0$, and which form a non-overlapping covering of $\Omega$.
We suppose $\Omega$ to be a polygonal domain, in order to avoid issues related to the approximation of a non-polygonal domain with triangular meshes.\\
Given $N\in \mathbb{N}$, we define the time step $\Delta t=T/N$ and consider the uniform partition of $[0,T]$ given by $\{t_k=k\Delta t,\quad k=0,\hdots, N\}$.\\
We consider now a discretization of \eqref{FP} based on its representation formula by means of the flow $\Phi$.
For $\mu \in \mathcal{R}^M$, $j\in \{1,...,M\}$ and $k=0,\hdots, N-1$, we define the discrete characteristics as
\begin{equation*}\label{car}
\Phi_{j,k}[\mu]:= R(x_{j}+\Delta t \,b[\mu](x_j,t_k))
\end{equation*}
where $R:\mathbb{R}^2\to \Omega$ is a reflection operator, related to the Neumann boundary condition, defined as
\begin{equation*}\label{Projection}
R (z):=\begin{cases}
z, &{\rm{if}}\;z\in \overline{ \Omega},\\
2\underset{w\in\Omega}{\rm{argmin}} |z-w|-z, \;\,&{\rm{if}}\;z\notin \overline{ \Omega}.
\end{cases}
\end{equation*}
We call $\{{\beta_{i}} \; ; \; i=1,...,M\}$ the set of affine shape functions associated to the triangular mesh, such that $\beta_i(x_j)=\delta_{i,j}$ (the Kronecker symbol) and $\sum_i \beta_i(x)=1$ for each $x\in \overline\Omega$.
We define the {\em{median dual control volume}} (see \cite{Quarteroni} and \cite{Voller} for a detailed discussion on the construction of control volumes) by
\begin{align}\label{e:approxm}
\begin{array}{c}
E_i : = \underset{T\in \mathcal{T}_{\Delta x}: x_i \in \partial T}\bigcup E_{i,T}, \; \; \; \forall i=1,\dots,M, \\[8pt]
{\rm{where}}\quad E_{i,T} :=\{x\in T: \beta_j(x)\leq \beta_i(x)\quad j\neq i\}. \\[6pt]
\end{array}
\end{align}
We approximate the solution $m$ of problem \eqref{FP} by a sequence $\{m_{k}\}_{k}$, where, for each $k=0,\dots,N$, $m_k:\mathcal{G}_{\Delta x}\to \mathcal{R}$, and, for each $i=1,\dots,M$, $m_{i,k}$ approximates
$$\frac{1}{|E_i|}\int_{E_i} m(x,t_k) d x,$$
where $|E_i|$ denotes the area of $E_i$.
We compute the sequence $\{m_{k}\}_{k}$ by the following explicit scheme:
\begin{equation}\label{schemefp}
\hspace{-0.4cm}\begin{array}{rl}
m_{i,k+1}&= G(m_k,i,k) \hspace{0.2cm} \forall k=0,..., N-1, \; \; i=1,...,M, \\[8pt]
m_{i,0}&=\frac{ \int_{E_{i}}m_{0}(x) d x}{|E_i|} \hspace{0.4cm} \forall i=1,...,M,
\end{array}
\end{equation}
where
$G$ is defined by
\begin{equation*}\label{definicionG}
G (w,i,k) := \sum_{j=1}^M
\beta_{i} \left(\Phi_{j,k}\left[ w \right]\right) w_j \frac{|E_j|}{|E_i|},
\end{equation*}
for every $w\in \mathcal{R}^{M}$.
\begin{remark}
In the case of a uniform standard quadrilateral mesh, the volume $E_i$ is given by $E_i=[x^1_i- \frac{1}{2} \Delta x, x^1_i+ \frac{1}{2} \Delta x] \times [x^2_i- \frac{1}{2} \Delta x, x^2_i+ \frac{1}{2} \Delta x]$, $|E_i|=(\Delta x)^2$ for each $i$, and $\{\beta_i\}$ are the $\mathbb{Q}_1$ basis functions associated to the grid.
\end{remark}
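For illustration, one explicit step of scheme \eqref{schemefp} on such a uniform grid can be sketched as follows; this is a minimal sketch assuming the velocity field $b$ is given as arrays, the domain is an axis-aligned box (so that $|E_j|/|E_i|=1$), and the time step is small enough for a single reflection:
\begin{verbatim}
# One explicit step of the SL scheme on a uniform quadrilateral grid:
# follow the discrete characteristics, reflect at the walls, and scatter
# each cell's mass onto the four surrounding nodes with Q1 weights.
import numpy as np

def sl_step(m, bx, by, dx, dt):
    ny, nx = m.shape
    m_new = np.zeros_like(m)
    ys, xs = np.meshgrid(np.arange(ny) * dx, np.arange(nx) * dx,
                         indexing="ij")
    X = xs + dt * bx                           # Euler step of the flow
    Y = ys + dt * by
    X = 2 * np.clip(X, 0, (nx - 1) * dx) - X   # reflection R (small dt)
    Y = 2 * np.clip(Y, 0, (ny - 1) * dx) - Y
    i = np.clip((X / dx).astype(int), 0, nx - 2)
    j = np.clip((Y / dx).astype(int), 0, ny - 2)
    tx, ty = X / dx - i, Y / dx - j            # local bilinear coordinates
    for dj, di, w in [(0, 0, (1 - tx) * (1 - ty)), (0, 1, tx * (1 - ty)),
                      (1, 0, (1 - tx) * ty), (1, 1, tx * ty)]:
        np.add.at(m_new, (j + dj, i + di), w * m)  # mass-conserving scatter
    return m_new
\end{verbatim}
Since the four bilinear weights sum to one for every source cell, the total mass is conserved at each step, mirroring the conservativity of the scheme.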
\subsection{Fast-marching semi-Lagrangian scheme for the eikonal equation}
In order to compute $\nabla u[m](x,t)$ we need to solve, at each time $t$, an eikonal-type equation.
For this kind of equation, well-known and efficient techniques are the Fast Marching Methods (FMM) (\cite{Sethian}). These methods were proposed to speed up the computation of an iterative scheme based on an upwind finite-difference discretization of the eikonal equation: the advantage is a complexity of $O(M \log (\sqrt{M}))$ instead of the $O(M^2)$ complexity of the iterative scheme.
The FMM is a one-pass algorithm: the main idea behind it is that the approximation of the solution on the grid is computed following the directions given by the characteristic equations governing the eikonal equation. This ordering allows one to compute the approximate solution in a single sweep.\\
In the context of semi-Lagrangian schemes, an SL version of the FMM scheme for eikonal equations has been proposed in \cite{CristianiFalcone}; moreover, SL versions of the FMM on unstructured grids have been proposed in \cite{SethianVlad2001} and \cite{CarliniFalconeHoch}.
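As a stand-in for the SL-FMM used in our computations, a fast-marching solve of the weighted distance can be sketched with the scikit-fmm package (an assumption of this illustration; the solver actually used is the SL version cited above):
\begin{verbatim}
# Eikonal solve |grad u| = 1/f(m) as a travel-time problem with speed f(m).
import numpy as np
import skfmm

def weighted_distance(target_mask, speed, dx):
    phi = np.where(target_mask, -1.0, 1.0)     # zero level set on target
    return np.array(skfmm.travel_time(phi, speed, dx=dx))

m = np.zeros((50, 50)); speed = np.maximum(1e-3, 1.0 - m)  # f_1, truncated
target = np.zeros((50, 50), dtype=bool); target[:, -1] = True
u = weighted_distance(target, speed, dx=0.02)
gy, gx = np.gradient(u, 0.02)                  # numerical gradient of u
\end{verbatim}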
\section{Congestion modeling}
The design of the function $f$ is a delicate matter that deserves special attention. First of all, we notice that the original model \eqref{hughes} is equivalent to the following system (cf. Section \ref{mod}) with the boundary conditions \eqref{BC}:
\begin{equation*}
\left\{
\begin{array}{ll}
\partial_t m(x,t)-\hbox{div}(b[m](x,t)\,m(x,t))=0, \\[4pt]
b[m](x,t):=f(m(x,t))\frac{\nabla u(x,t)}{|\nabla u(x,t)|}\\
|\nabla u(x,t)|=\displaystyle\frac{1}{f(m(x,t))}.
\end{array}\right.
\end{equation*}
This system has the positive feature of avoiding the numerically difficult computation of the vector field $f^2(m)\nabla u$, which, in congested areas, is the product of a possibly very small quantity ($f^2(m)$) and a possibly very large one (the modulus of $\nabla u$). Moreover, this formulation clarifies the role of $f$ as quantifier of the local relation between the speed of the crowd and the density. It has been shown experimentally (see e.g. \cite{seyfried2006basics, chattaraj2009comparison} and \cite{narang2015generating}) that this relation can vary as a function of several physiological and psychological factors, such as the state of stress, the knowledge of the environment, etc. The graph of the local density/speed relation is generally called the \emph{fundamental diagram}, in analogy with the terminology of the vehicular traffic literature. Some examples of well-established choices for the diagram are:
\begin{eqnarray*}
f_1(m)&:=&1-m,\\
f_2(m)&:=&\min\left(1, \exp\left(-\alpha \frac{m-k}{1-m}\right)\right),\; \alpha>0, \; k\in(0,1), \qquad\\
f_3(m)&:=&1-\exp\left(-\alpha\frac{1-m}{m}\right), \qquad \alpha>0, \\
f_4(m)&:=& a_4m^4-a_3m^3+a_2m^2-a_1m+a_0.
\end{eqnarray*}
The choice $f_1$ appears in \cite{hurley2015sfpe}; $f_3$ was proposed in the early work by \cite{weidmann1992transporttechnik}, and $f_2$ can be considered a variation of it. The diagram $f_4$ was proposed and experimentally discussed in \cite{predtechenskii1978planning}, with the coefficients reported in the caption of Figure \ref{f:dia}.
In those models, the maximal speed is scaled by $1.34\ \mathrm{m\,s^{-1}}$, which is the typically observed maximal speed of pedestrians; in the same way, the maximal density before congestion (a value around $5.4\ \mathrm{m^{-2}}$) is scaled to 1.\\
Another diagram proposed by \cite{Lions}, in order to model congestion in Mean Field Games, is given by
$$f_5(m):=\frac{k_1}{(k_2 m)^\beta}, \hbox{ with }0<\beta<1/2, \hbox{ and }k_1,k_2>0. $$
In Figure \ref{f:dia} we present some examples of the various diagrams for some adequate choices of the coefficients.
We remark that the diagram of $f_5$ in Fig. \ref{f:dia} differs strongly from the others, since it is not bounded as $m\to 0$ and does not vanish at $m=1$, which means that complete congestion is not allowed.\\
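For reference, the five diagrams and the $\delta$-truncation used later in \eqref{trunc} can be coded directly; the coefficients for $f_4$ are those in the caption of Figure \ref{f:dia}, while the $f_2$, $f_3$ and $f_5$ parameters below are only illustrative:
\begin{verbatim}
# Fundamental diagrams f_1..f_5 (evaluate on m in (0,1) to avoid the
# endpoint singularities of f_2, f_3, f_5) and the delta-truncation.
import numpy as np

def f1(m): return 1.0 - m
def f2(m, alpha=1.0, k=0.2):
    return np.minimum(1.0, np.exp(-alpha * (m - k) / (1.0 - m)))
def f3(m, alpha=1.0): return 1.0 - np.exp(-alpha * (1.0 - m) / m)
def f4(m):
    a4, a3, a2, a1, a0 = 112/51, 380/51, 434/51, 213/51, 1.0
    return a4*m**4 - a3*m**3 + a2*m**2 - a1*m + a0
def f5(m, k1=0.5, k2=1.0, beta=0.4): return k1 / (k2 * m)**beta

def truncate(f, delta=1e-3):        # the f^delta of the next section
    return lambda m: np.maximum(delta, f(m))
\end{verbatim}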
\begin{figure}[t]
\begin{center}
\includegraphics[height=6cm]{Diag.png}
\caption{Fundamental diagrams. For $f_4$ the coefficients are given by $a_4=112/51$, $a_3=380/51$, $a_2=434/51$, $a_1=213/51$ and $a_0=1$.} \label{f:dia}
\end{center}
\end{figure}
The effectiveness of an approximation scheme for the system with the diagrams in Fig. \ref{f:dia} is not obvious, since the model requires a very accurate approximation close to the congested set $\{(x,t)\in \overline{\Omega}\times[0,T] \; | \; m(x,t)=1\}$. This brings several numerical difficulties that must be addressed in the resolution. \\
To the best of our knowledge, a systematic comparison of different choices of fundamental diagrams in the Hughes model has never appeared in the literature.
\section{Numerical simulations}
In this section we investigate numerically the influence of the penalty function $f$ in \eqref{hughes}.
In order to limit instability issues in the congested zones, we truncate $f$ in the simulations: given $\delta>0$, we replace $f$ by $f^\delta$ defined by
\begin{equation}\label{trunc}
f^{\delta}(m):=\max(\delta, f(m)).
\end{equation}
Using the notation of Section \ref{discretization}, we compute the solution of the approximation of \eqref{hughes} iteratively as follows: given the discrete measure $m_{k}$ at time $t_k$ ($k=0,\hdots, N-1$), we compute an approximation of the value function $u(\cdot,t_k)$ using the FMM scheme and the values $m_{i,k}$ ($i=1, \hdots, M$). This allows us to construct an approximation $\nabla u_{i,k}$ of $\nabla u [m](x_i,t_k)$. Then, for all $i=1, \hdots, M$, we approximate $b[m](x_i,t_k)$ by $f^{\delta}(m_{i,k}) \frac{\nabla u_{i,k}}{|\nabla u_{i,k}|}$ and compute $m_{\cdot, k+1}$ using \eqref{schemefp}.
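Put together, the full iteration is only a few lines; the sketch below reuses the sl_step and weighted_distance helpers from the previous sections and is a simplified stand-in for the actual implementation:
\begin{verbatim}
# Coupled loop: freeze m, solve the eikonal equation, and transport m
# along -f(m) grad(u)/|grad(u)| (the minus sign reflects the sign
# convention of the continuity equation above).
import numpy as np

def simulate(m0, f, target, dx, dt, n_steps, delta=1e-3):
    m, history = m0.copy(), [m0]
    for _ in range(n_steps):
        speed = np.maximum(delta, f(m))           # truncated diagram
        u = weighted_distance(target, speed, dx)  # FMM solve at time t_k
        gy, gx = np.gradient(u, dx)
        norm = np.hypot(gx, gy) + 1e-12           # avoid division by zero
        bx, by = -speed * gx / norm, -speed * gy / norm
        m = sl_step(m, bx, by, dx, dt)            # scheme (schemefp)
        history.append(m)
    return history

# usage sketch: history = simulate(m0, f1, target, dx=0.0077, dt=0.0026,
#                                  n_steps=500)
\end{verbatim}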
\begin{figure}[th]
\begin{center}
\includegraphics[height=4 cm]{lin1.png}
\includegraphics[height=4.1 cm]{lin2.png}
\includegraphics[height=3.8cm]{bar.png}
\includegraphics[height=4 cm]{lin3.png}
\includegraphics[height=4 cm]{lin4.png}
\includegraphics[height=3.8cm]{bar.png}
\includegraphics[height=4 cm]{lin5.png}
\includegraphics[height=4.07 cm]{lin6.png}
\includegraphics[height=3.8cm]{bar.png}
\caption{Evolution of the density with $f_1$. The red rectangle is the target region.} \label{f:t1}
\end{center}
\end{figure}
In the following simulations we fix
$$\delta=10^{-3}, \; \; \Delta x=0.0077 \; \; \mbox{and } \; \Delta t=\Delta x/3.$$
We consider a fixed scenario in a domain $\Omega:=\left([0,1]\times[0,1]\right)\setminus \Gamma$, where $\Gamma:=\Gamma_1 \cup \Gamma_2 \cup \Gamma_3$ with
$$\begin{array}{c}
\Gamma_1:=[0.55,0.6]\times [0, 0.05], \; \; \Gamma_2:=[0.55,0.6]\times[0.2,0.45] ,\\[4pt]
\Gamma_3:= [0.55,0.6]\times[0.6,1].\end{array}$$
We also fix an initial distribution given by
$$
m_0(x):=\left\{
\begin{array}{ll}
0.7 \qquad & x\in [0.1, 0.3]\times [0.1,0.9],\\[4pt]
0 & \text{ otherwise},
\end{array}\right.
$$
and a target set $\mathcal{T}:= [0.88, 0.92]\times [0.1, 0.95].$
In the first test we choose $f=f_1$, which corresponds to a linear penalization of the congestion. This is the most popular choice; see e.g. the original paper by \cite{h00} and the subsequent works by \cite{di2011hughes} and \cite{carlini2016PREPRINT}. In Figure \ref{f:t1} we observe some of the basic features of the system. The mass, initially distributed as $m_0$, evolves in the direction of the target avoiding the highly congested areas (top/left). For this reason, the agents initially located on the extreme left-hand side of the domain circumvent the center of the domain, opting for less crowded regions (top/right). The density moves towards the ``doors'' and the distribution of the agents takes the typical cone shape observed experimentally in the work by \cite{van2009pedestrian}. Then the crowd splits into two groups crossing the two doors. It is also possible to see (center/left-right) how part of the mass, after initially choosing the central door, changes its strategy, preferring the bottom one, which is less congested. We observe that the most congested areas of the narrow passage are in contact with the walls. Note that the crowd remains congested after crossing the doors (bottom/left-right).
This peculiar effect is due to the lack of alternative targets and, as we will see, to the chosen congestion model.
\begin{figure}[th]
\begin{center}
\includegraphics[height=4.2cm]{t2_1.png}
\includegraphics[height=4.2cm]{t2_2.png}
\includegraphics[height=3.8cm]{bar.png}
\caption{Evolution of the density with $f_2$. The red rectangle is the target region.} \label{f:t2}
\end{center}
\end{figure}
For the same initial scenario we consider now the remaining penalty functions.
In Figure \ref{f:t2} we display two time instants of the evolution of the system with $f=f_2$, with parameters $\alpha=1$ and $k=0.2$. In this case the behaviour of the crowd retains some features of the previous test, but with an important difference: the maximum value reached by the density is around $0.8$ rather than 1. This reflects the fact that, from the perspective of the cost of each microscopic agent, the compromise between heading directly to the target and avoiding congestion (less favourable here) is different from the previous case.
\begin{figure}[th]
\begin{center}
\includegraphics[height=4.2cm]{t3_1.png}
\includegraphics[height=4.2cm]{t3_2.png}
\includegraphics[height=3.8cm]{bar.png}
\caption{Evolution of the density with $f_3$. The red rectangle is the target region.} \label{f:t3}
\end{center}
\end{figure}
Now we choose $f_3$ with $\alpha=1$ as penalty function.
In this case we observe a macroscopic behaviour that mixes some of the features observed so far. As before, the crowd splits into two groups, one for each door, and part of the crowd changes strategy to take the congestion into account. In this case we observe less congestion after the crowd crosses the doors.
This is possibly due to the higher speed of the agents in the less crowded regions of the domain, as imposed by $f_3$.
\begin{figure}[th]
\begin{center}
\includegraphics[height=4.2cm]{t4_1.png}
\includegraphics[height=4.2cm]{t4_2.png}
\includegraphics[height=3.8cm]{bar.png}
\caption{Evolution of the density with $f_4$. The red rectangle is the target region.} \label{f:t4}
\end{center}
\end{figure}
With the choice $f=f_4$ we observe a very different behaviour. In fact, $f_4$ penalizes congested regions more strongly and is more suitable to describe `nervous' or `panicked' pedestrians (\cite{predtechenskii1978planning}). As a consequence, we do not observe regions of high density, and the trajectories chosen by the agents turn out to be more ``chaotic''. In general, as a consequence of the choice of the parameters, the time needed by the entire crowd to reach the target is shorter than in the cases observed before.
\begin{figure}[th]
\begin{center}
\includegraphics[height=4.2cm]{t5_1.png}
\includegraphics[height=4.2cm]{t5_2.png}
\includegraphics[height=3.8cm]{bar.png}
\caption{Evolution of the density with $f_5$. The red rectangle is the target region.} \label{f:t5}
\end{center}
\end{figure}
The last choice corresponds to $f=f_5$. Here we observe a different splitting: a small part of the crowd, moving at a rather high speed, reaches the target area in a short time, while the rest of the crowd concentrates near the doors, increasing the total time needed to reach the target.
\section{Conclusions}
In this paper we have considered an SL scheme to numerically solve the first-order Hughes system \eqref{hughes} for various choices of the fundamental diagram $f$. The popularity of such models justifies the study of efficient and stable numerical methods for their resolution. However, many questions remain open.
First of all, the well-posedness of \eqref{hughes}, and its dependence on the penalty function $f$, is not understood in the two-dimensional case.
From the numerical point of view, our approach requires further work. In particular, in view of the lack of theoretical results for the continuous system \eqref{hughes}, a convergence theory for our scheme remains a difficult challenge.
\section{Introduction}\label{sec:intro}
Nearly all the old Globular Clusters (GCs) host multiple stellar populations with characteristic photometric and spectroscopic features (Gratton et al.\,2004; Piotto et al.\,2015; Marino et al.\,2015; Milone et al.\,2016a and references therein).
The formation of the multiple stellar populations in the early Universe is one of the main open issues of stellar astrophysics and could play an important role in the assembly of the Galaxy (e.g.\,Gratton et al.\,2012; Renzini et al.\,2015).
In this context, the discovery that most intermediate-age star clusters in the Large and Small Magellanic Clouds (LMC, SMC) exhibit a multimodal or extended main-sequence turn off (eMSTO, Bertelli et al.\,2003; Mackey \& Broby Nielsen 2007; Glatt et al.\,2009; Milone et al.\,2009), and in some cases dual red clumps (Girardi et al.\,2009), has been one of the most intriguing discoveries of the last decade in the field of stellar populations. Indeed, it has been suggested that clusters with eMSTOs are the younger counterparts of the old GCs with multiple populations (e.g.\,Mackey et al.\,2008; Keller et al.\,2011).
The origin of the eMSTO has been widely investigated, but a definitive explanation is still missing. A possible interpretation is that the eMSTO is due to multiple stellar populations with age differences of about 100-700 Myr (e.g.\,Goudfrooij et al.\,2011, 2014; Li et al.\,2014), implying that the intermediate-age clusters have experienced a prolonged star-formation episode in close analogy with the old GCs (Conroy et al.\,2011). Alternatively, the eMSTO is due to coeval multiple populations with different rotation rates (Bastian \& De Mink 2009; D'Antona et al.\,2015) or to interacting binaries (Yang et al.\,2011, 2013).
Recent papers, based on {\it Hubble Space Telescope\,} ({\it HST\,}) photometry, have shown that the $\sim$300-Myr-old cluster NGC\,1856 and the $\sim$100-Myr-old clusters NGC\,1844 and NGC\,1755 exhibit very complex color-magnitude diagrams (CMDs), including split MSs and eMSTOs (Milone et al.\,2013, 2015, 2016b; Correnti et al.\,2015).
These findings have made a clear case that young MC clusters, once thought to be simple, host multiple stellar populations and that the eMSTO is not a peculiarity of the $\sim$1-2 Gyr old star clusters.
The presence of multiple populations in young clusters has opened a new window of opportunity to investigate the eMSTO and has provided additional constraints to discriminate among the different scenarios.
In this paper we investigate multiple stellar populations in the $\sim 200$-Myr old cluster NGC\,1866 by using {\it HST} images.
The paper is organized as follows. Section~\ref{sec:data} describes the dataset and the data analysis. In Sections~\ref{sec:cmd} and~\ref{sec:ms} we present the CMD of NGC\,1866 and investigate the cluster's double MS while in Section~\ref{sec:teo} we compare the observed CMD with theoretical models. A summary and discussion follow in Section~\ref{sec:discussion}.
\section{Data and data analysis} \label{sec:data}
\begin{centering}
\begin{figure*}
\includegraphics[width=8.5cm]{footprint.ps}
\includegraphics[width=8.5cm]{image.ps}
\caption{\textit{Left panel:} Footprints of the UVIS/WFC3 images used in this paper. Blue, cyan, and red colors refer to images collected during different visits. The inner solid circle has a radius of 41 arcsec, corresponding to the projected cluster half-light radius, and delimits the cluster field. Reference-field stars are located outside the outer solid circle.
The five dotted circles indicate the regions used to study the radial distribution of stellar populations in NGC\,1866. \textit{Right panel:} Trichromatic image of the analyzed field of view. }
\label{fig:footprint}
\end{figure*}
\end{centering}
The dataset that we have used to investigate multiple stellar populations in NGC\,1866 has been collected through the Ultraviolet and Visual Channel of the Wide Field Camera 3 (UVIS/WFC3) of {\it HST}. The footprints of these images are shown in the left panel of Figure~\ref{fig:footprint} where the different colors indicate images taken during three distinct visits on March, 1 (blue), May, 31 (red), and June, 1, 2016 (cyan). Each visit includes 2$\times$711s images collected through the F336W filter and 90s$+$678s images collected through the F814W filter.
The inner and outer black-continuous circles shown in the left panel of Figure~\ref{fig:footprint} have radii of 41 (equivalent to the projected half-light radius of NGC\,1866, McLaughlin \& van der Marel\,2005) and 180 arcsec, respectively. The region within the inner circle is mainly populated by cluster members and will be designated as the cluster field hereafter. In contrast, the region outside the outer circle mostly contains field stars and is called the reference field.
To determine the radius of the outer circle we have calculated the number of stars with $m_{\rm F814W}<22.5$ per unit area in distinct concentric annuli from the cluster center to the outermost region of the analyzed field of view. We have verified that the stellar density is constant for radial distances larger than $\sim$3 arcmin. This fact indicates that the number of cluster stars in the reference field is negligible.
The trichromatic image of the analyzed field is shown in Figure~\ref{fig:footprint}.
The entire dataset is part of GO-14204 (PI A.\,P.\,Milone) which is a program specifically devoted to the study of multiple stellar populations in the young LMC clusters NGC\,1866 and NGC\,1755 (see Milone et al.\,2016b). All the images have been reduced and analyzed by using the method and software programs that have been mostly developed by Jay Anderson and are widely described in previous papers from this series.
Briefly, we have first corrected the images for the effect of poor Charge Transfer Efficiency as in Anderson \& Bedin (2010) and then we have derived the stellar photometry and astrometry by using the software described in detail by Anderson et al.\,(2008) and adapted to UVIS/WFC3 images. Specifically, we have measured bright and faint stars by using a set of spatially-variable empirical point-spread functions (PSFs, see Anderson et al.\,2006 for details), adopting two different approaches.
Fluxes of bright stars have been measured in each image independently, and the results combined later, while all the pixels of each very faint star in all the images have been fitted simultaneously.
Stellar positions have been corrected for geometrical distortion by using the solution provided by Bellini, Anderson \& Bedin (2011) and photometry has been calibrated into the Vega-mag systems as in Bedin et al.\,(2005) and by adopting the zero points provided by the STScI web page for WFC3/UVIS\footnote{http://www.stsci.edu/hst/wfc3/phot\_{zp}\_{lbn}}.
The sample of stars used in our study of NGC\,1866 has been selected following Milone et al.\,(2009) and includes only relatively-isolated sources, that have been properly fitted by the PSF, and have small rms errors in position.
Finally, the photometry of stars with radial distances smaller than 3 arcmin from the cluster center has been corrected for differential reddening by following the recipe in Milone et al.\,(2012a) and adopting the values of $A_{\rm F336W}$ and $A_{\rm F814W}$ derived in Milone et al.\,(2016b).
In addition, we have used artificial stars (ASs) to determine the completeness level of our sample, to estimate internal photometric errors and to derive synthetic CMDs. The AS tests have been performed following the procedure of Anderson et al.\,(2008), while the completeness has been determined as a function of both stellar position and magnitude by following the recipes of Milone et al.\,(2009).
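As a rough numerical illustration of this step, the Python sketch below derives a completeness curve in magnitude bins from catalogs of injected and recovered artificial stars; the toy detection model and all numbers are placeholders and are not the values measured for NGC\,1866.
\begin{verbatim}
import numpy as np

def completeness_curve(mag_injected, recovered, edges):
    """Fraction of injected artificial stars that are recovered,
    per magnitude bin."""
    comp = np.full(len(edges) - 1, np.nan)
    for i in range(len(edges) - 1):
        in_bin = (mag_injected >= edges[i]) & (mag_injected < edges[i + 1])
        if in_bin.any():
            comp[i] = recovered[in_bin].mean()
    return comp

rng = np.random.default_rng(0)
m_in = rng.uniform(18.0, 24.0, 100_000)          # injected magnitudes
# Toy detection model: completeness drops towards faint magnitudes.
rec = rng.random(m_in.size) < 1.0 / (1.0 + np.exp((m_in - 23.0) / 0.4))
print(completeness_curve(m_in, rec, np.arange(18.0, 24.5, 0.5)))
\end{verbatim}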
\section{The color-magnitude diagram of NGC\,1866}
\label{sec:cmd}
\begin{centering}
\begin{figure*}
\includegraphics[width=12.5cm]{cmdLR.ps}
\caption{\textit{Left panel:} $m_{\rm F814W}$ vs.\,$m_{\rm F336W}-m_{\rm F814W}$ CMD for all the stars in the WFC3/UVIS field of view. The photometry of stars above the dashed line has been obtained from saturated images in at least one filter.
\textit{Right panel:} Zoom in around the upper MS for stars with radial distance smaller than 41 arcsec from the center of NGC\,1866. The corresponding region of the CMD is marked by a dashed box in the left-panel plot. The error bars in red are shown on the left of each panel.}
\label{fig:cmd}
\end{figure*}
\end{centering}
The $m_{\rm F814W}$ vs.\,$m_{\rm F336W}-m_{\rm F814W}$ CMD of all the stars in the WFC3/UVIS field of view is plotted in the left panel of Figure~\ref{fig:cmd} while in the right panel we show a zoom around the upper part of the MS of stars in the cluster field.
A visual inspection of these CMDs immediately reveals that the MSTO is broadened in color and magnitude in close analogy with what has been observed in NGC\,1856 and in the majority of the intermediate-age MC clusters (e.g.\,Mackey et al.\,2008; Milone et al.\,2009; Goudfrooij et al.\,2014). Furthermore, the upper MS is clearly split, and the two MSs merge together around $m_{\rm F814W}=21.0$. Noticeably, the red MS hosts the majority of MS stars, similarly to what we have observed in NGC\,1844, NGC\,1856, and NGC\,1755.
In the following we demonstrate that the split MS and the eMSTO are intrinsic features of NGC\,1866.
We started by comparing the Hess diagrams of stars in the cluster field and in the reference field. These diagrams are plotted in panels (a) and (b) of Figure~\ref{fig:hess}, respectively. The gray level used in this figure is proportional to the number of stars, corrected for completeness and normalized to an area of one square arcsec, in each interval of color and magnitude. Panel (c) of Figure~\ref{fig:hess} shows the Hess diagram obtained by subtracting the star counts of the panel-(b) diagram from those of the panel-(a) one.
The fact that both the eMSTO and the split MS are present in the subtracted Hess diagram demonstrates that these features are real.
\begin{centering}
\begin{figure*}
\includegraphics[width=12.5cm]{hessLR.ps}
\caption{$m_{\rm F814W}$ vs.\,$m_{\rm F336W}-m_{\rm F814W}$ Hess diagram of stars in the cluster field (panel a) and in the reference field (panel b). The Hess diagram of the cluster CMD after field-stars have been subtracted is plotted in panel c.}
\label{fig:hess}
\end{figure*}
\end{centering}
To further investigate the effect of field-star contamination on the cluster CMD we have compared in the panels (a) and (b) of Figure~\ref{fig:sub} the CMDs of stars in the cluster field and in the reference field.
In order to statistically subtract the stars of the reference-field CMD from the cluster-field CMD we have adapted to NGC\,1866 the same procedure used by Milone et al.\,(2009, 2015, 2016b). Specifically, we have determined for each star (i) in the reference field a distance in the $m_{\rm F814W}$ vs.\,$m_{\rm F336W}-m_{\rm F814W}$ CMD\\
{ \scriptsize $d_{\rm i}=\sqrt{k((m_{\rm F336W, cf}-m_{\rm F814W, cf})-(m^{\rm i}_{\rm F336W, rf}-m^{\rm i}_{\rm F814W, rf}))^{2}+(m_{\rm F814W, cf}-m^{\rm i}_{\rm F814W, rf})^{2}} $}\\
where
$m_{\rm F336W (F814W), cf}$ and $m_{\rm F336W (F814W), rf}$ are the F336W (F814W) magnitudes in the cluster- and in the reference-field, respectively.
The adopted constant $k=4.1$ accounts for the fact that the color of a star is better constrained than its magnitude (Gallart et al.\,2003) and has been determined as in Marino et al.\,(2014, see their Section~3.1).
The stars in the cluster-field CMD with the smallest distance to each star of the reference field have been considered as candidates for subtraction. We have subtracted all the candidates with $r_{\rm i}<f c^{\rm i}_{\rm rf}/c^{\rm i}_{\rm cf}$ where $r_{\rm i}$ is a random number between 0 and 1, $f$ is the ratio between the areas of the cluster field and the reference field, and $c^{\rm i}_{\rm rf}$ and $c^{\rm i}_{\rm cf}$ are the completeness of the star (i) in the reference field and the completeness of the closest star in the cluster field, respectively.
The decontaminated CMD is shown in panel (c) of Figure~\ref{fig:sub} and confirms that both the eMSTO and the split MS are intrinsic features of the cluster CMD. For completeness we show the CMD of the subtracted stars in panel (d) of Figure~\ref{fig:sub}.
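The subtraction recipe can be condensed into the following Python sketch, a simplified serial version of the procedure; the input catalogs, the default area ratio $f$, and the unit completeness values are illustrative assumptions, while the actual analysis uses the measured completeness of each star.
\begin{verbatim}
import numpy as np

def decontaminate(cf, rf, k=4.1, f=0.05, c_cf=None, c_rf=None, seed=1):
    """cf, rf: (N, 2) arrays of (color, m_F814W) for cluster- and
    reference-field stars. Returns a mask of surviving cluster-field
    stars after the statistical subtraction."""
    rng = np.random.default_rng(seed)
    keep = np.ones(len(cf), dtype=bool)
    c_cf = np.ones(len(cf)) if c_cf is None else c_cf
    c_rf = np.ones(len(rf)) if c_rf is None else c_rf
    for i, (col_rf, mag_rf) in enumerate(rf):
        # Distance d_i, weighting color more heavily than magnitude (k = 4.1).
        d2 = k * (cf[:, 0] - col_rf) ** 2 + (cf[:, 1] - mag_rf) ** 2
        d2[~keep] = np.inf              # already-subtracted stars are excluded
        j = int(np.argmin(d2))          # closest cluster-field star
        if rng.random() < f * c_rf[i] / c_cf[j]:
            keep[j] = False             # subtract this candidate
    return keep
\end{verbatim}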
\begin{centering}
\begin{figure*}
\includegraphics[width=12.5cm]{sub.ps}
\caption{Panel (a) reproduces the $m_{\rm F814W}$ vs.\,$m_{\rm F336W}-m_{\rm F814W}$ CMD of stars in the cluster field shown in the right panel of Fig.~\ref{fig:cmd}. The CMD of stars in the reference field is plotted in panel (b), while panel (c) shows the decontaminated CMD obtained by statistically subtracting the stars of the reference field from the cluster-field CMD. The CMD of the subtracted stars is plotted in panel (d).}
\label{fig:sub}
\end{figure*}
\end{centering}
\section{The double MS}
\label{sec:ms}
Having demonstrated that the split MS of NGC\,1866 is real, we estimate in the following the fraction of stars in each sequence and the binary fraction. The population ratio as a function of the stellar luminosity and the radial distributions of red-MS and blue-MS stars are derived in Sections~\ref{sub:lf} and~\ref{sub:rd}, respectively.
\begin{centering}
\begin{figure*}
\includegraphics[width=13cm]{METpratio.ps}
\caption{Zoom in of the $m_{\rm F814W}$ vs.\,$m_{\rm F336W}-m_{\rm F814W}$ CMD around the region where the MS split is most prominent. Left and middle panels show the observed CMD for stars in the cluster and the reference field, respectively. The simulated CMD that best reproduces the observed ones is plotted in the right panel. The shaded areas are the CMD regions A, B, C used to determine the fraction of red-MS and blue-MS stars and the binary fraction. See text for details.}
\label{fig:METpratio}
\end{figure*}
\end{centering}
In order to infer the fraction of red-MS stars, blue-MS stars, and the fraction of binaries in the cluster field we have adapted to NGC\,1866 the method described by Milone et al.\,(2012b, MPB12 hereafter). To do this we have defined three regions in the CMD, namely A, B, and C, which are represented by the blue, red, and green shaded areas, respectively, in Figure~\ref{fig:METpratio}.
These three regions have been derived as follows.
Regions A and B are mostly populated by blue-MS stars and red-MS stars with $19.5<m_{\rm F814W}<20.5$, respectively. Note that in the adopted magnitude interval the two MSs are clearly split and binaries with mass ratio q$>0.5$ are well separated from single MS stars.
The blue boundary of region A has been drawn arbitrarily with the criterion of including the majority of blue-MS stars. Its red boundary has been determined by shifting each point of the red-MS fiducial line
by 2$\sigma_{\rm color}$ towards blue colors, where $\sigma_{\rm color}$ is the uncertainty in the $F336W-F814W$ color determination. The red-MS fiducial line has been derived by following the procedure described in Section~3.3 of Milone et al.\,(2016b).
Briefly, we have defined a series of $F814W$ magnitude bins of width $\nu=0.1$ mag in the interval $18.9<m_{\rm F814W}<20.9$. The bins have been evaluated at a series of $N$ points separated by fixed magnitude steps, $s=\nu/3$, so that adjacent bins overlap (see Silverman 1986 for details).
Then we have selected a sample of bona-fide red-MS stars and calculated their median color and mean magnitude in each magnitude interval. The red-MS fiducial has been obtained by interpolating these median colors and mean magnitudes by means of a cubic spline.
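A minimal sketch of this fiducial-line construction is given below, assuming the overlapping bins yield strictly increasing mean magnitudes; the bin parameters match the values quoted above, while the input arrays are placeholders.
\begin{verbatim}
import numpy as np
from scipy.interpolate import CubicSpline

def red_ms_fiducial(color, mag, bright=18.9, faint=20.9, width=0.1):
    """Median color of bona-fide red-MS stars in overlapping magnitude
    bins (step = width/3), interpolated with a cubic spline."""
    centers = np.arange(bright + width / 2, faint, width / 3)
    med_col, mean_mag = [], []
    for c in centers:
        sel = (mag > c - width / 2) & (mag < c + width / 2)
        if sel.sum() >= 5:
            med_col.append(np.median(color[sel]))
            mean_mag.append(mag[sel].mean())
    x, y = np.array(mean_mag), np.array(med_col)
    order = np.argsort(x)               # spline needs increasing abscissae
    return CubicSpline(x[order], y[order])

# Verticalized color: Delta(col) = col - fiducial(m_F814W), e.g.
# fid = red_ms_fiducial(col, m814); delta_col = col - fid(m814)
\end{verbatim}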
The region C defined in Figure~\ref{fig:METpratio} mostly includes binary systems with large mass ratio and has been derived as in MPB12. Its blue boundary is the sequence of binary systems with mass ratio q=0.5 formed by two red-MS stars. The red boundary of region C corresponds to the sequence of equal-mass red-MS binaries shifted to the red by four times the error in color. The faint and the bright boundaries correspond to the loci populated by binary systems formed by two red-MS stars whose primary component has luminosity $m_{\rm F814W}=20.5$ and $m_{\rm F814W}=19.5$, respectively. Region B is placed between regions A and C.
We have determined the number of stars, corrected for completeness, in the regions A, B, and C of the cluster-field CMD ($N^{\rm CL-F}_{\rm A, B, C}$) and of the reference-field CMD ($N^{\rm REF-F}_{\rm A, B, C}$). We have estimated the number of cluster stars in each region as $N^{\rm CL}_{\rm A, B, C}=N^{\rm CL-F}_{\rm A, B, C}- f N^{\rm REF-F}_{\rm A, B, C}$, where $f$ is the ratio between the areas of the cluster field and the reference field.
We have generated a large number of CMDs by using ASs and compared them with the observed CMD. Each simulated CMD hosts the same number of region-B stars ($N^{\rm CL}_{\rm B}$) as the observed CMD but includes different fractions of blue-MS stars, red-MS stars, and binaries ($f_{\rm bMS}$, $f_{\rm rMS}$, and $f_{\rm bin}$).
Specifically, the grid of simulated CMDs has $f_{\rm bMS}$ and $f_{\rm bin}$ ranging from 0.01 to 1.00 in steps of 0.01.
Binaries have been added by assuming a constant mass-ratio distribution, in close analogy with what has been observed in Galactic GCs (Milone et al.\,2012a; 2016c). Moreover we have assumed that both the red and the blue MS have the same binary fraction and that both components of each binary system belong to the same sequence.
To obtain the best match between the simulated and the observed CMDs, we imposed that the simulated CMDs have the same number of stars in the regions A, B, and C as the observed ones. This condition is satisfied when the blue MS hosts 30$\pm$2\% of the total number of analyzed MS stars and for $f_{\rm bin}=0.25 \pm 0.02$.
For completeness, we have extended the analysis to the entire region with radial distance from the cluster center smaller than 3.0 arcmin and find slightly-higher values for both the fraction of blue MS stars ($f_{\rm bMS}=0.35 \pm 0.02$) and the binary fraction ($f_{\rm bin}=0.28 \pm 0.02$).
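The essence of this comparison is a grid search over the population fractions. The sketch below illustrates the idea; the simulator of region counts is a stand-in callback, and the match is scored with a simple chi-square statistic rather than the exact count-matching condition imposed in the analysis.
\begin{verbatim}
import numpy as np

def best_fractions(n_obs, simulate_counts):
    """Grid search over (f_bMS, f_bin) in steps of 0.01 for the simulated
    CMD whose star counts in regions A, B, C best match the observed,
    field-subtracted counts n_obs = (N_A, N_B, N_C)."""
    grid = np.arange(0.01, 1.001, 0.01)
    best, best_score = None, np.inf
    for f_bms in grid:
        for f_bin in grid:
            n_sim = simulate_counts(f_bms, f_bin)   # counts in (A, B, C)
            score = sum((o - s) ** 2 / max(s, 1.0)
                        for o, s in zip(n_obs, n_sim))
            if score < best_score:
                best, best_score = (f_bms, f_bin), score
    return best
\end{verbatim}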
\subsection{The population ratio as a function of the stellar luminosity}
\label{sub:lf}
To investigate the multiple stellar populations along the double MS of NGC\,1866, we started by analyzing in Figure~\ref{fig:pratio} the $m_{\rm F814W}$ vs.\,$m_{\rm F336W}-m_{\rm F814W}$ CMD of cluster-field (black points) and reference-field (aqua) stars with $18.9<m_{\rm F814W}<20.9$.
Only field stars with $r_{\rm i}<f c^{\rm i}_{\rm rf}/c^{\rm i}_{\rm cf}$ have been used in the following analysis, in close analogy with what we have done in Section~\ref{sec:cmd}.
In panel (b) of Figure~\ref{fig:pratio} we plot a zoom of the CMD for the stars located within the gray box shown in panel (a). The red line is the red-MS fiducial derived as in Section~\ref{sec:ms}.
Panel (c) of Figure~\ref{fig:pratio} shows the verticalized $m_{\rm F814W}$ vs.\,$\Delta$($m_{\rm F336W}-m_{\rm F814W}$) CMD. The latter quantity has been obtained by subtracting from the $m_{\rm F336W}-m_{\rm F814W}$ color of each star the color of the fiducial at the corresponding $F814W$ magnitude.
The $\Delta$($m_{\rm F336W}-m_{\rm F814W}$) histogram distribution of cluster-field stars in ten magnitude bins is provided in panel (d) and confirms the visual impression that the MS is bimodal in the magnitude interval $19.1<m_{\rm F814W}<20.7$. We have used a bi-Gaussian function to fit the observed histogram distributions by means of least squares, after the small contribution from reference-field stars has been subtracted. The two Gaussian components that best fit the histograms are represented with blue and red continuous lines. From the area under the Gaussians we have inferred the fraction of stars in each interval of magnitude.
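For reference, a bi-Gaussian fit of this kind can be sketched as follows, assuming the field-star contribution has already been subtracted; the histogram binning and the initial-guess values are arbitrary choices for illustration.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def bi_gaussian(x, a1, mu1, s1, a2, mu2, s2):
    g = lambda a, mu, s: a * np.exp(-0.5 * ((x - mu) / s) ** 2)
    return g(a1, mu1, s1) + g(a2, mu2, s2)

def blue_ms_fraction(delta_col, bins=30):
    """Least-squares bi-Gaussian fit of the verticalized-color histogram;
    the area of each component is proportional to amplitude * sigma."""
    h, edges = np.histogram(delta_col, bins=bins)
    x = 0.5 * (edges[:-1] + edges[1:])
    p0 = [h.max(), -0.05, 0.02, h.max(), 0.0, 0.02]  # blue MS at bluer colors
    p, _ = curve_fit(bi_gaussian, x, h, p0=p0, maxfev=10000)
    area_blue, area_red = abs(p[0] * p[2]), abs(p[3] * p[5])
    return area_blue / (area_blue + area_red)
\end{verbatim}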
The two MSs merge together around $m_{\rm F814W} \sim 20.7$ and there is no evidence for a split MS at fainter luminosities. The $\Delta$($m_{\rm F336W}-m_{\rm F814W}$) spread significantly increases for $m_{\rm F814W} \lesssim 19.1$, mainly because the analyzed sample includes stars in the faintest part of the eMSTO. A small number of blue-MS stars is still present in this luminosity interval.
\begin{centering}
\begin{figure*}
\includegraphics[width=12.5cm]{pratioLR.ps}
\caption{This figure illustrates the procedure used to study the color distribution of MS stars in NGC\,1866. Panel (a) shows the CMD of stars in the cluster field (black points) and in the reference field (aqua crosses) while panel (b) is a zoom of the analyzed MS region. The red line is the fiducial line of the red MS and has been used to derive the verticalized $m_{\rm F814W}$ vs.\,$\Delta$($m_{\rm F336W}-m_{\rm F814W}$) CMD plotted in panel (c). Panel (d) shows the histogram distribution of $\Delta$($m_{\rm F336W}-m_{\rm F814W}$) for stars in the ten F814W magnitude intervals indicated by continuous lines in panel (c). The histogram distribution of stars with $19.1<m_{\rm F814W}<20.7$ is clearly bimodal. The best-fit least-squares bi-Gaussian function is superimposed on the histograms and the two Gaussian components are colored blue and red (see text for details). }
\label{fig:pratio}
\end{figure*}
\end{centering}
To investigate the properties of the split MS at different luminosities by means of a different method, we have divided the MS region within $19.5<m_{\rm F814W}<20.5$ into five intervals of 0.2 mag and we have determined the fraction of blue-MS stars and the fraction of binaries with respect to the total number of MS stars in each of them by using the procedure from MPB12 as described in Section~\ref{sec:ms}. In this case we have excluded from the analysis the upper MS, where it is not possible to distinguish binaries with q$>$0.5 from single MS stars. We have also excluded the faintest MS part due to the poor separation between the blue and the red MS.
\begin{centering}
\begin{figure}
\includegraphics[width=8.5cm]{LF.ps}
\caption{Fraction of blue-MS stars (black dots and grey diamonds) and fraction of binaries (red triangles) with respect to the total number of MS stars in different F814W magnitude intervals.
Black dots and grey diamonds indicate the results from the MPB12 method and by using the Gaussian-fit method, respectively.
The upper panel refers to the cluster field, while the population ratios derived in the region with radial distance smaller than 180 arcsec are plotted in the lower panel. For clarity the grey and red points have been shifted by $\pm$0.05 mag with respect to the average magnitude of the stars in the corresponding bin.}
\label{fig:LF}
\end{figure}
\end{centering}
The results are illustrated in the upper panel of Figure~\ref{fig:LF}, where we plot the fraction of blue-MS stars with respect to the total number of MS stars in the cluster field as a function of the F814W magnitude. The grey diamonds indicate the results from the method based on bi-Gaussian fitting while the black dots are obtained from the procedure by MPB12. Red triangles show the derived fraction of binaries as a function of $m_{\rm F814W}$.
The two methods point to similar conclusions. We find that the blue MS hosts $\sim$15\% of the total number of MS stars in the brightest analyzed magnitude bin and that the fraction of blue-MS stars rises to $\sim$33\% for $m_{\rm F814W}>19.7$. We did not find any evidence for a significant variation of the binary fraction with the F814W magnitude in the analyzed luminosity interval.
For completeness, we have extended the analysis to the entire region with radial distance smaller than 3.0 arcmin from the cluster center. The resulting values of $f_{\rm bMS}$ and $f_{\rm bin}$ are shown in the bottom panel of Figure~\ref{fig:LF} and confirm the conclusions obtained from the cluster field.
\subsection{The radial distribution of the two MSs}
\label{sub:rd}
\begin{centering}
\begin{figure*}
\includegraphics[width=13.5cm]{RDcmd.ps}
\caption{$m_{\rm F814W}$ vs.\,$m_{\rm F336W}-m_{\rm F814W}$ CMDs of stars with different radial distance from the center of NGC\,1866. The verticalized $m_{\rm F814W}$ vs.\,$\Delta$~col diagram for stars with $19.5<m_{\rm F814W}<20.5$ is plotted in the lower inset of each panel.
For the panels of stars within three arcmin from the cluster center we show the normalized $\Delta$~col histogram distribution for all the stars in the inset (gray line), the normalized histogram for field stars (aqua line), and the normalized histogram for cluster members (black line). The shaded aqua histograms correspond to reference-field stars only.
The red and blue lines overimposed on the black histogram are the two components of the best-fit bi-Gaussian function.
Reference-field stars are represented with aqua crosses in the bottom-right panel where we also show the $\Delta$~col histogram distribution for all the stars in the inset.}
\label{fig:RDcmd}
\end{figure*}
\end{centering}
In order to investigate how the CMD morphology changes as a function of cluster-centric radius we have employed two methods, in close analogy with what we have done in the study of multiple populations along the MS. The first method is based on a bi-Gaussian fit of the color distribution of the MS and is illustrated in Figure~\ref{fig:RDcmd} where we plot the CMDs of stars in six annuli with different distances from the cluster center. The inner and the outer radius of each annulus have been chosen in such a way that each CMD includes the same number of stars in region B. The inset of each panel of Figure~\ref{fig:RDcmd} shows the verticalized $m_{\rm F814W}$ vs.\,$\Delta$~col diagram for stars in the magnitude interval $19.5<m_{\rm F814W}<20.5$ where the MS split is more evident and where the two MSs run almost parallel.
In the insets of the five CMDs of stars with R$<$3.0 arcmin we also plot the corresponding normalized $\Delta$~col histogram distribution (grey line). Moreover we show the histogram of the field stars expected in each annulus (aqua histograms) and the histogram of cluster members (black line) which has been obtained by subtracting the aqua histogram from the grey ones. We have used a bi-Gaussian function to match the black histograms by means of least squares and we have colored the two components of the best-fitting bi-Gaussians red and blue. All the histograms in each panel have been normalized to the maximum value of the histogram of cluster stars.
The lower-right panel shows reference-field stars with radial distance from the cluster center larger than 3.0 arcmin. The corresponding histogram of the $\Delta$ col distribution is shown in the inset.
Figure~\ref{fig:RDcmd} suggests that the fraction of blue-MS stars with respect to the total number of MS stars significantly increases when we move from the cluster center outwards. In particular, from the area under the Gaussians, we find that in the central regions $\sim$30\% of the total number of MS stars belong to the blue MS, while, in the outermost analyzed region, the fraction of blue-MS stars rises to $\sim$45\%. The grey diamonds plotted in Figure~\ref{fig:RDngc1866} show the resulting fraction of blue MS stars ($f_{\rm bMS}$) as a function of the average radial distance from the cluster center of all the stars in the bin.
To further investigate the radial distribution of the stellar populations in NGC\,1866, we have used the recipe by MPB12 described in Section~\ref{sec:ms} to determine the fraction of red-MS and blue-MS stars and the fraction of binaries with respect to the total number of MS stars in each annulus defined above.
Results are illustrated in the left panel of Figure~\ref{fig:RDngc1866} and confirm that the red MS is more centrally concentrated than the blue MS. As shown in Figure~\ref{fig:RDngc1866}, the binary fraction has a flat distribution for cluster-centric distances smaller than R$\sim 1$ arcmin, where it is around 25\%. In the cluster outskirts the binary fraction rises up to $f_{\rm bin}=0.38 \pm 0.04$, but this result is significant at the 2-$\sigma$ level only.
Moreover, we have investigated the radial distribution of the stellar populations by using radial bins of fixed size. Specifically, we have chosen 0.25-arcmin radial bins for stars within 1.5 arcmin from the cluster center and 0.50-arcmin bins for R$\geq 1.5$ arcmin. The large size of the two outermost bins is due to the small number of stars in the external cluster region. Results are illustrated in the right panel of Figure~\ref{fig:RDngc1866} and confirm the previous finding that red-MS stars are more centrally concentrated than blue-MS stars. This figure corroborates the idea that the binary fraction increases towards the cluster outskirts and makes it tempting to speculate that a large fraction of binaries are thus associated with the blue MS. A possible exception to these trends is provided by stars in the outermost radial bin with $2.25<R\leq 3.0$ arcmin, but the large error bars prevent us from any firm conclusion.
\begin{centering}
\begin{figure*}
\includegraphics[width=8.5cm]{RDngc1866.ps}
\includegraphics[width=8.5cm]{RD2ngc1866.ps}
\caption{Fraction of blue-MS stars with respect to the total number of analyzed MS stars as a function of the radial distance from the cluster center (in arcmin and in units of the projected half-light radius, $R_{\rm hl}$). The black and grey points with error bars indicate the results obtained from the MPB12 method and the bi-Gaussian fit method, respectively, as described in the text. Red points show the fraction of binaries in different radial intervals. For clarity, the grey and red points have been shifted by $\pm$0.03 arcmin with respect to the average radius of stars in each radial bin. The horizontal bars mark the radial extension of each bin. The dotted and dash-dotted vertical lines mark the projected core and half-light radius, respectively (McLaughlin \& van der Marel\,2005). In the left- and right-panel plots we have used different radial bins. See text for details.}
\label{fig:RDngc1866}
\end{figure*}
\end{centering}
\section{Theoretical interpretation}
\label{sec:teo}
The eMSTO and the split MS of young and intermediate-age star clusters have been interpreted either in terms of stellar populations with different rotation rates (e.g.\,Bastian \& De Mink\,2009; Niederhofer et al.\,2014; D'Antona et al.\,2015; Milone et al.\,2016a) or in terms of stellar populations with different ages (e.g.\,Mackey \& Broby Nielsen\,2007; Milone et al.\,2009; Goudfrooij et al.\,2014). In addition, Milone et al.\,(2015) have shown that the eMSTO and the double MS of NGC\,1856 are consistent with stellar populations with different metallicity.
In NGC\,1866, we can immediately exclude that internal metallicity variations are responsible for the eMSTO and the split MS. Indeed high-resolution spectroscopy of 14 cluster members has revealed neither significant iron spread nor evidence for star-to-star variation of a variety of light elements including sodium, oxygen, and magnesium (Mucciarelli et al.\,2011). Similarly, the analysis of eight stars in the eMSTO cluster NGC\,1806 shows no evidence for metallicity variations (Mucciarelli et al.\,2014, Mackey et al.\,in preparation). These results demonstrate that the eMSTO and the split MS of NGC\,1866, and the eMSTO of NGC\,1806 are not due to stellar populations with different Z, and this suggests that similar features observed in the CMDs of other clusters are unlikely due to stars with different metallicity.
In the following subsection we compare the observed CMD of NGC\,1866 with isochrones from the Geneva database\footnote{http://obswww.unige.ch/Recherche/evoldb/index} (Mowlavi et al.\,2012; Ekstr{\"o}m et al.\,2013; Georgy et al.\,2014) that correspond to stellar populations with different ages, while in Section~\ref{subsec:rot} we investigate the possibility that the eMSTO and the double MS of NGC\,1866 are due to different rotation rates. In the latter subsection we will also compare the observations with isochrones and synthetic CMDs with both different age and rotation rates.
The comparison between the isochrones and the observed CMD of stars in the cluster field is shown in Figure~\ref{fig:iso}.
The adopted values of reddening and distance modulus are quoted in the inset of each panel. The reddening values have been transformed into absorption in the $F336W$ and $F814W$ bands by using the relations between E($B-V$), A$_{\rm F336W}$, and A$_{\rm F814W}$ derived by Milone et al.\,(2016a).
\subsection{Age variation}
\label{subsec:age}
The comparison between the observed CMD and non-rotating isochrones with the same metallicity but different ages is shown in the left panel of Figure~\ref{fig:iso}. First we have determined that the blue MS and the brighter eMSTO are well reproduced by a 140-Myr-old population with metallicity Z=0.006, assuming a distance modulus (m$-$M)${_0}$=18.31 and reddening $E(B-V)$=0.11. Then we have searched for an isochrone with the same value of $Z$ that properly fits both the red MS and the lower part of the eMSTO.
We find that a 220-Myr-old isochrone matches the lower eMSTO well but provides a very poor fit to the red MS. We conclude that age variation alone cannot be responsible for the eMSTO and the split MS of NGC\,1866.
\subsection{Rotation}
\label{subsec:rot}
In the right panel of Figure~\ref{fig:iso} we investigate the possibility that rotation is the only factor responsible for the double MS and the eMSTO of NGC\,1866 by using isochrones with the same age of 200 Myr and different rotation rates of $\omega=0$ (blue line), $\omega=0.6 \omega_{\rm c}$ (green line, where $\omega_{\rm c}$ is the critical rotation value), and $\omega=0.9 \omega_{\rm c}$ (red line). We find that the red MS is well fitted by the fast-rotating stellar population while the blue MS is reproduced by a non-rotating isochrone. Nevertheless, rotation alone does not reproduce the upper part of the blue MS with $m_{\rm F814W} \lesssim 19$.
\begin{centering}
\begin{figure*}
\includegraphics[width=8.45cm]{age.ps}
\includegraphics[width=7.5cm]{rot.ps}
\caption{Comparison between the observed CMD of stars in the cluster field and isochrones from the Geneva database. In the left panel we have plotted two non-rotating isochrones with different ages while in the right panel we show three coeval isochrones with no rotation and with rotations of $\omega=0.6 \omega_{\rm c}$ and $\omega=0.9 \omega_{\rm c}$. }
\label{fig:iso}
\end{figure*}
\end{centering}
Finally, we investigate in Figure~\ref{fig:Rotage} the possibility that the observations are consistent with stellar populations with both different ages and different rotation rates.
The large panel of this figure shows that a 200-Myr isochrone with rotation $\omega=0.9 \omega_{\rm c}$ (red line) provides a good fit to the red MS, while the blue MS is well reproduced by two non-rotating isochrones with ages of 140 Myr and 220 Myr. %
In the inset we compare the observed CMD of NGC\,1866 with a synthetic CMD that includes stellar populations with both age variation and different rotation rates. For that purpose we have retrieved from the Geneva database the isochrones plotted in the large panel. Specifically, we have assumed that 70\% of the total number of simulated stars have rotation $\omega=0.9 \omega_{\rm c}$ and the remaining 30\% of stars do not rotate. Among the latter, two-thirds have an age of 220 Myr and the remaining one-third belong to a younger 140-Myr-old population.
The viewing angle adopted in the simulations follows a random distribution, and the adopted gravity-darkening model from Espinosa Lara \& Rieutord (2011) includes the limb-darkening effect (Claret 2000). The synthetic data have been transformed into the observational plane by using the model atmospheres by Castelli \& Kurucz (2003) and the transmission curves of the $F336W$ and $F814W$ filters of UVIS/WFC3. The simulated CMD includes a fraction of binaries, $f_{\rm bin}=0.25$, which corresponds to the observed value. We have used blue and red colors to represent non-rotating and rotating stars, respectively, while the observed stars are colored in black.
In the middle panel of Figure~\ref{fig:Rotage} we compare the observed and the simulated verticalized $m_{\rm F814W}$ vs.\,$\Delta$~col diagram for MS stars with $19.5<m_{\rm F814W}<20.5$. The corresponding $\Delta$~col histogram distributions of stars in five magnitude intervals are plotted in the right panels. These figures show that the adopted synthetic models reproduce well the double MS of NGC\,1866.
An eMSTO is present in the synthetic CMD, but the fit with the observed CMD is still unsatisfactory. This issue has been previously noticed in both NGC\,1856 and NGC\,1755 and has been attributed to second-order parameters that affect the hydrogen-burning phase (D'Antona et al.\,2015). Indeed the stellar color and magnitude of stars in the synthetic CMD are strongly affected by the way the convective-core overshoot and the inclination angle are taken into account.
\begin{centering}
\begin{figure*}
\includegraphics[width=13.0cm]{RotageLR.ps}
\caption{The large panel shows three isochrones from the Geneva database with different ages and rotation rates, superimposed on the CMD of stars in the cluster field. The red line corresponds to the isochrone with an age of 200 Myr and a rotational velocity close to the critical value, $\omega=0.9 \omega_{\rm c}$. The two non-rotating isochrones are colored cyan and have ages of 140 and 220 Myr. The inset shows a zoom around the upper MS of NGC\,1866, where we compare the simulated CMD derived from the rotating (red) and non-rotating (cyan) isochrones and the observed CMD (black). The corresponding verticalized $m_{\rm F814W}$ vs.\,$\Delta$~col is plotted in the middle panel, while in the right panels we compare the histogram $\Delta$~col distribution of the observed stars in five magnitude intervals and the distribution of the simulated rotating and non-rotating stars. See text for details.}
\label{fig:Rotage}
\end{figure*}
\end{centering}
\section{Summary and Discussion}
\label{sec:discussion}
We have used {\it HST\,} to derive high-precision photometry of the young LMC cluster NGC\,1866 in the $F336W$ and $F814W$ bands of WFC3/UVIS.
The resulting CMD reveals that this cluster has a double MS with the blue component hosting about one third of the total number of MS stars in the analyzed magnitude interval.
A bimodal MS has been recently observed in other LMC clusters younger than $\sim$400 Myr including NGC\,1844, NGC\,1856 and NGC\,1755 (Milone et al.\,2013, 2015, 2016a). The finding of a similar feature in NGC\,1866 corroborates the hypothesis that the split MS is a common feature of young Magellanic Cloud star clusters.
In addition, NGC\,1866 exhibits an eMSTO in close analogy with what is observed in most intermediate-age stars clusters of both Magellanic Clouds and in some young LMC clusters (Mackey \& Broby Nielsen 2007; Goudfrooij et al.\,2011, 2014; Milone et al.\,2009, 2015, 2016a; Bastian et al.\,2016).
The relative numbers of blue- and red-MS stars change when moving from the cluster center to the external regions, with the red MS being more centrally concentrated. While this is the first study of the radial distribution of the multiple MSs in a young cluster, the radial distribution of eMSTO stars in intermediate-age star clusters has already been determined for several clusters by Goudfrooij et al.\,(2011).
These authors find that for several massive clusters the stars in the brightest half of the MSTO region are significantly more centrally concentrated than the stars in the faintest half.
In the cluster field of view the binary fraction with respect to the total number of MS stars of NGC\,1866 is $f_{\rm bin}=0.25 \pm 0.02$.
The binary fraction is almost constant within $\sim$1 arcmin from the cluster center and seems to increase at larger radial distance, in analogy with what has been observed in the massive young LMC cluster NGC\,1805 by Li et al.\,(2013).
The comparison between stellar models and the observed CMD rules out the possibility that age variation alone is responsible for the split MS of NGC\,1866.
In contrast, the split MS is well matched by stellar populations with distinct rotation rates.
Specifically, the red MS is consistent with a $\sim$200-Myr old population of fast-rotating stars with $\omega=0.9 \omega_{\rm c}$ while the blue MS is reproduced by non-rotating stars.
The adopted stellar models with different age and rotation rates roughly reproduce the eMSTO. As suggested by D'Antona et al.\,(2015), we speculate that the poor fit is mostly due to second-order parameters adopted in the models that affect the hydrogen-burning phase.
Noticeably, it appears that rotation alone is not able to fully reproduce the observations of NGC\,1866. Indeed, while the majority of blue-MS stars are possibly coeval, or about 10\% older than the red MS, the upper part of the blue MS is reproduced by a $\sim 30$\% younger ($\sim$140-Myr old) stellar population including $\sim$15\% of the total number of MS stars.
We are aware that the results presented in this work open many more questions than they solve. We regard the preliminary reproduction of the observational features of the CMD, proposed here and shown in Figure~\ref{fig:Rotage}, more as a provocative message than as a trustworthy interpretation. Is it realistic to think that stars born in possibly three different epochs of star formation ($\sim$140, $\sim$200 and $\sim$220 Myr ago) are present in the cluster? The first consequence of this scenario would be that the first star-formation event gives birth to a non-rotating population, the second one to a fast-rotating population, and the final burst, $\sim$60 Myr later, would include only non-rotating stars. This scheme seems too complex to be realistic, and we should reject it. Nevertheless, the multiple-age scheme may be telling us something on the evolution of these stars.
Also the increase of the blue MS fraction with increasing distance from the cluster center would argue against the younger age of the blue population.
Indeed, if there is an analogy with the multiple populations in old GCs, second-generation stars are expected to be more centrally concentrated than the first population (e.g.\,D'Ercole et al.\,2016 and references therein).
Our interpretation of the split MS of NGC\,1866 is similar to that of D'Antona et al.\,(2015) who have shown that the split MS and the eMSTO of the $\sim$350-Myr old LMC cluster NGC\,1856 are consistent with two stellar populations with different rotation rates. In this framework, D'Antona and collaborators attributed the presence of a fraction of non-rotating stars to the braking mechanism of dynamical tides acting on the convective H-burning cores of B-A type stars due to the presence of a binary companion (Zahn 1977). In fact, observations show that binaries of these spectral types with periods from 4 to 500 days are synchronized, so they are all slowly rotating (Abt \& Boonyarak 2004). If this phenomenon is responsible for the slowly rotating fraction of stars, we could expect that there are more binaries with such adequate periods in the external parts of the cluster.
While it can be straightforward to interpret the populations in the cluster NGC\,1856, about a factor of two older, by means of two coeval populations with 70\% of fast-rotating stars and $\sim$30\% of slowly rotating stars (D'Antona et al.\,2015, see above), the case of NGC\,1866 looks much more complex.
A possible solution is that the stellar populations of NGC\,1866 are coeval in close analogy with NGC\,1856.
As demonstrated in this paper, the recent rotating models (e.g.\,Ekstr{\"o}m et al.\,2013; Dotter 2016) do a superb job of reproducing the main features of the observed CMDs. Nevertheless, the way rotation manifests itself in real stars may not be fully captured by the rotating models.
If we wish to properly assess this possibility we must investigate much more deeply the evolution of rotating stars in the braking phase, and examine clusters of different ages, to explain the CMDs of these clusters in a coherent scheme. We postpone this analysis to a work in preparation by our group (D'Antona et al.\,in preparation).
\section*{acknowledgments}
\small
We thank the anonymous referee for several suggestions that have improved the quality of this manuscript.
APM, AFM and HJ acknowledge support by the Australian Research Council through Discovery Early Career Researcher Awards DE150101816 and DE160100851 and Discovery project DP150100862.
\bibliographystyle{aa}
\section{Introduction}
\label{sec:introduction}
Nowadays, FPGAs are integrated in high-performance computing
systems and servers, or even used as accelerators in System-on-Chip (SoC) platforms.
Since the execution is performed in hardware, FPGAs give much higher performance
and lower energy consumption than most microprocessor-based systems.
However, there is still room to improve FPGA utilization, e.g., when a device is
shared by multiple users.
In multi-user approaches, FPGA resources are shared between several users.
Therefore, one must be able to interrupt a running circuit
at any given time and continue the task at will.
An image of the state of the running circuit (context) is
saved during interruption and restored when the execution is continued.
The ability to extract and restore the context is known as context-switch.
In previous work \cite{bourge_automatic_2015}, an automatic checkpoint selection
method was proposed for circuit generation targeting reconfigurable systems.
The method relies on static analysis of the finite state machine of a circuit
to select the checkpoint states. States with minimum overhead
are selected as checkpoints, which allows an optimal context save
and restore. The maximum time to reach a checkpoint
is defined by the user and considered as the context-switch latency.
The method is implemented in C code and integrated as a plugin in the free and open-source
High-Level Synthesis tool AUGH \cite{prost-boucle_fast_2014}.
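As a rough illustration of the underlying idea, the sketch below selects checkpoint states on an FSM given as a successor map; the greedy strategy, the per-state overhead measure, and all names are simplifying assumptions and do not reproduce the actual selection algorithm of \cite{bourge_automatic_2015}.
\begin{verbatim}
from collections import deque

def max_dist_to_checkpoints(succ, checkpoints):
    """Worst-case number of FSM transitions to reach the nearest
    checkpoint, via reverse BFS from the checkpoint set."""
    pred = {s: [] for s in succ}
    for s, nxt in succ.items():
        for t in nxt:
            pred[t].append(s)           # assumes every state is a key
    dist = {s: None for s in succ}
    q = deque(checkpoints)
    for c in checkpoints:
        dist[c] = 0
    while q:
        s = q.popleft()
        for p in pred[s]:
            if dist[p] is None:
                dist[p] = dist[s] + 1
                q.append(p)
    return max(d if d is not None else float("inf") for d in dist.values())

def select_checkpoints(succ, overhead, latency):
    """Greedy: add the cheapest states (fewest registers to save) until
    every state reaches a checkpoint within `latency` transitions."""
    chosen = set()
    for s in sorted(succ, key=lambda s: overhead[s]):
        if max_dist_to_checkpoints(succ, chosen) <= latency:
            break
        chosen.add(s)
    return chosen
\end{verbatim}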
\section{Demonstration}
In this demonstration, we present the context-switch method \cite{bourge_automatic_2015}
implemented in heterogeneous reconfigurable systems using a network-connected framework.
SoC-FPGA platforms with a CPU tightly coupled with an FPGA are used in the framework.
The demonstration framework consists of two FPGA-SoC platforms, a server with
the tool-chain installed, and a network storage disk.
We use two different platforms in the framework, a ZC706 Evaluation Board from Xilinx and
an Arria V SoC Development Kit from Altera, to show the genericity of our method
even for FPGAs from different vendors. The server provides the tool-chain suites of the
corresponding platforms for configuration purposes, and the network storage is
implemented with the NFS protocol.
A graphical and a non-graphical application will be used to perform context-switches on the boards. These applications consume test vectors as input and perform the computations.
For the graphical application, an output video will be prepared for each board so that attendees can follow from the screen.
For the non-graphical application, the flow of the execution will be observed from the user's PC connected to the server.
Figure \ref{fig:system} shows the general overview of our demonstration framework and
the execution flow.
A typical execution flow is given below; a toy sketch of the boards' polling behavior
follows the list. Other variations of boards and steps are possible.
\begin{enumerate}
\item A user connects to \emph{Server} either via a local connection or the framework's network, puts its
application on the server, and launches the execution.
\item The configuration of the circuit is generated in \emph{Server} and saved in \emph{Network
Storage} with its respective input test vectors.
\item The CPU on \emph{Xilinx Board}, which detects the configuration file and the
test vectors in \emph{Network Storage}, programs the FPGA and launches the execution. When there is an interruption,
it retrieves the context from the FPGA and saves it in \emph{Network Storage}.
\item The CPU on \emph{Altera Board}, which detects the bitstream and the context in \emph{Network Storage},
programs the FPGA and continues the execution. When there is no other interruption, it finishes the
execution and puts the results in \emph{Network Storage}.
\end{enumerate}
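The boards' behavior in steps 3 and 4 can be pictured with the toy polling loop below; the file names, the callbacks for programming the FPGA and running the circuit, and the one-second poll period are all illustrative placeholders.
\begin{verbatim}
import os, time

CFG, CTX, RESULT = "design.rbf", "context.bin", "result.bin"  # illustrative

def board_daemon(nfs_dir, program_fpga, run_until_event):
    """Configure the FPGA when a bitstream (and, if present, a saved
    context) appears on the shared NFS disk; on interruption, save the
    context back; on completion, publish the result."""
    while True:
        cfg = os.path.join(nfs_dir, CFG)
        ctx = os.path.join(nfs_dir, CTX)
        if os.path.exists(cfg):
            program_fpga(cfg, ctx if os.path.exists(ctx) else None)
            finished, context = run_until_event()  # interrupt or completion
            if finished:
                open(os.path.join(nfs_dir, RESULT), "wb").close()
                break
            with open(ctx, "wb") as f:
                f.write(context)
        time.sleep(1.0)
\end{verbatim}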
\begin{figure}[tb]
\centering
\includegraphics[width=0.42\textwidth]{system.pdf}
\raggedleft
\caption{Demonstration Framework and its Typical Execution Flow}
\label{fig:system}
\vspace{-3mm}
\end{figure}
\bibliographystyle{IEEEtran}
\section{Introduction}
A direct search for physics beyond the Standard Model (BSM) is presented in a final state containing two isolated leptons (electrons or muons) with the same electric charge, $b$-jets and missing transverse momentum (\ensuremath{E^{\textrm{{\scriptsize miss}}}_{\textrm{{\scriptsize T}}}})~\cite{atlas_conf}.
The analysis is performed with the dataset recorded in 2015 at the ATLAS experiment~\cite{ATLAS} at the LHC, corresponding to an integrated luminosity of 3.2 fb$^{-1}$\ of $pp$ collisions at a center-of-mass energy of 13 TeV.
The search is sensitive to various signatures, in particular those containing four top quarks.
The Standard Model (SM) production of four top quarks is very small, but could be enhanced in various BSM scenarios.
At very high energies, this process can be described by an effective four-fermion contact interaction~\cite{CI}.
A Randall-Sundrum model with SM fields in the bulk~\cite{KK} is also considered in this search.
Feynman diagrams corresponding to the leading order SM contribution and to the contact interaction are shown in Figure~\ref{fig:diag_4tops}.
The search is also sensitive to the production of vector-like $T$, $B$, and $T_{5/3}$ quarks.
Several signal regions targeting the different scenarios are defined and described in Section~\ref{sec:signal}.
The estimation of the background contributions is described in Section~\ref{sec:bkg}.
The systematic uncertainties affecting the search are described in Section~\ref{sec:systs}, and the results are given in Section~\ref{sec:results}.
\begin{figure}[htb]
\centering
\includegraphics[width=0.27\linewidth]{diag_4tops_SM.png}\hspace{1cm}
\includegraphics[width=0.27\linewidth]{fig_01b.pdf}
\caption{Feynman diagrams showing four top quark production from the leading order SM contribution (left) and from contact interaction (right).}
\label{fig:diag_4tops}
\end{figure}
\section{Definition of the signal regions}
\label{sec:signal}
This search is performed by defining eight signal regions sensitive to different BSM scenarios, and by comparing the expected number of events in the SM with the total number of events observed in data.
Preselected events contain at least two leptons, of which one pair has same electric charges.
The variables which are the most discriminant against the SM background and that allow to separate the different signals are: the number of jets containing $b$-hadrons ($N_b$), the \ensuremath{E^{\textrm{{\scriptsize miss}}}_{\textrm{{\scriptsize T}}}}\ and the scalar sum of the transverse momenta of jets and leptons (\ensuremath{H_{\textrm{{\scriptsize T}}}}).
Distributions of these variables for the background and signals are presented in Figure~\ref{fig:distr_ht_met}.
The signal regions are presented in Table~\ref{tab:SR_def}.
\begin{figure}[htb]
\centering
\includegraphics[width=0.3\linewidth]{fig_03a.pdf}
\includegraphics[width=0.3\linewidth]{fig_03b.pdf}
\includegraphics[width=0.3\linewidth]{fig_03c.pdf}
\caption{Distributions of \ensuremath{H_{\textrm{{\scriptsize T}}}}\ (left), \ensuremath{E^{\textrm{{\scriptsize miss}}}_{\textrm{{\scriptsize T}}}}\ (middle) and $N_b$ (right) for the background and different signals~\cite{atlas_conf}.}
\label{fig:distr_ht_met}
\end{figure}
\begin{table}[htb]
\begin{center}
\caption{Definition of the signal regions~\cite{atlas_conf}.}
\includegraphics[width=0.6\linewidth]{tab_04.pdf}
\label{tab:SR_def}
\end{center}
\end{table}
\vspace{-1cm}
\section{Background estimation}
\label{sec:bkg}
The background is composed of three categories, each contributing roughly to one third of the total background contribution.
The first category corresponds to SM processes with two same-sign leptons in the final state. The main contributions are from the associated production of a $t\bar{t}$ pair and a gauge boson ($W$ or $Z$), diboson or triboson processes, as well as $t\bar{t}H$ and three top quark production.
They are estimated using Monte Carlo (MC) simulation.
The second category corresponds to events with so-called ``fakes'' or non-prompt leptons. Events from this category contain either a jet which has been reconstructed as an electron, or a lepton produced from a semi-leptonic decay of a $b$-hadron which passes the isolation criteria. This background is estimated using the matrix method applied on data~\cite{MM}.
The last category corresponds to events where the charge of an electron is misreconstructed. The contribution from this background is estimated using $Z \to ee$ events in a control region enriched in such events.
Some control regions, close but orthogonal to the signal regions are defined to check the estimation of the backgrounds.
Two examples are shown in Figure~\ref{fig:dataMCcomp}.
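To fix ideas, the single-lepton version of the matrix method reduces to inverting a $2\times2$ system; the sketch below uses purely illustrative numbers for the loose/tight yields and efficiencies, while the analysis itself applies the dilepton generalization of \cite{MM}.
\begin{verbatim}
def fake_estimate(n_loose, n_tight, eff_real, eff_fake):
    """Single-lepton 2x2 matrix method: solve
        n_loose = N_real + N_fake
        n_tight = eff_real * N_real + eff_fake * N_fake
    and return the fake-lepton yield in the tight (signal) selection."""
    n_fake = (eff_real * n_loose - n_tight) / (eff_real - eff_fake)
    return eff_fake * n_fake

# Illustrative numbers only: 1000 loose events, 640 tight,
# eff_real = 0.9 and eff_fake = 0.2 measured in control samples.
print(fake_estimate(1000.0, 640.0, 0.9, 0.2))   # ~74.3 fake events
\end{verbatim}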
\begin{figure}[htb]
\centering
\includegraphics[width=0.3\linewidth]{fig_02a.pdf}\hspace{1cm}
\includegraphics[width=0.3\linewidth]{fig_02b.pdf}
\caption{Distributions of \ensuremath{H_{\textrm{{\scriptsize T}}}}\ (left) and \ensuremath{E^{\textrm{{\scriptsize miss}}}_{\textrm{{\scriptsize T}}}}\ (right) in the control region with low \ensuremath{H_{\textrm{{\scriptsize T}}}}\ and at least one $b$-jet~\cite{atlas_conf}. The bottom panel of each plot shows the data-to-background ratio.}
\label{fig:dataMCcomp}
\end{figure}
\section{Systematic uncertainties}
\label{sec:systs}
The effect of various systematic uncertainties are estimated on the background and signal yields.
The luminosity uncertainty is affecting both of them and is of 2.1\%. The detector modeling uncertainties are between 1 and 21\% on the signals, and 1 and 7\% on the backgrounds.
The main uncertainties on the backgrounds are on the estimation of the fake lepton background (54\%) and on the charge misidentification (25\%).
In both cases, the uncertainties are obtained by varying the control regions used in data and the MC models used to simulate real background contributions in these control regions. A large part from the uncertainty also comes from the limited available statistics of data.
The uncertainties on the cross-sections of the processes estimated from MC simulation are between 8 and 57\%.
\section{Results and conclusion}
\label{sec:results}
After the background and systematic estimations, the observed number of events in each signal region is compared to the expected one.
The comparison is shown in Figure~\ref{fig:SR_yields}.
\begin{figure}[htb]
\centering
\includegraphics[width=0.5\linewidth]{fig_04.pdf}
\caption{Comparison between the observed and expected numbers of events in each signal region~\cite{atlas_conf}. The bottom panel shows the data-to-background ratio.}
\label{fig:SR_yields}
\end{figure}
No excess is observed in data, and limits are set on the various BSM physics models using a profile log-likelihood ratio.
For the four top quark production, limits are set on the cross-section. For SM kinematics, the limit at 95\% confidence level is 95~fb (with an expected limit of 107~fb). This corresponds to approximately ten times the SM cross-section. For the contact interaction model, the limit is 67~fb (79~fb expected). Limits are also set on $C_{4t}$ and $\Lambda$, the coupling and energy scale used to parametrize the contact interaction model. These are shown in Figure~\ref{fig:limits} together with the limits derived on the mass of a Kaluza-Klein resonance.
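As a toy counterpart of this statistical procedure, a 95\% CL upper limit in a single counting experiment with a known background $b$ can be sketched as follows; the actual analysis uses a profile log-likelihood ratio over all signal regions with systematic uncertainties, so the numbers below are illustrative only.
\begin{verbatim}
from scipy.stats import poisson
from scipy.optimize import brentq

def upper_limit(n_obs, b, cl=0.95):
    """Smallest signal yield s with P(n <= n_obs | b + s) = 1 - cl
    (simple CLs+b limit for one Poisson counting experiment)."""
    f = lambda s: poisson.cdf(n_obs, b + s) - (1.0 - cl)
    return brentq(f, 0.0, 1000.0)

print(upper_limit(n_obs=3, b=2.5))   # illustrative numbers only
\end{verbatim}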
\begin{figure}[htb]
\centering
\includegraphics[width=0.45\linewidth]{fig_10a.pdf}
\includegraphics[width=0.45\linewidth]{fig_10b.pdf}
\caption{Limits on the coupling and energy scale of the contact interaction model (left) and on the mass of a Kaluza-Klein resonance leading to four top quark production (right)~\cite{atlas_conf}.}
\label{fig:limits}
\end{figure}
\section{Introduction}
\label{s-intro}
For several decades, CPUs have doubled their speed every two years in what is commonly known as Moore's law, but the storage technology has not been able to keep up with this trend: magnetic hard drives have steadily increased their capacity, but not their speed. Current computers and communication networks are not limited by the speed at which information can be processed, but rather by the speed at which it can be read, moved, and written. Furthermore, the recent information explosion is driving an exponential increase in the demand for data, which is not expected to slow down any time soon. Users and applications demand more data at higher speeds, straining the devices and networks to their maximum capabilities.
The IT industry has addressed this problem through parallelism and caching: instead of using a single high capacity storage drive to serve all the requests, networks usually distribute popular files across multiple independent servers that can operate in parallel and cache part of the information at intermediate or final nodes.
This paper proposes and analyzes multiple caching mechanisms for multi-server systems with different system parameters. Previous literature has addressed coded caching for single server systems and distributed storage without caching but, to the best of our knowledge, this is the first work that considers both coded caching at the users and distributed storage at the servers.
Furthermore, it provides solutions for systems with and without file striping (\ie with files split among multiple servers and with whole files stored in each server).
Distributed storage deals with how the information is stored at the servers. Disk failures are very common in large storage systems, so they need to have some amount of redundancy. Erasure codes have recently sparked renewed interest from the research community for this task. Files are encoded and distributed among a set of nodes (disks, servers, etc.) in such a way that the system can recover from the failure of a certain number of nodes~\cite{dimakis2010network}, \cite{ernvall2013capacity}. One widely used distributed storage technique based on erasure codes is RAID (redundant array of independent disks). It combines multiple storage nodes (disks, servers, etc.) into a single logical unit with data redundancy. Two of the most common variants are RAID-4 and RAID-6, consisting of block-level striping with one and two dedicated parity nodes, respectively \cite{corbett2004row, plank2009raid}. Most large-scale systems use some form of RAID with striping across multiple storage drives, but store or replicate whole files as a single unit in the network nodes (\eg data centers)~\cite{balasubramanian2014sap}. This increases the peak rate, but it also simplifies book-keeping and deduplication, improves security, and makes the network more flexible.
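For concreteness, the block-level striping with a dedicated XOR parity node used in RAID-4 can be sketched in a few lines of Python; the payload and the number of data nodes are arbitrary examples.
\begin{verbatim}
def raid4_stripe(data: bytes, n_data: int):
    """Split data into n_data blocks plus one XOR parity block."""
    if len(data) % n_data:
        data += b"\x00" * (n_data - len(data) % n_data)  # zero-pad
    size = len(data) // n_data
    blocks = [data[i * size:(i + 1) * size] for i in range(n_data)]
    parity = bytearray(size)
    for blk in blocks:
        for i, byte in enumerate(blk):
            parity[i] ^= byte
    return blocks, bytes(parity)

def raid4_recover(blocks, parity, lost):
    """Rebuild the single failed data node from survivors and parity."""
    rec = bytearray(parity)
    for j, blk in enumerate(blocks):
        if j != lost:
            for i, byte in enumerate(blk):
                rec[i] ^= byte
    return bytes(rec)

blocks, parity = raid4_stripe(b"some file content", 4)
assert raid4_recover(blocks, parity, lost=2) == blocks[2]
\end{verbatim}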
Coded caching deals with the high temporal variability of network traffic: the peak traffic in the network is reduced by pre-fetching popular content in each receiver's local cache memory during off-peak hours, when resources are abundant. Coded caching has also recently become quite popular among the coding community, starting with the work by Maddah-Ali and Niesen in~\cite{maddah2014fundamental}, which focused on how a set of users with local memories can efficiently receive data from a single server through a common link. Their seminal paper proposed a caching and delivery scheme offering a worst case performance within a constant factor of the information-theoretic optimum, as well as upper and lower bounds on that optimum. The lower bounds were later refined in~\cite{ghasemi2015improved} and new schemes were designed to consider non-uniform file sizes and popularity~\cite{niesen2014coded,zhang2015coded,zhang2015code}; multiple requests per user~\cite{ji2014caching, ji2015caching}; variable number of users~\cite{hachem2015effect}; and multiple servers with access to the whole library of files~\cite{shariatpanahi2015multi}.
Maddah-Ali and Niesen's work in \cite{maddah2014fundamental} caches the information uncoded and encodes the transmitted packets. This scheme performs well when the cache size is relatively large, but a close inspection shows that there are other cases in which its performance is far from optimal. Tian and Chen's recent work in \cite{tian2016caching} designs a new algorithm which encodes both the cached and transmitted segments to achieve better performance than~\cite{maddah2014fundamental} when the cache size is small or the number of users is greater than the number of files. However, this scheme also focuses on a single-server system.
In summary, prior work on distributed storage has studied how a single user can efficiently recover data distributed across a set of nodes, and prior work on coded caching has studied how a set of users with local memories can efficiently receive data from a single node. However, to the extent of our knowledge, it has not been studied how cache placement and content delivery should be performed when multiple nodes send data to multiple users through independent channels.
In this paper, we aim to design a joint storage and transmission protocol for the multi-server multi-user system. We combine distributed storage with coded caching, utilizing parallelism and redundancy to reduce the peak traffic rate. The main contributions of our paper are: (1) a flexible model for multi-server systems where each file can either be divided among multiple servers or kept as a single block in one server; (2) an extension of the coded caching algorithms in \cite{maddah2014fundamental} and \cite{tian2016caching} to striping multi-server systems; (3) new caching and delivery schemes with significantly lower peak rates for the case when files are stored as a single unit in a data server.
The rest of the paper is structured as follows: Section \ref{s-background} introduces the system model and two existing coded caching algorithms for single server systems, namely the one proposed by Maddah-Ali and Niesen in~\cite{maddah2014fundamental} and the interference elimination scheme in~\cite{tian2016caching}. Section~\ref{s-striping} extends both algorithms to a multi-server system with file striping, while Sections~\ref{s-ali}~and~\ref{s-interference} consider the case where servers store whole files. Specifically, Section~\ref{s-ali} extends Maddah-Ali and Niesen's scheme, suitable for systems with large cache capacity, and Section~\ref{s-interference} extends the interference elimination scheme, which provides better performance when the cache size is small. Finally, Section~\ref{s-simulation} provides simulations to support and illustrate our algorithms and Section~\ref{s-conclude} concludes the paper.
\section{Background}\label{s-background}
This section describes the multi-server multi-user model in subsection~\ref{ss-sysmodel} and then reviews the two existing coded caching schemes that constitute the basis for our algorithms. Subsection~\ref{ss-basic} summarizes Maddah-Ali and Niesen's coded caching scheme from~\cite{maddah2014fundamental} and subsection~\ref{ss-interference} summarizes Tian and Chen's interference elimination scheme from~\cite{tian2016caching}.
\subsection{System Model}
\label{ss-sysmodel}
We consider a network with $K$ users\footnote{Servers and users can be anything from a single disk to a computer cluster, depending on the application.} and $N$ files stored in $L$ data servers. Some parts of the paper will also include additional parity servers, denoted parity server $P$ when storing the bitwise XOR of the information in the data servers (RAID-4) and parity server $Q$ when storing a different linear combination of the data (RAID-6). The network is assumed to be flexible, in the sense that there is a path from every server to every user \cite{shariatpanahi2015multi}. Each server stores the same number of files with the same size and each user has a cache with capacity for $M$ files.
For the sake of simplicity, this paper assumes that all files have identical length and popularity.
The servers are assumed to operate on independent error-free channels, so that two or more servers can transmit messages simultaneously and without interference to the same or different users. A server can broadcast the same message to multiple users without additional cost in terms of bandwidth, but users cannot share the content of their caches with each other. This assumption is reasonable in a practical setting, since peer-to-peer sharing of cached content is generally disallowed for legal reasons and users typically have an asymmetric channel, with large download capacity but limited upload speed.
Similarly, each server can only access the files that it is storing, not those stored on other servers. A server can read multiple segments from its own files and combine them into a single message, but two files stored on different servers cannot be combined into a single message. However, it will be assumed that servers are aware of the content cached by each user and of the content stored in other servers, so that they can coordinate their messages. This can be achieved by exchanging segment IDs through a separate low-capacity control channel or by maintaining a centralized log.
The problem consists of two phases: placement and delivery. During the placement phase, the content is stored in the users' caches. The decisions on where to locate each file, how to compute the parity, and what data to store in each cache are made based on the statistics of each file's popularity (uniform in our case), without knowledge of the actual user requests. The delivery phase starts with each user requesting one of the files. All servers are made aware of these requests and proceed to send the necessary messages.
Throughout the paper, we use subindices to represent file indices and superindices to represent segment indices, so $F_i^j$ represents the $j$-th segment from file $F_i$. Some parts of the paper also use different letters to represent files from different servers; for example, $A_i$ represents the $i$-th file from server $A$ and $A_i^j$ the $j$-th segment from file $A_i$. The paper focuses on minimizing the peak rate (or delay), implicitly assuming that different users request different files. Therefore, we will refer indistinctly to users or their requests.
\subsection{Maddah-Ali and Niesen's scheme}
\label{ss-basic}
The coded caching scheme proposed by Maddah-Ali and Niesen in~\cite{maddah2014fundamental} has a single server storing all the files $\{F_1,F_2\ldots,F_N\}$, and users are connected to this server through a shared broadcast link. Their goal is to design caching and delivery schemes so as to minimize the peak load on the link, \ie the total amount of information transferred from the server to the users.
This scheme splits each file $F_i$ into $\binom{K}{t}$ nonoverlapping segments $F_i^j$ of equal size, $j=1,\ldots \binom{K}{t}$, with $t=\frac{KM}{N}$, and caches each segment in a distinct group of $t$ users. In other words, each subset of $t$ users is assigned one segment from each file for all the users to cache\footnote{Parameter $t$ is assumed to be an integer for the sake of symmetry. Otherwise some segments would be cached more often than others, requiring special treatment during the delivery phase and complicating the analysis unnecessarily.}. In the delivery phase the server sends one message to each subset of $t+1$ users, for a total of $\binom{K}{t+1}$ messages.
This caching scheme ensures that, regardless of which files have been requested, each user in a given subset of $t+1$ nodes is missing a segment that all the others have in their cache. The message sent to that subset of nodes consists of the bitwise XOR of all $t+1$ missing segments: a set of users $\mathbf{S}$ requesting files $F_{i_1}, F_{i_2},\ldots, F_{i_{t+1}}$ would receive the message
\begin{equation}
m^{\mathbf{S}}=F_{i_1}^{j_1}\oplus F_{i_2}^{j_2}\oplus\cdots \oplus F_{i_{t+1}}^{j_{t+1}}, \label{e-MNmessage}
\end{equation}
where $j_k$ is the index for the segment cached by all the users in the set except the one requesting $F_{i_k}$. Each user can then cancel out the segments that it already has in its cache to recover the desired segment. In the worst case, \ie when all users request different files, this scheme yields a (normalized by file size) peak rate of
\begin{align}
R_C(K,t) & = \binom{K}{t+1}/\binom{K}{t}\nonumber \\
& = K(1-M/N)\frac{1}{1+KM/N}. \label{e-RC_binom}
\end{align}
Under some parameter combinations, broadcasting all the missing segments uncoded could require lower rate than $R_C(K,t)$, so the generalized peak rate is
\begin{align}
\min\left\{R_C(K,t),N-M\right\}\nonumber
\end{align}
but this paper will ignore those pathological cases, assuming that $N$, $M$, and $K$ are such that $R_C(K,t)\leq N-M$. It has been shown that this peak rate is the minimum achievable for some parameter combinations and falls within a constant factor of the information-theoretic optimum for all others~\cite{maddah2014fundamental,ghasemi2015improved}.
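As a concrete illustration (our own sketch, not part of the original scheme), the following Python snippet evaluates the generalized peak rate for hypothetical parameter values; it assumes $t=KM/N$ is an integer, as in the rest of this section:
\begin{verbatim}
from math import comb

def maddah_rate(K, N, M):
    # Normalized peak rate R_C(K,t) of the single-server scheme,
    # assuming t = K*M/N is an integer.
    t = K * M // N
    assert t * N == K * M, "t = K*M/N must be an integer"
    r_c = comb(K, t + 1) / comb(K, t)
    # Broadcasting everything uncoded may be cheaper: min(R_C, N - M).
    return min(r_c, N - M)

print(maddah_rate(K=6, N=8, M=4))  # t = 3, R_C = 15/20 = 0.75
\end{verbatim}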
This scheme, henceforth referred to as ``Maddah's scheme'', will be the basis for multiple others throughout the paper. It is therefore recommended that the reader have a clear understanding of it before proceeding.
\subsection{Interference Elimination}
\label{ss-interference}
A close examination of Maddah's algorithm reveals that it has poor performance when the cache is small and $N\leq K$. Thus, a new coded caching scheme based on interference elimination was proposed by Tian and Chen in~\cite{tian2016caching} for the case where the number of users is greater than the number of files. Instead of caching file segments in plain form, they propose that the users cache linear combinations of multiple segments. After formulating the requests, undesired terms are treated as interference that needs to be eliminated to recover the requested segment. The transmitted messages are designed to achieve this using maximum distance separable (MDS) codes~\cite{blom1984optimal,suh2011exact}.
In the placement phase, this scheme also splits each file into $\binom{K}{t}$ non-overlapping segments of equal size and each segment is cached by $t$ users, albeit combined with other segments. Let $F_{i}^{\mathbf{S}}$, where $\mathbf{S}\subseteq \{1,2,\ldots,K\}$ and $\left|\mathbf{S}\right|=t$, denote the file segment from file $F_i$ chosen to be cached by the users in $\mathbf{S}$. In the placement phase user $k$ collects the file segments
\begin{equation}\label{e-segments_interf}
\{F_{i}^{\mathbf{S}}| i\in \{1,2,\ldots,N\},k\in\mathbf{S}\},
\end{equation}
($P=\binom{K-1}{t-1}N$ in total), encodes them with an MDS code $\mathcal{C}(P_0,P)$ of length $P_0=2\binom{K-1}{t-1}N-\binom{K-2}{t-1}(N-1)$, and stores the $P_0-P$ parity symbols in its cache.
The delivery phase proceeds as if all the files had been requested: when only some files are requested, the scheme reassigns some users' requests to the unrequested files and proceeds as if all files were requested. A total of $\binom{K-1}{t}$ messages are transmitted (either uncoded or coded) for each file $F_i$, regardless of the requests. Uncoded messages provide the segments that were not cached by the users requesting $F_i$, while coded messages combining multiple segments from $F_i$ are used to eliminate the interference in their cached segments. Each user gathers $\binom{K-2}{t-1}(N-1)$ useful messages which, together with the $P_0-P$ parity symbols stored in its cache, are enough to recover all $P$ information symbols of the $\mathcal{C}(P_0,P)$ MDS code. A more detailed description of the messages can be found in~\cite{tian2016caching}.
Therefore, the total number of messages transmitted from the server is $N\binom{K-1}{t}$. In this interference elimination scheme, the following normalized $(M,R)$ pairs are achievable:
\begin{equation}\label{e-eliminationrate}
\left(\frac{t\left[(N-1)t+K-N\right]}{K(K-1)},\frac{N(K-t)}{K}\right),\ t=0,1,\ldots,K.
\end{equation}
This scheme is shown to improve the inner bound given in~\cite{maddah2014fundamental} for the case $N\leq K$ and has a better performance than the algorithm in subsection \ref{ss-basic} when the cache capacity is small.
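For reference, the achievable trade-off in Eq.~(\ref{e-eliminationrate}) is easy to tabulate. The following sketch (our own illustration, with hypothetical parameter values) lists the normalized $(M,R)$ pairs:
\begin{verbatim}
def interference_pairs(K, N):
    # Normalized (M, R) pairs of the interference elimination
    # scheme for t = 0, 1, ..., K.
    return [(t * ((N - 1) * t + K - N) / (K * (K - 1)),
             N * (K - t) / K) for t in range(K + 1)]

for M, R in interference_pairs(K=6, N=4):
    print(f"M = {M:.3f}, R = {R:.3f}")
\end{verbatim}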
\subsection{Extension to multiple servers}\label{ss-multiple}
Both of the previous schemes assume that a single server stores all the files and can combine any two segments into a message. Then, they design a list of messages to be broadcast by the server, based on the users' requests. In practice, however, it is often the case that content delivery networks have multiple servers and throughput is limited by the highest load on any one server rather than by the total traffic in the link between servers and users. Shariatpanahi et al. addressed this case in~\cite{shariatpanahi2015multi}, but still assumed that all servers had access to all the files and could therefore compose any message. They proposed a load balancing scheme distributing the same list of messages among all the servers, scaling the peak rate by the number of servers.
If each server only has access to some of the files, the problem is significantly more complicated. The general case, where each segment can be stored by multiple servers and users, is known as the index coding problem. This is one of the core problems of network information theory but it remains open despite significant efforts from the research community \cite{el2010index,bar2011index,chaudhry2008efficient}. Instead of addressing the index coding problem in its general form, we focus on the case where each data segment is stored in a single server, all caches have the same capacity, and users request a single file.
A simple way to generalize the previous schemes to our scenario is to follow the same list of messages, combining transmissions from multiple servers to compose each of them. Instead of receiving a single message with all the segments as shown in Eq.~(\ref{e-MNmessage}), each node would receive multiple messages from different servers. The peak rate for any one server would then be the same as in a single server system.
With parity servers storing linear combinations of the data, the peak rate can be reduced. In general, distributed storage systems use MDS codes for the parity\footnote{Some systems use repetition or pyramid codes \cite{el2010fractional,yu2014irregular,huang2013pyramid} to reduce the recovery bandwidth, but this paper will focus on MDS codes.}, so any subset of $L$ servers can be used to generate any message. Therefore, a simple balancing of the load by rotating among all subsets of $L$ servers would scale the peak rate by $\frac{L}{L+L'}$, where $L'$ is the number of parity servers. However, we intend to design caching and delivery algorithms capable of further reducing the peak rate of any one server.
\section{File striping}
\label{s-striping}
The simplest way to extend single-server coded caching algorithms to a multi-server system is to spread each file across all servers. That way, each user will request an equal amount of information from each server, balancing the load. This is called data striping \cite{santos2000comparing} and it is common practice in data centers and solid state drives (SSD), where multiple drives or memory blocks can be written or read in parallel. The users then allocate an equal portion of their cache to each server and the delivery is structured as $L$ independent single-server demands. We now proceed to give a detailed description of how striping can reduce the peak rate of Maddah's scheme, but the same idea can be applied to any other scheme.
Each of the $N$ files $\{F_1,F_2\ldots,F_N\}$ is split into $L$ blocks to be stored in different servers and each block is divided into $\binom{K}{t}$ segments. These segments are denoted by $F_{i}^{(j,m)}$, where $i=1,2,\ldots,N$ represents the file number; $j=1,2,\ldots,\binom{K}{t}$ the segment number; and $m=1,2,\ldots,L$ the block number. The $m$-th server is designed to store the $m$-th segment of each file, that is $F_{i}^{(j,m)}$ for every $i$ and $j$.
The placement is the same as in Maddah's scheme. Each segment is cached by $t$ users, with $\{F_{i}^{(j,1)},F_i^{(j,2)},\ldots,F_i^{(j,L)}\}$ being cached by the same user. We notice that each message transmitted by Maddah's scheme in Eq.~(\ref{e-MNmessage}) can be split into $L$ components
\begin{equation}
F_{i_1}^{(j_1,m)}\oplus F_{i_2}^{(j_2,m)}\oplus\cdots \oplus F_{i_{t+1}}^{(j_{t+1},m)}, \label{e-StripingMessage}
\end{equation}
$m=1,2,\ldots,L$ to be sent by different servers.
Then the problem can be decomposed into $L$ independent single-server subproblems with reduced file sizes of $\frac{F}{L}$ bits, where $F$ denotes the size of each original file. The subproblems have the same number of users, files, and cache capacity (relative to the file size) as the global problem.
Since all servers can transmit simultaneously, the peak load is reduced to $\frac{1}{L}$ of that in Eq.~(\ref{e-RC_binom}) (Maddah's single server scheme).
If one additional parity server $P$ is available (RAID-4), it will store the bitwise XOR of the blocks for each file, \ie $F_i^{(j,1)}\oplus F_i^{(j,2)}\oplus \cdots \oplus F_i^{(j,L)}$ for all $i$ and $j$.
Then, server $P$ can take over some of the transmissions, reducing the peak load to $\frac{1}{L+1}$ of that with Maddah's scheme\footnote{The number of segments must be a multiple of $L$ to achieve this reduction, but it is always possible to divide each segment into multiple chunks to fulfil this condition.}. Specifically, instead of having all data servers transmit their corresponding component in Eq.~(\ref{e-StripingMessage}), server $P$ can transmit the XOR of all the components, relieving one data server from transmitting.
The users can combine the rest of the components with this XOR to obtain the missing one.
Similarly, if two additional parity servers $P$ and $Q$ are available (RAID-6), it is possible to choose any $L$ out of the $L+2$ servers to take care of each set of messages in Eq.~(\ref{e-StripingMessage}), thereby reducing the peak rate to $\frac{1}{L+2}$ of that with Maddah's scheme.
A similar process with identical file splitting can be followed for the interference cancelling scheme, achieving the same scaling of the peak rate: $\frac{1}{L}$ when there is no parity, $\frac{1}{L+1}$ with a single parity server, and $\frac{1}{L+2}$ with two parity servers.
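The resulting scalings are summarized by the short sketch below (our own illustration, with hypothetical parameter values); it simply divides the single-server peak rate by the number of servers that can share each transmission:
\begin{verbatim}
from math import comb

def striped_peak_rate(K, t, L, parity=0):
    # Per-server peak rate with striping across L data servers and
    # 0, 1 (RAID-4) or 2 (RAID-6) parity servers: R_C(K,t)/(L+parity).
    return comb(K, t + 1) / comb(K, t) / (L + parity)

for parity in (0, 1, 2):
    print(parity, striped_peak_rate(K=6, t=3, L=4, parity=parity))
\end{verbatim}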
In practice, however, it is often preferred to avoid striping and store whole files as a single unit in each server to simplify the book-keeping, ensure security, and make the network more flexible. The rest of the paper will focus on the case where nodes store entire files, and each user requests a file stored in a specific node.
\section{Scheme 1: Large cache}\label{s-ali}
In this section, we extend Maddah-Ali and Niesen's scheme to the multiple server system. Instead of spreading each file across multiple servers as in Section~\ref{s-striping}, each file is stored as a single unit in a data server, as shown in Table~\ref{t-servers_general}.
\begin{table}
\centering
\begin{tabular}[h]{|c|c|c|c|}
\hline
Server A & Server B & $\cdots$ & Server L\\
\hline
$A_1$ & $B_1$ & $\cdots$ & $L_1$ \\
$A_2$ & $B_2$ & $\cdots$ & $L_2$ \\
\vdots & \vdots & & \vdots \\
$A_r$ & $B_r$ & $\cdots$ & $L_r$ \\
\hline
\end{tabular}
\caption{Files stored in each server}\label{t-servers_general}
\end{table}
The performance of Maddah's scheme in Eq.~(\ref{e-RC_binom}) is highly dependent on the cache capacity $M$. Compared with the interference elimination scheme in subsection~\ref{ss-interference}, the advantage of Maddah's scheme lies in storing file segments in plain form instead of encoding them as linear combinations. This saves some segments from being transmitted in the delivery phase, but it requires larger cache capacities to obtain coded caching gains. Hence, Maddah's scheme is appropriate when the cache capacity is large.
The placement phase of our algorithm is identical to that in the traditional scheme. For example, in a system with $K=6$ users with cache capacity $M=4$ and $N=8$ files, each file is divided into $20$ segments and each segment is stored by $t=3$ users. Table~\ref{t-caches} indicates the indices of the $10$ segments that each user stores, assumed to be the same for all files without loss of generality.
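The mapping in Table~\ref{t-caches} can be generated mechanically: listing the $\binom{6}{3}=20$ triples of users in lexicographic order assigns each segment to the $t=3$ users that cache it. The sketch below (our own illustration) reproduces the table:
\begin{verbatim}
from itertools import combinations

K, t = 6, 3  # t = K*M/N = 6*4/8
for j, users in enumerate(combinations(range(1, K + 1), t), start=1):
    print(f"segment {j:2d}: cached by users {users}")
# segment 1 -> (1, 2, 3), segment 11 -> (2, 3, 4), etc.
\end{verbatim}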
In order to simplify later derivations, we clarify the notation here. Since we consider the peak rate of the storage system, we assume that all users request different files, so each user can be represented by the file that it has requested. Let $\mathbf{S}$ denote a user set and $m_A^{\mathbf{S}}$ the message sent from server $A$ to all the users in $\mathbf{S}$. Furthermore, if $\pmb{\alpha}=\{\alpha_1,\alpha_2,\ldots,\alpha_i\}$ represents a vector of file indices and $\pmb{\gamma}=\{\gamma_1,\gamma_2,\ldots,\gamma_i\}$ represents a vector of segment indices, then $\mathbf{A}_{\pmb{\alpha}}$ represents the set of requests (or users)
\begin{equation}\nonumber
\mathbf{A}_{\pmb{\alpha}}=\{A_{\alpha_1}, A_{\alpha_2}, \ldots, A_{\alpha_i}\}
\end{equation}
and $\mathbf{A}_{\pmb{\alpha}}^{\pmb{\gamma}}$ represents the message
\begin{equation}\nonumber
\mathbf{A}_{\pmb{\alpha}}^{\pmb{\gamma}}=A_{\alpha_1}^{\gamma_1}\oplus A_{\alpha_2}^{\gamma_2}\oplus\ldots \oplus A_{\alpha_i}^{\gamma_i},
\end{equation}
where $A_i^j$ represents the $j$-th segment from the $i$-th file in server $A$.
Similarly, $\mathbf{A}_{\pmb{\alpha}}^{\pmb{\gamma}}\oplus \mathbf{B}_{\pmb{\alpha}}^{\pmb{\gamma}}$ represents the message:
\begin{equation}\nonumber
\mathbf{A}_{\pmb{\alpha}}^{\pmb{\gamma}}\oplus \mathbf{B}_{\pmb{\alpha}}^{\pmb{\gamma}}=(A_{\alpha_1}^{\gamma_1}\oplus B_{\alpha_1}^{\gamma_1})\oplus\ldots\oplus(A_{\alpha_i}^{\gamma_i}\oplus B_{\alpha_i}^{\gamma_i}).
\end{equation}
We first explore the multi-server system without parity servers in subsection~\ref{ss-MoreFilesNoParity}. Then we study a simple system with two data and one parity server in subsection~\ref{ss-algorithm}. Finally, we study the cases with one and two parity servers in subsections~\ref{ss-beyond} and \ref{s-raid6}, respectively.
\subsection{No parity servers}
\label{ss-MoreFilesNoParity}
In a system without redundancy, such as the one shown in Table~\ref{t-servers_general}, the servers cannot collaborate with each other. During the delivery phase, each user is assigned to the server storing the file that it requested, and then each data server transmits enough messages to fulfil its requests. Specifically, following Maddah's scheme, a server receiving $m$ requests would need to transmit $\binom{K}{t+1}-\binom{K-m}{t+1}$ messages, \ie one for each group of $t+1$ users containing at least one of its requesters. The normalized peak rate for that server would therefore be
\begin{equation}\nonumber
\left(\binom{K}{t+1}-\binom{K-m}{t+1}\right)\left/\binom{K}{t}\right.
\end{equation}
The worst case occurs when all users request files from the same server, \ie $m=K$. Then the peak transmission rate is the same as in the single server system.
\subsection{One parity and two data servers}
\label{ss-algorithm}
This section focuses on a very simple storage system with two data servers and a third server storing their bitwise XOR, as shown in Table~\ref{t-servers}.
Although each server can only access its own files, the configuration in Table~\ref{t-servers} allows composing any message by combining messages from any two servers. Intuitively, if server $A$ (or $B$) finishes its transmission task before the other one, it can work with the parity server to help server $B$ (or $A$). This collaborative scheme allows serving two requests for files stored in the same server in parallel, balancing the load and reducing the worst-case peak rate below that achieved without the parity server (see subsection~\ref{ss-MoreFilesNoParity}).
However, there is a better transmission scheme where messages from all three servers are combined to get more information across to the users. The basic idea is to include some unrequested segments, as well as the requested ones, in each message from a data server. If the additional segments are well chosen, they can be combined with messages from the parity server to obtain desired file segments. The algorithm developed in this section is based on this idea.
\begin{table}
\centering
\begin{tabular}[h]{|c|c|c|}
\hline
Server $A$ & Server $B$ & Server $P$\\
\hline
$A_1$ & $B_1$ & $A_1 \oplus B_1$ \\
$A_2$ & $B_2$ & $A_2 \oplus B_2$ \\
$\vdots$ & $\vdots$ & $\vdots$ \\
$A_r$ & $B_r$ & $A_r \oplus B_r$ \\
\hline
\end{tabular}
\caption{Files stored in each server}\label{t-servers}
\end{table}
Just like in Maddah's scheme, data servers will send each message to a set of $t+1$ users and the message will contain the XOR of $t+1$ segments (one for each user). These segments are chosen so that all users except the intended receiver can cancel them out. If the user had requested a file stored by the sender, the message will contain the corresponding segment; otherwise the message will include its complement in terms of the parity in server $P$, {\it i.e.} $A_i^j$ instead of $B_i^j$ and vice versa. Therefore, the contents of each message from server $A$ or $B$ are uniquely determined by the sender and the set of receivers, denoted by $S_1$ or $S_2$ respectively. In the example shown in Table~\ref{t-caches}, the message from server $A$ to $S_1=\{A_1, A_2, A_3, B_4\}$, corresponding to users 1 through 4, will be $m_A^{\mathbf{S_1}}=A_1^{11}\oplus A_2^5\oplus A_3^2\oplus A_4^1$.
\begin{lemma}\label{l-combine}
Let the receivers for servers A and B be
\begin{equation}\nonumber
S_1=\{\mathbf{A}_{\pmb{\alpha}},\mathbf{B}_{\pmb{\beta}},\mathbf{A}_\ast\}\qquad S_2=\{\mathbf{A}_{\pmb{\alpha}},\mathbf{B}_{\pmb{\beta}},\mathbf{B}_\ast\},
\end{equation}
respectively, where $\pmb{\alpha}$ and $\pmb{\beta}$ denote (possibly empty) sets of indices, the $\ast$ denote arbitrary sets, and $S_1\neq S_2$. The corresponding messages are
\begin{equation}\nonumber
m_A^{\mathbf{S_1}}= \mathbf{A}_{\pmb{\alpha}}^{\ast}\oplus\mathbf{A}_{\pmb{\beta}}^{\pmb{\gamma}}\oplus\mathbf{A}_\ast^\ast\qquad m_B^{\mathbf{S_2}}=\mathbf{B}_{\pmb{\alpha}}^{\pmb{\eta}}\oplus\mathbf{B}_{\pmb{\beta}}^{\ast}\oplus\mathbf{B}_\ast^\ast,
\end{equation}
with segment indices chosen so that each user can cancel all but one of the components.
This provides users $\mathbf{B}_{\pmb{\beta}}$ and $\mathbf{A}_{\pmb{\alpha}}$ with some unrequested segments $\mathbf{A}_{\pmb{\beta}}^{\pmb{\gamma}}$ and $\mathbf{B}_{\pmb{\alpha}}^{\pmb{\eta}}$, respectively. Then server $P$ can send the message
\begin{equation}\nonumber
m_P^{\mathbf{S_1\cap S_2}}=(\mathbf{A}_{\pmb{\alpha}}^{\pmb{\eta}}\oplus \mathbf{B}_{\pmb{\alpha}}^{\pmb{\eta}})\oplus(\mathbf{A}_{\pmb{\beta}}^{\pmb{\gamma}}\oplus \mathbf{B}_{\pmb{\beta}}^{\pmb{\gamma}}),
\end{equation}
to $S_1\cap S_2$, so that each user in $S_1$ and $S_2$ obtains a missing segment and those in the intersection obtain two. These three transmissions are equivalent to messages $m^{\mathbf{S_1}}$ and $m^{\mathbf{S_2}}$ as defined in Eq.~(\ref{e-MNmessage}) for Maddah's single server scheme. They both provide the same requested segments to their destinations.
\end{lemma}
\begin{IEEEproof}
All the users in $S_1$ and $S_2$ get at least one desired segment, from the server storing their requested file. Those in $S_1\cap S_2$ also receive an unrequested segment from server $A$ or $B$. It only remains to prove that users in $S_1\cap S_2$ can use this unrequested segment to obtain its complement from $m_P^{\mathbf{S_1\cap S_2}}$.
Without loss of generality, consider user $B_{\beta_i}\in S_1\cap S_2$.
The set of segment indices $\pmb{\gamma}$ in $m_A^{\mathbf{S_1}}$ was chosen so that user $B_{\beta_i}$ caches all the segments except the $\gamma_i$-th. Similarly, the set of indices $\pmb{\eta}$ in $m_B^{\mathbf{S_2}}$ was chosen so that $B_{\beta_i}$ caches all of them (for all files). Therefore, $B_{\beta_i}$ can obtain $A_{\beta_i}^{\gamma_i}$ from $m_A^{\mathbf{S_1}}$ and cancel all terms from $m_P^{\mathbf{S_1\cap S_2}}$ except $A_{\beta_i}^{\gamma_i}\oplus B_{\beta_i}^{\gamma_i}$. Combining both of these yields the desired segment $B_{\beta_i}^{\gamma_i}$. As long as $S_1\neq S_2$, this segment will be different from the one that $B_{\beta_i}$ obtains from $m_B^{\mathbf{S_2}}$, because there is a one-to-one relationship between segment indices and user subsets.
\end{IEEEproof}
Take the case in Table~\ref{t-caches} as an example. Lemma~\ref{l-combine} states that if $S_1=\{A_1, A_2, A_3, B_4\}$ and $S_2=\{A_1, A_2, B_1, B_4\}$, we construct $m_A^{\mathbf{S_1}}$, $m_B^{\mathbf{S_2}}$, and $m_P^{\mathbf{S_1\cap S_2}}$ as:
\begin{align}
m_A^{\mathbf{S_1}}&=A_1^{11}\oplus A_2^5\oplus A_3^2\oplus A_4^1,\nonumber\\
m_B^{\mathbf{S_2}}&=B_1^{14}\oplus B_2^8\oplus B_1^2\oplus B_4^3,\nonumber\\
m_P^{\mathbf{S_1\cap S_2}}&=(A_1^{14}\oplus B_1^{14})\oplus(A_2^8\oplus B_2^8)\oplus (A_4^1\oplus B_4^1).\nonumber
\end{align}
It is easy to verify that these messages are equivalent to two transmissions in Maddah's scheme, specifically those intended for users $\{A_1, A_2, A_3, B_4\}$ and $\{A_1, A_2, B_1, B_4\}$.
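This construction can also be checked programmatically. The sketch below (our own illustration; the \texttt{+} in the output stands for the bitwise XOR $\oplus$) rebuilds $m_A^{\mathbf{S_1}}$ and $m_B^{\mathbf{S_2}}$ from the cache mapping of Table~\ref{t-caches} and the rule that each message carries the sender's file (requested or parity complement) for every receiver:
\begin{verbatim}
from itertools import combinations

K, t = 6, 3
seg = {s: j for j, s in enumerate(combinations(range(1, K + 1), t), 1)}
# user -> requested file, from the last row of the cache table
req = {1: 'A1', 2: 'A2', 3: 'A3', 4: 'B4', 5: 'B1', 6: 'B2'}

def message(sender, users):
    # Each user gets the segment index cached by the other t users;
    # the file always comes from the sender's server (e.g. A4 in
    # place of B4 when server A addresses the user requesting B4).
    terms = []
    for u in sorted(users):
        j = seg[tuple(v for v in sorted(users) if v != u)]
        terms.append(f"{sender}{req[u][1:]}^{j}")
    return " + ".join(terms)

print(message('A', {1, 2, 3, 4}))  # A1^11 + A2^5 + A3^2 + A4^1
print(message('B', {1, 2, 4, 5}))  # B1^14 + B2^8 + B4^3 + B1^2
\end{verbatim}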
\begin{table}
\centering\small
\begin{tabular}[htp]{|c|cccccc|}
\hline
Segment$\setminus$ User & 1 & 2 & 3 & 4 & 5 & 6\\
\hline
1& X&X &X & & & \\
2 &X & X& & X& & \\
3& X& X& & & X& \\
4& X& X& & & &X \\
5& X& &X &X & & \\
6& X& &X & &X & \\
7& X& & X& & &X \\
8& X& & & X&X & \\
9& X& & & X& &X \\
10& X& & & & X&X \\
11& & X&X &X & & \\
12& & X& X& & X& \\
13& & X& X& & & X\\
14& & X& &X &X & \\
15& & X& & X& & X\\
16& & X& & &X &X \\
17& & & X& X& X& \\
18& & & X& X & &X \\
19& & & X& &X &X \\
20& & & &X &X &X \\
\hline
Request& $A_1$ &$A_2$ &$A_3$ &$B_4$& $B_1$&$B_2$\\
\hline
\end{tabular}
\caption{Mapping of file segments to user caches. Each cache stores the same 10 segments for every file, marked with X in the table.}\label{t-caches}
\end{table}
\begin{corollary}\label{c-combine2}
Assume $S_1=\{\mathbf{A}_\ast,\mathbf{B}_{\pmb{\beta}}\}$ and $S_2=\{\mathbf{B}_\ast\}$, \ie it only contains requests for server $B$. Then server $P$ sends $m_P^{\mathbf{B}_{\pmb{\beta}}}=\mathbf{A}_{\pmb{\beta}}^{\pmb{\gamma}}\oplus \mathbf{B}_{\pmb{\beta}}^{\pmb{\gamma}}$ to all the users in $\mathbf{B}_{\pmb{\beta}}$ in Lemma~\ref{l-combine}, so that all the users in $S_1$ and $S_2$ get the same segments as in Maddah's scheme. The same holds switching the roles of $A$ and $B$.
\end{corollary}
\begin{IEEEproof}
This is a particular case of Lemma~\ref{l-combine} when $\pmb{\alpha}$ is empty ($\pmb{\beta}$ can be empty or non-empty).
\end{IEEEproof}
\begin{definition}
If user subsets $S_1$ and $S_2$ fulfill the conditions in Lemma~\ref{l-combine}, we call $(S_1,S_2)$ an \textbf{effective pair}.
\end{definition}
Our goal is to design a scheme equivalent to Maddah's scheme while minimizing the maximum number of messages sent by any server. If two user subsets form an effective pair, the corresponding messages in Maddah's scheme (see Eq.~(\ref{e-MNmessage})) can be replaced by a single transmission from each server. Hence, we wish to make as many effective pairs as possible.
\begin{lemma}\label{l-unpaired}
The peak rate is $\left(\frac{1}{2}+\frac{1}{6}\Delta\right)R_C(K,t)$ for the server system in Table~\ref{t-servers}, where $\Delta$ represents the ratio of unpaired messages and $t=\frac{KM}{N}$.
\end{lemma}
\begin{IEEEproof}
For each effective pair, we can use a single transmission from each server to deliver the same information as two transmissions in Maddah's single server scheme. This contributes $\frac{1}{2}(1-\Delta)R_C(K,t)$ to the total rate. Unpaired messages are transmitted as described in section~\ref{ss-multiple}, that is combining messages from any two out of the three servers. Assuming that this load is balanced among all three servers, the contribution to the total rate is $\frac{2}{3}\Delta R_C(K,t)$. Adding both contributions yields the rate above.
\end{IEEEproof}
The following lemma characterizes the ratio of unpaired user subsets $\Delta$ in the case with symmetric requests (both servers receive the same number of requests).
\begin{lemma}\label{l-pairing}
If the requests are symmetric, then $\Delta=0$ when $t$ is even and $\Delta\leq\frac{1}{3}$ when $t$ is odd. That is, the following peak rate is achievable in the case with symmetric requests:
\begin{displaymath}\numberthis \label{e-rate}
R_T(K,t) = \left\{ \begin{array}{ll}
\frac{1}{2}R_C(K,t)&\textrm{if $t$ is even}\\
&\\
\left(\frac{1}{2}+\frac{1}{6}\Delta\right)R_C(K,t)&\textrm{if $t$ is odd,}
\end{array} \right.
\end{displaymath}
where $R_C(K,t)$ is defined in Eq.~(\ref{e-RC_binom}).
\end{lemma}
\begin{IEEEproof}
A pairing algorithm with these characteristics is presented in the Appendix.
\end{IEEEproof}
Although $\Delta$ can reach $\frac{1}{3}$, in most cases the pairing algorithm in the Appendix performs much better. As an example, Table~\ref{t-caches} has each segment cached by $t=\frac{KM}{N}=3$ users and the normalized peak rate with the pairing algorithm is $\frac{2}{5}$, significantly lower than the $\frac{3}{4}$ with Maddah's single server scheme.
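The effect of the pairing loss is easy to quantify. In the example above, the quoted rate of $\frac{2}{5}$ corresponds to $\Delta=\frac{1}{5}$; the sketch below (our own illustration) evaluates the rate of Lemma~\ref{l-unpaired} for several values of $\Delta$:
\begin{verbatim}
from math import comb

def paired_rate(K, t, delta):
    # Peak rate (1/2 + delta/6) * R_C(K, t), where delta is the
    # fraction of messages left unpaired by the pairing algorithm.
    return (0.5 + delta / 6) * comb(K, t + 1) / comb(K, t)

print(paired_rate(6, 3, 0))      # 0.375: perfect pairing
print(paired_rate(6, 3, 1 / 5))  # 0.400: the 2/5 rate quoted above
print(paired_rate(6, 3, 1 / 3))  # ~0.417: worst case, delta = 1/3
\end{verbatim}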
Finally, we are ready to derive an achievable peak rate for a general set of requests, based on the following lemma.
\begin{lemma}\label{l-combine3}
If $(S_1,S_2)$ form an effective pair, then $S'_1=\{S_1,\mathbf{A}_{\pmb{\alpha}}\}$ and $S'_2=\{S_2,\mathbf{A}_{\pmb{\alpha}}\}$ also form an effective pair of a larger dimension. The same holds when an all-B file set is appended instead of the all-A file set $\mathbf{A}_{\pmb{\alpha}}$.
\end{lemma}
\begin{IEEEproof}
The proof is straightforward by observing that $(S'_1,S'_2)$ still fulfills the conditions in Lemma~\ref{l-combine}.
\end{IEEEproof}
The extension to the asymmetric case is as follows. Let $K_A$ and $K_B$ respectively denote the number of requests for servers $A$ and $B$, and assume $K_A>K_B$ without loss of generality. Divide the $K=K_A+K_B$ requests (or users) into two groups: the first with $K_B$ requests for each server (symmetric demands) and the second with the remaining $K_A-K_B$ requests for server $A$. We then construct effective pairs of subsets of size $t+1$ by appending requests from the second group to effective pairs formed within the first.
\begin{theorem}\label{t-2data1parity_asym}
If the requests are asymmetric, the ratio of unpaired messages is also bounded by $\Delta\leq\frac{1}{3}$. Specifically, if $K_A$ and $K_B$ respectively denote the number of requests for servers $A$ and $B$, assuming $K_A>K_B$ without loss of generality, the following normalized peak rate is achievable:
\begin{align}\label{e-pair}
R(K_A,K_B,t)&=\sum_{l=0}^{t+1}\binom{K_A-K_B}{l}R_T(2K_B,t-l),
\end{align}
where $R_T$ is defined in Eq.~(\ref{e-rate}) and $K=K_A+K_B$.
\end{theorem}
\begin{IEEEproof}
From Lemma~\ref{l-pairing}, $R_T(2K_B,t-l)$ represents the peak rate after pairing all subsets of $t+1-l$ requests from the symmetric group. For each $l=0,1,\ldots,t+1$, we multiply $R_T(2K_B,t-l)$ by the number of possible completions with $l$ requests from the second group, to obtain the peak rate corresponding to subsets with $t+1-l$ requests from the first group and $l$ from the second. Adding them for all $l$ gives Eq.~(\ref{e-pair}).
Since $R_T(i,j)\leq \left(\frac{1}{2}+\frac{1}{6}\Delta\right)R_C(i,j)$ with $\Delta\leq\frac{1}{3}$ by Lemma~\ref{l-pairing}, and $\sum_{l=0}^{t+1}\binom{K_A-K_B}{l}R_C(2K_B,t-l)=R_C(K,t)$ by combinatorial equations, Eq.~(\ref{e-pair}) implies that $R(K_A,K_B,t)\leq \left(\frac{1}{2}+\frac{1}{6}\Delta\right)R_C(K,t)$ with $\Delta\leq\frac{1}{3}$ as defined in Lemma~\ref{l-unpaired}.
\end{IEEEproof}
\begin{corollary}
A peak rate of $\frac{5}{9}R_C(K,t)$ is achievable for a system with two data servers and a parity server.
\end{corollary}
\subsection{One parity and $L$ data servers}\label{ss-beyond}
The previous subsection has discussed the case with two data servers and one parity server, but the same algorithm can be extended to systems with more than two data servers. Intuitively, if there are $L$ data servers and one parity server, any message can be built by combining messages from any $L$ servers. A first approach could be distributing the $\binom{K}{t+1}$ messages in Maddah's scheme across the $L+1$ possible groups of $L$ servers, as proposed in subsection~\ref{ss-multiple}. Each server would then need to send a maximum of $\binom{K}{t+1}\cdot \frac{L}{L+1}$ messages. However, there is a more efficient way of fulfilling the requests based on the algorithms in subsections~\ref{ss-multiple}, \ref{ss-MoreFilesNoParity} and \ref{ss-algorithm}.
\begin{lemma}\label{l-transmit1}
Let $S_1=\{\mathbf{A}_{\pmb{\alpha}},\mathbf{B}_{\pmb{\beta}},\mathbf{A}_\ast,\mathbf{Y}\}$ and $S_2=\{\mathbf{A}_{\pmb{\alpha}},\mathbf{B}_{\pmb{\beta}},\mathbf{B}_\ast,\mathbf{Y'}\}$ be two user subsets, where $\mathbf{Y}$ and $\mathbf{Y'}$ are arbitrary lists of requests for servers $C$ through $L$ and the $\ast$ represent arbitrary (possibly empty) index sets. Then, $S_1$ and $S_2$ can be paired so that servers $A$, $B$ and $P$ require a single transmission to provide the same information as messages $m^{\mathbf{S_1}}$ and $m^{\mathbf{S_2}}$ in Maddah's single server scheme. The other data servers, $C$ through $L$, require a maximum of two transmissions, as shown in paired transmissions in Fig.~\ref{fig:lservers}.
\end{lemma}
\begin{IEEEproof}
The transmissions would proceed as follows:
\begin{enumerate}
\item Servers $C$ through $L$ each send two messages, to $S_1$ and $S_2$. For example, server $C$ would send $m_C^{S_1}$ and $m_C^{S_2}$, providing a desired segment to users requesting files from $C$ and the corresponding $C$-segments to those requesting other files.
\item Server $A$ sends\footnote{It would be enough for $A$ to send $m_A^{\{\mathbf{A}_\ast,\mathbf{A}_{\pmb{\alpha}},\mathbf{B}_{\pmb{\beta}}\}}$ instead of $m_A^{S_1}$, but we use the latter for the sake of simplicity. The same applies to the message from server $B$.} $m_A^{S_1}$, providing a desired segment to users requesting $\{\mathbf{A}_\ast,\mathbf{A}_{\pmb{\alpha}}\}$ and the corresponding undesired A-segments to those requesting $\mathbf{B}_{\pmb{\beta}}$.
\item Server $B$ sends $m_B^{S_2}$, providing a desired segment to users requesting $\{\mathbf{B}_{\pmb{\beta}},\mathbf{B}_\ast\}$ and the corresponding undesired B-segments to those requesting $\mathbf{A}_{\pmb{\alpha}}$.
\item Server $P$ sends $m_P^{\{\mathbf{A}_{\pmb{\alpha}},\mathbf{B}_{\pmb{\beta}}\}}$ to users requesting $\{\mathbf{A}_{\pmb{\alpha}},\mathbf{B}_{\pmb{\beta}}\}$. Using the undesired segments previously received, the users in $\{\mathbf{A}_{\pmb{\alpha}},\mathbf{B}_{\pmb{\beta}}\}$ can solve for the desired $A$ and $B$ segments.
\end{enumerate}
A simple comparison of the requested and received segments shows that these transmissions deliver the same information as messages $m^{\mathbf{S_1}}$ and $m^{\mathbf{S_2}}$ in Maddah's single server scheme.
\end{IEEEproof}
As an example, Table~\ref{t-beyondthree} shows the segments that each user gets in transmissions (1)-(4) when $S_1=\{A_1, A_2, B_1, C_1\}$ and $S_2=\{A_1, B_1, B_2, C_2\}$, respectively corresponding to segments $\{A_1^1,A_2^2,B_1^3,C_1^4 \}$ and $\{A_1^5,B_1^6,B_2^7,C_2^8 \}$.
\begin{table}
\centering\small
\begin{tabular}[htp]{|c|c|c|c|c|c|c|}
\hline
\tiny{Trans.$\setminus$}Req.& $A_1$ & $A_2$ & $B_1$ & $B_2$ & $C_1$ &$C_2$\\
\hline
(1)&$C_1^5$&&$C_1^3$&&$C_1^4$&$C_2^8$\\
\hline
(2)&$A_1^1$&$A_2^2$&$A_1^3$&&&\\
\hline
(3)&$B_1^5$&&$B_1^6$&$B_2^7$&&\\
\hline
(4)&$P_1^5$&&$P_1^3$&&&\\
\hline
in total&$A_1^1,A_1^5$&$A_2^2$&$B_1^3,B_1^6$&$B_2^7$&$C_1^4$&$C_2^8$\\
\hline
\end{tabular}
\caption{Segments received by each user in transmissions (1)-(4) from Lemma~\ref{l-transmit1}, where $P_i^j=A_i^j\oplus B_i^j\oplus C_i^j$.}\label{t-beyondthree}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.18\textwidth,angle=270]{L_server.eps}
\vspace{1em}
\caption{Pairing for a system with $4$ data servers and one parity server. $A,B,C,D$ are data servers and $P$ represents the parity server. An X means that a message is transmitted from the corresponding server.}
\label{fig:lservers}
\end{figure}
\begin{theorem}\label{t-rateLplus1}
The following normalized peak rate is achievable for a system with $L\geq 3$ data servers and one parity server:
\begin{equation}\label{e-oneparity}
R_P(K,t)=\frac{L-1}{L}R_C(K,t),
\end{equation}
where $R_C$ is defined in Eq.~(\ref{e-RC_binom}).
\end{theorem}
\begin{IEEEproof}
First we show that we can deliver $\frac{2}{L}\binom{K}{t+1}$ of the messages in Maddah's scheme using at most $\frac{1}{L}\binom{K}{t+1}$ transmissions from servers $A$, $B$ and $P$; and at most $\frac{2}{L}\binom{K}{t+1}$ transmissions from each of the other servers. This can be done by pairing the messages as shown in Lemma~\ref{l-transmit1}, if they include requests for $A$ or $B$, and by using the scheme in subsection~\ref{ss-MoreFilesNoParity}, if they do not.
Selecting these $\frac{2}{L}\binom{K}{t+1}$ messages can be done as follows: group messages by the number of segments that they have from servers $A$ or $B$. Within each group, we pair the messages as shown in Lemma~\ref{l-transmit1}. This is equivalent to pairing the $A$ and $B$ requests into effective pairs according to Theorem~\ref{t-2data1parity_asym} and considering all possible completions for each pair using requests for other servers.
Theorem~\ref{t-2data1parity_asym} showed that at least $\frac{2}{3}\geq \frac{2}{L}$ of the messages in each group can be paired. Messages which have no $A$ or $B$ segments can be transmitted as described in section~\ref{ss-MoreFilesNoParity}, without requiring any transmissions from servers $A$, $B$ or $P$.
The remaining $\frac{L-2}{L}\binom{K}{t+1}$ messages can be transmitted as described in subsection~\ref{ss-multiple}, distributing the savings evenly among servers $C$ through $L$. This requires $\frac{L-2}{L}\binom{K}{t+1}$ transmissions from servers $A$, $B$ and $P$; and $\frac{L-3}{L}\binom{K}{t+1}$ from each of the rest.
Each server then transmits a total of $\frac{L-1}{L}\binom{K}{t+1}$, hence the peak rate in Eq.~(\ref{e-oneparity}).
\end{IEEEproof}
Theorem~\ref{t-rateLplus1} provides a very loose bound for the peak rate in a system with one parity and $L$ data servers. In practice, there often exist alternative delivery schemes with significantly lower rates. For example, if all the users request files from the same server, that server should send half of the messages while all the other servers collaborate to deliver the other half. The rate would then be reduced to half of that in Maddah's scheme. Similarly, if $L>t+1$ and all the servers receive similar numbers of requests, the scheme in subsection~\ref{ss-MoreFilesNoParity} can provide significantly lower rates than Eq.~(\ref{e-oneparity}).
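As a numerical check of Theorem~\ref{t-rateLplus1}, the following sketch (our own illustration, with hypothetical parameter values) evaluates Eq.~(\ref{e-oneparity}):
\begin{verbatim}
from math import comb

def rate_one_parity(K, t, L):
    # Achievable per-server peak rate with L >= 3 data servers
    # and one parity server: (L-1)/L * R_C(K, t).
    return (L - 1) / L * comb(K, t + 1) / comb(K, t)

print(rate_one_parity(K=6, t=3, L=4))  # (3/4) * 0.75 = 0.5625
\end{verbatim}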
\subsection{Two parity and L data servers}\label{s-raid6}
In this section, we extend our algorithm to a system with $L$ data servers and two linear parity servers operating over a larger field than GF(2). Parity server $P$ stores the row-wise sum of all the files while parity server $Q$ stores a different linear combination of the files, row by row, as shown in Table~\ref{t-sixservers}. It will be assumed that the servers form an MDS code. We will show that, with a careful design of the delivery strategy, the peak rate can be reduced to almost half of that with Maddah's single server scheme.
\begin{table}
\centering\small
\begin{tabular}[h]{|c|c|}
\hline
Server P & Server Q\\
\hline
$A_1 + B_1+\ldots+ L_1$ & $A_1+\kappa_BB_1+\ldots+\kappa_LL_1$\\
$A_2 + B_2+\ldots+ L_2$ & $A_2+\kappa_BB_2+\ldots+\kappa_LL_2$\\
$\vdots$ &$\vdots$ \\
$A_r + B_r+\ldots+L_r$ & $A_r+\kappa_BB_r+\ldots+\kappa_LL_r$\\
\hline
\end{tabular}
\caption{Files stored in parity servers in RAID-6}\label{t-sixservers}
\end{table}
\begin{lemma}\label{l-transmit2}
Let $S_1=\{\mathbf{A}_\ast,\mathbf{Y}\}$ and $S_2=\{\mathbf{B}_\ast,\mathbf{Y}\}$, where $\mathbf{Y}$ represents a common set of requests from any server. Then $S_1$ and $S_2$ can be paired so that a single transmission from each server fills the same requests as messages $m^{\mathbf{S_1}}$ and $m^{\mathbf{S_2}}$ in Eq.~(\ref{e-MNmessage}).
\end{lemma}
\begin{IEEEproof}
The transmission scheme shares the same pairing idea as the algorithm in subsection~\ref{ss-algorithm}. The transmissions are as follows:
\begin{enumerate}
\item Server A sends $m_A^{\mathbf{S_1}}$, providing a desired segment to users requesting its files and the corresponding undesired A-segments to others.
\item Server B sends $m_B^{\mathbf{S_2}}$, providing a desired segment to users requesting its files and the corresponding undesired B-segments to others.
\item Servers $C,D,\ldots,L$ each send a single message to $S_1\bigcap S_2=\{\mathbf{Y}\}$ with the following content for each user:
\begin{itemize}
\item Users requesting files from server B received some undesired segments from server $A$. Servers $C,D,\ldots,L$ send them the matching ones so that the desired segments can be decoded using the parity in server $P$ later.
\item The remaining users in $\mathbf{Y}$ will get the desired segment corresponding to $S_1$ when possible, otherwise they will get the undesired segment corresponding to $S_2$.
\end{itemize}
In other words, each server $C,\ldots,L$ will send segments corresponding to $S_1$ to users requesting its files or those from server $B$, and segments corresponding to $S_2$ to the rest.
At this point, all the users have satisfied their requests related to $S_1$, except those requesting files from server $B$, who satisfied their requests related to $S_2$ instead. Each user has also received $L-2$ undesired ``matched" segments\footnote{Users in $\mathbf{Y}$ requesting files from servers $A$ or $B$ received $L-1$ ``matched" segments instead of $L-2$, but we can ignore the extra one.}, corresponding to $S_1$ for those requesting files from server $B$ and corresponding to $S_2$ for the rest.
\item Finally, parity servers $P$ and $Q$ each transmit a message to $S_1\bigcap S_2=\{\mathbf{Y}\}$ with a combination of segments for each user (see Table~\ref{t-sixservers}). Those requesting files from server $B$ will get two combinations of the segments corresponding to $S_1$, while the rest will get two combinations of the segments corresponding to $S_2$. Since each user now has $L-2$ individual segments and two independent linear combinations of all $L$ segments, it can isolate the requested segment (as well all the ``matching" segments in other servers).
\end{enumerate}
A simple comparison of the requested and received segments shows that these transmissions deliver the same information as messages $m^{\mathbf{S_1}}$ and $m^{\mathbf{S_2}}$ in Maddah's single server scheme.
\end{IEEEproof}
As an example, Table~\ref{t-raid6} shows the delivered segments in transmissions
(1)-(4) if $m^{\mathbf{S_1}}=\{A_1^1,A_2^2,B_1^3,C_1^4,C_2^5\}$ and $m^{\mathbf{S_2}}=\{A_1^6,B_1^7,B_2^8,C_1^9,C_2^{10} \}$.
\begin{table}
\centering\small
\begin{tabular}[htp]{|c|c|c|c|c|c|c|}
\hline
\tiny{Trans.$\setminus$}Req.& $A_1$ & $A_2$ & $B_1$ & $B_2$ & $C_1$ &$C_2$\\
\hline
(1)&$A_1^1$&$A_2^2$&$A_1^3$&&$A_1^4$&$A_2^5$\\
\hline
(2)&$B_1^6$&&$B_1^7$&$B_2^8$&&\\
\hline
(3)&$C_1^6$&&$C_1^3$&&$C_1^9$&$C_2^{10}$\\
\hline
(4)&$P_1^6$&&$P_1^3$&&$P_1^4,Q_1^4$&$P_2^5,Q_2^5$\\
\hline
in total&$A_1^1,A_1^6$&$A_2^2$&$B_1^3,B_1^7$&$B_2^8$&$C_1^4,C_1^9$&$C_2^5,C_2^{10}$\\
\hline
\end{tabular}
\caption{Segments received by each user in transmissions (1)-(4) (to simplify notation, $P_i^j=A_i^j+B_i^j+C_i^j$ and $Q_i^j=A_i^j+\kappa_BB_i^j+\kappa_CC_i^j$).}\label{t-raid6}
\end{table}
\begin{theorem}
For the system with $L$ data servers and two parity servers, the following normalized peak rate is achievable:
\begin{equation}
R_Q(K,t)=\left(\frac{1}{2}+\frac{L-2}{2L+4}\Delta\right)R_C(K,t),
\end{equation}
where $\Delta\leq\frac{1}{3}$ is the pairing loss and $R_C$ is the rate of the single server Maddah's scheme in Eq.~(\ref{e-RC_binom}).
\end{theorem}
\begin{IEEEproof}
Group messages by the number of segments that they have from servers $A$ or $B$. Within each group, we pair the messages as shown in Lemma~\ref{l-transmit2}. If the number of requests from $A$ or $B$ is not zero, this is equivalent to pairing the $A$ and $B$ requests into effective pairs according to Theorem~\ref{t-2data1parity_asym} and considering all possible completions for each pair using requests for other servers. Theorem~\ref{t-2data1parity_asym} showed that at most $\frac{1}{3}$ of the messages in each group remains unpaired.
For the messages which do not contain segments from $A$ or $B$ we repeat the same process with two other servers, with identical results: at most $\frac{1}{3}$ of them remain unpaired.
Each pair of messages can be delivered using a single transmission from each server, as shown in Lemma~\ref{l-transmit2}, hence paired messages contribute $\frac{1}{2}(1-\Delta)R_C(K,t)$ to the total rate, where $\Delta$ denotes the ratio of unpaired messages. Unpaired messages are transmitted as described in section \ref{ss-multiple}, that is using $L$ out of the $L+2$ servers. Balancing this load among all the servers, they contribute $\frac{L}{L+2}\Delta R_C(K,t)$ to the total rate. Adding both contributions yields the rate above.
\end{IEEEproof}
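As before, the achievable rate is easy to evaluate numerically; the sketch below (our own illustration, with hypothetical parameters) uses the worst-case pairing loss $\Delta=\frac{1}{3}$:
\begin{verbatim}
from math import comb

def rate_two_parity(K, t, L, delta=1/3):
    # Achievable peak rate with L data servers and two parity
    # servers: (1/2 + (L-2)/(2L+4) * delta) * R_C(K, t).
    r_c = comb(K, t + 1) / comb(K, t)
    return (0.5 + (L - 2) / (2 * L + 4) * delta) * r_c

print(rate_two_parity(K=6, t=3, L=4))  # worst-case pairing loss
\end{verbatim}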
\section{Scheme 2: Small cache}
\label{s-interference}
This section extends the interference elimination scheme in subsection~\ref{ss-interference} to a multi-server system. The interference elimination scheme is specially designed to reduce the peak rate when the cache size is small \cite{tian2016caching}. Unlike Maddah's scheme, which caches plain segments, the interference elimination scheme caches linear combinations of them. That way each segment can be cached by more users, albeit with interference. We start with the system without parity in Table~\ref{t-servers_general}, showing that the transmission rate decreases as $\frac{1}{L}$ with the number of servers, and then perform a similar analysis for the case with parity servers, which can be interpreted as an extension of the users' caches.
\begin{theorem}
In a system with $L$ data servers and parallel channels, the peak rate of the interference cancelling scheme can be reduced to $\frac{1}{L}$ of that in a single server system, \ie the following $(M,R)$ pair is achievable:
\begin{equation}\label{e-eliminationrate_lservers}
\left(\frac{t\left[(N-1)t+K-N\right]}{K(K-1)},\frac{N(K-t)}{LK}\right),\ t=0,1,\ldots,K.
\end{equation}
This holds regardless of whether each file is spread across servers (striping) or stored as a single block in one server.
\end{theorem}
\begin{IEEEproof}
Section~\ref{s-striping} showed that striping the files across $L$ servers reduces the peak rate of the interference cancelling scheme by $\frac{1}{L}$ compared with a single server system.
In contrast to Maddah's scheme, the interference cancelling scheme sends the same number of segments from each file, regardless of the users' requests. Moreover, each message consists of a combination of segments from a single file~\cite{tian2016caching}. Therefore, the same messages can be transmitted even if different files are stored in different servers. Each server will need to transmit a fraction $\frac{1}{L}$ of the messages, since it will be storing that same fraction of the files. The peak load can then be reduced to $\frac{1}{L}$ of that in Eq.~(\ref{e-eliminationrate}).
\end{IEEEproof}
If there are parity servers, we can further reduce the transmission rate by regarding them as an extension of the users' cache. Section \ref{ss-interference} explained that in the interference elimination algorithm~\cite{tian2016caching}, each user caches the parity symbols resulting from encoding a set of segments with a systematic MDS code $\mathcal{C}(P_0,P)$. It is possible to pick the code in such a way that some of these parity symbols can be found as combinations of the information stored in servers $P$ and $Q$. Then, instead of storing them in the user's cache, they are discarded. Those that are needed in the delivery phase will be transmitted by the parity servers.
For example, parity server $P$ stores the horizontal sum of the files, so it can transmit messages of the form:
\begin{equation}\nonumber
\sum_{i=1}^{N/L}\sum_{j=1}^{\binom{K-1}{t-1}}\lambda_{ij}\left(A_i^{\mathbf{s}_j}+B_i^{\mathbf{s}_j}+\cdots+L_i^{\mathbf{s}_j}\right),
\end{equation}
with arbitrary coefficients $\lambda_{ij}$ for any user set $\mathbf{s}_j$. This corresponds to a linear combination of all the segments in Eq.~(\ref{e-segments_interf}).
Similarly, parity server $Q$ can transmit some other linear combinations of the segments which can also work as components of an MDS code.
This effectively increases the size of the cache memories by $M'$ file units, corresponding to the amount of information that the parity servers can afford to send each user during the delivery phase.
\begin{theorem}\label{e-eliminationrateparity}
If there are $\eta$ parity servers and $K\geq N$, the following $(M,R)$ pairs are achievable for $t=0,1,\ldots,K$
\begin{equation}\nonumber
\left(\frac{t\left[(N-1)t+K-N\right]}{K(K-1)}-\eta\frac{N(K-t)}{LK^2},\frac{N(K-t)}{LK}\right).
\end{equation}
\end{theorem}
\begin{IEEEproof}
The information sent by the parity server is bounded by the peak rate of the data servers, \ie $\frac{N(K-t)}{LK}$ according to Eq.~(\ref{e-eliminationrate_lservers}). Assuming a worst case scenario, each transmission from a parity server will benefit a single user. Therefore, each parity server can effectively increase the cache of each user by $M'=\frac{N(K-t)}{LK^2}$.
\end{IEEEproof}
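The trade-off of Theorem~\ref{e-eliminationrateparity} can be tabulated directly. The sketch below (our own illustration; points with negative $M$ are discarded as not meaningful) uses the parameters of Fig.~\ref{fig:interference_compare}:
\begin{verbatim}
def pairs_with_parity(K, N, L, eta):
    # Normalized (M, R) pairs for K >= N: each of the eta parity
    # servers extends every user's cache by M' = N(K-t)/(L*K^2).
    out = []
    for t in range(K + 1):
        R = N * (K - t) / (L * K)
        M = t * ((N - 1) * t + K - N) / (K * (K - 1)) - eta * R / K
        if M >= 0:
            out.append((M, R))
    return out

for M, R in pairs_with_parity(K=15, N=12, L=4, eta=2)[:4]:
    print(f"M = {M:.3f}, R = {R:.3f}")
\end{verbatim}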
This memory sharing strategy provides a significant improvement when the cache capacity is small. Fig.~\ref{fig:interference_compare} shows the performance for $K=15$ users and $N=12$ files stored in $L=4$ data servers. When the cache size is small, the peak rate of the system with two parity servers is much lower than that of the system without parity servers. As the cache grows, the advantage of the system with parity servers becomes less pronounced.
\begin{figure}
\centering
\includegraphics[width=1.2\textwidth]{interference_compare.eps}
\caption{Comparison of the performance of the multi-server system without parity servers and the system with two parity servers.}
\label{fig:interference_compare}
\end{figure}
The interference elimination scheme is specially designed for the case with fewer files than users ($N\leq K$) in the single-server system. However, since the peak load is reduced by a factor of $\frac{1}{L}$ in a multi-server system, the interference elimination scheme might also perform well when $N>K$ if $L$ is large. In order to apply the algorithm, we can simply add $N-K$ dummy users with arbitrary requests. We then have the following corollary of Theorem~\ref{e-eliminationrateparity}:
\begin{corollary}
If there are $\eta$ parity servers and $K\leq N$, the following $(M,R)$ pairs are achievable:
\begin{equation}\nonumber
\left(\frac{t^2}{N}-\eta\frac{(N-t)}{LN},\frac{(N-t)}{L}\right),\ \ \ \ t=0,1,\ldots,N.
\end{equation}
\end{corollary}
\section{Simulations}\label{s-simulation}
This section compares all the schemes studied in this paper for a system with $N=20$ files stored in $L=4$ data servers with $5$ files each. We show that striping yields better performance than the schemes in Sections~\ref{s-ali} and \ref{s-interference} (Scheme~1 and Scheme~2, respectively), at the cost of network flexibility. If each file is stored as a single block in one server, Scheme~2 performs better when the cache capacity is small, while Scheme~1 is more suitable when the cache capacity is large. The performances of Scheme~1 and Scheme~2 are summarized in Table~\ref{t-scheme1} and Table~\ref{t-scheme2}, respectively.
\begin{table}
\centering
\begin{tabular}[h]{|c|c|}
\hline
Server system & Normalized peak rate\\
\hline
single server & $R_C(K,t)=\binom{K}{t+1}/\binom{K}{t}$ \\
\hline
$L$ data $1$ parity& $\frac{L-1}{L}R_C(K,t)$ \\
\hline
$L$ data $2$ parity& $(\frac{1}{2}+\frac{L-2}{2L+4}\Delta)R_C(K,t)$ ($\Delta\leq\frac{1}{3}$) \\
\hline
\end{tabular}
\caption{Normalized peak rate of Scheme~1.}\label{t-scheme1}
\end{table}
\begin{table}
\centering
\begin{tabular}[h]{|c|c|}
\hline
Server system & Normalized $(M,R)$\\
\hline
single server & $\left(\frac{t\left[(N-1)t+K-N\right]}{K(K-1)},\frac{N(K-t)}{K}\right)$ \\
\hline
$L$ data $\eta$ parity ($K\geq N$)& $\left(\frac{t\left[(N-1)t+K-N\right]}{K(K-1)}-\eta\frac{N(K-t)}{LK^2},\frac{N(K-t)}{LK}\right)$ \\
\hline
$L$ data $\eta$ parity ($K\leq N$)& $\left(\frac{t^2}{N}-\eta\frac{(N-t)}{LN},\frac{(N-t)}{L}\right)$ \\
\hline
\end{tabular}
\caption{Normalized $(M,R)$ pairs of Scheme~2 ($\eta$ is the number of parity servers).}\label{t-scheme2}
\end{table}
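For reference, the trade-offs in Table~\ref{t-scheme1} and Table~\ref{t-scheme2} can be evaluated numerically. The sketch below (helper names are ours; $\Delta$ is treated as a free parameter with $\Delta\leq\frac{1}{3}$) reproduces the qualitative comparisons plotted in the figures of this section:
\begin{verbatim}
# Sketch: evaluate the normalized rates of the two tables over t.
from math import comb

def scheme1_rate(K, t, L, parity, delta=1/3):
    Rc = comb(K, t + 1) / comb(K, t)          # single-server rate R_C(K,t)
    if parity == 1:
        return (L - 1) / L * Rc
    return (0.5 + (L - 2) / (2 * L + 4) * delta) * Rc

def scheme2_point(K, N, L, t, eta):
    if K >= N:                                # Theorem (K >= N)
        M = t * ((N - 1) * t + K - N) / (K * (K - 1)) \
            - eta * N * (K - t) / (L * K ** 2)
        return M, N * (K - t) / (L * K)
    M = t ** 2 / N - eta * (N - t) / (L * N)  # Corollary (K <= N)
    return M, (N - t) / L

K, N, L, eta = 60, 20, 4, 2
for t in range(0, K + 1, 10):
    print(t, scheme1_rate(K, t, L, eta), scheme2_point(K, N, L, t, eta))
\end{verbatim}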
Fig.~\ref{fig:compareraid4} and Fig.~\ref{fig:compareNgreaterK} focus on the cases with one and two parity servers, respectively. We assume $K=15$ users with varying cache capacity, so there are more files than users. We observe that striping provides lower peak rates than storing whole files, as expected. Additionally, since $N>K$, the interference elimination scheme always performs worse than Maddah's scheme when striping is used. Without striping, Scheme~2 provides a lower peak rate than Scheme~1 when the cache capacity is small, and the opposite holds when the capacity is large.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{raid4NK.eps}
\caption{Comparison of the performance of Scheme 1 and Scheme 2 in a system with one parity server, $N=20$ and $K=15$.}
\label{fig:compareraid4}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{raid6NK.eps}
\caption{Comparison of the performance of Scheme 1 and Scheme 2 in a system with two parity servers, $N=20$ and $K=15$.}
\label{fig:compareNgreaterK}
\end{figure}
Fig.~\ref{fig:compareraid4k} and Fig.~\ref{fig:compareKgreaterN} then compare Scheme~1 and Scheme~2 when there are more users $(K=60)$ than files, for the one and two parity server cases, respectively. As shown in both figures, striping yields lower rates than storing whole files, and when the cache capacity is very small, interference elimination with striping outperforms Maddah's scheme with striping. Without striping, Scheme~2 provides a lower peak rate when the cache capacity is small, while Scheme~1 performs better as the cache capacity increases. Moreover, the curves intersect at a point with larger $M$ than in Fig.~\ref{fig:compareraid4} and Fig.~\ref{fig:compareNgreaterK}, which suggests that Scheme~2 is preferable when there are more users than files.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{raid4KN.eps}
\caption{Comparison of the performance of Scheme 1 and Scheme 2 in a system with one parity server, $N=20$ and $K=60$.}
\label{fig:compareraid4k}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{raid6KN.eps}
\caption{Comparison of the performance of Scheme 1 and Scheme 2 in a system with two parity servers, $N=20$ and $K=60$.}
\label{fig:compareKgreaterN}
\end{figure}
\section{Conclusion}\label{s-conclude}
This paper proposes coded caching algorithms for reducing the peak data rate in multi-server systems with distributed storage and different levels of redundancy.
It shows that, by striping each file across multiple servers, the peak rate can be reduced proportionally to the number of servers. Then it addresses the case where each file is stored as a single block in one server and proposes different caching and delivery schemes depending on the size of the cache memories.
Distributed storage systems generally use MDS codes across the servers to protect the information against node failures. The coded caching schemes proposed in this paper are able to leverage that redundancy in creative ways to reduce the achievable traffic peak rate. The results for Scheme 1 and Scheme 2 are shown in Table~\ref{t-scheme1} and Table~\ref{t-scheme2} respectively.
In the future, we will study how this process can be generalized to other erasure codes, such as fractional repetition codes~\cite{el2010fractional,yu2014irregular} or other RAID-6~\cite{wang2014mdr} structures. We also plan to generalize our schemes to the case where files have different popularity, which will require designing erasure codes with different levels of protection for different files.
\section{Introduction}\label{sec:intro}
For two graphs $G$ and $H$, a \emph{combinatorial $G$-packing} of $H$ is a collection of edge-disjoint subgraphs of $H$ that are isomorphic to $G$. In the study of graph packing, we typically seek the maximum cardinality of a combinatorial $G$-packing of $H$, denoted by $p(H,G)$. Clearly, $p(H,G)\le |E(H)|/|E(G)|$. When equality holds, we call the corresponding collection a \emph{perfect combinatorial $G$-packing} of $H$. Due to a well-known result of Kirkman~\cite{Kirkman1847}, $K_n$ has a perfect combinatorial $K_3$-packing if and only if $n\equiv 1$ or $3\pmod 6$. More generally, Wilson~\cite{wilson1975decompositions} proved that, for $n$ large enough, $K_n$ has a perfect combinatorial $G$-packing if and only if $|E(G)|$ divides $\binom{n}{2}$ and the greatest common divisor of the vertex degrees of $G$ divides $n-1$. This result has also been generalized to hypergraphs \cite{glock2016existence,keevash2014existence}.
Although a perfect combinatorial $G$-packing of the complete graph $K_n$ doesn't always exist, we can pack any $G$ into complete graphs ``almost perfectly''. Let us call a graph $G$ \emph{combinatorially packable} if
\[\lim_{n\rightarrow\infty}\frac{p(K_n,G)|E(G)|}{|E(K_n)|}=1.\]
In other words, $G$ is combinatorially packable if there exist combinatorial $G$-packings of $K_n$ covering all but $o(n^2)$ edges. Using a technique known as R\"odl Nibble (see Theorem~\ref{Pippenger_Spencer}), one can easily show that all fixed graphs are combinatorially packable. See~\cite{Yuster2007} for a survey on combinatorial packing problems.
In this paper, we consider the packability of convex geometric graphs. A \emph{convex geometric graph} (shortly \emph{cgg}) $G$ is a graph with a cyclic order $\prec$ on its vertex set $V(G)$. Equivalently, it can be represented as a graph drawn in the plane such that $V(G)$ consists of points in convex position, $E(G)$ consists of straight-line segments connecting the corresponding points, and $\prec$ is the clockwise ordering of $V(G)$. Two cgg's $G$ and $H$ are \emph{isomorphic} if there is a graph isomorphism $f$ between them preserving the cyclic order, that is, $u\prec v\prec w$ if and only if $f(u)\prec f(v)\prec f(w)$. Since any two complete cgg's of the same size are isomorphic, we use $K_n$ to denote a complete cgg on $n$ vertices. For cgg's $G$ and $H$, a \emph{convex $G$-packing} (shortly \emph{$G$-packing}) of $H$ is a collection of edge-disjoint sub-cgg's of $H$ that are isomorphic to $G$. We call a cgg $G$ \emph{convex-packable} (shortly \emph{packable}) if there exist (convex) $G$-packings of $K_n$ covering all but $o(n^2)$ edges.
In contrast to combinatorial packability, there exist cgg's that are not packable, for example, the plane cycle $C_5$ of length $5$. To see why $C_5$ isn't packable, we can consider the average length of edges. Inside a cgg $G$, an \emph{interval} of $V(G)$ is a subset that is contiguous with respect to $\prec$, and the \emph{length} $l_G(e)$ of an edge $e$ is one less than the smallest cardinality of an interval containing $e$. One can check that the average length of all edges in $K_n$ is $(1+o(1))n/4$. On the other hand, the average length of the edges in a copy of $C_5$ in $K_n$ is at most $n/5$, so the average length of all edges covered by a $C_5$-packing of $K_n$ is at most $n/5$. Hence, a $C_5$-packing can never cover all but $o(n^2)$ edges of $K_n$.
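This average-length argument is easy to verify numerically; the following sketch (for odd $n$, with illustrative parameters) checks both the $(1+o(1))n/4$ average in $K_n$ and the $n/5$ bound for copies of the plane $C_5$:
\begin{verbatim}
# Numerical check of the average-length argument (odd n for simplicity).
import itertools, random

def avg_length_Kn(n):
    total = 0
    for i, j in itertools.combinations(range(n), 2):
        gap = j - i
        total += min(gap, n - gap)           # edge length in K_n
    return total / (n * (n - 1) / 2)

n = 1001
print(avg_length_Kn(n) / n)                  # ~0.25, i.e. (1+o(1)) n/4

# For a plane C5 on positions p_0 < ... < p_4, the five cyclic gaps sum
# to n and each edge length is at most its gap, so the average is <= n/5.
for _ in range(1000):
    p = sorted(random.sample(range(n), 5))
    gaps = [(p[(k + 1) % 5] - p[k]) % n for k in range(5)]
    assert sum(min(g, n - g) for g in gaps) / 5 <= n / 5
\end{verbatim}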
Partly based on this average length argument, Cranston, Nie, Verstra\"ete, and Wesolek~\cite{cranston2021asymptotic} recently determined all packable plane Hamiltonian cgg's. See Figure~\ref{fig:plane_hamiltonian}.
\begin{figure}[ht]
\centering
\input{plane_hamiltonian}
\caption{All packable plane Hamiltonian cgg's.}
\label{fig:plane_hamiltonian}
\end{figure}
Because the average length of edges in $K_n$ is roughly $n/4$, it's natural to predict that a cgg is packable if it is ``cyclically 4-partite''. Our main result confirms this prediction. The \emph{cyclic chromatic number} $\chi_c(G)$ of a cgg $G$ is the minimum number of intervals that $V(G)$ can be partitioned into such that every edge in $E(G)$ has its endpoints in different parts.
\begin{theorem}\label{main}
Let $G$ be a cgg. If $\chi_c(G)\leq 4$, then $G$ is packable.
\end{theorem}
Theorem~\ref{main} is best possible since (plane) $C_5$ has cyclic chromatic number $5$ and is not packable.
To prove Theorem~\ref{main}, we prove a sufficient condition for the packability of a cgg (see Lemma~\ref{frac_pack_lemma}) which is satisfied by every cgg $G$ with $\chi_c(G)\le 4$. The sufficient condition is quite general, and we also use it to show that some cgg's with higher cyclic chromatic numbers can still be packable if they have many ``long'' edges. Let $\mathcal{P}=\{V_1,\dots,V_k\}$ be an interval partition of $V(G)$ with each edge in $E(G)$ having its endpoints in different parts. Since every $V_i$ is an interval, we can define a cyclic order on $\mathcal{P}$ by setting $V_{i_1}\prec V_{i_2}\prec V_{i_3}$ if $v_{i_1}\prec v_{i_2}\prec v_{i_3}$ for any $v_{i_j}\in V_{i_j}$. For each pair $\{V_i,V_j\}$, we use $l_{\mathcal{P}}(\{V_i,V_j\})$ to denote the length of this pair in $\mathcal{P}$, and $E_G(V_i,V_j)$ to denote the set of edges between this pair.
\begin{theorem}\label{higher}
For every integer $k>2$ there is an absolute constant $C_k$ with the following property. Suppose $\mathcal{P}=\{V_1,\dots, V_{2k}\}$ is an interval partition of a cgg $G$ with each edge having its endpoints in different parts, and $E_l$ is the union of all $E_G(V_i,V_j)$ with $l_{\mathcal{P}}(\{V_i,V_j\})=l$ for $l\in [k]$. If $E_1\neq\emptyset$ and $|E_{k}|\geq C_k\cdot \sum_{l<k}|E_l|$, then $G$ is packable.
\end{theorem}
We also prove an analogue of Theorem~\ref{main} for ordered graphs. An ordered graph is a graph $G$ with a linear order $<$ on $V(G)$. An isomorphism between ordered graphs is a graph isomorphism preserving the linear order. Packings, intervals, and lengths are defined similarly for ordered graphs. We say an ordered graph $G$ is \emph{linearly packable} (shortly \emph{packable}) if there exist $G$-packings of the complete ordered graph $K_n$ that cover all but $o(n^2)$ edges. The \emph{interval chromatic number} $\chi_<(G)$ of an ordered graph $G$ is the minimum number of intervals that $V(G)$ can be partitioned into such that every edge in $E(G)$ has its endpoints in different parts.
\begin{theorem}\label{ordered}
Let $G$ be an ordered graph. If $\chi_<(G)\leq 3$, then $G$ is packable.
\end{theorem}
Theorem~\ref{ordered} is also best possible: the ordered graph $P_3$, whose vertices are $1<2<3<4$ and edges are $\{1,2\},\{2,3\},\{3,4\}$, is not packable. We give a heuristic explanation of this fact. The average length of all edges in the complete ordered graph $K_n$ is $(1+o(1))n/3$. Although the average length of the edges in a copy of $P_3$ in $K_n$ can attain $(1-o(1))n/3$, it only happens when the head and tail of this copy are very close to the head and tail of $V(K_n)$ with respect to $<$. But there are not enough such $P_3$-copies to cover most edges of $K_n$.
To end this introduction, we remark that research has been done on packing problems of (not necessarily convex) geometric graphs~\cite{AHKVLPSW2017,BK1979,biniaz2020packing,BHRW2006,obenaus2021edge,TCAK2019}. The extremal problems about cgg's and ordered graphs are also extensively studied, with many results particularly related to cyclic chromatic numbers and/or interval chromatic numbers~\cite{axenovich2018chromatic,BKV2003,furedi2020ordered,gyHori2018turan,pach2006forbidden}.
The rest of this paper is organized as follows. In Section~\ref{sec:frac_pack}, we prove Lemma~\ref{frac_pack_lemma}, which translates the packing problems of cgg's into fractional packing problems of weighted cgg's. It will be useful to view the fractional packing problems as linear programming problems. In Section~\ref{sec:main} and Section~\ref{sec:higher}, we establish Theorem~\ref{main} and Theorem~\ref{higher} by proving the feasibility of their corresponding linear programs through Farkas' lemma. In Section~\ref{sec:ordered}, we prove Theorem~\ref{ordered} directly using R\"odl Nibble. Section~\ref{sec:remark} lists some final remarks. We systematically omit floors and ceilings whenever they are not crucial, for the sake of clarity in our presentation.
\section{Fractional packing of weighted convex geometric graphs}\label{sec:frac_pack}
In this section, we prove an auxiliary lemma which relates the packability of a given cgg to a fractional packing problem of an associated weighted cgg. We introduce some notions before we state and prove this lemma.
A \emph{weighted cgg} is a cgg $G$ together with a function $\omega_G: E(G)\to{\mathbb{R}_{\geq 0}}$. A weighted sub-cgg $G$ of $H$ is a weighted cgg whose underlying cgg structure is a sub-cgg of $H$. An unweighted cgg $H$ can be seen as a weighted cgg with each edge having weight $1$. An isomorphism $f:G\to H$ between two weighted cgg's is a cgg-isomorphism such that $\omega_G(e)=\omega_H(f(e))$ for all $e\in E(G)$. We use $\mathcal{I}(H,G)$ to denote the set of all weighted sub-cgg's of $H$ that are isomorphic to $G$. A \emph{perfect fractional $G$-packing} of $H$ is a function $\phi:\mathcal{I}(H,G)\to \mathbb{R}_{\geq 0}$ such that
\begin{align}\label{eq:perfect_fractional_packing_def}
\sum_{S\in \mathcal{I}(H,G)}\phi(S)\omega_{S}(e)=\omega_H(e)\text{ for all $e\in E(H)$}.
\end{align} Here, we take the convention that $\omega_{S}(e)=0$ if $e\not\in E(S)$.
For an unweighted cgg $G$ and an interval partition $\mathcal{P}=\{V_1,\dots, V_k\}$ of $V(G)$ with each edge having its endpoints in different parts, we described in Section~\ref{sec:intro} that $\mathcal{P}$ has an induced cyclic order. A \emph{weighted representation} of $G$ (constructed from $\mathcal{P}$) is a complete weighted cgg $W$ with vertex set $\mathcal{P}$ and edge weight given by
\[ \omega_W(\{V_i,V_j\})=|E_G(V_i,V_j)|\text{ for every $i\neq j\in[k]$}.\]
See Figure~\ref{fig:weighted_and_blowup} for an example of weighted representations.
Now we state the main result of this section.
\begin{lemma}\label{frac_pack_lemma}
Let $G$ and $H$ be two cgg's. Suppose $\chi_c(G)\geq 3$ and $W$ is a weighted representation of $G$. If $H$ is packable and has a perfect fractional $W$-packing, then $G$ is packable.
\end{lemma}
To prove this lemma, we need the following theorem, known as R\"odl Nibble~\cite{Rodl1985}. The version stated here is due to Pippenger and Spencer~\cite{PS1989}, see also~\cite[Section~4.7]{AS2004}.
\begin{theorem}[R\"odl Nibble]\label{Pippenger_Spencer}
For every integer $r\ge 2$ and real numbers $c\ge1$ and $\epsilon>0$, there
exist $\gamma=\gamma(r,c,\epsilon)$, $n_0=n_0(r,c,\epsilon)$ and
$d_0=d_0(r,c,\epsilon)$ such that for all integers $n\ge n_0$ and $D\ge d_0$, the following holds. If $\mathcal{H}=(\mathcal{V},\mathcal{E})$ is an $n$-vertex $r$-uniform hypergraph satisfying
\begin{enumerate}
\item All but at most $\gamma n$ vertices $x\in \mathcal{V}$ have degree $d(x)=(1\pm\gamma)D$.
\item Every $x\in \mathcal{V}$ has degree $d(x)<c D$.
\item Any two distinct $x_1,x_2\in \mathcal{V}$ have codegree $d(x_1,x_2)<\gamma D$.
\end{enumerate}
Then $\mathcal{H}$ contains a matching of at least $(1-\epsilon)n/r$ hyperedges.
\end{theorem}
Another ingredient in our proof is the notion of blowups of cgg's. Given a cgg $H$ and a positive integer $n$, the \emph{$n$-regular blowup} $H[n]$ of $H$ is a cgg constructed as follows. The vertex set of $H[n]$ consists of $|V(H)|$ intervals, denoted as $\{I_v\}_{v\in V(H)}$, each of cardinality $n$, and $I_{v_1}\prec I_{v_2}\prec I_{v_3}$ in $H[n]$ whenever $v_1\prec v_2\prec v_3$ in $H$; The edge set of $H[n]$ consists of all pairs between $I_{u}$ and $I_{v}$ whenever $\{u,v\}\in E(H)$. See Figure~\ref{fig:weighted_and_blowup} for an example of blowups.
\begin{figure}[ht]
\centering
\input{weighted_blowup.tex}
\caption{A cgg $H$; A weighted representation of $H$; The blowup $H[2]$.}
\label{fig:weighted_and_blowup}
\end{figure}
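As an illustration, the following minimal sketch (vertex labels and data layout are our own choices) enumerates the edges of a blowup from the edge list of $H$:
\begin{verbatim}
# Sketch: edges of the n-regular blowup H[n], with V(H) = {0,...,|V|-1}
# listed in cyclic order and I_v occupying positions [v*n, (v+1)*n).
def blowup_edges(num_vertices, edges, n):
    interval = lambda v: range(v * n, (v + 1) * n)
    return [(a, b) for (u, v) in edges
                   for a in interval(u) for b in interval(v)]

# toy example: a cgg on 3 vertices with 2 edges; H[2] has 2 * 2^2 = 8 edges
print(len(blowup_edges(3, [(0, 1), (1, 2)], 2)))
\end{verbatim}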
\begin{proof}[Proof of Lemma~\ref{frac_pack_lemma}]
We claim that there exist $G$-packings of the blowup $H[n]$ covering all but $o(n^2)$ edges. We first prove $G$ is packable given this claim. Take an arbitrary positive integer $n$. Since $H$ is packable, there exists an $H$-packing $\mathcal{C}$ of $K_n$ that covers all but $o(n^2)$ edges. Observe that $\mathcal{C}$ induces an $H[n]$-packing $\mathcal{C}'$ of the blowup $K_n[n]$ as follows: if $H'\in \mathcal{C}$ is a sub-cgg of $K_n$, then $H'[n]$ is naturally a sub-cgg of $K_n[n]$ and we include it in $\mathcal{C}'$. For each copy of $H[n]$ in $\mathcal{C}'$, according to the claim, we can use a $G$-packing to cover all but $o(n^2)$ of its edges, and consequently obtain a $G$-packing $\mathcal{C}''$ of $K_n[n]$. Since $K_n[n]$ is a sub-cgg of $K_{n^2}$, we view $\mathcal{C}''$ as a $G$-packing of $K_{n^2}$, whose uncovered edges are in either one of the following categories:\begin{itemize}
\item Edges that are in $K_{n^2}$ but not in $K_n[n]$. There are $O(n^3)$ many of them;
\item Edges of $K_n[n]$ that are not covered by $\mathcal{C}'$. There are at most $o(n^2)\cdot n^2=o(n^4)$ of them (the $o(n^2)$ term corresponds to edges of $K_n$ not covered by $\mathcal{C}$);
\item Edges of copies of $H[n]$ in $\mathcal{C}'$ that are not covered by $\mathcal{C}''$. There are at most $|\mathcal{C}'|\cdot o(n^2)=o(n^4)$ of them (note that $|\mathcal{C}'|<|E(K_n)|=O(n^2)$).
\end{itemize} Therefore, the $G$-packing $\mathcal{C}''$ covers all but $o(n^4)$ edges of $K_{n^2}$. So $G$ is packable by the arbitrariness of $n$.
For the rest of this proof, we show the existence of $G$-packings of $H[n]$ covering all but $o(n^2)$ edges. Let $\mathcal{P}=\{V_1,\dots,V_k\}$ be the interval partition from which $W$ is constructed. Note that $k\geq\chi_c(G)\geq 3$. Denote each interval in the blowup $H[n]$ corresponding to a vertex $v\in V(H)$ as $I_v$. By assumption, there's a function $\phi:\mathcal{I}(H,W)\to \mathbb{R}_{\geq 0}$ satisfying \eqref{eq:perfect_fractional_packing_def} for $W$. We write
\[\Phi:=\max_{S\in \mathcal{I}(H,W)}\phi(S)\quad\text{and}\quad m:=\max\{|V_1|,\dots,|V_k|\}.\]
We construct a random hypergraph $\mathcal{H}$ whose vertex set $\mathcal{V}$ consists of all edges of $H[n]$ and edge set $\mathcal{E}$ consists of copies of $G$ coming from the following random experiment: For each sub-cgg $G'\subset H[n]$ satisfying\begin{itemize}
\item there exists an isomorphism $f:G\to G'$ such that there are vertices $v_1,\dots,v_k\in V(H)$ with $f(V_i)\subset I_{v_i}$, and
\item there exists $\delta\in [1,\log n]$ such that for all $i$, every consecutive pair of vertices in $V(G')\cap I_{v_i}$ has length $\delta$ in $H[n]$,
\end{itemize} its weighted representation $W'$ is naturally a weighted sub-cgg of $H$ (on vertices $v_1,\dots,v_k$). See Figure~\ref{fig:frac_pack_correspondence}. We include $E(G')$ into $\mathcal{E}$ as a hyperedge with probability $\phi(W')/\Phi$.
\begin{figure}[ht]
\centering
\input{frac_pack_correspondence}
\caption{A sub-cgg $G'\subset H[2]$; Its weighted representation $W'\subset H$. (Edges of $H$ and $H[2]$ are not pictured here but in Figure~\ref{fig:weighted_and_blowup}.)}
\label{fig:frac_pack_correspondence}
\end{figure}
Now we check that $\mathcal{H}$ satisfies the conditions of R\"odl Nibble with high probability.
Take any vertex $x\in \mathcal{V}$ and denote its degree in $\mathcal{H}$ as $D_x$. Write $x=\{u_1,u_2\}$ as an edge in $H[n]$. Consider $v_i\in V(H)$ such that $u_i\in I_{v_i}$ for $i=1,2$ and let $e:=\{v_1,v_2\}\in E(H)$. We say $x$ is \emph{good} if each $u_i$ is at least $m\log n$ vertices away from both boundary vertices of its interval $I_{v_i}$. If $x$ is good, then we can compute the expectation
\[\mathbb{E}(D_x)=(1-o(1))\frac{n^{k-2}\log n}{\Phi}.\]
Indeed, for each $W'\in \mathcal{I}(H,W)$ containing $e$, we count the number of $G'$ in $\mathcal{H}$ whose weighted representation is $W'$ and covering the edge $x\in E(H[n])$. There are $\log n$ choices for $\delta$, $\omega_{W'}(e)$ choices for $f^{-1}(x)$, and $(1-o(1))n^{k-2}$ choices for vertices in $V(G')\cap I_v$ with $v\in V(W')\setminus e$. Hence the computation follows from linearity of expectations, $H$ being unweighted, and \eqref{eq:perfect_fractional_packing_def}.
If $x$ is not good, there are fewer choices in the above process, so $\mathbb{E}(D_x)\leq (1-o(1))n^{k-2}\log n/\Phi$. Moreover, there are at most $\binom{|V(H)|}{2}(2m\log n)2n=O(n\log n)$ edges of $H[n]$ that are not good.
Take any two distinct vertices $x_1,x_2\in \mathcal{V}$ and denote their codegree in $\mathcal{H}$ by $D_{x_1x_2}$. Write $x_i=\{u_{i1},u_{i2}\}\in E(H[n])$ and let $v_{ij}\in V(H)$ such that $u_{ij}\in I_{v_{ij}}$ for $i,j=1,2$. Consider the two edges $e_i=\{v_{i1},v_{i2}\}$. If $e_1=e_2$, we can compute the expectation
\[\mathbb{E}(D_{x_1x_2})\leq (1-o(1))\frac{m^2n^{k-2}}{\Phi}.\]
Indeed, for each $W'\in \mathcal{I}(H,W)$ containing $e:=e_1=e_2$, we count the number of $G'$ in $\mathcal{H}$ whose weighted representation is $W'$ and covering the edges $x_1,x_2\in E(H[n])$. There are at most $\omega_{W'}(e)$ choices for both $f^{-1}(x_1)$ and $f^{-1}(x_2)$. Note that $\omega_{W'}(e)\leq m^2$. However, after those two choices, $\delta$ will be fixed. And there are still $(1-o(1))n^{k-2}$ choices for vertices in $V(G')\cap I_v$ with $v\in V(W')\setminus e$. Hence the computation follows again.
If $e_1\neq e_2$, we can check $\mathbb{E}(D_{x_1x_2})\leq (1-o(1))m^2n^{k-3}\log n/\Phi$ by similar counting and computation. Essentially, there are fewer choices for vertices in $V(G')\cap I_v$ with $v\in V(W')\setminus (e_1\cup e_2)$ after $\delta$ is fixed.
Finally, we check these degrees and codegrees are concentrated around their expectations. Observe that $D_x$ is a summation of independent random variables all taking values from $\{0,1\}$. By Chernoff bound (\cite[Corollary A.1.14]{AS2004}), for any $x\in \mathcal{V}$, we have
\[\Pr\left(|D_x-\mathbb{E}(D_x)|>\frac{1}{\sqrt{\log n}} \mathbb{E}(D_x)\right)<2\exp\left(\frac{-\mathbb{E}(D_x)}{3\log n}\right)\leq\exp(-\Omega(n)).\]
Here we used $k\geq 3$. On the other hand, there are at most $\binom{|V(H)|}{2}n^2$ many vertices in $\mathcal{V}$. Therefore, by union bound, the probability that every $x\in \mathcal{V}$ satisfies $|D_x-\mathbb{E}(D_x)|\leq \mathbb{E}(D_x)/\sqrt{\log n}$ converges to $1$ as $n$ tends to infinity. By a similar argument, the probability that every distinct pair $x_1,x_2\in \mathcal{V}$ satisfies $D_{x_1x_2}<2m^2n^{k-2}/\Phi$ converges to $1$ as $n$ tends to infinity.
To summarise, we have shown, when $n$ is large enough, $\mathcal{H}$ almost surely satisfies: All $x\in \mathcal{V}$ but $O(n\log n)$ of them have degree $d(x)=(1\pm o(1))n^{k-2}\log n/\Phi$; All $x\in \mathcal{V}$ have degree $d(x)<2n^{k-2}\log n/\Phi$; Any two distinct $x_1,x_2\in \mathcal{V}$ have codegree $d(x_1,x_2)=o(n^{k-2}\log n)$. Thus, there is a realization of $\mathcal{H}$ that satisfies the conditions of Theorem~\ref{Pippenger_Spencer}, meaning there is a matching which corresponds to $G$-packings of $H[n]$ covering all but $o(n^2)$ edges, as claimed.
\end{proof}
\section{Proof of Theorem~\ref{main}}\label{sec:main}
To prove Theorem~\ref{main}, we need the following lemma.
\begin{lemma}\label{K4_frac_pack}
Let $W$ be a complete weighted cgg on vertices $v_1\prec v_2\prec v_3\prec v_4$ with\begin{align*}
&\omega_W(\{v_{1},v_2\})=\omega_W(\{v_{2},v_3\})=\omega_W(\{v_{3},v_4\})=\omega_W(\{v_{4},v_1\})>0,\\
&\omega_W(\{v_{1},v_3\})=\omega_W(\{v_{2},v_4\})>0.
\end{align*} Then there exists some $K_m$ having a perfect fractional $W$-packing.
\end{lemma}
Before we establish this lemma, we first use it to prove Theorem~\ref{main}. For a complete cgg whose vertices are $v_1\prec v_2\prec\dots\prec v_m$ in cyclic order, its \emph{rotation} refers to an automorphism that maps $v_i$ to $v_{i+r}$ for all $i\in [m]$ and some fixed $r\in \mathbb{Z}$, where the indices are meant modulo $m$.
\begin{proof}[Proof of Theorem~\ref{main}]
Let $G$ be the given cgg. If $\chi_c(G)=2$, a result of Brass, K\'arolyi and Valtr~\cite{BKV2003} states that
\[\text{ex}_{c}(n,G)=\left(1-\frac{1}{\chi_c(G)-1}\right)\binom{n}{2}+o(n^2)=o(n^2),\]
where $\text{ex}_{c}(n,G)$ is the maximum number of edges in a cgg on $n$ vertices that doesn't contain $G$ as a sub-cgg. Clearly, for a $G$-packing of $K_n$ with maximal cardinality, the number of uncovered edges is less than $\text{ex}_{c}(n,G)$. So for this case we conclude $G$ is packable.
If $\chi_c(G)=3$, let $\mathcal{P}$ be an interval partition of $G$ into 3 parts with each edge having its endpoints in different parts, and $W$ the weighted representation of $G$ constructed from $\mathcal{P}$. Write the vertices of $W$ as $v_1\prec v_2\prec v_3$, and consider another weighted cgg $W'$ on these vertices with $\omega_{W'}(\{v_1,v_2\})$, $\omega_{W'}(\{v_2,v_3\})$, and $\omega_{W'}(\{v_3,v_1\})$ all equal to
\[\omega_W(\{v_1,v_2\})+\omega_W(\{v_2,v_3\})+\omega_W(\{v_3,v_1\}).\]
By considering the rotations of $W$, we can see that $W'$ has a perfect fractional $W$-packing. The unweighted triangle $K_3$ has a perfect fractional $W'$-packing just by scaling. So $K_3$ has a perfect fractional $W$-packing and because $K_3$ is packable (see Figure~\ref{fig:plane_hamiltonian}), using Lemma~\ref{frac_pack_lemma} we conclude the theorem.
If $\chi_c(G)=4$, let $W$ be a weighted representation of $G$ constructed from an interval partition with 4 parts. Write the vertices of $W$ as $v_1\prec v_2\prec v_3\prec v_4$ and consider another weighted cgg $W'$ on these vertices such that every edge with length $l$ in $W'$ has weight $w_l$, where
\begin{align*}
&w_1:=\omega_W(\{v_1,v_2\})+\omega_W(\{v_2,v_3\})+\omega_W(\{v_3,v_4\})+\omega_W(\{v_4,v_1\}),\\
&w_2:=2\omega_W(\{v_1,v_3\})+2\omega_W(\{v_2,v_4\}).
\end{align*}
Similarly, $W'$ has a perfect fractional $W$-packing by rotations. It's easy to check that $\chi_c(G)=4$ guarantees $w_1>0$. If $w_2=0$, then the plane cycle $C_4$ of length 4 has a perfect fractional $W'$-packing just by scaling. So $C_4$ has a perfect fractional $W$-packing and because $C_4$ is packable (see Figure~\ref{fig:plane_hamiltonian}), by Lemma~\ref{frac_pack_lemma} we are done. If $w_2>0$, by Lemma~\ref{K4_frac_pack}, a complete cgg $K_m$ has a perfect fractional $W'$-packing, hence $K_m$ also has a perfect fractional $W$-packing. Then we conclude Theorem~\ref{main} by Lemma~\ref{frac_pack_lemma} and the next observation.
\end{proof}
\begin{observation}\label{Km_packable}
Any complete (unweighted) cgg $K_m$ is packable.
\end{observation}
\begin{proof}
Since every abstract graph is combinatorially packable, there are combinatorial $K_m$-packings of $K_n$ that cover all but $o(n^2)$ edges. But we know that all complete cgg's of the same size are isomorphic to each other, so such combinatorial $K_m$-packings are also convex $K_m$-packings.
\end{proof}
The remainder of the section is dedicated to the proof of Lemma~\ref{K4_frac_pack}, which is based on the well-known Farkas' lemma~\cite{matousek2006understanding}.
\begin{lemma}[Farkas' Lemma]\label{farkas}
Let $M\in \mathbb{R}^{a\times b}$ and $z\in \mathbb{R}^a$. Then exactly one of the following two assertions is true:
\begin{enumerate}
\item There exists $x\in \mathbb{R}^b$ such that $x\geq 0$ and $Mx=z$.
\item There exists $y\in \mathbb{R}^a$ such that $z^Ty<0$ and $M^Ty\geq 0$.
\end{enumerate}
\end{lemma}
To approach Lemma~\ref{K4_frac_pack} using the Farkas' lemma, we define the following matrix: Two sub-cgg's $S_1,S_2\in \mathcal{I}(K_m,W)$ are \emph{rotationally equivalent} if there is a rotation of $K_m$ that transforms $S_1$ to $S_2$. For every $S\in \mathcal{I}(K_m,W)$, we denote its rotational equivalence class by $\overline{S}$. Let $M_{K_m,W}$ be the matrix whose rows are indexed by all possible edge-lengths in $K_m$, and columns indexed by rotational equivalence classes in $\mathcal{I}(K_m,W)$, and set\begin{equation}\label{eq:compressed_matrix_def}
M_{K_m,W}(l,\overline{S})=\sum_{e\in E(S);l_{K_m}(e)=l}\omega_{S}(e).
\end{equation}
\begin{observation}\label{compressed_matrix}
If $m$ is odd and there exists a vector $x\geq 0$ such that $M_{K_m,W}x=\Vec{1}$, then $K_m$ has a perfect fractional $W$-packing.
\end{observation}
\begin{proof}
For every $S\in \mathcal{I}(K_m,W)$, let $\phi(S)$ be the entry of $x$ indexed by $\overline{S}$. Observe that for any two edges $e_1,e_2\in E(K_m)$ having the same length, there's a unique rotation that transforms $e_1$ to $e_2$. (This requires $m$ to be odd.) Also note that every edge of $K_m$ has weight 1. We can check that $M_{K_m,W}x=\Vec{1}$ is equivalent to \eqref{eq:perfect_fractional_packing_def}.
\end{proof}
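In practice, the feasibility of $\{x\geq 0:\ M_{K_m,W}\,x=\Vec{1}\}$ can also be tested numerically by linear programming. The sketch below uses a toy stand-in matrix, since assembling the actual $M_{K_m,W}$ from \eqref{eq:compressed_matrix_def} is routine but lengthy:
\begin{verbatim}
# Sketch: decide feasibility of {x >= 0 : M x = 1} with a trivial LP
# objective. M below is a toy stand-in, not the actual M_{K_m,W}.
import numpy as np
from scipy.optimize import linprog

def feasible(M):
    a, b = M.shape
    res = linprog(c=np.zeros(b), A_eq=M, b_eq=np.ones(a),
                  bounds=[(0, None)] * b, method="highs")
    return res.status == 0                   # 0: a feasible x was found

M = np.array([[2.0, 1.0], [0.0, 1.0]])
print(feasible(M))                           # True, e.g. x = (0, 1)
\end{verbatim}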
\begin{proof}[Proof of Lemma~\ref{K4_frac_pack}] Write the weight of length-$l$ edges in $W$ as $w_l$ for $l=1,2$, and set
\[m=2m'+1\text{\quad with\quad}m'=\left\lceil\frac{12(w_1+w_2)^2}{w_1w_2}\right\rceil.\]
Note that $m'$ is the largest edge-length in $K_m$. For the rest of this proof, all the edge-lengths are defined in $K_m$.
By Observation~\ref{compressed_matrix}, it suffices to show there exists some vector $x\geq 0$ with $M_{K_m,W}x=\Vec{1}$. For the sake of contradiction, suppose there's no such $x$. By Lemma~\ref{farkas}, there exists $y\in \mathbb{R}^{m'}$ such that\begin{itemize}
\item $\sum^{m'}_{i=1}y_{i}=(\Vec{1})^T\cdot y<0$; and
\item For any $S\in \mathcal{I}(K_m,W)$, whose $w_1$-weighted edges have lengths $i_1,i_2,i_3,i_4$ and $w_2$-weighted edges have lengths $i_5,i_6$, we have
\[ y(\overline{S}):= w_1(y_{i_1}+y_{i_2}+y_{i_3}+y_{i_4})+w_2(y_{i_5}+y_{i_6})=(M^T_{K_m,W}\cdot y)_{\overline{S}}\geq 0.\]
\end{itemize} Let $I:=\{i\in [m']; y_i<0\}$, $N=\sum_{i\in I} y_i$ and $P=\sum_{i\not\in I} y_i$. Then we have $N+P<0$.
Let $i_{max}$ be the maximum element in $I$. For each $i\in I\setminus i_{max}$, choose $S_i\in \mathcal{I}(K_m,W)$ such that its $w_1$-weighted edges have lengths $i,i_{max}-i,i,\min\{i_{max}+i, m-i_{max}-i\}$ following the cyclic order. See Figure~\ref{fig:trapezoid}. Note that $\overline{S_i}$ is uniquely determined in this way. Consider all the edges from all chosen $S_i$: for each $i\in I\setminus i_{max}$, there are at least two $w_1$-weighted edges having length $i$; for each $j\in [m']\setminus I$, there are at most two $w_1$-weighted edges having length $j$ (notice that $j$ is not equal to both $i_{max}+i_1$ and $i_{max}-i_2$ for $i_1,i_2\in I$); there are $2(|I|-1)$ many $w_2$-weighted edges having length $i_{max}$. So we have\begin{align*}
\sum_{i\in I\setminus i_{max}} y(\overline{S_i})&\leq 2w_1(N-y_{i_{max}})+2w_1P+2(|I|-1)w_2y_{i_{max}}\\
&<2((|I|-1)w_2-w_1)y_{i_{max}},
\end{align*}where the last inequality holds since $N+P<0$.
\begin{figure}[ht]
\centering
\scalebox{0.9}{\input{trapezoid}}
\caption{Configurations of $S_i$, $S'_i$, and $S''_i$. Numbers represent the edge-lengths.}
\label{fig:trapezoid}
\end{figure}
Now, by the inequality above and the assumption $y(\overline{S_i})\geq 0$, we conclude $(|I|-1)w_2-w_1< 0$, so $|I|< \frac{w_1+w_2}{w_2}$. By the pigeonhole principle, there exists $i_0\in I$ such that $y_{i_0}<\frac{w_2}{w_1+w_2}N$.
If $i_0\leq m'/2$, for each $1\leq i\leq m'/4$, we choose $S'_i\in \mathcal{I}(K_m,W)$ such that its $w_1$-weighted edges have lengths $i,i_0,i, i_0+2i$ following the cyclic order. See Figure~\ref{fig:trapezoid}. Then $\overline{S'_i}$ is uniquely determined and a similar analysis of the edge-lengths gives
\[\sum_{i=1}^{m'/4} y(\overline{S'_i})\leq \frac{m'}{4} w_1y_{i_0}+3(w_1+w_2)P<\frac{m'}{4}\frac{w_1w_2}{w_1+w_2}N+3(w_1+w_2)P<0,\]
where the last inequality is by the value of $m'$ and $N+P<0$. This contradicts the assumption that $y(\overline{S'_i})\geq 0$ for all possible $i$.
If $i_0> m'/2$, for each $1\leq i\leq m'/4$, we choose $S''_i\in \mathcal{I}(K_m,W)$ such that its $w_1$-weighted edges have lengths $i,i_0,i, i_0-2i$ following the cyclic order. See Figure~\ref{fig:trapezoid}. Then $\overline{S''_i}$ is uniquely determined and a similar analysis of the edge-lengths gives
\[\sum_{i=1}^{m'/4} y(\overline{S''_i})\leq \frac{m'}{4} w_1y_{i_0}+3(w_1+w_2)P<\frac{m'}{4}\frac{w_1w_2}{w_1+w_2}N+3(w_1+w_2)P<0,\]
where the last inequality is by the value of $m'$ and $N+P<0$. This is also a contradiction, hence we conclude the proof.
\end{proof}
\section{Proof of Theorem~\ref{higher}}\label{sec:higher}
The proof of Theorem~\ref{higher} is based on the following lemma.
\begin{lemma}\label{Kk_frac_pack}
For every integer $k>2$ there is an absolute constant $C_k$ with the following property. Suppose $W$ is a complete weighted cgg on $2k$ vertices with every length-$l$ edge having weight $w_l$. If $w_1>0$ and $w_{k}\geq C_k \sum_{l<k} w_l$, then there exists some $K_m$ having a perfect fractional $W$-packing.
\end{lemma}
\begin{proof}[Proof of Theorem~\ref{higher}] Let $C_k$ be as concluded by Lemma~\ref{Kk_frac_pack}. Recall that we wish to prove any cgg $G$ satisfying
\[E_1\neq\emptyset\text{ and }|E_{k}|\geq C_k\cdot \sum_{l<k}|E_l|\]
is packable. Here $\{E_l; l=1,\dots,k\}$ are the edge collections associated with an interval partition $\mathcal{P}=\{V_1,\dots,V_{2k}\}$ as described in the statement of this theorem.
Denote the weighted representation of $G$ constructed from $\mathcal{P}$ as $W$ and its vertices as $v_1\prec \dots \prec v_{2k}$~($k>2$). Consider another weighted cgg $W'$ on these vertices such that every edge with length $l$ in $W'$ has weight $w_l$, where
\[w_{k}=2\times\sum_{l_W(e)=k}\omega_W(e)\text{\quad and\quad} w_l=\sum_{l_W(e)=l}\omega_W(e)\text{\quad for all $l=1,\dots, k-1$}.\]
By considering the rotations of $W$, we can check that $W'$ has a perfect fractional $W$-packing. Observe that we have $w_{k}=2|E_{k}|$ and $w_l=|E_l|$ for $1\leq l< k$. Then by hypothesis and Lemma~\ref{Kk_frac_pack}, some $K_m$ has a perfect fractional $W'$-packing, hence also a perfect fractional $W$-packing. So we conclude the proof by Observation~\ref{Km_packable} and Lemma~\ref{frac_pack_lemma}.\end{proof}
\begin{proof}[Proof of Lemma~\ref{Kk_frac_pack}]
Let $C_k=16k$ and $W$ be as described. Note that $k$ is the largest edge-length in $W$. We take
\[m=2m'+1\text{\quad with\quad}m'=\left\lceil 24k^3w_1^{-1}\sum_{l=1}^k w_l \right\rceil.\]
Note that $m'$ is the largest edge-length in $K_m$. For the rest of this proof, all the edge-lengths are defined in $K_m$.
By Observation~\ref{compressed_matrix}, it suffices to show there exists some vector $x\geq 0$ with $M_{K_m,W}x=\Vec{1}$ for $M_{K_m,W}$ defined in \eqref{eq:compressed_matrix_def}. For the sake of contradiction, suppose there is no such $x$. By Lemma~\ref{farkas}, there exists $y\in \mathbb{R}^{m'}$ such that\begin{itemize}
\item $\sum^{m'}_{i=1}y_{i}=(\Vec{1})^T\cdot y<0$;
\item For any $S\in \mathcal{I}(K_m,W)$, we have
\[y(\overline{S}):=\sum_{e\in E(S)} \omega_S(e)\cdot y_{l_{K_m}(e)}=(M^T_{K_m,W}\cdot y)(\overline{S})\geq 0.\]
\end{itemize}
Let $I:=\{i\in [m']; y_i<0\}$, $N=\sum_{i\in I} y_i$, and $P=\sum_{i\not\in I} y_i$. Then we have $N+P<0$. Also, write $I'=\{i\in I; i<k\}$ and $N'=\sum_{i\in I'} y_i$.
For each $i\in I\setminus I'$, write $i=kq_i + r_i$ uniquely with integers $q_i>0$ and $0\leq r_i<k$. We choose $S_i\in \mathcal{I}(K_m,W)$ such that the lengths of its $w_1$-weighted edges listed following the cyclic order are
\[\underbrace{q_i,q_i,\dots,q_i}_{\text{$k-1$ times}}\ ,\ q_i+r_i\ ,\ \underbrace{q_i,q_i,\dots,q_i}_{\text{$k-1$ times}}\ ,\ \min\{2i-q_i-r_i,m-(2i-q_i-r_i)\}.\]
See Figure~\ref{fig:hex}. Note that $\overline{S_{i}}$ is uniquely determined in this way.
\begin{figure}[ht]
\centering
\input{hex}
\caption{Configurations of $S_i$ and $S_{ij}$ for $k=3$. Numbers represent the edge-lengths.}
\label{fig:hex}
\end{figure}
Observe that all $w_k$-weighted edges of $S_i$ have length $i$ and there are $k$ many of them. On the other hand, for any $l\in [k]$, every $w_l$-weighted edge of $S_i$ has length equal to either $lq_i$, or $lq_i+r_i$, or $i+(k-l)q_i$, or $m-(i+(k-l)q_i)$. And for each such value, there are at most $2k$ many such edges of $S_i$ having it as their length. Hence we can check, among all chosen $S_i$, there are at most $8k^2$ many $w_{l}$-weighted edges having length $j$ for each $j\in [m']\setminus I$. Overall we have
\begin{align*}
\sum_{i\in I\setminus I'} y(\overline{S_i})&\leq kw_{k}(N-N')+ 8k^2\sum_{l< k} w_lP\\
&=kw_{k}\left(\frac{1}{2}N-N'+\frac{1}{2}N\right)+ 8k^2\sum_{l< k} w_lP\\
&\leq kw_{k}\left(\frac{1}{2}N-N'\right) + \frac{1}{2}kC_k\sum_{l< k} w_lN+8k^2\sum_{l< k} w_lP\\
&< kw_{k}\left(\frac{1}{2}N-N'\right),
\end{align*}where the last inequality is by the value of $C_k$ and $N+P<0$.
Now, by the inequality above and the assumption $y(\overline{S_i})\geq 0$, we conclude $N'<N/2$. For each $i\in I'$ and $j\leq \frac{m'}{2k}$, we choose $S_{ij}\in \mathcal{I}(K_m,W)$ such that the lengths of its $w_1$-weighted edges listed following the cyclic order are
\[\underbrace{j,j,\dots,j}_{\text{$k-1$ times}}\ ,\ i\ ,\ \underbrace{j,j,\dots,j}_{\text{$k-1$ times}}\ ,\ i+(2k-2)j.\]
See Figure~\ref{fig:hex}. Note that $i+(2k-2)j\leq m'$ in this definition. Again, $\overline{S_{ij}}$ is uniquely determined and a similar analysis of the edge-lengths gives
\[\sum_{j=1}^{m'/2k}y(\overline{S_{ij}})\leq \frac{m'}{2k}w_1 y_i + 6k\sum_{l} w_lP.\]
Hence, as $|I'|<k$ and $N'<N/2$, we have
\[\sum_{i\in I'}\sum_{j=1}^{m'/2k}y(\overline{S_{ij}}) < \frac{m'}{2k}w_1 N'+ 6k^2\sum_{l} w_lP< \frac{m'}{2k}w_1 \frac{N}{2}+ 6k^2\sum_{l} w_lP<0,\]
where the last inequality is by the value of $m'$ and $N+P<0$. This contradicts the assumption that $y(\overline{S_{ij}})\geq 0$ for all $S_{ij}$, so we conclude the proof.
\end{proof}
\section{Proof of Theorem~\ref{ordered}}\label{sec:ordered}
Some details of this proof are abridged due to their similarity to previous arguments.
\begin{proof}[Proof of Theorem~\ref{ordered}]
Let $G$ be the given ordered graph. If $\chi_<(G)=2$, a result of Pach and Tardos~\cite{pach2006forbidden} states that
\[\text{ex}_{<}(n,G)=\left(1-\frac{1}{\chi_<(G)-1}\right)\binom{n}{2}+o(n^2)=o(n^2),\]
where $\text{ex}_{<}(n,G)$ is the maximum number of edges in a $n$-vertex ordered graph that doesn't contain $G$ as an ordered subgraph. Similar to the $\chi_c(G)=2$ case in Theorem~\ref{main}, we conclude $G$ is packable.
If $\chi_<(G)=3$, let $\{V_1,V_2,V_3\}$ be an interval partition of $V(G)$ with every edge in $E(G)$ having endpoints in different parts. Write $e_{ij}=|E_G(V_i,V_j)|$ for $1\leq i<j\leq 3$. For each positive integer $n$, the \emph{irregular blow-up} $\tilde{K}_3[n]$ is an ordered graph defined as follows: the vertex set $V(\tilde{K}_3[n])$ consists of three intervals $I_1,I_2,I_3$ in order, of size $e_{12}e_{13}n$, $e_{12}e_{23}n$, $e_{13}e_{23}n$ respectively; the edge set $E(\tilde{K}_3[n])$ consists of all pairs with endpoints in different parts.
First we prove that given arbitrary $\epsilon>0$, there exists $n_\epsilon$ such that, for all $n\geq n_\epsilon$ there is a $G$-packing of $\tilde{K}_3[n]$ covering all but an $\epsilon$-fraction of edges. Consider the hypergraph $\mathcal{H}$ whose vertex set $\mathcal{V}$ consists of all edges of $\tilde{K}_3[n]$ and edge set $\mathcal{E}$ consists of all $E(G')$ where $G'$ is a subgraph of $\tilde{K}_3[n]$ satisfying\begin{itemize}
\item there exists an isomorphism $f:G\to G'$ such that $f(V_i)\subset I_i$ for $i=1,2,3$; and
\item there exists $\delta\in [1,\log n]$ such that for $i=1,2,3$, every consecutive pair of vertices in $V(G')\cap I_i$ has length $\delta$ in $\tilde{K}_3[n]$.
\end{itemize} Using similar arguments as in the proof of Lemma~\ref{frac_pack_lemma}, we can check inside $\mathcal{H}$: All $x\in \mathcal{V}$ but $O(n\log n)$ of them have degree $d(x)=(1\pm o(1))e_{12}e_{13}e_{23}n\log n$; All $x\in \mathcal{V}$ have degree $d(x)<2e_{12}e_{13}e_{23}n\log n$; Any two distinct $x_1,x_2\in \mathcal{V}$ have codegree $d(x_1,x_2)=O(e_{12}e_{13}e_{23}n)$. Therefore, by Theorem~\ref{Pippenger_Spencer}, there exists $n_\epsilon$ with the claimed property.
Now we fix an arbitrary $\epsilon>0$ and describe a recursive construction of a $G$-packing of $K_n$. Firstly, we partition $V(K_n)$ into four intervals $I_1,I_2,I_3,I_4$ in order, where $I_1,I_2,I_3$ have sizes $e_{12}e_{13}n'$, $e_{12}e_{23}n'$, $e_{13}e_{23}n'$ respectively, for a unique $n'$ such that $I_4$ has size less than $q:=e_{12}e_{13}+e_{12}e_{23}+e_{13}e_{23}$. Then we can regard the subgraph of $K_n$ with vertices $I_1\cup I_2\cup I_3$ and edges $E_{K_n}(I_1,I_2)\cup E_{K_n}(I_1,I_3)\cup E_{K_n}(I_2,I_3)$ as $\tilde{K}_3[n']$. If $n'< n_\epsilon$, we do nothing and end this construction. Otherwise, we find a $G$-packing of this $\tilde{K}_3[n']$ that covers all but an $\epsilon$-fraction of edges, and we recursively repeat this process on the induced subgraphs on $I_1,I_2,I_3$ respectively. Finally, collecting all the produced edge-disjoint $G$-copies gives the resulting $G$-packing of $K_n$.
For $n$ sufficiently large, the uncovered edges of above $G$-packing are in either one of the following categories:\begin{itemize}
\item Edges with an endpoint in $I_4$ at some stage. There are $O(n\log n)$ many of them;
\item Edges in some $\tilde{K}_3[n']$ with $n'< n_\epsilon$. There are at most $\frac{n}{qn_\epsilon}\binom{qn_\epsilon}{2}=O(n)$ many of them.
\item Edges in some $\tilde{K}_3[n']$ with $n'\geq n_\epsilon$ but not covered by the constructed $G$-packing. There are at most $\epsilon \binom{n}{2}$ many of them.
\end{itemize} Therefore, there exists a $G$-packing of $K_n$ covering all but $\epsilon n^2$ edges. By the arbitrariness of $\epsilon$, we conclude that $G$ is packable.
\end{proof}
\section{Final remarks}\label{sec:remark}
\noindent 1. There is a polynomial time randomized greedy algorithm for the concluded matching in Theorem~\ref{Pippenger_Spencer} \cite{rodl1996asymptotic}. It is also well-known that linear programs are solvable in polynomial time \cite[Chapter 7]{matousek2006understanding}. Hence there exist polynomial time randomized algorithms for the packings asserted by our theorems.
\medskip
\noindent 2. We believe our methods could also prove Theorem~\ref{higher} where the interval partition $\mathcal{P}$ has an odd size. But such a proof probably requires new constructions like $S_i\in \mathcal{I}(K_m,W)$ in Lemma~\ref{Kk_frac_pack}.
\medskip
\noindent 3. Is it true that for any cgg $G$ with $\chi_c(G)=k\geq 5$, there exist $G$-packings of $K_n$ covering a $(1-o(1))4/k$-fraction of edges? If this is possible, it is going to be asymptotically tight, by an average length argument.
\medskip\noindent {\bf Acknowledgement.} We want to thank Daniel Cranston, Andrew Suk, Jacques Verstra\"ete, Alexandra Wesolek, and Yunkun Zhou for helpful discussions. We also wish to thank the organizers of the Graduate Student Combinatorics Conference 2022, where this work was initiated.
\bibliographystyle{abbrv}
\subsection{Per-category localization accuracy}
\input{tables/cvpr-flickr30k-per-category-accuracy}
Table\ \ref{tab:flickr30k-per-category} shows the per-category phrase localization accuracy on the Flickr30K Entities dataset. Compared with the NCE model, the NCE+Distill model shows the largest relative improvements in the following categories: body parts ($+62\%$), instruments ($+9.5\%$), scene ($+9\%$), and clothing ($+8\%$). These categories mainly contain phrases that are covered by the Open Images object classes.
\subsection{Per-phrase localization accuracy on phrases correspond to detector classes}
Fig.\ \ref{fig:phrase-accuracy} shows the accuracy of the NCE-only and the NCE+Distill models on the most frequent phrase categories in Flickr30K Entities~\cite{plummer2015flickr30k} that are also present in Open Images~\cite{krasin2017openimages}. The goal of this experiment is to verify whether our distillation scheme can improve the accuracy of phrase categories by leveraging external knowledge from the object detector. Across all 14 categories, our full model performs on par with NCE for mouth and jeans, and outperforms NCE for the remaining 12 categories, including people and clothing. We note the category of ``mouth'' has zero accuracy for both models. This is bounded by the object proposals --- only 10\% of the ``mouth'' instances were covered by the proposals, leading to unsatisfactory performance of both models.
\begin{figure*}[t]
\centering
\includegraphics[width=0.8\textwidth]{images/distill-oid-phrase-accuracy-compare-crop}
\caption{Phrase grounding accuracy (\%) for frequent phrases in Flickr30K Entities that are also present in the Open Images detector. We compare the results of two variants of our model (NCE vs.\ NCE+Distill). Our full model (NCE+Distill) helps to improve those phrase categories that lie in Open Images.}
\label{fig:phrase-accuracy}
\end{figure*}
\section{Visualization of Region-Phrase Matching}
We provide additional visualizations of our learned region-phrase matching function in Fig.\ \ref{fig:grounding-flickr} (samples from Flickr30K Entities~\cite{plummer2015flickr30k}) and Fig.\ \ref{fig:grounding-referit} (samples from ReferItGame~\cite{kazemzadeh2014referitgame}). On both datasets, the learned matching function identifies meaningful regions associated with the phrases.
\begin{figure*}[th!]
\centering
\includegraphics[width=1.0\textwidth]{images/grounding-viz-flickr30k-1}
\includegraphics[width=1.0\textwidth]{images/grounding-viz-flickr30k-2}\vspace{-1em}
\caption{Visualization of region-phrase matching results using our full model (NCE+Distill) on Flickr30k Entities dataset. We present 4 sample images (a—d). For each sample, from left to right: the sentence with parsed phrases, the attention map of region-phrase matching for each phrase. For each pixel, we compute a matching score by averaging scores from all proposals covering the pixel. The red color corresponds to high matching scores.}
\label{fig:grounding-flickr}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=1.0\textwidth]{images/grounding-viz-referit-1}
\includegraphics[width=1.0\textwidth]{images/grounding-viz-referit-2-crop}\vspace{-0.6em}
\caption{Visualization of region-phrase matching results using our full model (NCE+Distill) on ReferItGame dataset. We present 4 sample images (a---d). For each sample, we visualize the attention map of region-phrase matching for each phrase. Similarly, we aggregate matching scores for each pixel from all nearby proposals. The red color corresponds to high matching scores.}
\label{fig:grounding-referit}
\end{figure*}
\section{Introduction}
\label{intro}
\input{intro}
\section{Related Work}
\label{rela_work}
\input{related_work}
\section{Approach}
\label{approach}
\input{approach}
\section{Experiments and Results}
\label{exp}
\input{exp}
\section{Conclusion, Limitation and Future Work}
\label{conclusion}
\input{conclusion}
\bibliographystyle{ieee_fullname}
\subsection{Score Functions for Image-Text Matching}
Our model builds on a two-branch network~\cite{wang2018learning} for image-text matching at both region-phrase and image-sentence levels. The key idea is learning a score function to match region-phrase pairs. Based on the region-phrase matching scores, we further construct an image-sentence similarity score. Specifically, our network has two branches $f$ and $g$ that take the inputs of region and phrase features $x_i^l$ and $y_j^k$, respectively. Each branch is realized by a deep network by stacking multiple fully connected layers with ReLU activation in-between, followed by a L$2$ normalization at the end. We define the similarity between a region-phrase pair $(x_i^l, y_j^k)$ as the cosine similarity between the transformed features $f(x_i^l)$ and $g(y_j^k)$, given by
\begin{equation} \label{eq:noweight}
\small
s(x_i^l, y_j^k) = f(x_i^l)^T g(y_j^k).
\end{equation}
We further aggregate the region-phrase matching scores $s(x_i^l, y_j^k)$ into a similarity score between a image-sentence pair $(X_i, Y_j)$, defined as
\begin{equation} \label{eq:sim_score}
\small
S(X_i, Y_j) = \sum_{k=1}^{m} \max_{{1\leq l\leq n}} \ s(x_i^l, y_j^k).
\end{equation}
This image-sentence score $S(X_i, Y_j)$ is computed using greedy matching. Concretely, for each phrase $k$ in the sentence $j$, we find its best matching region in an image.
The scores of best matching regions are further summed across all phrases. Note that phrases and regions are not interchangeable in this score function, i.e., $S(X_i, Y_j)\neq S(Y_j, X_i)$, because each phrase must be matched to at least one region, while some regions, such as background regions, are not matched to any phrase. Similar image-sentence scoring functions were discussed in~\cite{karpathy2014deep,zhou2018weakly} for image-sentence retrieval.
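A minimal sketch of the score functions in Eq.\ \ref{eq:noweight} and Eq.\ \ref{eq:sim_score} is given below; layer sizes and feature dimensions are illustrative, not our exact architecture. The last line also shows the test-time grounding prediction, which requires no detector:
\begin{verbatim}
# Minimal PyTorch sketch of the two-branch scores; dimensions are
# illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Branch(nn.Module):
    def __init__(self, in_dim, out_dim=512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU(),
                                 nn.Linear(out_dim, out_dim))
    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)   # L2-normalize

f, g = Branch(2048), Branch(300)                  # region / phrase branches
regions = torch.randn(36, 2048)                   # n regions  x_i^l
phrases = torch.randn(5, 300)                     # m phrases  y_j^k
s = g(phrases) @ f(regions).t()                   # (m, n) cosine similarities
S = s.max(dim=1).values.sum()                     # greedy image-sentence score
best_region = s.argmax(dim=1)                     # test-time grounding
\end{verbatim}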
\subsection{Distillation using Contrastive Learning}
A major challenge of weakly supervised grounding is the lack of ground-truth region-phrase pairs. Our key idea is to make use of an object detector during training that can provide ``pseudo'' labels. Our model further distills from these ``pseudo'' labels for learning region-phrase matching. Once learned, we can directly use the region-phrase score function without object detection. In what follows, we describe the generation of pseudo labels, the contrastive loss used for distillation, and the training and inference of our model.
\noindent \textbf{Pseudo Labels for Region-Phrase Matching}. An object detector $\mathcal{D}$ predicts the probability of region $x_i^l$ having an object label $z_i^l$ in the form of nouns (including ``background''), i.e., $p(z_i^l|x_i^l)=\mathcal{D}(x_i^l)$.
$z_i^l$ can be further matched to the phrase $y_j^k$, e.g., using similarity scores between object noun and the head noun of the phrase. {In particular, our implementation uses the WordNet~\cite{miller1995wordnet} to define such similarity scores, as discussed in section~\ref{exp}.} Let $p(y_j^k, z_i^l)$ be the matching probability between $y_j^k$ and $z_i^l$. We propose to approximate the unknown region-phrase matching ground-truth $p(x_i^l, y_j^k)$ by soft ``pseudo'' label $\hat{p}(x_i^l, y_j^k)$, approximated by
\begin{equation}
\label{eq:eq_3}
\small
\hat{p}(x_i^l, y_j^k) \propto \sum_{z_i^l} p(y_j^k, z_i^l) p(z_i^l|x_i^l) p(x_i^l).
\end{equation}
where we assume $p(z_i^l)$ is constant for every $z_i^l$, since the set of object classes is fixed and limited; we therefore omit $p(z_i^l)$ from the denominator of Eq.~\ref{eq:eq_3}. Note that this approximation holds when $x_i^l$ and $y_j^k$ are conditionally independent given $z_i^l$ --- a rather strong assumption. Nonetheless, we use $\hat{p}(x_i^l, y_j^k)$ as a ``pseudo'' soft target distribution for training our model. In practice, $\hat{p}(x_i^l, y_j^k)$ is approximated and computed by detecting objects and matching their names to candidate phrases.
\noindent \textbf{Distilling Knowledge from Pseudo Labels}. We propose to distill from the pseudo label $\hat{p}(x_i^l, y_j^k)$ by aligning the region-phrase matching scores $s(x_i^l, y_j^k)$ to the soft pseudo label $\hat{p}(x_i^l, y_j^k)$. Specifically, given a matching image-sentence pair $(X_i, Y_j)$, we propose the following distillation loss function for region-phrase matching
\begin{equation} \label{eq:obj_distill}
\small
\mathcal{L}_{RP}(X_i, Y_j) =
- \sum_{y_{j}^{k}\in Y_j} \sum_{x_{i}^{l} \in R_i^l}
\hat{p}(x_i^l, y_j^k)
\log \hat{h}(x_i^l, y_j^k),
\end{equation}
where $\hat{h}(x_i^l, y_j^k)$ is given by
\begin{equation*}
\resizebox{.45\textwidth}{!}
{
$\hat{h}(x_i^l, y_j^k) =
\frac{\exp(s(x_{i}^{l},y_{j}^{k})/\tau)}{\exp(s(x_{i}^{l},y_{j}^{k})/\tau) + \sum_{x_{i}^{l'} \in {R_i^l} {\setminus \{x_i^l\}}} \exp(s(x_{i}^{l'},y_{j}^{k})/\tau)}$.
}
\end{equation*}
Here $\tau$ is the temperature scale factor (0.5 in all our experiments). $R_i^l$ controls how we select $x_{i}^{l'}$. {A simple choice for $x_{i}^{l'}$ is to use all regions in $X_i$ except $x_{i}^{l}$.} In this case, our loss can be interpreted as the cross entropy loss, where the normalized output of the score function $s(x_{i}^{l},y_{j}^{k})$ is trained to mimic the pseudo label $\hat{p}(x_i^l, y_j^k)$ given by object detection outputs. This is the same idea as knowledge distillation~\cite{hinton2015distilling}, where the soft target from a teacher detection model is used as a learning objective.
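For concreteness, the distillation loss in Eq.\ \ref{eq:obj_distill} can be sketched as a temperature-scaled cross entropy against the pseudo labels; the shapes and the averaging over phrases below are our own simplifications:
\begin{verbatim}
# Sketch of the distillation loss. p_hat: (m, n) pseudo labels over
# regions per phrase (rows sum to 1); s: (m, n) region-phrase scores.
import torch.nn.functional as F

def region_phrase_distill(s, p_hat, tau=0.5):
    log_h = F.log_softmax(s / tau, dim=1)      # normalized over regions
    return -(p_hat * log_h).sum(dim=1).mean()  # averaged over phrases here
\end{verbatim}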
\noindent \textbf{Contrastive Loss for Image Sentence Matching}. Moving beyond region-phrase matching, we enforce additional constraints for image-sentence matching scores $S(X_i, Y_j)$, where the ground truth pairs $p(X_i, Y_j)$ is readily available.
To this end, we make use of the noise contrastive estimation loss~\cite{gutmann2010noise} to contrast samples from data distribution (matched pairs) and noise distribution (non-matched pairs). The NCE loss for image-sentence matching is thus given by
\begin{equation} \label{eq:obj_contrast}
\small
\mathcal{L}_{IS}(X_i, Y_j) = -\mathbb{E}_{\mathcal{N}(X_i) \in X} \left[
p(X_i, Y_j) \\
\log h(X_i, Y_j)
\right],
\end{equation}
where $h(X_i, Y_j)$ is defined as
\begin{equation*}
\resizebox{.45\textwidth}{!}
{
$h(X_i, Y_j) = \frac{\exp(S(X_i,Y_j)/\tau)}{\exp(S(X_i,Y_j)/\tau) + \sum_{i' \in \mathcal{N}(X_i)} \exp(S(X_{i'},Y_j)/\tau)}$.
}
\end{equation*}
Again, $\tau$ is the temperature scale factor (0.5). $p(X_i, Y_j)$ is reduced to binary values during training, i.e., $p(X_i, Y_j)=1$ if and only if $(X_i, Y_j)$ is a ground-truth image-sentence pair. $\mathcal{N}(X_i)$ includes a set of negative samples, i.e., those images not matched to the current sentence $Y_j$, sampled from the set of images $X$. In practice, we always sample a fixed number of negative pairs from the current mini-batch.
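With binary $p(X_i, Y_j)$, the loss in Eq.\ \ref{eq:obj_contrast} reduces to a cross entropy in which the matched image occupies a fixed index among the sampled negatives; a minimal sketch (batching convention assumed):
\begin{verbatim}
# Sketch of the image-sentence NCE loss, with the matched image placed
# at index 0 among the sampled negatives.
import torch
import torch.nn.functional as F

def image_sentence_nce(S_pos, S_neg, tau=0.5):
    # S_pos: (B,) matched-pair scores; S_neg: (B, K) non-matched scores
    logits = torch.cat([S_pos.unsqueeze(1), S_neg], dim=1) / tau
    target = torch.zeros(S_pos.size(0), dtype=torch.long)
    return F.cross_entropy(logits, target)
\end{verbatim}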
\noindent \textbf{Loss Function}. We note that both Eq.\ \ref{eq:obj_distill} and Eq.\ \ref{eq:obj_contrast} share a similar form and can be both considered as a variant of contrastive loss. Concretely, the two loss functions seek to align the normalized scores in the form of NCE to a target distribution. The difference is how the target distribution is defined and how the samples are selected for normalization. For region-phrase matching, the target distribution is given by pseudo labels from object detection and local image regions are used for normalization. For image-sentence matching, the target distribution is defined by ground-truth image-sentence pairs and non-matched image-sentence pairs are sampled for normalization.
By combining the distillation loss $\mathcal{L}_{RP}(X_i, Y_j)$ for region-phrase matching and the NCE loss $\mathcal{L}_{IS}(X_i, Y_j)$ for image-sentence matching, our final loss function is given by
\begin{equation} \label{eq:obj_total}
\small
\mathcal{L}(X_i, Y_j) = \mathcal{L}_{IS}(X_i, Y_j) + \lambda\mathcal{L}_{RP}(X_i, Y_j),
\end{equation}
where $\lambda$ is the coefficient balancing the two loss terms. During training, we gradually increase the coefficient $\lambda$, such that our model learns to optimize image-sentence matching during the early stage of training, and to focus on region-phrase matching during the late stage of training.
\noindent \textbf{Inference without Object Detection}. During inference, given an input image-sentence pair, we apply the learned region-phrase score function $s(x_{i}^{l},y_{j}^{k})$ to every region-phrase pair. The image region with the highest score for each phrase is then selected as the grounding result, i.e., $\arg \max_{x_{i}^{l}} s(x_{i}^{l},y_{j}^{k})$. We must point out that unlike previous methods~\cite{wang2019phrase,gupta2020contrastive}, {\it the inference of our model does not require running object detection}; therefore, our method is very efficient at test time.
\subsection{Implementation Details}
We first describe our implementation details, including the features and object detectors, the network architecture and training scheme, and details of object-phrase matching.
\noindent \textbf{Features and Object Detectors}.
To establish a fair comparison with previous work using region features extracted from different backbones, we benchmark our methods by varying the backbone networks. We follow the same settings as~\cite{chen2018knowledge,wang2019phrase} to extract activations from the last layer before the classification head in Faster R-CNN~\cite{ren2015faster} with VGG16 and ResNet-101 backbones pre-trained on PASCAL VOC (PV)~\cite{everingham2009pascal} or MS COCO (CC)\footnote{\url{https://github.com/endernewton/tf-faster-rcnn}}~\cite{lin2014microsoft}. To compare with WPT~\cite{wang2019phrase}, which uses object detectors trained on the Open Images Dataset~\cite{krasin2017openimages}, we also extract classifier logits from Faster R-CNN with an Inception-ResNet-V2 (IRV2) backbone pre-trained on the Open Images Dataset (OI)\footnote{\url{https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md}}. To compare with InfoGround~\cite{gupta2020contrastive}, we also experiment with their released features\footnote{\url{https://github.com/BigRedT/info-ground}} on Flickr30K and follow their protocol of using the same VisualGenome (VG) pre-trained Faster R-CNN model\footnote{\url{https://github.com/BigRedT/bottom-up-features}} to extract features on ReferItGame. Unlike \cite{chen2018knowledge,rohrbach2016grounding,zhao2018weakly}, which use a large number of bounding box proposals (typically $100$ per image), InfoGround only extracts $30$ proposals per image for the grounding model to select from. We compare the two settings separately in the result tables.
We denote these feature choices as ``{\it VGG16}'', ``{\it Res101}'', and ``{\it IRV2}'', respectively, plus the detector training dataset when reporting our results. For example, ``{\it IRV2 OI}'' means that the backbone is Inception-ResNet-V2 (IRV2) pre-trained on the Open Images (OI) Dataset.
\input{tables/cvpr-flickr30k-benchmark}
\noindent \textbf{Network Architecture}.
For visual representation, we normalized the region features to zero mean and unit variance using statistics from the training samples. This normalization helps our model converge faster.
We attached two fully connected layers on top of the region features to get $512$-D region embeddings.
For phrase representation, we tokenized each query phrase into words
and encoded them using an LSTM~\cite{hochreiter1997long} with GloVe embeddings~\cite{pennington2014glove}.
The embedding vocabulary contains the most frequent 13K tokens from the Flickr30K Entities training split. The same vocabulary is used for ReferItGame. The LSTM has two layers, with both the embedding and hidden dimensions set to $300$. Max pooling is applied over the hidden states of all tokens, followed by two fully connected layers to get $512$-D phrase embeddings.
\noindent \textbf{Training Details}.
We trained our model using Adam~\cite{kingma2014adam} with a learning rate of 0.0001. We used a mini-batch size of 32 image-sentence pairs (31 negative images per sentence for the contrastive loss). Unlike~\cite{fang2019modularized}, we did not fine-tune our vision backbone during training, for efficiency. Similarly, the GloVe embeddings~\cite{pennington2014glove} are also fixed during training. We observed that the model converges quickly within a few epochs on both datasets. For the $\lambda$ in Eq.\ \ref{eq:obj_total}, we gradually increased the value using a staircase function $\lambda=\min(\lfloor step / a \rfloor, b)$, where $a, b$ are selected based on the validation set. We observed that $a$=$200$, $b$=$1$ for VG features and $a$=$200$, $b$=$3$ for the others work best.
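The schedule itself is a one-liner; for reference:
\begin{verbatim}
def lambda_schedule(step, a=200, b=3):
    # staircase warm-up: 0 for the first a steps, then 1, 2, ..., capped at b
    return min(step // a, b)
\end{verbatim}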
\noindent \textbf{Object-Phrase Matching}.
We use WordNet~\cite{miller1995wordnet} to define the similarity scores between phrases and region object labels. Specifically, we identify the head noun of each phrase using the off-the-shelf POS tagger provided by NLTK~\cite{loper2002nltk}, which uses the Penn Treebank tag set. If the head noun matches one of the detector class labels, the phrase is mapped to that class. Otherwise, we look up the head noun in WordNet~\cite{miller1995wordnet} to find its corresponding synset, the synset's lemmas, and its hypernyms. If any of these exists among the detector classes, the phrase is mapped to that class. For phrases with multiple synsets, the most frequent one is used. The WordNet synsets help to match phrases such as ``spectators'' to ``person'' and ``sweater'' to ``clothing''. With the $545$ classes in the Open Images Dataset~\cite{krasin2017openimages}, the WordNet-based matching algorithm covers $18$k out of $70$k unique phrases in Flickr30K Entities and $7$k out of $27$k in the ReferItGame training set.
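A sketch of this matching procedure with NLTK and WordNet is given below; the head-noun extraction shown (last noun token) is a simplified stand-in for the full heuristic:
\begin{verbatim}
import nltk
from nltk.corpus import wordnet as wn

def map_phrase_to_class(phrase, detector_classes):
    tags = nltk.pos_tag(nltk.word_tokenize(phrase.lower()))
    nouns = [w for w, t in tags if t.startswith('NN')]  # Penn Treebank tags
    if not nouns:
        return None
    head = nouns[-1]
    if head in detector_classes:
        return head
    synsets = wn.synsets(head, pos=wn.NOUN)
    if not synsets:
        return None
    syn = synsets[0]                                    # most frequent sense
    candidates = [l.name() for l in syn.lemmas()]
    candidates += [l.name() for h in syn.hypernyms() for l in h.lemmas()]
    for c in candidates:                                # e.g. sweater->clothing
        if c in detector_classes:
            return c
    return None
\end{verbatim}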
We empirically observe that using WordNet is more reliable than word embeddings for noun matching, as similarity in the word embedding space is not necessarily aligned with the visual similarity of the entities described. A similar observation was made in~\cite{fang2019modularized}, where word2vec embeddings are not discriminative for gender-related visual concepts.
\subsection{Comparison to Other Methods}
We further compare our results to the latest methods of weakly supervised phrase grounding on both Flickr30K Entities and ReferItGame datasets.
\noindent \textbf{Baselines}. We consider a number of baselines. Our main competitors are those methods using object detectors, including KAC~\cite{chen2018knowledge}, UTG~\cite{yeh2018unsupervised}, and WPT~\cite{wang2019phrase}.
Among these methods, KAC and UTG used detectors during both training and inference. WPT applied detectors during inference. While these baselines have very different sets of detectors and backbones, we try to match their settings in our experiments. Our baselines also include previous methods that do not use object detectors, such as GroundeR~\cite{rohrbach2016grounding}, MATN~\cite{zhao2018weakly}, MTG~\cite{fang2019modularized}, and InfoGround~\cite{gupta2020contrastive} for completeness.
\input{tables/cvpr-referit-benchmark}
\noindent \textbf{Results on Flickr30K Entities}. Our results on Flickr30K Entities are summarized in Table~\ref{tab:flickr30k-results}, which compares both the settings of different methods and their phrase localization accuracy.
When using detectors pre-trained on COCO (YOLOv2 CC, VGG16 CC, IRV2 CC), our method outperforms previous works (+3.5\% / +1.7\% / +2.81\% for UTG / KAC / WPT, respectively). In comparison to MTG which uses COCO pre-trained backbones, our results are better (+2.6\%) with only a single VGG16 backbone in inference (vs.\ three ResNet backbones in MTG).
When using a stronger backbone (Res101 CC) and a better detector pre-trained on a larger-scale dataset (IRV2 OI), our results are further improved by 10.6\% on Flickr30K Entities. Our final results thus outperform the latest method of WPT under a similar training setting. Moreover, in contrast to WPT, our method does not require running the three cumbersome object detectors during inference, and is thus more suitable for real-world deployment.
Lastly, to compare fairly with~\cite{gupta2020contrastive}, we use the same VisualGenome pre-trained backbone (Res101 VG) and the same proposal setting (30 per image); our NCE+Distill model significantly outperforms the latest method of InfoGround by 5.22\% when trained on the Flickr30K Entities dataset. Moreover, with the help of distillation, our results also outperform their best result by 1.4\%, which is obtained by training on the COCO Captions dataset~\cite{chen2015microsoft} with a strong language model (BERT~\cite{devlin2018bert}).
\input{tables/cvpr-flops-benchmark}
To further illustrate the benefits of our model, we compare the computational complexity at inference time in terms of floating point operations (FLOPs). For each model, the estimate is a combination of the backbone and the object detector, since the rest of the model, including the language feature extractor, is computationally insignificant (0.3 GFLOPs for our NCE+Distill model). For Faster R-CNN based detectors, we use the numbers reported in \cite{huang2017detspeed}, under the high-resolution input (600$\times$600) and 300 proposals setting. For YOLOv2, we use the number reported in \cite{redmon2017yolo9000}. The comparison focuses on methods that incorporate external knowledge, namely UTG, KAC, WPT, and ours. As shown in Table \ref{tab:flops-results}, our proposed NCE+Distill method reduces the computational complexity by $50\%$ and $70\%$ relative to KAC and WPT, respectively, while being slightly more expensive than UTG due to the difference in detector meta-architecture (Faster R-CNN vs.\ YOLOv2).
\noindent \textbf{Results on ReferItGame}.
We summarize the results on ReferItGame under different settings in Table~\ref{tab:referit-results}. When using COCO pre-trained detectors, our method significantly outperforms UTG, KAC, and WPT by 3.6\%, 8.7\%, and 9.12\%, respectively. When using a stronger backbone (Res101 CC) and a better detector pre-trained on a larger-scale dataset (IRV2 OI), our results are improved by 3.1\%, outperforming the best results from WPT, which uses four cumbersome knowledge detectors in inference, by 1.1\%.
Finally, using the VisualGenome pre-trained backbone (Res101 VG), our results can be further improved by 10.8\%. This gain is significantly larger than the one on Flickr30K Entities. We conjecture that the VisualGenome pre-trained backbone provides more discriminative features in certain categories that appear more frequently in ReferItGame than in Flickr30K Entities. To verify this hypothesis, we compare the most frequent phrases from Flickr30K Entities and ReferItGame. In Flickr30K Entities, top phrases are mostly people-related: \textit{man, woman, boy, girl, person, people, dog, two men, street, young boy, child}, whereas top phrases in ReferItGame are mostly scene-related:
\textit{sky, water, people, ground, person, trees,
building, face, road, grass, clouds}. Many scene-related objects are not in the COCO label set, but are available in the VisualGenome categories.
\begin{figure*}[h!]
\centering
\includegraphics[clip, trim=0cm 3cm 0cm 0cm,width=0.95\textwidth]{images/cvpr-grounding-viz-final.pdf}\vspace{-1em}
\caption{Visualization of region-phrase matching. First row: results of the NCE-only model (left) and the NCE+Distill model (right) on phrases mapped to Open Images Dataset classes. Second row: results of the Distill-only model (left) and the NCE+Distill model (right) on phrases mapped to the same class but with different attributes. For each pixel, we compute a matching score by averaging the scores of all proposals covering the pixel. Red colors indicate high matching scores.
Our knowledge distillation helps to better identify the extent of objects, while contrastive learning helps to distinguish finer attributes.}\label{fig:compare}\vspace{-1.5em}
\end{figure*}
\subsection{Ablation Study}
To fully understand our model, we conduct ablation studies on both the Flickr30K Entities and ReferItGame datasets. Specifically, we consider four different variants of our model: (1) our model with only the image-sentence score function
(Eq.\ \ref{eq:sim_score}) supervised by a max margin loss following~\cite{karpathy2014deep,zhao2018weakly}, denoted as ``Max Margin'', i.e., modeling the distances between positive and negative image-sentence pairs via a max margin loss; the full loss function is defined in Eq.~\ref{eq:margin} below.
(2) our model with only the image-sentence score function (Eq.\ \ref{eq:sim_score}) supervised by the NCE loss (Eq.\ \ref{eq:obj_contrast}), denoted as ``NCE''; (3) our model with only the region-phrase score function (Eq.\ \ref{eq:noweight}) supervised by the distillation loss (Eq.\ \ref{eq:obj_distill}), denoted as ``Distill''; and (4) our full model with both region-phrase and image-sentence score functions supervised by our joint loss (Eq.\ \ref{eq:obj_total}), denoted as ``NCE+Distill''.
We present our ablation results on the four model variations in Table~\ref{tab:ablation-results}.
\noindent \textbf{Contrastive vs.\ Ranking loss}. We first define the max margin loss, following the notation of the image-sentence level contrastive loss (Eq.~\ref{eq:obj_contrast}), as
\begin{equation}
\small
\label{eq:margin}
\mathcal{L}_{IS}(X_i, Y_j) = \mathbb{E}_{\mathcal{N}(X_i) \in X} \left[h(X_i, Y_j)\right]
\end{equation}
\begin{equation*}
\resizebox{.45\textwidth}{!}
{
$h(X_i, Y_j) = \sum_{i'\in \mathcal{N}(X_i)} \max\{0, m - S(X_i, Y_j) + S(X_{i'}, Y_j)\} $,
}
\end{equation*}
where $m$ is the margin. In our experiment, we fix $m = 0.05$.
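For completeness, a sketch of this ranking baseline over the same batch score matrix as above (our own illustration):
\begin{verbatim}
import torch

def max_margin_loss(S, m=0.05):
    # S: (B, B) image-sentence scores, matched pairs on the diagonal
    pos = S.diag().unsqueeze(0)         # (1, B): S(X_j, Y_j) per sentence j
    hinge = (m - pos + S).clamp(min=0)  # hinge per negative image i'
    hinge.fill_diagonal_(0)             # exclude the positive pair itself
    return hinge.sum(dim=0).mean()      # sum over negatives, mean over batch
\end{verbatim}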
We observe that the NCE loss substantially outperforms the standard max margin loss, by +6.2\%/+3.7\% on Flickr30K Entities and ReferItGame, respectively. These results suggest the effectiveness of contrastive learning, as also demonstrated in the concurrent work~\cite{gupta2020contrastive}.
\input{tables/cvpr-method-ablation}
\noindent \textbf{Effects of Knowledge Distillation}.
Our full model, combining both region-phrase and image-sentence matching, brings further improvements over both the NCE-only model and the Distill-only model. We conjecture that NCE and Distill provide complementary information for phrase grounding.
Specifically, Distill helps learn the extent of objects better, distinguishing parts that frequently co-occur in the same image, such as ``person'' and ``face''. NCE helps to learn finer-grained attributes, such as ``striped shirt'' and ``blue shirt'', as well as concepts not covered by the detector classes.
To understand where the accuracy improvement of the distillation method comes from, we compute per-phrase accuracy on frequent phrases that are mapped to one of the classes in the Open Images Dataset. The results can be found in the supplementary materials. We also perform qualitative analysis with side-by-side heatmaps of region-phrase matching scores on such mapped phrases, shown in Figure~\ref{fig:compare}. Our full model (NCE+Distill) can better localize objects corresponding to the query phrase.
\noindent \textbf{Generalization to Different Backbones}.
We vary the object detectors used by our model and present the results in Table~\ref{tab:distillation-results}. As the knowledge gap between the visual backbone and the external detector becomes smaller, e.g., when the visual backbone is trained on tasks that involve finer-grained object and attribute annotations, the effect of distillation becomes less pronounced. Nevertheless, our method consistently improves performance for all detectors.
\input{tables/cvpr-distill-ablation}
\section{Conclusion}
We present a novel shape-aware meta-learning scheme to improve model generalization in prostate MRI segmentation. On top of the meta-learning strategy, we introduce two complementary objectives to enhance the segmentation outputs on unseen domains by imposing shape compactness and smoothness in the meta-optimization. Extensive experiments demonstrate the effectiveness of our approach. To the best of our knowledge, this is the first work incorporating shape constraints with meta-learning for domain generalization in medical image segmentation. Our method can be extended to various segmentation scenarios that suffer from domain shift.
\subsubsection{Acknowledgement.} This work was supported in parts by the following grants:
Key-Area Research and Development Program of Guangdong Province, China (2020B010165004),
Hong Kong Innovation and Technology Fund (Project No. ITS/426/17FP),
Hong Kong RGC TRS Project T42-409/18-R, and National Natural Science
Foundation of China with Project No. U1813204.
\section{Experiments}
\begin{table}[b]
\renewcommand\arraystretch{1.1}
\centering
\caption{Details of our employed six different sites obtained from public datasets.}
\label{table:dataset}
\scalebox{0.73}{
\begin{tabular}{p{1.5cm}|p{1.5cm}|p{1.5cm}|p{1.8cm}|p{3.0cm}|p{1.8cm}|p{1.5cm}}
\hline
Dataset & Institution & Case num & Field strength (T) & Resolution (in/through plane) (mm) & Endorectal Coil & Manufacturer \\
\hline
Site A &RUNMC & 30 & 3 & 0.6-0.625/3.6-4 & Surface & Siemens \\
Site B &BMC & 30 & 1.5& 0.4/3 & Endorectal& Philips \\
Site C &HCRUDB & 19 & 3 & 0.67-0.79/1.25 & No & Siemens \\
Site D &UCL & 13 & 1.5 and 3 & 0.325-0.625/3-3.6 & No & Siemens \\
Site E &BIDMC & 12 & 3 & 0.25/2.2-3 & Endorectal & GE \\
Site F &HK & 12 & 1.5 & 0.625/3.6 & Endorectal & Siemens \\
\hline
\end{tabular}
}
\end{table}
\subsubsection{Dataset and Evaluation Metric.}
We employ prostate T2-weighted MRI from 6 different data sources with distribution shift (cf. Table~\ref{table:dataset} for a summary of their sample numbers and scanning protocols).
Among these data, samples of Sites A and B are from the NCI-ISBI13 dataset~\cite{bloch2015nci}; samples of Site C are from the I2CVB dataset~\cite{lemaitre2015computer}; samples of Sites D, E and F are from the PROMISE12 dataset~\cite{litjens2014evaluation}.
Note that the NCI-ISBI13 and PROMISE12 actually include multiple data sources, hence we decompose them in our work.
For pre-processing, we resized each sample to $384 \! \times \! 384$ in the axial plane and normalized it to zero mean and unit variance. We then cropped each sample to preserve only the slices of the prostate region, ensuring consistent segmentation targets across sites. We adopt the Dice score (Dice) and the Average Surface Distance (ASD) as evaluation metrics.
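A sketch of this pre-processing for a single case is shown below (our own illustration; the slice-selection criterion, keeping slices with a nonzero prostate mask, is our assumption):
\begin{verbatim}
import numpy as np
from skimage.transform import resize

def preprocess(volume, mask):
    # volume, mask: (S, H, W) T2-weighted slices and labels of one case
    vol = resize(volume, (volume.shape[0], 384, 384), preserve_range=True)
    msk = resize(mask, (mask.shape[0], 384, 384), order=0,
                 preserve_range=True)
    vol = (vol - vol.mean()) / (vol.std() + 1e-8)  # zero mean, unit variance
    keep = msk.sum(axis=(1, 2)) > 0                # slices containing prostate
    return vol[keep], msk[keep]
\end{verbatim}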
\subsubsection{Implementation Details. }
We implement an adapted Mix-residual-UNet~\cite{yu2017volumetric} as the segmentation backbone. Due to the large variance in slice thickness among different sites, we employ a 2D architecture.
The numbers of meta-train and meta-test domains were set to 2 and 1, respectively.
The weights $\lambda_1$ and $\lambda_2$ were set to 1.0 and $5e^{-3}$, respectively. The embedding network $H_\phi$ consists of two fully connected layers with output sizes of 48 and 32.
The segmentation network $F_\theta$ was trained using the Adam optimizer, with the learning rates for the inner-loop update and the meta optimization both set to $1e^{-4}$. The network $H_\phi$ was also trained using the Adam optimizer with a learning rate of $1e^{-4}$. We trained for 20K iterations with a batch size of 5 for each source domain. For the batch normalization layers, we use the statistics of the testing data for feature normalization during inference, for better generalization performance.
\subsubsection{Comparison with State-of-the-art Generalization Methods. }
We implemented several state-of-the-art generalization methods for comparison, including a data-augmentation based method (BigAug)~\cite{zhang2020generalizing}, a classifier regularization based method (Epi-FCR)~\cite{li2019episodic}, a latent space regularization method (LatReg)~\cite{aslani2020scanner} and a meta-learning based method (MASF)~\cite{dou2019domain}.
In addition, we conducted experiments with the `DeepAll' baseline (i.e., aggregating data from all source domains to train a deep model) and the `Intra-site' setting (i.e., training and testing on the same domain, with some outlier cases excluded, to provide the general internal performance on each site).
Following previous practice~\cite{dou2019domain} for domain generalization,
we adopt the leave-one-domain-out strategy,~\textit{i.e.}, training on $K$-1 domains and testing on the one left-out unseen target domain.
As listed in Table~\ref{table:results},
DeepAll presents a strong performance, while Epi-FCR with classifier regularization shows limited advantage over this baseline.
The other approaches, LatReg, BigAug and MASF, perform significantly better than DeepAll, with the meta-learning based method yielding the best results among them in our experiments.
Notably, our approach (cf. the last row) achieves higher performance over all these state-of-the-art methods across all the six sites,
and outperforms the DeepAll model by 2.15\% on Dice and 0.60$mm$ on ASD, demonstrating the capability of our shape-aware meta-learning scheme to deal with domain generalization problem.
Moreover, Fig.~\ref{fig:qualitive_result} shows the segmentation results of different methods on three typical cases from different unseen sites. We observe that
our model with shape-relevant meta regularizers can well preserve the complete shape and smooth boundary of the segmentations in unseen domains, whereas other methods sometimes fail to do so.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\textwidth]{figure/qualitive_result.pdf}
\caption{Qualitative comparison on the generalization results of different methods, with three cases respectively drawn from different unseen domains.}
\label{fig:qualitive_result}
\end{figure}
\begin{table}[t]
\renewcommand\arraystretch{1.1}
\centering
\caption{Generalization performance of various methods on Dice (\%) and ASD ($mm$).}
\label{table:results}
\scalebox{0.73}{
\begin{tabular}{p{2.8cm}|cc|cc|cc|cc|cc|cc|cc}
\hline
Method & \multicolumn{2}{c|}{Site A} & \multicolumn{2}{c|}{Site B} & \multicolumn{2}{c|}{Site C} &\multicolumn{2}{c|}{Site D} &\multicolumn{2}{c|}{Site E} &\multicolumn{2}{c|}{Site F} &\multicolumn{2}{c}{Average}\\
\hline
Intra-site &89.27 &1.41 &\underline{88.17} &1.35 &\underline{88.29} &1.56 &83.23 &3.21 &83.67 &2.93 &85.43 &1.91 &86.34 &2.06 \\
\hline
DeepAll (baseline) &87.87 &2.05 &85.37 &1.82 &82.94 &2.97 &86.87 &2.25 &84.48 &2.18 &85.58 &1.82 &85.52 &2.18\\
\hline
Epi-FCR~\cite{li2019episodic} &88.35 &1.97 &85.83 &1.73 &82.56 &2.99 &86.97 &2.05 &85.03 &1.89 &85.66 &1.76 &85.74 &2.07\\
LatReg~\cite{aslani2020scanner} &88.17 &1.95 &86.65 &1.53 &83.37 &2.91 &87.27 &2.12 &84.68 &1.93 &86.28 &1.65 &86.07 &2.01\\
BigAug~\cite{zhang2020generalizing}&88.62 &1.70 &86.22 &1.56 &83.76 &2.72 &87.35 &1.98 &85.53 &1.90 &85.83 &1.75 &86.21 &1.93\\
MASF~\cite{dou2019domain} &88.70 &1.69 &86.20 &1.54 &84.16 &2.39 &87.43 &1.91 & 86.18 &1.85 &86.57 &1.47 &86.55 &1.81\\
\hline
Plain meta-learning &88.55 &1.87 &85.92 &1.61 &83.60 &2.52 &87.52 &1.86 &85.39 &1.89 &86.49 &1.63 &86.24 &1.90\\
+ $\mathcal{L}_{compact}$ &89.08 &1.61 &87.11 &1.49 &84.02 &2.47 &87.96 &1.64 &86.23 &1.80 &87.19 &1.32 &86.93 &1.72\\%86.71
+ $\mathcal{L}_{smooth}$ &89.25 &1.64&87.14 &1.53 &~\textbf{84.69}&2.17 &87.79&1.88&86.00 &1.82&87.74 &1.24 &87.10 &1.71\\
SAML (\textbf{Ours}) &~\textbf{89.66} &~\textbf{1.38} &~\textbf{87.53} &~\textbf{1.46} &84.43 &~\textbf{2.07} &~\textbf{88.67} &~\textbf{1.56} &~\textbf{87.37} &~\textbf{1.77} &~\textbf{88.34} &~\textbf{1.22} &~\textbf{87.67} &~\textbf{1.58}\\
\hline
\end{tabular}
}
\end{table}
We also report in Table~\ref{table:results} the cross-validation results conducted within each site, i.e., Intra-site.
Interestingly, we find that the results for Sites D/E/F are relatively lower than those for the other sites, and even worse than the baseline model. The reason may be that the sample numbers of these three sites are smaller than those of the others; consequently, intra-site training is ineffective, with limited generalization capability.
This observation reveals the important fact that, when a certain site suffers from severe data scarcity for model training, aggregating data from other sites (even with distribution shift) can be very helpful for obtaining a qualified model.
In addition, we find that our method outperforms the Intra-site model on 4 out of 6 data sites, with superior overall performance on both Dice and ASD, which endorses the potential value of our approach in clinical practice.
\begin{figure*}[t]
\centering
\includegraphics[width=0.93\textwidth]{figure/less_domain.pdf}
\caption{Curves of generalization performance on unseen domain as the number of training source domain increases, using DeepAll method and our proposed approach.}
\label{fig:domain_number}
\end{figure*}
\subsubsection{Ablation Analysis. }
We first study the contribution of each key component in our model.
As shown in Table~\ref{table:results},
the plain meta-learning method with only $\mathcal{L}_{seg}$ can already outperform the DeepAll baseline, leveraging the explicit simulation of domain shift during training.
Adding the shape compactness constraint to $\mathcal{L}_{meta}$ yields improved Dice and ASD, which are higher than MASF.
Further incorporating $\mathcal{L}_{smooth}$ (SAML) to encourage domain-invariant embeddings for pixels around the boundary attains
consistent performance improvements on all six sites. Besides, simply adding $\mathcal{L}_{smooth}$ to the pure meta-learning method (+ $\mathcal{L}_{smooth}$) also leads to improvements across sites.
We further investigate the influence of training domain numbers on the generalization performance of our approach and the DeepAll model. Fig.~\ref{fig:domain_number} illustrates how the segmentation performance on each unseen domain would change, as we gradually increase the number of source domains in range $[1, K\!-\!1]$.
Obviously, when a model is trained with just a single source domain, directly applying it to the target domain yields unsatisfactory results.
The generalization performance improves as the number of training sites increases, indicating that aggregating wider data sources helps to cover a more comprehensive distribution.
Notably, our approach consistently outperforms DeepAll across all numbers of training sites, confirming the stable efficacy of our proposed learning scheme.
\section{Introduction}
Deep learning methods have shown remarkable achievement in automated medical image segmentation~\cite{jia2019hd,milletari2016v,zhu2019boundary}.
However, the clinical deployment of existing models still suffers from performance degradation under the distribution shifts across different clinical sites, which use various imaging protocols or scanner vendors.
Recently, many domain adaptation~\cite{chen2020unsupervised,kamnitsas2017unsupervised} and transfer learning methods~\cite{gibson2018inter,karani2018lifelong} have been proposed to address this issue, while all of them require images from the target domain (labelled or unlabelled) for model re-training to some extent.
In real-world situations, it would be time-consuming and even impractical to collect data from each new target domain to adapt the model before deployment.
Instead, learning a model from multiple source domains in a way such that it can directly generalize to an unseen target domain is of significant practical value.
This challenging problem setting is \emph{domain generalization (DG)}, in which no prior knowledge from the unseen target domain is available during training.
Among previous efforts towards the generalization problem~\cite{gibson2018inter,liu2020ms,yoon2019generalizable}, a naive practice of aggregating data from all source domains for training a deep model (called `DeepAll' method) can already produce decent results serving as a strong baseline. It has also been widely used and validated in existing literature~\cite{chen2019improving,dou2020unpaired,yao2019strong}.
On top of DeepAll training, several studies added data augmentation techniques to improve the model generalization capability~\cite{zhang2020generalizing,paschali2018generalizability}, assuming that the domain shift can be simulated by applying extensive transformations to data from the source domains. Performance improvements have been obtained on tasks of cardiac~\cite{chen2019improving}, prostate~\cite{zhang2020generalizing} and brain~\cite{paschali2018generalizability} MRI segmentation, yet the choices of augmentation schemes tend to be tedious and task-dependent.
Some other approaches have developed new network architectures to handle domain discrepancy~\cite{kouw2019cross,yang2018generalizing}.
Kouw~\textit{et al.}~\cite{kouw2019cross} developed an unsupervised Bayesian model to interpret the tissue information prior for generalization in brain tissue segmentation.
A set of approaches~\cite{aslani2020scanner,otalora2019staining} also tried to learn domain invariant representations with feature space regularization by developing adversarial neural networks.
Although achieving promising progress, these methods rely on specific network designs, which introduce extra parameters and thus complicate the pure task model.
Model-agnostic meta-learning~\cite{finn2017model} is a recently proposed method for fast deep model adaptation, which has been successfully applied to address the domain generalization problem~\cite{balaji2018metareg,dou2019domain,li2018learning}.
The meta-learning strategy is flexible with independence from the base network, as it fully makes use of the gradient descent process. However, existing DG methods mainly tackle image-level classification tasks with natural images,
which are not suitable for the image segmentation task that requires pixel-wise dense predictions.
An outstanding issue remaining to be explored is how to incorporate the shape-based regularization for the segmentation mask during learning, which is a distinctive point for medical image segmentation.
In this regard, we aim to build on the advantages of gradient-based meta-learning, while further integrate shape-relevant characteristics to advance model generalization performance on unseen domains.
We present a novel \textbf{s}hape-\textbf{a}ware \textbf{m}eta-\textbf{l}earning (SAML) scheme for domain generalization on medical image segmentation.
Our method is rooted in the meta-learning episodic training strategy, which promotes robust optimization by simulating the domain shift with meta-train and meta-test sets during model training.
Importantly, to address the specific deficiencies encountered when applying a learned segmentation model to unseen domains (i.e., incomplete shape and ambiguous boundary of the predictions), we further propose two complementary shape-aware loss functions to regularize the meta optimization process.
First, we regularize the \emph{shape compactness} of predictions for meta-test data, enforcing the model to well preserve the complete shape of segmentation masks in unseen domains. Second, we enhance the \emph{shape smoothness} at the boundary under domain shift, for which we design a novel objective to encourage domain-invariant contour embeddings in the latent space. We have extensively evaluated our method on prostate MRI segmentation, using public data acquired from six different institutions with various imaging scanners and protocols. Experimental results validate that our approach outperforms many state-of-the-art methods on the challenging problem of domain generalization, as well as achieving consistent improvements in prostate segmentation performance across all the six settings of unseen domains.
\section{Method}
Let $(\mathcal{X}, \mathcal{Y})$ denote the joint input and label space of a segmentation task, and let $\mathcal{D}=\{\mathcal{D}_1,\mathcal{D}_2,...,\mathcal{D}_K\}$ be the set of $K$ source domains. Each domain $\mathcal{D}_k$ contains image-label pairs $\{(x^{(k)}_n,y^{(k)}_n)\}_{n=1}^{N_k}$ sampled from the domain distribution $(\mathcal{X}_k, \mathcal{Y})$, where $N_k$ is the number of samples in the $k$-th domain. Our goal is to learn a segmentation model $F_\theta:\mathcal{X} \! \rightarrow \! \mathcal{Y}$ using all source domains $\mathcal{D}$ in a way such that it generalizes well to an unseen target domain $\mathcal{D}_{tg}$. Fig.~\ref{fig:overview} gives an overview of our proposed shape-aware meta-learning scheme, which we will detail in this section.
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{figure/method_final.pdf}
\caption{Overview of our shape-aware meta-learning scheme. The source domains are randomly split into meta-train and meta-test to simulate the domain shift (Sec.~\ref{section:meta}). In meta-optimization: (1) we constrain the shape compactness in meta-test to encourage segmentations with complete shape (Sec.~\ref{section:compact}); (2) we promote the intra-class cohesion and inter-class separation between the contour and background embeddings regardless of domains, to enhance domain-invariance for robust boundary delineation (Sec.~\ref{section:smooth}).}
\label{fig:overview}
\end{figure*}
\subsection{Gradient-based Meta-learning Scheme}
\label{section:meta}
The foundation of our learning scheme is the gradient-based meta-learning algorithm~\cite{li2018learning}, which promotes robust optimization by simulating real-world domain shifts in the training process.
Specifically, at each iteration, the source domains $\mathcal{D}$ are randomly split into the meta-train $\mathcal{D}_{tr}$ and meta-test $\mathcal{D}_{te}$ sets of domains. The meta-learning can be divided into two steps.
First, the model parameters $\theta$ are updated on data from the meta-train set $\mathcal{D}_{tr}$, using the Dice segmentation loss $\mathcal{L}_{seg}$:
\begin{equation}
\theta' = \theta - \alpha \nabla_{\theta}\mathcal{L}_{seg}(\mathcal{D}_{tr}; \theta),
\end{equation}
where $\alpha$ is the learning-rate for this inner-loop update.
Second, we apply a meta-learning step, aiming to enforce the learning on meta-train $\mathcal{D}_{tr}$ to further exhibit certain properties that we desire on unseen meta-test $\mathcal{D}_{te}$. Crucially, the meta-objective $\mathcal{L}_{meta}$ to quantify these properties is computed with the updated parameters $\theta'$, but optimized towards the original parameters $\theta$.
Intuitively, besides learning the segmentation task on meta-train $\mathcal{D}_{tr}$, such a training scheme further learns how to generalize at the simulated domain shift across meta-train $\mathcal{D}_{tr}$ and meta-test $\mathcal{D}_{te}$. In other words, the model is optimized such that the parameter updates learned on virtual source domains $\mathcal{D}_{tr}$ also improve the performance on the virtual target domains $\mathcal{D}_{te}$, regarding certain aspects in $\mathcal{L}_{meta}$.
In segmentation problems, we expect the model to well preserve the complete shape (compactness) and smooth boundary (smoothness) of the segmentations in unseen target domains.
To achieve this, apart from the traditional segmentation loss $\mathcal{L}_{seg}$, we further introduce two complementary loss terms into our meta-objective, $\mathcal{L}_{meta} = \mathcal{L}_{seg} + \lambda_1 \mathcal{L}_{compact} + \lambda_2 \mathcal{L}_{smooth}$ ($\lambda_1$ and $\lambda_2$ are the weighting trade-offs), to explicitly impose the shape compactness and shape smoothness of the segmentation maps under domain shift for improving generalization performance.
\subsection{Meta Shape Compactness Constraint}
\label{section:compact}
Traditional segmentation loss functions, \textit{e.g.}, the Dice loss and the cross entropy loss, typically evaluate pixel-wise accuracy, without a global constraint on the segmentation shape. Trained in that way, the model often fails to produce complete segmentations under distribution shift. Previous studies have demonstrated that, for compact objects, constraining the shape compactness~\cite{fan2014multiregion} helps to promote segmentations with complete shape, as an incomplete segmentation with irregular shape often corresponds to a worse compactness property.
Based on the observation that the prostate region generally presents a compact shape, and that such a shape prior is independent of the observed domains,
we propose to explicitly incorporate the compact shape constraint in the meta-objective $\mathcal{L}_{meta}$, for encouraging the segmentations to well preserve the shape completeness under domain shift. Specifically, we adopt the well-established Iso-Perimetric Quotient~\cite{li2013efficient} measurement to quantify the shape compactness, whose definition is $C_{IPQ}={4\pi A}/{P^2}$, where $P$ and $A$ are the perimeter and area of the shape, respectively.
In our case, we define the shape compactness loss as the reciprocal of the $C_{IPQ}$ metric, and expand it in a pixel-wise manner as follows:
\begin{equation}
\mathcal{L}_{compact}=\frac{P^2}{4\pi A}= \frac{\sum_{i \in \Omega} \sqrt{(\nabla p_{u_{i}}) ^2 + (\nabla p_{v_{i}}) ^2 +\epsilon}}{ 4\pi (\sum_{i \in \Omega} { |p_i| + \epsilon})},
\end{equation}
where $p$ is the prediction probability map and $\Omega$ is the set of all pixels in the map; $\nabla p_{u_{i}}$ and $\nabla p_{v_{i}}$ are the probability gradients at each pixel $i$ in the horizontal and vertical directions; $\epsilon$ ($1e^{-6}$ in our model) is a hyperparameter for numerical stability. Overall, the perimeter length $P$ is the sum of the gradient magnitude over all pixels $i\in \Omega$; the area $A$ is calculated as the sum of the absolute values of the map $p$.
Intuitively, minimizing this objective function encourages segmentation maps with complete shape, because an incomplete segmentation with irregular shape often presents a relatively smaller area $A$ and relatively larger length $P$, leading to a higher loss value of $\mathcal{L}_{compact}$. Also note that we only impose $\mathcal{L}_{compact}$ in meta-test $\mathcal{D}_{te}$, as we expect the model to preserve the complete shape on unseen target images, rather than overfit the source data.
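A sketch of this objective, using finite differences for the probability gradients (our own illustration):
\begin{verbatim}
import math
import torch

def compact_loss(p, eps=1e-6):
    # p: (B, 1, H, W) predicted probability maps
    du = p[:, :, 1:, :-1] - p[:, :, :-1, :-1]  # vertical differences
    dv = p[:, :, :-1, 1:] - p[:, :, :-1, :-1]  # horizontal differences
    P = torch.sqrt(du ** 2 + dv ** 2 + eps).sum(dim=(1, 2, 3))  # perimeter
    A = p.abs().sum(dim=(1, 2, 3)) + eps                        # area
    return (P ** 2 / (4 * math.pi * A)).mean()
\end{verbatim}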
\subsection{Meta Shape Smoothness Enhancement}
\label{section:smooth}
In addition to promoting the complete segmentation shape, we further encourage smooth boundary delineation in unseen domains, by regularizing the model to capture domain-invariant contour-relevant and background-relevant embeddings that cluster regardless of domains. This is crucial, given the observation that the performance drop in cross-domain deployment mainly comes from the ambiguous boundary regions.
In this regard, we propose a novel objective $\mathcal{L}_{smooth}$ to enhance the boundary delineation, by explicitly promoting the intra-class cohesion and inter-class separation between the contour-relevant and background-relevant embeddings drawn from each sample across all domains $\mathcal{D}$.
Specifically, given an image $x_m \in \mathbb{R}^{H\times W\times 3}$ and its one-hot label $y_m$, we denote its activation map from layer $l$ as $M^l_m \in \mathbb{R}^{H_l \times W_l \times C_l}$, and we interpolate $M_m^l$ into $T_m^l \in \mathbb{R}^{H \times W \times C_l}$ using bilinear interpolation to keep consistency with the dimensions of $y_m$. To extract the contour-relevant embedding $E_m^{con} \in \mathbb{R}^{C_l}$ and the background-relevant embedding $E_m^{bg} \in \mathbb{R}^{C_l}$, we first obtain the binary contour mask $c_m \in \mathbb{R}^{H \times W \times 1}$ and the binary background mask $b_m \in \mathbb{R}^{H \times W \times 1}$ from $y_m$ using morphological operations. Note that the mask $b_m$ only samples background pixels around the boundary, since we expect to enhance the discriminativeness for pixels around the boundary region. Then, the embeddings $E_m^{con}$ and $E_m^{bg}$ can be extracted from $T_m^l$ by conducting a
weighted average operation over $c_m$ and $b_m$:
\begin{equation}
E_m^{con} = \frac{\sum_{i \in \Omega} (T_m^l)_i \cdot (c_m)_i}{\sum_{i \in \Omega} (c_m)_i}, \quad
E_m^{bg} = \frac{\sum_{i \in \Omega} (T_m^l)_i \cdot (b_m)_i}{\sum_{i \in \Omega} (b_m)_i},
\end{equation}
where $\Omega$ denotes the set of all pixels in $T_m^l$; $E_m^{con}$ and $E_m^{bg}$ are single vectors, representing the contour-relevant and background-relevant representations extracted from the whole image $x_m$.
In our implementation, activations from the last two deconvolutional layers are interpolated and concatenated to obtain the embeddings.
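The extraction itself is a masked average pooling; a sketch for one image and one mask, contour or background (our own illustration):
\begin{verbatim}
def masked_embedding(T, mask):
    # T: (C, H, W) interpolated activations; mask: (H, W) binary c_m or b_m
    w = mask.unsqueeze(0)                                    # (1, H, W)
    return (T * w).sum(dim=(1, 2)) / w.sum().clamp(min=1.0)  # (C,) embedding
\end{verbatim}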
Next, we enhance the domain-invariance of
$E^{con}$ and $E^{bg}$ in the latent space, by encouraging the embeddings' intra-class cohesion and inter-class separation among samples from all source domains $\mathcal{D}$.
Considering that imposing such regularization directly onto the network embeddings might be too strict and impede the convergence of $\mathcal{L}_{seg}$ and $\mathcal{L}_{compact}$,
we adopt contrastive learning~\cite{chen2020simple} to achieve this constraint. Specifically, an embedding network $H_\phi$ is introduced to project the features $E \in \{E^{con}, E^{bg}\}$ to a lower-dimensional space; the distance is then computed on the obtained feature vectors from network $H_\phi$ as
$
d_{\phi}(E_m, E_n) = \|H_\phi(E_m) - H_\phi(E_n)\|_2
\label{equ:distance}
$, where the sample pair $(m, n)$ are randomly drawn from all domains $\mathcal{D}$, as we expect to harmonize the embeddings space of $\mathcal{D}_{te}$ and $\mathcal{D}_{tr}$ to capture domain-invariant representations around the boundary region.
Therefore in our model, the contrastive loss is defined as follows:
\begin{equation}
\ell_{contrastive}(m, n) =
\left\{
\begin{array}{lr}
d_{\phi}(E_{m}, E_{n}), & ~\text{if} ~ \tau(E_m) = \tau(E_n)\\
(\max\{0, \zeta-d_{\phi}(E_{m}, E_{n})\})^2,& ~\text{if} ~ \tau(E_m) \neq \tau(E_n)\\
\end{array}
\right.
\end{equation}
where the function $\tau(E)$ indicates the class (1 for $E$ being $E^{con}$, and 0 for $E^{bg}$), and
$\zeta$ is a pre-defined distance margin following the practice of metric learning (set to 10 in our model).
The final objective $\mathcal{L}_{smooth}$ is computed within mini-batch of $q$ samples. We randomly employ either $E^{con}$ or $E^{bg}$ for each sample, and the $\mathcal{L}_{smooth}$ is the average of $\ell_{contrastive}$ over all pairs of $(m,n)$ embeddings:
\begin{equation}
\mathcal{L}_{smooth} = \sum\nolimits_{m = 1}^q \sum\nolimits_{n = m +1}^q \ell_{contrastive}(m,n)/ C(q,2),
\end{equation}
where $C(q,2)$ is the number of pairs. Overall, all training objectives,
including $\mathcal{L}_{seg}(\mathcal{D}_{tr}; \theta)$ and $\mathcal{L}_{meta}(\mathcal{D}_{tr},\mathcal{D}_{te}; \theta')$, are optimized together with respect to the original parameters $\theta$. The $\mathcal{L}_{smooth}$ term is also optimized with respect to $H_\phi$.
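A sketch of this pairwise objective over a mini-batch of embeddings (our own illustration):
\begin{verbatim}
import torch

def pairwise_smooth_loss(E, labels, H_phi, zeta=10.0):
    # E: (q, C) one E^con or E^bg per sample; labels: (q,) 1=contour, 0=bg
    z = H_phi(E)                              # project to low-dim space
    total, pairs = 0.0, 0
    for m in range(len(z)):
        for n in range(m + 1, len(z)):
            d = torch.norm(z[m] - z[n], p=2)  # d_phi(E_m, E_n)
            if labels[m] == labels[n]:
                total = total + d             # intra-class cohesion
            else:                             # inter-class separation
                total = total + torch.clamp(zeta - d, min=0) ** 2
            pairs += 1
    return total / pairs                      # average over C(q, 2) pairs
\end{verbatim}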
\section{Introduction}
The concept of deformation quantization has been defined by Bayen,
Flato, Fronsdal, Lichnerowicz and Sternheimer in
\cite{bayen.et.al:1978a} based on Gerstenhaber's theory of associative
deformations of algebra \cite{gerstenhaber:1964a}. A formal star
product on a symplectic (or Poisson) manifold $M$ is defined as a
formal associative deformation $\star$ of the algebra of smooth
functions $\Cinfty (M)$ on $M$. The existence as well as the
classification of star products has been studied in many different
settings, e.g in \cite{dewilde.lecomte:1983b, fedosov:1986a,
fedosov:1994a, fedosov:1996a, kontsevich:2003a, nest.tsygan:1995a,
bertelson.cahen.gutt:1997a}, see also the textbooks
\cite{esposito:2015a, waldmann:2007a} for more details on deformation
quantization. Quite parallel to this, Drinfel'd introduced the notion
of quantum groups and initiated the deformation theory of Hopf algebras, see
e.g. the textbooks \cite{kassel:1995a, chari.pressley:1994a,
etingof.schiffmann:1998a} for a detailed discussion.
It turned out that under certain circumstances one can give simple and
fairly explicit formulas for associative deformations of algebras:
whenever a Lie algebra $\lie{g}$ acts on an associative algebra
$\algebra{A}$ by derivations, the choice of a \emph{formal Drinfel'd
twist} $\twist{F} \in (\uea{\lie{g}} \tensor \uea{\lie{g}})[[t]]$
allows to deform $\algebra{A}$ by means of a \emph{universal
deformation formula}
\begin{equation}
\label{eq:TheUDF}
a \star_{\twist{F}} b
=
\mu_{\algebra A} (\twist{F}\acts (a \tensor b))
\end{equation}
for $a, b \in \algebra{A}[[t]]$. Here
$\mu_{\algebra{A}}\colon \algebra{A} \tensor \algebra{A}
\longrightarrow \algebra{A}$
is the algebra multiplication and $\acts$ is the action of $\lie{g}$
extended to the universal enveloping algebra $\uea{\lie{g}}$ and then
to $\uea{\lie{g}} \tensor \uea{\lie{g}}$ acting on
$\algebra{A} \tensor \algebra{A}$. Finally, all operations are
extended $\ring{R}[[t]]$-multilinearly to formal power series. Recall
that a formal Drinfel'd twist \cite{drinfeld:1983a, drinfeld:1988a} is
an invertible element
$\twist{F} \in (\uea{\lie{g}} \tensor \uea{\lie{g}})[[t]]$ satisfying
\begin{gather}
\label{eq:TwistConditionI}
(\Delta \tensor \id)(\twist{F})(\twist{F}\tensor 1)
=
(\id \tensor \Delta)(\twist{F})(1 \tensor \twist{F}),
\\
\label{eq:TwistConditionII}
(\epsilon \tensor 1)\twist{F}
=
1
=
(1\tensor \epsilon)\twist{F} \\
\shortintertext{and}
\label{eq:TwistConditionIII}
\twist{F} = 1 \tensor 1 + \mathcal{O}(t).
\end{gather}
The properties of a twist are now easily seen to guarantee that
\eqref{eq:TheUDF} is indeed an associative deformation.
Yielding the explicit formula for the deformation universally in the
algebra $\algebra{A}$, Drinfel'd twists are considered to be of great
importance in deformation theory in general, and in fact, used at many
different places. We just mention a few recent developments, certainly
not exhaustive: Giaquinto and Zhang studied the relevance of universal
deformation formulas like \eqref{eq:TheUDF} in great detail in the
seminal paper \cite{giaquinto.zhang:1998a}. Bieliavsky and Gayral
\cite{bieliavsky.gayral:2015a} used universal deformation formulas
also in a non-formal setting by replacing the notion of a Drinfel'd
twist with a certain integral kernel. This sophisticated construction
leads to a wealth of new strict deformations having the above formal
deformations as asymptotic expansions. But also beyond pure
mathematics the universal deformation formulas found applications
e.g. in the construction of quantum field theories on noncommutative
spacetimes, see e.g. \cite{aschieri.schenkel:2014a}.
In characteristic zero, there is one fundamental example of a
Drinfel'd twist in the case of an abelian Lie algebra $\lie{g}$. Here
one chooses any bivector $\pi \in \lie{g} \tensor \lie{g}$ and
considers the formal exponential
\begin{equation}
\label{eq:WeylTwist}
\twist{F}_{\textrm{Weyl-Moyal}}
=
\exp(t\pi),
\end{equation}
viewed as element in $(\uea{\lie{g}} \tensor \uea{\lie{g}})[[t]]$. An
easy verification shows that this is indeed a twist. The corresponding
universal deformation formula goes back at least to
\cite[Thm.~8]{gerstenhaber:1968a} under the name of \emph{deformation
by commuting derivations}. In deformation quantization the
corresponding star product is the famous Weyl-Moyal star product if
one takes $\pi$ to be antisymmetric.
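In fact, for abelian $\lie{g}$ the condition \eqref{eq:TwistConditionI} can be checked in one line: writing $\pi_{k\ell}$ for the copy of $\pi$ placed in the $k$-th and $\ell$-th tensor factors of $(\uea{\lie{g}} \tensor \uea{\lie{g}} \tensor \uea{\lie{g}})[[t]]$, the primitivity of the elements of $\lie{g}$ and the fact that all factors commute give
\begin{equation*}
    (\Delta \tensor \id)(\exp(t\pi)) \, (\exp(t\pi) \tensor 1)
    =
    \exp\big(t(\pi_{13} + \pi_{23} + \pi_{12})\big)
    =
    (\id \tensor \Delta)(\exp(t\pi)) \, (1 \tensor \exp(t\pi)),
\end{equation*}
while \eqref{eq:TwistConditionII} holds since $\epsilon$ vanishes on $\lie{g}$, and \eqref{eq:TwistConditionIII} is clear.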
While this is an important example, it is not at all easy to find
explicit formulas for twists in the general non-abelian case. A
starting point is the observation, that the antisymmetric part of the
first order of a twist, $\twist{F}_1 - \mathsf{T}(\twist{F}_1)$, where
$\mathsf{T}$ is the usual flip isomorphism, is first an element in
$\Anti^2\lie{g}$ instead of $\Anti^2 \uea{\lie{g}}$, and, second, a
\emph{classical $r$-matrix}. This raises the question whether one can
go the opposite direction of a quantization: does every classical
$r$-matrix $r \in \Anti^2\lie{g}$ on a Lie algebra $\lie{g}$ arise as
the first order term of a formal Drinfel'd twist? It is now a
celebrated theorem of Drinfel'd \cite[Thm.~6]{drinfeld:1983a} that
this is true.
But even more can be said: given a twist $\twist{F}$ one can construct
a new twist by conjugating with an invertible element
$S \in \uea{\lie{g}}[[t]]$ starting with $S = 1 + \mathcal{O}(t)$ and
satisfying $\epsilon(S) = 1$. More precisely,
\begin{equation}
\label{eq:NewTwist}
\twist{F}' = \Delta(S)^{-1} \twist{F} (S \tensor S)
\end{equation}
turns out to be again a twist. In fact, this defines an equivalence
relation on the set of twists, preserving the semi-classical limit,
i.e. the induced $r$-matrix. In the spirit of Kontsevich's formality
theorem, and in fact building on its techniques, Halbout showed that
the equivalence classes of twists quantizing a given classical
$r$-matrix are in bijection to the equivalence classes of formal
deformations of the $r$-matrix in the sense of $r$-matrices
\cite{halbout:2006a}. In fact, this follows from Halbout's more
profound result on formality for general Lie bialgebras, the
quantization of $r$-matrices into twists is just a special case
thereof. His theorem holds in a purely algebraic setting (in
characteristic zero) but relies heavily on the fairly inexplicit
formality theorems of Kontsevich and Tamarkin \cite{tamarkin:1998b}
which in turn require a rational Drinfel'd associator.
On the other hand, there is a simpler approach to the existence of
twists in the case of real Lie algebras: in seminal work of Drinfel'd
\cite{drinfeld:1983a} he showed that a twist is essentially the same
as a left $G$-invariant star product on a Lie group $G$ with Lie
algebra $\lie{g}$, by identifying the $G$-invariant bidifferential
operators on $G$ with elements in $\uea{\lie{g}} \tensor
\uea{\lie{g}}$. The associativity of the star product gives then
immediately the properties necessary for a twist and vice
versa. Moreover, an $r$-matrix is nothing else as a left $G$-invariant
Poisson structure, see \cite[Thm.~1]{drinfeld:1983a}. In this paper,
Drinfel'd also gives an existence proof of such $G$-invariant star
products and therefore of twists, see
\cite[Thm.~6]{drinfeld:1983a}. His argument uses the canonical star
product on the dual of a central extension of the Lie algebra by the
cocycle defined by the (inverse of the) $r$-matrix, suitably pulled
back to the Lie group, see also
Remark~\ref{remark:DrinfeldConstruction} for further details.
The equivalence of twists translates into the usual $G$-invariant
equivalence of star products as discussed in
\cite{bertelson.bieliavsky.gutt:1998a}. Hence one can use the
existence (and classification) theorems for invariant star products to
yield the corresponding theorems for twists
\cite{bieliavsky:2016a}. This is also the point of view taken by
Dolgushev et al. in \cite{dolgushev.isaev.lyakhovich.sharapov:2002a},
where the star product is constructed in a way inspired by Fedosov's
construction of star products on symplectic manifolds.
A significant simplification concerning the existence comes from the
observation that for every $r$-matrix $r \in \Anti^2 \lie{g}$ there is
a Lie subalgebra of $\lie{g}$, namely
\begin{equation}
\label{eq:ESSubalgebra}
\lie{g}_r
=
\left\{
(\alpha \tensor \id)(r)
\; \big| \;
\alpha \in \lie{g}^*
\right\},
\end{equation}
such that $r \in \Anti^2 \lie{g}_r$ and $r$ becomes
\emph{non-degenerate} as an $r$-matrix on this Lie subalgebra
\cite[Prop.~3.2-3.3]{etingof.schiffmann:1998a}. Thus it will always be
sufficient to consider non-degenerate classical $r$-matrices when
interested in the existence of twists. For the classification this is
of course not true since a possibly degenerate $r$-matrix might be
deformed into a non-degenerate one only in higher orders: here one
needs Halbout's results for possibly degenerate $r$-matrices. However,
starting with a non-degenerate $r$-matrix, one will have a much
simpler classification scheme as well.
The aim of this paper is now twofold: On the one hand, we want to give
a direct construction to obtain the universal deformation formulas for
algebras acted upon by a Lie algebra with non-degenerate
$r$-matrix. This will be obtained in a purely algebraic fashion for
sufficiently nice Lie algebras and algebras over a commutative ring
$\ring{R}$ containing the rationals. Our approach is based on a
certain adaption of the Fedosov construction of symplectic star
products, which is in some sense closer to the original Fedosov
construction compared to the approach of
\cite{dolgushev.isaev.lyakhovich.sharapov:2002a} but yet completely
algebraic. More precisely, the construction will not involve a twist
at all but just the classical $r$-matrix. Moreover, it will be
important to note that we can allow for a non-trivial symmetric part
of the $r$-matrix, provided a certain technical condition on it is
satisfied. This will produce deformations with more specific features:
as in usual deformation quantization one is not only interested in the
Weyl-Moyal like star products, but certain geometric circumstances
require more particular star products like Wick-type star products on
Kähler manifolds \cite{karabegov:2013a, karabegov:1996a,
bordemann.waldmann:1997a} or standard-ordered star products on
cotangent bundles \cite{bordemann.neumaier.waldmann:1998a,
bordemann.neumaier.pflaum.waldmann:2003a}.
On the other hand, we give an alternative construction of Drinfel'd
twists, again in the purely algebraic setting, based on the above
correspondence to star products but avoiding the techniques from
differential geometry completely in order to be able to work over a
general field of characteristic zero. We also obtain a classification
of the above restricted situation where the $r$-matrix is
non-degenerate.
In fact, both questions turn out to be intimately linked since
applying our universal deformation formula to the tensor algebra of
$\uea{\lie{g}}$ will yield a deformation of the tensor product which
easily allows to construct the twist. This is in so far remarkable
that the tensor algebra is of course rigid, the deformation is
equivalent to the undeformed tensor product, but the deformation is
not the identity, allowing therefore to consider nontrivial products
of elements in $\Tensor^\bullet(\uea{\lie{g}})$.
We show that the universal deformation formula we construct in fact
coincides with \eqref{eq:TheUDF} for the twist we construct. However,
it is important to note that the detour via the twist is not needed to
obtain the universal deformation of an associative algebra.
Finally, we add the notion of positivity: this seems to be new in the
whole discussion of Drinfel'd twists and universal deformation
formulas so far. To this end we consider now an ordered ring
$\ring{R}$ containing $\mathbb{Q}$ and its complex version $\ring{C} =
\ring{R}(\I)$ with $\I^2 = -1$, and $^*$-algebras over $\ring{C}$ with
a $^*$-action of the Lie algebra $\lie{g}$, which is assumed to be a
Lie algebra over $\ring{R}$ admitting a Kähler structure. Together
with the non-degenerate $r$-matrix we can define a Wick-type universal
deformation which we show to be \emph{strongly positive}: every
undeformed positive linear functional stays positive also for the
deformation. Applied to the twist we conclude that the Wick-type twist
is a convex series of positive elements.
The paper is organized as follows. In Section~\ref{sec:FedosovSetUp}
we explain the elements of the (much more general) Fedosov
construction which we will
need. Section~\ref{sec:UniversalDeformationFormula} contains the
construction of the universal deformation formula. Here not only the
deformation formula will be universal for all algebras $\algebra{A}$
but also the construction itself will be universal for all Lie
algebras $\lie{g}$. In
Section~\ref{sec:UniversalDeformationFormulaTwist} we construct the
Drinfel'd twist while Section~\ref{sec:Classification} contains the
classification in the non-degenerate case. Finally,
Section~\ref{sec:HermitianCPDeformations} discusses the positivity of
the Wick-type universal deformation formula. In two appendices we
collect some more technical arguments and proofs. The results of this
paper are partially based on the master thesis \cite{schnitzer:2016a}.
For symplectic manifolds with suitable polarizations one can define
various types of star products with separation of variables
\cite{karabegov:1996a, karabegov:2013a, bordemann.waldmann:1997a,
bordemann.neumaier.waldmann:1999a, donin:2003a,
bordemann.neumaier.waldmann:1998a,
bordemann.neumaier.pflaum.waldmann:2003a} which have specific
properties adapted to the polarization. The general way to construct
(and classify) them is to modify the Fedosov construction by adding
suitable symmetric terms to the fiberwise symplectic Poisson
tensor. We have outlined that this can be done for twists as well in
the Kähler case, but there remain many interesting situations. In
particular a more cotangent-bundle like polarization might be
useful. We plan to come back to these questions in a future project.
\noindent
\textbf{Acknowledgements:} We would like to thank Pierre Bieliavsky,
Kenny De Commer, Alexander Karabegov, and Thorsten Reichert for the
discussions and useful suggestions. Moreover, we would like to thank
the referee for many useful comments and remarks.
\section{The Fedosov Set-Up}
\label{sec:FedosovSetUp}
In the following we present the Fedosov approach in the particular
case of a Lie algebra $\lie{g}$ with a non-degenerate $r$-matrix $r$.
We follow the presentation of the Fedosov approach given in
\cite{waldmann:2007a}, but replace the differential geometric concepts
by their algebraic versions in order to be able to treat not only the
real case.
The setting for this work will be to assume that $\lie{g}$ is a Lie
algebra over a commutative ring $\ring{R}$ containing the rationals
$\mathbb{Q} \subseteq \ring{R}$ such that $\lie{g}$ is a
finite-dimensional free module.
We denote by $\{e_1, \ldots, e_n\}$ a basis of $\lie{g}$ and by
$\{e^1, \ldots, e^n\}$ its dual basis of $\lie{g}^*$. We also assume
the $r$-matrix $r \in \Anti^2\lie{g}$ to be non-degenerate in the
strong sense from the beginning, since, at least in the case of
$\ring{R}$ being a field, we can replace $\lie{g}$ by $\lie{g}_r$ from
\eqref{eq:ESSubalgebra} if necessary. Hence $r$ induces the
\emph{musical isomorphism}
\begin{equation}
\label{eq:MusicalIso}
\sharp\colon
\lie{g}^* \longrightarrow \lie{g}
\end{equation}
by pairing with $r$, the inverse of which we denote by $\flat$ as
usual. Then the defining property of an $r$-matrix is
$\Schouten{r, r} = 0$, where $\Schouten{\argument, \argument}$ is the
unique extension of the Lie bracket to $\Anti^\bullet \lie{g}$ turning
the Grassmann algebra into a Gerstenhaber algebra. Since we assume $r$
to be (strongly) non-degenerate, we have the inverse
$\omega \in \Anti^2 \lie{g}^*$ of $r$ and $\Schouten{r, r} = 0$
becomes equivalent to the linear condition $\delta_\CE\omega = 0$,
where $\delta_\CE$ is the usual Chevalley-Eilenberg
differential. Moreover, the musical isomorphisms intertwine
$\delta_\CE$ on $\Anti^\bullet \lie{g}^*$ with the differential
$\Schouten{r, \argument}$ on $\Anti^\bullet \lie{g}$. We refer to
$\omega$ as the induced symplectic form.
\begin{remark}
\label{remark:WhyRings}%
For the Lie algebra $\lie{g}$ there seems to be little gain in
allowing a ring $\ring{R}$ instead of a field $\mathbb{k}$ of
characteristic zero, as we have to require $\lie{g}$ to be a free
module and \eqref{eq:MusicalIso} to be an isomorphism. However,
for the algebras which we would like to deform there will be no
such restrictions later on. Hence allowing for algebras over rings
in the beginning seems to be the cleaner way to do it, since after
the deformation we will arrive at an algebra over a ring, namely
$\ring{R}[[t]]$ anyway.
\end{remark}
\begin{definition}[Formal Weyl algebra]
The algebra $\left(\prod_{k=0}^\infty \Sym^k\lie{g}^* \tensor
\Anti^\bullet \lie{g}^*\right)[[t]]$ is called the formal Weyl
algebra where the product $\mu$ is defined by
\begin{equation}
\label{eq:UndeformedProduct}
(f \tensor \alpha) \cdot (g \tensor \beta)
=
\mu(f \tensor \alpha, g \tensor \beta)
=
f \vee g \tensor \alpha \wedge \beta,
\end{equation}
for any factorizing tensors $f \tensor \alpha, g \tensor \beta \in
\mathcal{W} \tensor \Anti^\bullet$ and extended
$\ring{R}[[t]]$-bilinearly. We write $\mathcal{W} =
\prod_{k=0}^\infty \Sym^k \lie{g}^* [[t]]$ and $\Anti^\bullet =
\Anti^\bullet \lie{g}^*[[t]]$.
\end{definition}
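As a direct consequence of the definition, the undeformed product
combines the symmetric and the antisymmetric parts separately; for
instance,
\begin{equation*}
    (e^i \tensor e^j) \cdot (e^k \tensor e^\ell)
    =
    e^i \vee e^k \tensor e^j \wedge e^\ell.
\end{equation*}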
Since $\lie{g}$ is assumed to be finite-dimensional we have
\begin{equation}
\label{eq:WtensorLambdaIsWtensorLambda}
    \mathcal{W} \tensor \Anti^\bullet
=
\left(
\prod_{k=0}^\infty \Sym^k\lie{g}^* \tensor
\Anti^\bullet \lie{g}^*
\right)[[t]].
\end{equation}
Since we will deform this product $\mu$ we shall refer to $\mu$ also
as the \emph{undeformed product} of
$\mathcal{W} \tensor \Anti^\bullet$. It is clear that $\mu$ is
associative and graded commutative with respect to the antisymmetric
degree. In order to handle this and various other degrees, it is
useful to introduce the following degree maps
\begin{equation}
\label{eq:DegreeMaps}
\degs, \dega, \deg_t\colon
\mathcal{W} \tensor \Anti^\bullet
\longrightarrow
\mathcal{W} \tensor \Anti^\bullet,
\end{equation}
defined by the conditions
\begin{equation}
\label{eq:DegsDega}
\degs(f \tensor \alpha)
=
k f \tensor \alpha
\quad
\textrm{and}
\quad
\dega(f \tensor\alpha)
=
\ell f \tensor \alpha
\end{equation}
for $f \in \Sym^k\lie{g}^*$ and $\alpha \in \Anti^\ell \lie{g}^*$. We
extend these maps to formal power series by $\ring{R}[[t]]$-linearity.
Then we can define the degree map $\deg_t$ by
\begin{equation}
\label{eq:Degt}
\deg_t
=
t\frac{\partial}{\partial t},
\end{equation}
which is, however, not $\ring{R}[[t]]$-linear. Finally, the
\emph{total degree} is defined by
\begin{equation}
\label{eq:TotalDegree}
\Deg
=
\degs + 2\deg_t.
\end{equation}
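To illustrate the degree maps, consider the element
$a = t\, e^i \tensor e^j$ with $e^i \in \Sym^1\lie{g}^*$ and
$e^j \in \Anti^1\lie{g}^*$. Then
\begin{equation*}
    \degs a = a,
    \quad
    \dega a = a,
    \quad
    \deg_t a = a,
    \quad
    \textrm{and}
    \quad
    \Deg a = 3a.
\end{equation*}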
It will be important that all these maps are derivations of the
undeformed product $\mu$ of $\mathcal{W} \tensor \Anti^\bullet$. We
denote by
\begin{equation}
\label{eq:FiltrationDeg}
\mathcal{W}_k \tensor \Anti^\bullet
=
\bigcup_{r \ge k}
\left\{
a \in \mathcal{W} \tensor \Anti^\bullet
\; \big| \;
\Deg a = r a
\right\}
\end{equation}
the subspace of elements whose total degree is greater than or equal
to $k$. This endows $\mathcal{W} \tensor \Anti^\bullet$ with a complete
filtration, a fact which we shall frequently use in the
sequel. Moreover, the filtration is compatible with the undeformed
product \eqref{eq:UndeformedProduct} in the sense that
\begin{equation}
\label{eq:FiltrationProduct}
ab \in
\mathcal{W}_{k + \ell} \tensor \Anti^\bullet
\quad
\textrm{for}
\quad
a \in \mathcal{W}_k \tensor \Anti^\bullet
\textrm{ and }
b \in \mathcal{W}_\ell \tensor \Anti^\bullet.
\end{equation}
Following the construction of Fedosov we define the operators $\delta$
and $\delta^*$ by
\begin{equation}
\label{eq:DeltaDeltaStar}
\delta
=
e^i \wedge \inss(e_i)
\quad
\textrm{and}
\quad
\delta^*
=
e^i \vee \insa(e_i),
\end{equation}
where $\inss$ and $\insa$ are the symmetric and antisymmetric
insertion derivations. Both maps are graded derivations of $\mu$ with
respect to the antisymmetric degree: $\delta$ lowers the symmetric
degree by one and raises the antisymmetric degree by one, for
$\delta^*$ it is the other way round. For homogeneous elements
$a \in \Sym^k\lie{g}^* \tensor \Anti^\ell \lie{g}^*$ we define by
\begin{equation}
\label{eq:deltaInvDef}
\delta^{-1}(a)
=
\begin{cases}
0,
& \textrm{if } k + \ell = 0 \\
\frac{1}{k + \ell}\delta^*(a)
& \textrm{else,}
\end{cases}
\end{equation}
and extend this $\ring{R}[[t]]$-linearly. Notice that this map is not
the inverse of $\delta$; instead, we have the following properties:
\begin{lemma}
\label{lem:Poincare}%
For $\delta$, $\delta^*$ and $\delta^{-1}$ defined above, we have
$\delta^2 = (\delta^*)^2 = (\delta^{-1})^2 = 0$ and
    \begin{equation}
        \label{eq:Poincare}
        \delta\delta^{-1} + \delta^{-1}\delta + \sigma
        =
        \id,
    \end{equation}
where $\sigma$ is the projection on the symmetric and
antisymmetric degree zero.
\end{lemma}
In fact, this can be seen as the polynomial version of the Poincaré
lemma: $\delta$ corresponds to the exterior derivative and
$\delta^{-1}$ to the standard homotopy.
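As a quick consistency check of Lemma~\ref{lem:Poincare}, take
$a = e^j \tensor 1 \in \Sym^1\lie{g}^* \tensor \Anti^0\lie{g}^*$. Then
$\delta a = 1 \tensor e^j$, $\delta^{-1} a = 0$, and $\sigma(a) = 0$,
while
\begin{equation*}
    \delta^{-1}\delta a
    =
    \delta^{-1}(1 \tensor e^j)
    =
    e^j \tensor 1
    =
    a,
\end{equation*}
so indeed
$\delta\delta^{-1} a + \delta^{-1}\delta a + \sigma(a) = a$.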
The next step consists in deforming the product $\mu$ into a
noncommutative one: we define the \emph{star product}
$\circs$ for $a, b \in \mathcal{W} \tensor \Anti^\bullet$ by
\begin{equation}
\label{eq:WeylMoyalStarProduct}
a \circs b
=
\mu \circ \E^{\frac{t}{2}\mathcal{P}}(a\tensor b),
\quad
\textrm{where}
\quad
\mathcal{P} = \pi^{ij} \inss(e_i) \tensor \inss(e_j),
\end{equation}
for $\pi^{ij} = r^{ij} + s^{ij}$, where $r^{ij}$ are the coefficients
of the $r$-matrix and $s^{ij} = s(e^i, e^j) \in \ring{R}$ are the
coefficients of a \emph{symmetric} bivector $s \in \Sym^2 \lie{g}$.
When taking $s = 0$ we denote $\circs$ simply by $\circweyl$.
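On generators, the deformation can easily be made explicit: since each
symmetric insertion lowers the symmetric degree by one, the
exponential series in \eqref{eq:WeylMoyalStarProduct} terminates, and
we get
\begin{equation*}
    (e^i \tensor 1) \circs (e^j \tensor 1)
    =
    e^i \vee e^j \tensor 1
    +
    \frac{t}{2} \pi^{ij}\, 1 \tensor 1.
\end{equation*}
In particular,
$(e^i \tensor 1) \circs (e^j \tensor 1)
- (e^j \tensor 1) \circs (e^i \tensor 1)
= t\, r^{ij}\, 1 \tensor 1$,
the symmetric part $s$ dropping out of the commutator.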
\begin{proposition}
\label{prop:StarProductAss}%
The star product $\circs$ is an associative
$\ring{R}[[t]]$-bilinear product on
$\mathcal{W} \tensor \Anti^\bullet$ deforming $\mu$ in zeroth
order of $t$. Moreover, the maps $\delta$, $\dega$, and $\Deg$
are graded derivations of $\circs$ of antisymmetric degree $+1$
for $\delta$ and $0$ for $\dega$ and $\Deg$, respectively.
\end{proposition}
\begin{proof}
The associativity follows from the fact that the insertion
derivations are commuting, see
    \cite[Thm.~8]{gerstenhaber:1968a}. The statements about $\delta$,
    $\dega$, and $\Deg$ follow by immediate verification.
\end{proof}
Next, we will need the graded commutator with respect to the
antisymmetric degree, denoted by
\begin{equation}
\label{eq:GradedCommutator}
\ad(a)(b)
=
[a, b]
=
a \circs b - (-1)^{k\ell} b \circs a,
\end{equation}
for any $a \in \mathcal{W} \tensor \Anti^k$ and $b \in
\mathcal{W}\tensor \Anti^\ell$ and extended
$\ring{R}[[t]]$-bilinearly as usual. Since $\circs$ deforms the
graded commutative product $\mu$, all graded commutators $[a, b]$
vanish in zeroth order of $t$. This allows us to define the graded
derivations $\frac{1}{t} \ad(a)$ of $\circs$.
\begin{lemma}
\label{lem:Center}%
An element $a \in \mathcal{W} \tensor \Anti^\bullet$ is central,
that is $\ad(a) = 0$, if and only if $\degs(a) = 0$.
\end{lemma}
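For example, every element of the form $1 \tensor \alpha$ with
$\alpha \in \Anti^\bullet$ is central: the symmetric insertions in
$\mathcal{P}$ annihilate it, so for $\alpha \in \Anti^k\lie{g}^*$ and
$f \tensor \beta \in \mathcal{W} \tensor \Anti^\ell$ we simply have
\begin{equation*}
    (1 \tensor \alpha) \circs (f \tensor \beta)
    =
    f \tensor \alpha \wedge \beta
    =
    (-1)^{k\ell}\, (f \tensor \beta) \circs (1 \tensor \alpha).
\end{equation*}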
By definition, a \emph{covariant derivative} is an arbitrary bilinear
map
\begin{equation}
\label{eq:CovariantDerivative}
\nabla\colon
\lie{g} \times \lie{g} \ni (X, Y)
\; \mapsto \;
\nabla_X Y \in \lie{g}.
\end{equation}
The idea is that, in the geometric interpretation, an invariant
covariant derivative is uniquely determined by its values on the left
invariant vector fields; hence it should take values again in
$\lie{g}$. An arbitrary covariant derivative is called
\emph{torsion-free} if
\begin{equation}
\label{eq:TorsionFree}
\nabla_X Y - \nabla_Y X - [X, Y] = 0
\end{equation}
for all $X, Y \in \lie{g}$. Having a covariant derivative, we can
extend it to the tensor algebra over $\lie{g}$ by requiring the maps
\begin{equation}
\label{eq:nablaXTensors}
\nabla_X\colon
\Tensor^\bullet \lie{g} \longrightarrow \Tensor^\bullet \lie{g}
\end{equation}
to be derivations for all $X \in \lie{g}$. We also extend $\nabla_X$
to elements in the dual by
\begin{equation}
\label{eq:nablaXDual}
(\nabla_X \alpha)(Y) = - \alpha(\nabla_X Y)
\end{equation}
for all $X, Y \in \lie{g}$ and $\alpha \in \lie{g}^*$. Finally, we can
extend $\nabla_X$ to $\Tensor^\bullet \lie{g}^*$ as a derivation,
too. Acting on symmetric or antisymmetric tensors, $\nabla_X$
preserves the symmetry type and yields a derivation of the $\vee$- and
$\wedge$-products, respectively. The fact that we extended $\nabla$ as
a derivation in a way which is compatible with natural pairings will
lead to relations like
\begin{equation}
\label{eq:CommutatorNablaInss}
[\nabla_X, \inss(Y)] = \inss(\nabla_X Y)
\end{equation}
for all $X, Y \in \lie{g}$ as one can easily check on generators.
Sometimes it will be advantageous to use the basis of $\lie{g}$ for
computations. With respect to the basis we define
the \emph{Christoffel symbols}
\begin{equation}
\label{eq:ChristoffelSymbols}
\Gamma_{ij}^k = e^k(\nabla_{e_i} e_j)
\end{equation}
of a covariant derivative, where $i, j, k = 1, \ldots, n$. Clearly,
$\nabla$ is uniquely determined by its Christoffel symbols. Moreover,
$\nabla$ is torsion-free iff
\begin{equation}
\label{eq:ChristoffelTorsion}
\Gamma^k_{ij} - \Gamma^k_{ji} = C_{ij}^k
\end{equation}
with the usual structure constants
$C_{ij}^k = e^k([e_i, e_j]) \in \ring{R}$ of the Lie algebra
$\lie{g}$.
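A simple example is the \emph{half-commutator} covariant derivative
$\nabla_X Y = \frac{1}{2}[X, Y]$, which will reappear in the proof of
Proposition~\ref{prop:HessTrick} below: its Christoffel symbols are
$\Gamma^k_{ij} = \frac{1}{2} C^k_{ij}$, and by the antisymmetry of the
structure constants
\begin{equation*}
    \Gamma^k_{ij} - \Gamma^k_{ji}
    =
    \tfrac{1}{2} C^k_{ij} - \tfrac{1}{2} C^k_{ji}
    =
    C^k_{ij},
\end{equation*}
so \eqref{eq:ChristoffelTorsion} is satisfied and this covariant
derivative is torsion-free.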
As in symplectic geometry, the Hess trick \cite{hess:1981a} shows the
existence of a \emph{symplectic} torsion-free covariant derivative:
\begin{proposition}[Hess trick]
\label{prop:HessTrick}%
Let $(\lie{g}, r)$ be a Lie algebra with non-degenerate $r$-matrix
$r$ and inverse $\omega$. Then there exists a torsion-free
covariant derivative $\nabla$ such that for all $X \in \lie{g}$ we
have
\begin{equation}
\label{eq:HessTrick}
\nabla_X \omega = 0
\quad
\textrm{and}
\quad
\nabla_X r = 0.
\end{equation}
\end{proposition}
\begin{proof}
The idea is to start with the half-commutator connection as in the
geometric case and make it symplectic by means of the Hess trick.
The covariant derivative
\begin{equation*}
\tilde\nabla
\colon
\lie{g} \times \lie{g} \ni (X, Y)
\; \mapsto \;
\frac{1}{2}[X, Y] \in \lie{g}
\end{equation*}
is clearly torsion-free. Since $\omega$ is non-degenerate, we can
determine a map $\nabla_X$ uniquely by
\begin{equation}
\label{eq:HessConnection}
\omega(\nabla_X Y,Z)
=
\omega(\tilde\nabla_X Y,Z)
+
\frac{1}{3}(\tilde{\nabla}_X\omega)(Y,Z)
+
\frac{1}{3}(\tilde{\nabla}_Y\omega)(X,Z).
\end{equation}
    It is then an immediate computation, using the closedness
    $\delta_\CE\omega = 0$ of $\omega$, to check that this map
    satisfies all requirements.
\end{proof}
The curvature $\tilde{R}$ corresponding to $\nabla$ is defined by
\begin{equation}
\label{eq:CurvatureTilde}
\tilde{R}\colon
\lie{g} \times \lie{g} \times \lie{g} \ni (X, Y, Z)
\; \mapsto \;
\tilde{R}(X, Y)Z
=
\nabla_X \nabla_Y Z - \nabla_Y\nabla_X Z - \nabla_{[X,Y]}Z
    \in \lie{g}.
\end{equation}
For a symplectic covariant derivative, we contract $\tilde{R}$ with the
symplectic form $\omega$ and get
\begin{equation}
\label{eq:Curvature}
R\colon
\lie{g} \times \lie{g} \times \lie{g} \times \lie{g}
\ni (Z, U, X, Y)
\; \mapsto \;
\omega(Z, \tilde{R}(X, Y)U)
\in \ring{R},
\end{equation}
which is symmetric in the first two components and antisymmetric in
the last ones: this follows at once from $\nabla$ being torsion-free
and symplectic. In other words, $R\in \Sym^2(\lie{g}^*) \tensor
\Anti^2\lie{g}^*$ becomes an element of the formal Weyl algebra
satisfying
\begin{equation}
\label{eq:DegreesCurvature}
\degs R = 2 R = \Deg R,
\quad
\dega R = 2 R,
\quad
\textrm{and}
\quad
\deg_t R = 0.
\end{equation}
In the following, we will fix a symplectic torsion-free covariant
derivative, the existence of which is granted by
Proposition~\ref{prop:HessTrick}. Since $\nabla_X$ acts on all types
of tensors already, we can use $\nabla$ to define the following
derivation $D$ on the formal Weyl algebra
\begin{equation}
\label{eq:ConnectionWeylAlgebra}
D\colon
\mathcal{W} \tensor \Anti^\bullet \ni (f \tensor \alpha)
\; \mapsto \;
\nabla_{e_i} f\tensor e^i \wedge \alpha
+
f \tensor e^i\wedge\nabla_{e_i} \alpha
\in
\mathcal{W} \tensor \Anti^{\bullet+1}.
\end{equation}
Notice that we do not use the explicit expression of $\nabla$ given in
\eqref{eq:HessConnection}. In fact, any other symplectic torsion-free
covariant derivative will do the job as well.
For every torsion-free covariant derivative $\nabla$ it is easy to
check that
\begin{equation}
\label{eq:TorsionFreeCEDifferential}
e^i \wedge \nabla_{e_i}\alpha = \delta_\CE \alpha
\end{equation}
holds for all $\alpha \in \Anti^\bullet \lie{g}^*$: indeed, both sides
define graded derivations of antisymmetric degree $+1$ and coincide on
generators in $\lie{g}^* \subseteq \Anti^\bullet\lie{g}^*$. Therefore,
we can rewrite $D$ as
\begin{equation}
\label{eq:ExtendedDerivative}
D (f \tensor \alpha)
=
\nabla_{e_i}f \tensor e^i \wedge \alpha
+
f \tensor \delta_\CE \alpha.
\end{equation}
From now on, unless stated otherwise, $[\cdot, \cdot]$ denotes the
graded commutator with respect to the antisymmetric degree.
\begin{proposition}
\label{proposition:OperatorProperties}%
Let $\nabla$ be a symplectic torsion-free covariant derivative.
If in addition $s$ is covariantly constant, i.e. if
$\nabla_X s = 0$ for all $X \in \lie{g}$, the map
$D\colon \mathcal{W} \tensor \Anti^\bullet \longrightarrow
\mathcal{W} \tensor \Anti^{\bullet+1}$
is a graded derivation of antisymmetric degree $+1$ of the star
product $\circs$, i.e.
\begin{equation}
\label{eq:CovariantDerivativeWeyl}
D (a \circs b)
=
D(a) \circs b + (-1)^k a \circs D(b)
\end{equation}
for $a \in \mathcal{W} \tensor \Anti^k$ and $b \in
\mathcal{W}\tensor \Anti^\bullet$. In addition, we have
\begin{equation}
\label{eq:BianchiId}
\delta R = 0,
\quad
DR = 0,
\quad
[\delta, D]
=
\delta D + D \delta
=
0,
\quad
\textrm{and}
\quad
D^2
=
\tfrac{1}{2}[D, D]
=
\tfrac{1}{t}\ad(R).
\end{equation}
\end{proposition}
\begin{proof}
For the operator $\mathcal{P}$ from
\eqref{eq:WeylMoyalStarProduct} we have
\begin{align*}
&(\id \tensor \nabla_{e_k} + \nabla_{e_k}\tensor\id)
\mathcal P(a\tensor b) \\
&\quad=
\pi^{ij}\inss(e_i)a\tensor\nabla_{e_k}\inss(e_j)b
+
\pi^{ij}\nabla_{e_k}\inss(e_i)a\tensor\inss(e_j)b
\\
&\quad\ot{(a)}{=}
(\pi^{\ell j}\Gamma_{k\ell}^i
+
\pi^{i\ell}\Gamma^j_{k\ell})\inss(e_i)a\tensor\inss(e_j)b
+
\mathcal P(\id\tensor \nabla_{e_k}
+
\nabla_{e_k}\tensor\id)(a\tensor b)
\\
&\quad=
\mathcal P(\id\tensor \nabla_{e_k}
+
\nabla_{e_k}\tensor\id)(a\tensor b)
\end{align*}
for $a, b \in \mathcal{W}\tensor \Anti^\bullet$. Here we used the
relation $[\nabla_X, \inss(Y)] = \inss(\nabla_X Y)$ as well as the
definition of the Christoffel symbols in ($a$). In the last step
we used $\pi^{\ell j}\Gamma_{k\ell}^i + \pi^{i\ell}\Gamma^j_{k\ell} =
0$ which follows from $\nabla (r+s) = 0$. Therefore we have
\begin{align*}
\nabla_{e_i}
\circ
\mu \circ \E^{\frac{t}{2}\mathcal P}
=
\mu
\circ
(\id\tensor \nabla_{e_i} + \nabla_{e_i}\tensor\id)
\circ
\E^{\frac{t}{2}\mathcal P}
=
\mu
\circ
\E^{\frac{t}{2}\mathcal P}
\circ
(\id \tensor \nabla_{e_i} + \nabla_{e_i} \tensor \id).
\end{align*}
By $\wedge$-multiplying by the corresponding $e^i$'s it follows
that $D$ is a graded derivation of antisymmetric degree $+1$. Let
$f \tensor \alpha \in \mathcal{W} \tensor \Anti^\bullet$. Just
using the definition of $\delta$, \eqref{eq:ExtendedDerivative}
and the fact that $\nabla$ is torsion-free we get
\begin{align*}
\delta D(f\tensor\alpha)
&=
\delta(\nabla_{e_k}f\tensor e^k\wedge\alpha
+
f\tensor\delta_\CE \alpha)
\\
&
=
-D\delta(f \tensor \alpha)
+
\tfrac{1}{2}
(\Gamma^\ell_{ik} - \Gamma^\ell_{ki} - C^\ell_{ik})
\inss(e_\ell) f \tensor e^i \wedge e^k \wedge \alpha
\\
&=
- D\delta(f \tensor \alpha).
\end{align*}
Using a similar computation in coordinates, we get $D^2 =
\frac{1}{2}[D,D] = \frac{1}{t} \ad(R)$. Finally, from the Jacobi
identity of the graded commutator we get $\frac{1}{2t} \ad(\delta
R) = [\delta, [D, D]] = 0$. Hence $\delta R$ is central. Since
$\delta R$ has symmetric degree $+1$, this can only happen if
$\delta R = 0$. With the same argument, $0 = [D, [D, D]]$ yields
that $DR$ is central, which again gives $DR = 0$ by counting
degrees.
\end{proof}
\begin{remark}
\label{remark:WeylOrOtherOrdering}%
In principle, we will mainly be interested in the case $s = 0$ in
the following. However, if the Lie algebra allows for a
covariantly constant $s$ it might be interesting to incorporate
this into the universal construction: already in the abelian case
this leads to the freedom of choosing a different ordering than
the Weyl ordering (total symmetrization). Here in particular the
    Wick ordering is of significance due to its better positivity
properties, see \cite{bursztyn.waldmann:2000a} for a universal
deformation formula in this context.
\end{remark}
The core of Fedosov's construction is now to turn $-\delta + D$ into a
differential: due to the curvature $R$, the derivation $-\delta + D$
does not square to zero directly. Nevertheless, from the above
discussion we know that its square is an inner derivation. Hence the
idea is to compensate this defect by an inner derivation, leading to
the following statement:
\begin{proposition}
\label{proposition:FedosovDerivation}%
Let $\Omega \in t\Anti^2 \lie g^*[[t]]$ be a series of
$\delta_\CE$-closed two-forms. Then there is a unique
$\varrho \in \mathcal{W}_2\tensor\Anti^1$, such that
\begin{align}
\label{eq:x-property2}
\delta \varrho
=
R + D\varrho + \tfrac{1}{t} \varrho \circs \varrho
+
\Omega
\end{align}
and
\begin{align}
\label{eq:x-property1}
\delta^{-1} \varrho
=
0.
\end{align}
Moreover, the derivation
$\FedosovD = -\delta + D + \frac{1}{t}\ad(\varrho)$ satisfies
$\FedosovD^2 = 0$.
\end{proposition}
\begin{proof}
    Let us first assume that \eqref{eq:x-property2} is satisfied and
    apply $\delta^{-1}$ to it. This yields
    \begin{align*}
        \delta^{-1}\delta \varrho
        =
        \delta^{-1}\left(
        R + D\varrho + \tfrac{1}{t} \varrho \circs \varrho + \Omega
        \right).
    \end{align*}
    From the Poincaré Lemma as in \eqref{eq:Poincare}, together with
    \eqref{eq:x-property1} and $\sigma(\varrho) = 0$, we have
\begin{align}
\label{eq:x-necessaryprop}
\varrho
=
\delta^{-1}
\left(
R
+ D\varrho + \tfrac{1}{t} \varrho \circs \varrho
+ \Omega
\right).
\end{align}
Let us define the operator $B \colon \mathcal{W} \tensor \Anti^1
\longrightarrow \mathcal{W} \tensor \Anti^1$ by
\begin{align*}
B(a)
=
\delta^{-1}(R + Da + \tfrac{1}{t}a\circs a + \Omega).
\end{align*}
    Thus the solutions of \eqref{eq:x-necessaryprop} coincide with the
    fixed points of the operator $B$. Now we want to show that $B$
has indeed a unique fixed point. By a careful but straightforward
counting of degrees we see that $B$ maps $\mathcal{W}_2 \tensor
\Anti^1$ into $\mathcal{W}_2 \tensor \Anti^1$.
Second, we note that $B$ is a contraction with respect to the
total degree. Indeed, for $a, a' \in \mathcal{W}_2 \tensor\Anti^1$
with $a - a' \in \mathcal{W}_k \tensor \Anti^1$ we have
\begin{align*}
B(a)- B(a')
&=
\delta^{-1} D(a - a')
+
        \tfrac{1}{t}\delta^{-1}
        \left(a \circs a - a' \circs a'\right) \\
&=
\delta^{-1} D(a - a')
+
\tfrac{1}{t}\delta^{-1}\left(
(a - a') \circs a' + a \circs (a - a')
\right).
\end{align*}
The first term $\delta^{-1}D(a - a')$ is an element of
$\mathcal{W}_{k+1}\tensor\Anti^1$, because $D$ does not change the
total degree and $\delta^{-1}$ increases it by $+1$. Since $\Deg$
is a $\circs$-derivation and since $a, a'$ have total degree at
least $2$ and their difference has total degree at least $k$, the
second term has total degree at least $k+1$, as $\frac{1}{t}$ has
total degree $-2$ but $\delta^{-1}$ raises the total degree by
    $+1$. This allows us to apply the Banach fixed-point theorem for the
complete filtration by the total degree: we have a unique
fixed-point $B(\varrho) = \varrho$ with
$\varrho \in \mathcal{W}_2 \tensor \Anti^1$, i.e. $\varrho$
    satisfies \eqref{eq:x-necessaryprop}; note that
    \eqref{eq:x-property1} then holds automatically, since $\varrho$
    lies in the image of $\delta^{-1}$ and $(\delta^{-1})^2 = 0$.
    Finally, we show that this $\varrho$ fulfills
    \eqref{eq:x-property2}. Define
\begin{align*}
A
=
\delta \varrho
- R
- D\varrho
- \tfrac{1}{t} \varrho \circs \varrho
- \Omega.
\end{align*}
    Applying $\delta$ to $A$ and using
    Proposition~\ref{proposition:OperatorProperties} we obtain
\begin{align*}
\delta A
&=
- \delta D \varrho
- \tfrac{1}{t}
\left(
\delta \varrho \circs \varrho
-
\varrho \circs \delta \varrho
\right) \\
&=
D\delta \varrho + \tfrac{1}{t} \ad(\varrho)\delta \varrho
\\
&=
D\left(
A
+ R
+ D\varrho
+ \tfrac{1}{t} \varrho \circs \varrho
+ \Omega
\right)
+
\tfrac{1}{t}\ad(\varrho)
\left(
A
+ R
+ D\varrho
+ \tfrac{1}{t} \varrho \circs \varrho
+ \Omega
\right)
\\
&\ot{(a)}{=}
DA + \tfrac{1}{t}\ad(\varrho)(A).
\end{align*}
In ($a$) we used the fact that
$(-\delta + D + \frac{1}{t}\ad (\varrho)) (R + D \varrho +
\frac{1}{t} \varrho \circs \varrho + \Omega) = 0$,
which can be seen as a version of the second Bianchi identity for
$- \delta + D + \frac{1}{t}\ad(\varrho)$. This follows by an
explicit computation for arbitrary $\varrho$. On the other hand
\begin{align*}
\delta^{-1}A
=
\delta^{-1} \left(
\delta \varrho
- R
- D\varrho
- \tfrac{1}{t} \varrho \circs \varrho
-\Omega
\right)
        =
        \delta^{-1}\delta \varrho - \varrho
        =
        - \delta\delta^{-1} \varrho - \sigma(\varrho)
        =
        0
\end{align*}
    for $\varrho$ being the fixed point of the operator $B$, using
    again $\delta^{-1}\varrho = 0$ and $\sigma(\varrho) = 0$. In other
    words,
\begin{align*}
A
=
\delta^{-1}\delta A
=
\delta^{-1}\left(DA + \tfrac{1}{t}\ad(\varrho)(A)\right)
\end{align*}
is a fixed-point of the operator
$K \colon \mathcal{W} \tensor \Anti^\bullet \longrightarrow
\mathcal{W} \tensor \Anti^\bullet$ defined by
\begin{align*}
K a
=
\delta^{-1}\left(Da + \tfrac{1}{t}\ad(\varrho)(a)\right).
\end{align*}
    By an argument analogous to the one above, this operator is a
    contraction with respect to the total degree and hence has a unique
fixed-point. Finally, since $K$ is linear the fixed point has to
be zero, which means that $A = 0$.
\end{proof}
\begin{remark}
\label{remark:RecursiveConstruction}%
It is important to note that the above construction of the element
$\varrho$, which will be the crucial ingredient in the universal
deformation formula below, is a fairly explicit recursion formula.
Writing $\varrho = \sum_{r=3}^\infty \varrho^{(r)}$ with
components $\varrho^{(r)}$ of homogeneous total degree
$\Deg \varrho^{(r)} = r \varrho^{(r)}$ we see that
$\varrho^{(3)} = \delta^{-1} (R + t \Omega_1)$ and
\begin{equation}
\label{eq:VarrhoIteratively}
\varrho^{(r+3)}
=
\delta^{-1}\left(
D \varrho^{(r+2)}
+
\tfrac{1}{t}
\sum_{\ell = 1}^{r-1}
\varrho^{(\ell+2)} \circs \varrho^{(r+2-\ell)}
+
\Omega^{(r+2)}
\right),
\end{equation}
where $\Omega^{(2k)} = t^k \Omega_k$ for $k \in \mathbb{N}$ and
$\Omega^{(2k+1)} = 0$. Moreover, if we find a \emph{flat}
$\nabla$, i.e. if $R = 0$, then for trivial $\Omega = 0$ we have
$\varrho = 0$ as solution.
\end{remark}
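To make the first step of the recursion concrete, suppose $\nabla$ is
flat, so $R = 0$, and let $\Omega = t\Omega_1$ with
$\Omega_1 = \frac{1}{2}(\Omega_1)_{ij}\, e^i \wedge e^j$. With the
convention
$\insa(e_k)(e^i \wedge e^j) = \delta^i_k\, e^j - \delta^j_k\, e^i$
for the antisymmetric insertion, we find
\begin{equation*}
    \varrho^{(3)}
    =
    t\, \delta^{-1}(1 \tensor \Omega_1)
    =
    \frac{t}{2}\, (\Omega_1)_{ij}\, e^i \tensor e^j,
\end{equation*}
from which the higher components follow via
\eqref{eq:VarrhoIteratively}.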
\section{Universal Deformation Formula}
\label{sec:UniversalDeformationFormula}
Let us consider a triangular Lie algebra $(\lie g, r)$ acting on a
generic associative algebra $(\algebra A, \mu_{\algebra A})$ via
derivations. We denote by $\acts$ the corresponding Hopf algebra
action $\uea{\lie{g}} \longrightarrow \End(\algebra A)$. In the
following we refer to
\begin{align*}
\algebra{A} \tensor \mathcal{W} \tensor \Anti^\bullet
=
\prod_{k=0}^\infty
\left(
\algebra{A}
\tensor \Sym^k \lie{g}^*
\tensor \Anti^\bullet \lie{g}^*
\right)[[t]]
\end{align*}
as the \emph{extended Fedosov algebra}. The operators defined in the
previous section are extended to
$\algebra{A} \tensor \mathcal{W} \tensor \Anti^\bullet$ by acting
trivially on the $\algebra{A}$-factor and as before on the
$\mathcal{W} \tensor \Anti^\bullet$-factor.
The deformed product $\circs$ on $\mathcal{W} \tensor \Anti^\bullet$
together with the product $\mu_{\algebra{A}}$ of $\algebra{A}$ yields
a new (deformed) $\ring{R}[[t]]$-bilinear product $\msA$ for the
extended Fedosov algebra. Explicitly, on factorizing tensors we have
\begin{equation}
\label{eq:ExtendedProduct}
\msA\left(
\xi_1 \tensor f_1 \tensor \alpha_1,
\xi_2 \tensor f_2 \tensor\alpha_2
\right)
=
(\xi_1 \cdot \xi_2)
\tensor
(f_1\tensor\alpha_1) \circs (f_2\tensor\alpha_2),
\end{equation}
where $\xi_1, \xi_2 \in \algebra{A}$,
$f_1, f_2 \in \Sym^\bullet\lie{g}^*$ and
$\alpha_1, \alpha_2 \in \Anti^\bullet\lie{g}^*$. We simply write
$\xi_1 \cdot \xi_2$ for the (undeformed) product $\mu_{\algebra{A}}$
of $\algebra{A}$. Clearly, this new product $\msA$ is again associative.
As a new ingredient we use the action $\acts$ to define the operator
$L_{\algebra{A}}\colon \algebra{A} \tensor \mathcal{W} \tensor
\Anti^\bullet \longrightarrow \algebra{A} \tensor \mathcal{W} \tensor
\Anti^\bullet$ by
\begin{align}
\label{eq:LeftMult}
L_{\algebra{A}}(\xi \tensor f \tensor \alpha)
=
e_i \acts \xi \tensor f \tensor e^i\wedge \alpha
\end{align}
on factorizing elements and extend it $\ring{R}[[t]]$-linearly as
usual. Since the action of Lie algebra elements is by derivations, we
see that $L_{\algebra{A}}$ is a derivation of
$\algebra{A} \tensor \mathcal{W} \tensor \Anti^\bullet$ of
antisymmetric degree $+1$. The sum
\begin{equation}
\label{eq:ExtendedFedosovDerivation}
\FedosovA
=
L_{\algebra A} + \FedosovD
\end{equation}
is thus still a derivation of antisymmetric degree $+1$ which we call
the \emph{extended Fedosov derivation}. It turns out to be a
differential, too:
\begin{lemma}
The map
$\FedosovA = L_{\algebra A} + \FedosovD$
squares to zero.
\end{lemma}
\begin{proof}
First, we observe that
$\FedosovA^2 = L_{\algebra A}^2 + [\FedosovD,
L_{\algebra A}]$,
because $\FedosovD^2 = 0$. Next, since $\acts$ is a Lie
algebra action, we immediately obtain
\[
L_{\algebra A}^2(\xi\tensor f\tensor \alpha)
=
\frac{1}{2}C_{ij}^k e_k \acts \xi
\tensor f
\tensor e^i\wedge e^j \wedge \alpha
\]
on factorizing elements. We clearly have
$[\delta, L_{\algebra{A}}] = 0 = [\ad(\varrho), L_{\algebra{A}}]$
since the maps act on different tensor factors. It remains to
compute the only nontrivial term in
$[\FedosovD, L_{\algebra A}] = [D, L_{\algebra{A}}]$. Using
$\delta_\CE e^k = - \frac{1}{2} C^k_{ij} e^i \wedge e^j$, this
results immediately in
$[D, L_{\algebra{A}}] = - L_{\algebra{A}}^2$.
\end{proof}
The cohomology of this differential turns out to be almost trivial: we
only have a nontrivial contribution in antisymmetric degree $0$,
namely the kernel of $\FedosovA$. In higher antisymmetric
degrees, the following homotopy formula shows that the cohomology is
trivial:
\begin{proposition}
\label{proposition:CohomologyExtendedFedosov}%
The operator
\begin{equation}
\label{eq:DAHomotopy}
\FedosovA^{-1}
=
\delta^{-1}
\frac{1}
{
\id
-
\left[
\delta^{-1},
D + L_{\algebra{A}} + \frac{1}{t}\ad(\varrho)
\right]
}
\end{equation}
is a well-defined $\ring{R}[[t]]$-linear endomorphism of
$\algebra A\tensor \mathcal{W}\tensor\Anti^\bullet$ and we have
\begin{equation}
a
=
\FedosovA \FedosovA^{-1} a
+
\FedosovA^{-1}\FedosovA a
+
\frac{1}
{
\id
-
\left[
\delta^{-1},
D + L_{\algebra A} + \frac{1}{t}\ad(\varrho)
\right]
}
        \sigma(a)
\end{equation}
for all $a \in \algebra{A} \tensor \mathcal{W} \tensor \Anti^\bullet$.
\end{proposition}
\begin{proof}
Let us denote by $A$ the operator
$[\delta^{-1}, D + L_{\algebra A} +
\frac{1}{t}\ad(\varrho)]$.
Since it increases the total degree by $+1$, the geometric series
$(\id - A)^{-1}$ is well-defined as a formal series in the total
    degree. We start with the Poincaré lemma \eqref{eq:Poincare} and get
\begin{equation}
\label{eq:GeometricSeries}
- \FedosovA \delta^{-1}a
- \delta^{-1} \FedosovA a
+ \sigma(a)
=
(\id - A) a,
\end{equation}
    since $\FedosovA$ deforms the differential
    $-\delta$ by higher order terms in the total degree. The usual
homological perturbation argument then gives \eqref{eq:DAHomotopy}
by a standard computation, see
e.g. \cite[Prop.~6.4.17]{waldmann:2007a} for this computation.
\end{proof}
\begin{corollary}
\label{cor:TrivialCohomology}%
Let $a \in \algebra A \tensor \mathcal{W} \tensor \Anti^0$. Then
$\FedosovA a = 0$ if and only if
\begin{equation}
\label{eq:aInKernelDA}
a
=
\frac{1}
{
\id
-
\left[
\delta^{-1},
D + L_{\algebra A} + \frac{1}{t}\ad(\varrho)
\right]
}
\sigma(a).
\end{equation}
\end{corollary}
Since an element
$a \in \algebra{A} \tensor \mathcal{W} \tensor \Anti^0$ in the kernel
of $\FedosovA$ is completely determined by its component $\sigma(a)$
in symmetric and antisymmetric degree $0$, we can use this to define
the extended Fedosov Taylor series.
\begin{definition}[Extended Fedosov Taylor series]
\label{def:ExtFedosovTaylorSeries}%
Given the extended Fedosov derivation
$\FedosovA = -\delta + D + L_{\algebra{A}} +
\frac{1}{t}\ad(\varrho)$,
the extended Fedosov Taylor series of $\xi \in \algebra{A}[[t]]$
is defined by
\begin{equation}
\label{eq:ExtendedFedosovTaylorSeries}
\tau_{\algebra{A}}(\xi)
=
\frac{1}
{
\id
-
\left[
\delta^{-1},
D + L_{\algebra{A}} + \frac{1}{t}\ad(\varrho)
\right]
}
\xi.
\end{equation}
\end{definition}
\begin{lemma}
\label{lem:TauAsASeries}%
For $\xi \in \algebra A[[t]]$ we have
\begin{equation}
\label{eq:SigmaTau}
\sigma(\tau_{\algebra{A}}(\xi))
=
\xi.
\end{equation}
Moreover, the map
$\tau_{\algebra{A}}\colon \algebra{A}[[t]] \longrightarrow
\ker\FedosovA \cap \ker \dega$
is a $\ring{R}[[t]]$-linear isomorphism starting with
\begin{equation}
\label{eq:LowestDegOrdersTau}
\tau_{\algebra{A}}(\xi)
=
\sum_{k = 0}^\infty
\left[
\delta^{-1},
D + L_{\algebra{A}} + \tfrac{1}{t} \ad(\varrho)
\right]^k (\xi)
=
\xi \tensor 1 \tensor 1
+
e_i \acts\xi \tensor e^i \tensor 1
+ \cdots
\end{equation}
in zeroth and first order of the total degree.
\end{lemma}
\begin{proof}
The isomorphism property follows directly from
Corollary~\ref{cor:TrivialCohomology}. The commutator
$[\delta^{-1}, D + L_{\algebra{A}} + \frac{1}{t}\ad(\varrho)]$
raises the total degree at least by one, thus the zeroth and first
order terms in the total degree come from the terms with $k = 0$
and $k = 1$ in the geometric series in
\eqref{eq:LowestDegOrdersTau}. Here it is easy to see that the
only non-trivial contribution is
\[
\left[
\delta^{-1},
D + L_{\algebra{A}} + \tfrac{1}{t} \ad(\varrho)
\right] \xi
=
L_{\algebra{A}} \xi,
\]
    proving the claim in \eqref{eq:LowestDegOrdersTau}. Note that
    already for $k = 2$ we also get contributions of $D$ and
    $\ad(\varrho)$.
\end{proof}
Given the $\ring{R}[[t]]$-linear isomorphism
$\tau_{\algebra{A}}\colon \algebra{A}[[t]] \longrightarrow \ker
\FedosovA \cap \ker \dega$
we can turn $\algebra{A}[[t]]$ into an algebra by pulling back the
deformed product: note that the kernel of a derivation is always a
subalgebra and hence the intersection $\ker\FedosovA \cap \ker \dega$
is also a subalgebra. This allows us to obtain a universal
deformation formula for any $\uea{\lie{g}}$-module algebra
$\algebra{A}$:
\begin{theorem}[Universal deformation formula]
\label{theorem:StarProduct}%
Let $\lie{g}$ be a Lie algebra with non-degenerate
$r$-matrix. Moreover, let $s \in \Sym^2 \lie{g}$ be such that
there exists a symplectic torsion-free covariant derivative
$\nabla$ with $s$ being covariantly constant. Consider then
$\pi = r + s$. Finally, let $\Omega \in t \Anti^2 \lie{g}^*[[t]]$
be a formal series of $\delta_\CE$-closed two-forms. Then for
every associative algebra $\algebra{A}$ with action of $\lie{g}$
by derivations one obtains an associative deformation
$m_\star^{\algebra{A}}\colon \algebra{A}[[t]]\times\algebra A[[t]]
\longrightarrow \algebra{A}[[t]]$ by
\begin{equation}
\label{eq:TheRealUDF}
m_\star^{\algebra{A}} (\xi, \eta)
=
\sigma\left(
\msA
(\tau_{\algebra{A}}(\xi), \tau_{\algebra{A}}(\eta))
        \right).
\end{equation}
Writing simply $\star = \star_{\Omega, \nabla, s}$ for this new
product, one has
\begin{equation}
\label{eq:TheStarProduct}
\xi \star \eta
=
\xi \cdot \eta
+
\frac{t}{2} \pi^{ij} (e_i \acts\xi) \cdot (e_j \acts \eta)
+
\mathcal{O}(t^2)
\end{equation}
for $\xi, \eta \in \algebra{A}$.
\end{theorem}
\begin{proof}
The product $m_\star^{\algebra{A}}$ is associative, because $\msA$
is associative and $\tau_{\algebra{A}}$ is an isomorphism onto a
subalgebra with inverse $\sigma$. The second part is a direct
consequence of Lemma~\ref{lem:TauAsASeries}.
\end{proof}
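It may be instructive to spell out the simplest situation explicitly;
the following is only a sketch of the abelian case and not needed in
the sequel. Let $\lie{g}$ be abelian with non-degenerate $r$-matrix
$r$ and choose $\nabla = 0$, $s = 0$, and $\Omega = 0$. Then $R = 0$,
$D = 0$, and $\varrho = 0$, so the extended Fedosov Taylor series
becomes the formal Taylor expansion
\begin{equation*}
    \tau_{\algebra{A}}(\xi)
    =
    \sum_{k=0}^\infty
    \frac{1}{k!}
    \left(e_{i_1} \cdots e_{i_k} \acts \xi\right)
    \tensor e^{i_1} \vee \cdots \vee e^{i_k} \tensor 1,
\end{equation*}
and the standard Weyl--Moyal computation shows that
\eqref{eq:TheRealUDF} exponentiates the first order term of
\eqref{eq:TheStarProduct},
\begin{equation*}
    \xi \star \eta
    =
    \mu_{\algebra{A}} \circ
    \E^{\frac{t}{2} r^{ij} (e_i \acts)\, \tensor\, (e_j \acts)}
    (\xi \tensor \eta).
\end{equation*}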
\begin{remark}
The above theorem can be further generalized by observing that
given a Poisson structure on $\algebra{A}$ induced by a generic
bivector on $\lie{g}$, we can reduce to the quotient
$\lie{g} / \ker\acts$ and obtain an $r$-matrix on the quotient,
inducing the same Poisson structure.
\end{remark}
\section{Universal Construction for Drinfel'd Twists}
\label{sec:UniversalDeformationFormulaTwist}
Let us consider the particular case in which $\algebra{A}$ is the
tensor algebra $(\Tensor^\bullet(\uea{\lie{g}}), \tensor)$. In this
case, we denote by $L$ the operator $L_{\Tensor^\bullet(\uea{\lie
g})}\colon \Tensor^\bullet(\uea{\lie{g}})\tensor
\mathcal{W}\tensor\Anti^\bullet \longrightarrow
\Tensor^\bullet(\uea{\lie{g}})\tensor \mathcal{W}\tensor\Anti^\bullet
$, which is given by
\begin{equation}
\label{eq:LforUg}
L_{\Tensor^\bullet(\uea{\lie{g}})}(\xi\tensor f\tensor \alpha)
=
L_{e_i}\xi\tensor f\tensor e^i\wedge \alpha.
\end{equation}
Here $L_{e_i}$ is the left multiplication in $\uea{\lie{g}}$ of the
element $e_i$ extended as a derivation of the tensor product. Note
that it is independent of the choice of the basis in $\lie{g}$.
Applying the results discussed in the last section, we obtain a star
product for the tensor algebra over $\uea{\lie{g}}$ as a particular
case of Theorem~\ref{theorem:StarProduct}:
\begin{corollary}
\label{cor:StarProduct}%
The map $m_\star\colon \Tensor^\bullet(\uea{\lie
g})[[t]]\times\Tensor^\bullet(\uea{\lie{g}})[[t]]
\longrightarrow \Tensor^\bullet(\uea{\lie{g}})[[t]]$ defined by
\begin{equation}
\label{eq:StarProductForTensorUg}
m_\star (\xi,\eta)
=
\xi\star \eta
=
\sigma (\ms(\tau(\xi),\tau(\eta)))
\end{equation}
is an associative product and
\begin{equation}
\label{eq:ClassLimTensorAlgebraStar}
\xi\star \eta
=
\xi\tensor \eta
+
\frac{t}{2} \pi^{ij}L_{e_i}\xi\tensor L_{e_j}\eta
+
\mathcal{O}(t^2)
\end{equation}
for $\xi, \eta \in \Tensor^\bullet(\uea{\lie{g}})$.
\end{corollary}
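As a first consequence we can compute the lowest orders of the element
which will become the twist: taking
$\xi = \eta = 1 \in \uea{\lie{g}}$ in
\eqref{eq:ClassLimTensorAlgebraStar} and using $L_{e_i} 1 = e_i$ gives
\begin{equation*}
    1 \star 1
    =
    1 \tensor 1
    +
    \frac{t}{2} \pi^{ij}\, e_i \tensor e_j
    +
    \mathcal{O}(t^2),
\end{equation*}
which will be identified with a Drinfel'd twist in
Theorem~\ref{Thm:twist} below.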
In the following we prove that the star product $m_\star$ defined
above allows us to construct a formal Drinfel'd twist. Let us define, for
any linear map
\begin{equation}
\label{eq:PhiUktoUl}
\Phi \colon \uea{\lie{g}}^{\tensor k}
\longrightarrow
\uea{\lie{g}}^{\tensor \ell},
\end{equation}
the \emph{lifted} map
\begin{equation}
\label{eq:LiftedMap}
\Phi^\lift \colon
\uea{\lie{g}}^{\tensor k}
\tensor
\mathcal{W}\tensor\Anti^\bullet
\ni \xi\tensor f\tensor \alpha
\; \mapsto \;
\Phi(\xi)\tensor f\tensor\alpha \in
\uea{\lie{g}}^{\tensor \ell}
\tensor
\mathcal{W}\tensor\Anti^\bullet,
\end{equation}
which obeys the following simple properties:
\begin{lemma}
\label{lem:LiftingAndStructures}%
Let $\Phi\colon \uea{\lie{g}}^{\tensor k} \longrightarrow
\uea{\lie{g}}^{\tensor \ell}$ and $\Psi\colon
\uea{\lie{g}}^{\tensor m} \longrightarrow \uea{\lie{g}}^{\tensor
n}$ be linear maps.
\begin{lemmalist}
\item The lifted map $\Phi^\lift$ commutes with $\delta$,
$\delta^{-1}$, $D$, and $\ad(x)$ for all
$x \in \mathcal{W}\tensor \Anti^\bullet$.
\item We have
\begin{equation}
\label{eq:ListCommutesWithSigma}
\Phi \circ
\sigma\at{
\uea{\lie{g}}^{\tensor k}
\tensor
\mathcal{W}\tensor \Anti^\bullet
}
=
\sigma\at{
\uea{\lie{g}}^{\tensor \ell}
\tensor
\mathcal{W}\tensor\Anti^\bullet
}
\circ \Phi^\lift.
\end{equation}
\item We have
\begin{equation}
\label{eq:LiftAndProduct}
(\Phi\tensor\Psi)^\lift \ms(a_1,a_2)
=
\ms(\Phi^\lift(a_1), \Psi^\lift(a_2)),
\end{equation}
for any
$a_1 \in \uea{\lie{g}}^{\tensor k} \tensor \mathcal{W} \tensor
\Anti^\bullet$
and
$a_2 \in \uea{\lie{g}}^{\tensor m} \tensor \mathcal{W} \tensor
\Anti^\bullet$.
\end{lemmalist}
\end{lemma}
Let $\eta \in \uea{\lie{g}}^{\tensor k}[[t]]$ be given. Then we can
consider the right multiplication by $\eta$ using the algebra
structure of $\uea{\lie{g}}^{\tensor k}[[t]]$ coming from the
universal enveloping algebra as a map
\begin{equation}
\label{eq:RightMultiplication}
\cdot \eta\colon
\uea{\lie{g}}^{\tensor k}\ni \xi
\; \mapsto \;
\xi \cdot \eta \in \uea{\lie{g}}^{\tensor k}.
\end{equation}
To this map we can apply the above lifting process and extend it this
way to a $\ring{R}[[t]]$-linear map such that on factorizing
elements
\begin{equation}
\label{eq:LiftedRightMultiplication}
\cdot \eta\colon
\uea{\lie{g}}^{\tensor k}
\tensor
\mathcal{W} \tensor \Anti^\bullet
\ni \xi \tensor f \tensor \alpha
\; \mapsto \;
(\xi \cdot \eta) \tensor f \tensor \alpha
    \in \uea{\lie{g}}^{\tensor k} \tensor \mathcal{W} \tensor \Anti^\bullet,
\end{equation}
where we simply write $\cdot \eta$ instead of $(\cdot \eta)^\lift$.
Note that $a \cdot \eta$ is only defined if the tensor degrees $k$ of
$\eta \in \Tensor^k(\uea{\lie{g}})$ and $a$ coincide since we use the
algebra structure inherited from the universal enveloping algebra.
In the following we denote by $\FedosovTU$ the derivation
$\mathscr{D}_{\Tensor^\bullet(\uea{\lie{g}})}$ as obtained in
\eqref{eq:ExtendedFedosovDerivation}. We collect some properties
describing how the lifted right multiplications interact with the
extended Fedosov derivation:
\begin{lemma}
\label{lem:RactsProperties}%
\begin{lemmalist}
\item\label{item:RactsI} For any
$a \in \Tensor^k(\uea{\lie{g}}) \tensor \mathcal{W} \tensor
\Anti^\bullet$
and $\xi \in \Tensor^k(\uea{\lie{g}})[[t]]$, we have
      $\FedosovTU(a \cdot \xi) = \FedosovTU(a) \cdot \xi$.
\item\label{item:RactsII} The extended Fedosov Taylor series
$\tau$ preserves the tensor degree of elements in
$\Tensor^\bullet(\uea{\lie{g}})$.
\item\label{item:RactsIII} For any
$\xi, \eta \in \Tensor^k(\uea{\lie{g}})[[t]]$, we have
$\tau(\xi\cdot\eta) = \tau(\xi) \cdot \eta$.
\item\label{item:RactsVI} For any
$a_1 \in \Tensor^k(\uea{\lie{g}}) \tensor \mathcal{W} \tensor
\Anti^\bullet$
and
$a_2 \in \Tensor^\ell(\uea{\lie{g}}) \tensor \mathcal{W}
\tensor \Anti^\bullet$
as well as $\eta_1 \in \Tensor^k(\uea{\lie{g}})[[t]]$ and
$\eta_2 \in \Tensor^\ell(\uea{\lie{g}})[[t]]$, we have
      $\ms(a_1 \cdot \eta_1, a_2 \cdot \eta_2) = \ms(a_1, a_2)
      \cdot (\eta_1 \tensor \eta_2)$.
\end{lemmalist}
\end{lemma}
\begin{proof}
Let
$\xi \tensor a \in \Tensor^k(\uea{\lie{g}}) \tensor \mathcal{W}
\tensor \Anti^\bullet$
and $\eta \in \Tensor^k(\uea{\lie{g}})$ then we have
\begin{align*}
\FedosovTU((\xi \tensor a) \cdot \eta)
&=
\FedosovTU((\xi \cdot \eta) \tensor a)
\\
&=
L_{e_i} (\xi\cdot\eta) \tensor e^i\wedge a
+
(\xi \cdot \eta) \tensor \FedosovD(a)
\\
&=
(L_{e_i}(\xi) \tensor e^i \wedge a) \cdot \eta
+
(\xi \tensor \FedosovD(a)) \cdot \eta
\\
&=
      \FedosovTU(\xi \tensor a) \cdot \eta.
\end{align*}
This proves the first claim. The second claim follows immediately
from the fact that all operators defining $\tau$ do not change the
tensor degree. In order to prove the claim
\refitem{item:RactsIII}, let us consider
$\xi, \eta \in \Tensor^k(\uea{\lie{g}})[[t]]$. Then we have
\begin{equation*}
\FedosovTU(\tau(\xi)\cdot \eta)
=
\FedosovTU(\tau(\xi))\cdot \eta
=
0,
\end{equation*}
according to \refitem{item:RactsI}. Thus,
$\tau(\xi) \cdot \eta \in \ker{\FedosovTU} \cap \ker \dega$ and
therefore
\begin{equation*}
\tau(\xi) \cdot \eta
=
\tau(\sigma(\tau(\xi)\cdot \eta))
=
\tau(\sigma(\tau(\xi))\cdot \eta)
=
\tau(\xi\cdot\eta).
\end{equation*}
Finally, to prove the last claim we choose
$\xi_1 \tensor f_1 \in \Tensor^k(\uea{\lie{g}}) \tensor
\mathcal{W} \tensor \Anti^\bullet$
and
$\xi_2 \tensor f_2 \in \Tensor^\ell(\uea{\lie{g}}) \tensor
\mathcal{W} \tensor \Anti^\bullet$
as well as $\eta_1 \in \Tensor^k(\uea{\lie{g}}) [[t]]$ and
$\eta_2 \in \Tensor^\ell(\uea{\lie{g}})[[t]]$. We obtain
\begin{align*}
\ms\left(
(\xi_1\tensor f_1) \cdot \eta_1,
(\xi_2\tensor f_2) \cdot \eta_2
\right)
&=
\ms\left(
(\xi_1 \cdot \eta_1) \tensor f_1,
(\xi_2 \cdot \eta_2) \tensor f_2
\right) \\
&=
\left(
(\xi_1 \cdot \eta_1) \tensor (\xi_2 \cdot \eta_2)
\right)
\tensor (f_1 \circs f_2) \\
&=
\left(
(\xi_1 \tensor \xi_2) \cdot (\eta_1 \tensor \eta_2)
\right)
\tensor (f_1 \circs f_2)\\
&=
\left(
(\xi_1 \tensor \xi_2) \tensor (f_1 \circs f_2)
\right) \cdot (\eta_1 \tensor \eta_2).
\end{align*}
This concludes the proof.
\end{proof}
From the above lemma, we observe that the isomorphism $\tau$ can be
computed for any element $\xi \in \Tensor^k(\uea{\lie{g}})[[t]]$ via
\begin{equation}
\label{eq:tauxiExplicitly}
\tau(\xi)
=
\tau(1^{\tensor k} \cdot \xi)
=
\tau(1^{\tensor k }) \cdot \xi,
\end{equation}
where $1 \in \uea{\lie{g}}$ is the unit element of the universal
enveloping algebra. Moreover, from
Lemma~\ref{lem:LiftingAndStructures}, we have
\begin{equation}
\label{eq:StarPr}
\xi \star \eta
=
    \sigma(\ms(\tau(\xi), \tau(\eta)))
=
(1^{\tensor k} \star 1^{\tensor \ell}) \cdot (\xi\tensor \eta)
\end{equation}
for $\xi \in \Tensor^k(\uea{\lie{g}})[[t]]$ and
$\eta \in \Tensor^\ell(\uea{\lie{g}})[[t]]$. Thus $\star$ is entirely
determined by the values on tensor powers of the unit element of the
universal enveloping algebra. Note that the unit of $\star$ is the
unit element of the \emph{tensor algebra}, i.e. the unit in
$\ring{R} \subseteq \Tensor^\bullet(\uea{\lie{g}})$, and not
$1 \in \uea{\lie{g}}$.
\begin{lemma}
\label{lem:LiftedHopfStrucures}%
Let
$\Delta\colon \uea{\lie{g}}[[t]] \longrightarrow
\uea{\lie{g}}^{\tensor 2}[[t]]$
be the coproduct of $\uea{\lie{g}}[[t]]$ and
    $\epsilon\colon \uea{\lie{g}}[[t]] \longrightarrow \ring{R}[[t]]$ the counit.
\begin{lemmalist}
\item We have
\begin{equation}
\label{eq:LcommutesWithDelta}
L\at{
\uea{\lie{g}}^{\tensor 2}
\tensor \mathcal{W} \tensor \Anti^\bullet
}
\circ
\Delta^\lift
=
\Delta^\lift
\circ
L\at{\uea{\lie{g}}\tensor \mathcal{W}\tensor \Anti^\bullet}.
\end{equation}
\item For the Fedosov-Taylor series one has
\begin{equation}
\Delta^\lift \circ \tau = \tau \circ \Delta.
\end{equation}
\item We have
\begin{equation}
\label{eq:EpsilonOfL}
\epsilon^\lift
\circ
L\at{\uea{\lie{g}}\tensor \mathcal{W}\tensor \Anti^\bullet}
=
0.
\end{equation}
\item For the Fedosov-Taylor series one has
\begin{equation}
\label{eq:EpsilonListTau}
\epsilon^\lift \circ \tau =\epsilon.
\end{equation}
\end{lemmalist}
\end{lemma}
\begin{proof}
Let
$\xi \tensor f \tensor \alpha \in \uea{\lie{g}} \tensor
\mathcal{W} \tensor \Anti^\bullet$ then we get
\begin{align*}
\Delta^\lift L(\xi \tensor f \tensor \alpha)
&=
\Delta^\lift\left(
L_{e_i}(\xi) \tensor f \tensor e^i \wedge \alpha
\right) \\
&=
\Delta^\lift\left(
e_i \xi \tensor f \tensor e^i \wedge \alpha
\right)\\
&=
\Delta(e_i \xi) \tensor f \tensor e^i \wedge \alpha \\
&=
\Delta (e_i) \cdot \Delta(\xi)
\tensor f \tensor e^i \wedge \alpha \\
&=
(e_i\tensor 1 + 1\tensor e_i) \cdot \Delta(\xi)
\tensor f \tensor e^i \wedge \alpha \\
&=
L_{e_i}(\Delta(\xi)) \tensor f \tensor e^i \wedge \alpha \\
&=
L \Delta^\lift (\xi\tensor f\tensor \alpha),
\end{align*}
since we extended the left multiplication by $e_i$ as a
\emph{derivation} of the tensor product to higher tensor powers.
    Hence all the operators appearing in $\tau$ \emph{commute} with
    $\Delta^\lift$ and therefore we get the second part.
Similarly, we get
\begin{align*}
      \epsilon^\lift (L(\xi \tensor f \tensor \alpha))
&=
\epsilon^\lift (e_i \xi \tensor f \tensor e^i \wedge \alpha)
\\
&=
\epsilon(e_i \xi) \tensor f \tensor e^i \wedge \alpha \\
&=
\epsilon(e_i) \epsilon(\xi)
\tensor f \tensor e^i \wedge \alpha \\
&=
0,
\end{align*}
where we used that $\epsilon$ vanishes on primitive elements of
$\uea{\lie{g}}$. Since $\epsilon^\lift$ commutes with all other
operators $\delta^{-1}$, $D$ and $\ad(\varrho)$ according to
Lemma~\ref{lem:LiftingAndStructures}, we first get
\[
\epsilon^\lift
\circ
\left[\delta^{-1}, D + L + \tfrac{1}{t}\ad(\varrho)\right]
=
\left[\delta^{-1}, D + \tfrac{1}{t}\ad(\varrho)\right]
\circ
\epsilon^\lift.
\]
Hence for $\xi \in \uea{\lie{g}}[[t]]$ we have
\begin{align*}
\epsilon^\lift \tau(\xi)
&=
\epsilon^\lift
\left(
\sum_{k=0}^\infty
\left[
\delta^{-1}, D + L + \tfrac{1}{t} \ad(\varrho)
\right]^k
\xi
\right) \\
&=
\sum_{k=0}^\infty
\left[
\delta^{-1}, D + \tfrac{1}{t} \ad(\varrho)
\right]^k
\epsilon^\lift (\xi) \\
&=
\epsilon(\xi),
\end{align*}
since $\epsilon^\lift(\xi) = \epsilon(\xi)$ is just a constant and
hence unaffected by all the operators in the series. Thus only the
zeroth term remains.
\end{proof}
This is now the last ingredient to show that the element $1 \star 1$
is the twist we are looking for:
\begin{theorem}
\label{Thm:twist}
The element $1 \star 1 \in \uea{\lie{g}}^{\tensor 2}[[t]]$ is a
twist such that
\begin{equation}
\label{eq:TheTwistAtLast}
1\star 1
=
1\tensor 1 + \frac{t}{2} \pi + \mathcal{O}(t^2).
\end{equation}
\end{theorem}
\begin{proof}
First we see that
\begin{align*}
(\Delta \tensor \id)(1\star 1)
&=
(\Delta\tensor\id) \sigma (\ms(\tau(1),\tau(1))) \\
&=
\sigma\left(
(\Delta \tensor \id)^\lift
(\ms(\tau(1), \tau(1)))
\right) \\
&=
\sigma\left(
\ms(\Delta^\lift \tau(1), \tau(1))
\right) \\
&=
\sigma (\ms(\tau(\Delta (1)), \tau(1))) \\
&=
\sigma(\ms(\tau(1\tensor 1), \tau(1))) \\
&=
(1 \tensor 1) \star 1.
\end{align*}
Similarly, we get
$(\id \tensor \Delta)(1\star 1) = 1 \star (1\tensor 1)$. Thus,
using the associativity of $\star$ we obtain the first condition
\eqref{eq:TwistConditionI} for a twist as follows,
\begin{align*}
(\Delta \tensor \id) (1\star 1) \cdot ((1\star 1) \tensor 1)
&=
((1 \tensor 1)\star 1) \cdot ((1\star 1) \tensor 1) \\
&=
(1 \star 1) \star 1 \\
&=
1 \star (1 \star 1) \\
&=
(\id \tensor \Delta)(1 \star 1)
\cdot (1 \tensor (1 \star 1)).
\end{align*}
To check the normalization condition \eqref{eq:TwistConditionII}
we use Lemma~\ref{lem:LiftingAndStructures} and
Lemma~\ref{lem:LiftedHopfStrucures} again to get
\begin{align*}
(\epsilon \tensor \id)(1\star 1)
&=
(\epsilon \tensor \id) \sigma(\ms(\tau(1), \tau(1))) \\
&=
\sigma\left(
(\epsilon \tensor \id)^\lift
(\ms(\tau(1), \tau(1)))
\right) \\
&=
\sigma\left(
(\ms(\epsilon^\lift\tau(1), \tau(1)))
\right) \\
&=
\sigma\left(
(\ms(\epsilon(1), \tau(1)))
\right) \\
&=
\epsilon(1) \sigma(\tau(1)) \\
&=
1,
\end{align*}
since $\epsilon(1)$ is the unit element of $\ring{R}$ and thus
the unit element of $\Tensor^\bullet(\uea{\lie{g}})$, which serves
as unit element for $\ms$ as well. Similarly we obtain
    $(\id\tensor\epsilon)(1\star 1) = 1$. Finally, the facts that the
    first order term in $t$ of $1 \star 1$ is given by
    $\frac{1}{2}\pi$ and that the zeroth order term is $1 \tensor 1$
    follow from Corollary~\ref{cor:StarProduct}.
\end{proof}
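As an illustration, in the abelian situation with $\nabla = 0$,
$s = 0$, and $\Omega = 0$ sketched after
Theorem~\ref{theorem:StarProduct}, the same contraction argument
yields the twist in closed form, namely the well-known exponential
twist
\begin{equation*}
    1 \star 1
    =
    \E^{\frac{t}{2} r^{ij}\, e_i \tensor e_j}
    \in
    (\uea{\lie{g}} \tensor \uea{\lie{g}})[[t]],
\end{equation*}
in accordance with \eqref{eq:TheTwistAtLast}.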
\begin{remark}
\label{remark:WhatWeGotNow}%
From now on we refer to $1\star 1$ as the \emph{Fedosov twist}
\begin{equation}
\label{eq:FedosovTwistDefinition}
\twist{F}_{\Omega, \nabla, s} = 1 \star 1,
\end{equation}
    corresponding to the choice of the series $\Omega$ of
    $\delta_\CE$-closed two-forms, the choice of the torsion-free
    symplectic covariant derivative $\nabla$, and the choice of the
    covariantly constant $s$. In the
following we will be mainly interested in the dependence of
$\twist{F}_{\Omega, \nabla, s}$ on the two-forms $\Omega$ and
hence we shall write $\twist{F}_\Omega$ for simplicity. We also
note that for $s = 0$ and $\Omega = 0$ we have a \emph{preferred}
choice for $\nabla$, namely the one obtained from the Hess trick
out of the half-commutator covariant derivative as described in
Proposition~\ref{prop:HessTrick}. This gives a \emph{canonical
twist} $\twist{F}_0$ quantizing $r$.
\end{remark}
The results discussed above allow us to give an alternative proof of
the Drinfel'd theorem \cite{drinfeld:1983a}, stating the existence of
twists for every $r$-matrix:
\begin{corollary}[Drinfel'd]
\label{corollary:ExtendedDrinfeld}%
Let $(\lie{g},r) $ be a Lie algebra with $r$-matrix over a field
$\mathbb{K}$ with characteristic $0$. Then there exists a formal
twist $\twist {F}\in(\uea{\lie{g}}\tensor\uea{\lie{g}})[[t]]$,
such that
\begin{align*}
\twist{F}
=
1 \tensor 1 + \frac{t}{2} r + \mathcal{O}(t^2).
\end{align*}
\end{corollary}
To conclude this section we consider the question whether the two
approaches of universal deformation formulas actually coincide: on the
one hand we know that every twist gives a universal deformation
formula by \eqref{eq:TheUDF}. On the other hand, we have constructed
directly a universal deformation formula \eqref{eq:TheRealUDF} in
Theorem~\ref{theorem:StarProduct} based on the Fedosov
construction. Since we also get a twist from the Fedosov construction,
we are interested in the consistency of the two constructions. In
order to answer this question, we need some preparation. Hence let
$\algebra{A}$ be an algebra with action of $\lie{g}$ by derivations as
before. Then we define the map
\begin{equation}
\bullet\colon
\uea{\lie{g}} \tensor \mathcal{W} \tensor \Anti^\bullet
\times
\algebra{A}
\ni
(\xi \tensor \alpha, a)
\; \mapsto \;
(\xi \tensor \alpha) \bullet a
=
\xi \acts a \tensor \alpha
\in
\algebra{A} \tensor \mathcal{W} \tensor \Anti^\bullet
\end{equation}
for any $a \in \algebra{A} $ and
$\alpha \in \mathcal{W} \tensor \Anti^\bullet$. Then the following
algebraic properties are obtained by a straightforward computation:
\begin{lemma}
\label{lemma:ModuleStructures}%
For any $\xi \in \uea{\lie{g}}$,
$\alpha \in \mathcal{W} \tensor \Anti^\bullet$ and
$a \in \algebra{A}$ we have
\begin{lemmalist}
\item
$\sigma ((\xi \tensor \alpha) \bullet a) = \sigma (\xi \tensor
\alpha) \acts a$,
\item
$L_{\algebra{A}} (\xi \acts a \tensor \alpha) = L (\xi \tensor
\alpha) \bullet a$,
\item $\tau_{\algebra{A}}(a) = \tau (1) \bullet a$,
\item
$\msA (\xi_1 \tensor a_1 \tensor \alpha_1 , \xi_2 \tensor a_2
\tensor \alpha_2) = (\mu_{\algebra{A}} \tensor \id \tensor
\id) (\ms (\xi_1 \tensor \alpha_1 , \xi_2 \tensor
\alpha_2)\bullet (a_1 \tensor a_2))$.
\end{lemmalist}
\end{lemma}
For matching parameters $\Omega$, $\nabla$, and $s$ of the Fedosov
construction, the two approaches coincide:
\begin{proposition}
\label{prop:Coincide}%
For fixed choices of $\Omega$, $\nabla$, and $s$ and for any
$a, b \in \algebra{A}$ we have
\begin{equation}
a \star_{\Omega, \nabla, s} b
=
a \star_{\twist{F}_{\Omega, \nabla, s}} b.
\end{equation}
\end{proposition}
\begin{proof}
This is now just a matter of computation. We have
\begin{align*}
a \star b
&=
\sigma\left(
\msA (\tau_{\algebra{A}}(a) \tensor \tau_{\algebra{A}}(b))
\right)
\\
&\ot{(a)}{=}
\sigma\left(
\ms ((\tau(1) \tensor \tau(1)) \bullet (a \tensor b))
\right)
\\
&\ot{(b)}{=}
\mu_{\algebra{A}}\left(
\sigma (\ms (\tau (1)\tensor \tau (1)))
\acts
(a \tensor b)
\right)
\\
&=
\mu_{\algebra{A}} ((1 \star 1) \acts (a \tensor b))
\\
&=
a \star_{\twist{F}} b,
\end{align*}
where in $(a)$ we use the third claim of the above lemma and in
$(b)$ the first and the fourth.
\end{proof}
\section{Classification of Drinfel'd Twists}
\label{sec:Classification}
In this section we discuss the classification of twists on universal
enveloping algebras for a given Lie algebra $\lie{g}$, with
non-degenerate $r$-matrix. Recall that two twists $\twist{F}$ and
$\twist{F}'$ are said to be \emph{equivalent} and denoted by
$\twist{F}\sim \twist{F}'$ if there exists an element
$S\in \uea{\lie{g}}[[t]]$, with $S = 1 + \mathcal{O}(t)$ and
$\epsilon(S) = 1$ such that
\begin{equation}
\label{eq:EquivalentTwist}
\Delta(S) \twist{F}'
=
\twist{F} (S \tensor S).
\end{equation}
In the following we prove that the set of equivalence classes of
twists $\mathrm{Twist}(\uea{\lie{g}},r)$ with fixed $r$-matrix $r$ is
in bijection with the formal series in the second Chevalley-Eilenberg
cohomology $\mathrm{H}_\CE^2(\lie{g})[[t]]$.
We will fix the choice of $\nabla$ and the symmetric part $s$ in the
Fedosov construction. Then cohomologous two-forms in the construction
yield equivalent twists. In fact, an equivalence can even be computed
recursively:
\begin{lemma}
\label{lem:FedosovEquiv}%
Let $\varrho$ and $\varrho'$ be the two elements in
$\mathcal{W}_2 \tensor \Anti^1$ uniquely determined from
Proposition~\ref{proposition:FedosovDerivation}, corresponding to
two closed two-forms $\Omega, \Omega' \in t\Anti^2\lie{g}^*[[t]]$,
respectively, and let $\Omega - \Omega' = \delta_\CE C$ for a
fixed $C \in t\lie{g}^*[[t]]$. Then there is a unique solution
$h\in\mathcal{W}_3\tensor\Anti^0$ of
\begin{equation}
\label{eq:FedosovEquivConstr}
h
=
C \tensor 1
+
\delta^{-1}\left(
D h
-
\frac{1}{t}\ad(\varrho) h
-
\frac{\frac{1}{t}\ad(h)}
{\exp(\frac{1}{t}\ad(h)) - \id}
(\varrho' - \varrho)
\right)
\quad
\textrm{and}
\quad
\sigma(h) = 0.
\end{equation}
For this $h$ we have
\begin{align*}
\FedosovD'
=
\mathcal{A}_h \FedosovD \mathcal{A}_{-h},
\end{align*}
with $\mathcal{A}_h = \exp(\frac{1}{t}\ad(h))$ being an
automorphism of $\circs$.
\end{lemma}
\begin{proof}
In the context of the Fedosov construction it is well-known that
cohomologous two-forms yield equivalent star products. The above
approach with the explicit formula for $h$ follows the arguments
of \cite[Lemma~3.5]{reichert.waldmann:2016a} which is based on
\cite[Sect.~3.5.1.1]{neumaier:2001a}.
\end{proof}
\begin{lemma}
\label{lem:CohomologousEquiv}%
Let $\Omega, \Omega' \in t\Anti^2\lie{g}^*[[t]]$ be
$\delta_\CE$-cohomologous. Then the corresponding Fedosov twists
are equivalent.
\end{lemma}
\begin{proof}
By assumption, we can find an element $C \in t\lie g^*[[t]]$, such
that $\Omega - \Omega' = \delta_\CE C$. From
Lemma~\ref{lem:FedosovEquiv} we get an element
$h \in \mathcal{W}_3 \tensor \Anti^0$ such that
    $\FedosovD' = \mathcal{A}_h \FedosovD \mathcal{A}_{-h}$.
An easy computation shows that $\mathcal{A}_h$ commutes with $L$,
therefore we have
\begin{equation*}
\FedosovTU'
=
\mathcal{A}_h\FedosovTU\mathcal{A}_{-h}.
\end{equation*}
Thus, $\mathcal{A}_h$ is an automorphism of $\ms$ with
$\mathcal{A}_h \colon \ker \FedosovTU \longrightarrow
\ker\FedosovTU'$
being a bijection between the two kernels. Let us consider the
map
\begin{equation*}
S_h\colon
\Tensor^\bullet (\uea{\lie{g}})[[t]]
\ni \xi
\; \mapsto \;
(\sigma\circ\mathcal{A}_h\circ\tau)(\xi) \in
\Tensor^\bullet (\uea{\lie{g}})[[t]],
\end{equation*}
    which defines an equivalence of star products, i.e.
\begin{align}
\label{eq:StarAutomorphism}
S_h (\xi \star \eta)
=
S_h(\xi) \star' S_h(\eta)
\end{align}
for any $\xi, \eta \in \Tensor^\bullet(\uea{\lie{g}})[[t]]$. Let
$\xi, \eta \in \uea{\lie{g}}$, then using
Lemma~\ref{lem:RactsProperties} we have
\begin{align*}
S_h(\xi\tensor \eta)
&=
(\sigma\circ\mathcal{A}_h\circ\tau)(\xi\tensor \eta)\\
&=
(\sigma\circ\mathcal{A}_h)
(\tau(1\tensor 1) \cdot (\xi\tensor \eta)) \\
&=
\sigma\left(
(\mathcal{A}_h(\tau(1 \tensor 1)))
\cdot (\xi \tensor \eta)
\right) \\
&=
\sigma(\mathcal{A}_h(\tau(1 \tensor 1)))
\cdot (\xi \tensor \eta) \\
&=
\sigma\left(\mathcal{A}_h(\Delta^\lift \tau(1))\right)
\cdot(\xi\tensor\eta) \\
&=
\Delta\left(
\sigma(\mathcal{A}_h(\tau(1)))
\right)
\cdot(\xi\tensor\eta) \\
&=
\Delta(S_h(1)) \cdot (\xi \tensor \eta).
\end{align*}
From the linearity of $S_h$ we immediately get
$S_h(\xi \star \eta) = \Delta(S_h(1))(\xi \star \eta)$. Now,
putting $\xi = \eta = 1$ in \eqref{eq:StarAutomorphism} and using
\eqref{eq:StarPr} we obtain
\begin{equation*}
\Delta(S_h(1)) \cdot (1\star 1)
=
S_h(1\star 1)
=
S_h(1) \star' S_h(1)
=
(1 \star' 1) \cdot (S_h(1)\tensor S_h(1)).
\end{equation*}
    Thus, the twists $\twist{F}_{\Omega} = 1 \star 1$ and
    $\twist{F}_{\Omega'} = 1 \star' 1$ are equivalent, since by
    Lemma~\ref{lem:LiftingAndStructures} and \eqref{eq:EpsilonListTau}
    \begin{equation*}
        \epsilon (S_h (1))
        =
        \sigma\left(\epsilon^\lift \mathcal{A}_h \tau(1)\right)
        =
        \sigma\left(\mathcal{A}_h\, \epsilon^\lift \tau(1)\right)
        =
        \sigma(\mathcal{A}_h 1)
        =
        1,
    \end{equation*}
    where we also used that $\mathcal{A}_h$ fixes the central element $1$.
\end{proof}
\begin{lemma}
\label{lem:FedosovTwistProperties}%
Let $\Omega \in t\Anti^2\lie{g}^*[[t]]$ with $\delta_\CE\Omega=0$,
let $\varrho$ be the element in $\mathcal{W}_2 \tensor
\Anti^1$ uniquely determined by
Proposition~\ref{proposition:FedosovDerivation} and let
$\twist{F}_\Omega$ be the corresponding Fedosov twist.
\begin{lemmalist}
\item The lowest total degree of $\varrho$, where $\Omega_k$
appears, is $2k+1$, and we have
\begin{equation}
\label{eq:FirstOmegaInVarrho}
\varrho^{(2k+1)}
=
t^k \delta^{-1}\Omega_k
+
\textrm{terms not containing }
\Omega_k.
\end{equation}
\item For $\xi \in \Tensor^\bullet (\uea{\lie{g}})$ the lowest
total degree of $\tau(\xi)$, where $\Omega_k$ appears, is
$2k+1$, and we have
\begin{equation}
\label{eq:FirstOmegaInTau}
\tau(\xi)^{(2k+1)}
=
\frac{t^k}{2}
\left(
e_i \tensor \insa((e^i)^\sharp) \Omega_k
\right)
+
\textrm{terms not containing }
\Omega_k.
\end{equation}
\item The lowest $t$-degree of $\twist{F}_\Omega$, where
$\Omega_k$ appears, is $k+1$, and we have
\begin{align*}
(F_\Omega)_{k+1}
=
-\frac{1}{2}(\Omega_k)^\sharp
+
\textrm{terms not containing }
\Omega_k.
\end{align*}
\item The map $\Omega \mapsto \twist{F}_\Omega$ is injective.
\end{lemmalist}
\end{lemma}
\begin{proof}
The proof uses the recursion formula for $\varrho$ as well as the
explicit formulas for $\tau$ and $\star$ and consists in a careful
counting of degrees. It follows the same lines of
\cite[Thm.~6.4.29]{waldmann:2007a}.
\end{proof}
\begin{lemma}
\label{lemma:CohomologousOmegas}%
Let $\twist{F}_\Omega$ and $\twist{F}_{\Omega'}$ be two equivalent
Fedosov twists corresponding to the closed two-forms
$\Omega, \Omega' \in t\Anti^2\lie{g}^*[[t]]$. Then there exists an
element $C\in t\lie{g}^*[[t]]$, such that
$\delta_\CE C = \Omega - \Omega'$.
\end{lemma}
\begin{proof}
We can assume that $\Omega$ and $\Omega'$ coincide up to order
$k-1$ for $k \in \mathbb{N}$, since they coincide at order
$0$. Due to Lemma~\ref{lem:FedosovTwistProperties}, we have
\begin{equation*}
(F_\Omega)_{i}
=
(F_{\Omega'})_{i}
\end{equation*}
for any $i \in \{0, \ldots, k\}$ and
\begin{equation*}
(F_{\Omega})_{k+1} - (F_{\Omega'})_{k+1}
=
\frac{1}{2}(-\Omega_k^\sharp +{\Omega'}_k^\sharp).
\end{equation*}
From Lemma~\ref{lem:SkewsymmetricTwistExact}, we know that we can
find an element $\xi \in \lie{g}^*$, such that
\begin{align*}
([(F_{\Omega})_{k+1} - (F_{\Omega'})_{k+1}])^\flat
=
- \Omega_k + {\Omega'}_k
=
\delta_\CE\xi,
\end{align*}
where by $[(F_{\Omega})_{k+1} - (F_{\Omega'})_{k+1}]$ we denote
the skew-symmetrization of
$(F_{\Omega})_{k+1} - (F_{\Omega'})_{k+1}$. Let us define
$\hat\Omega = \Omega - t^k\delta_\CE\xi$. From
Lemma~\ref{lem:FedosovTwistProperties} we see that
\begin{equation*}
(F_{\hat\Omega})_{k+1}-(F_{\Omega'})_{k+1}
=
0.
\end{equation*}
Therefore the two twists $\twist{F}_{\hat\Omega}$ and
$\twist{F}_{\Omega'}$ coincide up to order $k+1$. Finally, since
$\twist{F}_{\hat\Omega}$ and $\twist{F}_{\Omega}$ are equivalent
(from Lemma~\ref{lem:CohomologousEquiv}) and $\twist{F}_{\Omega}$
and $\twist{F}_{\Omega'}$ are equivalent by assumption, the two
twists $\twist{F}_{\hat\Omega}$ and $\twist{F}_{\Omega'}$ are also
equivalent. By induction, we find an element
$C \in t\lie{g}^*[[t]]$, such that
\begin{align*}
\twist{F}_{\Omega + \delta_\CE C}
=
\twist{F}_{\Omega'},
\end{align*}
and therefore, from Lemma~\ref{lem:FedosovTwistProperties},
$\Omega + \delta_\CE C = \Omega'$.
\end{proof}
\begin{lemma}
\label{lemma:AllTwistAreFedosov}%
Let $\twist{F}\in (\uea{\lie{g}}\tensor \uea{\lie{g}})[[t]]$ be a
formal twist with $r$-matrix $r$. Then there exists a Fedosov
twist $\twist{F}_{\Omega}$, such that
$\twist{F}\sim \twist{F}_\Omega$.
\end{lemma}
\begin{proof}
Let $\twist{F}\in (\uea{\lie{g}} \tensor \uea{\lie{g}})[[t]]$ be a
given twist. We can assume that there is a Fedosov twist
$\twist{F}_\Omega$, which is equivalent to $\twist{F}$ up to order
$k$. Therefore we find a $\hat{\twist {F}}$ such that
$\hat{\twist {F}}$ is equivalent to $\twist{F}$ and coincides with
$\twist{F}_\Omega$ up to order $k$. Due to
Lemma~\ref{lem:SkewsymmetricTwistExact}, we can find an element
$\xi\in \lie g^*$, such that
\begin{equation*}
[(F_\Omega)_{k+1} - \hat{F}_{k+1}]
=
(\delta_\CE \xi)^\sharp.
\end{equation*}
From Lemma \ref{lem:CohomologousEquiv}, the twist
$\twist{F}_{\Omega'}$ corresponding to
$\Omega' = \Omega - t^k\delta_\CE \xi$ is equivalent to
$\twist{F}_{\Omega}$. Moreover, $\twist{F}_{\Omega'}$ coincides
with $\hat{\twist {F}}$ up to order $k$, since
$\twist{F}_{\Omega'}$ coincides with $\twist{F}_{\Omega}$ and
\begin{equation*}
(F_{\Omega'})_{k+1}
=
(F_\Omega)_{k+1} + \frac{1}{2}(\delta_\CE \xi)^\sharp.
\end{equation*}
Therefore the skew-symmetric part of
$(F_{\Omega'})_{k+1}-\hat{F}_{k+1}$ vanishes and this
difference is exact with respect to the differential defined in
\eqref{eq:HKRDiff}. Applying Lemma~\ref{lem:EquivalenceOrder}, we
can see that $\twist{F}_{\Omega'}$ is equivalent to
$\hat{\twist {F}}$ up to order $k+1$. The claim follows by
induction.
\end{proof}
Summing up all the above lemmas we obtain the following
characterization of the equivalence classes of twists:
\begin{theorem}[Classification of twists]
\label{theorem:Classification}%
Let $\lie{g}$ be a Lie algebra over $\ring{R}$ such that $\lie{g}$
is free and finite-dimensional and let $r \in \Anti^2\lie{g}$ be a
classical $r$-matrix such that $\sharp$ is bijective. Then the
set of equivalence classes of twists
$\mathrm{Twist}(\uea{\lie{g}}, r)$ with $r$-matrix $r$ is in
bijection to $\mathrm{H}_\CE^2(\lie{g})[[t]]$ via
$\Omega \mapsto \twist{F}_\Omega$.
\end{theorem}
It is important to remark that even for an abelian Lie algebra
$\lie{g}$ the second Chevalley-Eilenberg cohomology
$\mathrm{H}_\CE^2(\lie{g})[[t]]$ is different from zero. Thus, not all
twists are equivalent. An example of a Lie algebra with trivial
$\mathrm{H}_\CE^2(\lie{g})[[t]]$ is the two-dimensional non-abelian
Lie algebra:
\begin{example}[$ax+b$]
Let us consider the two-dimensional Lie algebra given by the
$\ring{R}$-span of the elements $X, Y \in \lie{g}$ fulfilling
\begin{equation}
\label{eq:abPlusbLieAlgebra}
[X, Y] = Y,
\end{equation}
with $r$-matrix $r = X \wedge Y$. We denote the dual basis of
$\lie{g}^*$ by $\{X^*, Y^*\}$. Since $\lie{g}$ is two-dimensional,
all elements of $\Anti^2\lie{g}^*$ are a multiple of
$X^* \wedge Y^*$, which is closed for dimensional reasons. For
$Y^*$ we have
\begin{equation}
\label{eq:XstarYstarExact}
(\delta_\CE Y^*)(X, Y)
=
- Y^*([X, Y])
=
-Y^*(Y)
=
-1.
\end{equation}
Therefore $\delta_\CE Y^* = -X^* \wedge Y^*$ and we obtain
$\mathrm{H}_\CE^2(\lie{g}) = \{0\}$. From
Theorem~\ref{theorem:Classification} we can therefore conclude that
all twists with $r$-matrix $r$ of $\lie{g}$ are equivalent.
\end{example}
\begin{remark}[Original construction of Drinfel'd]
\label{remark:DrinfeldConstruction}%
Let us briefly recall the original construction of Drinfel'd from
\cite[Thm.~6]{drinfeld:1983a}: as a first step he uses the inverse
$B \in \Anti^2 \lie{g}^*$ of $r$ as a $2$-cocycle to extend
$\lie{g}$ to $\tilde{\lie{g}} = \lie{g} \oplus \mathbb{R}$ by
considering the new bracket
\begin{equation}
\label{eq:NewBracket}
[(X, \lambda), (X', \lambda')]_{\tilde{\lie{g}}}
=
([X, X']_{\lie{g}}, B(X, X'))
\end{equation}
where $X, X' \in \lie{g}$ and $\lambda, \lambda' \in
\mathbb{R}$. On $\tilde{\lie{g}}^*$ one has the canonical star
product $\star_{DG}$ of Drinfel'd and Gutt \cite{gutt:1983a}
quantizing the linear Poisson structure. Inside
$\tilde{\lie{g}}^*$ one has an affine subspace defined by $H =
\lie{g}^* + \ell_0$ where $\ell_0$ is the linear functional
$\ell_0\colon \tilde{\lie{g}} \ni (X, \lambda) \mapsto
\lambda$. Since the extension is central, $\star_{DG}$ turns out
to be tangential to $H$, therefore it restricts to an associative
star product on $H$. In a final step, Drinfel'd then uses a local
diffeomorphism $G \longrightarrow H$ by mapping $g$ to
$\operatorname{Ad}_{g^{-1}}^* \ell_0$ to pull back the star
product to $G$, which turns out to be left-invariant. By
\cite[Thm.~1]{drinfeld:1983a} this gives a twist. Without major
modification it should be possible to include also closed higher
order terms $\Omega \in t \Anti^2 \lie{g}^*[[t]]$ by considering
$B + \Omega$ instead. We conjecture that
\begin{enumerate}
\item this gives all possible classes of Drinfel'd twists by
modifying his construction including $\Omega$,
\item the resulting classification matches the classification by
our Fedosov construction.
\end{enumerate}
Note that a direct comparison of the two approaches will be
nontrivial due to the presence of the combinatorics in the BCH
formula inside $\star_{DG}$ in the Drinfel'd construction on the
one hand and the recursion in our Fedosov approach on the other
hand. We will come back to this in a future project.
\end{remark}
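As a brief illustration of the first step of this construction, consider
again the $ax+b$ example from above: for $r = X \wedge Y$ the inverse
two-form is $B = X^* \wedge Y^*$ (with the normalization $B(X, Y) = 1$,
depending on conventions), and the extended bracket \eqref{eq:NewBracket}
yields
\begin{equation*}
    [(X, 0), (Y, 0)]_{\tilde{\lie{g}}}
    =
    (Y, 1),
\end{equation*}
while $(0, 1)$ is central, i.e. $\tilde{\lie{g}}$ is the central extension
of the $ax+b$ algebra by the two-cocycle $B$.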
\section{Hermitian and Completely Positive Deformations}
\label{sec:HermitianCPDeformations}
In this section we now include aspects of positivity into the picture:
in addition, let $\ring{R}$ now be an ordered ring and set
$\ring{C} = \ring{R}(\I)$ where $\I^2 = -1$. In $\ring{C}$ we have a
complex conjugation as usual, denoted by $z \mapsto \cc{z}$. The Lie
algebra $\lie{g}$ will now be a Lie algebra over $\ring{R}$, still
being free as an $\ring{R}$-module of finite dimension.
The formal power series $\ring{R}[[ t ]]$ are then again an ordered
ring in the usual way and we have
$\ring{C}[[ t ]] = (\ring{R}[[ t ]])(\I)$. Moreover, we consider a
$^*$-algebra $\algebra{A}$ over $\ring{C}$ which we would like to
deform. Here we are interested in \emph{Hermitian} deformations
$\star$, where we require
\begin{equation}
\label{eq:HermitianDeformation}
(a \star b)^* = b^* \star a^*
\end{equation}
for all $a, b \in \algebra{A}[[ t ]]$.
Instead of the universal enveloping algebra directly, we consider now
the complexified universal enveloping algebra
$\ueac{\lie{g}} = \uea{\lie{g}} \tensor[\ring{R}] \ring{C} =
\uea{\lie{g}_{\ring{C}}}$
where $\lie{g}_{\ring{C}} = \lie{g} \tensor[\ring{R}] \ring{C}$ is the
complexified Lie algebra. Then this is a $^*$-Hopf algebra where the
$^*$-involution is determined by the requirement
\begin{equation}
\label{eq:HopfStarAlgebra}
X^* = - X
\end{equation}
for $X \in \lie{g}$, i.e. the elements of $\lie{g}$ are
\emph{anti-Hermitian}. The needed compatibility of the action of
$\lie{g}$ on $\algebra{A}$ with the $^*$-involution is then
\begin{equation}
\label{eq:StarAction}
(\xi \acts a)^* = S(\xi)^* \acts a^*
\end{equation}
for all $\xi \in \ueac{\lie{g}}$ and $a \in \algebra{A}$. This is
equivalent to $(X \acts a)^* = X \acts a^*$ for $X \in \lie{g}$. We
also set the elements of $\lie{g}^* \subseteq \lie{g}_{\ring{C}}^*$ to
be \emph{anti-Hermitian}.
In a first step we extend the complex conjugation to tensor powers of
$\lie{g}_{\ring{C}}^*$ and hence to the complexified Fedosov algebra
\begin{equation}
\mathcal{W}_{\ring{C}} \tensor \Anti^\bullet_{\ring{C}}
=
\left(
\prod_{k=0}^\infty
\Sym^k\lie{g}_{\ring{C}}^*
\tensor
\Anti^\bullet \lie{g}_{\ring{C}}^*
\right)[[ t ]]
\end{equation}
and obtain a (graded) $^*$-involution, i.e.
\begin{equation}
\label{eq:FedosovInvolution}
((f \tensor \alpha) \cdot (g \tensor \beta))^*
=
(-1)^{ab} (g \tensor \beta)^* \cdot (f \tensor \alpha)^*,
\end{equation}
where $a$ and $b$ are the antisymmetric degrees of $\alpha$ and
$\beta$, respectively.
Let $\pi \in \lie{g}_{\ring{C}} \tensor \lie{g}_{\ring{C}}$ have
antisymmetric part $\pi_- \in \Anti^2 \lie{g}_{\ring{C}}$ and
symmetric part $\pi_+ \in \Sym^2 \lie{g}_{\ring{C}}$. Then we have
for the corresponding operator $\mathcal{P}_\pi$ as in
\eqref{eq:WeylMoyalStarProduct}
\begin{equation}
\mathsf{T} \circ \cc{\mathcal{P}_\pi (a \tensor b)}
=
\mathcal{P}_{\tilde{\pi}} \circ \mathsf{T} (\cc{a} \tensor \cc{b})
\end{equation}
where $\tilde{\pi} = \cc{\pi}_+ - \cc{\pi}_-$. In particular, we have
$\tilde{\pi} = \pi$ iff $\pi_+$ is Hermitian and $\pi_-$ is
anti-Hermitian. We set $t = \I t $ for the formal parameter as in
the previous sections, i.e. we want to treat $t$ as imaginary. Then we
arrive at the following statement:
\begin{lemma}
\label{lemma:HermitianFiberwise}%
Let $\pi = \pi_+ + \pi_- \in \lie{g}_{\ring{C}} \tensor
\lie{g}_{\ring{C}}$. Then the fiberwise product
\begin{equation}
\label{eq:HermitianFiberwise}
a \circs b
=
\mu \circ \E^{\frac{\I t }{2}\mathcal{P}_\pi}(a\tensor b)
\end{equation}
satisfies $(a \circs b)^* = (-1)^{ab} b^* \circs a^*$ iff $\pi_+$
is anti-Hermitian and $\pi_-$ is Hermitian.
\end{lemma}
This lemma is now the motivation to take a \emph{real} classical
$r$-matrix
$r \in \Anti^2 \lie{g} \subseteq \Anti^2 \lie{g}_{\ring{C}}$.
Moreover, writing the symmetric part of $\pi$ as $\pi_+ = \I s$ then
$s = \cc{s} \in \Sym^2 \lie{g}$ is Hermitian as well. In the following
we shall assume that these reality conditions are satisfied.
It is now not very surprising that with such a Poisson tensor $\pi$ on
$\lie{g}$ we can achieve a Hermitian deformation of a $^*$-algebra
$\algebra{A}$ by the Fedosov construction. We summarize the relevant
properties in the following proposition:
\begin{proposition}
\label{proposition:HermitianFedosov}%
Let $\pi = r + \I s$ with a real strongly non-degenerate
$r$-matrix $r \in \Anti^2 \lie{g}$ and a real symmetric
$s \in \Sym^2 \lie{g}$ such that there exists a symplectic
torsion-free covariant derivative $\nabla$ for $\lie{g}$ with
$\nabla s = 0$.
\begin{propositionlist}
\item The operators $\delta$, $\delta^{-1}$, and $\sigma$ are
real.
\item The operator $D$ is real and $D^2 = \frac{1}{\I t } \ad(R)$
with a Hermitian curvature $R = R^*$.
\item Suppose that
$\Omega = \Omega^* \in \Anti^2 \lie{g}^*_{\ring{C}}[[ t ]]$
is a formal series of Hermitian $\delta_\CE$-closed
two-forms. Then the unique
$\varrho \in \mathcal{W}_2\tensor\Anti^1$ with
\begin{equation}
\label{eq:varrhoHermitian}
\delta \varrho
=
R + D\varrho
+
\tfrac{1}{\I t } \varrho \circs \varrho + \Omega
\end{equation}
and $\delta^{-1} \varrho = 0$ is Hermitian, too. In this case,
the Fedosov derivative
$\FedosovD = - \delta + D + \frac{1}{\I t }\ad(\varrho)$
is real.
\end{propositionlist}
Suppose now in addition that $\algebra{A}$ is a $^*$-algebra over
$\ring{C}$ with a $^*$-action of $\lie{g}$,
i.e. \eqref{eq:StarAction}.
\begin{propositionlist}
\addtocounter{enumi}{3}
\item The operator $L_{\algebra{A}}$ as well as the extended
Fedosov derivation $\FedosovA$ are real.
\item The Fedosov-Taylor series $\tau_{\algebra{A}}$ is real.
\item The formal deformation $\star$ from
Theorem~\ref{theorem:StarProduct} is a Hermitian deformation.
\end{propositionlist}
\end{proposition}
When we apply this to the twist itself we first have to clarify which
$^*$-involution we take on the tensor algebra
$\Tensor^\bullet(\ueac{\lie{g}})$: by the universal property of the
tensor algebra, there is a unique way to extend the $^*$-involution of
$\ueac{\lie{g}}$ to a $^*$-involution of
$\Tensor^\bullet(\ueac{\lie{g}})$. With respect to this
$^*$-involution we have $r^* = - r$ since $r$ is not only real as an
element of $\lie{g}_{\ring{C}} \tensor \lie{g}_{\ring{C}}$ but also
antisymmetric, causing an additional sign with respect to the
$^*$-involution of $\Tensor^\bullet(\ueac{\lie{g}})$. Analogously, we
have $s^* = s$ for the real and symmetric part of $\pi$.
\begin{corollary}
\label{corollary:TwistHermitian}%
The Fedosov twist $\mathcal{F}$ is Hermitian.
\end{corollary}
\begin{proof}
Indeed, $1 \in \ueac{\lie{g}}$ is Hermitian and hence
$(1 \star 1)^* = 1^* \star 1^* = 1 \star 1$.
\end{proof}
Up to now we have not yet used the fact that $\ring{R}$ is ordered but
only that we have a $^*$-involution. The ordering of $\ring{R}$ allows
one to transfer concepts of positivity from $\ring{R}$ to every
$^*$-algebra over $\ring{C}$. Recall that a linear functional
$\omega\colon \algebra{A} \longrightarrow \ring{C}$ is called
\emph{positive} if
\begin{equation}
\label{eq:omegaPositive}
\omega(a^*a) \ge 0
\end{equation}
for all $a \in \algebra{A}$. This allows one to define an algebra element
$a \in \algebra{A}$ to be \emph{positive} if $\omega(a) \ge 0$ for all
positive $\omega$. Note that the positive elements, denoted by
$\algebra{A}^+$, form a convex cone in $\algebra{A}$ and
$a \in \algebra{A}^+$ implies $b^*ab \in \algebra{A}^+$ for all
$b \in \algebra{A}$. Moreover, elements of the form $a = b^*b$ are
clearly positive: their convex combinations are denoted by
$\algebra{A}^{++}$ and called \emph{algebraically positive}. More
details on these notions of positivity can be found in
\cite{bursztyn.waldmann:2005b, bursztyn.waldmann:2001a,
waldmann:2005b}.
Since $\ring{R}[[ t ]]$ is ordered along with $\ring{R}$, one can
compare the positive elements of $\algebra{A}$ and the ones of
$(\algebra{A}[[ t ]], \star)$, where $\star$ is a Hermitian
deformation. The first trivial observation is that for a positive
linear functional
$\boldsymbol{\omega} = \omega_0 + t \omega_1 + \cdots$ of the
deformed algebra, i.e. $\boldsymbol{\omega}(a^* \star a) \ge 0$ for
all $a \in \algebra{A}[[ t ]]$, the classical limit $\omega_0$ of
$\boldsymbol{\omega}$ is a positive functional of the undeformed
algebra. The converse need not be true: there are examples where a
positive $\omega_0$ is not directly positive for the deformed
algebras, i.e. one needs higher order corrections, and there are
examples where one simply cannot find such higher order corrections
at all, see \cite{bursztyn.waldmann:2005a,
bursztyn.waldmann:2000a}. One calls the deformation $\star$ a
\emph{positive deformation} if every positive linear functional
$\omega_0$ of the undeformed algebra $\algebra{A}$ can be deformed
into a positive functional
$\boldsymbol{\omega} = \omega_0 + t \omega_1 + \cdots$ of the
deformed algebra $(\algebra{A}[[ t ]], \star)$. Moreover, since also
$\Mat_n(\algebra{A})$ is a $^*$-algebra in a natural way we call
$\star$ a \emph{completely positive deformation} if for all $n$ the
canonical extension of $\star$ to $\Mat_n(\algebra{A})[[ t ]]$ is a
positive deformation of $\Mat_n(\algebra{A})$, see
\cite{bursztyn.waldmann:2005a}. Finally, if no higher order
corrections are needed, then $\star$ is called a \emph{strongly
positive deformation}, see \cite[Def.~4.1]{bursztyn.waldmann:2000a}.
As a next step we want to use a Kähler structure for $\lie{g}$. In
general, this will not exist, so we have to require it explicitly. In
detail, we want to be able to find a basis
$e_1, \ldots, e_n, f_1, \ldots, f_n \in \lie{g}$ with the property
that the $r$-matrix decomposes into
\begin{equation}
\label{eq:ABpartOfr}
(e^k \tensor f^\ell)(r)
=
A^{k\ell}
=
- (f^\ell \tensor e^k)(r)
\quad
\textrm{and}
\quad
(e^k \tensor e^\ell)(r)
=
B^{k\ell}
=
- (f^k \tensor f^\ell)(r)
\end{equation}
with a symmetric matrix $A = A^\Trans \in \Mat_n(\ring{R})$ and an
antisymmetric matrix $B = - B^\Trans \in \Mat_n(\ring{R})$. We set
\begin{equation}
\label{eq:DefinitionOfs}
s
=
A^{k\ell} (e_k \tensor e_\ell + f_k \tensor f_\ell)
+
B^{k\ell} e_k \tensor f_\ell
+
B^{k\ell} f_\ell \tensor e_k.
\end{equation}
The requirement of being \emph{Kähler} is now that first we find a
symplectic covariant derivative $\nabla$ with $\nabla s = 0$. Second,
we require the symmetric two-tensor $s$ to be positive in the sense
that for all $x \in \lie{g}^*$ we have $(x \tensor x)(s) \ge 0$. In
this case we call $s$ (and the compatible $\nabla$) a Kähler structure
for $r$. We have chosen this more coordinate-based formulation over
the invariant one since in the case of an ordered ring $\ring{R}$
instead of the reals $\mathbb{R}$ it is more convenient to start
directly with the nice basis we need later on.
As usual we consider now $\lie{g}_{\ring{C}}$ with the vectors
\begin{equation}
\label{eq:ZkccZell}
Z_k
=
\frac{1}{2}(e_k - \I f_k)
\quad
\textrm{and}
\quad
\cc{Z}_k
=
\frac{1}{2}(e_k + \I f_k)
\end{equation}
which together constitute a basis of the complexified Lie
algebra. Finally, we have the complex matrix
\begin{equation}
\label{eq:complexMatrixg}
g = A + \I B \in \Mat_n(\ring{C}),
\end{equation}
which satisfies now the positivity requirement
\begin{equation}
\label{eq:gPositive}
\cc{z_k} g^{k\ell} z_\ell \ge 0
\end{equation}
for all $z_1, \ldots, z_n \in \ring{C}$. If our ring $\ring{R}$ has
sufficiently many inverses and square roots, one can even find a basis
$e_1, \ldots, e_n, f_1, \ldots, f_n$ such that $g$ becomes the unit
matrix. However, since we want to stay with an arbitrary ordered ring
$\ring{R}$ we do not assume this.
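As a quick numerical illustration of the positivity requirement
\eqref{eq:gPositive} for $\ring{R} = \mathbb{R}$, one may check that the
Hermitian matrix $g = A + \I B$ has non-negative eigenvalues; the matrices
$A$ and $B$ in the following minimal sketch are made-up examples and are
not derived from any particular $r$-matrix:
\begin{verbatim}
import numpy as np

# Hypothetical example data: A symmetric, B antisymmetric (n = 2).
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])     # A = A^T
B = np.array([[0.0, 0.3],
              [-0.3, 0.0]])    # B = -B^T

g = A + 1j * B                 # g = A + iB

# g is Hermitian since A is symmetric and B is antisymmetric.
assert np.allclose(g, g.conj().T)

# bar(z)^T g z >= 0 for all z iff all eigenvalues of g are >= 0.
eigvals = np.linalg.eigvalsh(g)
print("eigenvalues:", eigvals, "positive:", bool(np.all(eigvals >= 0)))
\end{verbatim}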
We now use $\pi = r + \I s$ to obtain a fiberwise Hermitian product
$\circwick$, called the fiberwise Wick product. The following
explicit form of $\circwick$, whose verification is routine, will be
important:
\begin{lemma}
\label{lemma:FiberwiseWick}%
For the fiberwise Wick product $\circwick$ built out of $\pi = r + \I
s$ with a Kähler structure $s$ one has
\begin{equation}
\label{eq:FiberwiseWick}
a \circwick b
=
\mu \circ
\E^{2 t g^{k\ell} \inss(Z_k) \tensor \inss(\cc{Z}_\ell)}
(a \tensor b),
\end{equation}
where $g$ is the matrix from \eqref{eq:complexMatrixg}.
\end{lemma}
The first important observation is that the scalar matrix $g$ can be
viewed as an element of $\Mat_n(\algebra{A})$ for any unital
$^*$-algebra $\algebra{A}$. Then we have the following positivity property:
\begin{lemma}
\label{lemma:gIsReallyPositive}%
Let $\algebra{A}$ be a unital $^*$-algebra over $\ring{C}$. Then
for all $m \in \mathbb{N}$ and for all
$a_{k_1 \ldots k_m} \in \algebra{A}$ with
$k_1, \ldots, k_m = 1, \ldots, n$ we have
\begin{equation}
\label{eq:TheTruePositivityNeeded}
\sum_{k_1, \ell_1, \ldots, k_m, \ell_m = 1}^n
g^{k_1\ell_1} \cdots g^{k_m\ell_m}
a_{k_1 \ldots k_m}^* a_{\ell_1 \ldots \ell_m}
\in
\algebra{A}^+.
\end{equation}
\end{lemma}
\begin{proof}
First we note that
$g^{\tensor m} = g \tensor \cdots \tensor g \in \Mat_n(\ring{C})
\tensor \cdots \tensor \Mat_n(\ring{C}) = \Mat_{n^m}(\ring{C})$
still satisfies the positivity property
\[
\sum_{k_1, \ell_1, \ldots, k_m, \ell_m = 1}^n
g^{k_1\ell_1} \cdots g^{k_m\ell_m}
\cc{z^{(1)}}_{k_1} \cdots \cc{z^{(m)}}_{k_m}
z^{(1)}_{\ell_1} \cdots z^{(m)}_{\ell_m}
\ge
0
\]
for all $z^{(1)}, \ldots, z^{(m)} \in \ring{C}^n$ as the left hand
side clearly factorizes into $m$ copies of the left hand side of
\eqref{eq:gPositive}. Hence
$g^{\tensor m} \in \Mat_{n^m}(\ring{C})$ is a positive
element. For a given positive linear functional
$\omega\colon \algebra{A} \longrightarrow \ring{C}$ and
$b_1, \ldots, b_N \in \algebra{A}$ we consider the matrix
$(\omega(b_i^*b_j)) \in \Mat_N(\ring{C})$. We claim that this
matrix is positive, too. Indeed, with the criterion from
\cite[App.~A]{bursztyn.waldmann:2001a} we have for all
$z_1, \ldots, z_N \in \ring{C}$
\[
\sum_{i,j = 1}^N \cc{z}_i \omega(b_i^*b_j) z_j
=
\omega\left(
\left(
\sum_{i=1}^N z_i b_i
\right)^*
\left(
\sum_{j=1}^N z_j b_j
\right)
\right)
\ge 0
\]
and hence $(\omega(b_i^*b_j))$ is positive. Putting these
statements together we see that for every positive linear
functional $\omega\colon \algebra{A} \longrightarrow \ring{C}$ we
have for the matrix
$\Omega = (\omega(a_{k_1 \ldots k_m}^* a_{\ell_1 \ldots \ell_m}))
\in \Mat_{n^m}(\ring{C})$
\begin{align*}
\omega\left(
\sum_{k_1, \ell_1, \ldots, k_m, \ell_m = 1}^n
g^{k_1\ell_1} \cdots g^{k_m\ell_m}
a_{k_1 \ldots k_m}^* a_{\ell_1 \ldots \ell_m}
\right)
&=
\sum_{k_1, \ell_1, \ldots, k_m, \ell_m = 1}^n
g^{k_1\ell_1} \cdots g^{k_m\ell_m}
\omega\left(
a_{k_1 \ldots k_m}^* a_{\ell_1 \ldots \ell_m}
\right) \\
&=
\tr(g^{\tensor m} \Omega)
\ge
0,
\end{align*}
since the trace of the product of two positive matrices is
positive by \cite[App.~A]{bursztyn.waldmann:2001a}. Note that for
a \emph{ring} $\ring{R}$ one has to use this slightly more
complicated argumentation: for a field one could use the
diagonalization of $g$ instead. By definition of $\algebra{A}^+$,
this shows the positivity of \eqref{eq:TheTruePositivityNeeded}.
\end{proof}
\begin{remark}
\label{remark:Diagonalg}%
Suppose that in addition $g = \diag(\lambda_1, \ldots, \lambda_n)$
is diagonal with positive $\lambda_1, \ldots, \lambda_n > 0$. In
this case one can directly see that the left hand side of
\eqref{eq:TheTruePositivityNeeded} is a convex combination of
squares and hence in $\algebra{A}^{++}$. This situation can often
be achieved, e.g. for $\ring{R} = \mathbb{R}$.
\end{remark}
We come now to the main theorem of this section: unlike the Weyl-type
deformation, using the fiberwise Wick product yields a positive
deformation in a universal way:
\begin{theorem}
\label{theorem:WickIsPositive}%
Let $\algebra{A}$ be a unital $^*$-algebra over
$\ring{C} = \ring{R}(\I)$ with a $^*$-action of $\lie{g}$ and let
$\Omega = \Omega^* \in \Anti^2 \lie{g}_{\ring{C}}^*[[t]]$ be a formal
series of Hermitian $\delta_\CE$-closed two-forms. Moreover, let
$s$ be a Kähler structure for the non-degenerate $r$-matrix
$r \in \Anti^2\lie{g}$ and consider the fiberwise Wick product
$\circwick$ yielding the Hermitian deformation $\starwick$ as in
Proposition~\ref{proposition:HermitianFedosov}.
\begin{theoremlist}
\item \label{item:astara} For all $a \in \algebra{A}$ we have
\begin{equation}
\label{eq:astara}
a^* \starwick a
=
\sum_{m=0}^\infty \frac{(2 t )^m}{m!}
\sum_{k_1, \ldots, k_m, \ell_1, \ldots, \ell_m = 1}^n
g^{k_1\ell_1} \cdots g^{k_m\ell_m}
a_{k_1\ldots k_m}^* a_{\ell_1 \ldots \ell_m},
\end{equation}
where
$a_{k_1\ldots k_m} = \sigma\left( \inss(\cc{Z}_{k_1}) \cdots
\inss(\cc{Z}_{k_m}) \tauwick(a) \right)$.
\item \label{item:starwickPositive} The deformation $\starwick$ is
strongly positive.
\end{theoremlist}
\end{theorem}
\begin{proof}
From Lemma~\ref{lemma:FiberwiseWick} we immediately obtain
\eqref{eq:astara}. Now let
$\omega\colon \algebra{A} \longrightarrow \ring{C}$ be
positive. Then also the $\ring{C}[[ t ]]$-linear extension
$\omega\colon \algebra{A}[[ t ]] \longrightarrow
\ring{C}[[ t ]]$
is positive with respect to the undeformed product: this is a
simple consequence of the Cauchy-Schwarz inequality for
$\omega$. Then we apply Lemma~\ref{lemma:gIsReallyPositive} to
conclude that $\omega(a^* \starwick a) \ge 0$.
\end{proof}
\begin{corollary}
\label{corollary:PositiveTwist}%
The Wick-type twist $\twist{F}_{\mathrm{\scriptscriptstyle Wick}}$
in the Kähler situation is a convex series of positive elements.
\end{corollary}
\begin{remark}[Positive twist]
\label{remark:HermitianPositiveWhatever}%
Note that already for a Hermitian deformation, the twist
$\twist{F} = 1 \star 1 = 1^* \star 1$ constructed as above is a
\emph{positive} element of the deformed algebra
$\Tensor^\bullet(\ueac{\lie{g}})[[ t ]]$. However, this alone does
not yet seem very significant: it is the statement of
Corollary~\ref{corollary:PositiveTwist} and
Theorem~\ref{theorem:WickIsPositive} which gives the additional
and important feature of the corresponding universal deformation
formula.
\end{remark}
\section{Introduction}
\label{sec:intriduction}
\IEEEPARstart{H}{igh} level computer vision applications, ranging from video surveillance and monitoring to intelligent vehicles, utilize visible spectrum information. However, under certain environmental conditions, this type of sensing can be severely impaired. This raises the necessity for imaging beyond the visible spectrum, exploiting thermal sensors,
which are equally applicable to both day and night scenarios, while at the same time being less affected by illumination changes.
However, thermal sensors present their own unique challenges. First, they have a low signal-to-noise ratio (SNR), implying the existence of noisy data. Second, they lack color and texture information, which deteriorates visual content interpretation \cite{wang_improved_2010}. Third, objects are not thermally homogeneous and are exposed to variations of sun illumination
\cite{davis_background-subtraction_2007}. All these issues complicate pixel modeling, especially when applied to object categorization and foreground/background detection.
For many high level applications, whether they use visual-optical videos \cite{porikli_achieving_2006, tuzel_pedestrian_2008}
or thermal data
\cite{wang_improved_2010, jungling_feature_2009, yadav2016combined, sharma2016fisher}, the task of background subtraction constitutes a key component for locating moving objects \cite{bouwmans2014background}. The most common approach to model the background is to use mixtures of Gaussians, the number of which is assumed to be a priori known. While such an assumption is valid for sensors capturing the visible spectrum, mainly due to their ultra high resolution accuracy, that is, high SNR, it is inappropriate for thermal data. Selecting a large number of components results in modeling the noise and therefore reduces discrimination performance. On the contrary, a low number of components yields approximate modeling that fails to capture the complexity of a scene. Consequently, methods that automatically estimate the most suitable number of components to fit the statistics of thermal data are important towards an efficient background subtraction scheme.
Furthermore, to increase the penetration of thermal sensors into the surveillance market, embedded
acceleration methods are needed. This means that the background subtraction algorithms should be re-designed to be implemented on low power devices. The benefits of this are twofold. First, a significant computational load is offloaded near the source of the data,
and thus bandwidth is saved, as only the region of interest (or an event) is transmitted. Second, costly operations are executed on low-power embedded hardware, saving valuable resources.
\subsection{Related Work}
\label{ssec:related work}
Background subtraction techniques applied on visual-optical videos model the color properties of objects \cite{herrero_background_2009}
and can be classified into three main categories \cite{el_baf_fuzzy_2009}: {\it basic background modeling},
{\it statistical background modeling}
and {\it background estimation}.
The most used methods are the statistical ones due to their robustness to critical application scenarios.
One common approach for the statistical modeling of visual content is based on the exploitation of Gaussian Mixture Models (GMMs). In the work of \cite{greenspan2004probabilistic} GMMs are utilized to create space-time representations at the video segment level. However, the specific task of background subtraction requires the estimation of a pixel-level background model.
Towards this direction, the work of Stauffer and Grimson \cite{stauffer_adaptive_1999} is one of the best known approaches. It uses a GMM with a fixed number of components to estimate per-pixel density. The work of \cite{makantasis_student-t_2012} proposes a Student-t mixture model improving compactness and robustness to noise and outliers. The work of \cite{haque2008stable} proposes a background subtraction algorithm based on GMMs with only one user-tunable parameter. In \cite{lee2005effective} a GMM-based background modeling that incorporates incremental EM type of learning is proposed in order to improve convergence speed and stability of learnt models. These works assume that the number of components is a priori known.
Following this assumption, the intensities of all pixels are represented using GMMs, all with the same number of components. However, in many real world applications this assumption can be very restrictive, because the intensities of different pixels may need a different number of components to be accurately modeled.
The works of \cite{zivkovic_improved_2004} and \cite{zivkovic_efficient_2006} extend the method of \cite{stauffer_adaptive_1999} by introducing a user defined threshold to estimate the number of components. However, this rule is application dependent and not directly derived from the data.
Another extension of \cite{stauffer_adaptive_1999} is proposed in \cite{chan2011generalized}, where a patch-wise background model is estimated using dynamic texture. However, patch level decisions may produce very coarse results when low resolution videos, such as the thermal ones, need to be processed.
An alternative approach that makes no a priori assumptions on the number of components is presented in the work of \cite{han2004sequential}. This work proposes a recursive density approximation method that relies on the propagation of density modes, which are detected by using the mean shift. Although mean shift is a powerful nonparametric technique, it is computationally intensive and thus not suitable for low power hardware implementations.
The work of \cite{haines_background_2014} proposes the exploitation of a Dirichlet Process Mixture Model (DPMM). This method automatically estimates the number of components by utilizing sampling techniques. However, sampling algorithms are computationally intensive and memory inefficient, and thus inappropriate for real-time applications such as the ones we focus on. To address this problem, the authors of \cite{haines_background_2014} suggest a GPU implementation.
Another approach for modeling data distributions using GMMs is presented in \cite{priebe1994adaptive}. This work focuses on the general problem of density estimation. Based on the property of GMMs to approximate any distribution arbitrarily closely, it estimates the true distribution by creating a sufficiently large number of components, which may be very ``close''. For the density estimation problem, creating such components is not an issue since it may increase approximation accuracy. However, when one needs to design and develop an algorithm for low power hardware devices, as in our case, this algorithm should keep as few parameters as possible in memory.
Finally, when someone knows some important features of foreground and/or background objects, supervised learning techniques can be utilized. For example, in the work of \cite{ravichandran2012long} a supervised learning approach is proposed for discriminating fire from the background. However, in the general problem of background subtraction it is very uncommon to know specific features of the foreground and/or background in advance.
Techniques that use visual/optical data present the drawback that objects' properties are highly affected by scene illumination, making the same object look completely different under different lighting or weather conditions. Although thermal imagery can provide a challenging alternative for addressing this difficulty, there exist few works for thermal data.
The authors of \cite{davis_fusion-based_2005, davis_background-subtraction_2007} exploit contour saliency and a unimodal background modeling technique to extract foreground objects. However, unimodal models are not usually capable of capturing the background dynamics and its complexity. Baf \textit{et al.} in \cite{el_baf_fuzzy_2009} present a fuzzy statistical method for background subtraction to incorporate uncertainty into the GMM. Elguebaly and Bouguila in \cite{elguebaly_finite_2013} propose a finite asymmetric generalized Gaussian mixture model for object detection. However, both of these methods require a predefined maximum number of components, therefore presenting limitations when applied in uncontrolled environments.
Dai \textit{et al.} in \cite{dai_pedestrian_2007} propose a method for pedestrian detection and tracking using thermal imagery. This method consists of a background subtraction technique that exploits a two-layer representation (foreground/background) of frame sequences. However, they assume that the foreground is restricted to moving objects, a consideration which is not sufficient for dynamically changing environments. One way to handle the aforementioned difficulties is to introduce a background model, the parameters and the structure of which are directly estimated from the data, while at the same time taking into account the specific characteristics of thermal imagery.
The computational cost, and thus the performance, of a background subtraction algorithm is always an issue, as it usually performs poorly on CPUs. One of the first attempts at real-time performance
was the work of \cite{wren_pfinder:_1997}, implemented on an SGI O2 computer. Since then, many GPU implementations have been proposed. In \cite{carr2008gpu} and \cite{pham2010gpu} implementations based on the model of \cite{stauffer_adaptive_1999} achieving real-time performance even for High-Definition (HD) resolutions are presented. In the work of \cite{zhang2014gpu} the authors managed to accelerate the algorithm of \cite{zivkovic_improved_2004} up to 1080p resolution at 60fps. However, GPUs cannot be considered low power devices, which can be seen as a disadvantage especially for 24/7 surveillance systems.
This gap is addressed by Field Programmable Gate Array (FPGA) accelerators. In the work of \cite{kristensen2008embedded} and \cite{jiang2009hardware}, a real-time video surveillance system using a GMM, which also handles memory bandwidth reduction requirements, is proposed. Other approaches such as the work of \cite{genovese2013fpga} propose accelerators of the GMM algorithm in reconfigurable hardware reaching 24fps for HD video. The same authors claim even better performance of 91fps in HD video in their improved work of \cite{genovese2014asic} if a Xilinx Virtex 4 device is used. However, the main limitation to achieving this performance is the memory bandwidth, which becomes the main bottleneck in the pipeline. The requested bandwidth for this performance is about 8GB/sec, whereas FPGA boards usually hold 64bit-wide Dynamic Random Access Memory (DRAM) clocked in the range of 100-200 MHz. As a result, the memory subsystem can support a bandwidth at least one order of magnitude lower. This means that we need technologies for reducing memory requirements in case the background subtraction algorithm is to be implemented on reconfigurable hardware architectures.
\subsection{Our Contribution}
\label{sec:our contribution}
The main contribution of this paper is the design of a background subtraction system (as a whole) that is completely data driven, takes into account the specific characteristics of thermal imagery and is suitable for implementation in low power and low memory hardware.
This work extends our previous works in \cite{makantasis2015variational, nikitakis2016novel}. Our method exploits GMMs with unknown number of components, which are dynamically estimated directly from the data.
In particular, the Variational Inference framework is adopted to associate the functional structure of the model with real data obtained from thermal images. Variational Inference belongs to the class of probability approximation methods, which try to estimate the approximate posterior distribution by minimizing the KL-divergence between the approximate and the true posteriors. As it has been shown in \cite{bernardo2003variational} and \cite{teschendorff2005variational}, when the number of samples tends to infinity the lower variational bound approaches the BIC criterion for model selection. When someone targets low power hardware devices, and thus must keep as few parameters as possible in memory, this feature is very important, since through Variational Inference the true distribution is approximated without over/underfitting (it creates the ``right'' number of components).
The adopted approach, instead of treating the mixing coefficients of the components as single parameters, considers them as probability distributions. Under this framework, we need to estimate forms of probability distributions that best fit data properties, instead of fitting an a priori known number of components to the captured data. Then, the Expectation-Maximization (EM) algorithm is adopted to estimate model parameters. To compensate for computational challenges
we utilize conjugate priors for deriving analytical equations for model estimation.
Updating procedures are incorporated to allow dynamic model adaptation. Our updating method avoids the use of accumulated data from previous time instances, resulting in low memory requirements. Such a scheme assists the implementation of an in-camera module suitable for devices of low power and memory demands.
This paper is organized as follows: Section {\ref {sec:variational inference for gaussian mixture modeling}} introduces the Variational Inference framework,
while Section {\ref {sec:derivation of random variables distribution}} describes the algorithm for optimally estimating the model parameters. In Section {\ref {sec:random variables optimization}}, we present the EM optimization that best fits model parameters to the data.
A threshold independent on-line updating algorithm is introduced in Section {\ref {Sec: updating}}.
The in-camera reconfigurable architecture is discussed in Section {\ref {sec:in-camera acceleration architecture}}, while experimental results are presented in Section {\ref {sec:experimental validation}}. Finally, Section {\ref {sec: conclusions}} draws the conclusions of the paper.
\section{Variational Inference for Gaussian Mixture Modeling}
\label{sec:variational inference for gaussian mixture modeling}
\subsection{Gaussian Mixture Model Fundamentals}
\label{ssec:gaussian mixture model fundamentals}
The Gaussian mixture distribution has the following form:
\begin{equation}
p(x|\bm \varpi, \bm \mu, \bm \tau) = \sum_{\bm z} p(\bm z | \bm \varpi) p(x| \bm z, \bm \mu, \bm \tau),
\label{eq:gaussian_mm_z}
\end{equation}
where $p(\bm z|\bm \varpi)$ and $p(x|\bm z, \bm \mu, \bm \tau)$ are in the form of
\begin{equation}
p(\bm z | \bm \varpi) = \prod_{k=1}^{K} \varpi_k^{z_k},
\label{eq:p_z}
\end{equation}
\begin{equation}
p(x|\bm z, \bm \mu, \bm \tau) = \prod_{k=1}^{K} \mathcal{N}(x|\mu_k, \tau_k^{-1})^{z_k}.
\label{eq:p_x}
\end{equation}
In Eq.(\ref{eq:p_z}) and Eq.(\ref{eq:p_x}), $\mathcal N(\cdot)$ represents the Gaussian distribution, $K$ is the number of components, variables $\{\varpi_k\}_{k=1}^K$ refer to the mixing coefficients that represent the proportion of data that belong to each component and which satisfy $0 \leq \varpi_k \leq 1$ and $\sum_{k=1}^K \varpi_k = 1$. Variable $x$ corresponds to the intensity of a pixel (i.e., the observed variable) and $\{\mu_k\}_{k=1}^K$, $\{\tau_k\}_{k=1}^K$ stand for the mean values and precisions of the Gaussian components respectively. The $K$-dimensional vector $\bm z=[z_1, ..., z_K]$ is a binary latent variable in which a particular element is equal to one and all other elements are equal to zero, so that $\sum_{k=1}^K z_k= 1$ and $p(z_k=1) = \varpi_k$. This vector is related to the number of components that are used for modeling pixel intensities. In the work of \cite{stauffer_adaptive_1999} the value of $K$ is assumed to be a priori known, while in our case, this value is estimated directly from the data.
Eq.(\ref{eq:gaussian_mm_z}) models the effect of one sample. Given a set $\bm X = \{x_1,...,x_N\}$ of $N$ pixel intensities (i.e., observed data), we obtain a set of $N$ latent variables, $\bm Z = \{\bm z_1,...,\bm z_N \}$. Each $\bm z_n$ is a $K$-dimensional binary vector with one element equal to one and all the others equal to zero, so that $\sum_{k=1}^K z_{nk}= 1$. Then, Eq.(\ref{eq:p_z}) and Eq.(\ref{eq:p_x}) are transformed to
\begin{equation}
p(\bm Z | \bm \varpi) = \prod_{n=1}^{N} \prod_{k=1}^{K} \varpi_k^{z_{nk}},
\label{eq:p_Z}
\end{equation}
\begin{equation}
p(\bm X|\bm Z, \bm \mu, \bm \tau) = \prod_{n=1}^{N} \prod_{k=1}^{K} \mathcal{N}(x_n|\mu_k, \tau_k^{-1})^{z_{nk}}.
\label{eq:p_X}
\end{equation}
The goal is to estimate a background model exploiting pixel intensities, that is, to calculate the values of $\bm \varpi$, $\bm \mu$ and $\bm \tau$, involved in the probability $p(x|\bm \varpi, \bm \mu, \bm \tau)$.
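To make the model concrete, the following minimal Python sketch evaluates the mixture density of Eq.(\ref{eq:gaussian_mm_z}) for a few pixel intensities; the parameter values are hypothetical and serve only as an illustration, since the goal of the subsequent sections is to estimate them directly from the data.
\begin{verbatim}
import numpy as np

def gmm_density(x, varpi, mu, tau):
    """Mixture density p(x) = sum_k varpi_k N(x | mu_k, tau_k^{-1})."""
    x = np.atleast_1d(np.asarray(x, dtype=float))[:, None]  # (N, 1)
    norm = np.sqrt(tau / (2.0 * np.pi))          # component normalizers
    comp = norm * np.exp(-0.5 * tau * (x - mu) ** 2)
    return comp @ varpi                          # weighted sum over k

# Hypothetical two-component background model for a single pixel.
varpi = np.array([0.7, 0.3])             # mixing coefficients, sum to 1
mu    = np.array([80.0, 150.0])          # mean intensities
tau   = np.array([1 / 25.0, 1 / 100.0])  # precisions (1 / variance)

print(gmm_density([78, 120, 160], varpi, mu, tau))
\end{verbatim}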
\subsection{Distribution Approximation through Variational Inference}
\label{ssec:distribution approximation through variational inference}
In case that variable $K$ of a GMM is a priori known, the values of $\bm \varpi$, $\bm \mu$ and $\bm \tau$ can be straightforwardly calculated using the methods of \cite{stauffer_adaptive_1999,zivkovic_improved_2004}, which exploit the k-means algorithm. For many real-life application scenarios, such as the one this paper targets, it is better to let variable $K$ fit the statistics of the data (i.e., let variable $K$ be unknown). In such cases, one way to estimate $K$ is to apply computationally expensive methods through sampling algorithms or to build several models with different numbers of components and then select the best one. An alternative computationally efficient approach, adopted in this paper, is to exploit the Variational Inference framework. More specifically, instead of treating the mixing coefficients $\bm \varpi$ as single parameters, which requires the knowledge of $K$, we treat them as probability distributions. This way, we are able to estimate the coefficients $\bm \varpi$ independently from $K$. Such an approach keeps computational complexity low since it avoids sampling methods or experimentation with different numbers of components.
Let us denote as $\bm Y=\{\bm Z, \bm \varpi, \bm \mu, \bm \tau\}$ a set which contains model parameters and the respective latent variables. Let us also denote as $q(\bm Y)$ the variational distribution of $\bm Y$. Our objective is to estimate $q(\bm Y)$ to be as close as possible to $p(\bm Y | \bm X)$ for a given observation $\bm X$. As a measure of similarity between two distributions, the Kullback-Leibler divergence,
\begin{equation}
\label{eq:kullback-leibler}
KL(q||p) = \int q(\bm Y) \ln \frac{q(\bm Y)} {p(\bm Y | \bm X)} d\bm Y,
\end{equation}
is used. $KL(q||p)$ has to be minimized since it is a non-negative quantity, which equals zero only if $q(\bm Y)=p(\bm Y | \bm X)$.
In the context of the most common type of Variational Inference, known as \textit{mean-field approximation}, the variational distribution is assumed to be factorized over $M$ disjoint sets such as $q(\bm Y) = \prod_{i=1}^{M}q_i(\bm Y_i)$. Then, as shown in \cite{bishop_pattern_2007}, the optimal solution $q_j^*(Y_j)$ that minimizes $KL(q||p)$ metric is given by
\begin{equation}
\ln q_j^*(\bm Y_j) = \mathbb{E}_{i \neq j} [ \ln p(\bm X, \bm Y) ] + \mathcal{C},
\label{eq:q_star}
\end{equation}
where $\mathbb{E}_{i \neq j} [ \ln p(\bm X, \bm Y) ]$ is the expectation of the logarithm of the joint distribution over all variables that do not belong to the $j^{th}$ partition and $\mathcal{C}$ is a constant. Eq.(\ref{eq:q_star}) indicates the presence of circular dependencies between the variables that belong to different partitions. Thus, estimating the optimal distribution over all variables suggests the exploitation of an iterative process such as the EM algorithm (see Section \ref{sec:random variables optimization}).
\section{Optimal distributions over model parameters}
\label{sec:derivation of random variables distribution}
In this section, we present the analytical form for the optimal distributions $q_j^*(Y_j)$, considering the model coefficients and the latent variables; i.e., $q_Z^*(\bm Z)$, $q_{\varpi}^*(\bm \varpi)$, $q_{\tau}^*(\bm \tau)$ and $q_{\mu|\tau}^*(\bm \mu|\bm \tau)$. For simplifying the notation, in the following the superscript of optimal distributions and the subscript for the $j^{th}$ partition are omitted.
\subsection{Factorized Form of the Joint Distribution}
To estimate $q(\bm Y)$, we need to rewrite the right-hand side of Eq.(\ref{eq:q_star}), that is, the joint distribution $p(\bm X, \bm Y)$, as a product
\begin{equation}
p(\bm X, \bm Y) = p(\bm X|\bm Z, \bm \mu, \bm \tau) p(\bm Z|\bm \varpi) p(\bm \varpi) p(\bm \mu,\bm \tau).
\label{eq:joint_factorization}
\end{equation}
The distributions $p(\bm X|\bm Z, \bm \mu, \bm \tau)$ and $p(\bm Z|\bm \varpi)$ are already known from Eq.(\ref{eq:p_X}) and Eq.(\ref{eq:p_Z}). Thus, we need to define the prior distribution $p(\bm \varpi)$ and the joint distribution $p(\bm \mu , \bm \tau)$. In this paper, conjugate priors
are adopted to estimate the distributions $p(\bm \varpi)$ and $p(\bm \mu , \bm \tau)$. Such an approach is computationally efficient since it avoids the implementation of expensive sampling methods and yields analytical solutions.
We start our analysis with the prior distribution $p(\bm \varpi)$. In particular, since $p(\bm Z | \bm \varpi)$ has the form of a multinomial distribution [see Eq.(\ref{eq:p_Z})], its conjugate prior is given by
\begin{equation}
p(\bm \varpi) = \frac{\Gamma(K\lambda_0)}{\Gamma(\lambda_0)^K} \prod_{k=1}^{K}\varpi_k^{\lambda_0-1}.
\label{eq:varpi_prior}
\end{equation}
Eq.(\ref{eq:varpi_prior}) is a Dirichlet distribution
where $\Gamma(\cdot)$ stands for the Gamma function and the scalar $\lambda_0$ is a control parameter. The smaller the value of $\lambda_0$, the larger the influence of the data, rather than the prior, on the posterior distribution $p(\bm Z|\bm \varpi)$. The choice of setting the parameter $\lambda_0$ as a scalar, instead of a vector of different values for each mixing coefficient, is due to the fact that we adopt an uninformative prior framework that does not prefer a specific component over the others.
Similarly, $p(\bm \mu, \bm \tau)$ is the prior of $p(\bm X|\bm Z, \bm \mu, \bm \tau)$ which is modeled through Eq. (\ref{eq:p_X}). The conjugate prior of (\ref{eq:p_X}) takes the form of a Gaussian-Gamma distribution
since both $\bm \mu$ and $\bm \tau$ are unknown. Subsequently, the joint distribution $p(\bm \mu, \bm \tau)$ can be modeled as
\begin{subequations}
\begin{align}
p(\bm \mu, \bm \tau & ) = p(\bm \mu | \bm \tau)p(\bm \tau) \\
& = \prod_{k=1}^{K} \mathcal{N}(\mu_k|m_0,(\beta_0\tau_k)^{-1})Gam(\tau_k|a_0,b_0),
\end{align}
\label{eq:joint_mu_tau_prior}
\end{subequations}
where $Gam(\cdot)$ denotes the Gamma distribution. Again, an uninformative prior framework is adopted meaning that no specific preference about the form of the Gaussian components is given. The parameters $m_0$, $\beta_0$, $a_0$ and $b_0$ are discussed in Section \ref{priors}.
In the following, the forms of optimal variational distributions are presented using the results from Appendix \ref{ap:appendix}.
\subsection{Optimal $q^*(\bm Z)$ Distribution}
Using Eq.(\ref{eq:q_star}) and the factorized form of Eq.(\ref{eq:joint_factorization}), the distribution of the optimized factor $q^*(\bm Z)$ is given by a Multinomial distribution of the form
\begin{subequations}
\begin{align}
& q^*(\bm Z) = \prod_{n=1}^{N}\prod_{k=1}^{K}\bigg(\frac{\rho_{nk}}{\sum_{j=1}^{K}\rho_{nj}}\bigg)^{z_{nk}} = \\
& \:\:\:\:\:\:\:\:\:\:\:\:= \prod_{n=1}^{N}\prod_{k=1}^{K} r_{nk}^{z_{nk}},
\label{eq:q_Z_optimized_dirichlet_b}
\end{align}
\label{eq:q_Z_optimized_dirichlet}
\end{subequations}
where quantity $\rho_{nk}$ is given as
\begin{equation}
\begin{aligned}
\rho_{nk} = \exp\bigg(& \mathbb{E}\big[\ln \varpi_k \big] + \frac{1}{2}\mathbb{E}\big[\ln \tau_k\big] -\frac{1}{2}\ln2\pi - \\ &-\frac{1}{2} \mathbb{E}_{\bm \mu, \bm \tau}\big[(x_n-\mu_k)^2\tau_k \big]\bigg).
\end{aligned}
\label{eq:pho_nk}
\end{equation}
Since $q^*(\bm Z)$ is a Multinomial distribution, its expected value $\mathbb{E}[z_{nk}]$ equals $r_{nk}$.
\subsection{Optimal $q^*(\bm \varpi)$ Distribution}
\label{opt}
Using Eq.(\ref{eq:joint_factorization}) and Eq.(\ref{eq:q_star}) the variational distribution of the optimized factor $q^*(\bm \varpi)$ is given by the Dirichlet distribution
\begin{equation}
q^*(\bm \varpi) = \frac{\Gamma(\sum_{i=1}^{K}\lambda_i)}{\prod_{j=1}^{K}\Gamma(\lambda_j)} \prod_{k=1}^{K}\varpi_k^{\lambda_k - 1}.
\label{eq:q_star_varpi_derivation}
\end{equation}
Variable $\lambda_k$ is equal to $N_k + \lambda_0$, where $N_k=\sum_{n=1}^{N}r_{nk}$ represents the effective number of data points associated with the $k$-th component.
\subsection{Optimal $q^*(\mu_k | \tau_k)$ Distribution}
Similarly, the variational distribution of the optimized factor $q^*(\mu_k, \tau_k)$ is given by a Gaussian distribution of the form
\begin{equation}
q^*(\mu_k|\tau_k) = \mathcal{N}(\mu_k|m_k, (\beta_k \tau_k)^{-1}),
\label{eq:q_star_mu_distribution}
\end{equation}
where the parameters $m_k$ and $\beta_k$ are given by
\begin{subequations}
\begin{align}
\beta_k & = \beta_0 + N_k, \\
m_k & = \frac{1}{\beta_k}\Big(\beta_0 m_0 + N_k \bar x_k\Big).
\end{align}
\label{eq:q_star_mk_tauk}
\end{subequations}
Variable $\bar x_k$ is equal to $\frac{1}{N_k}\sum_{n=1}^{N}r_{nk}x_n$ and represents the centroid of the data that belong to the $k$-th component.
\subsection{Optimal $q^*(\tau_k)$ Distribution}
After the estimation of $q^*(\mu_k|\tau_k)$, the variational distribution of the optimized factor $q^*(\tau_k)$ is given by a Gamma distribution of the following form
\begin{equation}
q^*(\tau_k) = Gam(\tau_k|a_k, b_k),
\label{eq:q_star_tau_distribution}
\end{equation}
while the parameters $a_k$ and $b_k$ are given as
\begin{subequations}
\begin{align}
a_k & = a_0 + \frac{N_k}{2}
\label{eq:q_star_tauk_a},\\
b_k & = b_0 + \frac{1}{2}\bigg(N_k\sigma_k + \frac{\beta_0 N_k}{\beta_0 + N_k}\big(\bar x_k - m_0\big)^2 \bigg),
\label{eq:q_star_tauk_b}
\end{align}
\label{eq:q_star_tauk}
\end{subequations}
where $\sigma_k = \frac{1}{N_k}\sum_{n=1}^{N}r_{nk}(x_n-\bar x_k)^2$.
\section{Distribution Parameters Optimization}
\label{sec:random variables optimization}
In Section \ref{sec:derivation of random variables distribution}, we derive approximations of the random variable distributions. While the works of \cite{stauffer_adaptive_1999} and \cite{zivkovic_improved_2004} adopt the k-means algorithm to approximately estimate the parameters of the background model, in this work, the EM algorithm is employed to optimally estimate the coefficient distributions that best fit the observations.
\subsection{The EM Optimization Framework}
\label{EM}
{\bf E-Step:} Let us assume the $t$-th iteration of the EM optimization algorithm. Then, during the E-step, only the value of $r_{nk}$ is readjusted according to the statistics of the currently available observed data. Variable $r_{nk}$ actually expresses the degree of fitness of the $n$-th datum to the $k$-th cluster, as derived from Eq.(\ref{eq:q_Z_optimized_dirichlet}). Due to the fact that $q^*(\bm \varpi)$ is a Dirichlet distribution and $q^*(\tau_k)$ is a Gamma distribution, the following holds
\begin{subequations}
\begin{align}
& \ln \tilde{\tau_k}(t) \equiv \mathbb{E}\big[\ln \tau_k(t)\big] = \Psi(a_k(t)) - \ln b_k(t), \\
& \ln \tilde{\varpi_k}(t) \equiv \mathbb{E}\big[\ln \varpi_k(t)\big] = \Psi(\lambda_k(t)) - \Psi\bigg(\sum_{k=1}^{K} \lambda_k(t)\bigg), \\
& \mathbb{E}\big[\tau_k(t)]\big] = \frac{a_k(t)}{b_k(t)},
\end{align}
\label{eq:tilde_pi_tilde_tau}
\end{subequations}
where $\Psi(\cdot)$ is the digamma function. In Eq.(\ref{eq:tilde_pi_tilde_tau}), we set $\ln \tilde{\tau_k}(t) \equiv \mathbb{E}\big[\ln \tau_k(t)\big]$ and $\ln \tilde{\varpi_k}(t) \equiv \mathbb{E}\big[\ln \varpi_k(t)\big]$ to simplify the notation of the following equations. Then,
\begin{equation}
\begin{aligned}
r_{nk}(t+1) & \propto \tilde{\varpi_k}(t) \tilde{\tau_k}(t)^{1/2} \\ & \exp \bigg(-\frac{a_k(t)}{2b_k(t)}\big(x_n-m_k(t)\big)^2 - \frac{1}{2\beta_k(t)} \bigg) \end{aligned}
\label{eq:r_nk}
\end{equation}
by substituting Eq.(\ref{eq:tilde_pi_tilde_tau}) into Eq.(\ref{eq:pho_nk}) and using Eq.(\ref{eq:q_Z_optimized_dirichlet}). In Eq.(\ref{eq:r_nk}), $r_{nk}(t+1)$ expresses the degree of fitness of the $n$-th datum to the $k$-th cluster at the next iteration $t+1$ of the algorithm.
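For illustration, a minimal NumPy sketch of this E-step is given below (a sketch under our own naming conventions, not a reference implementation); the constant terms that cancel during normalization are omitted.
\begin{verbatim}
import numpy as np
from scipy.special import digamma

def e_step(x, lam, m, beta, a, b):
    """Responsibilities r_nk of Eq. (r_nk) for intensities x, shape (N,)."""
    ln_pi  = digamma(lam) - digamma(lam.sum())   # E[ln varpi_k]
    ln_tau = digamma(a) - np.log(b)              # E[ln tau_k]
    e_tau  = a / b                               # E[tau_k]
    diff2  = (x[:, None] - m[None, :]) ** 2      # (N, K)
    ln_rho = ln_pi + 0.5 * ln_tau - 0.5 * e_tau * diff2 - 0.5 / beta
    ln_rho -= ln_rho.max(axis=1, keepdims=True)  # numerical stability
    r = np.exp(ln_rho)
    return r / r.sum(axis=1, keepdims=True)      # normalize over k
\end{verbatim}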
{\bf M-Step:} During the M-step, we keep the value of $r_{nk}(t+1)$ fixed, as it has been calculated through the E-step. Then, we update the values of the background model coefficients, which will allow us to re-estimate the degree of fitness $r_{nk}$ at the next iteration stage, exploiting Eq.(\ref{eq:r_nk}).
In particular, the variables $N_k(t+1)$ and $\lambda_k(t+1)$ are estimated first, based on the statements of Section \ref{opt} and the $r_{nk}(t+1)$ of Eq.(\ref{eq:r_nk}),
\begin{subequations}
\begin{align}
& N_k(t+1)=\sum_{n=1}^{N}r_{nk}(t+1),\\
& \lambda_k(t+1)=N_k(t+1) + \lambda_0.
\end{align}
\label{eq:N and Lambda}
\end{subequations}
These are the only variables that are needed for updating model parameters
using Eq.(\ref{eq:q_star_varpi_derivation}), Eq.(\ref{eq:q_star_mu_distribution}) and Eq.(\ref{eq:q_star_tau_distribution}).
The distribution $q^*(\bm \varpi(t+1))$ of the model coefficients is computed based on Eq.(\ref{eq:q_star_varpi_derivation}). The value $\lambda_0$ is given in Section \ref{priors}. We recall that in our approach, the number of components that the background content is composed of is not a priori known. For this reason, the mixing coefficients of the background model are treated as probability distributions and not as single parameters. Due to this fact, we can initialize the EM algorithm by setting the number of components to be smaller than or equal to a maximum value, coinciding with the number of observed data, that is, $K_{max}\leq N$. Then, the probability distributions of the coefficients re-arrange the number of components, in order to best fit the statistics of the observations. This is achieved through EM optimization.
Next, the parameters $a_k(t+1)$ and $b_k(t+1)$ are updated to define the Gamma distribution $q^*(\tau_k(t+1))$ that best fits the observations through Eq.(\ref{eq:q_star_tauk}). Again, the priors $a_0$, $b_0$ and $\beta_0$ are given in Section \ref{priors}.
Next, the distribution $q^*(\mu_k(t+1)| \tau_k(t+1))$ is updated exploiting $\tau_k(t+1)$. In order to do this, we need to update $\beta_k(t+1)$ and $m_k(t+1)$ based on Eq.(\ref{eq:q_star_mk_tauk}).
The E and M steps are repeated sequentially until the model parameters no longer change significantly. As shown in \cite{boyd_convex_2004}, convergence of the EM algorithm is guaranteed, because the bound is convex with respect to each of the factors $q(\bm Z)$, $q(\bm \varpi)$, $q(\bm \mu|\bm \tau)$ and $q(\bm \tau)$.
During model training, the mixing coefficients of some components take values very close to zero. Components with mixing coefficient less than $1/N$ are removed (we require each component to model at least one observed sample); thus, after training, the model has automatically determined the right number of Gaussian components.
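A companion sketch of the M-step, in the same hedged spirit (NumPy assumed; the updates mirror Eq.(\ref{eq:N and Lambda}) and the parameter updates derived in the Appendix; \texttt{r} is the $N\times K$ responsibility matrix returned by the E-step), is given below.
\begin{verbatim}
import numpy as np

def m_step(x, r, lam0, a0, b0, beta0, m0):
    Nk = r.sum(axis=0)                       # Eq. (N and Lambda)
    lam = Nk + lam0
    Nk_safe = np.maximum(Nk, 1e-12)          # guard empty components
    xbar = (r * x[:, None]).sum(axis=0) / Nk_safe
    Sk = (r * (x[:, None] - xbar) ** 2).sum(axis=0) / Nk_safe
    beta = beta0 + Nk                        # q*(mu_k | tau_k) scale
    m = (beta0 * m0 + Nk * xbar) / beta      # q*(mu_k | tau_k) mean
    a = a0 + Nk / 2.0                        # Gamma shape of q*(tau_k)
    b = b0 + 0.5 * (Nk * Sk
                    + beta0 * Nk * (xbar - m0) ** 2 / (beta0 + Nk))
    return lam, a, b, beta, m
\end{verbatim}
After convergence, components with $N_k/N < 1/N$, i.e., $N_k < 1$, are dropped, which implements the pruning rule described above.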
\subsection{Initialization Aspects}
\label{model}
The k-means++ algorithm \cite{arthur2007k} is exploited to initialize the EM algorithm at $t=0$. The k-means++ algorithm presents advantages compared to conventional k-means, since it is less dependent on initialization. It has to be mentioned that fuzzy versions of k-means are not suitable for the initialization process, since each sample should belong to exactly one cluster/component. The k-means++ creates an initial partition of the data used to initialize the EM algorithm. Then, at the updating stages of the algorithm (Section \ref{EM}), the probabilities of each observed datum belonging to one of the $K_{max}$ available clusters, expressed through $r_{nk}$, are updated. This way, the final number of clusters is dynamically refined according to the statistical distributions of the image pixel intensities.
Let us denote as $\hat N_k=N_k(t=0)$ the number of observations that belong to $k$-th cluster at iteration $t=0$. Then, an initial estimate of the mixing coefficients is $\varpi_k(t=0) = \hat N_k/N$, meaning that the significance of the $k$-th component is proportional to the number of data that belong to the $k$-th cluster. Thus, the initialization of $\lambda_k(t=0)=N\varpi_k(t=0)+\lambda_0$, [see Eq.(\ref{eq:q_star_varpi_derivation})] expresses the number of observations associated with each component.
The parameters $a_k(t=0)$, $b_k(t=0)$, $\beta_k(t=0)$ and $m_k(t=0)$ are initially estimated from Eq.(\ref{eq:q_star_tauk}) and Eq.(\ref{eq:q_star_mk_tauk}), considering the knowledge of the prior parameters as discussed in Section \ref{priors}. Finally, the model parameter $\tau_k(t=0)$ is initialized as the inverse of the variance of the data of the $k$-th initial cluster, that is, $\tau_k(t=0)=\hat v_k^{-1}(t=0)$.
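As a minimal sketch of this initialization (assuming scikit-learn's k-means++ seeding purely for concreteness; the helper names and the unit-variance fallback for singleton clusters are our assumptions), the quantities $\varpi_k(0)$, $\lambda_k(0)$, $m_k(0)$ and $\tau_k(0)$ can be obtained as follows.
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

def initialise(x, K_max, lam0=1.0):
    # k-means++ partition of the 1-D intensity history
    km = KMeans(n_clusters=K_max, init='k-means++', n_init=1)
    labels = km.fit_predict(x.reshape(-1, 1))
    Nk = np.bincount(labels, minlength=K_max).astype(float)
    w = Nk / len(x)                     # varpi_k(0) = N_k / N
    lam = len(x) * w + lam0             # lambda_k(0)
    m = km.cluster_centers_.ravel()     # m_k(0): cluster means
    v = np.array([x[labels == k].var() if Nk[k] > 1 else 1.0
                  for k in range(K_max)])
    tau = 1.0 / np.maximum(v, 1e-6)     # tau_k(0) = 1 / variance
    return w, lam, m, tau
\end{verbatim}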
\subsection{Priors Parameters}
\label{priors}
The parameter $\lambda_0$ in Eq.(\ref{eq:varpi_prior}) can be interpreted as the effective prior number of observations associated with each component. However, we do not have any prior information regarding this number. In order to use an uninformative prior and maximize the influence of the data over the posterior distribution we set $\lambda_0=1$, see \cite{yang1996catalog}.
Relations Eq.(\ref{eq:q_star_tauk}) and Eq. (\ref{eq:q_star_tauk_b}) suggest that the values of parameters $a_k$ and $b_k$ are primarily affected by the data and not by the prior, when the values of the parameters $a_0$ and $b_0$ are close to zero. For this reason we set $a_0$ and $b_0$ to a very small value ($10^{-3}$ in our implementation).
Similarly, we initialize $m_0$ as the mean value of the observed data and the precision as $\beta_0=\frac{b_0}{a_0 v_0}$, where $v_0$ is the variance of the observed data. We use uninformative priors, since we have no information regarding the number of components or their true mean and variance values.
\section{Online Updating Mechanism and Background Subtraction}
\label{Sec: updating}
Using the aforementioned approach, we fit a model to the background considering a pool of $N$ observed data. In this section, an adaptive strategy that is threshold-independent and memory-efficient is presented. Such an approach permits the implementation of the proposed algorithm on in-camera embedded hardware with low power and memory requirements. This way, we deliver thermal sensors embedded with the capability of detecting moving objects in real-time. Furthermore, by exploiting the updating mechanism, the presented system can process streams of frames online with a small computational overhead, and can therefore handle large data volumes.
Let us denote as $x_{new}$ a newly observed sample. Then, a decision is made as to whether $x_{new}$ can be approximated by our best fitted model or not. To this end, the best matched Gaussian component $c$ to $x_{new}$ is estimated by minimizing the Mahalanobis distance $D_k$,
\begin{equation}
c = \arg \min_k D_k = \arg \min_k \sqrt{(x_{new}-\mu_k)^2 \tau_k},
\label{eq:Mahalanobis}
\end{equation}
where $\mu_k$ and $\tau_k$ stand for the mean and precision of the $k$-th component. We use the Mahalanobis distance, since it is a reliable distance measure between a point and a distribution. Then, $x_{new}$ belongs to $c$ with probability
\begin{equation}
p(x_{new}|\mu_c, \tau_c) = \mathcal{N}(x_{new}|\mu_c, \tau_c^{-1}).
\label{eq:p_mix}
\end{equation}
\subsection{Threshold Independent}
Conventionally, Eq.(\ref{eq:p_mix}) requires a threshold to determine the probability limit over which the new sample $x_{new}$ belongs to $c$. To overcome threshold limitations, the following adaptive approach is adopted in this paper.
Let us denote as $\Omega$ the image pixel responses over a fixed time span. Then, we model the probability to observe the new sample $x_{new}$ in a region of range $2\epsilon$ centered at $x_{new}$ as
\begin{equation}
p(x_{new};\epsilon) = \frac{N_{\epsilon}}{N} \mathcal{U}(x_{new};x_{new}-\epsilon, x_{new}+\epsilon),
\label{eq:p_xnew}
\end{equation}
where $N_{\epsilon}=\big|\{ x_i \in \Omega : x_{new}-\epsilon \leq x_i \leq x_{new}+\epsilon \}\big|$ is the cardinality of the set that contains samples $\epsilon$-close to $x_{new}$, and $\mathcal{U}(x_{new};x_{new}-\epsilon, x_{new}+\epsilon)$ is a Uniform distribution with lower and upper bounds equal to $x_{new}-\epsilon$ and $x_{new}+\epsilon$, respectively.
Eq.(\ref{eq:p_xnew}) suggests that the probability of observing $x_{new}$ is related to the portion of data that have already been observed around $x_{new}$. By increasing the neighborhood around $x_{new}$, i.e., increasing the value of $\epsilon$, the quantity $\mathcal{U}(x_{new};x_{new}-\epsilon, x_{new}+\epsilon)$ decreases, while the value of $N_{\epsilon}$ increases. Therefore, we can estimate the optimal range $\epsilon^*$ around $x_{new}$ that maximizes Eq.(\ref{eq:p_xnew}),
\begin{equation}
\epsilon^* = \arg \max_{\epsilon} p(x_{new};\epsilon).
\label{eq:epsilon}
\end{equation}
Based on the probabilities $p(x_{new}|\mu_c, \tau_c)$ and $p(x_{new};\epsilon^*)$, which are exclusively derived by the observations, we can define our decision making mechanism. Concretely, if
\begin{equation}
p(x_{new}|\mu_c, \tau_c) \geq p(x_{new};\epsilon^*),
\label{eq:decision}
\end{equation}
the new observed sample $x_{new}$ can be sufficiently represented by our model, i.e., the value of the new observed sample is sufficiently close to an existing Gaussian component. Otherwise, a new Gaussian component should be created, since the value of $x_{new}$ is not close to what the model has already learnt.
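The decision rule of Eq.(\ref{eq:decision}) can thus be sketched as follows (NumPy assumed; the candidate range of $\epsilon$ values over integer intensities is our assumption), where \texttt{samples} holds the pool $\Omega$ and \texttt{p\_best} is the likelihood of Eq.(\ref{eq:p_mix}).
\begin{verbatim}
import numpy as np

def matches_model(x_new, samples, p_best, eps_values=range(1, 33)):
    # p(x_new; eps) = (N_eps / N) * 1/(2*eps); scan eps for its maximum
    N = float(len(samples))
    p_star = 0.0
    for eps in eps_values:
        n_eps = np.sum(np.abs(np.asarray(samples) - x_new) <= eps)
        p_star = max(p_star, (n_eps / N) / (2.0 * eps))
    return p_best >= p_star          # True: update existing component
\end{verbatim}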
\subsection{Model Updating}
When the value of the new observed sample is sufficiently close to an existing Gaussian component, the parameters of the mixture are updated using the \textit{follow-the-leader} \cite{dasgupta_-line_2007} approach, described as
\begin{subequations}
\begin{align}
& \varpi_k \leftarrow \varpi_k + \frac{1}{N} \big(o_k - \varpi_k\big), \\
& \mu_k \leftarrow \mu_k + o_k\bigg( \frac{x_{new}-\mu_k}{\varpi_k N + 1} \bigg), \\
& \sigma_k^2 \leftarrow \sigma_k^2 + o_k \bigg(\frac{\varpi_k N (x_{new}-\mu_k)^2}{(\varpi_k N + 1)^2} -\frac{\sigma_k^2}{\varpi_k N + 1} \bigg),
\end{align}
\label{eq:updating_equations}
\end{subequations}
where $\sigma_k^2=\tau_k^{-1}$ is the variance of the $k$-th component. The binary variable $o_k$ takes value one when $k=c$ and zero otherwise.
When the new observed sample cannot be modeled by any existing component, i.e. the value of the new sample will not be close to what the model has already learnt [see Eq.(\ref{eq:decision})], a new component is created with mixing coefficient $\varpi_{new}$, mean value $\mu_{new}$ and standard deviation $\sigma_{new}$, defined as
\begin{subequations}
\begin{align}
& \varpi_{new} = \frac{1}{N},\\
& \mu_{new} = x_{new},\\
& \sigma_{new}^2 = \frac{(2\epsilon)^2 - 1}{12}.
\end{align}
\label{eq:new_component}
\end{subequations}
Variable $\sigma_{new}^2$ is estimated using the variance of the Uniform distribution. From Eq.(\ref{eq:new_component}), we derive that $\varpi_{new}=1/N$, since the new component models only one sample (the newly observed one); its mean value equals the value of the new sample, and its variance equals the variance of the Uniform distribution whose lower and upper bounds are $x_{new}-\epsilon$ and $x_{new}+\epsilon$, respectively. When a new component is created, the parameters of all other components remain unchanged, except for the mixing coefficients $\{\varpi_k\}_{k=1}^K$, which are normalized to sum to $\frac{N-1}{N}$. Then, the components whose mixing coefficients are less than $\frac{1}{N}$ are removed
and the mixing coefficients of the remaining components are re-normalized.
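A compact sketch of this online update follows (NumPy assumed; we read $\varpi_c N$ and $\mu_c$ before overwriting them, which matches the standard incremental moment update, though the equations above leave the ordering implicit).
\begin{verbatim}
import numpy as np

def online_update(x_new, c, w, mu, var, N, eps_star, matched):
    if matched:                          # follow-the-leader refresh
        nk = w[c] * N                    # effective sample count
        o = (np.arange(len(w)) == c).astype(float)
        w += (o - w) / N                 # Eq. (a)
        delta = x_new - mu[c]            # uses the pre-update mean
        mu[c] += delta / (nk + 1.0)      # Eq. (b)
        var[c] += (nk * delta ** 2 / (nk + 1.0) ** 2
                   - var[c] / (nk + 1.0))   # Eq. (c)
        return w, mu, var
    # otherwise spawn a new component around x_new
    w = np.append(w * (N - 1.0) / N, 1.0 / N)  # old weights: (N-1)/N
    mu = np.append(mu, x_new)
    var = np.append(var, ((2.0 * eps_star) ** 2 - 1.0) / 12.0)
    keep = w >= 1.0 / N                  # prune weak components
    w = w[keep] / w[keep].sum()          # re-normalize
    return w, mu[keep], var[keep]
\end{verbatim}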
\subsection{Memory Efficient Implementation}
\label{ssec:online adaptation mechanism}
The main limitation of the aforementioned threshold-independent approach is that it requires the storage of several observations in order to reliably estimate the probability $p(x_{new};\epsilon^*)$. In this section, we introduce a framework for updating the model parameters without the need to store observations. This reduces memory requirements and is thus a crucial step towards implementing our proposed system on devices with low power and memory requirements.
We recall that we have denoted as $c$ the closest component, in terms of Mahalanobis distance, to the new observed datum $x_{new}$. This component is a Gaussian distribution with mean value $\mu_c$, precision $\tau_c$ and mixing coefficient $\varpi_c$. Therefore, the quantity $N_{\epsilon}$ can be approximated as
\begin{equation}
\label{eq:N_e_approx}
N_{\epsilon} \approx \tilde{N}_{\epsilon} = N \varpi_c \int_{x_{new}-\epsilon}^{x_{new}+\epsilon} \mathcal{N}(t|\mu_c, \tau_c^{-1}) dt.
\end{equation}
Denoting as
\begin{equation}
G_c(x) = \int_{-\infty}^{x} \mathcal{N}(t|\mu_c, \tau_c^{-1}) dt
\end{equation}
the cumulative Gaussian distribution of the closest Gaussian component and using Eq.(\ref{eq:N_e_approx}), $\tilde{N}_{\epsilon}$ is equal to
\begin{equation}
\tilde{N}_{\epsilon} = N \varpi_c \big(G_c(x_{new}+\epsilon) - G_c(x_{new}-\epsilon)\big)
\end{equation}
Then, the probability $p(x_{new};\epsilon)$ is approximated as
\begin{equation}
\begin{aligned}
p(x_{new};\epsilon) & \approx \tilde{p}(x_{new};\epsilon) =\\
& =\frac{\tilde{N}_{\epsilon}}{N}\mathcal{U}(x_{new};x_{new}-\epsilon, x_{new}+\epsilon).
\end{aligned}
\label{eq:p_xnew_approx}
\end{equation}
Probability $\tilde{p}(x_{new};\epsilon)$ is a continuous and unimodal function. Therefore, $\epsilon^*$ can be found either by setting the first derivative of Eq.(\ref{eq:p_xnew_approx}) equal to zero, or by a numerical approach in which the value of $\epsilon$ is increased in sufficiently small steps until the first derivative of Eq.(\ref{eq:p_xnew_approx}) changes sign; this point indicates the optimal value of $\epsilon$. After the estimation of $\epsilon^*$, we can compute $\tilde{p}(x_{new};\epsilon^*)$. Thus, we are able to update the mixture model by comparing $\tilde{p}(x_{new};\epsilon^*)$ to $p(x_{new}|\mu_c, \tau_c)$.
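A sketch of this sample-free computation is given below (SciPy's Gaussian CDF assumed; the step size and upper bound of the search are our assumptions). Note that $N$ cancels in Eq.(\ref{eq:p_xnew_approx}), so only the parameters of the closest component are needed.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def p_tilde(x_new, w_c, mu_c, tau_c, eps):
    # Eq. (p_xnew_approx); the factor N cancels out
    sd = tau_c ** -0.5
    mass = (norm.cdf(x_new + eps, mu_c, sd)
            - norm.cdf(x_new - eps, mu_c, sd))
    return w_c * mass / (2.0 * eps)

def find_eps_star(x_new, w_c, mu_c, tau_c, step=0.1, eps_max=50.0):
    # p_tilde is unimodal in eps, so stop at the first decrease
    eps, best = step, -1.0
    while eps <= eps_max:
        p = p_tilde(x_new, w_c, mu_c, tau_c, eps)
        if p < best:
            return eps - step            # just past the peak
        best, eps = p, eps + step
    return eps_max
\end{verbatim}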
\subsection{Background Subtraction}
\label{ssec:background subtraction}
Let us denote as $bg$ and $fg$ the classes of background and foreground pixels, respectively. The aforementioned modeling process actually approximates the probability $p(x|bg)$. However, our goal is to calculate the probability $p(bg|x)$, in order to classify a set of observations as foreground or background. Hence, Bayes' rule is applied:
\begin{equation}
\label{eq:bayes rule}
p(bg|x) = \frac{p(x|bg)\, p(bg)}{p(x|bg)\, p(bg) + p(x|fg)\, p(fg)}.
\end{equation}
Then, the foreground object is derived through a subtraction process. The unknown factors of Eq.(\ref{eq:bayes rule}) are $p(bg)$ and $p(x|fg)$. The probability $p(bg)$ corresponds to the prior probability of the background class. In our case, we set it to be larger than $1/2$, since the number of pixels that belong to the background class is larger than the number of pixels that belong to the foreground class. The probability $p(x|fg)$ is modeled using a uniform distribution, as in \cite{haines_background_2014}. Thus, $p(x|fg)$ at an arbitrary value of $x$ is $1/256$, since $x$ can take arbitrary integer values between 0 and 255. An overview of the proposed scheme is shown in Algorithm 1.
Following this approach, our method is robust to outliers, assigning them to components with very low weight. This way, outliers are practically not considered during background subtraction, since $p(x|bg)$ is close to zero when $x$ is an outlier. Furthermore, by exploiting the proposed online adaptation mechanism, components assigned to outliers are discarded after a few new frames are captured, because their weight becomes smaller than $1/N$.
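The per-pixel classification can thus be sketched as below (NumPy/SciPy assumed; the prior value $p(bg)=0.6$ is an illustrative choice, as the text only requires $p(bg)>1/2$).
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def p_background(x, w, mu, var, p_bg=0.6):
    # GMM likelihood p(x | bg) under the fitted background model
    p_x_bg = np.sum(w * norm.pdf(x, mu, np.sqrt(var)))
    p_x_fg = 1.0 / 256.0                 # uniform foreground model
    return p_x_bg * p_bg / (p_x_bg * p_bg + p_x_fg * (1.0 - p_bg))
\end{verbatim}
A pixel is then labeled background whenever this posterior exceeds the chosen threshold (e.g., $0.5$).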
\begin{tabular}{ l }
\hline \hline
\textbf{Algorithm 1}: Overview of Background Subtraction \\
\hline
1:\:\:\: capture $N$ frames \\
2:\:\:\: create $N$-length history for each pixel \\
3:\:\:\: initialize parameters (see Section \ref{sec:random variables optimization}) \\
4:\:\:\: \textbf{until} convergence (training phase: Section \ref{sec:random variables optimization}) \\
5:\:\:\:\:\:\:\:\:\: compute $r_{nk}$ using (\ref{eq:r_nk}) \\
6:\:\:\:\:\:\:\:\:\: recompute parameters using (\ref{eq:q_star_varpi_derivation}), (\ref{eq:q_star_mk_tauk}) and (\ref{eq:q_star_tauk}) \\
7:\:\:\: \textbf{for each} new captured frame \\
8:\:\:\:\:\:\:\:\:\: classify each pixel as foreground or background \\
\:\:\:\:\:\:\:\:\:\:\:\:\: (see subsection \ref{ssec:background subtraction}) \\
9:\:\:\:\:\:\:\:\:\: update background model (see subsection \ref{ssec:online adaptation mechanism}) \\
\hline
\end{tabular}
\subsection{Interesting Cases}
\subsubsection{Branches swaying in the wind}
When, at some part of the scene, there are tree branches swaying in the wind, the intensities of the pixels that depict this part will be clustered into two (or more) different clusters: intensities of the branches and intensities of the sky. In such cases, conventional methods that utilize a fixed number of components to represent the underlying data distribution may encounter serious problems. On the contrary, the proposed approach can estimate the number of components directly from the data. Therefore, the aforementioned clusters will be correctly identified and the corresponding pixels will be considered as background.
\subsubsection{Switching from background to foreground and vice versa}
Consider the following scenario: a foreground object, say a pedestrian, appears at pixel $x_i$ at time $t=t_0$ and stays there, standing still, until leaving at time $t=t_0+t_1$. Then, he/she returns to the same location at time $t=t_0+t_1+t_2$. We want to discuss the behavior of the proposed method with respect to times $t_1$ and $t_2$. In order to provide a formal explanation, we have to employ i) the history of the pixel's intensities, ii) relations Eq.(25), Eq.(27), Eq.(28) and Eq.(33), iii) the $p(bg)$ parameter and iv) the threshold for considering a pixel to belong to the background. We consider that the length of the pixel's history equals $100$, that $\epsilon^*$ equals $2$, and that the threshold for considering a pixel to belong to the background equals $0.5$.
Consider that a pedestrian appears at pixel $x_i$ at frame $t_0$ (assuming a camera with a constant fps rate, we measure time in frames and not in seconds; this way the analysis is camera-independent). A new component will be created for that pixel and initialized using Eq.(28). Then, the pedestrian stays there (standing still) for the next frames. Fig.\ref{fig:switch} depicts the evolution of the output of Eq.(33) using different values for $p(bg)$.
\begin{figure}[t]
\begin{minipage}[b]{0.95\linewidth}
\centering
\centerline{\includegraphics[width=0.795\linewidth]{Fig_1}}
\end{minipage}
\caption{Time required for switching from foreground to background.}
\label{fig:switch}
\end{figure}
The $x$-axis in Fig.\ref{fig:switch} corresponds to the new captured frames. As we can see, if we use a value close to $0.5$ for $p(bg)$, then the pixel $x_i$ will be considered as background after 32 frames. If we increase the value of $p(bg)$, then far fewer frames are needed for considering the same pixel as background, because the prior belief that a pixel belongs to the background class is larger.
Now consider that the pedestrian decides to leave this location. There are two different cases. In the first case, the pedestrian starts moving before he/she is considered as background; the system will then correctly consider the pedestrian as a foreground object for the whole period starting at $t_0$. In the second case, the pedestrian leaves $N$ frames after the time when pixel $x_i$ was first considered background. Since the rate of increase of the component's weight is the same as the rate of decrease, and due to the fact that $p(bg|x) + p(fg|x) = 1$, $N$ additional frames will be required before our system considers the pixel $x_i$ as background again.
\subsubsection{Sensor noise and flickering intensities}
The sensor noise is typically zero-mean, Gaussian and additive. In this case, the noise will slightly affect the intensity of the pixels around the mean value. Therefore, variations due to this kind of noise will be captured by the components of the proposed GMM. On the other hand, flickering intensities and/or salt-and-pepper sensor noise will indeed result in individual pixels being considered as foreground. In this case, we remove foreground blobs whose area is smaller than a threshold. During the evaluation of the proposed method, this threshold was manually optimized for each dataset, and this post-processing step was applied to all algorithms that our method is compared against.
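For reference, this post-processing step is a standard connected-component area filter; a sketch using OpenCV (our choice of library, with the area threshold left as a parameter) is given below.
\begin{verbatim}
import cv2
import numpy as np

def remove_small_blobs(fg_mask, min_area):
    n, labels, stats, _ = cv2.connectedComponentsWithStats(
        fg_mask.astype(np.uint8), connectivity=8)
    out = np.zeros_like(fg_mask, dtype=np.uint8)
    for i in range(1, n):            # label 0 is the image background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            out[labels == i] = 255
    return out
\end{verbatim}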
\section{In-Camera Acceleration Architecture}
In this section, we describe in detail the hardware architecture of the proposed background subtraction algorithm. We call the proposed parallel implementation the Background Subtraction Parallel System (BSPS). BSPS exploits the reconfigurable resources of today's FPGA devices.
\subsection{High Level Architecture}
Since the proposed approach makes no assumptions about pixel relationships, the background model for all pixels can be computed independently. Thus, our hardware architecture consists of many parallel cores in a scalable configuration, each processing a single pixel. In Section \ref{sec:experimental validation} we demonstrate two configurations: a low cost one, featuring a 4-core BSPS Engine, and a second one featuring a 16-core BSPS Engine.
Each of the cores is connected to a shared bus in order to get the processing data from the external DRAM (or memory mapped camera module) of a host system. The data loading is performed in batches of up to 16 pixels as shown in Fig.\ref{fig:BSPS}.
All operations are per pixel, with no dependencies between them. Thus, using a buffering scheme based on simple FIFOs, we can hide the latency of the external DRAM and make our scheme work seamlessly as a streaming accelerator. However, it has to be mentioned that the parallelization of the algorithm, and the overall performance in general, does not actually depend on the buffering scheme, which in our case prevents possible ``data starvation'' from the outside. The operations regarding data loading and write-back are fully pipelined. More details regarding the bandwidth demands are given in Section \ref{sec:experimental validation}. The output of each core is a probabilistic classification for the corresponding pixel (background or foreground) and the updated parameters of the background model.
\subsection{System Organization}
The BSPS comprises two basic sub-modules: the \textit{Model Estimation Unit} (MEU), depicted in Fig.\ref{fig:MEU}, and the \textit{Background Subtraction Unit} (BSU), depicted in Fig.\ref{fig:BSU}. The MEU is activated just once, at the initialization of the system. It is responsible for building the proposed background model at all pixel locations. It uses a small history of pixel values and automatically estimates the appropriate number of Gaussian components along with their mixing coefficients. After the model is built, the MEU stores the model parameters to the external DRAM for each one of the image pixels.
\label{sec:in-camera acceleration architecture}
\begin{figure}[t]
\begin{minipage}[b]{0.9\linewidth}
\centering
\centerline{\includegraphics[width=0.85\linewidth]{Fig_2}}
\end{minipage}
\caption{The data loading process of the BSPS.}
\label{fig:BSPS}
\end{figure}
\begin{figure}[t]
\begin{minipage}[b]{0.95\linewidth}
\centering
\centerline{\includegraphics[width=0.8\linewidth]{Fig_3}}
\end{minipage}
\caption{The Model Estimation Unit (MEU) organization.}
\label{fig:MEU}
\end{figure}
Then, during the normal operation of the system, only the BSU is activated. The BSU takes as input the pixel data stream along with the model parameters, and gives as output the probabilistic segmentation of each pixel, while also updating the model parameters. This way, a background model is maintained and updated for each pixel, and is utilized for the classification of all new incoming pixels.
\subsection{The Model Estimation Unit}
One of the key advantages of the proposed scheme is that it does not require any prior knowledge about the structure of the background model in order to achieve optimal operation. The MEU, depicted in Fig.\ref{fig:MEU}, is responsible for this task. It builds an accurate model for the specific background and its inherent temporal characteristics. It takes as input a small history of pixel responses ($\sim$100) at a specific pixel location and outputs an accurate background model for this pixel. As mentioned in Section \ref{sec:random variables optimization}, two basic algorithms are utilized in this module: k-means++ and the EM algorithm. Around 100 frames correspond to 13 seconds of video when the frame rate of the camera is 7.5Hz, a typical value for thermal cameras. It has to be mentioned that the presented algorithm does not require any reference background frame to operate properly. If foreground objects appear in these $\sim$100 frames, then, since they are moving objects, they will be modeled by components with very low weight and will thus only slightly affect the background estimation process. Furthermore, by employing the updating mechanism, the model will adapt to new frames and discard the components that model foreground objects. The history of frames could have been chosen to include 150, 200 or even more frames. However, increasing the length of the history increases the computational requirements of model initialization. Since this work presents a model for in-camera background subtraction, $\sim$100 frames were chosen because this number of frames is sufficient for describing the current dynamics of a scene while keeping the computational cost of initializing the model acceptable.
\begin{figure}[t]
\begin{minipage}[b]{1.0\linewidth}
\centering
\centerline{\includegraphics[width=0.95\linewidth]{Fig_4}}
\end{minipage}
\caption{The Background Subtraction Unit (BSU) organization.}
\label{fig:BSU}
\end{figure}
\subsection{The Background Subtraction Unit}
The BSU, depicted in Fig.\ref{fig:BSU}, is responsible for classifying the incoming pixels into the two available classes (background and foreground) and for updating the background model according to the new pixel response. To this end, the BSU takes as input a new pixel value ($x_{new}$) and the current Gaussian mixture for this pixel, which is stored off-chip, and gives as output the updated Gaussian mixture, as well as the probabilistic classification of the incoming pixel.
During the background subtraction task, the FIFO-based scheme processes all pixels of one frame before proceeding to the next frame; from the same frame, it loads parallel batches of pixels, depending on the number of parallel cores on chip. This way, we achieve lower latency during processing and also lower buffering when accessing the camera sensor. On the contrary, during the initialization of the system, the FIFO-based scheme processes a history of intensities for each pixel, as required by the parameter estimation task.
\section{Experimental Validation}
\label{sec:experimental validation}
\subsection{VI Mixture Model Fitting Capabilities}
During experimental validation, we evaluate the Variational Inference Mixture Model (VIMM) in terms of fitting accuracy and computational cost. We compare VIMM with the GMM and DPMM. The GMMs are employed under two different settings: with (i) more and (ii) fewer components than the underlying distribution.
For experimentation purposes, we create one synthetic dataset from three non-overlapping Gaussian distributions.
The initial value for the number of components for VIMM is set to $10$. In order to compare our method with the conventional GMMs of fixed number of components, we create two Gaussian models of $10$ and $2$ components respectively. These numbers are arbitrarily chosen, since the correct number of components is not a priori known.
Fig.\ref{fig:fig2} presents the fitting performance of all models. Our method correctly estimates the number of components. The GMM with $2$ components under-fits the data, since the underlying distribution comes from $3$ Gaussians. The GMM with $10$ components also fits the data poorly; this is likely to happen when the underlying distribution of the data is simpler than the structure of the GMM. In such cases, the GMM tends either to overfit the data by assigning some components to outliers, or to construct several components to describe samples coming from the same Gaussian. The DPMM approach yields better results, since it is able to adapt to the current data statistics, but it still under-fits the data. Table \ref{tab:tab1} presents the time performance of the different models. All presented times were measured in Python, not in the hardware implementation [see subsection \ref{ssec:hardware cost}].
\begin{figure}[t]
\begin{minipage}{1.0\linewidth}
\centering
\centerline{\includegraphics[width=1.0\linewidth]{Fig_5}}
\end{minipage}
\caption{Fitting performance -- three Gaussian distributions.}
\label{fig:fig2}
\end{figure}
\begin{table}[t]
\centering
\caption{Time performance of the different models in seconds.}
\label{tab:tab1}
\newcolumntype{L}[1]{>{\hsize=#1\hsize\raggedright\arraybackslash}X}%
\newcolumntype{C}[1]{>{\hsize=#1\hsize\centering\arraybackslash}X}%
\begin{tabularx}{1.0\linewidth}{L{7.5}C{4.0}C{4.0}C{4.0}C{4.0}}
\hline \hline
& \vspace{0.005mm} \textbf{VIMM} & \vspace{0.005mm} \textbf{GMM-10} & \vspace{0.005mm} \textbf{GMM-2} & \vspace{0.005mm} \textbf{DPMM} \\
First dataset & 0.156 & 0.034 & 0.011 & 21.35 \\
Second dataset & 0.124 & 0.067 & 0.031 & 30.19 \vspace{0.3cm} \\ \hline \hline
\end{tabularx}
\end{table}
\subsection{Updating Mechanism Performance}
\label{ssec:Updating Mechanism Performance}
In this section, we evaluate the quality of the proposed updating mechanism, with and without keeping the observed data in memory, and compare it against the updating mechanism presented in \cite{zivkovic_improved_2004}. The rationale behind the decision to explore both cases lies in the fact that we target special-purpose hardware with very limited on-chip memory. In this respect, we have to validate that the performance of the algorithm is not affected even when the data are not kept in memory.
Fig.\ref{fig:adaptation} presents the adaptation of all models. To evaluate the quality of the adaptation, we use a toy dataset with 100 observations. The observed data were generated from two Gaussian distributions with mean values 16 and 50 and standard deviations 1.5 and 2.0, respectively. The initially trained models are presented in the left column. There are two ways to evaluate the performance of the updating mechanisms: the evaluation could be performed on one more well-separated Gaussian, or, as we have chosen to do here (which is much harder), on a Gaussian distribution that overlaps with one of the two initial Gaussians. Therefore, we generated 25 new samples from a third Gaussian distribution with mean value 21 and standard deviation 1.0. The middle column indicates the adaptation performance after 25 new observations, and the right column after 50 new observations. Our model, whether it uses the history of observed data or not, creates a new component and successfully fits the data. On the contrary, the model of \cite{zivkovic_improved_2004} is not able to capture the statistical relations of the new observations and fails to separate the data generated from the overlapping Gaussians (middle and right columns). The quality of the presented updating mechanism becomes clearer in the right column, which presents the adaptation of the models after 50 new observations.
\begin{figure}[t]
\begin{minipage}{1.0\linewidth}
\centering
{\includegraphics[width=1.0\linewidth]{Fig_6_a}}
\centerline{\footnotesize (a) Proposed adaptation process using observed data.} \vspace{0.001in}
\end{minipage}
\begin{minipage}{1.0\linewidth}
\centering
{\includegraphics[width=1.0\linewidth]{Fig_6_b}}
\centerline{\footnotesize (b) Proposed adaptation process without using observed data.} \vspace{0.001in}
\end{minipage}
\begin{minipage}{1.0\linewidth}
\centering
{\includegraphics[width=1.0\linewidth]{Fig_6_c}}
\centerline{\footnotesize (c) Adaptation process presented in \cite{zivkovic_improved_2004}.}
\end{minipage}
\caption{Performance evaluation of model updating mechanisms.}
\label{fig:adaptation}
\end{figure}
\subsection{Background Subtraction Algorithm Evaluation}
\subsubsection{OSU and AIA datasets}
For evaluating our algorithm, we use the Ohio State University (OSU) thermal datasets and a dataset captured at Athens International Airport (AIA) during the eVacuate\footnote{http://www.evacuate.eu} European Union funded project. Specifically, we used two OSU datasets, referred to as OSU1 and OSU2, which contain frames that have been captured using a thermal camera and converted to grayscale images. On the contrary, the AIA dataset contains raw thermal frames whose pixel values correspond to the real temperature of objects.
The OSU datasets \cite{davis_fusion-based_2005, davis_robust_2004, davis_background-subtraction_2007} are widely used for benchmarking algorithms for pedestrian detection and tracking in thermal imagery. Videos were captured under different illumination and weather conditions. The AIA dataset was captured using a Flir A315 camera at different Airside Corridors and at the Departure Level. Ten video sequences were captured, with a frame size of $320 \times 240$ pixels and a total duration of 32,051 frames, at 7.5fps. The experimentation was conducted throughout the third pilot scenario of the eVacuate project. For all datasets, we created our own ground truth by selecting 50 frames randomly, but uniformly distributed, in order to cover the whole duration of the videos. Then, we manually annotated these frames by creating a binary mask around the foreground objects.
We compared our method against the method of \cite{zivkovic_improved_2004} (MOG), which is one of the most robust and widely used background subtraction techniques. The MOG algorithm uses a pre-defined number of Gaussian components for building the background model. In order to perform a fair comparison, we fine-tuned the parameters of the MOG algorithm for each of the two datasets to optimize its performance. Furthermore, we compare our method against the method of \cite{davis_robust_2004, davis_background-subtraction_2007} (SBG), used for background subtraction in thermal data. This method uses a single Gaussian distribution for modeling the background and thus often under-fits the data. Comparison against this technique can highlight problems that arise when the number of components of a GMM is underestimated. We do not compare our method against a DPMM-based background subtraction technique, like the one in \cite{haines_background_2014}, since its computational cost is high and we target low power and memory devices.
\begin{figure}
\begin{minipage}[b]{1.0\linewidth}
\centering
\centerline{\includegraphics[width=1.0\linewidth]{Fig_7}}
\end{minipage}
\caption{Visual results for all datasets.}
\label{fig:OSU4}
\end{figure}
For estimating the background model, we utilized 100 frames as history and set the maximum number of components equal to 50. After the initialization of the background model, we observed that the models for the OSU and AIA datasets consist of $2$ to $4$ and $3$ to $6$ components, respectively. Fig.\ref{fig:OSU4} visually presents the performance of the three methods. As can be observed, our method outperforms both MOG and SBG on all datasets. While MOG and SBG perform satisfactorily on the grayscale frames of the OSU datasets, their performance collapses when they are applied to the AIA dataset, which contains actual thermal responses, due to their strong assumptions regarding the distribution of pixel responses and the peculiarities of thermal imagery, i.e., high signal-to-noise ratio, lack of color and texture, and non-homogeneous thermal responses of objects (see Section I).
Then, an objective evaluation takes place in terms of \textit{recall}, \textit{precision} and \textit{F1 score}. Regarding the OSU datasets, the MOG algorithm presents high precision, but yields very low recall values; i.e., the pixels that have been classified as foreground do indeed belong to the foreground class, but many pixels that in fact belong to the background have been misclassified. The SBG algorithm seems to suffer from the opposite problem. Regarding the AIA dataset, our method significantly outperforms both approaches. Although the MOG and SBG algorithms present relatively high precision, their recall values are under $20\%$. Figure \ref{fig:prf} presents the average precision, recall and F1 score per dataset and per algorithm.
\begin{figure}[t]
\begin{minipage}[b]{1.0\linewidth}
\centering
\centerline{\includegraphics[width=0.95\linewidth]{Fig_8}}
\centerline{\footnotesize Precision, recall and F1 score} \medskip
\end{minipage}
\caption{Algorithms performance per dataset.}
\label{fig:prf}
\end{figure}
Regarding computational cost, the main load of our algorithm lies in the EM optimization. In all experiments conducted, the EM optimization converged within 10 iterations. Practically, the time required to apply our method is similar to the time requirements of Zivkovic's method, making it suitable for real-time applications.
\subsubsection{Change Detection Benchmark}
Besides the OSU and AIA datasets, we evaluated the performance of our algorithm on the \textit{change detection benchmark 2014} (CDB). The CDB provides five thermal videos, recorded in indoor and outdoor environments, along with their ground truth. For evaluating the performance of our algorithm, we utilized the same metrics as in CDB-2014, i.e., \textit{precision}, \textit{recall}, \textit{F1 score}, \textit{specificity}, \textit{False Positive Rate} (FPR), \textit{False Negative Rate} (FNR) and \textit{Percentage of Wrong Classifications} (PWC).
\begin{table}[t]
\centering
\caption{Performance evaluation on the thermal datasets of CDB.}
\label{tab:perf}
\newcolumntype{L}[1]{>{\hsize=#1\hsize\raggedright\arraybackslash}X}%
\newcolumntype{C}[1]{>{\hsize=#1\hsize\centering\arraybackslash}X}%
\begin{tabularx}{1.0\linewidth}{L{7.73}C{1.63}C{1.63}C{1.63}C{1.63}C{1.63}C{1.63}C{1.63}}
\hline \hline
\vspace{0.001mm} \textbf{Method} & \vspace{0.001mm} \textbf{Prec.} & \vspace{0.001mm} \textbf{Rec.} & \vspace{0.001mm} \textbf{F1S} & \vspace{0.001mm} \textbf{Spec.} & \vspace{0.001mm} \textbf{FPR} & \vspace{0.001mm} \textbf{FNR} & \vspace{0.001mm} \textbf{PWC} \\
Our Method & 0.718 & 0.868 & 0.732 & 0.970 & 0.031 & 0.132 & 3.347 \\
Cascade CNN \cite{wang2016interactive} & \textbf{0.951} & \textbf{0.899} & \textbf{0.920} & \textbf{0.997} & \textbf{0.003} & \textbf{0.049} & \textbf{0.405} \\
DeepBS \cite{babaee2017deep} & 0.754 & 0.833 & 0.746 & 0.990 & 0.009 & 0.245 & 1.992 \\
IUTIS-5 \cite{bianco2015far} & 0.785 & 0.808 & 0.771 & 0.994 & 0.005 & 0.215 & 1.198 \\
SuBSENSE \cite{st2015subsense} & 0.812 & 0.751 & 0.747 & 0.990 & 0.009 & 0.187 & 1.678 \\
PAWCS \cite{st2015self} & 0.772 & 0.785 & 0.750 & 0.995 & 0.005 & 0.228 & 1.199 \\
\hline \hline
\end{tabularx}
\end{table}
Table \ref{tab:perf} presents the performance of our algorithm on the thermal datasets of CDB and compares it against the top two methods for each one of the metrics. The method of \cite{wang2016interactive} outperforms all methods. Our method presents the second highest recall; however, due to its lower precision, it presents a lower F1 score. Although our method performs slightly worse than the leaders of CDB 2014, it is much less complicated and thus suitable for in-camera implementation.
\subsection{Hardware Cost}
\label{ssec:hardware cost}
The main argument of this work is that a novel, highly accurate and demanding algorithm that needs to run on a 24/7 basis can be handled very efficiently by a reconfigurable device running as an in-camera accelerator. Thus, we primarily demonstrate our system on a low cost Xilinx Artix7 FPGA device (xc7a200tfbg484-3). In addition, we deploy our system on a more powerful Virtex7 device (xc7vx550tffg1158-3) to show that it seamlessly scales to support more parallel cores.
For code synthesis and bitstream generation we used Xilinx Vivado and Vivado HLS. For validation and proof-of-concept purposes, our system was implemented on a low end Zedboard evaluation platform
powered by a small Xilinx Zynq device.
Table \ref{tab:tab2} shows the hardware utilization of the Artix7 device when implementing 4 BSU cores and 1 MEU core. We implemented only 1 MEU, since this unit operates only during the initialization and parameter estimation of the system, and thus its performance is not crucial. Table \ref{tab:tab2} also shows that the critical resources are the LUTs and DSPs. This is justified by the fact that the operations involved in the algorithm are mostly multiplications and divisions, and thus, apart from the DSPs, additional logic and signals are necessary to route the intermediate results and handle all algorithm states. Block RAM utilization is almost zero, as all operations are per pixel and no further data caching is necessary, since there is no need to keep the observed data in memory. It should be mentioned that keeping the observed data in memory would prohibit the implementation of this algorithm on low memory devices.
Table \ref{tab:tab3} shows the hardware utilization of the Virtex7 device when implementing 16 BSU cores and 1 MEU core. The resource utilization in this case follows the same reasoning as before. The above two hardware configurations are compared with a quad-core ARM Cortex A9 CPU (Exynos4412 SoC) clocked at 1.7 GHz with 2GB RAM, and a low power mobile Intel i5 (2450M) processor clocked at 2.5 GHz with 8GB RAM, which features two physical cores with hyper-threading capability (4 threads in total). The latter is selected for the evaluation as it offers competitive computational power per watt.
\begin{table}[t]
\centering
\caption{Typical Hardware Cost on low cost, low power Xilinx Artix7 Device (xc7a200tfbg484-3). 4-BSU cores/1-MEU core.}
\label{tab:tab2}
\newcolumntype{L}[1]{>{\hsize=#1\hsize\raggedright\arraybackslash}X}%
\newcolumntype{C}[1]{>{\hsize=#1\hsize\centering\arraybackslash}X}%
\begin{tabularx}{1.0\linewidth}{L{13.0}C{4.0}C{4.0}C{4.0}}
\hline \hline
\vspace{0.005mm} \textbf{Logic Utilization} & \vspace{0.005mm} \textbf{Used} & \vspace{0.005mm} \textbf{Available} & \vspace{0.005mm} \textbf{Utilization} \\
Number of Flip Flops & 143089 & 269200 & 53\% \\
Number of Slice LUTs & 119964 & 129000 & 92\% \\
Number of DSP48E & 506 & 740 & 68\% \\
Number of Block RAM\_18K & 20 & 730 & 2\% \\ \hline \hline
\end{tabularx}
\end{table}
\begin{table}[t]
\centering
\caption{Typical Hardware Cost on Xilinx Virtex 7 device (xc7vx550tffg1158-3). 16-BSU cores/1-MEU core.}
\label{tab:tab3}
\newcolumntype{L}[1]{>{\hsize=#1\hsize\raggedright\arraybackslash}X}%
\newcolumntype{C}[1]{>{\hsize=#1\hsize\centering\arraybackslash}X}%
\begin{tabularx}{1.0\linewidth}{L{13.0}C{4.0}C{4.0}C{4.0}}
\hline \hline
\vspace{0.005mm} \textbf{Logic Utilization} & \vspace{0.005mm} \textbf{Used} & \vspace{0.005mm} \textbf{Available} & \vspace{0.005mm} \textbf{Utilization} \\
Number of Flip Flops & 241604 & 692800 & 35\% \\
Number of Slice LUTs & 269004 & 346400 & 78\% \\
Number of DSP48E & 1184 & 2880 & 41\% \\
Number of Block RAM\_18K & 14.50 & 1180 & 1.2\% \\ \hline \hline
\end{tabularx}
\end{table}
\begin{table}[t]
\centering
\caption{Comparison table between a Xilinx Artix7 device @ 210 MHz, a Xilinx Virtex7 device @ 222 MHz, an Intel i5 @ 2.5 GHz, an ARM Cortex A9 @ 1.7 GHz and a DSP @ 600 MHz.}
\label{tab:tab4}
\newcolumntype{L}[1]{>{\hsize=#1\hsize\raggedright\arraybackslash}X}%
\newcolumntype{C}[1]{>{\hsize=#1\hsize\centering\arraybackslash}X}%
\begin{tabularx}{1.0\linewidth}{C{12.5}C{4.7}C{4.7}C{3.1}}
\hline \hline
\\ \textbf{Image frame} & $\bf{320\times240}$ & $\bm{640\times480}$ &$\bm{\mu}$\textbf{J/pixel}\\
\textbf{Artix 7} 4--cores & 17.36 fps & 4.34 fps & 3.45 \\
\textbf{Virtex 7} 16-cores & 69.88 fps & 17.47 fps & 3.49 \\
\textbf{ARM A9} 4-cores & 8.27 fps & 2.07 fps & 4.7-6.2 \\
\textbf{Intel i5} 2-cores/ 4-threads & 58.59 fps & 14.56 fps & 5.82 \\
\textbf{MOG \cite{zivkovic_improved_2004}} BF-537 DSP & 3.57 fps & - & -
\\ \hline \hline
\end{tabularx}
\end{table}
For the Intel i5, the software compiler platform used was Microsoft Visual Studio 2012 and our code was optimized for maximum speed (-O2 optimization level). For the ARM A9 platform, we used a lightweight XUbuntu 13.10 operating system with the g++ compiler, using -O2 and -O3 optimization levels.
In all software reference implementations OpenMP was also used to utilize all the available cores/threads of the platform.
For the FPGA, we measured the exact clock cycles needed for segmenting a single pixel by a single core, including loading and write-back cycles. For this purpose, we used the Zedboard evaluation board. The measured clock cycles ranged between 700 and 830 when real datasets were evaluated. These measurements were also verified for the proposed Artix7 and Virtex7 devices using post-place-and-route timing simulation.
The I/O latency between the DRAM and the FPGA is completely hidden, as the operations for each core depend only on a single pixel and its corresponding background model. All this information is encoded in about 256 bits on average, thus a buffering scheme using simple FIFOs is easily implemented. The bandwidth demands between the device and the DRAM are no more than 250 MB/sec for 25 FPS at $640\times480$ resolution, which is easily achievable even by low-end FPGA devices.
In all the experiments for the Intel i5 and the ARM A9, we start measuring latency times after the data are loaded into the DRAM of the CPU. This probably works in favor of the CPUs, as the cache is hot in most of the measured cases.
Table \ref{tab:tab4} shows that, by implementing just 4 cores on the Artix7 device, we get 17.36 FPS at $320\times240$, exceeding by far the capabilities of the FLIR A315 thermal camera. The 4-core FPGA design outperforms the ARM A9 quad-core CPU, giving twice the FPS. In terms of power, the Artix7 consumes 4.6 watts based on Vivado's power analysis tool, while the quad-core ARM A9 consumes about 3.5-4 watts.
As expected, the Intel i5 utilizing 4 threads outperforms the two previous platforms, offering also the best performance per core. Its consumption is measured at 26.2 watts
and refers only to the CPU consumption. The Virtex 7 device offers better performance, as it is capable of fitting 16-BSU cores. In terms of power the Virtex7 consumes 18.6 Watts measured using Vivado's Power analysis tool.
Looking at the energy metric $\mu$J/pixel in Table \ref{tab:tab4}, both FPGA devices give similar $\mu$J/pixel figures, better than the Intel i5. For the ARM A9, this metric is expressed as a range, as it is based mostly on specifications. In our evaluation experiments we could measure the total dynamic power of the board using the ODROID Smart Power meter, but it is not possible to safely measure only the CPU core modules.
The last column in Table \ref{tab:tab4} refers to the work of \cite{shen2012efficient}, which implements the original MOG algorithm on an in-camera DSP processor (Blackfin BF-537) as a reference design for its proposed scheme. Even though it is hard to make a direct comparison, we see how challenging it is for embedded processors to keep up with the demanding task of background segmentation, even for a less accurate algorithm such as MOG.
\section{Conclusions}
\label{sec: conclusions}
In this work, a novel algorithm for background subtraction was presented, which is suitable for in-camera acceleration in thermal imagery. Through an automated parameter estimation process, the presented scheme takes into account the special characteristics of the data and gives highly accurate results without any fine-tuning from the user. It is implemented on reconfigurable hardware using an HLS design flow, with no approximations in accuracy, arithmetic, or in the mathematical formulation of the proposed algorithm. Unlike previously introduced custom-fit hardware accelerators, our scheme is platform independent, scalable and easily maintainable. Finally, to the best of our knowledge, this is the first time that the very demanding task of background subtraction can be executed on a thermal camera sensor in real-time and at a low power budget, which allows for a new distributed approach that avoids the bottlenecks of existing centralized solutions.
\appendices
\section{Derivation of Optimal Variational Distributions}
\label{ap:appendix}
Using (\ref{eq:q_star}) and (\ref{eq:joint_factorization}), the logarithm of $q^*(\bm Z)$ is given by
\begin{equation}
\begin{aligned}
\ln q^*(\bm Z) = & \mathbb{E}_{\bm \varpi}[\ln p(\bm Z|\bm \varpi)] + \\ & + \mathbb{E}_{\bm \mu, \bm \tau}[\ln p(\bm X|\bm Z, \bm \mu, \bm \tau)] + \mathcal{C}
\end{aligned}
\label{eq:q_Z_optimized_2}
\end{equation}
Substituting (\ref{eq:p_Z}) and (\ref{eq:p_X}) into (\ref{eq:q_Z_optimized_2}), we get
\begin{subequations}
\begin{align}
& \ln q^*(\bm Z)= \sum_{n=1}^{N} \sum_{k=1}^{K} z_{nk} \bigg( \mathbb{E}\big[\ln \varpi_k \big] + \frac{1}{2}\mathbb{E}\big[\ln \tau_k\big] -\nonumber \\
&\:\:\:\:\:\:\:\:\:\:-\frac{1}{2}\ln2\pi - \frac{1}{2} \mathbb{E}_{\bm \mu, \bm \tau}\big[(x_n-\mu_k)^2\tau_k \big]\bigg) + \mathcal{C} \Rightarrow \\
& q^*(\bm Z) \propto \prod_{n=1}^{N} \prod_{k=1}^{K} \rho_{nk}^{z_{nk}},
\end{align}
\label{eq:q_star_Z_derivation}
\end{subequations}
where $\rho_{nk}$ is the unnormalized responsibility defined in Eq.(\ref{eq:pho_nk}).
Using (\ref{eq:joint_factorization}) and (\ref{eq:q_star}), the logarithm of $q^*(\bm \varpi, \bm \mu, \bm \tau)$ is
\begin{subequations}
\begin{align}
\ln q^*(\bm \varpi, \bm \mu, \bm \tau) & = \mathbb{E}_{\bm Z}\big[\ln p(\bm X|\bm Z, \bm \mu, \bm \tau) + \nonumber \\
& + \ln p(\bm Z|\bm \varpi) + \nonumber \\
& + \ln p(\bm \varpi) + \ln p(\bm \mu, \bm \tau)\big] + \mathcal{C} = \\
& = \sum_{n=1}^{N}\sum_{k=1}^{K} \mathbb{E}\big[z_{nk}\big] \ln \mathcal{N}(x_n|\mu_k, \tau_k^{-1}) + \nonumber \\
&+ \mathbb{E}_{\bm Z}\big[\ln p(\bm Z|\bm \varpi)\big] \nonumber \\
& + \ln p(\bm \varpi) + \sum_{k=1}^{K}\ln p(\mu_k, \tau_k) + \mathcal{C}
\label{eq:q_varpi_mu_tau_optimized_b}
\end{align}
\label{eq:q_varpi_mu_tau_optimized}
\end{subequations}
Due to the fact that there is no term in (\ref{eq:q_varpi_mu_tau_optimized_b}) that contains parameters from both sets $\{\bm \varpi\}$ and $\{\bm \mu, \bm \tau\}$, the distribution $q^*(\bm \varpi, \bm \mu, \bm \tau)$ can be factorized as $q(\bm \varpi, \bm \mu, \bm \tau) = q(\bm \varpi) \prod_{k=1}^{K}q(\mu_k, \tau_k)$.
The distribution for $q^*(\bm \varpi)$ is derived using only those terms of (\ref{eq:q_varpi_mu_tau_optimized_b}) that depend on the variable $\bm \varpi$. Therefore the logarithm of $q(\bm \varpi)$ is given by
\begin{subequations}
\begin{align}
\ln q^*(\bm \varpi) & = \mathbb{E}_{\bm Z}\big[\ln p(\bm Z|\bm \varpi)\big] + \ln p(\bm \varpi) + \mathcal{C} = \\
& = \sum_{k=1}^{K} \ln \varpi_k^{(\sum_{n=1}^{N}r_{nk} + \lambda_0 -1)} + \mathcal{C} = \\
& = \sum_{k=1}^{K} \ln \varpi_k^{(N_k + \lambda_0 -1)} + \mathcal{C}
\label{eq:q_star_varpi_derivation_c}
\end{align}
\end{subequations}
We have made use of $\mathbb{E}[z_{nk}]=r_{nk}$, and we have denoted $N_k=\sum_{n=1}^{N}r_{nk}$. Eq.(\ref{eq:q_star_varpi_derivation_c}) suggests that $q^*(\bm \varpi)$ is a Dirichlet distribution with hyperparameters $\bm \lambda = \{N_k + \lambda_0\}_{k=1}^K$.
Using only those terms of (\ref{eq:q_varpi_mu_tau_optimized_b}) that depend on variables $\bm \mu$ and $\bm \tau$, the logarithm of $q^*(\mu_k, \tau_k)$ is given by
\begin{align}
\ln q^*(\mu_k, \tau_k) & = \ln \mathcal{N}(\mu_k|m_0, (\beta_0 \tau_k)^{-1}) + \nonumber \\
&\:\:\:\:\: + \ln Gam(\tau_k|a_0, b_0) + \nonumber \\
&\:\:\:\:\: + \sum_{n=1}^{N}\mathbb{E}\big[z_{nk}\big]\ln \mathcal{N}(x_n|\mu_k,\tau_k^{-1}) + \mathcal{C} = \nonumber \\
& = -\frac{\beta_0\tau_k}{2}(\mu_k-m_0)^2 + \frac{1}{2}\ln (\beta_0 \tau_k) + \nonumber \\
&\:\:\:\:\: + (a_0-1)\ln \tau_k - b_0\tau_k - \nonumber \\ &\:\:\:\:\: -\frac{1}{2}\sum_{n=1}^{N}\mathbb{E}\big[z_{nk}\big](x_n-\mu_k)^2\tau_k + \nonumber \\
&\:\:\:\:\: + \frac{1}{2}\bigg(\sum_{n=1}^{N}\mathbb{E}\big[z_{nk}\big] \bigg) \ln(\beta_0\tau_k) + \mathcal{C}
\label{eq:q_star_mu_tau_derivation_b}
\end{align}
For the estimation of $q^*(\mu_k|\tau_k)$, we use (\ref{eq:q_star_mu_tau_derivation_b}) and keep only those factors that depend on $\mu_k$.
\begin{subequations}
\begin{align}
\ln q^*(\mu_k|\tau_k) & = -\frac{\beta_0\tau_k}{2}\big(\mu_k-m_0\big)^2 - \nonumber \\
&\:\:\:\:\: - \frac{1}{2}\sum_{n=1}^{N}\mathbb{E}\big[z_{nk}\big]\big(x_n - \mu_k\big)^2\tau_k = \\
& = -\frac{1}{2}\mu_k^2\Big(\beta_0 + N_k\Big)\tau_k +\nonumber \\
&\:\:\:\:\: + \mu_k \tau_k \Big(\beta_0 m_0 + N_k\bar x_k\Big) + \mathcal{C} \Rightarrow \\
& q^*(\mu_k|\tau_k) = \mathcal{N}(\mu_k|m_k, (\beta_k \tau_k)^{-1})
\label{eq:q_star_mu_derivation_b}
\end{align}
\label{eq:q_star_mu_derivation}
\end{subequations}
where $\bar x_k = \frac{1}{N_k}\sum_{n=1}^{N}r_{nk}x_n$, $\beta_k = \beta_0 + N_k$ and $m_k = \frac{1}{\beta_k}(\beta_0 m_0 + N_k \bar x_k)$.
After the estimation of $q^*(\mu_k|\tau_k)$, the logarithm of the optimized distribution $q^*(\tau_k)$ is given by
\begin{subequations}
\begin{align}
\ln q^*(\tau_k) & = \ln q^*(\mu_k, \tau_k) - \ln q^*(\mu_k|\tau_k) = \\
& = \bigg(a_0+\frac{N_k}{2} - 1\bigg) \ln \tau_k - \nonumber \\ &\:\:\:\:\:\: -\frac{1}{2}\tau_k\bigg(\beta_0\big(\mu_k-m_0\big)^2 + \nonumber \\
&\:\:\:\:\:\: +2b_0 + \sum_{n=1}^{N}r_{nk}\big(x_n-\mu_k\big)^2 -\nonumber \\ &\:\:\:\:\:\: -\beta_k\big(\mu_k-m_k\big)^2 \bigg) + \mathcal{C} \Rightarrow \\
& q^*(\tau_k) = Gam(\tau_k|a_k, b_k)
\label{eq:q_star_tauk_derivation_b}
\end{align}
\label{eq:q_star_tauk_derivation}
\end{subequations}
The parameters $a_k$ and $b_k$ are given by
\begin{subequations}
\begin{align}
a_k & = a_0 + \frac{N_k}{2} \\
b_k & = b_0 + \frac{1}{2}\bigg(N_k\sigma_k + \frac{\beta_0 N_k}{\beta_0 + N_k}\big(\bar x_k - m_0\big)^2 \bigg)
\end{align}
\label{eq:ap_q_star_tauk}
\end{subequations}
where $\sigma_k = \frac{1}{N_k}\sum_{n=1}^{N}r_{nk}(x_n-\bar x_k)^2$.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\section{Introduction}
Fine-grained object classification consists of discriminating between classes in a sub-category of objects,
for instance the particular species of bird or dog~\cite{berg2013poof,chai2013symbiotic,farrell2011birdlets,gavves2013fine,zhang2014part}.
This is a very challenging problem due to large intra-class variations caused by pose and appearance changes,
as well as small inter-class variation due to subtle differences in the overall appearance between classes~\cite{berg2013you,ge2015modelling}.
Prior work in fine-grained classification has concentrated on learning image-based features to cope with pose variations.
Initially such approaches used traditional image-based features such as colour and histograms of gradients~\cite{berg2013poof}
while modelling the pose using a range of methods including deformable parts-based approaches~\cite{chai2013symbiotic,liu2012dog,zhang2013deformable}.
More recently, deep convolutional neural networks (DCNNs) have been used to learn robust features~\cite{donahue2013decaf},
cope with large variations by using a hierarchical model~\cite{ge2016fine},
and automatically localise regions of importance~\cite{jaderberg2015spatial}.
Despite the advances provided by these approaches,
prior work treats the fine-grained classification task as a still-image classification problem and ignores complementary temporal information present in videos.
Recent work on neural network based approaches has provided notable results in video-based recognition~\cite{ge2015modelling,karpathy2014large,simonyan2014two,tran2014learning,yue2015beyond}.
Karpathy et al.~\cite{karpathy2014large} demonstrated the surprising result that classifying a single frame from a video using a DCNN
was sufficient to perform accurate video classification, for broad categories such as activity and sport recognition.
Within the action recognition area,
Simonyan and Zisserman~\cite{simonyan2014two} incorporate optical flow and RGB colour information into two-stream networks.
Tran~et~al.~\cite{tran2014learning} apply deep 3D convolutional networks (3D ConvNets)
to implicitly learn motion features from raw frames and then aggregate predictions at the video level.
Ng~et~al.~\cite{yue2015beyond} employ Long Short-Term Memory cells which are connected to the output of the underlying CNN
to achieve notable results on the UCF-101~\cite{soomro2012ucf101} and Sports 1 million datasets~\cite{karpathy2014large}.
To date, the above neural network based approaches have not been explored for the task of video-based fine-grained object classification.
\textbf{Contributions.}
In this paper, we introduce the problem of video-based fine-grained object classification,
propose a corresponding new dataset,
and explore several methods to exploit the temporal information.
A~systematic study is performed comparing several DCNN based approaches which we have specifically adapted to the task,
highlighting the potential benefits that fine-grained object classification can gain by modelling temporal information.
We evaluate 3D~ConvNets~\cite{tran2014learning},
two-stream DCNNs~\cite{simonyan2014two},
and bilinear DCNNs~\cite{lin2015bilinear}.
Two forms of the two-stream approach are used:
(i) the originally proposed late-fusion form which concatenates the softmax outputs of two independent spatial and temporal DCNNs,
and
(ii) our modified form, which performs early-fusion via combination of the fully-connected layers.
In contrast to the two forms of the two-stream approach, we adapt the bilinear DCNN to extract local co-occurrences
by combining information from the convolutional layers of spatial and temporal DCNNs.
The adapted bilinear DCNN is then fused with the two-stream approach (early fusion) to combine spatial and temporal information at the local and global level.
The study is performed on the VB100 dataset, a new and challenging video dataset of birds
consisting of 1,416 video clips of 100 bird species taken by expert bird watchers.
The dataset contains several compounded challenges, such as clutter, large variations in scale, camera movement and considerable pose variations.
Experiments show that classification performance is improved from 23.1\% (using single images) to 41.1\% when using the spatio-temporal bilinear DCNN approach,
which outperforms 3D~ConvNets as well as both forms of the two-stream approach.
We highlight the importance of performing early fusion, either at the input layer (3D~ConvNets) or feature layer (adapted bilinear DCNN),
as this consistently outperforms late fusion (i.e.~the original two-stream approach).
Incorporating automatically detected bounding box location further improves the classification accuracy of the spatio-temporal bilinear DCNN approach to 53.6\%.
We continue the paper as follows.
Section~\ref{sec:method} describes the studied methods and our adaptations,
while Section~\ref{sec:dataset} describes the new VB100 bird dataset.
Section~\ref{sec:experiments} is devoted to comparative evaluations.
The main findings are summarised in Section~\ref{sec:conclusion}.
\section{Combining Spatial and Temporal Information}
\label{sec:method}
In this section we first describe two baseline networks that make use of either image or temporal information.
We then outline the deep 3-dimensional convolutional network~\cite{tran2014learning},
extend the two-stream approach~\cite{simonyan2014two}
and adapt the bilinear DCNN approach~\cite{lin2015bilinear} to encode local spatial and temporal co-occurrences.
\subsection{Underlying Spatial and Temporal Networks}
Our baseline systems are DCNNs that use as input either optical flow (temporal) or image-based features.
The temporal network $\mathcal{T}$ uses as input
the horizontal flow $\mathbf{O}_{x}$, vertical flow $\mathbf{O}_{y}$, and magnitude of the optical flow $\mathbf{O}_{mag}$
combined to form a single optical feature map $\mathbf{O}\in\mathrm{R}^{h \times w \times 3}$,
where $h \times w$ is the size of the feature map (image).
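As an illustration, the assembly of this three-channel optical input can be sketched as follows (a minimal Python/NumPy sketch of our own; the function and array names are hypothetical):
\begin{verbatim}
import numpy as np

def flow_feature_map(flow_x, flow_y):
    # Stack horizontal flow, vertical flow and the flow
    # magnitude into a single h x w x 3 feature map O.
    magnitude = np.sqrt(flow_x ** 2 + flow_y ** 2)
    return np.stack([flow_x, flow_y, magnitude], axis=-1)
\end{verbatim}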
The spatial network $\mathcal{S}$ uses RGB frames (images) as input.
Both $\mathcal{S}$ and $\mathcal{T}$ use the DCNN architecture of Krizhevsky et al.~\cite{krizhevsky2012imagenet}
which consists of 5 convolutional layers, $\mathbf{S}^{c1}, \mathbf{S}^{c2}, \dots, \mathbf{S}^{c5}$,
followed by 2 fully connected layers, $\mathbf{S}^{fc6}$ and $\mathbf{S}^{fc7}$,
prior to the softmax classification layer, $\mathbf{S}^{o}$.
The networks are trained by considering each input frame from a video (either image or optical flow) to be a separate instance,
and are fine-tuned to the specific task (and modality) by using a pre-trained network.
Fine-tuning~\cite{yosinski2014transferable} is necessary as we have insufficient classes and observations to train the networks from scratch
(preliminary experiments indicated that training the networks from scratch resulted in considerably lower performance).
When performing classification, each image (or frame of optical flow) is initially treated as an independent observation.
For a video of $N_{f}$ frames this leads to $N_{f}$ classification decisions.
To combine the decisions, the max vote of these decisions is taken.
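As a sketch, assuming \texttt{frame\_predictions} holds one predicted class label per frame (the variable name is ours), the per-video decision rule could read:
\begin{verbatim}
from collections import Counter

def classify_video(frame_predictions):
    # Combine the per-frame class decisions into a single
    # video-level decision via the max (majority) vote.
    return Counter(frame_predictions).most_common(1)[0][0]
\end{verbatim}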
\subsection{Deep 3D Convolutional Network}
The deep 3-dimensional convolutional network (3D~ConvNet) approach~\cite{tran2014learning},
originally proposed for action recognition,
utilises 3-dimensional convolutional kernels to model $L$ frames of information simultaneously.
In contrast to optical flow features where temporal information is explicitly modelled,
the approach implicitly models the information within the deep neural network structure.
This approach obtains state-of-the-art performance on various action recognition datasets
such as \mbox{UCF-101}~\cite{soomro2012ucf101} and ASLAN~\cite{kliper2012action}.
The network is fine-tuned for our classification task by taking a sliding window of $L=15$ frames and moving the sliding window one frame at a time; each sliding window is considered to be a separate instance.
This results in $N_{f} - 14$ classification decisions which are combined using the max vote.
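The window extraction itself is straightforward; a minimal sketch of our own, yielding the $N_{f} - L + 1 = N_{f} - 14$ clips for $L=15$:
\begin{verbatim}
def sliding_windows(frames, L=15):
    # Yield overlapping clips of L consecutive frames,
    # moving one frame at a time: N_f - L + 1 windows.
    for start in range(len(frames) - L + 1):
        yield frames[start:start + L]
\end{verbatim}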
\subsection{Spatio-Temporal Two-Stream Network: Early and Late Fusion}
\label{sec:two_stream_forms}
\begin{figure}[!b]
\centering
\includegraphics[width=\columnwidth]{pool}
\caption
{
Conceptual illustration of the spatio-temporal co-occurrence approach.
}
\label{fig:pool}
\end{figure}
The two-stream network proposed for action recognition by Simonyan and Zisserman~\cite{simonyan2014two}
uses the two independent spatial and temporal networks $\mathcal{S}$ and $\mathcal{T}$.
The softmax output of these two networks
is then concatenated and used as a feature vector that is classified by a multi-class support vector machine (SVM).
We refer to this network as {\it Two-Stream (late fusion)}; it is conceptually illustrated in Fig.~\ref{fig:key_figure}(a).
A potential downside of this approach is that fusion of spatial and temporal information is done at the very end.
This limits the amount of complementary information captured as scores (or decisions) from the softmax classification layer are combined.
To address this issue, we propose to combine the two streams of information much earlier (early fusion)
by combining the {\it fc6} outputs, $\mathbf{S}^{fc6}$ and $\mathbf{T}^{fc6}$;
{\it fc6} is the first fully connected layer and is often used to extract a single feature from DCNNs~\cite{donahue2013decaf}.
We refer to this modified network as {\it Two-Stream (early fusion)}.
See Fig.~\ref{fig:key_figure}(b).
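In code, the early-fusion feature is simply the concatenation of the two {\it fc6} activation vectors, which is then classified with the multi-class SVM (a hedged sketch; the variable names are ours):
\begin{verbatim}
import numpy as np

def early_fusion_feature(fc6_spatial, fc6_temporal):
    # Early fusion: concatenate the fc6 activations of the
    # spatial and temporal networks into one joint feature.
    return np.concatenate([fc6_spatial, fc6_temporal])
\end{verbatim}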
\subsection{Joint Spatial and Temporal Features via Co-occurrences}
We adapt the recently proposed bilinear DCNN approach by Lin et al.~\cite{lin2015bilinear}
via combining the convolutional layers of the baseline spatial and temporal networks
by calculating co-occurrences. The rationale is that different bird species may differ in their appearance patterns, their motion patterns, and the combination of the two.
Specifically,
let the feature maps of the $n$-th layer of the spatial and temporal networks be
$\mathbf{S}^n\in\mathbb{R}^{h\times{w}\times{d_n}}$ and $\mathbf{T}^n\in\mathbb{R}^{h\times{w}\times{d_n}}$,
where $d_n$ is the number of dimensions for the feature map (number of kernels).
The two feature maps are combined by calculating an outer product:
\begin{equation}
\mathbf{P}_{i,j} = \operatorname{vec} \left( {\mathbf{S}^n_{i,j}}{\mathbf{T}^n_{i,j}}^{\intercal} \right)
\label{eqn:outer_product}
\end{equation}
\noindent
where $\mathbf{S}^n_{i,j}\in\mathbb{R}^{d_n}$ and $\mathbf{T}^n_{i,j}\in\mathbb{R}^{d_n}$
are the local feature vectors of the spatial and temporal streams at location $(i,j)$,
$\operatorname{vec}(\cdot)$ is the vectorisation operation,
and
$\mathbf{P} \in\mathbb{R}^{h \times w \times d^2_n}$, with $\mathbf{P}_{i,j} \in \mathbb{R}^{d^2_n}$ being the co-occurrence feature at location $(i,j)$.
As such, the outer product operation captures the co-occurrence of the visual and motion patterns at each spatial location. Max pooling is applied to all the local encoding vectors $\mathbf{P}_{i,j}$ to create the final feature representation $\mathbf{F}\in\mathbb{R}^{d_{n}^2}$.
Finally, $L_2$ normalisation is applied to the encoding vector~\cite{lin2015bilinear}.
The overall process is conceptually illustrated in Fig.~\ref{fig:pool}.
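A compact sketch of this pooling step, assuming feature maps of shape $(h, w, d_n)$ stored as NumPy arrays (our own illustration, not the reference implementation of~\cite{lin2015bilinear}):
\begin{verbatim}
import numpy as np

def spatio_temporal_cooccurrence(S, T):
    # Outer products of the spatial and temporal feature
    # vectors at each location (i,j), max-pooled over all
    # locations, then L2-normalised: a d_n^2-dim feature F.
    h, w, d = S.shape
    P = np.einsum('ijk,ijl->ijkl', S, T).reshape(h * w, d * d)
    F = P.max(axis=0)
    return F / (np.linalg.norm(F) + 1e-12)
\end{verbatim}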
The spatio-temporal bilinear DCNN feature is combined with the {\it fc6} spatial and temporal features used for {\it Two-Stream (early fusion)}.
This allows us to combine the spatial and temporal information at both the local and global level.
The resultant features are fed to an SVM classifier.
See Fig.~\ref{fig:key_figure}(c) for a conceptual illustration.
We refer to this system as {\it Spatio-Temporal Co-occurrence}.
\begin{figure}[!t]
\centering
\begin{minipage}{1\columnwidth}
\centering
\begin{minipage}{0.80\textwidth}
\centering
\includegraphics[width=1\textwidth]{key_figure_late}\\
~~~~~~(a) Two-Stream (late fusion)
\end{minipage}
\vspace{4ex}
\begin{minipage}{0.90\textwidth}
\centering
~~~~~~\includegraphics[width=1\textwidth]{key_figure_early}\\
~~~~~~(b) Two-Stream (early fusion)
\end{minipage}
\vspace{4ex}
\begin{minipage}{0.90\textwidth}
\centering
~~~~~~~\includegraphics[width=1\textwidth]{key_figure_co}\\
~~~~~~(c) Spatio-Temporal Co-Occurrence
\end{minipage}
\vspace{1ex}
\end{minipage}
\caption
{
Overview of the Two-Stream and Spatio-Temporal Co-Occurrence approaches for fine-grained video classification.
In (a) the Two-Stream approach uses {\it late fusion}, where features are combined from the softmax layer.
In (b) the Two-Stream approach uses {\it early fusion}, where features are combined from the {\it fc6} layer.
The Spatio-Temporal Co-Occurrence approach (c) combines the co-occurrence (bilinear DCNN) features with the features from {\it fc6}.
}
\label{fig:key_figure}
\end{figure}
\newpage
\textcolor{white}{.}
\newpage
\section{VB100 Dataset: Videos of 100 Bird Species}
\label{sec:dataset}
\vspace{1ex}
To investigate video-based fine-grained object classification we propose the VB100 dataset,
a new and challenging dataset consisting of 1,416 video clips of 100 bird species taken by expert bird watchers.
The birds were often recorded at a distance, introducing several challenges such as large variations in scale, bird movement, camera movement and considerable pose variations.
See Fig.~\ref{fig:dataset} for examples.
For each class (species of bird), the following data is provided:
video clips, sound clips, as well as taxonomy and distribution location.
See Fig.~\ref{fig:sample} for an example.
Each class has on average 14 video clips.
The median length of a video is 32 seconds.
The frame rate varies across the videos; approximately 69\% of videos were captured at 30 frames per second (fps), 30\% at 25 fps, and the remaining at 60 and 100 fps.
Often the camera will need to move in order to track the bird, keeping it in view;
this form of camera movement is present in 798 videos, with the remaining 618 videos obtained using either static or largely static cameras.
The dataset can be obtained from:
\href{http://arma.sf.net/vb100/}{\small\textsf{http://arma.sf.net/vb100/}}
\begin{figure}[!t]
\centering
\vspace{1.5ex}
\begin{minipage}{1\columnwidth}
\centering
\includegraphics[width=0.32\columnwidth]{vb100-1-1}
\includegraphics[width=0.32\columnwidth]{vb100-1-2}
\includegraphics[width=0.32\columnwidth]{vb100-1-4}
\end{minipage}
\vspace{1.5ex}
\begin{minipage}{1\columnwidth}
\centering
\includegraphics[width=0.32\columnwidth]{vb100-2-1}
\includegraphics[width=0.32\columnwidth]{vb100-2-2}
\includegraphics[width=0.32\columnwidth]{vb100-2-4}
\end{minipage}
\vspace{1.5ex}
\begin{minipage}{1\columnwidth}
\centering
\includegraphics[width=0.32\columnwidth]{vb100-3-1}
\includegraphics[width=0.32\columnwidth]{vb100-3-2}
\includegraphics[width=0.32\columnwidth]{vb100-3-4}
\end{minipage}
\caption
{
\small
Example frames from video clips in the VB100 dataset.
Each row shows three sample frames for a unique class.
The first frame in each row (left to right) shows an easy situation, followed by images with variations such as pose, scale and background.
}
\label{fig:dataset}
\end{figure}
\begin{figure}[!t]
\vspace{2ex}
\centering
\includegraphics[width=1\columnwidth]{vb100_sample}\\
\vspace{-1ex}
\caption
{
\small
An example for the class {\it Elegant Tern} in VB100.
Top-left: a~still shot from one of the video clips.
Bottom-left: spectrogram created from the corresponding audio file.
Right: taxonomy information.
}
\label{fig:sample}
\end{figure}
\newpage
\section{Experiments}
\label{sec:experiments}
Two sets of experiments are presented in this section.
In the first set (Section~\ref{sec:comparative_evaluation}), we evaluate the performance without taking into account whether each video clip was recorded by a static or moving camera.
In the second set (Section~\ref{sec:static_vs_moving}), we study the effect of camera movement on performance.
In all cases, to obtain a per video classification decision we use the max voting from the classified frames.
For the Spatio-Temporal Co-occurrence approach, initial experiments found that using the last convolutional layer $n=c5$ provided the best performance;
this leads to $d_n^2 = 65,536$ dimensions for the spatio-temporal bilinear features.
The input frame size for all networks is \mbox{$224\times224$}.
Training and testing is performed using Caffe~\cite{jia2014caffe}.
The dataset is divided into 730 training videos (train set) and 686 testing videos (test set).
Results are presented in terms of mean classification accuracy.
Classification accuracy is calculated on a per video basis and per class basis,
with $\mbox{accuracy} = {N^{c}_{p}} / {N^{c}}$,
where $N^{c}_{p}$ is the number of correctly classified videos for the $c$-th class and $N^{c}$ is the number of videos for the $c$-th class.
The mean classification accuracy is then calculated across all of the classes.
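For concreteness, this metric can be sketched as follows (our own illustration with hypothetical variable names):
\begin{verbatim}
def mean_class_accuracy(correct_per_class, total_per_class):
    # Per-class accuracy N_p^c / N^c, averaged over classes.
    accs = [p / n for p, n in zip(correct_per_class,
                                  total_per_class)]
    return sum(accs) / len(accs)
\end{verbatim}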
\subsection{Comparative Evaluation}
\label{sec:comparative_evaluation}
We first investigate the performance of two independent networks for spatial and temporal information: Spatial-DCNN and Temporal-DCNN.
We then compare the performance of
3D~ConvNets~\cite{tran2014learning} fine-tuned for our bird classification task (referred to as 3D~ConvNets-FT),
the two-stream approach~\cite{simonyan2014two} (which combines the Spatial-DCNN and Temporal-DCNN networks),
and the spatio-temporal co-occurrence approach.
Finally we evaluate the performance of the co-occurrence approach in conjunction with an off-the-shelf bird detector/locator.
For this we use the recent Faster Region CNN~\cite{ren2015faster} approach
with default parameters learned for the PASCAL VOC challenge~\cite{everingham2010pascal};
only bird localisations are used, with all other objects ignored.
Examples of localisation are shown in Fig.~\ref{fig:detect}.
\begin{figure}[!b]
\centering
\begin{minipage}{1\columnwidth}
\begin{minipage}{1\textwidth}
\centering
\includegraphics[width=0.32\textwidth]{detect_good1}
\includegraphics[width=0.32\textwidth]{detect_good2}
\includegraphics[width=0.32\textwidth]{detect_good3}
\end{minipage}
~\\
\begin{minipage}{1\textwidth}
\centering
\includegraphics[width=0.32\textwidth]{detect_bad2}
\includegraphics[width=0.32\textwidth]{detect_bad3}
\includegraphics[width=0.32\textwidth]{detect_bad4}
\end{minipage}
\end{minipage}
\caption
{
Examples of bird localisation (red bounding box) using the default settings of Faster R-CNN~\cite{ren2015faster}.
Top row: good localisations.
Bottom row: bad localisations due to confounding textures, clutter, small objects, and occlusions.
}
\label{fig:detect}
\end{figure}
\textbf{Network Setup.}
The Spatial-DCNN uses the AlexNet structure pre-trained on the ImageNet dataset~\cite{krizhevsky2012imagenet} before being fine-tuned for our bird classification task.
It is trained by considering each frame from a video to be a separate instance (image).
Two variants of Spatial-DCNN are used:
(i)~randomly selecting one frame per video clip,
and
(ii)~using 5 frames per second (fps) from each video clip\footnote{The video clips were normalised to 5 fps, as this was computationally more efficient. Preliminary experiments indicated that using 5 fps leads to similar performance as normalising at 25 fps.}.
The Temporal-DCNN uses dense optical flow features computed using the Matlab implementation of Brox et al.~\cite{brox2004high}.
For the sake of computational efficiency, we have calculated the optical flow every 5 frames.
It is generally beneficial to perform zero-centering of the network input,
as it allows the model to better exploit the rectification non-linearities and, for optical flow features, provides robustness to camera movement~\cite{simonyan2014two}.
Therefore, for both Spatial-DCNN and Temporal-DCNN we perform mean normalisation of the input data.
For Spatial-DCNN we subtract the mean value for each RGB channel,
while for Temporal-DCNN mean flow subtraction is performed for the temporal input.
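A sketch of the two normalisations is given below; the exact form of the mean flow subtraction (removing the mean of each flow map itself, which also suppresses global camera motion) is our assumption:
\begin{verbatim}
import numpy as np

def zero_center_rgb(frame, channel_means):
    # Spatial stream: subtract the dataset mean of each
    # RGB channel.
    return frame - np.asarray(channel_means).reshape(1, 1, 3)

def zero_center_flow(flow_map):
    # Temporal stream: subtract the mean of the flow map,
    # reducing global (camera) motion between frames.
    return flow_map - flow_map.mean(axis=(0, 1), keepdims=True)
\end{verbatim}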
For the two-stream approach we use two forms (as described in Section~\ref{sec:two_stream_forms}):
(i)~early fusion, where the first fully connected features (fc6) from the Spatial-DCNN (with 5 fps) and Temporal-DCNN networks are concatenated,
and
(ii)~late fusion, where the softmax output of the two networks is concatenated.
For the two-stream and the spatio-temporal co-occurrence approaches,
the resultant feature vectors are fed to a multi-class linear SVM for classification.
\textbf{Quantitative Results.}
The results presented in Table~\ref{table:baseline_results}
show that using more frames from each video (i.e.~more spatial data) leads to a notable increase in accuracy.
This supports the use of videos for fine-grained classification.
The results also show that spatial data provides considerably more discriminatory information than temporal data.
In all cases, combining spatial and temporal information results in higher accuracy than using either type of information alone,
confirming that the two streams of data carry some complementary information.
In contrast to using late fusion as in the standard two-stream approach,
performing early fusion yields a minor increase in accuracy (from $37.5\%$ to $38.9\%$) and slightly exceeds the accuracy obtained by 3D~ConvNets-FT ($38.6\%$).
Using the co-occurrence approach leads to the highest fusion accuracy of $41.1\%$.
This highlights the importance of making use of the extra information from the video domain for object classification.
Finally, using the Spatio-Temporal Co-occurrence system in conjunction with an automatic bird locator increases the accuracy from $41.1\%$ to $53.6\%$.
This in turn highlights the usefulness of focusing attention on the object of interest and reducing the effect of nuisance variations.
\begin{table}[!b]
\small
\setlength{\tabcolsep}{4pt}
\centering
\caption{Fine-grained video classification results on the VB100 video dataset.}
\label{table:baseline_results}
\begin{tabular}{lc}
\hline\hline\noalign{\smallskip}
{\bf Method} & {\bf Mean Accuracy}\\
\noalign{\smallskip}
\hline\hline
\noalign{\smallskip}
Spatial-DCNN (random frame) & 23.1\% \\
Spatial-DCNN (5 fps) & 37.0\% \\
Temporal-DCNN ($\Delta=5$) & 22.9\% \\ \hline
Two-Stream (early fusion) & 38.9\% \\
Two-Stream (late fusion) & 37.5\% \\
3D~ConvNets-FT & 38.6\% \\ \hline
Bilinear DCNNs~\cite{lin2015bilinear} & 33.8\% \\
Spatio-Temporal Co-occurrence & 41.1\% \\
Spatio-Temporal Co-occurrence + bounding box & 53.6\% \\
\hline
\end{tabular}
\end{table}
\textbf{Qualitative Results.}
To further examine the impact of incorporating temporal information via the co-occurrence approach,
we visualise 10 classes with features taken from the Spatial-DCNN and Spatio-Temporal Co-occurrence approaches.
To that end we use the t-Distributed Stochastic Neighbour Embedding (t-SNE) data visualisation technique based on dimensionality reduction~\cite{van2008visualizing}.
In Fig.~\ref{fig:qualitative_eval} it can be seen that both sets of features yield several distinct clusters for each class.
However, by using the co-occurrence approach fewer separated clusters are formed,
and the separated clusters tend to be closer together.
This further indicates that benefit can be obtained from exploiting temporal information in addition to spatial information.
\begin{figure}[!b]
\centering
\vline
\begin{minipage}{1\columnwidth}
\centering
\hrule
\begin{minipage}{0.45\textwidth}
\centering
\includegraphics[width=1\textwidth]{tsne_spatial_dcnn.png}
\end{minipage}
\hfill
\vline
\hfill
\begin{minipage}{0.45\textwidth}
\centering
\includegraphics[width=1\textwidth]{tsne_spatial_cooc.png}
\end{minipage}
\vline
\hrule
\end{minipage}
\begin{minipage}{1\columnwidth}
\centering
\begin{minipage}{0.45\textwidth}
\centering
~\\
{\small (a)}
\end{minipage}
\begin{minipage}{0.45\textwidth}
\centering
~\\
{\small (b)}
\end{minipage}
\end{minipage}
\caption
{
Qualitative evaluation using t-SNE~\cite{van2008visualizing} to visualise the data for 10 classes indicated by unique colours:
(a)~using Spatial-DCNN features,
and
(b)~using Spatio-Temporal Co-occurrence features.
For both approaches several distinct clusters are formed for each class.
By using the co-occurrence approach fewer separated clusters are formed, and the separated clusters tend to be closer together.
}
\label{fig:qualitative_eval}
\end{figure}
\subsection{Effect of Camera Type: Static vs Moving}
\label{sec:static_vs_moving}
In this section we explore how camera motion affects performance.
Camera motion is a dominant variation within the VB100 dataset as it contains 618 video clips recorded with a static camera and 798 video clips recorded with a moving camera that follows bird movement (e.g., flight).
Fig.~\ref{fig:static} shows examples from two videos of Elegant Tern recorded by static and moving cameras.
Previous work in action recognition~\cite{Jain13_1:conference,Kuehne14_1:conference},
rather than fine-grained object classification,
has presented conflicting results regarding the impact of camera motion.
Jain et al.~\cite{Jain13_1:conference} showed that features which compensated for camera motion improved performance,
while Kuehne et al.~\cite{Kuehne14_1:conference} showed that the presence of camera motion either had little effect or improved performance.
We manually select 21 classes with videos recorded with and without camera movement,
and examine the performance of the Spatial-DCNN, Temporal-DCNN and the Spatio-Temporal Co-occurrence approach.
The setup of the networks is the same as in Section~\ref{sec:comparative_evaluation}.
The results in Table~\ref{table:camera} show that Spatial-DCNN is adversely affected by camera movement with the accuracy dropping from 57.6\% to 47.8\%.
This leads to a similar degradation in performance for the Spatio-Temporal Co-occurrence approach: from 61.1\% to 53.7\%.
We attribute the degradation in performance of the spatial networks to the highly challenging conditions,
such as the difference between a stationary and a flying bird presented in Fig.~\ref{fig:static}.
By contrast, performance of Temporal-DCNN is largely unaffected.
We hypothesise that the Temporal-DCNN is robust to camera movement due to the mean subtraction operation that can reduce the impact of global motion between frames.
To test the above hypothesis we re-trained the Temporal-DCNN without mean subtraction (no zero-norm).
This reduces the performance for the Static case from 32.2\% to 28.9\%, while for the Moving case the performance drops considerably further: from 33.3\% to 23.7\%.
This supports our hypothesis and highlights the importance of the mean subtraction pre-processing stage for temporal features in the presence of camera motion.
\begin{figure}[!b]
\centering
\begin{minipage}{1\columnwidth}
\centering
\includegraphics[width=0.32\textwidth]{camera_moving_03}
\includegraphics[width=0.32\textwidth]{camera_moving_04}
\includegraphics[width=0.32\textwidth]{camera_moving_05}
\end{minipage}
\caption
{
Examples of video frames recorded by a moving camera, manually tracking the bird.
}
\label{fig:static}
\end{figure}
\begin{table}[!b]
\small
\setlength{\tabcolsep}{4pt}
\centering
\caption
{
Effect of static and moving cameras on performance, using a 21 class subset of the VB100 dataset without bounding box detections.
Temporal-DCNN (no zero-norm) is trained without applying mean subtraction to the input features.
}
\label{table:camera}
\begin{tabular}{llc}
\hline\hline\noalign{\smallskip}
{\bf Network} & {\bf Camera Type} & {\bf Mean Accuracy}\hspace{-1ex} \\
\noalign{\smallskip}
\hline\hline
\noalign{\smallskip}
Spatial-DCNN & Static & 57.6\% \\
Spatial-DCNN & Moving & 47.8\% \\ \hline
Temporal-DCNN (no zero-norm) & Static & 28.9\% \\
Temporal-DCNN (no zero-norm) & Moving & 23.7\% \\ \hline
Temporal-DCNN & Static & 32.2\% \\
Temporal-DCNN & Moving & 33.3\% \\ \hline
Spatio-Temporal Co-occurrence & Static & \textbf{61.1\%} \\
Spatio-Temporal Co-occurrence & Moving & \textbf{53.7\%} \\
\hline
\end{tabular}
\end{table}
\section{Main Findings}
\label{sec:conclusion}
In this work, we introduced the problem of video-based fine-grained object classification
along with a challenging new dataset and explored methods to exploit the temporal information.
A~systematic comparison of state-of-the-art DCNN based approaches adapted to the task was performed
which highlighted that incorporating temporal information is useful for improving performance and robustness.
We presented a system that encodes local spatial and temporal co-occurrence information, based on the bilinear CNN, that outperforms 3D~ConvNets and the Two-Stream approach.
This system improves the mean classification accuracy from 23.1\% for still image classification to 41.1\%.
Incorporating bounding box information, automatically estimated using the Faster Region CNN, further improves performance to 53.6\%.
In conducting this work we have developed and released the novel video bird dataset VB100 which consists of 1,416 video clips of 100 bird species.
This dataset is the first for video-based fine-grained classification and presents challenges such as how best to combine the spatial and temporal information for classification.
We have also highlighted the importance of normalising the temporal features, using zero-centering, for fine-grained video classification.
Future work will exploit other modalities by incorporating the audio (sound), taxonomy information, and the textual description of the video clips.
\newpage
\section*{Acknowledgement}
\begin{small}
\noindent
The Australian Centre for Robotic Vision is supported by the Australian Research Council via the Centre of Excellence program.
\end{small}
\balance
\bibliographystyle{ieee}
\section{Introduction}
After nearly a century's development, the theory of quantum mechanics has become an indispensable component of modern science, with many highly accurate experimental verifications of its theoretical predictions. It is well known that in quantum mechanics, in order to guarantee the conservation of probability and keep the energy eigenvalues real, the Hamiltonian of a quantum system should be Hermitian \cite{Shankar}. To study the properties of a quantum system, we have to take into account the interaction between the system and the environment. Normally the system we treat is localized in space and the environment can be considered to be a measuring device. However, there is always a natural environment which is independent of any observer and exists at all times. A quantum system thus should be treated as an open system by including the environment in which it is embedded \cite{IRotter}. Under these circumstances, quantum mechanics with non-Hermitian Hamiltonians can provide an effective way to include the influences of the environment on the system under discussion. Actually, many non-Hermitian Hamiltonians were widely used to treat various problems in the early days, such as free-electron lasers, transverse mode propagation in optical resonators and so on \cite{GDattoli, ASiegman, HCBaker, Moiseyev, Hatano1, Hatano2, Hatano3}. Non-Hermitian quantum mechanics attracted much more attention when it was found that a large class of non-Hermitian Hamiltonians can exhibit entirely real eigenvalues when these systems are $\mathcal{PT}$-symmetric \cite{CMBender1, CMBender2}. These $\mathcal{PT}$-symmetric non-Hermitian systems have been studied in many fields and have been experimentally realized in different physical systems in recent years \cite{Goldsheid, Heinrichs, Molinari, Regensburger, Guo, DKip, Chang, Peng, Fleury, Zhu, Schindler}.
The growing interest in non-Hermitian systems has motivated various discussions and extensions of the Hermitian Hamiltonians studied before, and has brought deeper understanding of quantum systems. For example, the topological properties of non-Hermitian systems are investigated in \cite{Liang, Esaki, Zeuner}. A non-Hermitian tight-binding network engineering method is proposed in \cite{Longhi1}, where it is shown that effective complex non-Hermitian hopping rates can be realized with only complex onsite energies in the network. In \cite{Longhi2}, the author studied the spectral and dynamical properties of a quantum particle constrained on non-Hermitian quantum rings and found that very different behavior of particle motion shows up in the non-Hermitian case. The scattering, propagation and transport problems in non-Hermitian systems have also been investigated and many interesting effects have been found \cite{Mostafa1, BZhu, Chong, Mostafa2, Ambichl, Ramezani, Jing, Garmon}.
Recently, the transmission through non-Hermitian scattering centers has been explored \citep{Li, Jin}, and interesting transport properties have been revealed in these non-Hermitian scattering systems. However, the authors did not check whether, when the Aharonov-Bohm (AB) system is non-Hermitian, the transmission remains independent of the way we allocate the flux phase factor, which originates from the magnetic field threading through the AB ring, to the tunneling amplitudes between the AB ring and the leads, as it is in Hermitian systems. In addition, these papers mainly focus on the $\mathcal{PT}$-symmetric cases, while in fact we can allocate the flux phase factor in different ways so that we can study the $\mathcal{PT}$-symmetric and -asymmetric cases at the same time. Thus we should extend these systems to generalized non-Hermitian situations and check whether these different Hamiltonians with different flux phase factor allocations lead to the same result.
In this paper, we study the transport properties of a non-Hermitian AB ring system as shown in Fig. \ref{fig1}. There are two quantum dots (QDs) embedded in the two arms of the ring and the ring is attached to two metallic leads which are represented by two one-dimensional chains. The energy levels of these two QDs can be complex in order to take into account the physical gain or loss during the interacting processes between the ring and the environment. By allocating the flux phase factor induced by the magnetic flux threading through the ring to the tunneling amplitudes between the QDs and the leads in different ways, the Hamiltonian of the system can be written in different forms. We calculate the transmission of the AB ring by using these different Hamiltonians of the same system and find that the results are equal to each other. The transmission does not depend on the way we distribute the phase factor, which is the same as in the Hermitian case. This proof paves the way for further studies of non-Hermitian AB ring systems. Besides, by checking the conductance spectrum, we find that an asymmetric Fano profile shows up upon tuning the physical gain and loss of the system. Due to the interaction between the QDs and the environment, the two channels through the AB ring will be broadened or narrowed. Electrons traversing these channels of different widths will interfere and result in a Fano effect in the conductance spectrum. This non-Hermitian system provides us with a simple model to check the influences of the environment on an otherwise isolated system.
\begin{figure}[!ht]
\centering
\includegraphics[width=3in]{FIG1.pdf}
\caption{Schematic setup of the Aharonov-Bohm ring system discussed in this paper. $u$ and $d$ are the two QDs embedded in the arms of the ring. The leads coupled to the ring are represented by two one-dimensional tight binding chains. $\phi$ is the magnetic flux threading through the ring.}
\label{fig1}
\end{figure}
The rest of the paper is organized as follows. In Sec. \ref{sec2}, we introduce the model Hamiltonians of the system. In Sec. \ref{sec3}, we calculate the transmissions through the AB ring according to the different Hamiltonians we introduced and compare these results. We also investigate the Fano profile in the conductance spectra of the system in this section. The last section (Sec. \ref{sec4}) is dedicated to a brief summary.
\section{Model and Hamiltonian}\label{sec2}
Consider an Aharonov-Bohm ring with an impurity site or quantum dot in each arm of the ring; see Fig. \ref{fig1}. We can add imaginary potentials to the two QDs in the ring to represent the physical gain or loss during the interacting processes between the ring and the environment. The AB ring is also attached to two metallic leads which are represented by two one-dimensional chains. The Hamiltonian of such a system is
\begin{equation}\label{}
\mathcal{H} = \mathcal{H}_{DQD} + H_{Leads} + H_T,
\end{equation}
where each part of $\mathcal{H}$ is described as follows
\begin{equation}\label{}
\begin{split}
\mathcal{H}_{DQD}=& E_u f_u^\dagger f_u + E_d f_d^\dagger f_d, \\
\mathcal{H}_{Leads}=& -t_0 \sum_{j} (c_j^\dagger c_{j+1} + H.c.), \\
\mathcal{H}_T= - & \sum_{n=u,d} (t_{nL} c_{-1}^\dagger f_n + t_{nR} c_1^\dagger f_n + H.c. ).
\end{split}
\end{equation}
Here, $f_u^\dagger$ ($f_u$) and $f_d^\dagger$ ($f_d$) are the creation (annihilation) operators for the quantum dots embedded in the two arms of the ring, with single energy levels $E_u$ and $E_d$, respectively. When $E_u$ and $E_d$ are both real, the Hamiltonian is Hermitian, while if one or both of them are complex, the Hamiltonian becomes non-Hermitian. $c_j^\dagger$ ($c_j$) is the creation (annihilation) operator at site $j$, with $t_0$ being the hopping amplitude between nearest sites in the chain. $t_{nL(R)}$ is the tunneling amplitude between the QDs and the lead $L$ ($R$). The tunneling amplitudes can be complex due to the allocation of the phase factor which originates from the magnetic field threading through the ring. Owing to gauge freedom, these phase factors can be allocated differently, leading to different Hamiltonians. In the Hermitian case, the physical variables are not influenced by these differences; as we will show later, this physical picture also applies to the non-Hermitian system. We will mainly consider two kinds of Hamiltonians. One is symmetric, with the phase factor distributed evenly among the four tunneling amplitudes
\begin{equation}\label{symcase}
\begin{split}
&t_{uL}= t e^{\frac{i\phi}{4}}, \hspace{0.5cm} t_{dL} = t e^{\frac{-i\phi}{4}},\\
&t_{uR}= t e^{\frac{-i\phi}{4}}, \hspace{0.5cm} t_{dR} = t e^{\frac{i\phi}{4}}.
\end{split}
\end{equation}
However, the Hamiltonian can also be written in an asymmetric form if the coupling strengths are chosen as
\begin{equation}\label{asymcase}
t_{uL}=t e^{i\phi}, \hspace{0.5cm} t_{uR}=t_{dL}=t_{dR}=t.
\end{equation}
We will calculate the transmissions through the Aharonov-Bohm ring with these two different Hamiltonians and will compare the results with the Hermitian system in the following.
\section{Results and discussions}\label{sec3}
Now we calculate the transmission rate through the non-Hermitian AB ring with imaginary potentials. Suppose that the wave function of the system can be written as a linear combination of atomic orbitals, $| \Psi_k \rangle = \sum_n a_{nk} |n\rangle + \sum_{j} a_{jk} |j \rangle$, where $a_{nk}$ and $a_{jk}$ are the probability amplitudes to find the electron with momentum $k$ at the QD site $n=u,d$ in the ring arms or at site $j$ in the leads, respectively. Assume that there is an incoming electron from the left lead, described by a plane wave which is reflected and transmitted at the AB ring. Then we have
\begin{equation}\label{}
\begin{split}
a_{jkL} &= e^{ik\cdot j} + r_{LL} e^{-ik\cdot j}, \hspace{0.5cm} \text{if} \hspace{0.2cm} j<0 \\
a_{jkL} &= \tau_{RL} e^{ik\cdot j}, \hspace{0.5cm} \text{if} \hspace{0.2cm} j>0,
\end{split}
\end{equation}
where $r_{LL}$ and $\tau_{RL}$ are the reflection amplitude in the left lead and transmission amplitude from the left lead to the right lead, respectively. By substituting the wave function of the system into the Schr\"odinger equation $i \frac{\partial}{\partial t} |\Psi_k \rangle = H | \Psi_k \rangle$, we have
\begin{equation}\label{}
\frac{\partial}{\partial t} |\Psi_k \rangle = \dot{a}_{uk}|u\rangle + \dot{a}_{dk} |d\rangle + \sum_j \dot{a}_{jk} |j\rangle,
\end{equation}
and
\begin{equation}\label{}
\begin{split}
& H |\Psi_k \rangle = E_u a_{uk} |u\rangle + E_d a_{dk} |d\rangle \\
& -t_{uL} a_{uk} |-1\rangle - t_{uR} a_{uk} |1\rangle - t_{dL} a_{dk} |-1\rangle - t_{dR} a_{dk} |1\rangle \\
& -t_{uL}^* a_{-1k} |u\rangle - t_{uR}^* a_{1k} |u\rangle - t_{dL}^* a_{-1k} |d\rangle - t_{dR}^* a_{1k} |d\rangle \\
& -t_0 \sum_j a_{j-1,k} |j\rangle - t_0 \sum_{j} a_{j+1,k} |j\rangle.
\end{split}
\end{equation}
Let $a_{jk}(t) = a_{jk} e^{-i\omega t}$ with $\omega = -2t_0 \cos(k)$ being the energy dispersion of the one-dimensional chain. Then we have
\begin{equation}\label{}
i \frac{\partial}{\partial t} |\Psi_k \rangle = \omega a_{uk} |u\rangle + \omega a_{dk} |d\rangle + \sum_j \omega a_{jk} |j\rangle.
\end{equation}
After substituting these into the Schr\"odinger equation, we can get the following equations
\begin{subequations}
\begin{equation}\label{a}
-t_0 r_{LL} + t_{uL} a_{uk} + t_{dL} a_{dk} = t_0
\end{equation}
\begin{equation}\label{b}
-t_0 \tau_{RL} + t_{uR} a_{uk} + t_{dR} a_{dk} =0
\end{equation}
\begin{equation}\label{c}
t_{uL}^* r_{LL} + t_{uR}^* \tau_{RL} + (\omega - E_u) e^{-ik} a_{uk} = -t_{uL}^* e^{-2ik}
\end{equation}
\begin{equation}\label{d}
t_{dL}^* r_{LL} + t_{dR}^* \tau_{RL} + (\omega - E_d) e^{-ik} a_{dk} = -t_{dL}^* e^{-2ik}
\end{equation}
\end{subequations}
From Eq. (\ref{a}) and (\ref{b}), we have
\begin{equation*}
\begin{split}
a_{uk} &= \frac{1}{A} t_0 [t_{dR} (1+r_{LL}) - t_{dL} \tau_{RL} ], \\
a_{dk} &= \frac{1}{A} t_0 [-t_{uR} (1+ r_{LL}) + t_{uL} \tau_{RL} ],
\end{split}
\end{equation*}
with $A$ defined as $A=t_{uL}t_{dR} - t_{dL} t_{uR}$. Substituting $a_{uk}$ and $a_{dk}$ into Eqs. (\ref{c}) and (\ref{d}), we can get the transmission and reflection coefficients, which are shown as follows.
\begin{widetext}
\begin{equation}\label{}
\begin{split}
&\tau_{RL} = \\
&\frac{[ (\omega-E_u)e^{-ik} t_0 t_{dL}^* t_{dR} + (\omega-E_d)e^{-ik} t_0 t_{uL}^* t_{uR} ] (e^{-2ik}-1)}
{A(t_{dL}^* t_{uR}^*-t_{uL}^* t_{dR}^*) - (\omega - E_u)e^{-ik} t_0 ( |t_{dL}|^2 + |t_{dR}|^2) - (\omega - E_d)e^{-ik} t_0 (|t_{uR}|^2 + |t_{uL}|^2) - (\omega - E_u)(\omega - E_d)e^{-2ik} t_0^2}
\end{split}
\end{equation}
and
\begin{equation}\label{}
r_{LL} = \frac{-A t_{uL}^* e^{-2ik} - (\omega - E_u)e^{-ik} t_0 t_{dR} - [ At_{uR}^* - (\omega - E_u) e^{-ik} t_0 t_{dL}] \tau_{RL}}
{At_{uL}^* + (\omega - E_u) e^{-ik} t_0 t_{dR}}.
\end{equation}
When the phase factor is evenly distributed among the four hopping amplitudes, as shown in Eq. (\ref{symcase}), $A=2it^2 \sin \frac{\phi}{2}$, and the transmission coefficient becomes
\begin{equation}\label{}
\tau_1= - \frac{[ (\omega - E_u) e^{i \frac{\phi}{2}} + (\omega - E_d) e^{-i\frac{\phi}{2}} ] (e^{-2ik} - 1) \Gamma}
{(\omega - E_u)(\omega - E_d) e^{-ik} + 2\Gamma(\omega - E_u) + 2 \Gamma (\omega - E_d) + 4 \Gamma^2 e^{ik} \sin^2 \frac{\phi}{2} }
\end{equation}
where $\Gamma = \frac{t^2}{t_0}$.
However, if the phase factor is distributed as in Eq. (\ref{asymcase}), then $A=t^2 (e^{i\phi} -1)$, and the transmission coefficient becomes
\begin{equation}\label{}
\tau_2 = - \frac{[ (\omega - E_u) + (\omega - E_d) e^{-i \phi} ] (e^{-2ik} - 1) \Gamma}
{(\omega - E_u)(\omega - E_d) e^{-ik} + 2\Gamma(\omega - E_u) + 2 \Gamma (\omega - E_d) + 4 \Gamma^2 e^{ik} \sin^2 \frac{\phi}{2} }.
\end{equation}
Evidently, $\tau_2=e^{-i\phi/2} \tau_1$, so the transmission through the AB ring system, which is defined as $T=|\tau|^2$, will be the same for the different Hamiltonians. The transmission can be expressed as
\begin{equation}\label{T}
T = \frac{1}{|B|^2} 4 \Gamma^2 \sin^2 k [ (\omega - E_u)(\omega - E_u^*) + (\omega - E_u)(\omega - E_d^*)e^{i\phi} + (\omega - E_u^*)(\omega - E_d)e^{-i\phi} + (\omega - E_d)(\omega - E_d^*) ],
\end{equation}
where $B=(\omega - E_u)(\omega - E_d) e^{-ik} + 2\Gamma(2\omega - E_u - E_d) + 4 \Gamma^2 e^{ik} \sin^2 (\phi/2)$ is the denominator of the transmission amplitude. So the transmission is not dependent on how we distribute the phase factors in the Hamiltonian even when the system is non-Hermitian.
Now let us consider the non-Hermitian cases with and without $\mathcal{PT}$-symmetry. If $E_u= \epsilon + i\gamma$ and $E_d=\epsilon - i\gamma$, the Hamiltonian is $\mathcal{PT}$-symmetric when the phase factor is allocated as in Eq. (\ref{symcase}), and the transmission of the system is
\begin{equation}\label{}
T_1=\frac{1}{|B|^2} 4 \Gamma^2 \sin^2 k \{ 2[(\omega - \epsilon)^2 + \gamma^2] + 2\cos \phi (\omega - \epsilon)^2 + 4\gamma \sin \phi (\omega - \epsilon) - 2 \gamma^2 \cos \phi \}.
\end{equation}
However, if the Hamiltonian is written in a form without $\mathcal{PT}$-symmetry, as in Eq. (\ref{asymcase}), the transmission of the system becomes
\begin{equation}\label{}
T_2=\frac{1}{|B|^2} 4 \Gamma^2 \sin^2 k \{ 2[(\omega - \epsilon)^2 + \gamma^2] + 2\cos \phi (\omega - \epsilon)^2 + 4\gamma \sin \phi (\omega - \epsilon) - 2 \gamma^2 \cos \phi \},
\end{equation}
with $B=[(\omega - \epsilon)^2 + \gamma^2] e^{-ik} + 4\Gamma(\omega - \epsilon) + 4\Gamma^2 e^{ik} \sin^2(\phi/2)$. Thus $T_1 = T_2$: the transmissions calculated using the $\mathcal{PT}$-symmetric and -asymmetric Hamiltonians are the same.
\end{widetext}
In fact, if only one of the two QDs carries an imaginary potential, namely only $E_u$ or $E_d$ is complex, the transmissions calculated from those different Hamiltonians are also equal to each other, as shown in Eq. (\ref{T}), though the Hamiltonian can no longer be $\mathcal{PT}$-symmetric. So as long as imaginary potentials are added to the QDs in the arms of the AB ring, the transmission of the system does not depend on the form of the Hamiltonian we write down.
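This gauge independence can also be verified numerically. The following short Python script (our own illustration; the parameter values are arbitrary) evaluates $\tau_1$ and $\tau_2$ from the expressions above and confirms that $|\tau_1|^2=|\tau_2|^2$:
\begin{verbatim}
import numpy as np

t0, t, k, phi = 1.0, 0.3, np.pi / 3, 0.7
Eu, Ed = 0.2 + 0.05j, -0.1 - 0.02j   # complex dot levels
w = -2 * t0 * np.cos(k)              # lead dispersion
G = t ** 2 / t0                      # Gamma

B = ((w - Eu) * (w - Ed) * np.exp(-1j * k)
     + 2 * G * (2 * w - Eu - Ed)
     + 4 * G ** 2 * np.exp(1j * k) * np.sin(phi / 2) ** 2)
pref = -(np.exp(-2j * k) - 1) * G / B
tau1 = pref * ((w - Eu) * np.exp(1j * phi / 2)
               + (w - Ed) * np.exp(-1j * phi / 2))
tau2 = pref * ((w - Eu) + (w - Ed) * np.exp(-1j * phi))
assert np.isclose(abs(tau1) ** 2, abs(tau2) ** 2)
\end{verbatim}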
Next let's investigate the conductance properties of the system. The conductance through the AB ring is defined as $G=\frac{2e^2}{h} T$. Here we mainly focus on the conductance at the Fermi energy ($k=\pi/2$ then $\omega=0$) and we have
\begin{equation}\label{}
\tau(k=\frac{\pi}{2}) = -i \frac{2\Gamma (E_u e^{i \frac{\phi}{2}}+E_d e^{-i\frac{\phi}{2}})}{(E_u - 2i\Gamma)(E_d -2i\Gamma)+4\Gamma^2 \cos^2 \frac{\phi}{2}}.
\end{equation}
This is very similar to the expression in \cite{Agundez} except that the energy levels of the QDs are now complex. Due to the coupling to the leads, the level of each QD will be broadened, and the width is represented by $2\Gamma$. Since $E_u$ and $E_d$ are complex, we can set $E_{u/d} = \epsilon_{u/d} + i \gamma_{u/d}$ with $\epsilon_{u/d}$ and $\gamma_{u/d}$ being real; then the widths of the energy levels for the $u$ dot and the $d$ dot are $(2\Gamma-\gamma_u)$ and $(2\Gamma-\gamma_d)$, respectively. So if $\gamma_u \neq \gamma_d$, these two levels will have different widths, one broader and the other relatively narrow. Then there are two channels in this system; with appropriate parameters, the broader channel can be taken as a continuous background while the narrow one acts as a discrete resonant channel. Electrons traversing these two different channels will interfere with each other and lead to the asymmetric Fano profile in the conductance spectra \cite{Fano, Mirosh}. We have supposed that the dots are symmetrically coupled to the leads, so the only way to change the widths of the two QD energy levels differently is by tuning $\gamma$, namely by tuning the physical gain or loss originating from the interaction between the QDs and the environment. The situation becomes clearer when $\gamma_u$ is positive while $\gamma_d$ is negative, since then one of the energy levels will be broadened while the other will be narrowed down, and thus the interference of electrons traveling through these two channels will make the Fano profile more significant.
In Fig. \ref{fig2}, we present the conductance spectra of the AB ring system with different physical gain and loss. We take $t_0=1$ as the energy unit and set $\Gamma=0.1t_0$ throughout this paper. When there is no physical gain or loss in the system, the conductance spectra are always symmetric (Fig. \ref{fig2}(a)). When $\gamma_u \neq \gamma_d$, as shown by the red dashed curves in Fig. \ref{fig2}(b), (c) and (d), the asymmetric Fano lineshape shows up when $\phi=\pi/2$. The Fano profile becomes sharper when the difference between $\gamma_u$ and $\gamma_d$ gets larger. Besides, there is a dip when $\phi=0$, denoted by the blue solid line, and the dip reaches zero when $\gamma_u=\gamma_d$ (see Fig. \ref{fig2}(d)). When $\phi=\pi$, the conductance keeps the symmetric Lorentzian shape (or zero) in situations with (or without) physical gain and loss, as represented by the black dot-dashed line in the figure.
\begin{figure}[!ht]
\centering
\includegraphics[width=3in]{FIG2.pdf}
\caption{(Color online) The conductance through the AB ring system with different physical gain and loss when $\phi=0.0$ (blue solid curve), $\phi=0.5\pi$ (red dashed curve) and $\phi=\pi$ (black dot-dashed curve). }
\label{fig2}
\end{figure}
Actually, we can choose $\epsilon_u=\epsilon_d=\epsilon$ and $\gamma_u = \gamma_d=\gamma$, which corresponds to a situation with balanced physical gain and loss in these two QDs. Then the conductance of the system becomes
\begin{equation}\label{G}
G=\frac{2e^2}{h}\cos^2 \frac{\phi}{2} \frac{(\epsilon - \gamma \tan \frac{\phi}{2})^2}{\epsilon^2 + \alpha^2},
\end{equation}
where $\alpha=\frac{\epsilon^2 - (4\Gamma^2 \sin^2 \frac{\phi}{2} - \gamma^2)}{4\Gamma}$. This is very similar to the standard formula for Fano resonance profile which is defined as \cite{Fano, Mirosh}
\begin{equation}\label{}
\sigma = \frac{(\epsilon + q)^2}{\epsilon^2 +1},
\end{equation}
where $q$ is the asymmetry parameter. We can simply take $q=-\gamma \tan (\phi/2)$ in Eq. (\ref{G}). When $\phi=0$, $q=0$, the conductance profile is symmetric and there is a dip down to zero, as shown by the blue solid line in Fig. \ref{fig2}(d). When $\phi=\pi$, $q=-\infty$, and the conductance shows the standard symmetric Lorentzian shape, as the black dot-dashed curve shows. When $\phi=0.5\pi$, we have $q=-\gamma$; the asymmetric Fano profile appears and the dip shows up at $\epsilon=-q$, just as the red dashed line indicates. Though we do not normalize the conductance, all the characteristics revealed in the conductance spectrum of our system are consistent with the standard Fano profiles. The distinctive feature of this non-Hermitian system is that the broadened widths of the QD energy levels, and thus the widths of the channels, are controlled by the physical gain and loss of the system. So the influences of the environment are directly reflected in the behavior of the conductance spectrum.
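For reference, Eq. (\ref{G}) can be evaluated directly; a minimal Python sketch of our own, with the conductance expressed in units of $2e^2/h$:
\begin{verbatim}
import numpy as np

def conductance(eps, gamma, Gamma, phi):
    # Eq. (G): a Fano profile with asymmetry parameter
    # q = -gamma * tan(phi / 2); in units of 2e^2 / h.
    alpha = (eps ** 2
             - (4 * Gamma ** 2 * np.sin(phi / 2) ** 2
                - gamma ** 2)) / (4 * Gamma)
    return (np.cos(phi / 2) ** 2
            * (eps - gamma * np.tan(phi / 2)) ** 2
            / (eps ** 2 + alpha ** 2))
\end{verbatim}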
Another aspect worth noticing is that the maximum of the conductance will exceed $2e^2/h$ when $\gamma_{u/d}$ becomes large, which indicates that probability is not conserved in this system due to the non-Hermiticity.
\section{Summary}\label{sec4}
We investigate a non-Hermitian Aharonov-Bohm ring system in which the energy levels of the two quantum dots (QDs) embedded in the two arms of the ring can be complex. The complex energy levels represent the physical gain or loss during the interacting process between the AB ring and the environment. Due to the magnetic flux threading through the ring, the Hamiltonian of this model can be written in different forms by distributing the phase factor induced by the magnetic field differently among the hopping amplitudes between the QDs and the leads. We calculate the transmission through the AB ring using these different Hamiltonians, including $\mathcal{PT}$-symmetric and -asymmetric cases, and find that it does not depend on the way we distribute the phase factor in the Hamiltonian, which is the same as in the Hermitian case. In addition, by checking the conductance spectrum, we find that the asymmetric Fano profile can show up by just tuning the physical gain and loss of the system. The interaction between the QDs and the environment will broaden or narrow the two channels through the ring, and electrons traveling through the different channels will interfere and result in a Fano effect. So the influence of the environment is revealed in the transport properties of the system. The non-Hermitian Aharonov-Bohm ring system we discussed in this paper provides a simple model to check the influence of the environment on an otherwise isolated system and a demonstration of the basic principles of quantum mechanics. The proof we provide here also paves the way for further studies of non-Hermitian AB ring systems.
\section*{Acknowledgments}
This work has been supported by the NSFC under Grant No. 11274195 and the National Basic Research Program of China (973 Program) Grant No. 2011CB606405 and No. 2013CB922000. Shu Chen is supported by NSFC under Grants No. 11425419, No. 11374354 and No. 11174360, and the Strategic Priority Research Program (B) of the Chinese Academy of Sciences (No. XDB07020000).
\section{Introduction}
The themes of the research programs of the
TEXONO (Taiwan EXperiment On NeutrinO) Collaboration
are on the studies of
low energy neutrino and dark matter physics.
The Collaboration
realized the first particle physics experiment
in Taiwan at the
Kuo-Sheng Reactor Neutrino Laboratory (KSNL)
and, through the process,
the first basic research
collaboration among researcher scientists
from Taiwan and China~\cite{science2003}.
The efforts of the starting decade
catalyzed and laid the foundation to
the establishment of the
China Jinping Underground Laboratory (CJPL) in China,
together with its first generation China Dark matter
EXperiment (CDEX).
The history and evolution
of the TEXONO story is given
in the following Sections.
The scientific objectives,
status, results and prospects of
the neutrino physics program at KSNL,
as well as the underground physics program at CJPL,
are discussed.
Also surveyed is the
theory program inspired by
the experimental activities.
\section{History and Evolution}
\subsection{Foundation}
Phenomenal growth in basic and applied
research has been taking place in the Asia Pacific
region in the 1980's and 1990's~\cite{nature1996}.
As the economy strengthened,
research projects in new and advanced subjects
were initiated.
Research infrastructures,
resources and positions were made available.
Research directions and traditions were explored
and defined, with far-reaching consequences
beyond their original subject matters.
Activities in experimental particle physics
started in Taiwan in the early 90's.
The starting projects involved participation
in international experiments, including
L3, CDF, Belle, AMS, RHIC-PHOBOS, with
contributions on various detector hardware,
data acquisition software and data analysis projects.
It became natural, almost inevitable, that
serious thoughts were given to an attempt to
``perform an experiment at home'', where
local researchers would take major responsibilities
in its conception, formulation, design, construction,
execution and scientific harvest.
Chang Chung-Yun (University of Maryland,
while on a sabbatical year at the Academia Sinica (AS))
and Lee Shih-Chang (AS) initiated
a research program towards such goals in 1996.
This ambition soon found a resonating chord with
Zheng Zhi-Peng (then Director of the Institute of
High Energy Physics (IHEP), Beijing, China) who
mobilized his Institute to participate.
Therefore, the project would carry with it
an additional pioneering spirit of being
the first collaboration in basic
research among scientists from Taiwan and China,
standing on a decade of mutual visits and
academic exchanges.
It was obvious that
many scientific, technical, administrative
and logistic issues
have to be ironed out to make
advances on these virgin grounds.
The author (H.T. Wong) was recruited by AS
in 1997 to take up the challenges of realizing such
visions. The Chinese groups were headed
by Li Jin (senior physicist at IHEP).
The TEXONO Collaboration was formed, where
the founding partners comprised institutes
from Taiwan (AS, Institute of
Nuclear Energy Research, Taiwan Power Company
Nuclear Station II, National Taiwan University
and National Tsing-Hua University),
China (IHEP,
China Institute of Atomic Energy (CIAE),
Nanjing University) and the USA (University
of Maryland).
The operation center has since been at
AS in Taiwan,
and the AS-group has been leading and coordinating
the efforts.
By the mid-2000's, the TEXONO Collaboration
has established research facilities and infra-structures,
formulated interesting research programs
and produced world-level scientific results.
International partners
from India (Banaras Hindu University)
and Turkey (Middle East Technical University,
Dokuz Eyl\"{u}l University) joined the
research program
and contribute to various major items.
In particular, an international
graduate student training scheme
was set up as part of the operation.
Numerous graduate students from China, India
and Turkey have been stationed long-term
at AS to pursue research within the
TEXONO program and produced research theses
from the outcomes.
\subsection{Research Directions and Strategies}
Anomalous results from solar and atmospheric neutrino
measurements~\cite{pdg-nuosc}
were gathering momentum in the 1990's,
culminating in the presentation of evidence of
neutrino oscillation by the Super-Kamiokande
experiment in 1998~\cite{neutrino1998}.
The case of having missing energy density in the
Universe in the form of Dark Matter was also getting
increasingly compelling. Non-accelerator based particle
physics was at its early stage and constituted
a good opportunity for a start-up research
group to move into.
Reactor neutrino was identified in
the foundation days as a
realistic platform for the TEXONO program.
One needs neutrino sources to study neutrino
physics, and reactor neutrino is an intense
and understood neutrino source, available for free
as a by-product of commercial power
reactor operation, and allows systematic studies
and control through Reactor ON/OFF comparison.
And $-$ most importantly, there are operating
power reactor plants in Taiwan within comfortable
commuting distance from AS.
The early target was a long baseline
reactor neutrino oscillation experiment~\cite{npbps97}.
Feasibility studies and a liquid scintillator R\&D program
were pursued~\cite{nim-liqscin} in the first years.
However, intense competition in the world stage
called for the necessity to re-consider such directions.
The Chooz experiment had just begun to produce results
while KamLAND was already advanced in securing
resources and had started
hardware construction~\cite{neutrino1998}.
It was necessary for the TEXONO program to
identify its niche, based on honest assessment
of its strength in terms of accessible resources,
manpower pool, experience and expertise.
The aspiration
was that the first experiment should,
on its own, be able to produce valid scientific results.
The science subjects evolved to the studies
of neutrino interactions and properties, which
benefit from a location close to the reactor core
having an intense neutrino flux.
This would be complementary to
the neutrino oscillation
programs being pursued world-wide~\cite{pdg-nuosc},
where the experiments would
require baselines of kilometers or longer,
translating directly into large detector size
while the optimal detector technology (liquid scintillator)
and its technical details have mostly been
defined.
Reactor neutrino experiments before TEXONO
were all based on measurements of events at
M$\rm{eV_{ee}}$ (electron-equivalent energy $\rm{eV_{ee}}$ is
used throughout this article as unit to detector
response, unless otherwise stated)
or above, therefore only sampling the tail of
the reactor neutrino spectra.
The TEXONO program would open the previously-unexplored
detector window in the low energy regime.
To realize these goals,
we selected detector techniques where
``the best technology is in the market''
$-$ namely, scintillating crystal detectors~\cite{ap-cryscin}
and germanium (Ge) ionization detectors.
With the benefit of hindsight,
this important strategic decision
allowed a new group with ``zero background''
in running a particle
physics experiment to get lifted off
and start its flight in a
relatively short time.
The stage is thus set for the construction
of the KSNL reactor laboratory and for the formulation
of the details of its research programs.
As a record and for completeness,
the TEXONO group, together with
international partners, has considered
the possibility and performed feasibility
studies for a reactor measurement~\cite{theta13WP}
of the oscillation angle $\theta_{13}$ in
the early days of its formulation in 2003.
The merits of both the Kuo-Sheng Power Plant and the
new ``Nuclear Station IV'' at Lung-Men
were investigated.
Compared with other site proposals,
the Kuo-Sheng location lacks high mountains in
the appropriate distance, whereas Lung-Men Power Plant,
while having larger overburden, suffered from the lack
of a definite plan of starting operation
(it {\it still} does not operate in 2016!).
In addition, both plants are two-cores facilities,
weak in total neutrino flux compared to other
international proposals based at locations
with six cores or more.
Accordingly, this line of
research was not pursued after the
initial round of investigations.
A Taiwan team with three university groups
subsequently participated
in the Daya Bay experiment which provided
the best measurement of $\theta_{13}$~\cite{dayabay}
at the six-core Daya Bay Nuclear Power complex
in southern China.
\section{Kuo-Sheng Reactor Neutrino Laboratory
and the TEXONO Research Programs}
\subsection{The Facility}
The Kuo-Sheng Reactor Neutrino Laboratory (KSNL)
is located at a distance of 28~m from the core \#1
of the Kuo-Sheng Nuclear Power Station operated
by the Taiwan Power Company
at the northern shore of Taiwan.
The nominal thermal power output is 2.9~GW.
Conceptual design discussions were initiated in 1997.
First physics data taking started in July 2001.
A schematic view is depicted in Figure~\ref{fig::ksnlsite}a.
\begin{figure}
\begin{center}
{\bf (a)}\\
\includegraphics[width=8.0cm]{FIGURES/f1a-KSNL-Schematics.pdf}\\
{\bf (b)}\\
\includegraphics[width=8.0cm]{FIGURES/f1b-KSNL-Shielding.pdf}
\caption{
(a) Schematic side view, not drawn to scale,
of the Kuo-Sheng Nuclear Power Station
Reactor Building,
indicating the experimental site.
The reactor core-detector distance is about 28~m.
(b) Schematic layout of the general purpose
inner target space,
passive shieldings and cosmic-ray veto panels.
}
\label{fig::ksnlsite}
\end{center}
\end{figure}
A multi-purpose ``inner target'' detector space of
100~cm$\times$80~cm$\times$75~cm is
enclosed by 4$\pi$ passive shielding materials
which have a total weight of about 50 tons.
The shielding provides attenuation
to the ambient neutron and gamma background, and
consists of, from inside out,
5~cm of OFHC copper, 25~cm of boron-loaded
polyethylene, 5~cm of steel, 15~cm of lead,
and cosmic-ray veto scintillator panels.
The schematic layout of the shielding
structure is shown in Figure~\ref{fig::ksnlsite}b.
Different detectors can be placed in the
inner space for the different scientific programs.
The detectors are read out by a general purpose
electronics and data acquisition (DAQ) system.
Earlier versions of home-made
electronics and VME-based DAQ~\cite{early-eledaq}
evolved into the current commercial PXI-DAQ system
with Flash Analog-to-Digital converters
and Field Programmable Gate Arrays, which provides
real-time processing capabilities, with
DAQ software based on LabView packages.
The reactor laboratory is connected
via telephone line
(internet access not available to the reactor buildings)
to the home-base laboratory at AS, where remote access
and monitoring are performed regularly.
The data storage
capacities are about 2~Tbytes
{\it in situ} at KSNL and
500~Tbytes at the operation
base at AS.
\subsection{Reactor Neutrino Source}
The standard operation of the Kuo-Sheng Power Plant
includes about 18~months
of Reactor ON time at nominal power
followed by a Reactor OFF outage period
of about 50~days.
Reactor operation data on the thermal power output
and control rod status
as functions of time and locations within the core
are provided, when necessary, to the experiment by the
Power Station.
The $\bar{\nu}_e$'s emitted in
power reactors
are predominantly
produced through $\beta$-decays of
(a) the fission products, following the
fission of the four dominant fissile isotopes:
$^{235}$U, $^{238}$U, $^{239}$Pu and $^{241}$Pu,
and
(b) $^{239}$U, following
the neutron capture on the $^{238}$U fuel:
$^{238}$U(n,$\gamma$)$^{239}$U.
\begin{figure}
\begin{center}
{\bf (a)}\\
\includegraphics[width=8.0cm]{FIGURES/f2a-Rnu-Spectra.pdf}\\
{\bf (b)}\\
\includegraphics[width=8.0cm]{FIGURES/f2b-Recoil-Spectra.pdf}
\caption{
(a)
Typical total reactor $\bar{\nu}_e$ spectrum~\cite{texonomunu,texononue},
normalized per fission in 1-MeV energy bins.
(b)
The observable recoil spectra due to
reactor-$\bar{\nu}_e$ interactions on Ge target
via Eq.~\ref{eq::nuai}
with $\phi ( \bar{\nu}_e ) = 10^{13}~{\rm cm^{-2} s^{-1}}$,
neutrino magnetic moment
and neutrino milli-charge fraction
at the current bounds from direct experimental searches:
$\mu_{\nu} = 2.9 \times 10^{-11} ~ \rm{\mu_B}$~\cite{gemma} and
$| \delta_{\rm Q} | = 1.1 \times 10^{-12}$~\cite{numq},
respectively. Overlaid are the SM
$\bar{\nu}_e$-e scattering and the coherent $\bar{\nu}_e$-N scattering contributions.
Quenching effects of nuclear recoils are taken into account.
}
\label{fig::rnu+diffcs}
\end{center}
\end{figure}
The reactor neutrino spectra ($\rm{\phi ( \bar{\nu}_e ) }$)
as a function of neutrino energy
($E_{\nu}$) due to the individual components
are summed as a function of time
according to the relative contributions
per fission~\cite{texonomunu,texononue},
and a typical combined reactor neutrino spectrum
is shown in Figure~\ref{fig::rnu+diffcs}a.
The typical total flux at KSNL site is
$\rm{6.4 \times 10^{12} ~ cm^{-2} s^{-1} }$.
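The quoted flux can be cross-checked with a back-of-the-envelope estimate.
The following sketch assumes the standard values of about 200~MeV of thermal
energy released and about six $\bar{\nu}_e$ emitted per fission, which are
supplied here and not quoted in this article:
\begin{verbatim}
import math

P_TH = 2.9e9                   # thermal power [W]
D = 28.0e2                     # core-detector distance [cm]
E_FISSION = 200.0 * 1.602e-13  # assumed energy release per fission [J]
NU_PER_FISSION = 6.0           # assumed anti-neutrinos per fission

fissions_per_s = P_TH / E_FISSION
flux = NU_PER_FISSION * fissions_per_s / (4.0 * math.pi * D**2)
print(f"{flux:.1e} /cm^2/s")   # ~5.5e12, same order as the quoted 6.4e12
\end{verbatim}
This reproduces the quoted flux to within the uncertainties of the assumed
per-fission values.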
\subsection{Low Energy Neutrino Physics}
Investigations of
neutrino properties and interactions
can reveal physics within the Standard Model (SM)
and probe new physics beyond it (BSM).
The KSNL site provides an intense flux
of $\bar{\nu}_e$ and is ideal for such investigations.
The nuclear and electron recoil differential spectra
due to reactor $\bar{\nu}_e$
as a function of measurable recoil energy $T$
are depicted in
Figure~\ref{fig::rnu+diffcs}b,
showing signatures due to both SM interactions
and BSM neutrino electromagnetic effects
at the current limits.
The physics potential becomes richer
with lower detector thresholds.
New detector technologies are necessary
to open new windows of measurable energy.
The objectives of our current
detector R\&D program
are to develop detectors with
modular mass of $\mathcal{O}$(1~kg),
physics threshold
of $\mathcal{O} ( 100 ~ \rm{eV_{ee}} )$ and background
level at threshold of $\mathcal{O}( 1~\rm{kg^{-1}k\rm{eV_{ee}}^{-1}day^{-1}})$~\cite{texonoprogram}.
Intense research efforts are invested
in the operation, optimization and efficiency measurements
of Ge-detectors at sub-keV sensitivities~\cite{texono-Ge-RandD,canberra},
crucial to the studies of neutrino-nucleus coherent scattering
and to light Dark Matter searches discussed in the
following Sections.
Complementary to and supporting the neutrino physics
and low energy detector programs is the acquisition
of low background techniques crucial for the
low count-rate experiments. Radio-purity
levels of various hardware components
were measured with different techniques.
In particular, the TEXONO-CIAE group
explored a new arena and
performed trace contamination measurements on
crystal and liquid scintillator materials
with accelerator mass spectrometry techniques~\cite{ciaeams}.
\subsubsection{Neutrino Electromagnetic Properties}
An avenue of BSM is the study
of possible neutrino electromagnetic
interactions~\cite{nuem-review}
on atomic target $A$,
via the interaction:
\begin{equation}
\label{eq::nuai}
\bar{\nu}_e ~ + ~ A ~ \rightarrow ~
\nu_X ~ + ~ A^+ ~ + ~ e^- ~~.
\end{equation}
The target can be taken as free electrons
at $T$ above the atomic energy scale.
Otherwise, atomic physics effects
have to be taken into account~\cite{nuemai}.
The neutrino magnetic moment ($\mu_{\nu}$) is an
intrinsic neutrino property
that describes possible
neutrino-photon couplings via its spin~\cite{munureview,texonomunu}.
The helicity is flipped in $\mu_{\nu}$-induced interactions.
Observations of $\mu_{\nu}$ at levels relevant to present or future
generations of experiments would strongly favor the case of
Majorana neutrinos~\cite{naturalnessth}.
The differential cross-section with reactor $\bar{\nu}_e$
is depicted in Figure~\ref{fig::rnu+diffcs}b,
and is given by
\begin{equation}
\label{eq::mm}
( \frac{ d \sigma }{ dT } ) _{\mu_{\nu}} ~ = ~
\frac{ \pi \alpha _{em} ^2 {\it \mu_{\nu} } ^2 }{ m_e^2 }
\left[ \frac{ 1 - T/E_{\nu} }{T} \right]
\end{equation}
above the atomic energy regions ($T > 10~{\rm keV}$ for Ge).
The $\mu_{\nu}$ contributions
are enhanced at low energy with modifications of
the atomic binding energy effects~\cite{nuemai,nmmai,munuai10}.
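As a numerical illustration of Eq.~\ref{eq::mm} in the free-electron regime,
the following sketch evaluates the cross-section in laboratory units; the
physical constants and the $(\hbar c)^2$ conversion factor are standard
values supplied here, not taken from this article:
\begin{verbatim}
import math

ALPHA = 1.0 / 137.036   # fine-structure constant
M_E = 0.511             # electron mass [MeV]
HBARC2 = 3.894e-22      # (hbar*c)^2 [MeV^2 cm^2], converts MeV^-3 to cm^2/MeV

def dsigma_dT_mm(T, E_nu, mu_nu):
    # Eq. (mm): T, E_nu in MeV; mu_nu in Bohr magnetons; result in cm^2/MeV.
    if not 0.0 < T < E_nu:
        return 0.0
    return math.pi * ALPHA**2 / M_E**2 * HBARC2 * mu_nu**2 * (1.0/T - 1.0/E_nu)

# 1/T enhancement: lowering T from 100 keV to 10 keV gains a factor ~10
r = dsigma_dT_mm(0.010, 2.0, 2.9e-11) / dsigma_dT_mm(0.100, 2.0, 2.9e-11)
print(r)
\end{verbatim}
The $1/T$ behaviour evaluated here is the reason a lower detector threshold
directly translates into a better $\mu_{\nu}$ sensitivity.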
\begin{figure}
\begin{center}
\includegraphics[width=8.0cm]{FIGURES/f3-Ge-BaselineDesign.pdf}
\caption{
Schematic layout of the Ge-detector
with its anti-Compton detectors
as well as inner shieldings and
radon purge system.
This is the baseline design
for Ge-experiments at KSNL and
CDEX-1.
}
\label{fig::ge-blndesign}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=8.0cm]{FIGURES/f4-Ge-Residual.pdf}
\caption{
The residual plot
on the combined Reactor ON data
over the OFF-background spectra.
The allowed 2$\sigma$-band for the search
of neutrino magnetic moments is superimposed~\cite{texonomunu}.
}
\label{fig::munuresidual}
\end{center}
\end{figure}
The neutrino spin-flavor precession (SFP) mechanism,
with or without matter
resonance effects in the solar medium,
has been used to explain solar neutrino
deficit~\cite{sfpsolar}. This scenario
is compatible with all solar neutrino data
before 2003 until the terrestrial
KamLAND experiment selected
the scenario of neutrino oscillation at
large mixing angle~\cite{kamland}
as {\it the} solution to the
solar neutrino problem~\cite{pdg-nuosc}.
The TEXONO program pioneered the studies
of neutrino physics in the low energy ($T \ll 1~{\rm M\rm{eV_{ee}}}$)
regime~\cite{texonomunu}.
The $\mu_{\nu}$-experiment adopted
an ultra-low-background
high-purity germanium
detector of mass 1.06~kg surrounded by
NaI(Tl) and CsI(Tl) crystal scintillators
as anti-Compton detectors, as schematically
depicted in Figure~\ref{fig::ge-blndesign}.
The setup was placed in the inner volume
within the shielding structure of Figure~\ref{fig::ksnlsite}b.
A detection threshold of 5~k$\rm{eV_{ee}}$ and
a background level of 1~$\rm{kg^{-1}k\rm{eV_{ee}}^{-1}day^{-1}}$ at KSNL
near threshold were achieved.
The reactor $\rm{\phi ( \bar{\nu}_e ) }$ below
2~MeV is poorly modelled and contributed
to the systematic uncertainties~\cite{lenu} of
earlier experiments in the M$\rm{eV_{ee}}$ range~\cite{munu}.
At $T \ll E_{\nu} \sim {\rm MeV}$,
the potential $\mu_{\nu}$-signal rate
is much increased due to the {\it 1/T} dependence
of Eq.~\ref{eq::mm},
and is significantly higher than the
SM $\bar{\nu}_e$-e ``background'',
making the background uncertainties less important.
The $\mu_{\nu}$-rate is mostly
independent of $E_{\nu}$ at
$T \sim$10$-$100~k$\rm{eV_{ee}}$, such that it
depends only on the well-known total reactor neutrino flux
but not on the details of $\rm{\phi ( \bar{\nu}_e ) }$,
thereby reducing the systematic uncertainties.
Based on 570.7 and 127.8 days of
Reactor ON and OFF data, respectively,
a limit of
\begin{equation}
\label{eq::munulimit}
\mu_{\nu} (\bar{\nu}_e) < 7.4 \times 10^{-11} ~ \rm{\mu_B}
\end{equation}
at 90\% confidence level (CL) was derived.
This result improved over existing limits~\cite{munu}
and probed the $\mu_{\nu}$-SFP scenario
at the relevant range to the solar neutrino problem~\cite{sfpsolar}.
The residual Reactor ON$-$OFF spectrum
is displayed in Figure~\ref{fig::munuresidual}.
An analogous process to $\mu_{\nu}$-interactions
is the neutrino radiative decay~\cite{rdkmunu}
$\nu_i ~ \rightarrow ~ \nu_j ~ + ~ \gamma$
where a change of the
neutrino helicity-states takes place
and a final-state real photon is produced.
The decay rate $\Gamma _{ij}$ and
the decay lifetime $\tau _{ij}$
are related to $\mu_{ij}$ via
\begin{equation}
\label{eq::rdk}
\Gamma_{ij} = \frac{ 1 }{ \tau _{ij} } =
\frac{1}{8 \pi} \frac{( m_i^2 - m_j^2 ) ^ 3}{m_i^3}
\mu_{ij}^2 ~ ~ ,
\end{equation}
where $m_{i,j}$ are the masses for
$\nu_{i,j}$.
The $\mu_{\nu}$-limit in Eq.~\ref{eq::munulimit}
translates to indirect bounds:
\begin{eqnarray}
\frac{\tau_{13}}{m_{1}^3} ( {\rm I:} \nu_1 \rightarrow \nu_3 )
& > & 3.2 \times 10 ^{27} ~ {\rm s / eV^3} \nonumber
\\
\frac{\tau_{23}}{m_{2}^3} ( {\rm I:} \nu_2 \rightarrow \nu_3 )
& > & 1.2 \times 10 ^{27} ~ {\rm s / eV^3}
\\
\frac{\tau_{21}}{m_{2}^3} ( {\rm N/I:} \nu_2 \rightarrow \nu_1 )
& > & 5.0 \times 10 ^{31} ~ {\rm s / eV^3 } \nonumber
\end{eqnarray}
for the normal(N) or inverted(I)
neutrino mass hierarchies~\cite{pdg-nuosc} at 90\% CL,
and are much more stringent than those from the direct
search experiments.
Experiments with a baseline design similar to that
of Figure~\ref{fig::ge-blndesign}
are further pursued by the GEMMA experiment at
the Kalinin Reactor in Russia,
whose current $\mu_{\nu}$-limit~\cite{gemma}
supersedes that of Eq.~\ref{eq::munulimit}.
Additional work on the KSNL Ge-data
by the TEXONO group derived the flux
and placed new constraints on the magnetic
moments of $\nu_e$~\cite{texononue2005},
as well as on possible axion emissions
from the reactor~\cite{texonoaxion2007}.
Another analysis studied the production
and decay of the $^{73}$Ge$^*$(1/2$^-$) metastable
states and placed constraints on neutral-current
excitation cross-sections~\cite{texonoge73}.
\begin{figure}
\begin{center}
\includegraphics[width=8.0cm]{FIGURES/f5-CsI-Array.pdf}
\caption{\label{fig::csi}
Schematic drawing of the CsI(Tl)
scintillating crystal array
for the KSNL $\bar{\nu}_e$-e scattering
measurements~\cite{texononue}.
Light output is recorded
by PMTs at both ends.
}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
{\bf (a)}\\
\includegraphics[width=8.0cm]{FIGURES/f6a-CsI-Residual.pdf}\\
{\bf (b)}\\
\includegraphics[width=8.0cm]{FIGURES/f6b-CsI-Interference.pdf}
\caption{ \label{fig::nue-spectra}
(a)
The combined
residual spectrum of Reactor ON data
over background~\cite{texononue}
in the 3$-$8~M$\rm{eV_{ee}}$ energy region.
The blue and red lines correspond to the SM expectations
and to the best-fit of the data, respectively,
showing excellent agreement.
(b)
The measurement of interference term based on
a residual spectrum of subtracting the charged-
and neutral-currents contributions from the data,
in the 3$-$8 M$\rm{eV_{ee}}$ energy range.
The scenario of
(no,constructive,destructive)-interference
is denoted by $\eta$=(0,1,$-$1).
The measurements verify
the SM expectation of $\eta = - 1$.
}
\end{center}
\end{figure}
\subsubsection{Neutrino-Electron Elastic Scattering}
The TEXONO program has provided the
best cross-section measurement of the scattering
between two of the fundamental leptons in Nature $-$
$\bar{\nu}_e$ with electrons~\cite{texononue}:
\begin{equation}
\label{eq::nuew}
\bar{\nu}_e ~ + ~ e^- ~ \rightarrow ~
\bar{\nu}_e ~ + ~ e^- ~~.
\end{equation}
Reactor $\bar{\nu}_e$ provides a unique
laboratory to measure
neutrino-electron scattering,
and therefore probe electro-weak physics~\cite{pdg-electroweak}
at the MeV momentum transfer range.
The $\bar{\nu}_e$-e interaction,
together with the analogous $\nu_e$-e studied
with accelerator neutrinos~\cite{lsndnue},
are among the few
processes which proceed via charged- and
neutral-currents {\it and} their interference channel.
The SM cross-section can be written as:
\begin{eqnarray}
\left[ \frac{d\sigma}{dT}(\bar{\nu}_{e}e ) \right] _{SM} & = &
\frac{G_{F}^{2}m_{e}}{2\pi } \cdot
[ ~ \left(g_{V}-g_{A}\right) ^{2} \\
& + & \left( g_{V}+g_{A}+2\right) ^{2}\left(1-
\frac{T}{E_{\nu }}\right) ^{2} \nonumber \\
& - & (g_{V}-g_{A})(g_{V}+g_{A}
+2)\frac{m_{e}T}
{E_{\nu}^{2}} ~ ] . \nonumber
\label{eq::gvga}
\end{eqnarray}
The SM assignments to the electroweak coupling constants
are:
\begin{equation}
g_{V}=-\frac{1}{2}+2\sin ^{2}\theta _{W}\text{ \ \ \ \ and \ \ \ \ }
g_{A}=-\frac{1}{2}\label{eq_gvga} ~~~ ,
\end{equation}
where $\rm{ sin ^2 \theta _W }$ is the weak mixing angle.
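A minimal numerical transcription of this cross-section is sketched below;
the values of $G_F$, $\rm{ sin ^2 \theta _W }$ and the unit conversion factor
are standard inputs assumed here, not taken from the measurement:
\begin{verbatim}
import math

G_F = 1.166e-11      # Fermi constant [MeV^-2]
M_E = 0.511          # electron mass [MeV]
HBARC2 = 3.894e-22   # (hbar*c)^2 [MeV^2 cm^2]

def dsigma_dT_sm(T, E_nu, s2w=0.231):  # s2w: assumed world-average value
    # SM anti-nu_e - e cross-section on free electrons [cm^2/MeV];
    # T and E_nu in MeV.
    T_max = 2.0 * E_nu**2 / (M_E + 2.0 * E_nu)  # kinematic endpoint
    if not 0.0 < T <= T_max:
        return 0.0
    gV, gA = -0.5 + 2.0 * s2w, -0.5
    pref = G_F**2 * M_E / (2.0 * math.pi) * HBARC2
    return pref * ((gV - gA)**2
                   + (gV + gA + 2.0)**2 * (1.0 - T / E_nu)**2
                   - (gV - gA) * (gV + gA + 2.0) * M_E * T / E_nu**2)

print(dsigma_dT_sm(1.0, 4.0))  # ~6e-45 cm^2/MeV
\end{verbatim}
The three terms correspond to the three rows of the displayed formula; the
last one is the interference term probed by the measurement below.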
A scintillating CsI(Tl) crystal detector
array~\cite{texononue,csiRandD},
as depicted in Figure~\ref{fig::csi},
was constructed for this measurement.
The detector serves as a proton-free target,
with modules packed into a matrix array having minimal
inactive dead space due to the teflon wrapping sheets.
A $12 \times 9$ array was deployed,
giving a total mass of 187~kg.
Each crystal module is 40~cm in length with
light output read out by photo-multipliers (PMT)
at both ends. The sum of the normalized PMT signals
provides the energy, while their difference
defines the longitudinal location. Therefore,
event reconstruction in three-dimension
is achieved. The fiducial volume was defined
to be the inner crystals
with a separation of $>$4~cm
from the PMTs at both ends.
Reactor ON/OFF data
with strengths of 29882/7369~kg-days were taken.
The Reactor ON over background residual spectrum,
as depicted in Figure~\ref{fig::nue-spectra}a,
shows excellent consistency with the SM predictions.
The ratio of experimental to SM cross-sections of
\begin{equation}
\frac{R_{expt} ( \nu )}{R_{SM} ( \nu )}
= 1.08 \pm 0.21 (stat) \pm 0.16 (sys)
\end{equation}
was measured.
After accounting for the charged- and neutral-current
components, the SM destructive interference
in $\bar{\nu}_e$-e interactions was verified,
as illustrated in Figure~\ref{fig::nue-spectra}b.
\begin{figure}
\begin{center}
\includegraphics[width=8.0cm]{FIGURES/f7-CsI-gVgA.pdf}
\caption{\label{fig::gvga}
Best-fit results
in $( g_{V} , g_{A} )$ space and in the $\rm{ sin ^2 \theta _W }$ axis
from the TEXONO-CsI(Tl) experiment at KSNL
on $\bar{\nu}_e -$e~\cite{texononue} and
the LSND experiment on $\nu_e -$e~\cite{lsndnue}.
The allowed regions are defined by
their corresponding statistical uncertainties.
}
\end{center}
\end{figure}
Constraints on the electroweak parameters
$( g_V , g_A )$ were placed, as
illustrated in Figure~\ref{fig::gvga}.
The corresponding weak mixing angle
at the squared 4-momentum transfer range of
$\rm{Q^2 \sim 3 \times 10^{-6} ~ GeV^2}$
was measured:
\begin{equation}
\rm{ sin ^2 \theta _W } = 0.251 \pm 0.031 ({\it stat}) \pm 0.024 ({\it sys}) ~~.
\end{equation}
The consistency of the data with SM
can be translated to bounds on the
neutrino charge radius of
\begin{equation}
-2.1 \times 10^{-32} ~{\rm cm^{2}}
~ < ~ \langle r_{\bar{\nu}_e}^2 \rangle ~ < ~
3.3 \times 10^{-32} ~{\rm cm^{2}}
\end{equation}
at 90\% CL, improving over earlier limits.
\subsubsection{Neutrino-Nucleus Elastic Scattering}
The current theme of the neutrino physics program
at KSNL is on the observation of
the elastic scattering between a neutrino and a
nucleus ($\nu N$)~\cite{texonoprogram,nuNcoh}:
\begin{equation}
\label{eq::nuN}
\nu ~ + ~ N ~ \rightarrow ~
\nu ~ + ~ N ~~.
\end{equation}
It is a fundamental
SM-predicted
neutrino interaction which has never been observed.
It probes coherence effects in electroweak interactions~\cite{nuNalpha},
and provides a sensitive test to physics beyond SM.
The coherent interaction plays an important role
in astrophysical processes
and constitutes the irreducible background channel for the
forthcoming generation of dark matter experiments.
The maximum nuclear recoil energy for a Ge target (A=72.6)
due to reactor $\bar{\nu}_e$ is about 2~${\rm keV_{\rm nr}}$.
The quenching factor (ratio of ionization to total deposited energy),
is about 0.2 for Ge in the
$< {\rm 10~ keV_{\rm nr}}$ region~\cite{texono-Ge-RandD}.
Accordingly, the maximum measurable energy
for nuclear recoil events in Ge due to reactor
$\bar{\nu}_e$ is about 300~$\rm{eV_{ee}}$.
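These numbers follow from two-body kinematics,
$T_{max} = 2 E_{\nu}^2 / ( M + 2 E_{\nu} )$, as the short sketch below
illustrates; the assumed spectral endpoint of $E_{\nu} \simeq 8$~MeV and the
use of a constant quenching factor are simplifications made here:
\begin{verbatim}
M_GE = 72.6 * 931.494  # average Ge nuclear mass [MeV] for A = 72.6
QF = 0.2               # quenching factor below ~10 keV_nr (from the text)

def t_max_nr(E_nu, M=M_GE):
    # maximum nuclear recoil energy [MeV] for neutrino energy E_nu [MeV]
    return 2.0 * E_nu**2 / (M + 2.0 * E_nu)

E_nu = 8.0                         # assumed endpoint of the reactor spectrum
print(t_max_nr(E_nu) * 1e3)        # ~1.9 keV_nr ("about 2 keV_nr")
print(t_max_nr(E_nu) * 1e6 * QF)   # ~380 eV_ee with a constant QF; the quoted
                                   # ~300 eV_ee reflects the energy dependence
                                   # of the quenching
\end{verbatim}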
The typical differential spectrum
is given in Figures~\ref{fig::rnu+diffcs}b.
At benchmark sensitivities,
the expected rate is of $\mathcal{O}{\rm ( 10~kg^{-1}day^{-1} )}$
with a signal-to-background ratio $>$50.
Improvement of the lower reach of detector sensitivity
without compromising the background
is therefore crucial for such experiments,
and is the focus of our current research.
\subsection{Dark Matter Searches}
The goal of measuring the
neutrino-nucleus coherent scattering
process of Eq.~\ref{eq::nuN} at KSNL
led to the development of low threshold
Ge-detectors with sub-keV sensitivity~\cite{texono-Ge-RandD}.
This detector technology
naturally brought the Collaboration
to venture into the important arena of
direct Dark Matter searches~\cite{texonoprogram}.
Weakly Interacting Massive Particles (WIMPs, denoted by $\chi$)
are leading candidates to resolve the Dark Matter Problem in the
Universe~\cite{cdmpdg14}.
The elastic recoils between WIMPs
and the nuclei
\begin{equation}
\label{eq::chiN}
\chi ~ + ~ N ~ \rightarrow ~
\chi ~ + ~ N
\end{equation}
are the favored channel in direct dark matter
search experiments.
Consistency with observations on
cosmological structure formation
requires that WIMPs
should be massive and that their motions be non-relativistic.
The measurable nuclear recoil energy is therefore small,
such that the experimental requirements are similar to those
for $\nu N$ where low detector threshold is crucial.
\begin{figure}
\begin{center}
{\bf (a)}\\
\includegraphics[width=8.0cm]{FIGURES/f8a-bs-events.pdf}\\
{\bf (b)}\\
\includegraphics[width=8.0cm]{FIGURES/f8b-tau-pPCGe.pdf}\\
{\bf (c)}\\
\includegraphics[width=8.0cm]{FIGURES/f8c-tau-nPCGe.pdf}
\caption{\label{fig::tau-PCGe}
(a)
Measured pulse profiles for typical Bulk and Surface
events~\cite{texono-Ge-RandD,bsel2014}
in p-PCGe.
(b)
The rise-time($\tau$) distribution in p-PCGe.
The selection criteria of signal events in the Bulk
is defined by the cut at $\tau_0$.
(c)
The rise-time($\tau$) distribution in n-PCGe.
There are no anomalous surface events.
}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
{\bf (a)}\\
\includegraphics[width=8.0cm]{FIGURES/f9a-WIMP-ExPlot-SI.pdf}\\
{\bf (b)}\\
\includegraphics[width=8.0cm]{FIGURES/f9b-WIMP-ExPlot-SD.pdf}
\caption{\label{fig::explot}
Exclusion plots of $\chi N$
(a) spin-independent and (b) spin-dependent
interactions, showing the TEXONO~\cite{texonocdm2013}
and CDEX~\cite{cdex,cdex0} results
together with the allowed regions and limits
from other benchmark experiments~\cite{cdmpdg14}.
}
\end{center}
\end{figure}
We opened the sub-keV detector window with pilot
``Ultra-Low-Energy'' Ge detectors (ULEGe)~\cite{texonoprogram}
with modular mass of the order of 10~g.
Data taken with a 20~g ULEGe array at
an analysis threshold
of 220~$\rm{eV_{ee}}$ at KSNL~\cite{texonocdm2009},
a surface laboratory
hardly appropriate for such experiments,
already allowed the probing of new
parameter space in the ``Light WIMPs'' region
of several GeV in mass.
Our early efforts on WIMP searches
inspired advances
in point-contact germanium detectors (PCGe)
by a US group based on an earlier design~\cite{pcge},
realizing sub-keV sensitivity with a modular mass
at kg-scale.
The CoGeNT experiment subsequently reported
possible allowed regions for light WIMPs~\cite{cogent}
turning this into a domain of intense interest.
The TEXONO group performed a
measurement at KSNL
with a PCGe with p-type germanium (p-PCGe)
of 840~g fiducial mass~\cite{texonocdm2013},
following the baseline setup of Figure~\ref{fig::ge-blndesign}.
Crucial to this study is
the bulk-surface events differentiation
at the sub-keV range~\cite{texono-Ge-RandD,bsel2014}.
As illustrated in Figure~\ref{fig::tau-PCGe}a,
the surface background events in p-PCGe detectors
exhibit slower rise-times and
partial charge collection compared to
bulk events, which are the candidates for $\chi N$-signals.
The measured rise-time distribution
is depicted in Figure~\ref{fig::tau-PCGe}b.
In contrast, n-type PCGe does not
have an anomalous surface layer and shows
uniform rise-times, as shown in
Figure~\ref{fig::tau-PCGe}c.
Selection schemes of bulk signal
events in p-PCGe were devised, and the
corresponding signal-retaining and background-rejecting
efficiencies were measured~\cite{bsel2014}.
Our results indicated deficiencies in the previous
approaches: the excess events
interpreted as WIMP candidate signals
are due to incomplete correction of the
leakage of surface background into the
bulk signal samples.
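Conceptually, the correction amounts to a two-by-two unfolding of the raw
bulk/surface counts with the measured efficiencies; the sketch below uses
illustrative symbols and is not the exact formulation of Ref.~\cite{bsel2014}:
\begin{verbatim}
def unfold_bulk_surface(Qb, Qs, eps, lam):
    # Invert  Qb = eps*B + (1-lam)*S ;  Qs = (1-eps)*B + lam*S,
    # where Qb/Qs are raw counts classified as bulk/surface by the
    # rise-time cut at tau_0, eps is the bulk-signal retention and lam
    # the surface rejection, both measured with calibration data.
    # Returns the corrected bulk and surface counts (B, S).
    det = eps * lam - (1.0 - eps) * (1.0 - lam)
    B = (lam * Qb - (1.0 - lam) * Qs) / det
    S = (eps * Qs - (1.0 - eps) * Qb) / det
    return B, S

print(unfold_bulk_surface(100.0, 40.0, eps=1.0, lam=1.0))  # perfect cuts
\end{verbatim}
An under-estimated leakage term $(1-\lambda)$ in such a scheme inflates the
apparent bulk count, which is the effect described above.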
The exclusion plot
of $\chi N$ spin-independent
cross-section versus WIMP-mass
is displayed in Figure~\ref{fig::explot}a.
Light WIMP searches started as
an exploratory by-product of the TEXONO
program at KSNL on sub-keV Ge-detectors
and neutrino-nucleus coherent scattering.
As we shall discuss in
Section~\ref{sect::cjpl}, these
efforts would inspire and catalyze
the realization of the deepest and
largest underground scientific facility
at CJPL.
\subsection{Theory Programs}
The TEXONO experimental program and
the unique data at KSNL triggered
several theory research directions,
and established
fruitful collaborations between
the experimental and theory researchers.
One direction, spearheaded by the TEXONO Turkish groups,
is to study the constraints from neutrino-electron
scattering on BSM models, especially those for which
data at low $T$, such as those from KSNL,
provide enhanced sensitivities.
These include~\cite{texonoBSMconstraints}
generic non-standard interactions,
unparticle physics, non-commutative physics and
dark photon physics.
Another line is rooted in
our current experimental theme and uniqueness
of using novel germanium detectors with
sub-keV sensitivities and very good energy resolution
to explore neutrino and dark matter physics.
This technique excels in probing experimental
signatures at the ``atomic'' energy range
and with possible spectral structures
due to atomic ionization.
A pilot investigation~\cite{munuai10}
set the stage on the subsequent studies
of ``atomic ionization'' cross-sections.
Theorists in Taiwan (J.W. Chen and C.P. Liu and their groups),
through collaboration with the TEXONO group,
introduced state-of-the-art theoretical tools
in atomic physics
(MCRRPA$-$ Multi-Configuration
Relativistic Random-Phase Approximation~\cite{mcrrpa})
leading to a series of results
on experimental signatures due to neutrino
electromagnetic effects~\cite{nuemai,numq,munusterilenu},
illustrated in the differential cross-sections
of Figure~\ref{fig::rnu+diffcs}b.
Some of the results provide positive feedback
to the experiment programs and data interpretation,
examples of which include:
\begin{enumerate}
\item
Studies of the ``neutrino milli-charge''
probe possible helicity conserving QED-like interactions.
Finiteness of the neutrino charge fraction
($\delta _Q$) would imply
neutrinos are Dirac particles.
It was demonstrated that
atomic ionization effects due to $\delta _Q$
lead to large enhancements
in the cross-sections~\cite{numq},
as depicted in Figure~\ref{fig::rnu+diffcs}b.
The known ratios of peaks at discrete binding
energies provide
smoking gun signatures for positive observations.
\item
A massive sterile neutrino can have transition-$\mu_{\nu}$
and interact with matter to become a light SM neutrino.
If it is non-relativistic, as in the case
of a dark matter candidate, the interaction would have
a cross-section pole and enhancement at half its
mass~\cite{munusterilenu}. Constraints were
derived from KSNL data.
\item
The quantum mechanical coherence effects
of electroweak interaction
can be quantitatively studied,
using the $\nu N$ scattering of Eq.~\ref{eq::nuN}.
We derived how the degree of coherence would
vary with realistic neutrino sources, target
materials and detection threshold~\cite{nuNalpha},
showing how the forthcoming experimental projects
can complement each other.
\end{enumerate}
\section{China Jinping Underground Laboratory
and the CDEX Research Programs}
\label{sect::cjpl}
\subsection{Foundation}
The potential of dark matter experiments
was immediately realized after the
initial sub-keV sensitivities were achieved with
germanium detectors~\cite{texonoprogram}.
While the main thrust of the Collaboration
remains on the development of the detector
techniques and reactor neutrino physics at KSNL
where dark matter physics is a by-product,
the TEXONO-Tsinghua University (THU) group explored
the means to turn it into dedicated experiments,
for which an underground site is mandatory.
The THU group spearheaded a pilot project of
installing a 5~g ULEGe detector
at the Yangyang Underground Laboratory in
Korea in 2004, supported by the KIMS group as host
of that Facility.
A construction road tunnel was completed
in 2008 under the Jinping mountains in
Sichuan province in China
to facilitate the construction of the numerous
hydro-electric power facilities in that region
$-$ the flagship project is the Jinping-I dam
which, at 305~m, is the tallest dam in the world.
The physics communities in China immediately
recognized the opportunities and potentials.
By 2010, agreement was made between the site owner
Yalong River Hydropower Development Company
and THU to jointly develop
and operate an underground laboratory facility $-$
the China Jinping Underground Laboratory (CJPL)~\cite{cjpl-birth}.
Civil engineering proceeded in full swing.
The TEXONO-THU group got a significant
boost in its manpower and resource pool
to match the expanding engineering
demands and scientific program.
The group evolved to form and lead the new
CDEX (China Dark matter EXperiment)
Collaboration, focusing
in its start-up phase on
dark matter experiments at CJPL.
The CDEX program is led by
Kang Ke-Jun (a former Vice President of THU).
The TEXONO Collaboration became
a founding partner and participant
of this endeavour.
\begin{figure}
\begin{center}
{\bf (a)}\\
\includegraphics[width=8.0cm]{FIGURES/f10a-CJPL1.pdf}\\
{\bf (b)}\\
\includegraphics[width=8.0cm]{FIGURES/f10b-CJPL2.pdf}
\caption{\label{fig::cjpl}
Schematic diagram of
(a) CJPL~Phase-I inaugurated in 2012,
showing the space allocation
to the CDEX and PandaX Dark Matter experiments, as well as to
the radio-purity screening facilities,
(b) CJPL~Phase-II scheduled to complete
by early 2017, showing the four experimental halls
and the tunnel systems.
}
\end{center}
\end{figure}
\subsection{The Facility}
The Facility CJPL~\cite{cjpl}
is located in Sichuan, China, and
was inaugurated in December 2012.
With a rock overburden of about 2400~m,
it is the deepest operating underground laboratory in
the world.
The muon flux is measured to be
$( 2.0 \pm 0.4 ) \times 10^{-10}~\rm{cm^{-2} s^{-1}}$~\cite{cjpl-cosmic},
suppressed to about $10^{-8}$ of the sea-level flux.
The drive-in tunnel access can greatly facilitate the
deployment of big experiments and large teams.
Supporting infrastructures of catering and accommodation,
as well as office and workshop spaces,
already exist.
All these merits make CJPL an ideal location
to perform low count-rate experiments.
As depicted schematically in Figure~\ref{fig::cjpl}a,
the completed CJPL~Phase-I consists of a laboratory hall
of dimensions 6~m(W)$\times$6~m(H)$\times$40~m(L).
This space is currently shared by the CDEX~\cite{cdex}
and PandaX~\cite{pandax} dark matter
experiments, as well as a general purpose radio-purity
screening facility.
Additional laboratory space for CJPL~Phase-II~\cite{cjpl2-media},
located about 500~m from the Phase-I site,
is currently under construction.
It will consist of four experiment halls each with dimension
14~m(W)$\times$14~m(H)$\times$130~m(L), connected by
an array of access tunnels.
The schematic layout of CJPL-II is displayed
in Figure~\ref{fig::cjpl}b.
Upon the scheduled completion by early 2017,
CJPL will be, in addition, the largest underground
laboratory by volume in the world.
\subsection{CDEX Dark Matter Program}
The scientific theme of
CDEX program~\cite{cdex}
is to pursue studies of light WIMPs with p-PCGe.
It is one of the two founding experimental programs at CJPL.
\begin{figure}
\begin{center}
{\bf (a)}\\
\includegraphics[width=8.0cm]{FIGURES/f11a-C1A-Spectra.pdf}\\
{\bf (b)}\\
\includegraphics[width=7.0cm]{FIGURES/f11b-C1A-LXray.pdf}\\
{\bf (c)}\\
\includegraphics[width=7.0cm]{FIGURES/f11c-C1A-Residual.pdf}
\caption{\label{fig::cdex1-spectra}
(a)
Background spectra of the CDEX-1 measurement~\cite{cdex}
at their various stages of selection:
basic cuts (TT+Ped+PSD), Anti-Compton (AC) and Bulk (BS) events.
(b)
All events can be accounted for with
the known background channels $-$ L-shell X-rays and flat background
due to ambient high energy $\gamma$-rays.
(c)
Residual spectrum with known background channels subtracted.
An excluded $\chi$N recoil spectrum with parameters listed
is superimposed.
}
\end{center}
\end{figure}
\subsubsection{First Generation CDEX Experiments}
Following schematics of the
shielding structures and target detectors
depicted in, respectively,
Figures~\ref{fig::ksnlsite}b\&\ref{fig::ge-blndesign},
the first-generation experiments adopted
the KSNL baseline design~\cite{texonomunu,texonocdm2009}
of single-element ``1-kg mass scale''
p-PCGe enclosed by NaI(Tl) crystal
scintillator as anti-Compton detectors,
further surrounded by passive shieldings and
radon purge system.
Active cosmic-ray vetoes are not necessary at this depth.
The pilot CDEX-0 measurement is based on
the 20~g ULEGe detector array~\cite{texonocdm2009}
at an exposure of 0.784~kg-days
and a threshold of 177~$\rm{eV_{ee}}$~\cite{cdex0}.
The CDEX-1 experiment
adopts a p-PCGe detector of mass 1~kg.
The latest results are based on
an analysis threshold of 475~$\rm{eV_{ee}}$
with an exposure of 335.6~kg-days~\cite{cdex}.
After suppression of the anomalous surface background events
and measuring their signal efficiencies and background
leakage factors with calibration data~\cite{bsel2014},
all residual events
can be accounted for by known background models.
The spectra are depicted in Figures~\ref{fig::cdex1-spectra}a,b\&c.
Dark Matter constraints on $\chi$N spin-independent
and spin-dependent cross-sections
were derived from both data sets,
and are displayed in Figures~\ref{fig::explot}a\&b,
together with other selected benchmark results~\cite{cdmpdg14}.
In particular, the allowed region from
the CoGeNT~\cite{cogent} experiment
is probed and excluded by the CDEX-1 results.
Analysis is currently being performed on the CDEX-1 data set
with a year-long exposure.
Annual modulation effects as well as other physics channels
are being studied. New data are also being taken with an upgraded p-PCGe
with a lower threshold.
\subsubsection{Current Efforts and Future Goals}
The long-term goal of the CDEX program will be
a ton-scale germanium experiment (CDEX-1T)
at CJPL for the
searches of dark matter and of neutrinoless
double beta decay ($0 \nu \beta \beta$)~\cite{0nubb}.
A pit of diameter 18~m and height 18~m is being built
at one of the halls of CJPL-II to house such
an experiment, as illustrated in Figure~\ref{fig::cdex1t}.
The conceptual design is a central region for
germanium detector arrays, surrounded by cryogenic liquid
and/or water shielding.
\begin{figure}
\begin{center}
\includegraphics[width=8.0cm]{FIGURES/f12-CDEX1T.pdf}
\caption{\label{fig::cdex1t}
Conceptual configuration of a future
CDEX-1T experiment at CJPL-II,
showing the pit in which the detector, cryogenic liquid
and water shielding can be placed.
}
\end{center}
\end{figure}
Towards this end, the ``CDEX-10'' prototype
has been constructed with detectors in an array structure
having a target mass in the 10-kg range.
This provides a platform to study
the many issues of scaling up the detector mass and of
improving the background and threshold.
The detector array is shielded and cooled by a cryogenic liquid.
Liquid nitrogen is being used, while liquid argon is
a future option to investigate, which may offer the
additional benefit of
active shielding as an anti-Compton detector.
In addition, various crucial technology acquisition projects
are pursued, which would make a ton-scale germanium experiment
realistic and feasible.
These include:
\begin{enumerate}
\item detector grade germanium crystal growth;
\item germanium detector fabrication;
\item isotopic enrichment of $^{76}$Ge for $0 \nu \beta \beta$;
\item production of electro-formed copper, eventually underground at CJPL.
\end{enumerate}
The first detector fabricated by the CDEX program from
commercial crystal that matches expected performance
is scheduled to be installed at CJPL in 2016.
It allows control of the
assembly materials placed in its vicinity, known to be
the dominant source of radioactive background, while
providing an efficient test platform
for novel electronics and readout schemes.
The benchmark would be to perform light WIMP searches
with germanium detectors with ``$0 \nu \beta \beta$-grade''
background control.
This configuration would provide the
first observation (or stringent upper bounds)
of the potential cosmogenic tritium contaminations in
germanium detectors, from which
the strategies to suppress such background
can be explored.
The projected sensitivity in
$\chi$N spin-independent interactions
for CDEX-1T
is shown in Figure~\ref{fig::explot}a,
taking a realistic minimal
surface exposure of six months.
The studies of $0 \nu \beta \beta$ can
address the fundamental question
on whether the neutrinos are Dirac or Majorana
particles~\cite{0nubb}.
The current generation of
$^{76}$Ge-$0 \nu \beta \beta$ experiments~\cite{gerda+mjd}
are among those with leading sensitivities in
the pursuit.
The objective of a possible
``CDEX-1T@CJPL-II'' experiment
in $0 \nu \beta \beta$ will be
to achieve sensitivities sufficient
to completely cover
the inverted neutrino mass hierarchy~\cite{pdg-nuosc}.
An international network is emerging
towards the formation of a proto-collaboration
to pursue this challenging goal.
Such visions stand on
several important merits and
deserve serious considerations.
The overburden at CJPL is among the deepest in
the world, essential for the unprecedented
background control requirements for such a
project. Being a new laboratory, there is ample space
for possible Ge-crystal growth,
detector fabrication and copper production,
in addition to the pit for
the main detector and shielding in Figure~\ref{fig::cdex1t}.
Furthermore, a crucial aspect of this project
is the necessity of
delivering industrial-standard practices
and quality control during the mass production
of detector hardware.
This requirement matches well to the profile
of the CDEX-THU group, being closely
associated with an industry~\cite{nuctech}
which has strong experience in
the construction and deployment of
large radiation detector systems
for international clients.
\section{Prospects}
The TEXONO Collaboration
was launched from an
environment without
previous traditions,
infrastructure, or experience
of running its own particle physics
experiments.
With almost two decades of dedication
and persevering effort,
the Collaboration has grown and thrived, and has emerged
with a recognizable presence on the
world stage.
It has contributed to the advances
and opening of new and innovative avenues in
low energy neutrino physics, light dark matter
searches as well as low threshold germanium detector techniques.
The TEXONO-DNA is propagating in China, India and Turkey,
as the alumni members are setting up the
first-generation research efforts on
low-background experimentation
in these countries.
The observation of neutrino-nucleus coherent scattering
is the top-priority science goal at KSNL.
Scaling that summit would require further advancing
the technologies and techniques
of the germanium detectors.
While the realization of the
underground facility CJPL $-$ and
the implicit investment and commitment $-$
is an impressive (and intimidating) feat on its own,
the science programs are still in their
embryonic and formative stage.
The TEXONO group is at a favorable vantage point
to explore, define and formulate the
future research programs at CJPL,
which may bring the Collaboration
to higher ground.
In particular,
discussions and studies have been initiated on
a ton-scale $^{76}$Ge $0 \nu \beta \beta$
program at CJPL.
There are grounds to expect that
the future of the TEXONO story,
in spite of $-$ or perhaps because of $-$
not having the detailed landscape charted out yet,
will be as exciting and rewarding.
\section{Acknowledgments}
The author is whole-heartedly grateful
to the collaborators of both the TEXONO
and CDEX groups,
the technical and
administrative staff of Academia
Sinica and the collaborating institutes,
as well as the supporting staff of
the Kuo-Sheng Nuclear Power Station
and the
Yalong River Hydropower Development Company,
for the various invaluable contributions
which ``made these happen''.
Funding support is provided by
Academia Sinica and
the National Science Foundation,
Ministry of Science and Technology,
and the National Center of Theoretical Science
in Taiwan,
the National Natural Science Foundation in China,
and the Scientific and Technological Research Council
in Turkey.
\section{Introduction}
In recent years, strong approximation has been intensively studied for
stochastic differential equations (SDEs) of the form
\begin{align}\label{eq:cir-intro}
\mathrm{d} X^x_t = (a-bX^x_t)\,\mathrm{d} t + \sigma\sqrt{X^x_t}\,\mathrm{d} W_t, \quad X^x_0=x, \quad t\geq0,
\end{align}
with a one-dimensional Brownian motion $W$, initial value $x\geq0$,
and parameters $a,\sigma>0$, $b\in\mathbb{R}$. It is well known that these SDEs
have a unique non-negative strong solution. Such SDEs often arise
in mathematical finance, e.g., as the volatility process in the
Heston model. Moreover, they were proposed as a model for
interest rates in the Cox-Ingersoll-Ross (CIR) model.
In the particular case of $b=0$ and $\sigma=2$ the solution of
SDE~\eqref{eq:cir-intro} is a squared Bessel process,
which plays an important role in the analysis of Brownian motion.
Strong approximation is of particular interest due to the multi-level
Monte Carlo technique, see \cite{giles1,giles2,heinrich1}.
In this context, a sufficiently high convergence rate with respect to
an $L_2$-norm is crucial. The multi-level Monte Carlo technique is
used for the approximation of the expected value of a functional applied
to the solution of an SDE. In mathematical finance, such a functional
often represents a discounted payoff of some derivative
and the price is then given by the corresponding expected value.
Strong convergence of numerical schemes for the SDE~\eqref{eq:cir-intro}
has been widely studied in the past twenty years. Various schemes
have been proposed and proven to be strongly convergent, see, e.g.,
\cite{alfonsi2005,deelstra1998,higham2005,gyoengy2011,milstein2015,hj2015}.
In recent years, the speed of convergence in terms of polynomial convergence
rates has been intensively studied, see
\cite{bessel1,berkaoui,dereich,alfonsi2013,neuenkirch-szpruch,hjn-cir,chassagneux,bossy}.
In all these articles the approximation error is measured with respect
to the $L_p$-norm either at a single time point,
at the discretization points, or on a compact interval.
Moreover, all these results impose conditions on $p$ (appearing in the $L_p$-norm)
and the quantity
\begin{align}\label{eq:delta}
\delta = \frac{4a}{\sigma^2} \in\left]0,\infty\right[,
\end{align}
which depends on the two parameters $a,\sigma>0$ appearing
in \eqref{eq:cir-intro}. None of these results
yield a polynomial convergence rate for $\delta<1$, cf.~Figure~\ref{fig:CIR-rates}.
By the Feller test, the solution remains strictly positive, i.e.,
$\ensuremath{ \mathrm{P} }(\forall t>0: X^x_t>0)=1$, if and only if $\delta\geq2$.
Roughly speaking, the smaller the value of $\delta$, the closer the
solution to the boundary point zero, where the diffusion coefficient
is not even locally Lipschitz continuous.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.9\linewidth]{graph_rates.pdf}
\caption{The colored dashed lines show the convergence rates of the known
best upper bounds for the error criterion in \eqref{eq:intro} for
different values of $p$, see \cite{dereich,hjn-cir}.
The corresponding solid lines show the convergence rates obtained
in this paper. The cross shows the convergence rate at $\delta=1$
for all $p\in\left[1,\infty\right[$, see \cite{bessel1}.}
\label{fig:CIR-rates}
\end{figure}
The main aim of this paper is to construct a numerical scheme
for the SDE~\eqref{eq:cir-intro} and to prove its convergence at a polynomial rate
for all parameters. Let $T>0$ and
define the approximation scheme $ \ensuremath{ \bar{Y}^{x,N} } =( \ensuremath{ \bar{Y}^{x,N} } _t)_{0\leq t\leq T}$,
which uses $N\in\mathbb{N}$ increments of Brownian motion, by
\begin{align*}
\ensuremath{ \bar{Y}^{x,N} } _0=x \qquad\text{and}\qquad
\ensuremath{ \bar{Y}^{x,N} } _{(n+1)T/N}
= \ensuremath{ \oss_\mathrm{Mil} } \left( \ensuremath{ \bar{Y}^{x,N} } _{nT/N}, T/N, W_{(n+1)T/N}-W_{nT/N} \right),
\end{align*}
for $n=0,\dots,N-1$, where the one-step function
$ \ensuremath{ \oss_\mathrm{Mil} } \colon\mathbb{R}^+_0\times\left]0,T\right]\times\mathbb{R}\to\mathbb{R}^+_0$ is given by
\begin{align*}
\ensuremath{ \oss_\mathrm{Mil} } (x,t,w) = \left(
\left( \max\left(
\sqrt{{\sigma^2}/{4}\cdot t},\sqrt{\max(\sigma^2/4\cdot t,x)}+{\sigma}/{2}\cdot w
\right) \right)^2
+ \left( a-\sigma^2/4-b\cdot x \right)\cdot t
\right)^+.
\end{align*}
This yields a discrete-time approximation of the SDE~\eqref{eq:cir-intro}
on $[0,T]$ based on a grid of mesh size $T/N$.
Furthermore, between two grid points we use constant interpolation to get
a continuous-time approximation, i.e.,
\begin{align*}
\ensuremath{ \bar{Y}^{x,N} } _t = \ensuremath{ \bar{Y}^{x,N} } _{nT/N}, \qquad t\in{[nT/N,(n+1)T/N[},
\end{align*}
for $n=0,\dots,N-1$.
We refer to $ \ensuremath{ \bar{Y}^{x,N} } $ as truncated Milstein scheme. Let us mention that this scheme
coincides with the classical Milstein scheme as long as it is ``away'' from zero.
Moreover, it is suitably truncated close to zero.
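For illustration, a minimal NumPy transcription of the one-step map
$ \ensuremath{ \oss_\mathrm{Mil} } $ and the recursion above reads as follows;
the parameter values in the example match the numerical study of
Figure~\ref{fig:numerics}:
\begin{verbatim}
import numpy as np

def theta_mil(x, t, w, a, b, sigma):
    # One step of the truncated Milstein scheme (the map Theta_Mil above).
    s = sigma**2 / 4.0 * t
    u = max(np.sqrt(s), np.sqrt(max(s, x)) + sigma / 2.0 * w)
    return max(u**2 + (a - sigma**2 / 4.0 - b * x) * t, 0.0)

def truncated_milstein(x, N, T, a, b, sigma, rng):
    # Values of Y^{x,N} at the grid points 0, T/N, ..., T.
    h = T / N
    dW = rng.normal(0.0, np.sqrt(h), size=N)  # Brownian increments
    y = np.empty(N + 1)
    y[0] = x
    for n in range(N):
        y[n + 1] = theta_mil(y[n], h, dW[n], a, b, sigma)
    return y

# Parameters of the numerical study: sigma=2, a=delta=1/2, b=T=1, x=1/20
rng = np.random.default_rng(0)
y = truncated_milstein(x=0.05, N=2**10, T=1.0, a=0.5, b=1.0, sigma=2.0,
                       rng=rng)
assert (y >= 0).all()  # non-negativity holds by construction
\end{verbatim}
The outer positive part and the inner maximum make each step well-defined
and non-negative, in contrast to the naive Euler-Maruyama step discussed in
Section~\ref{sec:strong}.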
\begin{thm}[Main Result]\label{thm:intro}
Let $\delta>0$ be according to \eqref{eq:delta}.
For every $1\leq p<\infty$ and every $\ensuremath{\varepsilon}>0$ there exists a constant $C>0$
such that
\begin{align}\label{eq:intro}
\sup_{0\leq t\leq T}\left( \E{ \abs{X^x_t- \ensuremath{ \bar{Y}^{x,N} } _t}^p } \right)^{1/p}
\leq C\cdot(1+x)\cdot \frac{ 1}{ N^{\min(1,\delta)/(2p)-\ensuremath{\varepsilon}} }
\end{align}
for all $N\in\mathbb{N}$ and for all $x\geq0$.
\end{thm}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.9\linewidth]{numerics.pdf}
\caption{Numerical results for the truncated Milstein scheme
with $\sigma=2$, $a=\delta=1/2$, $b=T=1$, and $x=1/20$ are shown for $p=1,2$.
The error given by \eqref{eq:intro} is estimated based on $10^5$ replications
of consecutive differences.
The dashed lines show the corresponding convergence rates
from Theorem~\ref{thm:intro}. For a numerical study of various Euler-
and Milstein-type schemes (with $2/\delta<3$ and $p=1$) we refer to
\cite[Fig.~3]{alfonsi2005}.
}
\label{fig:numerics}
\end{figure}
The results of Theorem~\ref{thm:intro} are illustrated in
Figure~\ref{fig:CIR-rates}. Observe that the convergence rate in
Theorem~\ref{thm:intro}, given by $\min(1,\delta)/(2p)$,
is monotonically decreasing in $1/\delta$ and $p$.
This is the same monotonicity as in previous results.
The numerical results with $\delta=1/2$ for the truncated Milstein scheme
presented in Figure~\ref{fig:numerics} indicate that the convergence rate
in Theorem~\ref{thm:intro} is sharp for $p=1$.
Furthermore, these results suggest that the $L_p$-norm affects the convergence rate,
as expected from Theorem~\ref{thm:intro}.
We think that the convergence rate in Theorem~\ref{thm:intro} is sharp for $p=1$
in the full parameter range but might be too pessimistic for $p>1$.
Note that a Milstein-type scheme, the drift-implicit Euler scheme, converges at rate
$1/2$ for all $p\geq1$ in the particular situation of $\delta=1$ and $b=0$, see \cite{bessel1}.
Let us stress that a convergence rate of $1/2$ is optimal for various global error criteria
in case of SDEs satisfying standard Lipschitz assumptions, see, e.g., \cite{gronbach1,hofmann}.
Let us mention that Theorem~\ref{thm:intro} actually holds for a whole class of
approximation schemes. More precisely, a one-step approximation scheme satisfies the error bound
\eqref{eq:intro} if it is $L_1$-Lipschitz continuous, has a Milstein-type local
discretization error and is uniformly bounded, see
(A\ref{ass:lipschitz})-(A\ref{ass:bounded}) for the precise assumptions.
Our analysis is based on the $L_1$-norm since CIR processes are in general
not locally $L_p$-Lipschitz continuous in the initial value if $p>1$,
see Proposition~\ref{prop:initial-value}.
The results for $L_1$ are then extended to $p>1$ by using classical
interpolation techniques. Due to this interpolation the convergence
rate in Theorem~\ref{thm:intro} is divided by $p$.
This paper is organized as follows. In Section~\ref{sec:preliminaries} we
recall some basic properties of the solution of SDE~\eqref{eq:cir-intro}.
In Section~\ref{sec:strong} we provide a general framework for the analysis
of one-step approximation schemes and prove strong convergence rates under suitable
assumptions on such a one-step scheme. In Section~\ref{sec:scheme} we study the
truncated Milstein scheme and show that it satisfies the assumptions
of Section~\ref{sec:strong}.
\section{Notation}
We use $\mathbb{N}=\{1,2,3,\dots\}$, $\mathbb{N}_0=\mathbb{N}\cup\{0\}$, $\mathbb{R}^+=\{x\in\mathbb{R}: x>0\}$,
and $\mathbb{R}^+_0=\mathbb{R}^+\cup\{0\}$.
Moreover, $x^+=\max(0,x)$ denotes the positive part of $x\in\mathbb{R}$.
We use $ \ensuremath{ \stackrel{\mathrm{d}}{=} } $ to denote equality in distribution.
We do not explicitly indicate if statements are only
valid almost surely, e.g., equality of two random variables.
For functions $f$ and $g$ taking non-negative values we write $f\preceq g$
if there exists a constant $C>0$ such that $f\leq C\cdot g$.
Moreover, we write $f\asymp g$ if $f\preceq g$ and $g\preceq f$.
From the context it will be clear which objects the constant $C$ is allowed
to depend on.
\section{Preliminaries}\label{sec:preliminaries}
In this section we set up the framework and provide some basic facts about the
SDE~\eqref{eq:cir-intro} that will be frequently used.
To simplify the presentation we only consider the SDE~\eqref{eq:cir-intro}
with $\sigma=2$. Moreover, we only study
approximation on the unit interval $[0,1]$ instead of $[0,T]$.
Both simplifications are justified by the following two remarks.
\begin{rem}[Reduction to $\sigma=2$]\label{rem:scaling-space}
Consider the particular case of SDE~\eqref{eq:cir-intro} given by
\begin{align*}
\mathrm{d} \hat{X}^{\hat{x}}_t = (\delta-b\hat{X}^{\hat{x}}_t)\,\mathrm{d} t
+ 2\,\sqrt{\hat{X}^{\hat{x}}_t}\,\mathrm{d} W_t,
\quad \hat{X}^{\hat{x}}_0=\hat{x}, \quad t\geq0,
\end{align*}
with $\hat x=x\cdot 4/\sigma^2$ and $\delta$ given by \eqref{eq:delta}.
Then we have
\begin{align*}
X_t^x = \frac{\sigma^2}{4}\cdot \hat X^{\hat x}_t
\end{align*}
for all $t\geq 0$.
\end{rem}
\begin{rem}[Reduction to $T=1$]\label{rem:scaling-time}
Consider the instance of SDE~\eqref{eq:cir-intro} given by
\begin{align*}
\mathrm{d} \tilde{X}^{x}_t = (\tilde{a}-\tilde{b} \tilde{X}^{x}_t)\,\mathrm{d} t
+ \tilde{\sigma}\,\sqrt{\tilde{X}^{x}_t}\,\mathrm{d} \tilde{W}_t,
\quad \tilde{X}^{x}_0={x}, \quad t\geq0,
\end{align*}
with $\tilde a=T\cdot a$, $\tilde b=T\cdot b$, $\tilde\sigma=\sqrt{T}\cdot\sigma$,
and Brownian motion $\tilde W=(\tilde W_t)_{t\geq 0}$ given by
\begin{align*}
\tilde W_t=1/\sqrt{T}\cdot W_{t\cdot T}.
\end{align*}
Then we have
\begin{align*}
X^x_{t} = \tilde X^x_{t/T}
\end{align*}
for all $t\geq 0$.
\end{rem}
Throughout the rest of the paper $(\Omega,\mathfrak A,\ensuremath{ \mathrm{P} })$ denotes the
underlying probability space that is assumed to be complete and $\ensuremath{ \mathfrak{F} }=(\ensuremath{ \mathfrak{F} }_t)_{t\geq 0}$
denotes a filtration that satisfies the usual conditions, i.e., $\ensuremath{ \mathfrak{F} }_0$ contains
all $\ensuremath{ \mathrm{P} }$-null-sets and $\ensuremath{ \mathfrak{F} }$ is right-continuous.
Moreover, $W=(W_t)_{t\geq 0}$ denotes a scalar Brownian motion with respect to $\ensuremath{ \mathfrak{F} }$.
Finally, the process $X^x=(X^x_t)_{t\geq 0}$ with initial value $x\geq 0$ is given by the SDE
\begin{align}\label{eq:cir}
\mathrm{d} X^x_t = (\delta-bX^x_t)\,\mathrm{d} t + 2\sqrt{X^x_t}\,\mathrm{d} W_t, \quad X^x_0=x, \quad t\geq0,
\end{align}
with fixed parameters $\delta>0$ and $b\in\mathbb{R}$.
\begin{rem}
Let us mention that
\begin{align}\label{eq:mean-sol}
\E{X^x_t} = x\cdot e^{-bt} + \delta\cdot
\begin{cases}
(1-e^{-bt})/b, &\text{if }\ b\neq0, \\
t, &\text{if }\ b=0,
\end{cases}
\end{align}
for $t\geq0$, see, e.g., \cite{cir}.
\end{rem}
\begin{rem}[Marginal distribution of CIR processes]\label{rem:marginal}
Let $Z$ be $\chi^2$-distributed with $\delta$ degrees of freedom,
i.e., $Z$ admits a Lebesgue density that is proportional to
\begin{align*}
z\mapsto z^{\delta/2-1}\cdot \exp(-z/2)\cdot \ind{\mathbb{R}^+}(z).
\end{align*}
If $b=0$, we have
\begin{align*}
X^0_t \ensuremath{ \stackrel{\mathrm{d}}{=} } t\cdot X^0_1 \ensuremath{ \stackrel{\mathrm{d}}{=} } t\cdot Z
\end{align*}
for $t\geq0$, according to
\cite[Prop.~XI.1.6]{revuz-yor} and \cite[Cor.~XI.1.4]{revuz-yor}.
In the general case $b\in\mathbb{R}$ we obtain
\begin{align}\label{eq:marginal}
X^0_t \ensuremath{ \stackrel{\mathrm{d}}{=} } \psi(t)\cdot Z
\end{align}
for $t\geq0$ with
\begin{align*}
\psi(t) =
\begin{cases}
{(1-e^{-b t})}/{b}, &\text{if }\ b\neq 0,\\
t, &\text{if }\ b=0,
\end{cases}
\end{align*}
due to \cite[Eq.~(4)]{yor}.
\end{rem}
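The marginal law \eqref{eq:marginal} allows exact sampling of $X^0_t$ from a
scaled $\chi^2$-distribution, which can serve as a reference in numerical
tests; a minimal NumPy sketch, consistent with \eqref{eq:mean-sol} for $x=0$:
\begin{verbatim}
import numpy as np

def psi(t, b):
    return (1.0 - np.exp(-b * t)) / b if b != 0.0 else t

def sample_X0(t, delta, b, size, rng):
    # Exact samples of X^0_t: psi(t) * Z with Z ~ chi^2(delta d.o.f.)
    return psi(t, b) * rng.chisquare(delta, size=size)

rng = np.random.default_rng(0)
x = sample_X0(t=1.0, delta=0.5, b=1.0, size=10**6, rng=rng)
print(x.mean(), 0.5 * psi(1.0, 1.0))  # sample mean vs E[X^0_1] = delta*psi(1)
\end{verbatim}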
\begin{rem}\label{rem:linear-growth}
For every $1\leq p<\infty$ there exists a constant $C>0$ such that
\begin{align}\label{eq:linear-growth}
\left( \E{ \sup_{0\leq t\leq 1} \abs{X^x_t}^p} \right)^{1/p}
\leq C\cdot (1+x)
\end{align}
for $x\geq0$, since the coefficients of SDE~\eqref{eq:cir} satisfy
a linear growth condition, see, e.g., \cite[Thm.~2.4.4]{mao}.
\end{rem}
\begin{rem}[Monotonicity in the initial value]\label{rem:comparison-principle}
The comparison principle for one-dimensional SDEs yields
\begin{align*}
X^x_t\leq X^y_t
\end{align*}
for all $t\geq 0$ and $0\leq x\leq y$, see, e.g., \cite[Thm.~IX.3.7]{revuz-yor}.
\end{rem}
\section{Strong Approximation}\label{sec:strong}
In this section we prove strong convergence rates for suitable
one-step approximation schemes for the SDE~\eqref{eq:cir}.
At first, we introduce the notion of a one-step scheme and identify
sufficient conditions for such a scheme to be strongly convergent.
A one-step scheme for the SDE~\eqref{eq:cir} with initial value $x\geq 0$
is a sequence of approximating processes $ \ensuremath{ Y^{x,N} } =( \ensuremath{ Y^{x,N} } _t)_{t\geq 0}$
for $N\in\mathbb{N}$ that is defined by a Borel-measurable one-step function
\begin{align*}
\ensuremath{ \Theta } \colon\mathbb{R}^+_0\times\left]0,1\right]\times\mathbb{R}\to\mathbb{R}^+_0
\end{align*}
in the following way. The process $ \ensuremath{ Y^{x,N} } $ is defined at the grid of mesh size
$1/N$ by
\begin{align}\label{eq:oss}
\ensuremath{ Y^{x,N} } _0=x \qquad\text{and}\qquad
\ensuremath{ Y^{x,N} } _{(n+1)/N} = \ensuremath{ \Theta } \left( \ensuremath{ Y^{x,N} } _{n/N}, 1/N, \ensuremath{ \Delta W^N } _{n} \right)
\end{align}
for $n\in\mathbb{N}_0$, where
\begin{align*}
\ensuremath{ \Delta W^N } _n = W_{(n+1)/N} - W_{n/N}.
\end{align*}
Moreover, between two grid points we use constant interpolation, i.e.,
\begin{align}\label{eq:oss-int}
\ensuremath{ Y^{x,N} } _t = \ensuremath{ Y^{x,N} } _{n/N}, \qquad t\in{[n/N,(n+1)/N[},
\end{align}
for $n\in\mathbb{N}_0$. The value of $ \ensuremath{ Y^{x,N} } _{n/N}$ for $n\in\mathbb{N}$ is thus a function
of the previous value $ \ensuremath{ Y^{x,N} } _{(n-1)/N}$, the time increment and the Brownian increment.
Clearly, Euler and Milstein-type schemes are such one-step schemes.
We refer to \cite[Sec.~2.1.4]{hj2015} for one-step schemes
in a more general setting.
In case of SDE~\eqref{eq:cir} the classical Euler-Maruyama scheme
would be given by
\begin{align*}
\ensuremath{ \Theta } (x,t,w) = x + (\delta-bx)\cdot t + 2\sqrt{x}\cdot w.
\end{align*}
However, this is not well-defined since $ \ensuremath{ \Theta } (x,t,w)$ might be negative.
Hence classical schemes need to be appropriately modified to ensure
positivity if a square root term is used.
We consider the following conditions on one-step functions for $p\in{[1,\infty[}$.
\begin{enumerate}[({A}1)]
\item\label{ass:lipschitz}
There exists a constant $K>0$ such that
\begin{align*}
\left( \E{ \abs{ \ensuremath{ \Theta } (x_1,t,W_t)- \ensuremath{ \Theta } (x_2,t,W_t) }^p } \right)^{1/p}
\leq (1+Kt)\cdot \abs{x_1-x_2}
\end{align*}
for all $x_1,x_2\geq0$ and $t\in\left]0,1\right]$.
\end{enumerate}
We say that a one-step function $ \ensuremath{ \Theta } $ satisfying (A\ref{ass:lipschitz}) is
$L_p$-Lipschitz continuous.
We define $ \ensuremath{ \Delta_\mathrm{loc} } \colon\mathbb{R}^+_0\times\left]0,1\right]\to\mathbb{R}^+_0$ by
\begin{align}\label{eq:loc-milstein}
\ensuremath{ \Delta_\mathrm{loc} } (x,t) =
\begin{cases}
t, &\text{if }\ x\leq t,\\
t^{3/2}\cdot{{x^{-1/2}}}, &\text{if }\ t\leq x\leq1,\\
t^{3/2}\cdot x, &\text{if }\ x\geq1.
\end{cases}
\end{align}
\begin{enumerate}[({A}1)]\setcounter{enumi}{1}
\item\label{ass:local-error}
There exists a constant $C>0$ such that
\begin{align*}
\left( \E{ \abs{ \ensuremath{ \Theta } (x,t,W_t)-X^x_t }^p } \right)^{1/p}
\leq C\cdot \ensuremath{ \Delta_\mathrm{loc} } (x,t)
\end{align*}
for all $x\geq0$ and $t\in\left]0,1\right]$.
\end{enumerate}
In Proposition~\ref{prop:milstein} we will show that the local discretization error
of a single Milstein-step is bounded by $ \ensuremath{ \Delta_\mathrm{loc} } $.
Hence we say that a one-step function $ \ensuremath{ \Theta } $ satisfying (A\ref{ass:local-error})
has an $L_p$-Milstein-type local discretization error for the SDE~\eqref{eq:cir}.
\begin{rem}
Under standard Lipschitz assumptions on the coefficients of an SDE, the Euler-Maruyama method
and the Milstein method are $L_p$-Lipschitz continuous for all $p\in{[1,\infty[}$.
Moreover, for all $p\in{[1,\infty[}$ the Euler-Maruyama method and the Milstein method
have a local error of order $t$ and $t^{3/2}$, respectively, see, e.g.,
\cite[Chap.~1.1]{milstein2004}.
\end{rem}
\subsection{$L_1$-convergence}\label{sec:L1-conv}
In this section we prove strong convergence rates with respect to the $L_1$-norm
for suitable one-step schemes. The reason why we restrict ourselves to $L_1$
in this section is indicated in Section~\ref{sec:regularity}.
\begin{prop}[Average local error]\label{prop:balance}
There exists a constant $C>0$ such that
\begin{align*}
\E{ \ensuremath{ \Delta_\mathrm{loc} } \left( X^x_s, t \right) }
\leq C \cdot (1+x)\cdot t \cdot
\begin{cases}
1, &\text{if }\ s\leq t,\\
(t/s)^{\min(1,\delta)/2}\cdot \left( 1+\ln(s/t)\cdot \ind{\set{1}}(\delta) \right),
&\text{if }\ t\leq s,
\end{cases}
\end{align*}
for all $t\in{]0,1]}$, $s\in{[0,1]}$, and $x\geq0$.
\end{prop}
\begin{proof}
Consider the situation of Remark~\ref{rem:marginal} and let $c>0$.
Observe that for $\varepsilon\in\left]0,c\right]$ we have
\begin{align}\label{eq:chi1}
\ensuremath{ \mathrm{P} }(Z\leq\ensuremath{\varepsilon}) \preceq \ensuremath{\varepsilon}^{\delta/2}
\end{align}
and
\begin{align}\label{eq:chi2}
\E{ Z^{-1/2}\cdot \ind{\set{Z\geq\ensuremath{\varepsilon}}} } \preceq
\begin{cases}
1, &\text{if }\ \delta>1,\\
1+\ln(c/\ensuremath{\varepsilon}), &\text{if }\ \delta=1,\\
\ensuremath{\varepsilon}^{(\delta-1)/2}, &\text{if }\ \delta<1.
\end{cases}
\end{align}
Furthermore, we clearly have
\begin{align}\label{eq:scaling}
\psi(s) \asymp s
\end{align}
for $s\in[0,1]$.
For $s\leq t$ the claim follows from \eqref{eq:linear-growth} with $p=1$.
In the following we consider the case $t\leq s$.
Let $U_1,U_2\colon\mathbb{R}^+_0\times\left]0,1\right]\to\mathbb{R}^+_0$ be given by
\begin{align*}
U_1(x,t) = t^{3/2}\cdot x
\end{align*}
and
\begin{align*}
U_2(x,t) =
\begin{cases}
t, &\text{if }\ x\leq t,\\
t^{3/2}\cdot{{x^{-1/2}}}, &\text{if }\ t\leq x,
\end{cases}
\end{align*}
such that
\begin{align}\label{eq:sum}
\ensuremath{ \Delta_\mathrm{loc} } (x,t) \leq U_1(x,t) + U_2(x,t).
\end{align}
Using \eqref{eq:linear-growth} with $p=1$ we obtain
\begin{align}\label{eq:sum1}
\E{ U_1( X^x_s, t) }
\preceq t^{3/2}\cdot (1+x)
\end{align}
for $t\in{]0,1]}$, $s\in{[0,1]}$, and $x\geq0$.
Combining \eqref{eq:marginal}, \eqref{eq:chi1}, \eqref{eq:chi2}, and \eqref{eq:scaling} yields
\begin{align*}
t\cdot \ensuremath{ \mathrm{P} }(X^0_s\leq t) = t\cdot \ensuremath{ \mathrm{P} }(Z\leq t/\psi(s))
\preceq t\cdot (t/s)^{\delta/2}
\end{align*}
for $0<t\leq s\leq 1$ and
\begin{align*}
t^{3/2}\cdot \E{ (X^0_s )^{-1/2} \cdot \ind{ \set{X^0_s\geq t} }}
&= t^{3/2}\cdot (\psi(s))^{-1/2}\cdot \E{ Z^{-1/2} \cdot \ind{ \set{Z\geq t/\psi(s)} }} \\
&\preceq \frac{t^{3/2}}{\sqrt{s}} \cdot
\begin{cases}
1, &\text{if }\ \delta>1, \\
1+\ln(s/t), &\text{if }\ \delta=1, \\
\left(t/s\right)^{(\delta-1)/2}, &\text{if }\ \delta<1,
\end{cases}
\end{align*}
for $0<t\leq s\leq 1$. Moreover, due to Remark~\ref{rem:comparison-principle} and
monotonicity of $U_2(\cdot,t)$ we have
\begin{align}\label{eq:sum2}
\begin{aligned}[b]
\E{ U_2(X^x_s,t) } &\leq \E{ U_2(X^0_s,t) } \\
&\preceq t\cdot (t/s)^{\delta/2}+t\cdot
\begin{cases}
(t/s)^{1/2}, &\text{if }\ \delta>1, \\
(t/s)^{1/2}\cdot (1+\ln(s/t)), &\text{if }\ \delta=1, \\
(t/s)^{\delta/2}, &\text{if }\ \delta<1,
\end{cases}
\end{aligned}
\end{align}
for $0<t\leq s\leq 1$ and $x\geq0$.
Combining \eqref{eq:sum}, \eqref{eq:sum1}, and \eqref{eq:sum2} completes the proof.
\end{proof}
\begin{thm}[$L_1$-convergence of one-step schemes]\label{thm:strong-L1}
Let $ \ensuremath{ Y^{x,N} } $ be a one-step scheme given by \eqref{eq:oss} and \eqref{eq:oss-int},
and assume that (A\ref{ass:lipschitz}) and (A\ref{ass:local-error})
are fulfilled for $p=1$. Then there exists a constant $C>0$ such that
\begin{align*}
\sup_{0\leq t\leq 1}\E{ \abs{X^x_t- \ensuremath{ Y^{x,N} } _t} } \leq C\cdot (1+x)\cdot
\frac{ 1+\ind{\set{1}}(\delta)\cdot\ln N }{ N^{\min(1,\delta)/2} }
\end{align*}
for all $N\in\mathbb{N}$ and for all $x\geq 0$.
\end{thm}
\begin{proof}
For notational convenience, we set
\begin{align*}
\ensuremath{ X^{x,N} } _n = X^x_{n/N} \qquad\text{and}\qquad
\ensuremath{ \hat{Y}^{x,N} } _n = \ensuremath{ Y^{x,N} } _{n/N}
\end{align*}
for $n=0,\ldots,N$. Furthermore, we define
\begin{align*}
\ensuremath{ e^{x,N} } _n = \E{ \abs{ \ensuremath{ X^{x,N} } _n- \ensuremath{ \hat{Y}^{x,N} } _n} }
\end{align*}
for $n=0,\ldots,N$.
Then we have $ \ensuremath{ e^{x,N} } _0=0$ and
\begin{align*}
\ensuremath{ e^{x,N} } _{n+1} &\leq \E{ \abs{
\ensuremath{ X^{x,N} } _{n+1} - \ensuremath{ \Theta } \left( \ensuremath{ X^{x,N} } _n, 1/N, \ensuremath{ \Delta W^N } _{n} \right)
} } \\
&\qquad+ \E{ \abs{
\ensuremath{ \Theta } ( \ensuremath{ X^{x,N} } _n,1/N, \ensuremath{ \Delta W^N } _{n}) - \ensuremath{ \hat{Y}^{x,N} } _{n+1}
} } \\
&= \E { \E{ \abs{
X_{1/N}^{\tilde{x}} - \ensuremath{ \Theta } \left( \tilde{x}, 1/N, W_{1/N} \right)
} }_{\tilde{x}= \ensuremath{ X^{x,N} } _n} } \\
&\qquad+ \E{ \E{ \abs{
\ensuremath{ \Theta } (\tilde{x},1/N,W_{1/N}) - \ensuremath{ \Theta } (\tilde{y},1/N,W_{1/N})
} }_{(\tilde{x},\tilde{y})=( \ensuremath{ X^{x,N} } _n, \ensuremath{ \hat{Y}^{x,N} } _n)} }
\end{align*}
for $n=0,\ldots,N-1$. Using (A\ref{ass:local-error}) and (A\ref{ass:lipschitz})
with $p=1$ we obtain
\begin{align*}
\ensuremath{ e^{x,N} } _{n+1} \leq C_1\cdot \E{ \ensuremath{ \Delta_\mathrm{loc} } ( \ensuremath{ X^{x,N} } _n,1/N) }
+ \left( 1+K/N \right)\cdot \E{ \abs{ \ensuremath{ X^{x,N} } _n- \ensuremath{ \hat{Y}^{x,N} } _n} }.
\end{align*}
Moreover, applying Proposition~\ref{prop:balance} yields
\begin{align*}
\ensuremath{ e^{x,N} } _{n+1} &\leq \left( 1+K/N \right)\cdot \ensuremath{ e^{x,N} } _n \\
&\qquad+ C_1C_2\cdot (1+x) \cdot \frac{1}{N}\cdot
\begin{cases}
1, &\text{if }\ n=0,\\
n^{-\min(1,\delta)/2}\cdot \left( 1+\ln(n)\cdot \ind{\set{1}}(\delta) \right),
&\text{if }\ n\geq1,
\end{cases}
\end{align*}
and hence
\begin{align*}
\ensuremath{ e^{x,N} } _{n+1} \leq \left( 1+K/N \right)\cdot \ensuremath{ e^{x,N} } _n
+ 2\,C_1C_2\cdot (1+x) \cdot \frac{1}{N}\cdot
\frac{ 1+\ln(N)\cdot\ind{\set{1}}(\delta) }{ (n+1)^{\min(1,\delta)/2} }
\end{align*}
for $n=0,\ldots,N-1$. Recursively, we get
\begin{align*}
\ensuremath{ e^{x,N} } _{n} &\leq 2\,C_1C_2\cdot (1+x) \cdot \frac{1}{N}
\cdot\left( 1+\ln(N)\cdot \ind{\set{1}}(\delta) \right)
\cdot \sum_{k=1}^n \frac{ \left( 1+K/N \right)^{n-k} }{ k^{\min(1,\delta)/2} }
\end{align*}
and hence
\begin{align*}
\ensuremath{ e^{x,N} } _{n} &\leq 2\,C_1C_2\,e^{K}\cdot (1+x)
\cdot\left( 1+\ln(N)\cdot \ind{\set{1}}(\delta) \right)
\cdot \frac{1}{N}\sum_{k=1}^N k^{-\min(1,\delta)/2} \\
&\leq 4\,C_1C_2\,e^{K}\cdot (1+x)
\cdot\frac{ 1+\ln(N)\cdot \ind{\set{1}}(\delta) }{ N^{\min(1,\delta)/2} }
\end{align*}
for $n=0,\ldots,N$. This yields
\begin{align}\label{eq:rate-grid}
\max_{n=0,\ldots,N}\E{ \abs{ \ensuremath{ X^{x,N} } _n- \ensuremath{ \hat{Y}^{x,N} } _n} } \leq 4\,C_1C_2\,e^{K}\cdot (1+x)\cdot
\frac{ 1+\ind{\set{1}}(\delta)\cdot\ln N }{ N^{\min(1,\delta)/2} }
\end{align}
for all $N\in\mathbb{N}$ and for all $x\geq0$.
Since the coefficients of SDE~\eqref{eq:cir} satisfy a linear
growth condition we have
\begin{align}\label{eq:sol-time-regularity}
\E{ \abs{ X^x_s-X^x_t } } \preceq (1+x)\cdot\sqrt{\abs{s-t}}
\end{align}
for all $x\geq0$ and $s,t\in[0,1]$, see, e.g., \cite[Thm.~2.4.3]{mao}.
Combining \eqref{eq:rate-grid} and \eqref{eq:sol-time-regularity}
completes the proof.
\end{proof}
\begin{rem}
Consider the proof of Theorem~\ref{thm:strong-L1}.
An analysis of the global error by adding local errors is a
classical technique for ordinary differential equations.
For such a technique in the context of SDEs under standard
Lipschitz assumptions we refer to \cite[Chap.~1.1]{milstein2004}.
In the case of SDE~\eqref{eq:cir}, it is crucial to control the average local
error, see Proposition~\ref{prop:balance}.
\end{rem}
\subsection{$L_p$-convergence}\label{sec:Lp-conv}
In this section we extend the result from Theorem~\ref{thm:strong-L1}
to arbitrary $p>1$ by using interpolation of $L_p$-spaces. For this
we need the following additional assumption on a one-step scheme.
\begin{enumerate}[({A}1)]\setcounter{enumi}{2}
\item\label{ass:bounded}
For every $1\leq q<\infty$ there exists a constant $C>0$ such that
\begin{align*}
\sup_{0\leq t\leq 1} \left( \E{ \abs{ \ensuremath{ Y^{x,N} } _t }^q } \right)^{1/q}
\leq C\cdot (1+x)
\end{align*}
for all $x\geq0$ and $N\in\mathbb{N}$.
\end{enumerate}
We say that a one-step scheme $ \ensuremath{ Y^{x,N} } $ satisfying (A\ref{ass:bounded})
is uniformly bounded.
\begin{rem}
Under standard linear growth conditions on the coefficients of the SDE,
the Euler-Maruyama method and the Milstein method are uniformly bounded,
see, e.g., \cite[Lem.~2.7.1]{mao} for the Euler-Maruyama method and $q=2$.
\end{rem}
\begin{rem}[Interpolation of $L_p$-spaces]\label{rem:interpolation}
Let $1\leq p<\infty$ and $0<\ensuremath{\varepsilon}<1/p$. Set
\begin{align}\label{eq:def-q}
q = 1+\frac{1-1/p}{\ensuremath{\varepsilon}}.
\end{align}
Note that $p\leq q<\infty$ and $\ensuremath{\varepsilon} p<1$. An application of H\"older's inequality
with the conjugate exponents $1/(\ensuremath{\varepsilon} p)$ and $1/(1-\ensuremath{\varepsilon} p)$ yields
\begin{align*}
\left( \E{ \abs{Z}^p } \right)^{1/p}
= \left( \E{ \abs{Z}^{q\cdot(\ensuremath{\varepsilon} p)}\cdot\abs{Z}^{(1/p-\ensuremath{\varepsilon})\cdot p} } \right)^{1/p}
\leq \left( \E{ \abs{Z}^q } \right)^{\ensuremath{\varepsilon}}
\cdot \left( \E{ \abs{Z} } \right)^{1/p-\ensuremath{\varepsilon}}
\end{align*}
for all random variables $Z$.
\end{rem}
\begin{cor}[$L_p$-convergence of one-step schemes]\label{cor:p}
Consider the situation of Theorem~\ref{thm:strong-L1} and assume in addition that
(A\ref{ass:bounded}) is fulfilled. Furthermore, let $1\leq p<\infty$ and $\ensuremath{\varepsilon}>0$.
Then there exists a constant $C>0$ such that
\begin{align*}
\sup_{0\leq t\leq 1}\left( \E{ \abs{X^x_t- \ensuremath{ Y^{x,N} } _t}^p } \right)^{1/p}
\leq C\cdot(1+x)\cdot \frac{ 1}{ N^{\min(1,\delta)/(2p)-\ensuremath{\varepsilon}} }
\end{align*}
for all $N\in\mathbb{N}$ and for all $x\geq0$.
\end{cor}
\begin{proof}
We may assume $\ensuremath{\varepsilon}<1/p$. According to Remark~\ref{rem:interpolation}
we have
\begin{align*}
\sup_{0\leq t\leq 1} &\left( \E{ \abs{X^x_t- \ensuremath{ Y^{x,N} } _t}^p } \right)^{1/p} \\
&\leq \sup_{0\leq t\leq 1}\left( \E{ \abs{X^x_t- \ensuremath{ Y^{x,N} } _t} } \right)^{1/p-\ensuremath{\varepsilon}} \\
&\qquad\qquad\cdot \left(
\sup_{0\leq t\leq 1}\left( \E{ \abs{X^x_t}^q } \right)^{1/q}
+ \sup_{0\leq t\leq 1}\left( \E{ \abs{ \ensuremath{ Y^{x,N} } _t}^q } \right)^{1/q}
\right)^{q\cdot\ensuremath{\varepsilon}}
\end{align*}
with $q$ given by \eqref{eq:def-q}. Using \eqref{eq:linear-growth} and (A\ref{ass:bounded})
we get
\begin{align*}
\sup_{0\leq t\leq 1} &\left( \E{ \abs{X^x_t- \ensuremath{ Y^{x,N} } _t}^p } \right)^{1/p}
\preceq \sup_{0\leq t\leq 1}\left( \E{ \abs{X^x_t- \ensuremath{ Y^{x,N} } _t} } \right)^{1/p-\ensuremath{\varepsilon}}
\cdot \left(1+x\right)^{q\cdot \ensuremath{\varepsilon}}.
\end{align*}
It remains to apply Theorem~\ref{thm:strong-L1} and to observe that $1/p-\ensuremath{\varepsilon}+q\ensuremath{\varepsilon}=1$.
\end{proof}
\subsection{Regularity in the initial value}\label{sec:regularity}
In this section we study continuity properties of the solution of
SDE~\eqref{eq:cir} in the initial value. It turns out that the solution
is in general not locally Lipschitz continuous in the initial value
with respect to the $L_p$-norm if $p>1$.
This is the reason why we have restricted ourselves
in Section~\ref{sec:L1-conv} to the case $p=1$.
For results on local Lipschitz continuity in the initial value for more
general SDEs we refer to \cite{cox-hj}. However, in the case of SDE~\eqref{eq:cir}
these results are restricted to $\delta\geq2$.
The following lemma implies that the solution of SDE~\eqref{eq:cir}
is Lipschitz continuous in the initial value with respect to the $L_1$-norm
on any compact time interval.
\begin{lem}\label{lem:initial-value}
We have
\begin{align*}
\E{ \abs{X^x_t-X^y_t} } = e^{-bt}\cdot\abs{x-y}
\end{align*}
for all $x,y\geq0$ and for all $t\geq0$.
\end{lem}
\begin{proof}
Let $x\geq y\geq0$. According to Remark~\ref{rem:comparison-principle} we have
\begin{align*}
\E{ \abs{X^x_t-X^y_t} } = \E{ X^x_t-X^y_t } = \E{ X^x_t } - \E{X^y_t }
\end{align*}
for all $t\geq0$. Using \eqref{eq:mean-sol} completes the proof.
\end{proof}
\begin{rem}
The proof of Lemma~\ref{lem:initial-value}
is a general technique to obtain $L_1$-Lipschitz continuity
in the initial value for a large class of one-dimensional SDEs.
The comparison principle reduces this problem to the computation
of expected values.
\end{rem}
\begin{ex}[One-dimensional squared Bessel process]\label{ex:bessel}
In \cite{bessel1}, it was shown that for $\delta=1$ and $b=0$ we have
\begin{align*}
X^x_t = \left( W_t+\sqrt{x} - \min\left(
0, \inf_{0\leq s\leq t} W_s+\sqrt{x}
\right) \right)^2
\end{align*}
for $t\geq0$ and $x\geq 0$. This yields
\begin{align*}
\abs{ X_t^x-X_t^0 } =
\begin{cases}
0, &\text{if }\inf_{0\leq s\leq t} W_s+\sqrt{x}\leq 0,\\
(W_t+\sqrt{x})^2-(W_t-\inf_{0\leq s\leq t} W_s)^2,
&\text{if }\inf_{0\leq s\leq t} W_s+\sqrt{x}\geq 0.
\end{cases}
\end{align*}
Using this one can show
\begin{align*}
\left(\E{ \abs{X_t^x-X_t^0}^p }\right)^{1/p} \asymp x^{(1+1/p)/2}
\end{align*}
for $x\in[0,1]$, where $1\leq p<\infty$ and $t>0$.
\end{ex}
In the rest of this section we assume
\begin{align*}
0<\delta<2 \qquad\text{and}\qquad b=0,
\end{align*}
i.e., the solution of SDE~\eqref{eq:cir} is a squared Bessel process of dimension $\delta$.
For $1\leq p<\infty$ define the maximal H\"older exponent by
\begin{align*}
\ensuremath{\alpha_\mathrm{ex} } (\delta,p) = \sup\left\{\alpha\geq 0:\ \exists C>0\ \ \forall x\in \left]0,1\right]:
\ \left(\E{ \abs{X^x_1-X^0_1}^p }\right)^{1/p} \leq C\cdot x^\alpha
\right\}.
\end{align*}
Note that replacing the time point $t=1$ in the definition of $ \ensuremath{\alpha_\mathrm{ex} } $
by an arbitrary time point $t>0$ does not affect the value of $ \ensuremath{\alpha_\mathrm{ex} } $,
see Remark~\ref{rem:scaling-space} and Remark~\ref{rem:scaling-time}.
From Lemma~\ref{lem:initial-value} and Example~\ref{ex:bessel} we already have
\begin{align}\label{eq:exin-1}
\ensuremath{\alpha_\mathrm{ex} } (\delta,1) = 1
\end{align}
and
\begin{align}\label{eq:exin-2}
\ensuremath{\alpha_\mathrm{ex} } (1,p) = \left(1+1/p\right)/2.
\end{align}
\begin{prop}\label{prop:initial-value}
We have
\begin{align}\label{eq:exin}
1/p \leq \ensuremath{\alpha_\mathrm{ex} } (\delta,p) \leq 1/p+\delta/2-\delta/(2p).
\end{align}
In particular, we have $ \ensuremath{\alpha_\mathrm{ex} } (\delta,p)<1$ if and only if $p>1$.
\end{prop}
\begin{proof}
If $p=1$, then \eqref{eq:exin} follows from \eqref{eq:exin-1}.
Note that for $x\in\left]0,1\right]$ we have
\begin{align}\label{eq:prob-positive}
\ensuremath{ \mathrm{P} }\left( \forall t\in[0,1]:\ X_{t}^{x}>0 \right)
\asymp x^{1-\delta/2}.
\end{align}
This follows from \cite[p.~75]{handbook},
where the density of the first hitting time of zero is given for Bessel processes.
Let $1<p<\infty$ and let $1<q<\infty$ be the dual of $p$, i.e.,
$1/p+1/q=1$. Using H\"older's inequality, Remark~\ref{rem:comparison-principle},
Lemma~\ref{lem:initial-value}, and \eqref{eq:prob-positive} we get
\begin{align*}
\left(\E{ \abs{X^x_1-X^0_1}^p }\right)^{1/p}
&\geq \E{ \abs{X^x_1-X^0_1}\cdot\ind{\set{\forall t\in[0,1]:X^x_t>0}} }
\cdot \big(\, \ensuremath{ \mathrm{P} }\left(\forall t\in[0,1]:X^x_t>0\right) \big)^{-1/q} \\
&= \E{ \abs{X^x_1-X^0_1} }
\cdot \big(\, \ensuremath{ \mathrm{P} }\left(\forall t\in[0,1]:X^x_t>0\right) \big)^{-1/q} \\
&\asymp x\cdot x^{-1/q\cdot (1-\delta/2)}
\end{align*}
for $x\in\left]0,1\right]$,
which shows the upper bound in \eqref{eq:exin}. The lower bound follows by
interpolation using Remark~\ref{rem:interpolation}, Lemma~\ref{lem:initial-value}
and \eqref{eq:linear-growth}, cf.~the proof of Corollary~\ref{cor:p}.
\end{proof}
\begin{rem}
Let us mention that the upper bound in \eqref{eq:exin} is sharp
for $p=1$ as well as for $\delta=1$, see \eqref{eq:exin-1} and \eqref{eq:exin-2}.
\end{rem}
\section{Truncated Milstein Scheme}\label{sec:scheme}
In this section we introduce a truncated Milstein scheme and prove
that this scheme satisfies the assumptions of
Theorem~\ref{thm:strong-L1} and Corollary~\ref{cor:p}.
Consider $ \ensuremath{ \varphi_\mathrm{Mil} } \colon\mathbb{R}^+_0\times\left]0,1\right]\times\mathbb{R}\to\mathbb{R}$ given by
\begin{align*}
\ensuremath{ \varphi_\mathrm{Mil} } (x,t,w) &= x + (\delta-bx)\cdot t + 2\sqrt{x}\cdot w
+ \left(w^2-t\right) \\
&= \left( \sqrt{x}+w \right)^2
+ \left( \delta-1-bx \right)\cdot t.
\end{align*}
This function models a single Milstein step. However, it preserves
positivity only if $b\leq0$ and $\delta\geq1$.
Hence it does not define a valid one-step scheme in general.
However, the analysis of $ \ensuremath{ \varphi_\mathrm{Mil} } $ is an important step since we will
construct valid one-step schemes that are close to $ \ensuremath{ \varphi_\mathrm{Mil} } $.
\begin{prop}[Error of a Milstein-step]\label{prop:milstein}
For every $1\leq p<\infty$ there exists a constant $C>0$ such that
\begin{align*}
\left( \E{ \abs{ \ensuremath{ \varphi_\mathrm{Mil} } (x,t,W_t)-X^x_t }^p } \right)^{1/p}
\leq C\cdot \ensuremath{ \Delta_\mathrm{loc} } (x,t)
\end{align*}
for all $x\geq0$ and $t\in\left]0,1\right]$, where $ \ensuremath{ \Delta_\mathrm{loc} } $ is
given by \eqref{eq:loc-milstein}.
\end{prop}
The proof of Proposition~\ref{prop:milstein} exploits the following
simple lemma, which is a refinement of \eqref{eq:linear-growth}.
\begin{lem}
For every $1\leq p<\infty$ there exists a constant $C>0$ such that
\begin{align}\label{eq:sol-sup}
\left( \E{ \sup_{0\leq s\leq t} \abs{X^x_s}^p}\right)^{1/p}
\leq C\cdot (x+t)
\end{align}
for all $x\geq0$ and $t\in[0,1]$.
\end{lem}
\begin{proof}
According to \eqref{eq:mean-sol} we have
\begin{align*}
\E{X^x_t} \preceq x+t
\end{align*}
for $x\geq0$ and $t\in[0,1]$. Combining this with \eqref{eq:linear-growth}
and a Burkholder-Davis-Gundy-type inequality~\cite[Thm.~1.7.2]{mao}
we obtain
\begin{align*}
\left( \E{ \sup_{0\leq s\leq t} \abs{X^x_s}^2}\right)^{1/2}
\preceq x + (1+x)\,t + \sqrt{t}\cdot\sqrt{x+t}
\end{align*}
for $x\geq0$ and $t\in[0,1]$, i.e., \eqref{eq:sol-sup} holds for $p=2$.
In the same way we get \eqref{eq:sol-sup} for $p=4,8,16,\ldots$,
which suffices.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:milstein}]
We may assume $2\leq p<\infty$. From \eqref{eq:linear-growth},
a Burkholder-Davis-Gundy-type inequality~\cite[Thm.~1.7.2]{mao},
and \eqref{eq:sol-sup} we get
\begin{align}\label{eq:2}
\left( \E{ \sup_{0\leq s\leq t} \abs{X^x_s-x}^p } \right)^{1/p}
\preceq (1+x)\,t + \sqrt{t}\cdot\sqrt{x+t}
\preceq (1+x)\,t + \sqrt{xt}
\end{align}
for $x\geq0$ and $t\in[0,1]$. Moreover, we obtain
\begin{align}\label{eq:3}
\left( \E{ \abs{ \ensuremath{ \varphi_\mathrm{Mil} } (x,t,W_t)-x}^p }\right)^{1/p}
\preceq (1+x)\,t + \sqrt{xt}+t
\preceq (1+x)\,t + \sqrt{xt}
\end{align}
for $x\geq0$ and $t\in\left]0,1\right]$. Combining \eqref{eq:2} and \eqref{eq:3}
yields the claim for $x\leq t$.
Furthermore, according to \eqref{eq:2} the error of the drift term satisfies
\begin{align*}
\left( \E{ \abs{
\int_0^t (\delta-bX^x_s)\,\mathrm{d} s - \int_0^t (\delta-bx)\,\mathrm{d} s
}^p } \right)^{1/p}
&\preceq (1+x)\,t^2 + \sqrt{x}\cdot t^{3/2} \\
&\preceq \ensuremath{ \Delta_\mathrm{loc} } (x,t)
\end{align*}
for $x\geq0$ and $t\in\left]0,1\right]$.
Define the stopping time
\begin{align*}
\tau^x = \inf \left\{s\geq 0:\,\abs{X^x_s-x}=x/2 \right\}
\end{align*}
for $x\geq 0$. Using Markov's inequality we get
\begin{align*}
\ensuremath{ \mathrm{P} }\left(\tau^x\leq t\right)
= \ensuremath{ \mathrm{P} }\left( \sup_{0\leq s\leq t} \abs{X^x_s-x}\geq x/2 \right)
\leq \frac{ \E{ \sup_{0\leq s\leq t}\abs{X^x_s-x}^p } }{(x/2)^p}
\end{align*}
for $t\geq0$ and $x>0$, and hence \eqref{eq:2} implies
\begin{align*}
\ensuremath{ \mathrm{P} }\left(\tau^x\leq t\right)
\preceq \frac{(xt)^p+(xt)^{p/2}}{x^p}
=t^p+(t/x)^{p/2}
\leq t^{p/2}+(t/x)^{p/2}
\leq 2\cdot\frac{ t^{p/2} }{ \min(1,x^{p/2}) }
\end{align*}
for $x\geq t$ and $t\in\left]0,1\right]$.
By quadrupling $p$ we obtain
\begin{align}\label{eq:hitting}
\ensuremath{ \mathrm{P} }\left(\tau^x\leq t\right)
\preceq \frac{ t^{2p} }{ \min(1,x^{2p}) }
\end{align}
for $x\geq t$ and $t\in\left]0,1\right]$.
Define the stopped process
$\tilde X^x=(\tilde X^x_t)_{t\geq 0}$ by
\begin{align*}
\tilde X^x_t = X^x_{t\wedge\tau^x}
\end{align*}
for $x\geq 0$.
Clearly, $\tilde X^x_t\in[x/2,3x/2]$ for $t\geq 0$.
It\^{o}'s lemma shows
\begin{align*}
\sqrt{\tilde X^x_t} = \sqrt{x} + \int_0^{t\wedge\tau^x}
\frac{ (\delta-1)-b\tilde X^x_s }{ 2\sqrt{\tilde X^x_s} }\,\mathrm{d} s
+ W_{t\wedge\tau^x}
\end{align*}
and hence
\begin{align}\label{eq:4}
\left( \E{ \abs{ \sqrt{\tilde X^x_t}-\left(\sqrt{x}+W_{t\wedge\tau^x}\right) }^p
}\right)^{1/p}
\preceq (1+x)\,t/\sqrt{x}
\end{align}
for $t\geq0$ and $x>0$.
Combining the Cauchy-Schwarz inequality with \eqref{eq:sol-sup}
and \eqref{eq:hitting} yields
\begin{align*}
&\left( \E{ \abs{ \sqrt{X^x_t}-\left(\sqrt{x}+W_t\right) }^p
\cdot \ind{ \set{\tau^x\leq t} }
} \right)^{1/p} \\
&\qquad\leq \left( \E{ \abs{ \sqrt{X^x_t}-\left(\sqrt{x}+W_t\right) }^{2p} }
\right)^{1/(2p)}
\cdot \Big( \ensuremath{ \mathrm{P} }(\tau^x\leq t) \Big)^{1/(2p)} \\
&\qquad\preceq \left(\sqrt{x+t}+\sqrt{x}+\sqrt{t}\right)\cdot \frac{t}{\min(1,x)}
\end{align*}
and hence
\begin{align}\label{eq:5}
\left( \E{ \abs{\sqrt{X^x_t}-\left(\sqrt{x}+W_t\right)}^p
\cdot \ind{ \set{\tau^x\leq t} }
} \right)^{1/p}
\preceq t\cdot (\sqrt{x}+1/\sqrt{x})
\end{align}
for $x\geq t$ and $t\in\left]0,1\right]$. Moreover, we have
\begin{align*}
\left( \E{ \abs{\sqrt{X^x_t}-\left(\sqrt{x}+W_t\right)}^p } \right)^{1/p}
&\leq \left( \E{ \abs{\sqrt{X^x_t}-\left(\sqrt{x}+W_t\right)}^p
\cdot \ind{ \set{\tau^x>t} }
}\right)^{1/p} \\
&\qquad+ \left( \E{ \abs{\sqrt{X^x_t}-\left(\sqrt{x}+W_t\right)}^p
\cdot \ind{ \set{\tau^x\leq t} }
}\right)^{1/p}
\end{align*}
such that \eqref{eq:4} and \eqref{eq:5} yield
\begin{align}\label{eq:6}
\left( \E{ \abs{\sqrt{X^x_t}-\left(\sqrt{x}+W_t\right)}^p } \right)^{1/p}
\preceq t\cdot (\sqrt{x}+1/\sqrt{x})
\end{align}
for $x\geq t$ and $t\in\left]0,1\right]$.
Using a Burkholder-Davis-Gundy-type inequality~\cite[Thm.~1.7.2]{mao}
and \eqref{eq:6} we obtain
\begin{align*}
&\left( \E{ \abs{ \int_0^t \sqrt{X^x_s}\,\mathrm{d} W_s
-\int_0^t \left(\sqrt{x}+W_s\right)\mathrm{d} W_s
}^p } \right)^{1/p} \\
&\qquad\preceq \sqrt{t} \cdot \sup_{0\leq s\leq t}
\left( \E{ \abs{\sqrt{X^x_s}-\left(\sqrt{x}+W_s\right)}^p } \right)^{1/p} \\
&\qquad\preceq \ensuremath{ \Delta_\mathrm{loc} } (x,t)
\end{align*}
for $x\geq t$ and $t\in\left]0,1\right]$.
Since $ \ensuremath{ \varphi_\mathrm{Mil} } (x,t,W_t)=x+(\delta-bx)\cdot t+2\int_0^t \left(\sqrt{x}+W_s\right)\mathrm{d} W_s$,
combining the above estimates for the drift term and for the stochastic integral
yields the claim for $x\geq t$, which completes the proof.
\end{proof}
\subsection{$L_1$-convergence}\label{sec:L1-mil}
Recall that $ \ensuremath{ \varphi_\mathrm{Mil} } $ is given by
\begin{align*}
\ensuremath{ \varphi_\mathrm{Mil} } (x,t,w) = h(x,t,w) + \left( \delta-1-bx \right)\cdot t
\end{align*}
with
\begin{align*}
h(x,t,w)=\left( \sqrt{x}+w \right)^2.
\end{align*}
Consider the one-step function
$ \ensuremath{ \oss_\mathrm{Mil} } \colon\mathbb{R}^+_0\times\left]0,1\right]\times\mathbb{R}\to\mathbb{R}^+_0$ given by
\begin{align}\label{eq:truncated-scheme}
\ensuremath{ \oss_\mathrm{Mil} } (x,t,w) = \left(
\ensuremath{ \tilde{h} } (x,t,w) + \left( \delta-1-bx \right)\cdot t
\right)^+
\end{align}
with
\begin{align*}
\ensuremath{ \tilde{h} } (x,t,w)=\left( \max\left(
\sqrt{t},\sqrt{\max(t,x)}+w
\right) \right)^2.
\end{align*}
The corresponding one-step scheme is denoted by $ \ensuremath{ \bar{Y}^{x,N} } $.
We refer to this scheme as the truncated Milstein scheme.
Let us mention that we have separated the nonlinear parts of $ \ensuremath{ \varphi_\mathrm{Mil} } $
and $ \ensuremath{ \oss_\mathrm{Mil} } $, namely $h$ and $ \ensuremath{ \tilde{h} } $, from
the linear drift term.
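In code, a single step of this scheme could read as follows (a minimal Python sketch of our own;
the names \texttt{theta\_mil}, \texttt{delta}, and \texttt{b} simply mirror the notation above):
\begin{verbatim}
import math

def theta_mil(x, t, w, delta, b):
    """Truncated Milstein one-step function defined above."""
    h_tilde = max(math.sqrt(t), math.sqrt(max(t, x)) + w) ** 2
    return max(h_tilde + (delta - 1.0 - b * x) * t, 0.0)
\end{verbatim}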
The following lemma shows that $ \ensuremath{ \tilde{h} } $ is $L_1$-Lipschitz continuous
with constant $1$ and that $ \ensuremath{ \tilde{h} } $ is close to $h$ in a suitable way,
cf.~(A\ref{ass:lipschitz}) and (A\ref{ass:local-error}).
The proof of Lemma~\ref{lem:lipschitz-part} is postponed to the appendix.
\begin{lem}\label{lem:lipschitz-part}
We have
\begin{align*}
\E{ \abs{ \ensuremath{ \tilde{h} } (x_1,t,W_t)- \ensuremath{ \tilde{h} } (x_2,t,W_t) } } \leq \abs{x_1-x_2}
\end{align*}
for all $x_1,x_2\geq0$ and $t\in\left]0,1\right]$.
Furthermore, for every $1\leq p<\infty$ there exists a constant $C>0$ such that
\begin{align*}
\left(\E{ \abs{ h(x,t,W_t)- \ensuremath{ \tilde{h} } (x,t,W_t) }^p }\right)^{1/p} \leq C\cdot \ensuremath{ \Delta_\mathrm{loc} } (x,t)
\end{align*}
for all $x\geq0$ and $t\in\left]0,1\right]$.
\end{lem}
The following theorem shows that the truncated Milstein scheme satisfies
the assumptions of Theorem~\ref{thm:strong-L1}.
\begin{thm}[$L_1$-convergence of truncated Milstein scheme]\label{thm:truncated-milstein}
There exists a constant $K>0$ such that
\begin{align*}
\E{ \abs{ \ensuremath{ \oss_\mathrm{Mil} } (x_1,t,W_t)- \ensuremath{ \oss_\mathrm{Mil} } (x_2,t,W_t) } }
\leq (1+Kt)\cdot \abs{x_1-x_2}
\end{align*}
for all $x_1,x_2\geq0$ and $t\in\left]0,1\right]$, i.e., (A\ref{ass:lipschitz})
is fulfilled for $p=1$.
Furthermore, for every $1\leq p<\infty$ there exists a constant $C>0$ such that
\begin{align*}
\left(\E{ \abs{ \ensuremath{ \oss_\mathrm{Mil} } (x,t,W_t)-X^x_t }^p }\right)^{1/p}
\leq C\cdot \ensuremath{ \Delta_\mathrm{loc} } (x,t)
\end{align*}
for all $x\geq0$ and $t\in\left]0,1\right]$, i.e., (A\ref{ass:local-error})
is fulfilled for every $1\leq p<\infty$.
\end{thm}
\begin{proof}
Using $\abs{y^+-z^+}\leq\abs{y-z}$ for $y,z\in\mathbb{R}$ we obtain
\begin{align*}
&\E{ \abs{ \ensuremath{ \oss_\mathrm{Mil} } (x_1,t,W_t)- \ensuremath{ \oss_\mathrm{Mil} } (x_2,t,W_t) } } \\
&\qquad\leq\E{ \abs{
\ensuremath{ \tilde{h} } (x_1,t,W_t)- \ensuremath{ \tilde{h} } (x_2,t,W_t) + (x_2-x_1)\cdot bt
} } \\
&\qquad\leq (1+\abs{b}\,t)\cdot \abs{x_1-x_2}
\end{align*}
due to the first part of Lemma~\ref{lem:lipschitz-part}.
Moreover, using $\abs{z^+-y}\leq\abs{z-y}$ for $y\geq0$ and $z\in\mathbb{R}$ we have
\begin{align*}
&\left(\E{ \abs{ \ensuremath{ \oss_\mathrm{Mil} } (x,t,W_t)-X^x_t }^p }\right)^{1/p} \\
&\qquad\leq \left(\E{ \abs{
\ensuremath{ \tilde{h} } (x,t,W_t) + (\delta-1-bx)\cdot t-X^x_t
}^p }\right)^{1/p} \\
&\qquad\leq \left(\E{ \abs{ \ensuremath{ \varphi_\mathrm{Mil} } (x,t,W_t)-X^x_t }^p }\right)^{1/p}
+ \left(\E{ \abs{ h(x,t,W_t)- \ensuremath{ \tilde{h} } (x,t,W_t) }^p }\right)^{1/p}.
\end{align*}
Applying Proposition~\ref{prop:milstein} and the second part of Lemma~\ref{lem:lipschitz-part}
yields the second statement.
\end{proof}
\begin{rem}
Let us stress that there is some freedom regarding the particular
form of the truncated Milstein scheme. For instance, the proof
of Theorem~\ref{thm:truncated-milstein} shows that the positive
part in \eqref{eq:truncated-scheme} may be replaced by the absolute
value.
\end{rem}
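To illustrate how the convergence behavior may be examined empirically, one can couple a coarse
approximation to a fine-grid reference computed from the same Brownian path (a hypothetical
Monte Carlo experiment using \texttt{theta\_mil} from the sketch above; all parameter values
are arbitrary illustrative choices):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
delta, b, x = 1.0, 0.0, 1.0     # squared Bessel process of dimension 1
M, N, K = 10000, 2**6, 2**12    # paths, coarse steps, fine reference steps
err = 0.0
for _ in range(M):
    dW = rng.normal(0.0, np.sqrt(1.0 / K), size=K)  # fine increments
    y_fine = x
    for k in range(K):
        y_fine = theta_mil(y_fine, 1.0 / K, dW[k], delta, b)
    y_coarse, r = x, K // N
    for n in range(N):   # coarse increments are sums of fine increments
        w = dW[n * r:(n + 1) * r].sum()
        y_coarse = theta_mil(y_coarse, 1.0 / N, w, delta, b)
    err += abs(y_fine - y_coarse)
print("empirical L1 error at t = 1:", err / M)
\end{verbatim}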
\subsection{$L_p$-convergence}\label{sec:Lp-mil}
In this section we show that the truncated Milstein scheme $ \ensuremath{ \bar{Y}^{x,N} } $ defined by
the one-step function $ \ensuremath{ \oss_\mathrm{Mil} } $ given in \eqref{eq:truncated-scheme} is uniformly
bounded and hence satisfies the assumptions of Corollary~\ref{cor:p}.
\begin{prop}\label{lem:bounded}
For every $1\leq q<\infty$ there exists a constant $C>0$ such that
\begin{align*}
\sup_{0\leq t\leq 1} \left( \E{ \abs{ \ensuremath{ \bar{Y}^{x,N} } _t }^q } \right)^{1/q}
\leq C\cdot (1+x)
\end{align*}
for all $x\geq0$ and $N\in\mathbb{N}$, i.e., (A\ref{ass:bounded})
is fulfilled.
\end{prop}
\begin{proof}
Let $x\geq0$, $t\in\left]0,1\right]$, and $w\in\mathbb{R}$. First, note that
\begin{align*}
\ensuremath{ \tilde{h} } (x,t,w) &\leq
\left( \max\left(\sqrt{t},\sqrt{t}+w\right) \right)^2
+ \left( \max\left(\sqrt{t},\sqrt{x}+w\right) \right)^2 \\
&\leq 2t + 2 w^2 + t + \left(\sqrt{x}+w\right)^2
\end{align*}
and
\begin{align*}
\left( \delta-1-bx \right)\cdot t \leq ( \delta+\abs{b}x ) \cdot t.
\end{align*}
Moreover, note that $\max\left(\sqrt{t},\sqrt{x}+w\right)$
is monotonically increasing in $x$.
Hence the auxiliary one-step function
$g\colon\mathbb{R}^+_0\times\left]0,1\right]\times\mathbb{R}\to\mathbb{R}^+_0$ defined by
\begin{align*}
g(x,t,w) = x + ( \delta+\abs{b}x+3 ) \cdot t
+ 3w^2 + 2\sqrt{x}\cdot w
\end{align*}
satisfies
\begin{align*}
 \ensuremath{ \oss_\mathrm{Mil} } (x_1,t,w)\leq g(x_2,t,w)
\end{align*}
for $0\leq x_1\leq x_2$. This yields
\begin{align}\label{eq:aux-scheme-bound}
0\leq \ensuremath{ \bar{Y}^{x,N} } _t \leq \ensuremath{ Z^{x,N} } _t
\end{align}
for all $t\geq0$, where $ \ensuremath{ Z^{x,N} } $ denotes the corresponding auxiliary scheme
with piecewise constant interpolation. Moreover, the auxiliary scheme
satisfies the integral equation
\begin{align*}
\ensuremath{ Z^{x,N} } _t = x + \int_0^t \left( \delta+\abs{b} \ensuremath{ Z^{x,N} } _s+6 \right) \mathrm{d} s
+ \int_0^t \left(
2\sqrt{ \ensuremath{ Z^{x,N} } _s} + 6\,(W_s- \ensuremath{ \bar{W}^N } _s)
\right) \mathrm{d} W_s
\end{align*}
for $t=0,1/N,2/N,\ldots$, where $( \ensuremath{ \bar{W}^N } _t)_{t\geq0}$
denotes the piecewise constant interpolation of $W$ at the grid of
mesh size $1/N$, i.e., $ \ensuremath{ \bar{W}^N } _t=W_{\lfloor tN\rfloor/N}$.
Due to this integral equation we can now apply standard techniques
exploiting the linear growth condition to obtain uniform boundedness,
cf.~\cite[Lem.~2.6.1]{mao}. Let $2\leq q<\infty$. Straightforward
calculations show
\begin{align*}
\abs{ \ensuremath{ Z^{x,N} } _s}^q &\preceq 1 + x^q + \int_0^s \abs{ \ensuremath{ Z^{x,N} } _u}^q \mathrm{d} u \\
&\qquad\qquad+
\abs{ \int_0^s \left(2\sqrt{ \ensuremath{ Z^{x,N} } _u}+6\,(W_u- \ensuremath{ \bar{W}^N } _u)\right) \mathrm{d} W_u }^q
\end{align*}
for $x\geq0$ and $s=0,1/N,\ldots,1$. This yields
\begin{align*}
\sup_{0\leq s\leq t} \abs{ \ensuremath{ Z^{x,N} } _s}^q &\preceq 1 + x^q + \int_0^t \abs{ \ensuremath{ Z^{x,N} } _u}^q \mathrm{d} u \\
&\qquad\qquad+ \sup_{0\leq s\leq t}
\abs{ \int_0^s \left(2\sqrt{ \ensuremath{ Z^{x,N} } _u}+6\,(W_u- \ensuremath{ \bar{W}^N } _u)\right) \mathrm{d} W_u }^q
\end{align*}
for $x\geq0$ and $t\in[0,1]$.
Using a Burkholder-Davis-Gundy-type inequality \cite[Thm.~1.7.2]{mao} and
the linear growth condition we obtain
\begin{align*}
\E{ \sup_{0\leq s\leq t} \abs{ \ensuremath{ Z^{x,N} } _s}^q } &\preceq 1 + x^q
+ \int_0^t \E{ \sup_{0\leq s\leq u} \abs{ \ensuremath{ Z^{x,N} } _s}^q } \mathrm{d} u \\
&\qquad\qquad+ \int_0^t \E{ \abs{2\sqrt{ \ensuremath{ Z^{x,N} } _u}+6\,(W_u- \ensuremath{ \bar{W}^N } _u)}^q } \mathrm{d} u \\
&\preceq 1 + x^q
+ \int_0^t \E{ \sup_{0\leq s\leq u} \abs{ \ensuremath{ Z^{x,N} } _s}^q } \mathrm{d} u
\end{align*}
for $x\geq0$ and $t\in[0,1]$.
Applying Gronwall's lemma yields the desired inequality for the
auxiliary scheme. Due to \eqref{eq:aux-scheme-bound} this
inequality also holds for the truncated Milstein scheme.
\end{proof}
\section*{Appendix}
\begin{lem}\label{lem:lipschitz-normal}
Let $Z$ be standard normally distributed. We have
\begin{align*}
\E{ \abs{ \left(\max(1,\sqrt{x_1}+Z)\right)^2
-\left(\max(1,\sqrt{x_2}+Z)\right)^2
} } \leq \abs{x_1-x_2}
\end{align*}
for all $x_1,x_2\geq1$.
Furthermore, for every $1\leq p<\infty$ there exists a constant $C>0$ such that
\begin{align*}
\left(\E{ \abs{ \left(\max(1,\sqrt{x}+Z)\right)^2
-\left(\sqrt{x}+Z\right)^2
}^p }\right)^{1/p} \leq C\cdot \frac{1}{\sqrt{x}}
\end{align*}
for all $x\geq 1$.
\end{lem}
\begin{proof}
Let $\phi$ and $\Phi$ denote the Lebesgue density and the
distribution function of the standard normal distribution,
respectively. Moreover, let $f\colon\mathbb{R}\to\mathbb{R}$ be given by
\begin{align*}
f(x) &= \E{ \left(\max\left(1,x+Z\right)\right)^2 } \\
&= \Phi(1-x) + x^2 \Phi(x-1)
+ 2x\int_{1-x}^\infty z\phi(z)\,\mathrm{d} z
+ \int_{1-x}^\infty z^2\phi(z)\,\mathrm{d} z \\
&= \Phi(1-x) + x^2 \Phi(x-1)
+ 2x\,\phi(1-x) \\
&\qquad\qquad+ 1+(1-x)\,\phi(1-x)-\Phi(1-x) \\
&= 1+ (1+x)\,\phi(1-x) + x^2\Phi(x-1).
\end{align*}
Hence the derivative of $f$ reads
\begin{align*}
f'(x) &= \phi(1-x) - (1+x)\,\phi'(1-x)
+2x\,\Phi(x-1) + x^2\phi(x-1) \\
&= 2\phi(1-x) + 2x\,\Phi(x-1).
\end{align*}
For $x>0$ we define $g(x)=f(\sqrt{x})$ such that
\begin{align*}
g'(x) = \frac{1}{\sqrt{x}}\cdot \phi(\sqrt{x}-1) + \Phi(\sqrt{x}-1).
\end{align*}
Using
\begin{align*}
\frac{1}{x+1} \leq \frac{ \sqrt{4+x^2}-x }{2}
\end{align*}
for $x\geq0$ and \cite{birnbaum}, we have
\begin{align*}
\frac{1}{x+1}\cdot\phi(x)+\Phi(x)
\leq \frac{ \sqrt{4+x^2}-x }{2}\cdot\phi(x)+\Phi(x) \leq 1
\end{align*}
for $x\geq0$. This yields
\begin{align*}
g'(x)\leq 1
\end{align*}
for $x\geq1$. For $x_1\geq x_2\geq1$ we hence get
\begin{align*}
\E{ \abs{ \left(\max(1,\sqrt{x_1}+Z)\right)^2
-\left(\max(1,\sqrt{x_2}+Z)\right)^2
} }
= g(x_1)-g(x_2) \leq x_1-x_2,
\end{align*}
which shows the first claim.
The Cauchy-Schwarz inequality implies
\begin{align*}
&\E{ \abs{ \left(\max(1,\sqrt{x}+Z)\right)^2
-\left(\sqrt{x}+Z\right)^2
}^p } \\
&\qquad= \E{ \abs{ 1-\left(\sqrt{x}+Z\right)^2 }^p
\cdot \ind{\set{\sqrt{x}+Z<1}} } \\
&\qquad\leq \sqrt{ \E{ \abs{ 1-\left(\sqrt{x}+Z\right)^2 }^{2p} } }
\cdot \sqrt{ \ensuremath{ \mathrm{P} }(\sqrt{x}+Z<1) }.
\end{align*}
Hence we get
\begin{align*}
&\left( \E{ \abs{ \left(\max(1,\sqrt{x}+Z)\right)^2
-\left(\sqrt{x}+Z\right)^2
}^p } \right)^{1/p} \\
&\qquad\leq \left( 1+2x+2\cdot \left(\E{(Z^2)^{2p}}\right)^{1/(2p)}
\right) \cdot \left(\ensuremath{ \mathrm{P} }(Z>\sqrt{x}-1)\right)^{1/(2p)} \\
&\qquad\preceq x\cdot \left(\ensuremath{ \mathrm{P} }(Z>\sqrt{x}-1)\right)^{1/(2p)}
\end{align*}
for $x\geq 1$. By using a standard tail estimate for the standard
normal distribution, see, e.g., \cite[p.~31]{revuz-yor}, we get the second claim.
\end{proof}
\begin{rem}
The upper bound in the second statement in Lemma~\ref{lem:lipschitz-normal}
can be considerably improved due to the exponential decay of the density of the
standard normal distribution. However, $1/\sqrt{x}$ suffices for
our purposes.
\end{rem}
\begin{proof}[Proof of Lemma~\ref{lem:lipschitz-part}]
First, note that
\begin{align}\label{eq:normal-scaling1}
h(x,t,w) = t\cdot \left(\sqrt{x/t}+{w}/{\sqrt{t}}
\right)^2
\end{align}
and
\begin{align}\label{eq:normal-scaling2}
\ensuremath{ \tilde{h} } (x,t,w) = t\cdot \left(\max\left(
1,\sqrt{\max\left(1,x/t\right)}+{w}/{\sqrt{t}}
\right)\right)^2.
\end{align}
Using \eqref{eq:normal-scaling2} and the first part of Lemma~\ref{lem:lipschitz-normal}
we obtain
\begin{align*}
\E{ \abs{ \ensuremath{ \tilde{h} } (x_1,t,W_t)- \ensuremath{ \tilde{h} } (x_2,t,W_t) } }
\leq t\cdot \abs{\max\left(1,x_1/t\right)-\max\left(1,x_2/t\right)}
\leq \abs{x_1-x_2}.
\end{align*}
Moreover, using \eqref{eq:normal-scaling1}, \eqref{eq:normal-scaling2}, and
the second part of Lemma~\ref{lem:lipschitz-normal} we obtain
\begin{align*}
\left(\E{ \abs{ h(x,t,W_t)- \ensuremath{ \tilde{h} } (x,t,W_t)}^p }\right)^{1/p}
\leq t\cdot C\cdot\frac{1}{\sqrt{x/t}}
\preceq \ensuremath{ \Delta_\mathrm{loc} } (x,t)
\end{align*}
for $x\geq t$. Finally, we have
\begin{align*}
&\left( \E{ \abs{h(x,t,W_t)- \ensuremath{ \tilde{h} } (x,t,W_t)}^p } \right)^{1/p} \\
&\qquad\leq \left( \E{ \abs{ h(x,t,W_t)}^p } \right)^{1/p}
+ \left( \E{ \abs{ \ensuremath{ \tilde{h} } (x,t,W_t)}^p }\right)^{1/p}
\preceq t = \ensuremath{ \Delta_\mathrm{loc} } (x,t)
\end{align*}
for $x\leq t$.
\end{proof}
\section*{Acknowledgement}
We thank Thomas M\"uller-Gronbach and Klaus Ritter for valuable discussions
and comments.
\bibliographystyle{plain}
\renewcommand*{\bibname}{References}
\section{Introduction}
The theoretical study of gas dynamics is mostly based on the analytical and numerical solutions of the Euler and Navier-Stokes (NS) equations.
The NS equations are derived from the physical conservation laws, supplemented by additional constitutive relationships
between stress and strain, and between heat flux and temperature gradient.
But the scale for the validity of the NS equations has never been clearly defined, even though it is always referred to as the hydrodynamic one.
Despite the wide applications of the NS equations, the boundary of their validity remains unclear.
For a hypersonic flow around a flying vehicle, different flow physics may emerge in different regions, such as the highly non-equilibrium shock layer,
the low-density trailing edge, and the wake turbulence.
Fig. \ref{fig:vehicle} presents the local Knudsen number around a flying vehicle at Mach number $4$ and Reynolds number $59,373$.
As shown in this figure, the local Knudsen number covers a wide range of values, spanning five orders of magnitude.
It seems that a single-scale governing equation can hardly be applied efficiently to all flow regimes.
On the other hand, the Boltzmann equation is derived on a well-defined modeling scale, namely the particle mean free path and the particle
mean traveling time between collisions \cite{chapman}. This is also the finest resolution of the Boltzmann equation.
Only on such a modeling scale can the particle transport and collision be formulated separately.
On the kinetic scale, the particle distribution is modeled as a field, and the Boltzmann equation
becomes a statistical modeling equation. The Boltzmann equation can be solved numerically through the Direct Simulation Monte Carlo (DSMC) method \cite{bird}
or a direct Boltzmann solver \cite{aristov}, with the numerical resolution on the same scale.
Since the Boltzmann equation has a much finer resolution than the hydrodynamic one,
a coarse-graining process has to be used in order to derive the NS equations.
One of the most successful theoretical studies of the Boltzmann equation is the Chapman-Enskog expansion,
where the NS equations are obtained through a proper stretching of the space and time scales.
It is fortunate that, due to the separation of scales between the NS and Boltzmann equations, both equations can be confidently used on their respective scales.
Even though the NS equations can be correctly derived from the Boltzmann equation, there are tremendous difficulties
in deriving other equations between the kinetic and hydrodynamic scales,
which span the whole non-equilibrium flow regime.
The difficulties are associated with the following questions.
Firstly, how can a continuously varying scale between the kinetic and hydrodynamic ones be defined for the modeling and the derivation of the equations?
Secondly, what kinds of flow variables can be used to describe the flow motion between these two limits?
Thirdly, in the transition region there is no clear scale separation, and the conventional mathematical tools may not be applicable.
For the NS equations, there are only five flow variables,
namely mass, momentum, and energy, to describe the dynamics \cite{landau}.
However, for the Boltzmann equation, there is theoretically an infinite number of degrees of freedom due to the
capturing of individual particle movement.
How many flow variables should be used between these two limiting cases to recover all possible non-equilibrium states is basically unclear.
All extended thermodynamic theories and irreversible thermodynamics focus on the study of flows close to equilibrium only.
In fact, we have little knowledge about the non-equilibrium physics between the hydrodynamic and kinetic scales.
\begin{figure}[!htb]
\centering
\includegraphics[bb=291 118 667 438, clip,width=8cm]{x38.eps}
\caption{Local Knudsen number around a flying vehicle at Mach number $4$ and Reynolds number $59,373$ calculated by D.W. Jiang using a unified gas kinetic scheme \cite{jiang}.}
\label{fig:vehicle}
\end{figure}
In reality, for gas dynamics the use of distinct governing equations, such as the NS and Boltzmann equations, is based
on their distinct scales, and these descriptions are incomplete.
With the variation of the modeling scale, there should exist a continuous spectrum of dynamics between these two limits.
A multiple scale equation is needed to capture the scale-dependent flow physics from the kinetic to the hydrodynamic scale.
While choosing an appropriate modeling scale is very difficult in a theoretical study,
computation provides us an opportunity to do direct modeling on a freely varying scale, namely the mesh size and time step.
In other words, the traditional derivation of governing equations can be replaced by a modeling procedure in a discretized space.
Therefore, the numerical algorithm itself provides the governing equation for the description of gas dynamics.
Based on the direct modeling on the mesh size and time step, a unified gas-kinetic scheme (UGKS) has been developed for the flow description in all regimes \cite{xu2010,xu-book}.
The main purpose of this paper is to point out the way beyond the traditional numerical PDE methodology
towards the construction of the direct modeling method.
At the same time,
we are going to use the direct modeling to validate the NS equations through case studies.
This paradigm of modeling and computation together is useful in the study of multiple scale transport processes.
\section{Gas Dynamics Modeling}
\begin{figure}[!htb]
\centering
\includegraphics[bb=90 207 694 387, clip,width=13cm]{wave-particle.eps}
\caption{Modeling variations from the fluid element in hydrodynamic scale (left) to the particle representation in kinetic scale (right) through the non-equilibrium
regime with a variable degree of freedom (middle).}
\label{fig:wave-particle}
\end{figure}
Now consider a box with length scale $L$, for instance $L=0.01m$, which is supposed to hold different amounts of molecules, see Fig.{\ref{fig:wave-particle}}.
Under the standard atmospheric condition, the number density of molecules is $n= 2.687 \times 10^{25} m^{-3}$.
In the following mental experiments, we assume that the number density of the particles inside the box can be changed significantly to different levels.
Define the diameter of a molecule as $d$, which is on the order of $d=3.7\times 10^{-10}m$,
and the mean free path between the collisions of a molecule as $l$.
The relationship between $d$ and $l$ is $l = 1/(\sqrt{2}\,\pi n d^2)$, which gives $6.1 \times 10^{-8} m$ under the standard atmospheric condition.
The density of the gas inside the box is defined as
$\rho = nm$, where $m$ is the molecular mass.
The Knudsen number, defined as ${\mbox{Kn}} = l/L$, indicates the degree of rarefaction of the molecular distribution. The scales involved, from the molecular diameter to the
dimension of the box, span many orders of magnitude.
Let us assume a constant gas temperature $T$ and vary the number density inside the box.
In the kinetic limit, such as at a Knudsen number $Kn \simeq 1$, the molecules can move freely through the box from one end to the other,
and the interactions between the molecules and the walls are equally important.
As the Knudsen number is reduced by increasing the number of molecules inside the box,
such as at $Kn=0.1$, each particle may take $10$ collisions to move from one end of the box to the other.
At the same time, each particle can still move freely to anywhere inside the box.
There is full penetration among all molecules.
With the Boltzmann modeling, the flow physics under such a condition can be easily resolved using a mesh size on the order of
the particle mean free path.
In a Boltzmann solver, particle free transport and collision can be treated separately. If there is no bulk velocity for the molecules inside the box,
for such a system with $ 0.1 < Kn < 10 $ the information inside the box can propagate from one end to the other through molecular motion
with a speed $C_r \sim \sqrt{RT}$, where $R=k/m$ is the gas constant and $k$ is the Boltzmann constant, as shown in the right sub-figure of Fig.{\ref{fig:wave-particle}}.
Before we consider the system with a continuous reduction of the Knudsen number, let us go to the other limit, i.e., the hydrodynamic one.
Under such a situation, for instance at the standard atmospheric condition, there are on the order of $10^{19}$ particles inside the box.
In such a limit, the Knudsen number can reach an extremely small value, as low as $10^{-6}$.
If we still used the Boltzmann modeling to study the system, we would need a high-resolution calculation with the mesh size on the
order of $10^{-8}m$.
For such a system, due to the high molecular density and the small particle mean free path, the fluid element modeling can be used for an efficient study,
and the traditional hydrodynamic equations are accurate enough for the description of the flow structure on such a large scale, such as at the resolution
of a cell size $10^{-4} m$.
With the intensive collisions on the scales of $10^{-8} m$ in space and $10^{-10} s$ in time for the particle collision,
the exchange of momentum and energy will equalize the temperature and averaged velocity locally. Therefore, the local
equilibrium assumption can be achieved on the hydrodynamic scale of $10^{-4} m$.
With the separation of the hydrodynamic and kinetic scales,
the NS-scale modeling of such a system can use the fluid element concept, where the
molecules inside the box are separated into distinguishable elements with a gigantic number of molecules inside each unit.
Between the elements, there are pressure, viscous friction, and heat exchange,
but there is no mass or molecule penetration between the elements due to the scale separation, as reflected in the absence of a mass diffusion term.
The interactions between fluid elements are basically through waves, see the left sub-figure of Fig.{\ref{fig:wave-particle}}.
This is also the foundation for applying the equation of state to each isolated fluid element through classical thermodynamics.
In other words, in the continuum NS limit, the intensive particle collisions prevent the particles from freely penetrating
between elements. The energy exchange, such as the work done by the forces and the heat transfer, takes place
through the element boundary, such as the heat diffusion in Fourier's law.
In such a case, any information in the gas system will propagate through wave behavior, i.e., a process in which each fluid element pushes its neighbors.
This wave propagation process has the same speed as the molecular motion, $C_c \sim \sqrt{RT}$, in the continuum limit.
Only under the fluid element picture do the Lagrangian and Eulerian descriptions of gas dynamics exist.
The fluid element picture is sometimes associated with difficulties in coping with other requirements, such as the non-slip boundary condition.
Under such a condition, a fluid element would need to be stretched forever in the boundary layer, which cannot be true.
More importantly, for the NS equations there is no clear definition of the scale for the validity of the equations themselves, such as the
scale of the element on which the constitutive relationships can be faithfully applied in the gas dynamic equations.
Starting from the continuum limit, if the gas density inside the box is reduced,
the mean free path of the gas molecules will increase.
The assumption of isolated fluid elements will break down as the particle penetration effect emerges. With a further reduction of the gas density,
the fluid element assumption has to be abandoned due to the intensive particle exchange among neighboring fluid elements.
During this process, both the pressure interaction (waves through fluid elements) and the particle penetration (particle free transport)
will take effect, see the middle sub-figure of Fig.{\ref{fig:wave-particle}}.
In this regime, the information can propagate through both wave interaction and particle transport, which have the same speed $C_m \sim \sqrt{RT}$.
In terms of physical modeling, on such a scale it is difficult to give a complete description of the flow system using either the fluid element picture or the individual particle motion.
Unfortunately, all extended hydrodynamic equations or moment equations derived from the Boltzmann equation
are intrinsically based on the fluid
element assumption through macroscopic flow variables.
To reach a macroscopic level of description, a coarse-graining process has to be applied to the microscopic reality. During this process,
a certain amount of information gets lost and a corresponding uncertainty is added to the macroscopic description, such as the supplement of constitutive relationships.
With the reduction of the gas density,
the number of degrees of freedom needed for an accurate description of the flow system increases continuously from the hydrodynamic to the
kinetic level.
In other words, the construction of extended hydrodynamic equations with a fixed number of flow variables, such as the Burnett, super-Burnett, or moment equations,
cannot give a complete representation.
On the other hand, with a kinetic-scale resolution everywhere, the direct use of the Boltzmann equation will be very expensive,
since it enforces a mean-free-path resolution on the solution.
In fact, in the regime between the hydrodynamic and kinetic ones,
there is basically no valid governing equation yet with a varying number of degrees of freedom for the capturing of non-equilibrium effects.
Furthermore, no proper flow variables have been identified to give a valid description of such a system, such as a mathematical description
on a scale with multiple particle collisions.
Therefore, this regime is basically unexplored, even though we have two successful limiting governing equations, i.e., the NS and Boltzmann equations, on two distinct and separate modeling scales.
The inseparable or continuous variation of scales in the transition regime makes theoretical modeling difficult.
For example, how could we present a mathematical formulation on a scale of a few particle mean free paths?
The separate mathematical formulation of particle free transport and collision in the Boltzmann modeling cannot be
applied on such a coarse-graining scale with multiple collisions for each individual particle. Even though it is difficult to formulate a mathematical description on such a scale,
it can be done directly through the modeling of the gas evolution process on the mesh size scale. This is the basic idea of combining modeling and computation.
In order to give a full description of gas dynamics at different scales,
we have to construct valid governing equations for a continuous variation of flow physics.
The scale used for the modeling can be defined as the ratio of the mean free path over the cell size, the so-called cell Knudsen number $Kn_c = l/\Delta x$.
A purely theoretical study has difficulty in identifying a modeling scale.
Fortunately, for the computation we can adopt the mesh size as the modeling scale.
Based on the direct modeling on the mesh size scale,
a continuous description of flow physics has been obtained
in the unified gas-kinetic scheme (UGKS) \cite{xu2010,xu-book}, which covers the NS and Boltzmann physics in the limiting cases
and provides a valid solution in the transition regime \cite{liu}.
Without using the direct modeling methodology, it is difficult for any numerical PDE approach to properly connect the NS and Boltzmann solutions \cite{chen}.
As shown in the later sections, in the low Reynolds number case the numerical computation with a direct NS solver requires a very small time step.
This indicates that the NS modeling is not appropriate there,
since the particle penetration has been ignored in the fluid element assumption.
For the UGKS, a physical time step, which is independent of the Reynolds number, can be used uniformly in all flow regimes.
This is consistent with the above analysis, where the physical propagation speed is independent of the gas density.
The aim of this paper is to quantify the dynamic differences between the
UGKS and the NS modeling, and to point out the importance of adopting direct modeling for computation in order to solve multiple scale transport problems.
\section{Distinct governing equations and direct modeling scheme}
In order to present the ideas of the conventional CFD simulation and the direct modeling approach, we are going to
use the linear advection-diffusion equation and the kinetic Boltzmann BGK model for the flow description
on different scales. The extension of the scheme to the gas dynamic equations will be presented in the next section as well.
\subsection{Hydrodynamic equation and its connection to kinetic model equation}
The kinetic Boltzmann equation and the hydrodynamic Navier-Stokes equations are obtained based on different modeling scales.
In the kinetic mean free path scale, the BGK equation models the flow physics as \cite{bgk}
\begin{equation}\label{bgk}
\partial_t f+c\partial_xf =\frac{1}{\tau}(g-f),
\end{equation}
for the evolution of a gas distribution function $f$ with free transport (left) and collision term (right) effects.
In Eq.\eqref{bgk}, $f(c,x,t)$ is the velocity distribution function, $c$ is the microscopic particle velocity, $g$ is the equilibrium state, and
$\tau$ is the collision time.
In the hydrodynamic scale with the fluid element approximation, the linear advection-diffusion equation is
\begin{equation}\label{advection-diffusion}
u_t+au_x=\nu u_{xx},
\end{equation}
for the propagation of macroscopic variable $u$ with macroscopic velocity $a$, and diffusive mechanism with a constant viscosity coefficient $\nu$.
The macroscopic and microscopic quantities are related through
\begin{equation}
u(x,t)= \int_R f(c,x,t) dc.
\end{equation}
The equilibrium Maxwellian distribution $g$ is
\begin{equation}\label{maxwell}
g=u\frac{1}{\sqrt{\theta \pi}}\mathrm{e}^{-\frac{(c-a)^2}{\theta}}.
\end{equation}
Here $\theta$ corresponds to the temperature, which is related to the spread of the particle random velocity.
Integrating both sides of Eq.\eqref{bgk} over the particle velocity gives the macroscopic equation
$$\partial_t u+\partial_x F=0,$$
with the flux
$$F(x,t)=\int_R cf(c,x,t) dc.$$
In the continuum regime, with the underlying physical assumption that the variation of $f$ in the hydrodynamic scale is smooth enough
due to substantial particle collisions, the Chapman-Enskog method gives a solution
\begin{equation}
f^{(N)}(c,x,t)=\sum^N_{n=0}\tau^n f_n(c,x,t).
\end{equation}
The first order expansion becomes
\begin{equation}\label{CE}
f^{(1)}(c,x,t)=(u-\tau(c-a)\partial_xu)\frac{1}{\sqrt{\theta \pi}}
\mathrm{e}^{-\frac{(c-a)^2}{\theta}}.
\end{equation}
The corresponding macroscopic flux reads
\begin{equation}\label{flux-1st}
F^{(1)}=au-\frac{\theta \tau}{2}\partial_x u,
\end{equation}
which leads to the advection-diffusion equation
$$u_t+au_x=\frac{\theta \tau}{2} u_{xx}.$$
By comparing the coefficient with equation \eqref{advection-diffusion},
we have the relation
$$\nu=\frac{\theta \tau}{2}.$$
The expansion \eqref{CE} converges when $\tau$ is small, and Eq.\eqref{advection-diffusion} may not be consistent with Eq.\eqref{bgk} when $\tau$ gets large.
\subsection{The direct modeling scheme and the hydrodynamic equation solver}
We consider a discretization of the space-time $\Omega\times[0,T]$
with constant spatial cell size $\Delta x$ and time step $\Delta t$.
The basic numerical method is an explicit finite volume method.
The evolution equation for the macroscopic conservative variable is
\begin{equation}\label{Finite-volume-u}
U^{n+1}_i=U^{n}_i+\frac{\Delta t}{\Delta x}(F^{n}_{i-\frac12}-F^{n}_{i+\frac12}),
\end{equation}
where $F^{n}_{i+\frac12}$ is the time averaged numerical flux at the cell interface, which can be obtained
from the gas distribution function there,
\begin{equation}\label{flux-u}
F^{n}_{i+\frac12}=\frac{1}{\Delta t}\int_0^{\Delta t}\int_{-\infty}^{\infty}
cf(c,x_{i+\frac12},t)dc dt.
\end{equation}
The above Eq.(\ref{Finite-volume-u}) is the physical conservation law in a discretized space.
The physics to be captured depends on the modeling of the cell interface distribution function in Eq.(\ref{flux-u}).
As analyzed above, at the hydrodynamic scale the fluid element is used to model the flux, as in a solver based on Eq.(\ref{advection-diffusion}).
At the kinetic scale,
particle transport and collision take effect, and the solution of Eq.(\ref{bgk}) can be used.
In the direct modeling method, the physics to be simulated will depend on the scale of $\Delta x$ and $\Delta t$
in Eq.(\ref{Finite-volume-u})
with respect to the particle mean free path and collision time.
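A minimal sketch of the update \eqref{Finite-volume-u} on a periodic grid is given below; the function name \texttt{interface\_flux} is our own placeholder for whichever flux model (hydrodynamic or kinetic) is chosen.
\begin{verbatim}
import numpy as np

# Sketch: explicit conservative finite volume update, Eq.
# (Finite-volume-u), with periodic boundaries. interface_flux(U)[i]
# approximates the time-averaged flux F_{i+1/2}; the choice of this
# function encodes the modeling scale.
def step(U, dt, dx, interface_flux):
    F = interface_flux(U)
    return U + dt / dx * (np.roll(F, 1) - F)
\end{verbatim}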
For the hydrodynamic solver, we only need to update the above macroscopic conservative flow variable.
However, as the gas density decreases, in the transition flow regime updating the macroscopic flow variable alone is not sufficient to capture the
non-equilibrium behaviour, and more degrees of freedom are needed to follow the flow evolution.
The unified gas-kinetic scheme (UGKS) is based on the evolution of both
macroscopic variable Eq.(\ref{Finite-volume-u}) and the gas distribution function.
The evolution equation for the microscopic velocity distribution function is
\begin{equation}\label{Finite-volume-f}
f^{n+1}_i=\left(1+\frac{\Delta t}{2\tau}\right)^{-1}
\left[f^n_i+\frac{\Delta t}{\Delta x}(\tilde{f}_{i-\frac12}-\tilde{f}_{i+\frac12})
+\frac{\Delta t}{2}\left(\frac{g^{n+1}}{\tau}+\frac{g^{n}-f^n}{\tau}\right)\right],
\end{equation}
where $\tilde{f}_{i+\frac12}$ is the time averaged numerical flux for distribution function,
which is calculated by
\begin{equation}\label{flux-f}
\tilde{f}_{i+\frac12}=\frac{1}{\Delta t}\int_0^{\Delta t}c f(c,x_{i+\frac12},t)dt.
\end{equation}
Here we solve the kinetic BGK Eq.(\ref{bgk}); for the full Boltzmann equation, a similar technique can be applied \cite{liu}.
For the macroscopic equation solver, Eq.(\ref{Finite-volume-u}) alone is used for the update of the macroscopic flow variable.
For the direct modeling method, the UGKS uses both Eq.(\ref{Finite-volume-u}) and Eq.(\ref{Finite-volume-f}) for the update of the flow variable and the
gas distribution function.
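A minimal transcription of the update \eqref{Finite-volume-f} (our own sketch, not the authors' code) reads:
\begin{verbatim}
import numpy as np

# Sketch of Eq. (Finite-volume-f): the collision term is treated with
# the trapezoidal rule, using g^{n+1} built from the already-updated
# macroscopic variable. Arrays are indexed as (cell, velocity);
# flux_f[i] approximates the interface flux at i+1/2 (periodic grid).
def update_f(f, flux_f, g_old, g_new, dt, dx, tau):
    rhs = (f + dt / dx * (np.roll(flux_f, 1, axis=0) - flux_f)
             + 0.5 * dt * (g_new / tau + (g_old - f) / tau))
    return rhs / (1.0 + 0.5 * dt / tau)
\end{verbatim}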
The flux function for both schemes is based on the same integral solution of Eq.\eqref{bgk} at a cell interface,
\begin{equation}\label{solution}
f(c,x,t)=\frac{1}{\tau}\int_0^t g(c,x-cs,t-s)\mathrm{e}^{-\frac{s}{\tau}}ds+
\mathrm{e}^{-\frac{t}{\tau}}f_0(c,x-ct,0),
\end{equation}
where $f_0$ is the initial condition.
This is a multiple scale transport solution, which covers regimes from free molecular flow to the hydrodynamic limit.
The physics represented depends on the ratio of the time step $\Delta t$ to the particle collision time $\tau$.
In addition, the choice of the initial condition $f_0$ at the beginning of each time step determines the evolution mechanism,
i.e., a macroscopic flow solver or a multiple scale evolution model.
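The regime selection can be made concrete by evaluating the two weights that follow from Eq.\eqref{solution} for a slowly varying equilibrium; the ratios below are chosen purely for illustration:
\begin{verbatim}
import numpy as np

# Sketch: in Eq. (solution), the free-transport part carries the
# weight exp(-dt/tau), the accumulated equilibrium part 1-exp(-dt/tau).
for ratio in [0.1, 1.0, 10.0, 100.0]:    # dt/tau
    w = np.exp(-ratio)
    print(f"dt/tau={ratio:6.1f}  free transport: {w:.3e}  "
          f"equilibrium: {1 - w:.3e}")
\end{verbatim}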
For the linear advection-diffusion Eq.(\ref{advection-diffusion}),
the corresponding scheme is the gas-kinetic scheme (GKS) for the update of the macroscopic flow variable \cite{xu2001}, which is essentially an NS solver
in the continuum flow regime.
Here, the initial condition $f_0$ in the solution Eq.\eqref{solution} is constructed based on the Chapman-Enskog
expansion, such as Eq.(\ref{CE}). This assumption automatically projects the distribution function onto the fluid-element modeling,
where a small deviation from the
equilibrium state captures the diffusive effect.
For the direct modeling UGKS, the initial condition $f_0$ is known through the update of the gas distribution function in Eq.(\ref{Finite-volume-f}).
Therefore, the departure from the equilibrium depends on the scale, such as the ratio of $\Delta t/\tau$. Note that the cell size and time step are the modeling scales of UGKS.
The GKS is a direct macroscopic Eq.(\ref{advection-diffusion}) solver through the update of Eq.(\ref{Finite-volume-u}) alone.
The interface flux is based on solution Eq.\eqref{solution} with the adoption of the Chapman-Enskog expansion for its initial condition $f_0$.
The use of the Chapman-Enskog expansion makes GKS solve the advection-diffusion equation only.
For the UGKS, there is no such assumption about the form of the initial gas distribution function,
and the real scale-dependent distribution function is followed for its evolution.
The capability of capturing multiscale physics from UGKS is mainly due to the scale dependent evolution solution Eq.(\ref{solution})
for the cell interface flux evaluation, which depends on the ratio of the time step $\Delta t$ over the particle collision time $\tau$.
The capturing of different physics can be
easily understood from the solution Eq.(\ref{solution}).
In the kinetic regime, i.e., $\Delta t \leq \tau$, the particle free transport from $f_0$ in
Eq.(\ref{solution}) is the main contribution to the flux function.
For the scale with $\Delta t \geq \tau$, the collision will gradually take effect. In the hydrodynamic limit, i.e.,
$\Delta t \gg \tau$, the NS gas distribution from the equilibrium state integration in Eq.(\ref{solution}) will play a dominant role.
Therefore, the solution provided in Eq.(\ref{solution}) depends on the
ratio $\Delta t /\tau$ or $\Delta x / l$. In other words, with the cell size $\Delta x \gg l$ and time step $\Delta t \gg \tau$, the multiple particle
collision is included in the integral solution Eq.(\ref{solution}), which is beyond the binary collision model in the full Boltzmann collision term.
In the transition regime with a cell resolution of multiple mean free path,
the solution in Eq.(\ref{solution}) includes the effect of multiple collisions for the individual particle.
In terms of the flow modeling, the UGKS presents a direct modeling equation in all scales,
which is beyond the scale for the derivation of the Boltzmann equation.
Even though the GKS has the same evolution mechanism from particle free transport to hydrodynamic evolution, the use of the Chapman-Enskog expansion confines its applicability to
near-equilibrium flow at the macroscopic scale.
With a different approximation for the initial gas distribution function $f_0$,
the UGKS can give a valid solution in all regimes and use a time step with a fixed CFL number, which is independent of the Reynolds number.
However, for the GKS, like other explicit NS solvers, the solution is limited to the continuum flow regime and the time step is severely constrained in the low Reynolds number case.
\section{Dynamical differences in case studies}
\subsection{Linear advection-diffusion process and the corresponding multiple scale solution}
To quantify the dynamical differences between the two levels of modeling, we first solve Eq.\eqref{advection-diffusion} for the advection-diffusion solution
in the domain $x\in[-1,3]$ with the periodic boundary condition. The initial condition is set as
\begin{equation}\label{initial}
u_0(x)=4+\frac{8}{\pi}\sin\left(\frac{\pi}{2}x\right)+\frac{16}{3\pi}\sin\left(3\frac{\pi}{2}x\right).
\end{equation}
For the linear advection-diffusion equation, the analytic solution is given by
\begin{equation}\label{analytic}
u(x,t)=4+\frac{8}{\pi}\mathrm{e}^{-\frac{\pi^2}{4}\nu t}\sin\left(\frac{\pi}{2}(x-at)\right)+\frac{16}{3\pi}\mathrm{e}^{-\frac{9\pi^2}{4}\nu t}\sin\left(3\frac{\pi}{2}(x-at)\right).
\end{equation}
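For reference, the exact solution \eqref{analytic}, used below to assess the numerical results, can be evaluated as follows (our own helper script):
\begin{verbatim}
import numpy as np

# Sketch: exact solution Eq. (analytic) of u_t + a u_x = nu u_xx
# for the initial data Eq. (initial).
def u_exact(x, t, a, nu):
    return (4.0
        + 8/np.pi * np.exp(-np.pi**2/4 * nu*t) * np.sin(np.pi/2*(x - a*t))
        + 16/(3*np.pi) * np.exp(-9*np.pi**2/4 * nu*t)
          * np.sin(3*np.pi/2*(x - a*t)))

x = np.linspace(-1.0, 3.0, 101)
print(u_exact(x, 0.7, a=3.0, nu=0.5)[:3])   # sample values at t=0.7
\end{verbatim}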
In the high Reynolds number limit, the GKS (advection-diffusion) and UGKS (multiscale modeling) results
are identical in the hydrodynamic regime. The current study focuses mostly on the transition regime in the low Reynolds number limit, where both the
fluid element and particle penetration play important roles.
In the low Reynolds number regime, the stability condition for GKS, like that of many other advection-diffusion solvers, becomes restrictive:
the time step is limited by
$\Delta t < (\Delta x)^2/(2\nu) $. For the UGKS, by contrast, the only necessary stability condition comes from the particle velocity range used to discretize the velocity space.
In other words, for the UGKS, the time step is determined by the CFL condition only,
\begin{equation}\label{stability-ugks}
\Delta t \le \frac{\Delta x}{|a|+3\sqrt{\theta}}.
\end{equation}
The stability conditions for GKS and UGKS are shown in Fig.\ \ref{stability} with $a=3$ and $\theta =1.0$ for different cell Reynolds numbers, $Re_{x} = a \Delta x /(\theta \tau) $.
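The two time step limits can be compared directly; the short script below (our own illustration) reproduces the trend of Fig.\ \ref{stability}, using $\nu=\theta\tau/2$ and $\tau$ inferred from the cell Reynolds number:
\begin{verbatim}
import numpy as np

# Sketch: GKS diffusive limit dt < dx^2/(2*nu) versus the UGKS
# CFL-type limit dt <= dx/(|a| + 3*sqrt(theta)),
# with tau = a*dx/(theta*Re_x).
a, theta, dx = 3.0, 1.0, 0.04
for Re_x in [0.1, 0.5, 1.0, 3.0]:
    tau = a * dx / (theta * Re_x)
    nu = 0.5 * theta * tau
    print(f"Re_x={Re_x:4.1f}  dt_GKS={dx**2/(2*nu):.3e}  "
          f"dt_UGKS={dx/(abs(a) + 3*np.sqrt(theta)):.3e}")
\end{verbatim}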
\begin{figure}
\centering
\includegraphics[width=0.35\textwidth]{CFL.eps}
\caption{Maximum CFL number for GKS (NS) and UGKS with different Reynolds numbers.}
\label{stability}
\end{figure}
Under the stability condition, the solutions from the above two schemes behave differently with the variation of Reynolds numbers.
By setting the parameters $a=3$, $\theta =1$, $\Delta x=0.04$, we compare the solutions of GKS and UGKS with cell Reynolds number $\text{Re}_x=0.1, 0.5, 1.0, 3.0$.
In Fig. \ref{compare}, we plot the macroscopic quantity $u$ and the velocity distributions at $t=0.7$ and $x=2$.
In the high Reynolds number regime, the advection-diffusion and UGKS solutions are consistent, and the macroscopic description is a valid model.
When the Reynolds number decreases below $1$, the advection-diffusion solution deviates from the UGKS solution.
In particular, when $\text{Re}_x=0.1$, the distribution function corresponding to the advection-diffusion model, which is obtained from the Chapman-Enskog expansion,
can become negative in a certain particle velocity region.
The possible negative particle velocity distribution and the severe time step limitation indicate that
the advection-diffusion equation is not applicable for capturing flow physics in this regime.
For low Reynolds number flow, the time step used in UGKS is independent of Reynolds number, which is consistent with the physical reality.
\begin{figure}
\centering
\includegraphics[width=0.3\textwidth]{u-0-1.eps}
\includegraphics[width=0.3\textwidth]{f-0-1.eps}\\
\includegraphics[width=0.3\textwidth]{u-0-5.eps}
\includegraphics[width=0.3\textwidth]{f-0-5.eps}\\
\includegraphics[width=0.3\textwidth]{u-1.eps}
\includegraphics[width=0.3\textwidth]{f-1.eps}\\
\includegraphics[width=0.3\textwidth]{u-3.eps}
\includegraphics[width=0.3\textwidth]{f-3.eps}
\caption{Left column: comparison of the macroscopic quantity $u$; symbols are the numerical solutions, lines the exact solutions.
Right column: comparison of the velocity distribution function at $x=2$; lines are the exact solutions of the kinetic equation.
From top to bottom, the corresponding cell Reynolds numbers are ${Re}_x = a \Delta x /(\theta \tau) = 0.1, 0.5, 1.0 $, and $3.0$.
}
\label{compare}
\end{figure}
\subsection{The NS solution and the multiscale modeling solution}
\begin{figure}[!htb]
\centering
\includegraphics[width=8cm]{time-evolution.eps}
\caption{Multiscale flow evolution for a shear layer, where $t_i$ is the evolution time and $\tau$ is the particle collision time.
The corresponding computational domain is changing with $t_i$ with different cell size $\Delta x$ relative to the particle mean free path $l_{mfp}$.}
\label{fig:time-evolution}
\end{figure}
We present the simulation results of a shear layer by GKS (NS) and UGKS.
The GKS gives the Navier-Stokes solutions and the UGKS captures a multiscale physical solution.
The variable hard sphere (VHS) model of argon gas is used in the simulation, and the Knudsen number $Kn = l/L$ is fixed at $5.0\times 10^{-3}$.
The initial condition is given as
\begin{equation}\label{initial-condition}
(\rho,U,V,T)=
\begin{cases}
(1.0,0,1.0,1.0)\qquad &x\leq0,\\
(1.0,0,-1.0,0.5)\qquad &x>0.
\end{cases}
\end{equation}
The mean free path is $l_{mfp}=5.0\times 10^{-3}$ and
the physical mean collision time is $\tau=3.36\times 10^{-3}$.
Since we study a time-dependent multiple scale problem, the solutions to be obtained depend on
the output time, see Fig.\ \ref{fig:time-evolution}, where the computational domain, with a fixed number of grid points, grows with the domain of influence of the initial singular point.
As in the previous case, the time steps used for the GKS and UGKS are shown in Fig.\ \ref{fig:cfl}: the GKS or NS modeling has a severe time step limitation
in the low Reynolds number limit, while the UGKS uses a uniform CFL number.
We plot the density, velocity, temperature, heat flux,
as well as the velocity distribution functions at time
$t_1=4\times 10^{-3}$, $t_2=4\times 10^{-2}$, $t_3=0.4$, $t_4=4$, $t_5=40$, and $t_6=400$ with a changeable cell size in order to identify the shear solution in different scales.
For GKS, the cell number in x direction is $100$ for Fig. \ref{t3}-\ref{t1}, $400$ for Fig.\ref{t0}, $1000$ for Fig.\ref{t-1}, and $5000$ for Fig. \ref{t-2}.
For UGKS, the cell number in x direction is $100$ for Fig. \ref{t3}- \ref{t1}, $400$ for Fig. \ref{t0} - \ref{t-1}, and $800$ for Fig. \ref{t-2}.
The computation confirms that the time step for GKS is limited to be small when the cell Reynolds number is small, as shown in Fig. \ref{fig:cfl}.
The solution provided in UGKS is valid in all regimes, from the kinetic one, $t \simeq \tau$, to the hydrodynamic one, $t \gg \tau$.
Both GKS and UGKS solutions converge in the hydrodynamic regime.
Since a much larger cell size is used for the hydrodynamic solution in the case $t \gg \tau$ and $\Delta x \gg l_{mfp}$,
the discontinuity cannot be well resolved by either GKS or UGKS; a shock capturing approach is effectively used in both schemes.
In order to give a more accurate physical representation, the cell size used in UGKS should depend on the flow physics to be resolved. In the highly dissipative region, a small cell
size is used for the capturing of non-equilibrium dynamics, and in the smooth region a large cell size for the hydrodynamic solution is accurate enough for its evolution.
The evolution solutions in different scales clearly indicate the usefulness of the direct modeling UGKS in comparison with the single scale NS solution.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{cfl-shearlayer.eps}\\
\caption{Time step vs.\ cell size for GKS (NS) and UGKS.}\label{fig:cfl}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{t4-3-d.eps}{a}
\includegraphics[width=0.4\textwidth]{t4-3-vx.eps}{b}\\
\includegraphics[width=0.4\textwidth]{t4-3-vy.eps}{c}
\includegraphics[width=0.4\textwidth]{t4-3-t.eps}{d}\\
\includegraphics[width=0.4\textwidth]{t4-3-q.eps}{e}
\includegraphics[width=0.4\textwidth]{t4-3-pdf.eps}{f}
\caption{Results at $t=4\times 10^{-3}$ ($t/\tau=1.05$)
: a. density; b. x-velocity; c. y-velocity; d. temperature; e. x direction heat flux;
f. velocity distribution at $x=0.5$.
For GKS $\Delta x /l_{mfp}=0.1$, $\Delta t /\tau=1.1\times10^{-3}$,
and for UGKS $\Delta x/l_{mfp}=0.1$, $\Delta t/\tau=1\times 10^{-2}$.}\label{t3}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{t4-2-d.eps}{a}
\includegraphics[width=0.4\textwidth]{t4-2-vx.eps}{b}\\
\includegraphics[width=0.4\textwidth]{t4-2-vy.eps}{c}
\includegraphics[width=0.4\textwidth]{t4-2-t.eps}{d}\\
\includegraphics[width=0.4\textwidth]{t4-2-q.eps}{e}
\includegraphics[width=0.4\textwidth]{t4-2-pdf.eps}{f}
\caption{Results at $t=4\times 10^{-2}$ ($t/\tau=10.5$)
: a. density; b. x-velocity; c. y-velocity; d. temperature; e. x direction heat flux;
f. velocity distribution at $x=0.5$.
For GKS $\Delta x/l_{mfp}=0.4$, $\Delta t/\tau=1.6\times 10^{-2}$,
and for UGKS $\Delta x/l_{mfp}=0.4$, $\Delta t/\tau=4\times 10^{-2}$.}\label{t2}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{t4-1-d.eps}{a}
\includegraphics[width=0.4\textwidth]{t4-1-vx.eps}{b}\\
\includegraphics[width=0.4\textwidth]{t4-1-vy.eps}{c}
\includegraphics[width=0.4\textwidth]{t4-1-t.eps}{d}\\
\includegraphics[width=0.4\textwidth]{t4-1-q.eps}{e}
\includegraphics[width=0.4\textwidth]{t4-1-pdf.eps}{f}
\caption{Results at $t=4\times 10^{-1}$ ($t/\tau=104.5$)
: a. density; b. x-velocity; c. y-velocity; d. temperature; e. x direction heat flux;
f. velocity distribution at $x=0.5$.
For GKS $\Delta x/l_{mfp}=2$, $\Delta t/\tau=0.4$,
and for UGKS $\Delta x/l_{mfp}=2$, $\Delta t/\tau=0.2$.}\label{t1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{t4-d.eps}{a}
\includegraphics[width=0.4\textwidth]{t4-vx.eps}{b}\\
\includegraphics[width=0.4\textwidth]{t4-vy.eps}{c}
\includegraphics[width=0.4\textwidth]{t4-t.eps}{d}\\
\includegraphics[width=0.4\textwidth]{t4-q.eps}{e}
\includegraphics[width=0.4\textwidth]{t4-pdf.eps}{f}
\caption{Results at $t=4$ ($t/\tau=1.05\times10^3$)
: a. density; b. x-velocity; c. y-velocity; d. temperature; e. x direction heat flux;
f. velocity distribution at $x=0.5$.
For GKS $\Delta x/l_{mfp}=10$, $\Delta t/\tau=2$,
and for UGKS $\Delta x/l_{mfp}=10$, $\Delta t/\tau=1$.}\label{t0}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{t41-d.eps}{a}
\includegraphics[width=0.4\textwidth]{t41-vx.eps}{b}\\
\includegraphics[width=0.4\textwidth]{t41-vy.eps}{c}
\includegraphics[width=0.4\textwidth]{t41-t.eps}{d}\\
\includegraphics[width=0.4\textwidth]{t41-q.eps}{e}
\includegraphics[width=0.4\textwidth]{t41-pdf.eps}{f}
\caption{Results at $t=40$ ($t/\tau=1.05\times10^4$)
: a. density; b. x-velocity; c. y-velocity; d. temperature; e. x direction heat flux;
f. velocity distribution at $x=0.5$.
For GKS $\Delta x/l_{mfp}=40$, $\Delta t/\tau=7$,
and for UGKS $\Delta x/l_{mfp}=100$, $\Delta t/\tau=10$.}\label{t-1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{t42-d.eps}{a}
\includegraphics[width=0.4\textwidth]{t42-vx.eps}{b}\\
\includegraphics[width=0.4\textwidth]{t42-vy.eps}{c}
\includegraphics[width=0.4\textwidth]{t42-t.eps}{d}\\
\includegraphics[width=0.4\textwidth]{t42-q.eps}{e}
\includegraphics[width=0.4\textwidth]{t42-pdf.eps}{f}
\caption{Results at $t=400$ ($t/\tau=1.05\times10^5$)
: a. density; b. x-velocity; c. y-velocity; d. temperature; e. x direction heat flux;
f. velocity distribution at $x=0.5$.
For GKS $\Delta x/l_{mfp}=80$, $\Delta t/\tau=10$ (1250 symbols plotted),
and for UGKS $\Delta x/l_{mfp}=500$, $\Delta t/\tau=50$.}\label{t-2}
\end{figure}
\section{Conclusion}
The gas dynamics evolution has an intrinsic multiple scale nature, which depends on the modeling scale relative to the particle mean free path.
In this paper, we present two physical descriptions for gas evolution.
One is the macroscopic based fluid element approach, i.e., the NS equations, and the other is the multiscale modeling algorithm UGKS.
This study presents the limitation of the macroscopic level modeling due to its fluid element assumption.
In the low Reynolds number limit, the NS approach
imposes a severe time step constraint, $\Delta t < {(\Delta x)}^2 /(2 \nu)$, for capturing the flow evolution.
This time step limitation is purely an artificial one due to inappropriate macroscopic modeling for the microscopic scale physics.
In other words, the computational difficulty associated with the NS solution in the low Reynolds number case
stems from its physical inconsistency: the fluid element assumption is still used in NS modeling in cases where the particle penetration effect plays an important role
on scales smaller than the hydrodynamic one.
For a direct modeling method such as the UGKS, owing to its smooth transition between the fluid element and particle free penetration mechanisms
across scales, the time step used in the computation is independent of the Reynolds number, which is consistent
with the physical propagation speed in the different regimes.
This study indicates that the direct modeling and computation for gas dynamics can provide an indispensable tool for the capturing of multiscale gas evolution.
Numerical computation need not target the exact solution of a specific governing equation; instead, it can model the physical reality at the mesh-size scale
and construct the corresponding evolution model at that scale.
The UGKS provides both the equations and their evolution solution, going beyond the traditional principles of numerical PDEs.
The direct modeling methodology provides a new way for scientific computing, especially for the multiple scale transport, such as rarefied flow \cite{jiang}, radiative transfer \cite{sun}, phonon heat transfer \cite{guo},
and plasma physics \cite{liu-thesis}.
In the direct modeling scheme, there is no fixed scale and no fixed governing equation to be solved.
All well-developed principles of numerical PDEs, such as stability, consistency, and convergence, have to be reformulated under the direct modeling methodology.
\section*{Acknowledgement}
The current research is supported by the Hong Kong Research Grants Council (16207715, 16211014, 620813) and by NSFC grants 91330203 and 91530319.
\section{\texorpdfstring{$U(1)\times U(1)$}{U(1) x U(1)} invariant potentials and their vacua}\label{sec:U1xU1pot}
For two complex scalar fields, the most general $U(1)\times U(1)$ symmetric self-interaction potential has already been given in Ref.\ \cite{Witten}:
\begin{equation}\label{eq:pot}
V = \frac{\beta_1}{2} (|\phi_1|^2-1)^2 + \frac{\beta_2}{2} |\phi_2|^4 + \beta' |\phi_1|^2|\phi_2|^2 - \alpha |\phi_2|^2\,,
\end{equation}
containing four real parameters, $\beta_1$, $\beta_2$, $\beta'$ and $\alpha$.
In the rest of our paper we shall consider theories where the potential, $V$, is given by \eqref{eq:pot}; moreover, we require that $V>0$ as $|\phi_1|^2\,,|\phi_2|^2\to\infty$, which restricts the parameters to $\beta_1>0$, $\beta_2>0$ and $\beta'>-\sqrt{\beta_1\beta_2}$.
Two types of minima of the potential \eqref{eq:pot} shall be considered: a state in which only a single scalar field has a vacuum expectation value (VEV), referred to as the 1VEV case, and a 2VEV case in which both fields obtain a VEV.
The conditions for a 2VEV state are
\begin{equation}\label{eq:2VEVcond}
\alpha > \beta'\,,\quad \beta_1 \beta_2 > \alpha \beta'\,,
\end{equation}
from which $\beta_1 \beta_2 > {\beta'}^2$ follows. In this 2VEV case, the two vacuum expectation values of the scalar fields, $\eta_1\,,\eta_2$ satisfy
\begin{equation}\label{eq:2VEV}
\eta_1^2 = \frac{\beta_1\beta_2-\alpha \beta'}{\beta_1\beta_2-(\beta')^2}\,,\quad
\eta_2^2 = \frac{\beta_1(\alpha- \beta')}{\beta_1\beta_2-(\beta')^2}\,,
\end{equation}
and the previous conditions guarantee that $\eta_1^2\,,\eta_2^2>0$.
If at least one of the conditions in Eq.\ (\ref{eq:2VEVcond}) fails to hold, the system is in a 1VEV state, and the component having the non-zero VEV is as follows:
\begin{equation}\label{eq:VEVcond1}
\begin{tabular}{c| c| c}
& $\beta_1 \beta_2 > {\beta'}^2$ & $\beta_1 \beta_2 < {\beta'}^2$\\
\hline
upper & $\beta' > \alpha$ & $\sqrt{\beta_1 \beta_2} > \alpha$\\
\hline
lower & $\beta' < \alpha$ & $\sqrt{\beta_1 \beta_2} < \alpha$\\
\end{tabular}
\end{equation}
The classification in Eqs.\ (\ref{eq:2VEVcond}), (\ref{eq:VEVcond1}) is crucial. If $\beta_1\beta_2 > {\beta'}^2$, then $\alpha = \beta'$ is the
boundary between the upper component 1VEV and the 2VEV cases. If, on the contrary, $\beta_1\beta_2 < {\beta'}^2$, then $\alpha = \sqrt{\beta_1 \beta_2}$ is the
boundary between the upper component and lower component 1VEV cases; there the lower component obtains the VEV $\eta_2 = \sqrt{\alpha/\beta_2}$.
In deriving Eq.\ (\ref{eq:VEVcond1}), we have used that in order that the global minimum of the potential be at the field values $(1,0)$,
\begin{equation}\label{eq:cond1stab1}
V(\phi_1=0,\phi_2=\eta_2)=\frac{\beta_1}{2}-\frac{\alpha^2}{2\beta_2} > 0
\end{equation}
has to hold with $\eta_2^2=\alpha/\beta_2$.
Condition (\ref{eq:cond1stab1}) expresses that, of the two possible
local minima $(1,0)$ and $(0,\eta_2)$, the first one is the global minimum.
This can be assumed without loss of generality (because otherwise the second component would be the one obtaining a VEV,
and the two components could be interchanged).
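The classification above is straightforward to implement; the sketch below (with hypothetical parameter values) evaluates Eqs.\ \eqref{eq:2VEVcond}--\eqref{eq:2VEV} and the table \eqref{eq:VEVcond1}, and reports the type of vacuum:
\begin{verbatim}
import math

# Sketch: classify the vacuum of the potential Eq. (eq:pot); in the
# 2VEV case also return (eta1^2, eta2^2) from Eq. (eq:2VEV).
def classify(beta1, beta2, betap, alpha):
    if alpha > betap and beta1 * beta2 > alpha * betap:
        det = beta1 * beta2 - betap**2
        return ("2VEV",
                (beta1 * beta2 - alpha * betap) / det,
                beta1 * (alpha - betap) / det)
    boundary = betap if beta1 * beta2 > betap**2 \
               else math.sqrt(beta1 * beta2)
    return ("1VEV upper" if boundary > alpha else "1VEV lower",)

print(classify(2.0, 2.0, 1.0, 1.5))   # hypothetical 2VEV point
\end{verbatim}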
\section{Global vortices with a single VEV }\label{sec:1VEVglob}
Let us start by considering the two component scalar theory with interaction potential \eqref{eq:pot}, admitting a global
U(1)$\times$U(1) symmetry, defined by the Lagrangian
\begin{equation}
\label{eq:globlag}
\mathcal{L}_{\rm glob} = \partial_\mu \Phi^\dagger \partial^\mu \Phi - V(\Phi^\dagger, \Phi)\,,
\end{equation}
where $\Phi=(\phi_1,\phi_2)^T$ and $\Phi^\dagger=(\phi_1^*,\phi_2^*)$, the potential, $V$, is given by Eq.\ \eqref{eq:pot}.
We shall now consider global vortex solutions of the theory \eqref{eq:globlag} with rotational symmetry in the plane,
with the following (standard) Ansatz for the scalars
\begin{equation}
\label{eq:globAns}
\phi_1 = f_1(r) \mathrm{e}^{i n\vartheta}\,,\quad \phi_2 = f_2(r) \mathrm{e}^{i m \vartheta}\,,
\end{equation}
where $n$ and $m$ are integers and $(r,\vartheta)$ are the polar coordinates in the plane.
The radial equations read in this case
\begin{equation}
\label{eq:globradeq}
\begin{aligned}
\frac{1}{r}(r f_1')' &= f_1 \left[ \frac{n^2}{r^2} + \beta_1 (f_1^2-1) + \beta' f_2^2 \right]\,,\\
\frac{1}{r}(r f_2')' &= f_2 \left[ \frac{m^2}{r^2} + \beta_2 f_2^2 - \alpha + \beta' f_1^2 \right]\,,
\end{aligned}
\end{equation}
and with the Ansatz (\ref{eq:globAns}), the energy density from the Lagrangian in Eq.\ (\ref{eq:globlag}) is
\begin{equation}
\label{eq:globerg0}
\mathcal{E} = (f_1')^2 + (f_2')^2 + \frac{n^2}{r^2}f_1^2 + \frac{m^2}{r^2}f_2^2 + V\,.
\end{equation}
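Equations \eqref{eq:globradeq} constitute a two-point boundary value problem on the half line. As a minimal illustration (our own sketch using a collocation solver, not necessarily the numerical method used in this paper), they can be solved on a truncated domain; the parameter values happen to match those of Fig.\ \ref{fig:globcc} below:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_bvp

# Sketch of Eq. (globradeq) with m=0 as a first-order system
# y = (f1, f1', f2, f2'); boundary conditions f1(0)=0, f2'(0)=0,
# f1(R)=1, f2(R)=0 at a truncation radius R.
beta1, beta2, betap, alpha, n = 1.0, 2.0, 2.0, 1.24, 1

def rhs(r, y):
    f1, df1, f2, df2 = y
    return np.vstack([
        df1,
        -df1/r + f1*(n**2/r**2 + beta1*(f1**2 - 1) + betap*f2**2),
        df2,
        -df2/r + f2*(beta2*f2**2 - alpha + betap*f1**2),
    ])

def bc(ya, yb):
    return np.array([ya[0], ya[3], yb[0] - 1.0, yb[2]])

R = 30.0
r = np.linspace(1e-3, R, 400)
y0 = np.vstack([np.tanh(r/2), 0.5/np.cosh(r/2)**2,
                0.5*np.exp(-r/2), -0.25*np.exp(-r/2)])
sol = solve_bvp(rhs, bc, r, y0, max_nodes=20000)
print(sol.status, sol.y[2, 0])  # 0 = converged; f2(0) sizes the condensate
\end{verbatim}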
In the following we shall focus on vortices with $m=0$, as they are expected to give solutions of ``lowest'' energy.
In the present 1VEV case, embedded global ANO-type vortices, $(\phi_1,\phi_2)=(\phi_1^{(n)},0)$, automatically satisfy
Eqs.\ \eqref{eq:globradeq}. As is known, the total energy of global vortices diverges:
\begin{equation}
\label{eq:globerg}
E(R) = 2\pi \int_0^R \mathcal{E} r\mathrm{d} r \sim E(R_{\rm core}) + 2\pi n^2 \log\left( \frac{R}{R_{\rm core}} \right)\,,
\end{equation}
where $R_{\rm core}$ is an arbitrary (core) radius, outside of which all fields can be replaced with their asymptotic form. The energy of the vortices diverges logarithmically with the sample size.
As a non-zero $\phi_2$ lowers the potential energy in the vortex core,
it is natural to expect that Eqs.\ \eqref{eq:globradeq} may also admit vortex solutions with a non-trivial $\phi_2$.
A simple method to search for such nontrivial two-component vortices is to look for the instability of the embedded one.
This can be done by linearising Eqs.\ \eqref{eq:globradeq} around an embedded vortex in the small parameter $\epsilon = f_2(0)$ as follows:
\begin{equation}
\label{eq:bifur-global-expan}
\begin{aligned}
\alpha &=\alpha_{\rm b} + \epsilon^2 \alpha_2 + \dots\,,\\
f_1 &= f_1^{(0)} + \epsilon^2 f_1^{(2)} + \dots\,,\\
f_2 &= \epsilon f_2^{(1)} + \dots\,,
\end{aligned}
\end{equation}
and $f_2^{(1)}$ satisfies the following linear Schr\"odinger-type equation, with a ``potential'' determined by the embedded vortex, $f_1^{(0)}$:
\begin{equation}
\label{eq:bifur-global}
-\frac{1}{r}(r f_2^{(1)}{}')' + \beta' f_1^{(0)2}f_2^{(1)} = \alpha_{\rm b}f_2^{(1)}\,.
\end{equation}
In order to obtain a localized linearized solution, one has to impose $f_2^{(1)}\to0$ for $r\to\infty$.
Then Eq.\ \eqref{eq:bifur-global} can be interpreted as an eigenvalue problem for the ``energy'' $\alpha_{\rm b}$ as a function of the parameter $\beta'$.
For the parameter range $\alpha_{\rm b} < \alpha < \beta'$, embedded global vortices are unstable, and as $\alpha \to\alpha_{\rm b}$ they bifurcate with a new family of solutions with a nontrivial $f_2$, to which we shall refer as condensate core (CC) vortices. A numerical solution for a global CC vortex is depicted in Fig.\ \ref{fig:globcc}. Although the energy of a CC vortex also diverges logarithmically, just as for an embedded one, for a fixed sample size $R$, the energy difference between the two types can be computed.
It turns out that CC vortices have lower energy than the embedded ones, and in some cases the energy difference is remarkably
large (Table \ref{tab:globcc}).
Remarkably good approximate solutions of Eq.\ \eqref{eq:bifur-global} are known for both large and small values of $\beta'$. When $\beta'\gg\beta_1$, the lowest eigenfunction is concentrated close to the origin, and a good approximation of the potential term is therefore only needed there. As noted in Ref.\ \cite{StringsSprings}, the potential is harmonic to leading order, yielding
a qualitatively good approximation of Eq.\ \eqref{eq:bifur-global}.
The harmonic approximation can be substantially improved by taking into account the Taylor expansion of $f_1^2$ up to the $r^6$ order via perturbation theory \cite{Catelani}, yielding
\begin{equation}
\label{eq:alphablargebetap}
\alpha_{\rm b} \approx 2\sqrt{\beta'} - \frac{1}{2} + \frac{5+16 c_0^2}{32 c_0} {\beta'}^{-1/4}\,,\quad \left(\beta' > \frac{\beta_1}{3}\right)\,,
\end{equation}
where $c_0=f_1'(0)$. For small $\beta'$, a matching procedure at the boundary of the vortex core yields the eigenvalue \cite{Catelani}
\begin{equation}
\label{eq:alphabsmallbetap}
\alpha_{\rm b} \approx \beta' - \frac{4}{\beta'} \mathrm{e}^{-2\gamma_{\rm E}} c_0^2 \exp\left[ {-\frac{2}{\sqrt{\beta'}} \arctan\frac{2 c_0^2}{\sqrt{\beta'}}}\right]\,,\quad \left(\beta' < \frac{\beta_1}{3}\right)
\end{equation}
where $\gamma_{\rm E}\approx 0.5772$ is the Euler-Mascheroni constant. A similar result has been obtained in Ref.\ \cite{StringsSprings} for the case of gauged vortices, based on approximate eigenvalues of shallow potentials in 2D \cite{Landau3}.
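The approximations \eqref{eq:alphablargebetap}--\eqref{eq:alphabsmallbetap} can be checked against a direct numerical solution of Eq.\ \eqref{eq:bifur-global}; the sketch below is our own illustration, with the embedded-vortex profile $f_1^{(0)}$ replaced by a hypothetical $\tanh$ stand-in (a real computation would use the numerical profile and its slope $c_0=f_1'(0)$):
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Sketch: with chi = sqrt(r)*f2, the s-wave form of Eq. (bifur-global)
# becomes -chi'' + [beta'*f1^2 - 1/(4 r^2)] chi = alpha_b chi, a
# symmetric tridiagonal eigenproblem after central differencing.
betap = 2.0
N, R = 4000, 40.0
h = R / N
r = (np.arange(1, N + 1) - 0.5) * h
f1 = np.tanh(0.58 * r)                  # hypothetical stand-in profile

diag = 2.0 / h**2 + betap * f1**2 - 0.25 / r**2
off = -np.ones(N - 1) / h**2
alpha_b = eigh_tridiagonal(diag, off, select='i',
                           select_range=(0, 0))[0][0]

c0 = 0.58                               # stand-in for f1'(0)
alpha_approx = 2*np.sqrt(betap) - 0.5 + (5 + 16*c0**2)/(32*c0)*betap**-0.25
print(alpha_b, alpha_approx)   # numerical vs. large-beta' approximation
\end{verbatim}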
In Ref.\ \cite{Goodband}, it has been demonstrated numerically, that global vortices for $n>1$ are unstable against splitting into vortices of lower winding. This is in agreement with the known {\sl repulsive} interaction between global vortices at large separations
\cite{Pismen}.
For CC vortices, the leading order asymptotic behaviour is unchanged, therefore at large separation there should be a repulsive interaction between them. On the other hand, for CC vortices close to each other, the nonzero second component also contributes to the inter-vortex force. We have performed a stability analysis with the methods of Ref.\ \cite{Goodband}.
We have found that for parameter values of $\alpha$ away from the bifurcation $\alpha\gg\alpha_{\rm b}$, $n=2,3$ CC vortices are stable at the linear level.
In the case of the embedded vortex, for $n=2$, there is an energy lowering perturbation (an eigenfunction of the perturbation operator with a negative eigenvalue) in the partial wave channel $\ell=2$, and for $n=3$ in $\ell=2,3,4$. For the CC vortex, sufficiently far from the bifurcation, these eigenvalues become positive. We denote by $\alpha_{\rm s}$ the value of $\alpha$ where all of the eigenvalues become positive. See Tab.\ \ref{tab:globstab} for numerical data.
See also Sec.\ \ref{sec:linpert} and Appendix \ref{app:pertdetails} for details of the method.
It is remarkable that the character of the inter-vortex force changes in the two-component theory, from attractive at small separations to repulsive at large ones. This is analogous to the behaviour of vortices in type 1.5 superconductors.
\begin{figure}
\centering
\noindent\hfil\includegraphics[scale=.5,angle=-90]{globcc}
\caption{A global CC vortex for $\beta_1=1$, $\beta_2=2$, $\beta'=2$ and $\alpha=1.24$.}
\label{fig:globcc}
\end{figure}
\begin{table}
\centering
\begin{tabular}{|c|c|c||c|}
\hline
$\beta_2$ & $\beta'$ & $\alpha$ & $E_{\rm e} - E_{\rm cc}$ \\
\hline\hline
1 & 1 & 0.94 & 2.161 \\
2 & 2 & 1.24 & $8.795 \times 10^{-3}$ \\
50 & 10 & 6.2 & 0.8864 \\
\hline
\end{tabular}
\caption{Energy difference between embedded and condensate core global vortices, $\beta_1=1$.}
\label{tab:globcc}
\end{table}
\begin{table}
\centering
\begin{tabular}{|c||c|c|}
\hline
$n$ & $\alpha_{\rm b}$ & $\alpha_{\rm s}$ \\
\hline\hline
1 & 1.2052 & --- \\
2 & 0.60143 & 1.3208 \\
3 & 0.34771 & 1.3454 \\
\hline
\end{tabular}
\caption{Stabilisation of global CC vortices: for $\alpha_{\rm s} < \alpha < \beta'$, no negative eigenvalues were found. Here $\beta_1=1$, $\beta_2=\beta'=2$.}
\label{tab:globstab}
\end{table}
\section{Twisted vortices in Extended Abelian Scalar models}\label{sec:theory}
The Lagrangian of the two-component extended Abelian Higgs (EAH) model (the relativistic version of the two-component Ginzburg--Landau (TCGL) theory) is
\begin{equation}
\label{eq:Lag}
{\mathcal{L}}_{\rm loc} = \frac{1}{e^2}\left\{-\frac{1}{4} F_{\mu\nu}F^{\mu\nu} + (D_\mu\Phi)^\dagger (D^\mu\Phi) - V(\Phi,\Phi^\dagger) \right\}\,,
\end{equation}
where $D_\mu\phi_a = (\partial_\mu - i e_a A_\mu)\phi_a$ is the standard gauge covariant derivative of the scalars; for later use we assume general couplings, $(e_1\,,e_2)$, of $\Phi=(\phi_1, \phi_2)^T$ to the U(1) gauge field, and $V$ is defined by Eq.\ (\ref{eq:pot}).
The $U(1)$ gauge symmetry acts on the fields as
$\Phi \to \exp(i\chi)\Phi$, $A_\mu \to A_\mu + \partial_\mu \chi$, where $\chi=\chi(x)$ is the gauge function. The other $U(1)$ symmetry is global, and it acts on the fields as $\phi_1 \to \exp(-i\alpha)\phi_1$, $\phi_2 \to \exp(i\alpha)\phi_2$, where $\alpha$ is a constant.
The field equations obtained from the Lagrangian (\ref{eq:Lag}) read
\begin{equation}
\label{eq:EOM}
\begin{aligned}
\partial^\rho F_{\rho\mu}& =i\sum_a e_a\{(D_\mu\phi_a)^*\phi_a -\phi_a^* D_\mu\phi_a\}\,, \\
D_\rho D^\rho\Phi & = -\partial V(\Phi^\dagger,\Phi)/\partial \Phi^\dagger \,.\\
\end{aligned}
\end{equation}
The theory defined in Eq.\ (\ref{eq:Lag}) is a member of the family of semilocal models, i.e., gauge theories with additional
global symmetries \cite{semilocal}. A thoroughly studied case is the $SU(2)_{\text{global}}\times U(1)_{\text{local}}$ semilocal model \cite{semilocal}, corresponding to the limit $\theta_W\to \pi/2$ of the standard electroweak model, i.e., to the parameter choice
$\beta_1=\beta_2=\beta'=\alpha$.
Importantly, in the 1VEV case, solutions of the ordinary one-component Abelian Higgs model can be embedded in the theory, as $\phi_1=\phi_{AH}$,
$\phi_2=0$ and $A_\mu=A_{\mu,AH}$, where $\phi_{AH}, A_{\mu,AH}$ is a solution of the one-component model with $\beta=\beta_1$. In this way, we can consider embedded ANO vortices in the 1VEV two-component theory.
The conserved current corresponding to the global $U(1)$ symmetry of the theory (\ref{eq:Lag}) is given by
\begin{equation}
\label{eq:jmu}
j_\mu^{3} = -i ( \phi_1^* D_\mu \phi_1 - \phi_2^* D_\mu \phi_2 - \phi_1 (D_\mu\phi_1)^* + \phi_2 (D_\mu\phi_2)^*)\,,
\end{equation}
which agrees with the third isospin component of the global $SU(2)$ current of the semilocal theory \cite{FRV1, FRV2}.
The general stationary, cylindrically symmetric Ansatz introduces $z$-dependent phases for the scalars, and a suitably reduced Ansatz in the radial gauge can be written as
\begin{equation}
\label{eq:FRVAns}
\begin{aligned}
\phi_1(r,\vartheta,z)\, &= f_1(r) e^{i n\vartheta}\,, \\
\phi_2(r,\vartheta,z)\, &= f_2(r) e^{i m\vartheta}e^{i\omega z}\,, \\
\end{aligned} \quad \begin{aligned}
A_\vartheta(r,\vartheta,z) &= n a(r)\,,\\
A_3(r,\vartheta,z) &= \omega a_3(r)\,,
\end{aligned}
\end{equation}
with $A_0=A_r=0$; $\omega$ is real and shall be referred to as the twist parameter.
The Ansatz (\ref{eq:FRVAns}) describes cylindrically symmetric fields in the sense, that a translation along the $z$ direction can be compensated by the application of internal symmetries \cite{FM, FRV1, FRV2}. All twisted solutions, where the spacetime dependence of the relative phase is timelike, can be brought to the form of Eq.\ \eqref{eq:FRVAns} by a Lorentz boost.
With the Ansatz, Eq.\ (\ref{eq:FRVAns}), the field equations, Eq.\ (\ref{eq:EOM}) become
\begin{equation}
\label{eq:FRVprofE}
\begin{aligned}
\frac{1}{r}(r a_3')' &= 2 a_3 (e_1^2 f_1^2 + e_2^2 f_2^2) - 2 e_2 f_2^2,\\
r\left(\frac{a'}{r}\right)' &= 2 f_1^2 e_1 ( e_1 a - 1 ) + 2 f_2^2 e_2 (e_2 a - m / n ) \,,\\
\frac{1}{r}(r f_1')' &= f_1\left[ \frac{(1 - e_1 a)^2 n^2}{r^2} + e_1^2 \omega^2 a_3^2 + \beta_1 ( f_1^2 - 1 ) + \beta' f_2^2\right]\,,\\
\frac{1}{r}(r f_2')' &= f_2\left[ \frac{(e_2 n a-m)^2}{r^2} + \omega^2 (1-e_2 a_3)^2 + \beta_2 f_2^2 -\alpha + \beta' f_1^2 \right]\,.
\end{aligned}
\end{equation}
The boundary conditions for regular, 1VEV solutions of Eqs.\ (\ref{eq:FRVprofE}) imply that $f_1(r=0)=0$ and that, for $m=0$, $f_2(r=0)$ is a finite constant, while for $r\to\infty$ we impose $f_1,a\to 1$ and $f_2,a_3\to 0$.
In the 2VEV case, $f_{1,2} \to \eta_{1,2}$, where $\phi=(\eta_1,\eta_2)$ is a minimum of $V$.
In this latter case, twisted vortex solutions would have infinite energy (proportional to the sample volume), since
one cannot satisfy both $D_3\phi_1\to0$ and $D_3\phi_2\to0$ simultaneously as $r\to\infty$.
We start with the description of finite energy twisted vortex solutions of Eqs.\ \eqref{eq:FRVprofE}, therefore we impose $f_2\to 0$ for $(r\to\infty)$.
The energy density for the Ansatz \eqref{eq:FRVAns} is found to be
\begin{equation}
\label{eq:Edens}
\begin{aligned}
\mathcal{E} =& \frac{1}{2}\left[ \frac{n^2 (a')^2}{r^2} + \omega^2 (a_3')^2\right]
+ (f_1')^2 + (f_2')^2 \\
&+ \frac{n^2(1 - e_1 a)^2}{r^2} f_1^2 + \frac{(e_2 na - m)^2}{r^2} f_2^2 + \omega^2(e_1^2 a_3^2 f_1^2 + (1-e_2 a_3)^2 f_2^2) + V(f_1,f_2)\,.
\end{aligned}
\end{equation}
with $V(f_1,f_2)=\beta_1 (f_1^2-1)^2/2 + \beta_2f_2^4/2 + \beta' f_1^2f_2^2 - \alpha f_2^2$.
The total energy (per unit length), $E$, is given as the integral over the plane of $\mathcal{E}$,
\begin{equation}
\label{eq:toterg}
E = 2\pi\int_0^\infty r\mathrm{d} r \mathcal{E}\,.
\end{equation}
$E$ is a monotonically increasing function of the parameters $\beta_1$, $\beta_2$, $\beta'$, and of the twist, $\omega$, while it is a monotonically decreasing function of $\alpha$. This follows from the fact that if $\Phi, A_\mu$ is a static solution of the field equations, then, e.g.,
\begin{equation}
\label{eq:Ederivomega}
\frac{\partial E}{\partial \omega^2} = 2\pi \int r \mathrm{d} r \left[ \frac{1}{2}(a_3')^2 + (e_1^2 a_3^2 f_1^2 + (1-e_2 a_3)^2 f_2^2) \right] > 0\,.
\end{equation}
Curves depicting the total energy as a function of the twist for some solution families are shown in Fig.\ \ref{fig:Eomega}.
Plugging the Ansatz (\ref{eq:FRVAns}) into (\ref{eq:jmu}), the relevant current component is
\begin{equation}
\label{eq:j33}
j_3^{3} = 2 \omega a_3(e_1 f_1^2 - e_2 f_2^2) + 2 \omega f_2^2\,.
\end{equation}
The global current, $\mathcal{I}(\omega)$, is depicted on Fig.\ \ref{fig:j3omega}, where
the $SU(2)$ symmetric case, $\beta_{1,2}=\beta'=\alpha=2$ is compared to a less symmetric one, for $\beta_{1,2}=\alpha=2$, $\beta'=2.1$.
In the $SU(2)$ symmetric case, $\mathcal{I}(\omega)$ diverges for $\omega\to 0$ \cite{FRV1, FRV2}, and there is no
finite energy solution corresponding to $\omega=0$. As we shall demonstrate, in the general, nonsymmetric case finite energy vortex solutions do exist in the $\omega\to0$ limit.
The numerical solutions of Eqs.\ (\ref{eq:FRVprofE}) have been calculated using the shooting method with a fitting point \cite{NR}, which is also used for the solution of the linearised equations for the stability analysis. For higher winding number vortices, we also use a minimisation of the energy functional (\ref{eq:Edens}) directly, in a finite difference discretisation.
\subsection{Bifurcation with embedded ANO strings}\label{ssec:bifurcation}
It is by now well known that embedded ANO vortices are unstable to small perturbations of the $f_2$ variable \cite{hin1, hin2},
and that this instability corresponds to the aforementioned bifurcation \cite{FRV1, FRV2}. Close to the bifurcation, a systematic expansion of
the solution in a bifurcation parameter $\epsilon$ has been carried out in Ref.\ \cite{FL} in the $SU(2)$ symmetric case.
The analysis of Ref.\ \cite{FL} can be repeated in the present case with minimal modifications.
The systematic expansion of a twisted vortex near the bifurcation point
can then be written as:
\begin{equation}
\label{eq:bifureps}
\begin{aligned}
f_1 &= f_1^{(0)} +\epsilon^2 f_1^{(2)} + \ldots \\
f_2 &= \epsilon f_2^{(1)}+ \epsilon^2 f_2^{(2)}+\ldots \\
\end{aligned}\quad \begin{aligned}
a &= a^{(0)} + \epsilon^2 a^{(2)} + \ldots \\
a_3 &= \epsilon^2 a_3^{(2)} + \ldots \\
\omega &= \omega_{\rm b} + \epsilon^2 \omega_2\,\,\, + \ldots
\end{aligned}
\end{equation}
where $a^{(0)}\,,f_1^{(0)}$ denotes the ANO vortex, whose equations can be read off
from equations (\ref{eq:FRVprofE}) by putting $f_2=a_3=0$.\
For details, and the Taylor expanded equations, see \cite{FL}.
The leading order equation is
\begin{equation}\label{eq:bifur-f2}
(D_2^{(0)}+ \omega_{\rm b}^2)f_2^{(1)}
:= -\frac{1}{r}\left(r {f_2^{(1)}}'\right)' + \left[ \frac{(e_2 na^{(0)}-m)^2}{r^2}
+ \omega_{\rm b}^2-\alpha +\beta'(f_1^{(0)})^2\right] f_2^{(1)}=0\,.
\end{equation}
The expansion coefficients $\omega_i$ are dictated by the conditions for the cancellation of resonance terms. The procedure
yields $\omega_1=0$, thus
\begin{equation}
\epsilon = \sqrt{\frac{1}{\omega_2}(\omega-\omega_{\rm b})} + \dots\,.
\end{equation}
\paragraph{Energy difference}
Twisted vortices have lower energies than embedded ANO ones (see Subsection \ref{ssec:1VEV11} for numerical values), and in some cases this energy difference is remarkably large.
The explanation is that the core of an embedded vortex contains false vacuum, which, in the case of a twisted vortex, is filled with the second condensate, reducing the potential energy. This also has costs in the form of derivative and interaction terms. In those cases where $f_2 \ll 1$ [e.g., close to the bifurcation ($\omega \approx \omega_{\rm b}$)], this energy difference can be calculated approximately,
and performing a partial integration yields
\begin{equation}
\label{eq:echange}
E - E_{ANO} \approx 2\pi\int r \mathrm{d} r f_2 \left\{ -f_2'' - \frac{1}{r}f_2' + f_2 \left[ \frac{(e_2na-m)^2}{r^2} -\alpha + \beta' f_1^2 \right]\right\} = -\omega_{\rm b}^2 2\pi\int r\mathrm{d} r f_2^2\,.
\end{equation}
\subsection{Numerical solutions}\label{ssec:1VEV11}
Let us first consider the case of $e_1=e_2=1$. The $SU(2)$ symmetric case has been considered in Refs.\ \cite{vac-ach, hin1, hin2, semilocal}.
The range of solutions can be found by solving the bifurcation equation, Eq.\ (\ref{eq:bifur-f2}). In the $SU(2)$ symmetric
case, for $\beta_1 > 1$, an instability is found. In these cases, twisted vortices exist for $ 0 < \omega < \omega_{\rm b}$. For some parameter values,
$\omega_{\rm b}$ is shown in Table \ref{tab:bif_e21}.
In Fig.\ \ref{fig:Eomega}, the dependence of the vortex energy on the twist is displayed, and
the dependence of the global current $\mathcal{I}$ on the twist $\omega$ is depicted in Fig.\ \ref{fig:j3omega}. See also the $SU(2)$ symmetric case in Refs.\ \cite{FRV1, FRV2}.
Numerically we have found that twisted string solutions exist for $0<\omega<\omega_{\rm b}$, where the upper limit is a function of the parameters $\beta_1$, $\beta_2$, $\beta'$ and $\alpha$ of the potential and the flux $n$ of the vortex, similarly to the $SU(2)$ symmetric case \cite{FRV1, FRV2} (we have assumed $m=0$).
In the case of one charged and one neutral field, $e_2=0$, as seen from Eqs.\ (\ref{eq:FRVprofE}), $a_3=0$.
In both the field equations and
the energy, Eq.\ (\ref{eq:Edens}), the same profile functions and energy are obtained with the replacement $\omega \to 0$
and $\alpha \to \alpha -\omega^2$, with the global current $j^3_3 = 2 \omega f_2^2$.
Therefore, for $e_2=0$ twisted vortices can be considered as trivial transformations of zero twist ones.
[A similar argument applies to the case of the global theory (Sec. \ref{sec:1VEVglob}) as well.]
\begin{table}
\centering
\begin{tabular}{|c|c|c||c|}
\hline
$\beta_1$ & $\beta'$ & $\alpha$ & $\omega_{\rm b}$ \\
\hline\hline
1.25 & 1.25 & 1.25 & 0.13667 \\
2 & 2 & 2 & 0.32989 \\
2.5 & 2.5 & 2.5 & 0.42744 \\
1.25 & 1.255 & 1.25 & 0.12100 \\
2 & 2.1 & 2 & 0.19453 \\
2.5 & 2.6 & 2.5 & 0.33726 \\
\hline
\end{tabular}
\caption{The value of the twist at the bifurcation for $e_2=1$.}
\label{tab:bif_e21}
\end{table}
\begin{figure}
\centering
\subfigure[{}]{
\includegraphics[scale=.32,angle=-90]{Eomega1}
\label{fig:Eomega}
}
\subfigure[{}]{
\includegraphics[scale=.32,angle=-90]{j3omega}
\label{fig:j3omega}
}
\caption{The energy $E$ and the current, $\mathcal{I}$ as a function of the twist $\omega$}
\label{fig:Ej3omega}
\end{figure}
\section{Condensate core vortices}\label{sec:zerotwist}
The $\omega\to 0$ limit of twisted vortices is quite remarkable: as the energy is a monotonic function of the twist, with its maximum at the embedded vortices, $\omega=\omega_{\rm b}$,
the zero twist limit, i.e., condensate core, or coreless \cite{Catelani} vortices, yields minimum energy solutions, coexisting with embedded ANO vortices, with energies that are in some cases significantly lower.
If $\alpha < \beta'$, they are exponentially localised, $f_2 \sim F_2 r^{-1/2} \exp(-\sqrt{\beta'-\alpha}\,r)$, where $F_2$ is a constant determined by the global solution of the boundary value problem [i.e., by the numerical solution of the radial equations, Eq.\ (\ref{eq:FRVprofE})].
As minimum energy solutions, they are expected to be stable: $n=1$ ANO vortices in this theory are known to have one negative eigenvalue mode, the one corresponding to the bifurcation \cite{hin1, hin2, FRV1, FRV2}. For $n>1$, ANO vortices are also unstable for $\beta_1>1$ against decay into lower winding number ones; therefore, for CC vortices with $n>1$, a numerical investigation of the linearised equations is required.
In the case of ANO vortices, the instability of higher flux vortices for $\beta >1$ is a consequence of the repulsive interaction between unit flux ones. That the change in the stability occurs at $\beta=1$
follows from the asymptotics: for $\beta > 1$, the scalar field has a faster radial fall-off, $\sim F r^{-1/2} \exp(-\sqrt{2\beta}r)$, than the gauge field, $\sim A r^{1/2} \exp(-\sqrt{2}r)$, and the interaction is repulsive,
whereas for $\beta < 1$, the scalar field falls off more slowly, and the interaction is attractive. Here, for a wide range of parameters, the second scalar has the slowest radial fall-off, and thus
the interaction between two vortices can be attractive even if $\beta_1 > 1$.
Zero twist vortices may also exist for $\beta'=\alpha$. In the $SU(2)$ symmetric model, no such solutions exist for $\beta >1$, although a consistent asymptotic solution can be found. As $\omega\to 0$, vortices become diluted \cite{FRV1, FRV2}. In the $\beta=1$ case, there is a one parameter family of solutions with degenerate energy \cite{hin1, hin2}. In the non-symmetric case, we have found that if $\beta_1 \beta_2 \ne {\beta'}^2$, zero twist CC vortices still exist, with a power law asymptotic behaviour, $f_2 \sim F_2/r$, where $F_2$ is a constant.
See also Ref.\ \cite{GB} for the $U(1)\times U(1)\times\mathbb{Z}_2$ symmetric case. In the latter case, due to the high degree of symmetry of the potential, a domain structure also exists.
If $\beta_1 \beta_2 \ne (\beta')^2$, the CC vortices, continued into the range $\alpha > \beta'$ (2VEV), become the 2VEV vortices with winding in the upper component.
If $e_2\ne 0$, these are fractional flux vortices of Refs.\ \cite{BabaevF, BS}. We shall briefly return to the 2VEV case in Sec.\ \ref{sec:2VEV}.
In the $\beta_1\beta_2 ={ \beta'}^2$, $\alpha=\beta'$ case, there seems to be no limiting solution. In these cases, as the twist
$\omega$ decreases, the profile functions reach their asymptotic values farther from the origin. This way, the string expands and its
energy density becomes more dilute. In Ref.\ \cite{FRV1, FRV2}, this behaviour has
been described with a scaling argument in the $SU(2)$ symmetric case,
which can be generalized to the $\beta_1 \beta_2 = (\beta')^2$ case without major changes.
If $\beta_1\beta_2 = \alpha^2$ (the boundary between upper and lower component 1VEV), solutions with the upper and the lower component having a non-zero VEV coexist.
For the special case of $U(1)\times U(1)\times\mathbb{Z}_2$ symmetry, see Ref.\ \cite{GB}. The domain structure observed there
is a consequence of the high degree of symmetry of their potential.
The energy difference between embedded ANO and CC vortices can be calculated in a similar manner to that of ANO and twisted vortices. Close to the bifurcation, $\alpha\approx\alpha_{\rm b}$,
\begin{equation}
\label{eq:EdiffOm0}
E_{\rm ANO} - E \approx 2\pi (\alpha - \alpha_{\rm b}) \int r \mathrm{d} r f_2^2\,.
\end{equation}
According to Eq.\ (\ref{eq:EdiffOm0}), the energy of CC vortices is lower than that of embedded ANO vortices.
\paragraph{Condensate core vortices, 2 charged fields} Condensate core vortices, the zero twist limit of twisted vortices, were calculated for a number of parameter values. One such solution, with exponential radial localisation (i.e., $\beta' > \alpha$), is shown in Fig.\ \ref{fig:vortom0}. In Fig.\ \ref{fig:vortom0P}, on the other hand, a CC vortex with power-law localisation is shown.
The energies of condensate core vortices are collected in Table \ref{tab:erg}. As already mentioned, their energies are below that of the embedded ANO vortex with the same value of $\beta_1$.
For ANO vortices with $\beta>1$, $E_n/n$ assumes its minimum for $n=1$, rendering higher flux vortices unstable. Interestingly, for CC vortices this is not the case. The minimum of $E_n/n$ is
assumed at a finite value of $n$. A plot of $E_n$ vs.\ $n$ is shown in Fig.\ \ref{fig:En}.
\begin{figure}
\centering
\subfigure[$\beta_1=\beta_2=\alpha=2$ and $\beta'=2.1$]{
\includegraphics[scale=.32,angle=-90]{vortom0}
\label{fig:vortom0}
}
\subfigure[$\beta_1=\beta'=\alpha=2$ and $\beta_2=3$]{
\noindent\hfil\includegraphics[scale=.32,angle=-90]{vortom0b23}
\label{fig:vortom0P}
}
\caption{Zero twist solutions}
\end{figure}
\begin{table}
\begin{center}
\begin{tabular}{|c||c|c|c||c|}
\hline
$n$ & (a) & (b) & (c) & ANO\\
\hline \hline
1 & 1.152 & 1.008 & 0.78 & 1.157 \\
2 & 1.121 & 0.913 & 0.75 & 1.210 \\
3 & 1.107 & 0.882 & 0.72 & 1.239 \\
\hline
\end{tabular}
\end{center}
\caption{Energy per unit flux, $E_n/(2\pi n)$ of CC vortices for (a) $\beta_{1,2}=\alpha=2$, $\beta'=2.1$, (b) $\beta_1=2$, $\beta_2=8$, $\beta'=4.2$, $\alpha=4$
and (c) $\beta_1=2$, $\beta_2=3872$, $\beta'=87.4$, $\alpha=83$
compared to ANO $\beta=2$.}
\label{tab:erg}
\end{table}
\paragraph{Condensate core vortices: A charged and a neutral field}
To obtain the range of parameters where solutions exist, we need to solve the bifurcation equation, Eq.\ (\ref{eq:bifur-f2}), again. Results for some parameter values are displayed
in Table \ref{tab:bif_e20}. Here, as the twist $\omega$ is obtained with a trivial transformation, we have collected $\alpha_{\rm b}$.
We have also calculated full nonlinear solutions numerically. Their energy values are shown in Table \ref{tab:erg2}. We draw attention to the fact that $E_n/n$ is usually
a non-monotonic function of $n$, leading to stable higher flux vortices for $\beta_1 >1$. In these cases, embedded ANO vortices are unstable both against the formation of the condensate and against decay into unit flux vortices.
\begin{table}
\centering
\begin{tabular}{|c|c||c|}
\hline
$\beta_1$ & $\beta'$ & $\alpha_{\rm b}$ \\
\hline\hline
1.25 & 1.25 & 1.1235 \\
2 & 2 & 1.7610 \\
2.5 & 2.5 & 2.1791 \\
1.25 & 1.255 & 1.1272 \\
2 & 2.1 & 1.8309 \\
2 & 2.3 & 1.9669 \\
2 & 3.98372 & 2.9586 \\
2.5 & 2.6 & 2.2477 \\
\hline
\end{tabular}
\caption{The value of coupling $\alpha$ at the bifurcation for $e_2=0$.}\
\label{tab:bif_e20}
\end{table}
\begin{table}
\centering
\begin{tabular}{|c||c|c||c|}
\hline
$n$ & (a) & (b) & ANO \\
\hline\hline
1 & 1.152 & 1.113 & 1.157 \\
2 & 1.104 & 1.054 & 1.210 \\
3 & 1.102 & 1.011 & 1.239 \\
\hline
\end{tabular}
\caption{Energy per unit flux, $E_n/(2\pi n)$, of vortices with $e_2=0$ for (a) $\beta_1=2$, $\beta_2=3$, $\beta'=2.3$ and $\alpha=2.05$ and
(b) $\beta_1=2$, $\beta_2=9$, $\beta'=3.98372$, $\alpha=3.5507$.}
\label{tab:erg2}
\end{table}
\section{Linear perturbations and stability}\label{sec:linpert}
To assess the stability of the solutions obtained, a linear stability analysis has been performed, using the formalism of Ref.\ \cite{Goodband}
extended to the case of two components. The $SU(2)$ symmetric case has been considered in Refs.\ \cite{twistedinstab1, FL, twistedinstab2}.
Here, the Lagrangian of the theory, Eq.\ (\ref{eq:Lag}) is expanded to second order in small fluctuations of the fields, $\delta\phi_a$ and $\delta A_\mu$, and then, the resulting equations are solved with the help of a suitable form of partial wave expansion and Fourier transformation in $t,z$.
An important part of the procedure is the choice of gauge. The gauge condition is also perturbed,
in a way that removes first order derivatives from the first order equations \cite{Goodband}. The only drawback of this procedure is that the spectrum of the gauge fixing operator is also needed to distinguish physical modes from ghost ones, however, in our case, all ghost mode eigenvalues turn out to be positive, i.e., all unstable modes are physical.
The resulting equations, for a mode in partial wave channel $\ell$, and $z$ direction wave number $k$ can be written in the form
\begin{equation}
\label{eq:genpert}
\mathcal{M}_\ell(k) \Psi_\ell = \Omega^2 \Psi_\ell\,,
\end{equation}
where $\Omega$ is the frequency eigenvalue, $\mathcal{M}_\ell(k)$ a matrix differential operator, and $\Omega^2 <0$ corresponds to an instability.
Here $\Psi_\ell=(s_{1\ell}, s_{1,-\ell}^*, s_{2\ell}, s_{2,-\ell}^*, a_\ell, a_{-\ell}^*, a_{3\ell}, a_{0,\ell})^T$ are the radial functions of the perturbations.
For the details of this analysis, see Appendix \ref{app:pertdetails}. The perturbations of $a_{0,\ell}$ decouple in all cases.
The linearised problem, and its application to assessing the stability of the solutions, will be presented in Sec.\ \ref{ssec:1VEV11} for the case
of $e_1=e_2=1$. For the embedded ANO string, the following sectors of the perturbations decouple: $\delta\phi_1$, $\delta A_i$; $\delta A_0$; $\delta A_3$;
and that of $\delta \phi_2$. The instability in the $\delta\phi_2$ sector signals the bifurcation, the perturbation operator in that sector
agrees with that in the bifurcation equation, Eq.\ (\ref{eq:bifur-f2}).
The application of the expansion of the vortex solution around the bifurcation to the stability problem has been addressed in Ref.\ \cite{FL}
in the $SU(2)$ symmetric case.
The same argument can be repeated here, $\mathcal{M}_\ell = \mathcal{M}_\ell^{(0)} + \epsilon^2 \mathcal{M}_\ell^{(2)}$. This shows that
the perturbation problem of the twisted vortices is a one-parameter deformation of that of the embedded ANO vortices, and therefore twisted vortices
close to the bifurcation are unstable. Vortices farther from the bifurcation need to be treated numerically.
Let us also remark that the perturbation treatment of the instability problem is somewhat involved: for $\beta_1 > 1.5$, a contribution from the
continuum spectrum of the embedded ANO vortex perturbations (as intermediate states in 2nd order perturbation theory) needs to be taken into account \cite{FL}.
\subsection{Stability of vortices with two charged fields}\label{ssec:stabe21}
For twisted vortices, $0 < \omega \le \omega_{\rm b}$, the results are similar to those in the case of an $SU(2)$ symmetric potential (see Refs.\ \cite{twistedinstab1, FL,twistedinstab2}):
firstly, the mode corresponding to the lowest value of the squared frequency $\Omega^2$ is a one-parameter deformation of the instability mode of the embedded ANO vortex.
Secondly, for all values $0<\omega \le \omega_{\rm b}$ which were available to our numerical code, we have found one unstable mode in the $\ell=0$ sector, i.e.,
the instability of the embedded ANO vortex persisted for all examined twisted vortices, and, for lower values of the twist, $\omega$, the value of
$|\Omega^2|$ also became smaller. The value of $\Omega^2$ is negative for a range of the wave number. Close to the minimum $k=k_{\rm min}$ (most negative $\Omega^2$), an approximate dispersion relation
\begin{equation}
\label{eq:disprel}
\Omega^2 = \Omega^2_{\rm min} +\Omega^2_2(k-k_{\rm min})^2\,,
\end{equation}
holds. For the embedded ANO vortex, Eq.\ (\ref{eq:disprel}) is exact, and $k_{\rm min} = \omega_{\rm b}$. Some data are displayed in Tables \ref{tab:instab1}--\ref{tab:instab3}.
As $\omega$ becomes smaller, the errors grow; it is likely that this is because of $\delta A_3$ decoupling at $\omega=0$. For $\omega\to0$, a very small $\delta A_3$ has to be
calculated, which is weakly coupled to the other components. On the other hand, for $\omega=0$, the eigenvalues for $\delta A_3$ are those of the ghost mode (see Table \ref{tab:ghost}).
The ghost mode eigenvalues are collected in Tables \ref{tab:ghost} and \ref{tab:ghost2}. They change slowly with the parameters of the potential and with $\omega$, and their order of magnitude is
unity. Importantly, the lowest-energy modes relevant for stability are not cancelled by them.
Most importantly for our subject matter, in all examined cases, the eigenvalues for $\omega=0$ are $0$ for $k=0$ within numerical precision. For zero twist ($\omega=0$),
the dispersion relation (\ref{eq:disprel}) is exact with $k_{\rm min}=0$ and $\Omega_2^2=1$. This implies that for any $k\ne 0$, the eigenvalue is positive.
As any local perturbation necessarily contains modes with $k\ne 0$, it is a positive energy perturbation. This is strong evidence for the stability of zero-twist vortices.
We have also examined the stability of higher flux vortices. We have found that, for many parameter values, $n=2,3$ vortices are stabilised by the addition of the condensate in their core.
This is in accord with the non-monotonicity of the energy per unit flux as a function of the flux: for vortices with fewer flux quanta than the most strongly bound one, decay into lower-flux vortices is energetically disfavoured. This happens when the parameters are far enough from the bifurcational value. See Table \ref{tab:stabe21}: CC vortices exist for $\alpha_{\rm b} < \alpha < \beta'$,
and they are stable for $\alpha > \alpha_{\rm s}$. As we shall see in the large mass ratio limit, in Sec.\ \ref{sec:limits}, this phenomenon is even more pronounced.
\begin{table}
\centering
\begin{tabular}{|c|c|c||c|c|}
\hline
$\beta_1$ & $\beta_2$ & $\beta'$ & $\alpha_{\rm b}$ & $\alpha_{\rm s}$ \\
\hline\hline
1.25 & 1.25 & 1.255 & 1.1742 & 1.2350 \\
2 & 2 & 2.1 & 1.6811 & 1.9335 \\
2.5 & 2.5 & 2.6 & 1.9756 & 2.3984 \\
\hline
\end{tabular}
\caption{Stabilisation of two-flux ($n=2$) vortices, $e_2=1$.}
\label{tab:stabe21}
\end{table}
\subsection{Stability of vortices with one charged and one neutral fields}\label{ssec:stabe20}
We have also examined the stability of CC vortices in the $e_2=0$ case. We have found qualitatively similar results as in the charged case: the eigenvalue of the mode that is a deformation of the bifurcation eigenmode of the ANO vortex loses its energy-lowering property for CC vortices. The corresponding eigenvalue becomes zero within numerical precision for $z$-independent perturbations,
and $k^2$ for perturbations with $z$ direction wave number $k$, implying that there are no energy-lowering local perturbations in this sector.
For higher winding vortices, we have also observed stabilisation in the case of a neutral second field. For some numerical data, see Table \ref{tab:stabe20}.
\begin{table}
\centering
\begin{tabular}{|c|c|c||c|c|}
\hline
$\beta_1$ & $\beta_2$ & $\beta'$ & $\alpha_{\rm b}$ & $\alpha_{\rm s}$ \\
\hline\hline
2 & 3 & 2.3 & 1.4325 & 1.8287 \\
2 & 9 & 3.98372 & 1.9448 & 2.7938 \\
\hline
\end{tabular}
\caption{Stabilisation of two-flux ($n=2$) vortices, $e_2=0$.}
\label{tab:stabe20}
\end{table}
\section{Magnetic bags and large mass ratio \texorpdfstring{$M$}{M}}\label{sec:limits}
\paragraph{Large flux} A remarkable limit of ANO vortices has been considered in Refs.\ \cite{Bolognesi1, Bolognesi2}. An approximate vortex configuration has been constructed as
\begin{equation}
\label{eq:Bolognesi}
f(r) = \left\{ \begin{aligned}
0 &\text{, if } r < R\,,\\
1 &\text{, if } r > R\,,\\
\end{aligned}
\right. \quad a(r) = \left\{ \begin{aligned}
r^2/R^2 &\text{, if } r < R\,,\\
1 &\text{, if } r > R\,,\\
\end{aligned}
\right.
\end{equation}
with optimal radius $R=R_A=\sqrt{2 n}\beta^{-1/4}$ and energy $E_n = E_{An} \sim 2\pi n \sqrt{\beta}$. It is straightforward to generalise this approximation to the case of a neutral second field,
$e_2=0$, by using Eq.\ (\ref{eq:Bolognesi}) for $f_1$ and $a$, and setting
\begin{equation}
f_2(r) = \left\{ \begin{aligned}
\sqrt{\frac{\alpha}{\beta_2}} &\text{, if } r < R\,,\\
0 &\text{, if } r > R\,,
\end{aligned}\right.
\end{equation}
yielding $R = R_{C0} = \sqrt{2 n}\left( \beta_1 -\alpha^2/\beta_2\right)^{-1/4}$ and $E=E_{C0}= 2\pi n \left( \beta_1 -\alpha^2/\beta_2\right)^{1/2}$. It is remarkable that in this limit an effective
Ginzburg--Landau parameter, $\beta_{\rm eff}=\beta_1 - \alpha^2/\beta_2$, can be introduced.
In the $e_2=0$ case, the large flux limit of the effective ANO vortex thus reproduces well the large flux limit of CC vortices.
However, the large flux behaviour is more delicate in the case of two charged fields: in that case, we have observed numerically that, for $n\to\infty$, $E_n/n$ approaches the same limit for CC and ANO vortices. For numerical data, see Figs.\ \ref{fig:En} and \ref{fig:Ergs}.
\begin{figure}[h!]
\noindent\hfil\includegraphics[angle=-90,scale=.5]{Ergs}
\caption{Energy of vortices per unit flux, $\beta_1=2$, $\tilde{\beta}_2=3$, ${\tilde{\beta}}'=2.3$, $\tilde{\alpha}=2.05$ and $e_2=0$,
compared to Abrikosov (ANO) vortex energies.}
\label{fig:Ergs}
\end{figure}
\begin{figure}[h!]
\noindent\hfil\includegraphics[angle=-90,scale=.5]{En}
\caption{Energy of vortices per unit flux, $\beta_1=2$, $\beta_2=M^2\tilde{\beta}_2$, $\beta'=M {\tilde{\beta}}'$, $\alpha=M\tilde{\alpha}$, $\tilde{\beta}_2=9.68$, ${\tilde{\beta}}'=4.37$, $\tilde{\alpha}=4.15$ and $e_2=1$, compared to Abrikosov (ANO) vortex energies. The arrows show the minima, at $n=13$ and $n=78$, respectively.}
\label{fig:En}
\end{figure}
\paragraph{Large mass ratio, $M$}
The large mass ratio limit is another interesting and physically relevant limit. In liquid metallic hydrogen (LMH), $\phi_1$ corresponds to Cooper pairs formed of electrons, and $\phi_2$ to those formed of protons.
The GL free energy density is
\begin{equation}
\label{eq:GL}
\mathcal{F} = \frac{{\mathbf B}^2}{2} + \sum_{a=1}^2 \left[ \frac{\hbar^2}{2 m_a}|\mathbf{D}\phi_a|^2 + \frac{\lambda_a}{2}|\phi_a|^4 - \mu_a |\phi_a|^2 \right] + \lambda' |\phi_1|^2 |\phi_2|^2\,,
\end{equation}
where ${\mathbf D}\phi_a = (\nabla - e e_a {\mathbf A})\phi_a$, $\lambda_a$, $\lambda'$, and $\mu_a$ are material constants, $e_a$ is the charge of the field $\phi_a$ in some arbitrary units $e$ (e.g., for superconductors, twice the elementary charge is suitable), and we have assumed that there is no Josephson coupling, $\gamma (\phi_1^*\phi_2 + \phi_1 \phi_2^*)$, which would fix the relative phase of the fields at the minimum of the potential, and disallow a 1VEV state. Such is the case if there is a symmetry enforcing the separate conservation of the two fields (e.g., conservation of particle numbers).
With the help of a rescaling of the field $\phi_1$ by $\sqrt{\mu_1/\lambda_1}$, of $\phi_2$ by $\sqrt{m_2/m_1} \sqrt{\mu_1/\lambda_1}$, of the vector potential ${\bf A}$ by $\hbar\eta_1 \sqrt{\mu_1/2/m_1}$, and of distances by $\sqrt{2m_1/(\mu_0e_1^2e^2\eta_1)}$, the penetration depth $\lambda_L = \sqrt{m_1/(\mu_0e_1^2e^2\eta_1)}$ is scaled to $1/\sqrt{2}$, and we obtain the GL free energy with the potential in the form used in Eq.\ (\ref{eq:pot}). The parameters are then related to the microscopic parameters as
\begin{equation}
\label{eq:supercond}
\begin{aligned}
\beta_1 &= 4\lambda_1 m_1^2/(\hbar^2e^2\mu_0)\,,\\
\beta' &= 4 \lambda' m_1 m_2/(\hbar^2 e^2 \mu_0)\,, \\
\end{aligned}
\quad\quad
\begin{aligned}
\beta_2 &= 4\lambda_2 m_2^2/(\hbar^2 e^2\mu_0) \,,\\
\alpha &= 4 \mu_2 m_1 m_2 /(\hbar^2 e^2 \mu_0 \eta_1^2)\,.\\
\end{aligned}
\end{equation}
For LMH, the mass ratio is $M=m_2/m_1 \approx 1836$ (the proton-to-electron mass ratio), and $\beta_2 \gg \alpha, \beta' \gg \beta_1$. Suitable parameters are introduced as
\begin{equation}
\label{eq:scaledparams}
\beta_2 = M^2 \tilde{\beta}_2\,,\quad \beta' = M {\tilde{\beta}}'\,,\quad \alpha = M\tilde{\alpha}\,.
\end{equation}
The tilde parameters are expected to be of the same order of magnitude.
We shall consider here the limiting behaviour of CC vortices for $M\gg 1$.
In Fig.\ \ref{fig:En}, the energy per unit flux, $E_n/n$, is plotted for a large range of fluxes and several values of the mass ratio. Remarkably, $E_n/n$
is not a monotonic function of $n$, in contrast to both type I ($\beta < 1$) and type II ($\beta > 1$) superconductors. As a result, in the two-component theory with large mass ratio, ``giant'' vortices exist. Even for moderate values of $M$ (e.g., 20 or 100), the minimum of $E_n/n$ is shifted to $n=13$ and $n=78$, respectively.
Qualitative properties of the function $E_n/n$ can be reproduced with the following approximate vortex configuration.
Let us consider a bag-type vortex, with $f_1=0$, $f_2=\sqrt{\alpha/\beta_2}$ (the lowest energy false vacuum with $f_1=0$) in its core,
from $r=0$ to $(1-\delta)R$. It is assumed that the vortex has a thin wall, with $f_1$ and $f_2$ having a linear transition to their respective
VEVs between $r=(1-\delta)R$ and $R$. The gauge field is $a=(r/R)^2$ for $r<R$ and $a=1$ otherwise.
The energy of such a configuration is approximately minimised at $R=\sqrt{2 n}\beta_{\rm eff}^{-1/4}$. We expand the energy in $\delta$ in a series containing terms from $1/\delta$ to $\delta^3$. We have found, with a numerical minimisation, that a good approximate minimum is obtained by minimising the sum of the $\delta^{-1}$ and $\delta^3$ terms, yielding $\delta=(5/2)^{-1/4}((\beta_2 +\alpha)/(\beta_2-3\alpha))^{1/4}n^{-1/2}$. With these, it is obtained that
\begin{equation}\label{eq:ErgApprnM}\begin{aligned}
E \approx 2\pi n \beta_{\rm eff} &+ \frac{8\pi}{3}\left( \frac{2}{5}\right)^{1/4}\left[ 1
-\frac{1}{4\sqrt{10}}\left( 7\beta_{\rm eff} - \frac{\tilde{\alpha}(\tilde{\alpha}+{\tilde{\beta}}')}{\beta_{\rm eff}\tilde{\beta}_2} \right)
\right] n^{1/2}\\
&+\frac{\pi \tilde{\alpha}}{M\tilde{\beta}_2} \left[ 1 - \frac{2^{7/4}5^{1/4}}{3}n^{3/2} + \frac{3 \cdot 5^{1/2}}{2^{1/2}}n \right]\,.
\end{aligned}
\end{equation}
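To make the minimisation explicit, denote by $a_{-1}/\delta + a_3 \delta^3$ the two terms kept above (the coefficients $a_{-1}$, $a_3$ are schematic); then $dE/d\delta = 0$ yields
\begin{equation}
\delta_* = \left( \frac{a_{-1}}{3 a_3} \right)^{1/4}\,,
\end{equation}
which, with the coefficients of the bag configuration, reproduces the $n^{-1/2}$ scaling of $\delta$ quoted above.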
The qualitative formula (\ref{eq:ErgApprnM}) gives an order-of-magnitude correct value. It also shows that $E_n/n$ is non-monotonic, with a minimum
at a value of $n$ that grows with $M$. This minimum is significantly below the energy/flux of embedded ANO vortices (in the bag approximation of Refs.\ \cite{Bolognesi1, Bolognesi2}, $2\pi \sqrt{\beta_1}$).
The existence of the minimum is the result of the competition of two phenomena: the expansion of the vortices due to the magnetic energy, and the
large-$M$ behaviour fixing $f_2$ to its minimal energy value in the core, at the cost of the interaction energy between the second scalar and the gauge fields. If $n$ becomes much larger than at the minimum of $E_n/n$, CC vortices approach embedded ANO ones.
\paragraph{Boundary of upper and lower component 1VEV: Wall-type vortices} Close to $\alpha=\sqrt{\beta_1 \beta_2}$, the potential energy in the core becomes small, the vortices become large, and their flux is localised closer to
the outer end of their cores. At the same time, the minimum of $E_n/n$ is shifted to larger values of $n$, and at $\alpha=\sqrt{\beta_1 \beta_2}$,
$E_n \propto n$ for large $n$. Here, ANO vortices in the lower component also become possible.
In this case, it is possible to exchange the roles of the two components, with the rescaling $\phi_a\to \eta_2\phi_a$, $x\to x/\eta_2$, $A\to \eta_2 A$, where $\eta_2^2=\alpha/\beta_2$.
In this way, we get the same expression for the energy of the vortices with the potential (\ref{eq:pot}) and an overall multiplier $\alpha/\beta_2$.
With the same configuration as above, the estimated energy of these vortices is $E=2\pi( 4\alpha/\beta_2 + \alpha/\sqrt{3\beta_2})$, which
is of order $M^0$ asymptotically. However, using the large-$\beta$ asymptotics of the Abrikosov vortex energy \cite{Pismen}, we get $E\sim 2\pi\frac{\alpha}{\beta_2}\log\sqrt{\beta_2}$,
i.e., $\sim (\log M)/M$, telling us that at the transition it is energetically favourable for the vortices to break up into $n=1$ lower component
Abrikosov vortices. Linearising the equations in the other component shows that these vortices are then stable against the formation of a condensate
in their core. This can be seen as follows: the large-$\beta$ asymptotic form of the vortex profile is a small core
with size proportional to $1/\sqrt{\beta_2}\propto 1/M$. The linearised equation has the form of an eigenvalue equation,
and we have verified numerically that it has no bound modes; therefore, if $\alpha > \sqrt{\beta_1 \beta_2}$, vortices in the lower component do not have a condensate in their cores.
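For completeness, the $(\log M)/M$ scaling quoted above follows directly from the parametrisation (\ref{eq:scaledparams}):
\begin{equation}
E \sim 2\pi\frac{\alpha}{\beta_2}\log\sqrt{\beta_2} = \frac{2\pi\tilde{\alpha}}{M \tilde{\beta}_2}\log\left( M\sqrt{\tilde{\beta}_2}\right) = O\!\left( \frac{\log M}{M}\right)\,.
\end{equation}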
\section{The case of a two-component vacuum expectation value}\label{sec:2VEV}
\paragraph{Global 2VEV vortices} Let us briefly consider global 2VEV vortices. These are discussed, in the context of atomic BECs, in Refs.\ \cite{Kasamatsu, Mason, IvashinPoluektov, KasamatsuEtoNitta}. Let us note that since, for $r\to\infty$, $f_1\to \eta_1$ and $f_2\to\eta_2$, the asymptotic behaviour of the energy density [see Eq.\ (\ref{eq:globerg})] is $\mathcal{E} \sim (n^2 \eta_1^2 + m^2 \eta_2^2)/r^2$, the energy of the vortex is
\begin{equation}
\label{eq:2VEVglobErg}
E = \int \mathrm{d}^2 x \mathcal{E} = 2\pi\int_0^{R_{\rm core}} \mathrm{d} r r \mathcal{E} + 2\pi (n^2 \eta_1^2 + m^2 \eta_2^2) \log\left( \frac{R}{R_{\rm core}}\right)\,.
\end{equation}
An interesting case is the behaviour close to the boundary between 1VEV and 2VEV classes, at $\alpha=\beta'$. Unless $\beta_1 \beta_2 = (\beta')^2$,
a limiting vortex exists in the 1VEV case, with power-law localisation. It is also a smooth limit of 2VEV vortices: at the transition, $\eta_2$ becomes 0.
For a comparison of a 1VEV and a 2VEV global vortex, both close to the transition, see Fig.\ \ref{fig:2vevcmpglob}. Numerical data are collected in Table \ref{tab:trans}.
\paragraph{Two charged fields} Let us first note that, with two non-zero VEVs, the energy per unit length of a twisted vortex diverges quadratically in $R$,
as there is only one longitudinal gauge field component, $A_3$, which would either not cancel the longitudinal derivative of $\phi_2$, or lead to a non-vanishing $D_3 \phi_1$.
Also, since $A_\vartheta$ cannot cancel the angular derivatives of both fields unless $n=m$, the energy of 2VEV vortices is finite only in this case.
Vortices with a mildly (i.e., logarithmically) divergent energy exist, however, for any pair of windings $n$, $m$. Minimising the coefficient of the logarithmic energy contribution yields $a(r\to\infty) = \eta_1^2/(\eta_1^2 + \eta_2^2)$, which agrees with the number of flux quanta in the vortex and is, in general, non-integer. In Refs.\ \cite{BabaevF, BS, BS1, BS2}, these vortices have been termed fractional flux vortices.
Let us now consider the case of $n=1$, $m=0$. Inserting the limiting value of $a$ and the VEVs into the energy density, Eq.\ (\ref{eq:Edens}), the asymptotic form of the energy is obtained, yielding
\begin{equation}
\label{eq:2VEVdivergence}
E = 2\pi \int_0^{R_{\rm core}} \mathrm{d} r r \mathcal{E} + 2 \pi E_L \log\left( \frac{R}{R_{\rm core}}\right) = E_{\rm core} + 2 \pi E_L \log\left( \frac{R}{R_{\rm core}}\right)\,,
\end{equation}
where the coefficient of the logarithm is given as
\begin{equation}\label{eq:2VEVchgEL}
E_L = \frac{\eta_1^2 \eta_2^2}{\eta_1^2 + \eta_2^2}\,.
\end{equation}
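Both the limiting flux and $E_L$ follow from a short minimisation, sketched here for $e_1=e_2=1$, $n=1$, $m=0$: at large $r$, the angular gradient terms in the energy density behave as $[\eta_1^2(1-a)^2+\eta_2^2 a^2]/r^2$, and minimising the bracket with respect to the asymptotic value of $a$ gives
\begin{equation}
a(r\to\infty) = \frac{\eta_1^2}{\eta_1^2+\eta_2^2}\,,\qquad
\eta_1^2\left[1-a(\infty)\right]^2 + \eta_2^2\, a(\infty)^2 = \frac{\eta_1^2\eta_2^2}{\eta_1^2+\eta_2^2} = E_L\,.
\end{equation}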
In the 1VEV case, close to the transition, the radial fall-off of the second field component is $\sim F_2 r^{-1/2}\exp(-\sqrt{\beta' -\alpha}\,r)$, which becomes slower as the system approaches the 2VEV case. For a finite size sample, at some point, 1VEV solutions and 2VEV fractional flux vortices become indistinguishable in those cases when the zero twist limit exists for $\beta'=\alpha$. For a comparison of 1VEV and 2VEV vortices close to the transition, see Fig.\ \ref{fig:2vevcmpe21} and Table \ref{tab:trans}.
Let us also mention that, in the large mass ratio ($M$) limit, $\eta_1$ is independent of $M$, and $\eta_2^2 = {\tilde{\eta}}_2^2 / M$. As a result, in the large mass ratio limit, $E_L = O(M^{-1})$, and the dependence of the energy on $R$ becomes weak. The limit of the flux is
\[
n a(r\to\infty)= n \frac{\eta_1^2}{\eta_1^2 + \eta_2^2} =
n \left[ 1 - \frac{1}{M}\frac{\beta_1 (\tilde{\alpha}-{\tilde{\beta}}')}{\beta_1 \tilde{\beta}_2 -\tilde{\alpha}{\tilde{\beta}}'}\right] + O(1/M^2)\,,
\]
i.e., the deviation of the flux from the integer value decreases with $M$ in the $M\gg 1$ limit; in the case of LMH,
distinguishing between fractional flux and ANO vortices is expected to require measuring the flux to a precision
better than one part in a thousand. At the same time, the coefficient of the logarithmic term in the energy [see Eq.\ (\ref{eq:2VEVchgEL})] also becomes small,
\[
E_L = \frac{1}{M} \frac{\beta_2 (\tilde{\alpha}-{\tilde{\beta}}')}{\beta_1 \tilde{\beta}_2 - {\tilde{\beta}}'{}^2} + O(1/M^2)\,.
\]
Also, $f_1$ becomes close to the scalar field of an ANO vortex with effective couplings $\lambda_{\rm eff}$ and $\alpha_{\rm eff}$, as in the 1VEV case; see Sec.\ \ref{sec:limits}.
Some numerically calculated (core) energy values of 2VEV vortices with both fields charged are shown in Fig.\ \ref{fig:En2V1}, and the corresponding radii in Fig.\ \ref{fig:Rc2V1}. Note that there seems to be an energy contribution proportional to $n^2$: $E_n/n$ grows with $n$ even though $\beta_{\rm eff} <1$.
\begin{figure}[h!]
\noindent\hfil\includegraphics[angle=-90,scale=.5]{En2V1}
\caption{Energy of 2VEV vortices per unit flux, $\beta_1=2$, $\tilde{\beta}_2=2.4$, ${\tilde{\beta}}'=1.8$, $\tilde{\alpha}=2.2$ and $e_2=1$.
For comparison, the energy per unit flux of the corresponding effective ANO vortices for large flux, $(\alpha_{\rm eff}/\lambda_{\rm eff}) 2\pi \sqrt{\lambda_{\rm eff}}$, is also indicated.}
\label{fig:En2V1}
\end{figure}
\begin{figure}[h!]
\noindent\hfil\includegraphics[angle=-90,scale=.5]{Rc2V1}
\caption{Radius of 2VEV vortices ($f_1(R_{\rm core})=0.95 \eta_1$), $\beta_1=2$, $\tilde{\beta}_2=2.4$, ${\tilde{\beta}}'=1.8$, $\tilde{\alpha}=2.2$ and $e_2=1$.}
\label{fig:Rc2V1}
\end{figure}
\paragraph{One field charged, one neutral}
In the 2VEV case with one neutral condensate ($e_2=0$), vortices with $m=0$ have finite energy and integer flux, $a(r\to\infty) \to 1$, with the number of flux quanta equal to $n$. For other values of $m$, $E \sim 2 \pi m^2 \eta_2^2 \log(R/R_{\rm core})$.
We have calculated some 2VEV vortices with $n=1$, $m=0$ for a neutral scalar field numerically. The data is collected in Table \ref{tab:2VEVe20}.
Note that there is a series of data for $\beta_1=2$, $\tilde{\beta}_2=3$, ${\tilde{\beta}}'=2$, $\tilde{\alpha}=2.1$ and $M=1,2,3$. For $M\to\infty$,
the lowest energy state with $\phi_1=0$ is $|\phi_2|=\sqrt{\alpha/\beta_2}=O(1/\sqrt{M})$. With this assumption, the leading terms in the equation of $f_2$
in Eq.\ (\ref{eq:FRVprofE}) are $\beta_2 f_2^2 -\alpha + \beta' f_1^2$; neglecting the remaining terms yields $f_2^2 = (\tilde{\alpha} - {\tilde{\beta}}'f_1^2)/(\tilde{\beta}_2 M)$. Substituting this into the equation of $f_1$ yields an ANO vortex profile equation with $\lambda_{\rm eff}=\beta_1 - ({\tilde\beta}')^2/\tilde{\beta}_2$ and $\alpha_{\rm eff}=\beta_1 - {\tilde\beta}' \tilde{\alpha}/\tilde{\beta}_2$. Rescaling this into the usual ANO form yields an approximate energy $(\alpha_{\rm eff}/\lambda_{\rm eff}) E_{ANO}(\beta=\lambda_{\rm eff})$. For comparison, for the case in Table \ref{tab:2VEVe20}, this yields $E/(2\pi)\approx 0.8279$ (with $\lambda_{\rm eff}=0.6667$, $\alpha_{\rm eff}=0.6$, and $E_{ANO}(\beta=0.6667)/(2\pi)=0.9199$).
Some data for $n=2$ is collected in Table \ref{tab:2VEVe202}. For $n=2$, the approximation from the effective ANO vortex gives
$E/(4\pi) \approx 0.8071$ (with $E_{ANO}(n=2,\beta=0.6667)/(4\pi)=0.8967$). See also Fig.\ \ref{fig:En2V0}.
Numerically, $f_2^2 \approx (\alpha - \beta' f_1^2)/\beta_2$ holds with a good accuracy even for $M=4$.
\begin{figure}
\centering
\includegraphics[scale=.5,angle=-90]{2vevcmpglob1}
\caption{Comparison of 1VEV and 2VEV global vortices close to the boundary: $\beta_1=\beta'=2$, $\beta_2=4.5$, $\alpha=1.99$ 1VEV and $\alpha=2.011$ 2VEV.}
\label{fig:2vevcmpglob}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=.5,angle=-90]{2vevcmp}
\caption{Comparison of 1VEV and 2VEV vortices close to the boundary: $\beta_1=\beta'=\alpha=2$, $\beta_2=3$ 1VEV and $\alpha=2.011$ 2VEV, $e_2=1$.}
\label{fig:2vevcmpe21}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=.5,angle=-90]{2vevcmpe20}
\caption{Comparison of 1VEV and 2VEV vortices close to the boundary: $\beta_1=\beta'=\alpha=2$, $\beta_2=3$ 1VEV and $\alpha=2.011$ 2VEV, $e_2=0$.}
\label{fig:2vevcmpe20}
\end{figure}
\begin{table}
\centering
\begin{tabular}{|c|c| c | c || c | c | c || c| c | c|}
\hline
& $\beta_1$ & $\beta_2$ & $\beta'$ & $\alpha$ & $E/(2\pi)$, 1VEV & $R_{\rm core}$ & $E_L$ & $E/(2\pi)$, 2VEV & $R_{\rm core}$ \\
\hline\hline
(a) & 2 & 3 & 2 & 2 & 1.13598 & --- & 0.010879 & 1.10667 & 3.59999 \\
(b) & 2 & 3 & 2 & 2 & 1.0697 & --- & --- & 1.03519 & --- \\
(c) & 1 & 4.5 & 2 & 1.99 & 1.71 & 9.13 & 0.956 & 1.67 & 10.7 \\
\hline
\end{tabular}
\caption{Comparison of 1VEV and 2VEV vortices close to the boundary: (a) $e_1=e_2=1$, (b) $e_1=1$, $e_2=0$, and (c) global. The coefficient of the $2\pi\log(R/R_c)$ term in the energy, $E_L$, is always 1 for 1VEV global vortices; it is displayed for the $e_1=e_2=1$ case in the table. Here $R_{\rm core}$ is defined by $a(R_{\rm core})=0.95\, a(r\to\infty)$; for the global case, $f_1(R_{\rm core})=0.95\, f_1(r\to\infty)$.}
\label{tab:trans}
\end{table}
\begin{table}
\centering
\begin{tabular}{|c|c|c|c||c|}
\hline
$\beta_1$ & $\beta_2$ & $\beta'$ & $\alpha$ & $E/(2\pi)$\\
\hline\hline
2 & 3 & 2 & 2.011 & 1.03519 \\
2 & 3 & 2 & 2.1 & 0.90588 \\
2 & 12 & 4 & 4.2 & 0.87461 \\
2 & 27 & 6 & 6.3 & 0.86128 \\
2 & 48 & 8 & 8.4 & 0.85389 \\
\hline
\end{tabular}
\caption{The energy of some 2VEV vortices for $e_2=0$, $n=1$, $m=0$.}
\label{tab:2VEVe20}
\end{table}
\begin{table}
\centering
\begin{tabular}{|c|c|c|c||c|}
\hline
$\beta_1$ & $\beta_2$ & $\beta'$ & $\alpha$ & $E/(4\pi)$\\
\hline\hline
2 & 3 & 2 & 2.011 & 0.98606 \\
2 & 3 & 2 & 2.1 & 0.86958 \\
2 & 12 & 4 & 4.2 & 0.84219 \\
2 & 27 & 6 & 6.3 & 0.83159 \\
2 & 48 & 8 & 8.4 & 0.82593 \\
\hline
\end{tabular}
\caption{The energy of some 2VEV vortices for $e_2=0$, $n=2$, $m=0$.}
\label{tab:2VEVe202}
\end{table}
\begin{figure}[h!]
\noindent\hfil\includegraphics[angle=-90,scale=.5]{En2V0}
\caption{Energy of 2VEV vortices per unit flux, $\beta_1=2$, $\tilde{\beta}_2=3$, ${\tilde{\beta}}'=2$, $\tilde{\alpha}=2.1$ and $e_2=0$.
For comparison, the energy per unit flux of the corresponding effective ANO vortices for large flux, $(\alpha_{\rm eff}/\lambda_{\rm eff}) 2\pi \sqrt{\lambda_{\rm eff}}$, is also indicated.}
\label{fig:En2V0}
\end{figure}
\section{Conclusions}\label{sec:conclusions}
In the present paper, we gave a detailed study of vortex solutions in a broad class of $U(1) \times U(1)$ symmetric, two-component scalar field theories. We emphasise the hitherto unexplored case in which only one of the scalars obtains a vacuum expectation value (1VEV), and also consider the case with both fields having a VEV (2VEV).
In the 1VEV case of the purely scalar (Gross--Pitaevskii) theory, vortices can lower their energy by the formation of a condensate of the second field in their core. The result is a condensate core (CC) vortex. We found that the condensate in the core of the vortex can stabilise higher winding vortices against the splitting instability, in strong contrast with the
ordinary GP theory.
In the 1VEV case of the gauged theory, the two-component Ginzburg--Landau theory (or, in the relativistic case, the extended Abelian Higgs model), CC vortices also exist. They coexist with embedded Abrikosov vortices, and have significantly lower energy. Importantly, CC vortices are stable. Higher flux CC vortices are also stabilised against the splitting instability, even in cases when embedded Abrikosov vortices split into unit flux ones. In a strong coupling limit, relevant to, e.g., superconducting liquid metallic hydrogen, we have demonstrated the existence of stable ``giant'' vortices, i.e., vortices with $O(1000)$ flux quanta. The physical implication is that these materials are neither type II superconductors (which only have stable unit flux vortices), nor type I (as the energy/flux of vortices does have a minimum here). We obtained similar results in the case when only one of the scalar fields is charged. In this case, we have found a remarkably simple description of the high flux limit of CC vortices,
quite similar to that of Abrikosov vortices.
In all three cases of the GP and GL models with one or two charged fields, we have demonstrated that vortices in the 1VEV case are smoothly connected with those in the 2VEV case with winding in only one component. Since, in the case of two charged fields, the energy of 1VEV vortices is finite while that of the corresponding 2VEV ones is logarithmically divergent, this connection is quite remarkable. In the case of one charged and one neutral field, all 1VEV and 2VEV vortices have finite energy.
The fact that CC vortices with higher fluxes become stable also implies a richer physics of inter-vortex forces. E.g., in the case of the GL theory, the stability of higher winding vortices implies that the inter-vortex forces become attractive as the distance between the vortices decreases. This is analogous to the behaviour of vortices in certain neither type I nor type II superconductors.
\subsection*{Acknowledgements}
This work has been supported by the grant OTKA K101709.
\section{Introduction}
Memes have evolved into one of the most powerful mediums of spreading hate online. The ubiquity of social media has fanned the flames of \textit{hate speech}, communication conveying prejudiced messages towards members of minority groups. Memes are frequently used to spread hate, alt-right, and even neo-Nazi rhetoric on Reddit, 4chan, and other mainstream social media websites. Recently, r/The\_Donald and 4chan have been responsible for a large fraction of hateful memes that spread virally from fringe to mainstream media platforms, and about 5\% of all memes posted on /pol/ were racist, meaning that over 600,000 racist memes were shared in a span of 13 months from \textit{just this community} ~\cite{zannettou2018origins}.
Flagging hateful memes before they spread is a challenging problem for humans and AI-based models due to the nuance and sociopolitical contexts that drive their interpretation. The ineffectiveness of current hate speech moderation methods highlights the acute need to make automatic hate speech detection more efficient.
In May 2020, Facebook AI released a dataset of over 10,000 multimodal memes as part of the Hateful Memes Challenge~\cite{kiela2021hateful}, a challenge hosted by DrivenData to drive progress on this task.
\subsection{Motivation}
On the Hateful Memes dataset, trained humans achieved an auROC of 82.65 \cite{kiela2021hateful}; the relatively poor performance of even this gold standard suggests that a hybrid approach, augmenting human classification with machine learning-derived scores and tags, may be promising for effective hateful meme identification. To that end, in this work we consider human-interpretable machine learning algorithms, such as a gradient-boosted decision tree model built on engineered features, which outperforms non-transformer baselines and performs comparably to complex transformer baselines. We create a reasonably performant model that can be used to flag memes and augment them with the most important features to aid human classification.
A decision tree model allows us to easily extract a ranking of derived features that can be used to augment meme images and aid humans in final classification. Such scoring provides valuable insights into common characteristics of hateful memes that may warrant further investigation. We look at textual sentiment, named entities in images and text, and semantic similarity between image and text as some of the most important features.
\section{Dataset/Challenge Description}
The Hateful Memes dataset is a challenge dataset with 12,140 total samples, of which about 63\% are non-hateful and 37\% are hateful memes. Many hateful memes come with text and image confounders, which alter the text or image of the hateful meme to change its connotation to non-hateful, meaning models must utilize both modalities to succeed at the task.
\begin{figure}[h]
\includegraphics[width=0.83\columnwidth]{confounders.png}
\caption{Hateful meme and text/image confounders. Image above is a compilation of assets, including ©Getty Images.}
\end{figure}
Exploratory data analysis found that the majority of hate attacks minority groups by playing off of common stereotypes and going beyond them to imply threats or violence.
The Hateful Memes challenge was a competition from May to October 2020 intended for researchers to fine-tune large-scale state-of-the-art transformer models. The competition metric was auROC.
\section{Related Work}
\subsection{Related datasets}
Although the Hateful Memes dataset is one of the first of its kind, there are a few similar datasets. SemEval-2020 Task 8 (Memotion analysis) \cite{sharma-etal-2020-semeval} involves classifying sentiment, humor, offensiveness, and motivation on a dataset of 10,000 human-annotated memes. The macro F1-score for the baseline model was 0.21 for sentiment and 0.50 for humor type classification using image-text models, emphasizing how challenging interpreting memes can be. Additionally, \cite{Miliani2020DANKMEMESE} is a similar dataset, with 2,631 Italian memes.
\subsection{Text-only hate speech detection}
There has been a myriad of work on text-based hate speech detection, focused on Twitter-style text data. Current state-of-the-art approaches \cite{10.1145/3394231.3397890, Abro2020AutomaticHS, fortuna2018survey} have involved the standard natural language processing toolkit, including BERT and other embedding schemes.
\subsection{Multimodal/meme hate detection}
State-of-the-art multimodal hate speech detection often includes unimodally pretraining models for each modality, for early and late fusion, as well as multimodal pretraining~\cite{Afridi2020AMM}. For example, \cite{kiela2021hateful}'s baseline using BERT on meme text is an example of unimodal pretraining, whereas their use of a VisualBERT COCO model to pretrain constitutes multimodal pretraining. Hateful Memes challenge winners achieved auROCs between 0.78 and 0.84, relying heavily on large multimodal transformer models such as OSCAR, UNITER, VisualBERT, and LXMERT~\cite{muennighoff2020vilio,sandulescu2020detecting,zhu2020enhance,velioglu2020detecting,lippe2020multimodal}, largely taking the same approach of fine-tuning and ensembling very high-capacity single- and dual-stream transformer models or other recurrent architectures with minimal data preprocessing. Despite outperforming baselines, such models are still very far from an ideal resolution to the problem of identifying hateful memes.
\section{Methods/Approach}
Here, we take a divergent approach from much of the literature on hateful meme detection. Rather than focusing on high capacity ensembles of transformer models, we use thoughtfully engineered features and pass these to two models for final classification: a gradient boosted decision tree and simple LSTM. This methodology allows us to easily isolate crucial features for identifying memes as hateful and extract underpinning logic from our models that may be useful to augment a human in performing this challenging classification task.
In addition to straightforwardly embedding the text and images associated to a meme, we augment our feature set with a variety of common-sense and machine-learning derived features that represent criteria that a human uses when attempting to contextualize a meme and uncover its underlying meaning.
\subsection{Text and image embedding}
For our gradient-boosted decision tree, we use a captioning model \cite{imagecaptioning} to capture the relevant content of an image in text format. We then embed both text and images using tf-idf to upweight individual words of high interest.
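A minimal sketch of this step (the use of scikit-learn's TfidfVectorizer and its default tokenisation are our assumptions; the texts shown are placeholders):
\begin{verbatim}
# Sketch: joint tf-idf embedding of meme text plus generated captions.
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "look at them laughing a group of people playing on a beach",
    "love the way you smell today a skunk sitting on grass",
]
vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
X = vectorizer.fit_transform(docs)  # sparse (n_memes, n_terms) matrix
print(X.shape)
\end{verbatim}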
In our LSTM model, we concatenate meme image captions and the meme text and embed them via DistilBERT \cite{Sanh2019DistilBERTAD}.
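The corresponding step for the LSTM features can be sketched with the Hugging Face transformers library (pooling via the [CLS] token is our assumption):
\begin{verbatim}
# Sketch: embed concatenated caption + meme text with DistilBERT.
import torch
from transformers import DistilBertTokenizer, DistilBertModel

tok = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
bert = DistilBertModel.from_pretrained("distilbert-base-uncased")

text = "a black and white photo of a monkey love the way you smell"
inputs = tok(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    hidden = bert(**inputs).last_hidden_state  # (1, seq_len, 768)
embedding = hidden[:, 0, :]  # 768-d [CLS] vector fed to the LSTM
\end{verbatim}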
\subsection{Additional textual features}
In addition to our embeddings, we develop features using named-entity recognition, profanity and slur detection, counts of hateful words, text sentiment, and emotion detection.
\subsubsection{Named-entity detection}
We use SpaCy named-entity detection~\cite{esteves2018named} to extract culturally relevant components of the meme text, identifying and individually encoding 2085 named entities.
\subsubsection{Profanity/Slurs}
We further augment our feature set with counts of profanity and slurs banned by Google~\cite{profanity}.
\subsubsection{Hateful words}
Based on a frequency search and corpuses such as top hits on Urban Dictionary, we supplement our feature set with flags for words commonly used to dogwhistle hate in memes despite having an innocuous meaning in ordinary speech. For example, we flag ``dishwasher,'' which is often used in a derogatory manner in memes to refer to women.
\subsubsection{Text Sentiment}
We use TextBlob~\cite{loria2018textblob} to identify the polarity (positivity or negativity of sentiment) and subjectivity (how opinionated or objective views expressed in the text are) of a meme's text.
\subsubsection{Emotion detection}
We use the text2emotion \cite{text2emote} library to score our meme text based on its happiness, sadness, fear, surprise, and anger. We find that high fear and anger scores are often indicative of hateful memes.
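A minimal sketch of the sentiment and emotion features (the output keys of text2emotion are taken from its documentation and may differ across versions):
\begin{verbatim}
# Sketch: polarity/subjectivity via TextBlob, emotions via text2emotion.
from textblob import TextBlob
import text2emotion as te

text = "look at them laughing"
sent = TextBlob(text).sentiment
emotions = te.get_emotion(text)  # {'Happy':..,'Angry':..,'Surprise':..,
                                 #  'Sad':..,'Fear':..}
features = [sent.polarity, sent.subjectivity] + list(emotions.values())
\end{verbatim}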
\subsubsection{Semantic Similarity}
We use a fine-tuned RoBERTa model~\cite{nie-etal-2020-adversarial, wolf-etal-2020-transformers} using a combination of various natural language inference datasets such as SNLI \cite{bowman2015large}, multiNLI \cite{N18-1101}, FeverNLI \cite{nie2019combining}, and ANLI \cite{nie-etal-2020-adversarial} to detect \textit{semantic similarity} between meme text and the generated meme captions and detected image web entities. The result is a probability vector for the 3 possible classes of ``contradiction'' (texts that contradict each other), ``entailment'' (one piece of text can be inferred from the other), or ``neutral'' (neither entailment nor contradiction, but texts can be semantically similar). These features are particularly useful, as a number of image confounders in the dataset have meme text which simply captions the image; then, a high score on the ``entailment'' or ``neutral'' classes can help conclude that a meme is benign; similarly, high scores on contradiction can help detect irony in a meme between the meme text and what is expressed in the meme image.
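A minimal sketch of the scoring (the checkpoint name is our assumption; any RoBERTa model fine-tuned on the NLI datasets above with entailment/neutral/contradiction labels plays the same role):
\begin{verbatim}
# Sketch: NLI probabilities between a generated caption and meme text.
import torch
from transformers import (AutoTokenizer,
                          AutoModelForSequenceClassification)

ckpt = "ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli"
tok = AutoTokenizer.from_pretrained(ckpt)
nli = AutoModelForSequenceClassification.from_pretrained(ckpt)

premise = "a black and white photo of a monkey"  # generated caption
hypothesis = "look at them laughing"             # meme text
inputs = tok(premise, hypothesis, return_tensors="pt",
             truncation=True)
with torch.no_grad():
    probs = torch.softmax(nli(**inputs).logits, dim=-1)
# probs holds [entailment, neutral, contradiction] for this checkpoint
\end{verbatim}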
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{text-preprocessing-example.png}
\caption{Text preprocessing steps}
\label{fig:text_preprocessing}
\end{figure}
\subsection{Image-based features}
Many multimodal hate speech detection models overweight text. Here, we focus on extracting additional image-based features.
We first preprocess images by removing meme text using OCR, then caption them, then detect objects and web entities in the image. These steps serve to summarize some relevant content of an image.
\subsubsection{Image Preprocessing}
We preprocess meme images using OCR to detect textual regions and inpainting using OpenCV \cite{opencv}, increasing accuracy downstream.
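A minimal sketch of this step (pytesseract is our assumption for the OCR engine, which the text does not name; inpainting uses OpenCV's Telea method):
\begin{verbatim}
# Sketch: mask OCR-detected text boxes, then inpaint them away.
import cv2
import numpy as np
import pytesseract

img = cv2.imread("meme.png")
mask = np.zeros(img.shape[:2], dtype=np.uint8)

data = pytesseract.image_to_data(img,
                                 output_type=pytesseract.Output.DICT)
for x, y, w, h, txt in zip(data["left"], data["top"], data["width"],
                           data["height"], data["text"]):
    if txt.strip():            # mark each detected word box
        mask[y:y + h, x:x + w] = 255

clean = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("meme_clean.png", clean)
\end{verbatim}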
\subsubsection{Image Captioning}
We use a visual attention-based captioning model trained on COCO~\cite{imagecaptioning,10.5555/3045118.3045336} to learn the relations between objects present in meme images.
\subsubsection{Object Detection}
We tag objects within meme images using Facebook's Detectron2 \cite{wu2019detectron2} tagger, which is able to provide specific descriptions of items that may be omitted from our image captions.
\subsubsection{Web Entity Detection}
As in~\cite{zhu2020enhance}, we use Google Vision API's Web Entity Detection \cite{googlevision}, which contextualizes images using knowledge from the web, enabling our model to account for the rapid shifts that occur in meme culture in a matter of weeks, days, or minutes.
\subsection{Total Preprocessing}
\label{section:totalprocess}
After preprocessing in both channels, we concatenate the emotion, sentiment, semantic similarity, profanity, and hateful words features ($13 \times 1$). Meme text, captions, detected objects, and named and web entities are concatenated and embedded jointly ($19651 \times 1$), forming the final input of $19664 \times 1$.
\subsection{Models}
We use a 90-10 train/validation-test split and build two classes of models, tuning each set of hyperparameters with a grid search and performing 5-fold cross-validation on each model.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{total-preprocessing.png}
\caption{Total preprocessing pipeline.}
\label{fig:preprocessing}
\end{figure}
\begin{table*}[ht]
\caption{Results}
\begin{tabular}{llllll}
\toprule
& & \multicolumn{2}{c}{AUROC} & \multicolumn{2}{c}{Accuracy} \\
Source & Model & \textbf { Validation } & \textbf { Test } & \textbf { Validation } & \textbf { Test }\\
\midrule & \text { Human } & - & 82.65 & - & 84.70\\
\cmidrule { 2 - 6 } \text { Hateful Memes } & \text { Image-grid } & 58.79 & 52.63 & 52.73 & 52.00 \\
\text { Non-transformer Baselines } & \text { Image-region } & 57.98 & 55.92 & 52.66 & 52.13\\
& \text { Text-only BERT } & 64.65 & 65.08 & 58.26 & 59.20\\
& \text { Late Fusion } & 65.97 & 64.75 & 61.53 & 59.66\\
\cmidrule { 2 - 6 } \text { Hateful Memes } & \text { ViLBERT } & 71.13 & 70.45 & 62.20 & 62.30 \\
\text { Transformer Baselines } & \text { VisualBERT } & 70.60 & 71.33 & 62.10 & 63.20\\
& \text { ViLBERT CC } & 70.07 & 70.03 & 61.40 & 61.10\\
& \text { VisualBERT COCO } & 73.97 & 71.41 & 65.06 & 64.73\\
\cmidrule { 2 - 6 }
& \text { GBDT w/ image tagging } & 70.86 & 71.11 & 70.27 & 71.19 \\
\text { Our Models } & \text { GBDT w/ tagging/captions } & 71.67 & 70.90 & 69.58 & 68.45 \\
& \text { GBDT w/ BERT only } & 70.38 & 71.52 & 68.97 & 69.36 \\
& \text { LSTM w/ BERT features } & 73.78 & 72.72 & 67.83 & 66.39\\
\bottomrule
\end{tabular}
\end{table*}
\subsubsection{GBDT}
In our gradient-boosted decision trees, we input (1) the joint tf-idf embeddings for image captions and meme text as well as (2) the engineered features for image and text, as described in \ref{section:totalprocess}. We train the model with 100 estimators, a learning rate of 1.0, a maximum depth of 40, and a scale\_pos\_weight of 1.5, with an average of 900 nodes per tree after pruning.
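A minimal sketch of this configuration (the library is our assumption, as the scale\_pos\_weight hyperparameter quoted above matches XGBoost's API; the arrays are placeholders for the full feature vectors):
\begin{verbatim}
# Sketch: GBDT with the hyperparameters quoted in the text.
import numpy as np
from xgboost import XGBClassifier

X = np.random.rand(200, 500)       # placeholder features
y = np.random.randint(0, 2, 200)   # placeholder labels (1 = hateful)

gbdt = XGBClassifier(n_estimators=100, learning_rate=1.0,
                     max_depth=40, scale_pos_weight=1.5)
gbdt.fit(X, y)
ranking = np.argsort(gbdt.feature_importances_)[::-1]  # top features
\end{verbatim}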
\subsubsection{LSTM}
We use a pretrained DistilBERT model to embed our meme text and image captions jointly, forming a 768-dimensional input. The model has a 9-unit LSTM layer followed by two Dense layers with 8 and 2 neurons, respectively. We train for 45 epochs using Adam and binary cross-entropy loss.
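A minimal Keras sketch of this architecture (feeding each 768-dimensional DistilBERT vector as a length-one sequence is our assumption, as the text does not specify how the vector is shaped for the LSTM):
\begin{verbatim}
# Sketch: 9-unit LSTM followed by Dense(8) and Dense(2).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(1, 768)),  # one DistilBERT vector per meme
    layers.LSTM(9),
    layers.Dense(8, activation="relu"),
    layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auroc")])

X = np.random.rand(200, 1, 768)  # placeholder embeddings
y = tf.keras.utils.to_categorical(np.random.randint(0, 2, 200), 2)
model.fit(X, y, epochs=45, batch_size=32, verbose=0)
\end{verbatim}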
\section{Results and Discussion}
\subsection{Summary of Results}
We achieve a significant improvement from non-transformer baseline models and comparable results to transformer baselines using more lightweight and interpretable models. We are able to augment memes with the most important features, determined by the model, for easier human classification.
\begin{figure}[h]
\centering
\subfloat[\centering Loss Function]{{\includegraphics[width=4cm]{model-loss.png} }}
\subfloat[\centering auROC curve]{{\includegraphics[width=4cm]{roc-curves.png} }}
\caption{Plots from the LSTM Model.}
\label{fig:auroc_curves}
\end{figure}
\subsection{Discussion}
\subsubsection{GBDT feature importances}
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth]{feature-importances.png}
\caption{Feature importances of top engineered features.}
\label{fig:gbtree}
\end{figure}
The gradient-boosted decision tree yields 494 predictive features, ranked by the feature\_importances\_ property, with a precision of 0.53 and a recall of 0.58. Top features include certain tf-idf embedded words and text emotion; named entities such as Hitler and ethnic groups such as ``asians'' or ``mexicans'' also rank highly. Top named-entity and overall features are included in \autoref{table:topfeats}.
\begin{table}[h]
\caption{Top features ranked from gradient-boosted tree.}
\label{table:topfeats}
\begin{tabular}{llll}
\toprule
\multicolumn{2}{c}{Named Ents.} & \multicolumn{2}{c}{Total Features} \\
\textbf { Name } & \textbf { Score } & \textbf { Name } & \textbf { Score }\\
\midrule
ent\_jews\_norp & $3.4 \times 10^{-2}$ & text hate wds. & $2.3\times 10^{-1}$\\
ent\_muslim\_norp & $3.0 \times 10^{-2}$ & club & $1.1\times 10^{-1}$ \\
ent\_hitler\_person & $2.6\times 10^{-2}$ & isis & $6.4\times 10^{-2}$ \\
ent\_mexicans\_norp & $2.5 \times 10^{-2}$ & jews & $5.3\times 10^{-2}$ \\
ent\_islamic\_norp & $2.4 \times 10^{-2}$ & muslims & $5.1\times 10^{-2}$\\
ent\_asians\_norp & $2.3\times 10^{-2}$ & teacher & $4.3\times 10^{-2}$\\
\bottomrule
\end{tabular}
\end{table}
\subsubsection{GBDT vs. recurrent network}
Gradient-boosted decision tree-based models provide the advantages of faster computation time and a ranking of the most important features for meme detection, providing computational insight into meme rhetoric. They are also effective at including engineered features, such as text sentiment, which would be meaningless to LSTMs, as the latter are designed for sequence data. Recurrent networks, in turn, learn from DistilBERT features more effectively, having greater computational capacity and achieving overall better performance than gradient-boosted decision trees.
\subsubsection{Confounders}
The model is able to correctly identify not only simple hateful memes but also distinguish between hateful memes and nontrivial confounders, leveraging both modalities for classification. We present one such correctly classified image confounder and hateful meme pair.
\begin{table}[H]
\caption{Confounder/hateful meme comparison. Images below are a compilation of assets, including ©Getty Images.}
\begin{tabular}{cp{3cm}p{3cm}}
\toprule
Label & Hateful meme & Image confounder \\
\midrule
Meme &
\raisebox{-0.5\totalheight}{\includegraphics[width=0.15\textwidth]{hateful-example.png}} & \raisebox{-0.5\totalheight}{\includegraphics[width=0.15\textwidth]{image-confounder-example.png}} \\
\midrule
Caption & ``a group of people playing on a beach'' & ``a black and white photo of a monkey'' \\
Pred. Label & 1 (hateful) & 0 (not hateful)\\
Label & 1 (hateful) & 0 (not hateful)\\
\bottomrule
\end{tabular}
\end{table}
\subsubsection{Analysis of difficult-to-classify memes}
We give examples of correctly and incorrectly classified hateful memes from the dataset.
\begin{figure}[h]
\centering
\subfloat[\centering Correctly classified hateful meme]{{\includegraphics[width=2.5cm]{hateful-correct.png} }}
\subfloat[\centering Misclassified hateful meme]{{\includegraphics[width=2cm]{hateful-misclassified.png} }}
\caption{Classified hateful memes from test set. Images above are a compilation of assets, including ©Getty Images.}
\label{fig:classified_memes}
\end{figure}
The first meme, despite nonhateful individual modalities, is flagged by the model by leveraging the image to gain the context needed to understand the text. The misclassified meme contains several words unfamiliar to models and most humans, such as ``SAif'' and ``btz.'' To classify similarly niche samples, knowledge pulled from repositories such as Know Your Meme \cite{kym} could give humans better context.
\subsection{Conclusion and Extensions}
We develop a lightweight multimodal model that classifies memes with performance comparable to transformer baselines. Since even humans achieve low auROCs, our approach, rather than aiming to replace humans with end-to-end models, flags hateful memes and pinpoints relevant engineered features to improve human classification. Further extensions include measuring human performance on classifying augmented memes and developing features based on metadata such as shares and user post history.
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
In his classic treatise on sediment transport, Hans Albert Einstein (HAE) presented a definition of suspended sediments (SS) and the role of turbulence in maintaining suspension as follows \cite{einstein1950bed}:
{\it "The characteristic definition of a suspended solid particle is that its weight is supported by the surrounding fluid during its entire motion. While being moved by the fluid, the solid particle, which is heavier than the fluid, tends to settle in the surrounding fluid. If the fluid flow has only horizontal velocities, it is impossible to explain how any sediment particle can be permanently suspended. Only if the irregular motion of the fluid particles, called turbulence, is introduced can one show that sediment may be permanently suspended}."
This operational definition is now standard in textbooks and research articles alike \cite{dey2014fluvial, green2014review, dey2020fluvial}. Despite some 80 years of research, the dominant factors controlling suspended sediment concentration (SSC) in streams continue to draw interest due to its multiple connections to ecosystem benefits and water quality degradation issues \cite{muste2005two, long2013remote, nazeer2014heavy, dai2016decline, huai2019predicting, huai2020analytical, tsengtwo}. High SSC can intercept photosynthetically active radiation necessary for sustaining submerged aquatic plants in lakes and rivers. The presence of high SSC is also related to eutrophication and corollary water quality issues \cite{yujun2008sediment, kellogg2014use}, clogging of gills of fish and other aquatic organisms, accelerating the denitrification process \cite{liu2013acceleration}. In certain cases, sediments provide necessary nutrients to aquatic plants and are of primary significance to sustaining nearshore ecosystems such as floodplains and marshes. Their role in element-cycling has been highlighted in several studies \cite{lupker2011rouse, mohtar2020incipient} as well. Another issue is the connection between SSC and micro/nano-plastics in saline environments. Recent work has shown that SS can promote polystyrene nano plastics settling in the presence of saline conditions, prompting further interest in SSC distribution in natural waters \cite{li2019interactions}.
Even in the most idealized flow condition with a balance between the gravitational settling flux and the vertical turbulent sediment flux, the description of SSC remains a recalcitrant problem. A model for the turbulent vertical flux is required and is often derived using Reynolds' analogy \cite{dey2014fluvial}, where eddies are assumed to transport momentum and SS similarly. This analogy was the cornerstone of the celebrated Rouse formula \cite{rouse1939analysis}, which assumes that sediment diffusivity is proportional to eddy viscosity. Since the early work of O'Brien \cite{obrien1933review} and of Prandtl and von K\'arm\'an \cite{vonKarman1934}, these analogies have spawned numerous theories and closure models for the mixing length \cite{vanoni1984fifty,nie2017vertical,bombardelli2009hierarchical,bombardelli2012exchange, dey2014fluvial}. However, these models make no explicit contact with turbulent eddies and their associated kinetic energy distribution in the vertical direction. It is precisely the scale-wise vertical turbulent kinetic energy component that maintains sediments in suspension \cite{scully2003influence, mazumder2006velocity, dey2014fluvial}, as noted by HAE.
The turbulent vertical flux of SS is directly modeled here from the spectrum of turbulent eddies thereby providing a new perspective on Reynold's analogy, the multiple length scales involved in describing SSC, and the emergence of Reynolds, Rouse, Schmidt, and Stokes numbers when linking eddy viscosity with eddy diffusivity for SS. The role of the Reynolds number has been introduced in prior studies as a damping correction to the mixing length \cite{van1956turbulent,wallin2000explicit,nezu2004turbulence} whereas the Rouse number is operationally used in the classification of sediment load. The proposed approach uses a co-spectral budget model (CSB) derived from an approximated Navier-Stokes equation in spectral form for the Reynolds stress and SS turbulent flux. It uses a spectral Rotta scheme modified to include the isotropization of the production term for the pressure decorrelation effect \cite{katul2013co} and a Schmidt number effect similar in form to van Rijin's bulk formulation \cite{rijn1984sediment} for linking the fluid and particle velocity decorrelation time scales, explicitly made here scale-dependent. The newly proposed formulation and a simplified solution derived from it are tested with several published experiments that span a wide range of flow conditions and grain properties (diameter and density). A comparison against the widely-used Rouse formula is featured and discussed.
\section{Theory}
\subsection{Definitions and General Considerations}
As a starting point to review models for SSC profiles in streams, a prismatic rectangular channel with constant width $B$ and bed slope $S_o$ is considered. The flow is assumed to be steady and uniform with constant water depth $H$ and flow rate $Q$. For small slopes, a balance between gravitational and frictional forces for a length segment $\Delta x$ along the flow direction $x$ yields
\begin{linenomath*}
\begin{equation}
\rho (B H \Delta x) g S_o = 2 \tau_s (H \Delta x) + \tau_o (B \Delta x),
\label{eq:forcebalance1}
\end{equation}
\end{linenomath*}
where $\tau_s$ is the side stress, $\tau_o$ is the bed stress, $g$ is the gravitational acceleration, and $\rho$ is the fluid density. This expression can be re-arranged as
\begin{linenomath*}
\begin{equation}
u_*^2=\frac{\tau_o}{\rho} ={g H S_o}\left(1+ \frac{2 H}{B} \frac{\tau_s}{\tau_o}\right)^{-1},
\label{eq:forcebalance2}
\end{equation}
\end{linenomath*}
where $u_*$ is the friction (or shear) velocity. For the case where $\tau_s=\tau_o$, $u_*^2=g R_h S_o$, with $R_h=H (1+2 H/B)^{-1}$ being the hydraulic radius. However, in many SS laboratory experiments, the channel bed is covered with sediments whereas the channel sides remain smooth to permit optical access. This difference in roughness between sides and bed leads to $\tau_s/\tau_o \ll 1$. This assumption, combined with $H/B\le1$ (usually selected to minimize secondary circulation), results in $\tau_o/\rho \approx g H S_o $. This approximation is adopted throughout. Fully turbulent flow conditions are also assumed to prevail so that the bulk Reynolds number $Re_b=U_b H/\nu>500$, where $\nu$ is the kinematic viscosity and $U_b$ is the bulk or depth-averaged velocity given as
\begin{linenomath*}
\begin{equation}
\label{eq:Ubulk_def}
U_b=\frac{Q}{B H} \approx \frac{1}{H} \int_0^H\overline{u}(z)dz,
\end{equation}
\end{linenomath*}
where $\overline{u}(z)$ is the mean velocity at vertical distance $z$ from the channel bed (positive upwards), and overline indicates ensemble-averaging usually determined from time averaging. For such a flow, the Reynolds-averaged mean continuity equation for SSC in steady and planar homogeneous flow at high $Re_b$ yields \cite{richter2018inertial}
\begin{linenomath*}
\begin{equation}
\frac{\partial \overline{C}(z)}{\partial t} =0=-\frac{\partial }{\partial z}\left[\overline{w'C'} - w_s\overline{C}-\Phi(z)\right],
\label{eq:fgov}
\end{equation}
\end{linenomath*}
where $t$ is time, $C=\overline{C}+C'$ is the instantaneous volumetric SSC in the flow, primed quantities are the fluctuating component, $w$ is the instantaneous vertical velocity component with $\overline{w}=0$ (assuming water is of constant $\rho$), $\overline{w'C'}$ is the turbulent vertical flux that requires a closure model, $w_s$ is the terminal velocity of sediment grains, and $\Phi(z)$ arises from particle inertia. In the regime where particle inertia is weak, to a leading approximation, $\Phi(z)$ is given by \cite{ferry2001fast,richter2018inertial}
\begin{linenomath*}
\begin{equation}
\Phi(z)=\tau_p \overline{\left[C \frac{D w'}{Dt} \right]}=\tau_p \overline {C} \frac{\partial \sigma_w^2}{\partial z}+\tau_p \overline{\left[C' \frac{D w'}{Dt} \right]},
\label{eq:Phip}
\end{equation}
\end{linenomath*}
where $\tau_p=w_s/g$ is a particle time scale, $\sigma_w^2=\overline{w'w'}$ is the vertical velocity variance at $z$ and $D(.)/Dt$ is the material derivative (local and advective) along a fluid particle trajectory. The $\Phi(z)$ is the sum of a turbophoretic effect that arises due to finite $\partial \sigma_w^2/\partial z$ in inhomogeneous flows such as channels \cite{reeks83,sardina2012wall,johnson20} and a turbulent concentration-vertical acceleration interaction terms. In equation \ref{eq:fgov}, the overall significance of $\Phi(z)$ at any $z$ depends on a local Stokes number $St(z)=\tau_p/\tau_{K}(z)$ where $\tau_K(z)=[\nu/ \epsilon(z)]^{1/2}$ is the Kolmogorov time scale formed by the local turbulent kinetic energy dissipation rate $\epsilon(z)$ and $\nu$ as reviewed elsewhere \cite{bragg2021mechanisms}. An associated length scale to $\tau_K$ is $\eta=(\nu^3/\epsilon)^{1/4}$, which is the Kolmogorov micro-scale representing eddy sizes impacted by viscous effects at $z$. Upon defining the Kolmogorov velocity as $v_k=\eta/\tau_K$, the Kolmogorov micro-scale Reynolds number $Re_k=v_k \eta/\nu=1$, meaning that both turbulence and viscous effects are equally important at scales commensurate to $\eta$ \cite{tennekes2018first}. In the limit $St\to 0$, the particle vertical velocity is given by the sum of the local vertical fluid velocity minus $w_s$, and $\Phi(z)$ can be ignored relative to the turbulent flux at $z$, an assumption routinely invoked in operational models for SSC. To allow for a 'bulk' Stokes number $St_b$ to be formulated, thereby facilitating comparisons across experiments, $\tau_{K,b}=(\nu/ \epsilon_b)^{1/2}$ is proposed where $\epsilon_b$ is the over-all bulk dissipation rate in clear water. Thermodynamic considerations require that the work per unit mass per unit time to move clear water at $U_b$ is $(g S_o) U_b$. For steady-state conditions (i.e. turbulent kinetic energy is stationary), this mechanical work produces turbulence that is then dissipated by the action of viscosity leading to an increase in the internal energy of the fluid. Hence,
\begin{linenomath*}
\begin{equation}
\epsilon_b=(g S_o) U_b;~~~ \tau_{K,b}=\sqrt{\frac{\nu}{\epsilon_b}}; ~~~ {\rm and} ~~~ St_b=\left(\frac{w_s}{g}\right)\tau_{K,b}^{-1}.
\label{eq:St_b}
\end{equation}
\end{linenomath*}
It is assumed that $\Phi$ is small and can be ignored when $St_b\ll 1$ (although, more precisely, $\Phi$ can only be ignored when $\max[St_b,St]\ll1$). Another estimate of the bulk Stokes number is $St_+=\tau_p (u_*/H)$ \cite{greimann1999two,greimann2001two}, where $(H/u_*)$ is presumed to represent an outer-layer eddy turnover time. Noting that $g S_o=u_*^2/H$, the two bulk Stokes numbers can be related using $St_b=St_+ (Re_b)^{1/2}$. Critiques of using $St_+$ as a bulk Stokes number measure have been discussed elsewhere \cite{greimann1999two,richter2018inertial}.
With regards to the terminal sediment velocity, a simplified expression for $w_s$ that recovers many prior formulae \cite{tan2018rui, huai2020analytical} is used here and is given by \cite{cheng1997simplified}
\begin{linenomath*}
\begin{equation}
w_s=\frac{\nu}{d_s}\left[\sqrt{25+1.2d_s^2\left(\frac{\rho_s-\rho}{\rho}\frac{g}{\nu^2}\right)^{2/3}}-5\right]^{3/2},
\label{eq:ws}
\end{equation}
\end{linenomath*}
where $\rho_s$ is the sediment grain density (with $\rho_s/\rho>1$), and $d_s$ is the sediment grain diameter. This $w_s$ is smaller than the Stokes settling velocity ($w_{st}$)
\begin{linenomath*}
\begin{equation}
w_{st}=\frac{1}{18} \frac{g}{\nu}\left(\frac{\rho_s-\rho}{\rho}\right) d_s^2 ,
\label{eq:ws_stokes}
\end{equation}
\end{linenomath*}
\noindent except when $w_{st} d_s/\nu \ll 1$. The comparison between the two settling velocities is shown in Figure \ref{fig:ws} for reference.
\begin{figure} [ht]
\centerline{\includegraphics[angle=0,width=0.63\linewidth]{f1.eps}}
\caption{Comparison between the empirical sediment settling velocity $w_s$ used here and the Stokes settling velocity $w_{st}$ for different sediment to fluid density ratios. The one-to-one line is shown. The comparison between $w_s$ and $w_{st}$ for the data sets explored here is also featured as inset. }
\label{fig:ws}
\end{figure}
Since $w_{st}$ only applies to creeping flow past a sphere, equation \ref{eq:ws} is used as it covers a wider range of $w_s d_s/\nu$.
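As a numerical illustration of equations \ref{eq:ws}, \ref{eq:ws_stokes}, and \ref{eq:St_b} (a sketch with illustrative parameter values only):
\begin{verbatim}
# Sketch: settling velocities and bulk Stokes number (SI units).
import numpy as np

g, nu, rho = 9.81, 1.0e-6, 1000.0   # water at ~20 C

def w_s(d_s, rho_s):
    # Settling velocity of equation (ws).
    dstar2 = d_s**2 * ((rho_s - rho) / rho * g / nu**2) ** (2.0 / 3.0)
    return (nu / d_s) * (np.sqrt(25.0 + 1.2 * dstar2) - 5.0) ** 1.5

def w_st(d_s, rho_s):
    # Stokes settling velocity of equation (ws_stokes).
    return g / (18.0 * nu) * (rho_s - rho) / rho * d_s**2

d_s, rho_s = 2.0e-4, 2650.0         # 200-micron quartz sand
S_o, U_b = 1.0e-3, 0.5              # bed slope, bulk velocity (m/s)
eps_b = g * S_o * U_b               # bulk TKE dissipation rate
St_b = (w_s(d_s, rho_s) / g) / np.sqrt(nu / eps_b)
print(w_s(d_s, rho_s), w_st(d_s, rho_s), St_b)
\end{verbatim}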
The mode of sediment transport is operationally related to $w_s$ and some measure of the strength of turbulence based on bulk flow properties. One such measure is the Rouse number $R$ or 'unit' Rouse number $R^*$ given by
\begin{linenomath*}
\begin{equation}
R^*= \frac{1}{\kappa}\frac{w_s}{u_*}; \quad R=\frac{1}{\beta}R^*;
\label{eq:RN}
\end{equation}
\end{linenomath*}
where $\kappa=0.41$ is the von K\'arm\'an constant and $\beta=Sc^{-1}$ is an inverse turbulent Schmidt number ($Sc$). The Rouse number is routinely used for classifying sediment load: $R>2.5$ for bedload, $0.8<R<2.5$ for SS, and $R<0.8$ for washload. To solve for $\overline{C}$, models linking $\overline{w'C'}$ to $\overline{C}$ as well as estimates for $Sc$ (and $\Phi$, though this is ignored here) are required in equation \ref{eq:fgov}, and those models are briefly covered next.
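For illustration only, the classification above can be encoded in a few lines of Python; the threshold values follow the text and the input values are assumed.
\begin{verbatim}
# Minimal sketch (Python): Rouse-number classification of the mode of
# sediment transport; thresholds follow the text, inputs are assumed.
def transport_mode(w_s, u_star, kappa=0.41, beta=1.0):
    R = w_s / (beta * kappa * u_star)   # Rouse number, equation (RN)
    if R > 2.5:
        return R, "bedload"
    if R > 0.8:
        return R, "suspended load"
    return R, "washload"

print(transport_mode(w_s=0.02, u_star=0.05))  # ~ (0.98, 'suspended load')
\end{verbatim}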
\subsection{Conventional Formulations and Revisions}
Conventional approaches (including Rouse and O'Brien) for modeling SSC begin by ignoring $\Phi(z)$ and employing a gradient-diffusion approximation (or some non-Fickian revision to it) given as
\begin{linenomath*}
\begin{equation}
\overline{w'C'}=-D_s \frac{d\overline{C}}{dz},
\label{eq:graddiff1}
\end{equation}
\end{linenomath*}
where $D_s$ is the sediment turbulent diffusivity. To estimate $D_s(z)$, existing theories approximate $D_s(z)$ by $\nu_t/Sc$ or $\beta \nu_t$, where $\nu_t$ is the turbulent or eddy viscosity ($\nu_t/\nu \gg 1$). When the mixing length hypothesis is further invoked to model $\nu_t$ as a product of a characteristic length and velocity, it yields
\begin{linenomath*}
\begin{equation}
\nu_t=l_o \left(l_o\,\left|\frac{d\overline{u}}{dz}\right|\right),
\label{eq:nut_l}
\end{equation}
\end{linenomath*}
where $l_o$ is a generic mixing length to be externally supplied that can vary with $z$. Dimensional analysis and similarity theory give
\begin{linenomath*}
\begin{equation}
\frac{d\overline{u}}{dz} = \frac{\sqrt{-\overline{u'w'}(z)}}{l_o(z)},
\end{equation}
\end{linenomath*}
where $u'$ is the longitudinal velocity fluctuation, and $\overline{u'w'}$ is the momentum turbulent flux at height $z$ that can be estimated from the mean momentum balance using \cite{dey2014fluvial}
\begin{linenomath*}
\begin{equation}
\frac{-\overline{u'w'}(z)}{u_*^2}=\left (1-z_n\right),
\label{eq:MMB1}
\end{equation}
\end{linenomath*}
where $z_n=z/H$ is the normalized water depth. With this estimate of $\overline{u'w'}(z)$, it follows directly that
\begin{linenomath*}
\begin{equation}
\frac{d\overline{u}}{dz}=\frac{u_*}{l_o}\left(1-z_n\right)^{1/2}; \nu_t= u_* {l_o}\left(1-z_n\right)^{1/2};
D_s=\beta{u_* l_o}\left(1-z_n\right)^{1/2}.
\label{eq:MMB}
\end{equation}
\end{linenomath*}
These expressions ensure that as $z_n \rightarrow 1$, ${d\overline{u}}/{dz} \rightarrow 0$, $\nu_t \rightarrow 0$, and $D_s\rightarrow 0$. For $z_n\ll 1$ but $z^+>50$ (i.e. above the buffer layer) where $z^+=z u_*/\nu$ is a normalized distance in wall units \cite{pope2001turbulent} that can also be interpreted as a local Reynolds number ($Re_s$), $l_o$ is constrained by the channel bottom so that $l_o=\kappa z$. In this case, ${d\overline{u}}/{dz} \approx u_*/(\kappa z)$ and $\overline{u}(z)$ varies logarithmically with $z$, $\nu_t =\kappa z u_* $, and $D_s=\beta \kappa z u_*$ (i.e. linear in $z$). As $z_n \rightarrow 1$, the largest eddies are restricted by $H$ so that $l_o \propto H$ instead of $z$. Combining these two arguments using $l_o=\kappa z (1-z_n)^{1/2}$ yields the quadratic diffusivity profile reported in a number of stream flow studies \cite{fischer2013mixing} and direct numerical simulations (DNS) of stratified atmospheric flows on inclined planes \cite{giometto2017direct}. Assuming $\beta=Sc^{-1}=1$, the SSC profiles associated with the linear and quadratic $D_s(z)$ are
\begin{linenomath*}
\begin{eqnarray}
\frac{\overline{C}(z_n)}{\overline{C_b}}= \Bigg\{
\begin{array}{l l}
(\frac{z_n}{z_{n,b}})^{-R^*}, & \textrm{linear diffusivity}, ~~\textrm{Prandtl's power law} \\
(\frac{z_n}{1-z_n}\frac{1-z_{n,b}}{z_{n,b}})^{-R^*}, & \textrm{quadratic diffusivity}, ~~\textrm{Rouse's formula}
\end{array},
\label{eq:geS}
\end{eqnarray}
\end{linenomath*}
where $\overline{C_b}$ is a reference concentration at height $z_{n,b}=z_b/H$ and $R^*=R$ when setting $\beta=1$. The $R^*$ in equation \ref{eq:geS} is commonly replaced by a fitted $R$ (or $\beta$ is no longer unity) as discussed elsewhere \cite{muste2005two, dey2014fluvial}. The analysis using a fitted $R$ is termed here the 'fitted' Rouse formula. Other models for $l_o$ have been introduced but only two are singled out to illustrate differences in approaches to adjusting conventional formulations (usually for $\kappa z$): (i) $l_o=\kappa z V_n(z_n)$, where $V_n=1-\exp(-z^+/26)$ (labeled as the van Driest damping function); (ii) $l_o=\kappa z (1-z_n)^{m_1}$, where
\begin{linenomath*}
\begin{equation}
m_1=\frac{1}{2}\left[1+a_e \left(\frac{\overline{C}}{C_R}\right)\right],
\label{eq:loCz}
\end{equation}
\end{linenomath*}
$C_R$ is some reference concentration and $a_e$ is an empirical coefficient \cite{umeyaina1992vertical,mazumder2006velocity,castro2012karman}. In the second case, the mixing length is assumed to vary with SSC and recovers $l_o=\kappa z (1-z_n)^{1/2}$ only for clear water. However, in the presence of sediments, $m_1$ varies with $z_n$ (and $R$). In the first case, deviations from a linear mixing length are made to depend on $z^+$ (instead of $H$), which is appropriate in the viscous and buffer regions of smooth boundary layers. Another revision to equation \ref{eq:graddiff1} is to re-cast turbulent transport in fractional derivatives to emphasize its non-Fickian aspect \cite{nie2017vertical}. In this approach, the fractional order becomes a parameter that must be determined from experiments depending on how SS trajectories deviate from Brownian trajectories \cite{sun2020review}. In practice, the order of the fractional derivative is set as a 'free' parameter and must implicitly include the $Sc$ effect. This approach is not pursued further here.
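For reference, the two closed-form SSC profiles in equation \ref{eq:geS} can be evaluated with the short Python sketch below; the $z_{n,b}$ and $R^*$ values are assumed for illustration.
\begin{verbatim}
# Minimal sketch (Python): Prandtl power-law and Rouse SSC profiles
# of equation (geS) with beta = 1; z_nb and R_star are assumed.
import numpy as np

def prandtl_profile(z_n, z_nb, R_star):
    return (z_n / z_nb) ** (-R_star)

def rouse_profile(z_n, z_nb, R_star):
    return ((z_n / (1.0 - z_n)) * ((1.0 - z_nb) / z_nb)) ** (-R_star)

z_n = np.linspace(0.05, 0.95, 10)
print(prandtl_profile(z_n, z_nb=0.01, R_star=1.2))
print(rouse_profile(z_n, z_nb=0.01, R_star=1.2))
\end{verbatim}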
\subsection{Turbulent Stress and SS Flux Budgets}
Simplified turbulent stress and SS flux budgets are now considered. For a stationary and planar homogeneous flow in the absence of subsidence ($\overline{w}=0$), these budgets reduce to
\begin{linenomath*}
\begin{eqnarray}
\frac{\partial{\overline{w'u'}}}{\partial t} =0 &=& -\overline{w'w'}\frac{\partial \overline{u}}{\partial z}-\frac{\partial{\overline{w'w'u'}}}{\partial z}+\overline{p'\frac{\partial u'}{\partial z}}-\epsilon_{wu}, ~~\\ \nonumber
\frac{\partial{\overline{w'C'}}}{\partial t}=0&=&-\overline{w'w'}\frac{\partial \overline{C}}{\partial z}-\frac{\partial{\overline{w'w'C'}}}{\partial z}+\overline{p'\frac{\partial C'}{\partial z}}-\epsilon_{wc} -w_s\overline{ \left(w' \frac{\partial C'}{\partial z} \right)},
\label{eq:stressfluxbudget}
\end{eqnarray}
\end{linenomath*}
where $p'$ is the turbulent pressure, and $\epsilon_{wu}$ and $\epsilon_{wc}$ are molecular destruction terms assumed to be small when compared to the pressure-decorrelation terms at high Reynolds numbers \cite{katul2013co}. The turbulence-particle interaction term requires closure that may be achieved by commencing with a local decomposition given by,
\begin{linenomath*}
\begin{eqnarray}
w_s\overline{\left(w' \frac{\partial C'}{\partial z} \right)}
=w_s\left[\left(\frac{\partial \overline{w' C'}}{\partial z} \right)-
\left(\overline{C'\frac{\partial w'}{\partial z}} \right)\right].
\label{eq:stressfluxbudget1}
\end{eqnarray}
\end{linenomath*}
When assuming $\Phi(z)=0$ in equation \ref{eq:fgov} (i.e. no particle inertia), $\overline{w'C'}=w_s \overline{C}$ thereby allowing one of the two terms in the difference shown in equation \ref{eq:stressfluxbudget1} to be linked to variables that are explicitly modeled. The other term (i.e. $\overline{C'{\partial w'}/{\partial z}}$) still necessitates a closure. A heuristic model that maintains maximum simplicity is to set
\begin{linenomath*}
\begin{equation}
\overline{C'\frac{\partial w'}{\partial z}}=b_1 \frac{{\partial \overline{w' C'}}}{{\partial z}},
\label{closure_1_RANS}
\end{equation}
\end{linenomath*}
where $b_1$ is a positive or a negative constant. Upon setting $\overline{w'C'}=w_s \overline{C}$, this heuristic closure model yields \cite{huang2014particle},
\begin{linenomath*}
\begin{eqnarray}
w_s\overline{\left(w' \frac{\partial C'}{\partial z} \right)}
= w_s\left[\left(\frac{\partial w_s \overline{C}}{\partial z} \right)-b_1 \left(\frac{\partial w_s \overline{C}}{\partial z} \right)
\right]= \alpha' w_s^2 \frac{\partial \overline{C}}{\partial z},
\label{eq:stressfluxbudget3}
\end{eqnarray}
\end{linenomath*}
where $\alpha'=1-b_1$ is a constant. When $|b_1|\ll 1$, $\alpha'\approx 1$ and
\begin{linenomath*}
\begin{eqnarray}
w_s\overline{\left(w' \frac{\partial C'}{\partial z} \right)}
=w_s^2\frac{\partial \overline{C} }{\partial z} .
\label{eq:stressfluxbudget4}
\end{eqnarray}
\end{linenomath*}
Whether $b_1$ or $\alpha'$ are strictly closure constants independent of sediment and/or flow conditions cannot be a priori ascertained. To do so requires another scaling analysis based on different assumptions and approximations. In this proposed scaling analysis, $C'$ is assumed to vary with a turbulent quantity such as $\sigma_c$, and $w'$ to vary with $\sigma_w$. Hence,
\begin{linenomath*}
\begin{eqnarray}
\overline{\left(C' \frac{\partial w'}{\partial z} \right)}
=A_F \left[\sigma_c(z)\right] \frac{\partial \sigma_w}{\partial z} =A_F \left[\frac{\overline{w'C'}}{u_*}F_1 \left(z_n\right)\right]\frac{\partial \sigma_w}{\partial z},
\label{eq:close1}
\end{eqnarray}
\end{linenomath*}
where $A_F$ is a flux-variance \cite{albertson1995sensible} similarity constant that can be positive or negative depending on the sign of the correlation coefficient between $C'$ and $\partial w'/\partial z$, and $F_1(z_n)$ is an unknown dimensionless function describing the sediment concentration variance with $z_n$ above and beyond the $\overline{w'C'}$ variations with $z_n$. Since the goal is to determine the minimum governing variables impacting $b_1$ or $\alpha'$ while assuming $b_1$ is independent of $z_n$, equations \ref{eq:close1} and \ref{closure_1_RANS} can be equated to yield
\begin{linenomath*}
\begin{eqnarray}
A_F \left[\frac{\overline{w'C'}(z_n)}{u_*}F_1 \left(z_n\right)\right]\frac{\partial \sigma_w}{\partial z}=b_1 \frac{{\partial \overline{w' C'}(z_n)}}{{\partial z}}.
\label{eq:close1a}
\end{eqnarray}
\end{linenomath*}
Re-arranging to infer $b_1$ results in
\begin{linenomath*}
\begin{eqnarray}
b_1= A_F \left[\overline{w'C'}\left(\frac{\partial \overline{w' C'}}{\partial z}\right)^{-1}\right] \left[\frac{1}{u_*}F_1 \left(z_n\right)\frac{\partial \sigma_w}{\partial z}\right].
\label{eq:close1b}
\end{eqnarray}
\end{linenomath*}
With the assumption that $b_1$ is not dependent on $z_n$, additional order of magnitude arguments must now be invoked to assess the sediment/flow variables that impact its magnitude: (i) $\partial \sigma_w/\partial z\sim -u_*/H$ (likely valid except near the channel bottom), (ii) $F_1$ is roughly a constant, (iii) $\partial \overline{w' C'}/{\partial z} = w_s \partial \overline{C}/\partial z$, and (iv) $\overline{w'C'}/(\partial \overline{C}/\partial z)\sim - D_{s,avg}$ where $D_{s,avg}=(1/H)\int_0^H D_s(z)dz \sim u_* H$. Inserting these order of magnitude arguments into equation \ref{eq:close1b} results in
\begin{linenomath*}
\begin{eqnarray}
b_1 \sim \mathrm{sgn}(A_F) \frac{u_* H}{w_s} \left[\frac{1}{u_*} \frac{u_*}{H}\right] \sim \mathrm{sgn}(A_F) \frac{u_*}{w_s}.
\label{eq:close1c}
\end{eqnarray}
\end{linenomath*}
Equation \ref{eq:stressfluxbudget1} is used to suggest a pragmatic closure in equation \ref{eq:stressfluxbudget3} that applies to only one of the two terms, and this term is itself only one contribution to the overall flux budget. Given the interplay between these multiple terms, the overall model results for $\overline{C}$ may be robust to uncertainties in this closure when compared to externally imposing $Sc$ or $\beta^{-1}$ directly on the eddy diffusivity as is common in prior models.
Ignoring the flux transport terms (triple moments) and closing the pressure decorrelation terms using a linear Rotta scheme that accounts for the isotropization of the production yields
\begin{linenomath*}
\begin{eqnarray}
-(1-C_I) \sigma_{w}^2\frac{\partial \overline{u}}{\partial z}-A_R \frac{\overline{w'u'}}{\tau}=0 , \quad
\left[ -(1-C_I) -
\alpha' \frac{ w_s^2}{\sigma_{w}^2} \right]
\sigma_{w}^2\frac{\partial \overline{C}}{\partial z}-A_R \frac{\overline{w'C'}}{\tau}=0,
\label{eq:stressfluxbudget2}
\end{eqnarray}
\end{linenomath*}
where $\tau$ is a turbulent relaxation time scale, $C_I=3/5$ is the isotropization of the production constant determined from rapid distortion theory \cite{pope2001turbulent}, and $A_R=1.8$ \cite{katul2013co,katul2014cospectral} is the Rotta constant assumed to be the same for momentum and SS. It directly follows from these simplified budgets that a model of maximum simplicity for $Sc$ may be derived as
\begin{linenomath*}
\begin{equation}
Sc^{-1}(z_n)=\frac{D_s}{\nu_t}= 1 + \alpha \left(\frac{ w_s}{\sigma_{w}}\right)^2,
\label{eq:wc1}
\end{equation}
\end{linenomath*}
where $\alpha=\alpha'/(1-C_I)$, though $\alpha'$ or $b_1$ can vary themselves with $u_*/w_s$ as noted earlier. It is necessary to point out that when $\alpha\geq 0$, equation \ref{eq:wc1} is opposite to what is predicted by the so-called 'crossing-trajectories' effect for heavy particles settling in a turbulent flow. The crossing trajectories arise when particle trajectories cross trajectories of fluid elements under the influence of gravity. This effect invariably forces particles to move from a region of highly correlated flow to another less correlated region \cite{wells1983effects}. In this manner, particles lose velocity correlation more rapidly than the corresponding fluid points and thus must disperse less. Thus, the crossing trajectories effect requires $Sc>1$ \cite{csanady1963turbulent, duman2016dissipation}.
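As a quick numerical illustration of equation \ref{eq:wc1} (not a calibrated result), the sketch below evaluates $Sc^{-1}$ for assumed $w_s$, $\sigma_w$, and $\alpha$ values.
\begin{verbatim}
# Minimal sketch (Python): the Sc closure of equation (wc1);
# alpha, w_s, and sigma_w below are assumed for illustration.
def Sc_inverse(w_s, sigma_w, alpha):
    return 1.0 + alpha * (w_s / sigma_w) ** 2

# heavier particles (larger w_s/sigma_w) increase Sc^{-1} when alpha > 0
for ratio in (0.1, 0.5, 1.0):
    print(ratio, Sc_inverse(w_s=ratio, sigma_w=1.0, alpha=1.0))
\end{verbatim}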
\subsection{The Co-spectral Budget Model}
The models so far make no explicit contact with the phenomenon they purport to represent: turbulent eddies and their energy distribution. The proposed approach here uses a co-spectral budget (CSB) model to achieve such a link. The CSB is derived from an approximated Navier-Stokes equation in a spectral form that links turbulent eddies of different sizes to $\overline{w'C'}$. The CSB derivation commences by noting that $\overline{w'C'}$ and $\overline{u'w'}$ both satisfy the normalizing properties,
\begin{linenomath*}
\begin{equation}
-\overline{w'C'} =\int_{0}^{\infty}\phi_{wc}(k)dk, ~~ -\overline{u'w'} =\int_{0}^{\infty}\phi_{wu}(k)dk,
\label{eq:wc}
\end{equation}
\end{linenomath*}
where $\phi_{wc}(k)$ and $\phi_{wu}(k)$ are the co-spectral density functions of the turbulent vertical velocity-turbulent sediment concentration and turbulent vertical-longitudinal velocities, respectively, and $k$ is the wavenumber or inverse eddy size. The co-spectral budgets associated with equation \ref{eq:stressfluxbudget} have been derived elsewhere and simplify to \cite{bos2004behavior, cava2012scaling,katul2013co,katul2014cospectral},
\begin{linenomath*}
\begin{eqnarray}
\frac{\partial}{\partial t} \phi_{wu}(k)=0 =P_{wu}(k)+T_{wu}(k)+\pi_{wu}(k)-2\nu k^2\phi_{wu}(k),\\
\frac{\partial}{\partial t} \phi_{wc}(k)=0 =P_{wc}(k)+T_{wc}(k)+\pi_{wc}(k)-\nu(1+Sc_m^{-1})k^2\phi_{wc}(k),
\label{eq:vscg}
\end{eqnarray}
\end{linenomath*}
where $P_{wu}(k)=({d\overline{u}}/{dz}) E_{ww}(k)$ and $P_{wc}(k)=({d\overline{C}}/{dz}) E_{ww}(k)$ are the stress and flux production terms at $k$, $E_{ww}(k)$ is the vertical velocity spectrum satisfying the normalizing relation $\sigma_w^2 = \int_0^{\infty} E_{ww}(k)dk$, $T_{wu}(k)$ and $T_{wc}(k)$ are turbulent transfer terms, $\pi_{wu}(k)$ and $\pi_{wc}(k)$ are pressure-velocity and pressure-scalar decorrelation terms, and $Sc_m$ is the molecular Schmidt number (not related to $Sc$). Invoking a spectral-based Rotta model that includes the isotropization of the production as before, the pressure-velocity and pressure-scalar co-variances in $k$-space can be modeled as
\begin{linenomath*}
\begin{eqnarray}
\pi_{wu}(k)=-A_R\frac{1}{t_{ww}(k)}\phi_{wu}(k)-C_IP_{wu}(k)
,\quad
\pi_{wc}(k)=-A_R\frac{1}{t_r(k)}\phi_{wc}(k)-C_IP_{wc}(k),
\label{eq:pi}
\end{eqnarray}
\end{linenomath*}
where $A_R\approx1.8$ and $C_I=3/5$ are as before, and $t_{ww}(k)$ and $t_r(k)$ are the decorrelation time scales of the turbulent stress and particle concentration, respectively. A model of maximum simplicity is to assume that these two wavenumber-dependent time scales are related using a wavenumber-dependent $Sc(k)$ given by,
\begin{linenomath*}
\begin{equation}
t_r(k)={t_{ww}(k)}{Sc^{-1}(k)}, ~~\textrm{with}~
{Sc^{-1}(k)}={1+\alpha{(w_s\,k\,t_{wc})^2 }},
\label{eq:tm}
\end{equation}
\end{linenomath*}
where $t_{wc}=\min(t_{ww}, f_o~\tau_{K,b})$ with $f_o$ being a constant (a plausibility argument for such a $t_{wc}(k)$ representation is discussed later), and $Sc$ is modeled in analogy to equation \ref{eq:wc1} albeit in a spectral form, e.g. the local characteristic turbulent velocity is estimated by $(kt_{wc})^{-1}$ using a one-way coupling approach \cite{elghobashi1994predicting}. The time scale $t_{ww}(k)\propto \epsilon^{-1/3} k^{-2/3}$ is interpreted as a characteristic time scale derived from dimensional analysis assuming $\epsilon$, the turbulent kinetic energy dissipation rate, is the conserved quantity across the energy cascade of $E_{ww}(k)$. One plausible choice for the proportionality constant is $C_o^{-1/2}$ so as to recover a Kolmogorov time scale in the inertial subrange, where $C_o=0.65$ is the Kolmogorov constant for the vertical velocity component.
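A minimal Python sketch of equation \ref{eq:tm}, with all inputs assumed for illustration, reads:
\begin{verbatim}
# Minimal sketch (Python): wavenumber-dependent time scales and Sc(k)
# of equation (tm); eps, w_s, tau_Kb, and alpha are assumed inputs.
import numpy as np

def Sc_inv_k(k, eps, w_s, tau_Kb, alpha=1.0, C_o=0.65, f_o=1000**0.5):
    t_ww = C_o ** -0.5 * eps ** (-1.0 / 3.0) * k ** (-2.0 / 3.0)
    t_wc = np.minimum(t_ww, f_o * tau_Kb)   # bounded particle time scale
    return 1.0 + alpha * (w_s * k * t_wc) ** 2

k = np.logspace(0, 4, 5)                    # wavenumbers (1/m)
print(Sc_inv_k(k, eps=1e-4, w_s=0.02, tau_Kb=0.01))
\end{verbatim}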
For scale-wise integration, it is also necessary to maintain a bounded $t_r(k)$ as $k \rightarrow 0$ for any $z_n$. We set $t_r(k)=t_r(k_c)$ when $k<k_c$, where $k_c$ is the wavenumber below which $E_{ww}(k)$ increases with increasing $k$. The viscous-destruction terms are negligible when compared to the Rotta terms for $k\eta\ll1$. Since $T_{wu}(k)$ and $T_{wc}(k)$ do not contribute to the net production or destruction of $\phi_{wu}(k)$ and $\phi_{wc}(k)$ but only redistribute them across scales (i.e. $\int_0^{\infty}T_{wu}(k)dk=0$, and $\int_0^{\infty}T_{wc}(k)dk=0$), they are ignored for simplicity \cite{bonetti2017manning}. Adopting these simplifications,
\begin{linenomath*}
\begin{eqnarray}
\phi_{uw}(k) =\left(\frac{1-C_I}{A_R}\right)\frac{d\overline{u}}{dz}\left[E_{ww}(k) t_{ww}(k)\right]
,\quad
\phi_{wc}(k) =\left(\frac{1-C_I}{A_R}\right)\frac{d\overline{C}}{dz}\left[E_{ww}(k) t_r(k)\right].
\label{eq:cvsc0}
\end{eqnarray}
\end{linenomath*}
To integrate these equations across $k$ and derive turbulent shear stress and sediment flux at any height $z_n$, an expression for $E_{ww}(k)$ is required. A model for $E_{ww}(k)$ that captures known spectral features at an arbitrary $z_n$ is shown in Figure \ref{fig:etke}.
\begin{figure} [ht]
\centerline{\includegraphics[angle=0,width=0.72\linewidth]{Ewwk.pdf}}
\caption{Left: A typical $E_{ww}(k)$ at $z_n$ from the channel bottom. The very low wavenumber range is assumed to follow the Saffman spectrum ($E_{ww}(k)$ $\propto$ $k^2$) until $k_c=1/H$. The Saffman spectrum is then connected using a flat transition (i.e. wall effects introduce energy splashing) to the inertial subrange at $k_o$ $\propto$ $1/z$ where $E_{ww}(k)$ $\propto$ $k^{-5/3}$. The black curves are extracted from measurements \cite{nikora2002fluctuations} with different flow conditions using ADV and do not resolve the viscous dissipation range in the vicinity of $k_e=1/\eta$ or the presumed Saffman spectrum. Right: The $\sigma_w^2/u_*^2$ profile modeled from scale-wise integration of $E_{ww}(k)$ and its simplified form (i.e. ignoring the Saffman contribution and extending the inertial subrange indefinitely to fine scales). The measured $\sigma_w^2/u_*^2$ profiles are from experiments described elsewhere \cite{Raupach81, nikora2002fluctuations, heisel2020velocity}. They include field experiments and wind-tunnel experiments over a wide range of roughness types and Reynolds number conditions. The direct numerical simulations (DNS) for a smooth channel (red) are also included for comparisons \cite{heisel2020velocity}. }
\label{fig:etke}
\end{figure}
The $E_{ww}(k)$ is now piece-wise approximated as
\begin{linenomath*}
\begin{equation}
\label{eq:ewweq}
E_{ww}(k)= \left\{ \begin{array}{l l}
E_{kol}(k_o)k_c^{-2}k^2, & \mathrm{if} ~0 \leq k \leq k_c\\
E_{kol}(k_o), & \mathrm{if} ~k_c \leq k \leq k_o\\
E_{kol}(k), & \mathrm{if} ~k_o \leq k \leq k_e
\end{array} \right.,
\end{equation}
\end{linenomath*}
where $k_c = H^{-1}$, $k_o=(\kappa z)^{-1}$ and $k_e=\eta^{-1}$ are three characteristic wavenumbers that mark the key transitions in $E_{ww}(k)$ between $H$ and the characteristic eddy scales bounding the inertial subrange \cite{bonetti2017manning, katul2013co,li2019cospectral, ayet2020scaling}, and $E_{kol}(k)=C_o\epsilon(z)^{2/3}k^{-5/3}$ is the Kolmogorov spectrum. In the case of $E_{kol}(k)$, the transfer of energy across scales shapes the energy cascade and is necessary for obtaining the $k^{-5/3}$ scaling. The transfer of stress across scales, as given by $T_{wu}(k)$, was ignored in the CSB model here. The inclusion of the transfer term in the energy cascade (indirectly specified by $E_{ww}(k)$) but not in the CSB may appear paradoxical. This is not so, as the role and significance of the transfer terms are quite different when analyzing scale-wise energy and stress budgets \cite{bos2004behavior}. In the inertial subrange where $E_{kol}(k)\sim k^{-5/3}$, a $\phi_{uw}(k) \sim k^{-7/3}$ has also been reported and confirmed in numerous boundary layer experiments and simulations of wall-bounded flows \cite{pope2001turbulent}. A balance between production and dissipation terms in the CSB model leads to $\phi_{uw}(k) \sim (d\overline{u}/dz) E_{ww}(k) t_{ww}(k)$, which recovers the $[(d\overline{u}/dz) \epsilon^{1/3}] k^{-7/3}$ scaling in the inertial subrange. Inclusion of $T_{wu}(k)$ necessarily leads to a $\phi_{uw}(k)$ that must deviate from a $k^{-7/3}$ scaling in the inertial subrange as discussed elsewhere \cite{li2015revisiting}. Moreover, the constant emerging from production balancing dissipation in the scale-wise CSB model for the inertial subrange, $[(1-C_I)/A_R] C_o^{1/2}=0.18$, does recover the accepted co-spectral similarity constant whose numerical value was determined at $0.15-0.16$ from wind tunnel studies, atmospheric surface layer studies, and DNS \cite{katul2013co}. For these reasons (i.e. $T_{wu}(k)$ can be ignored within the inertial subrange) and because $\int_0^{\infty} T_{wu}(k)dk=0$, $T_{wu}(k)$ is ignored at all $k$. This assumption is also compatible with ignoring the triple moments in equations \ref{eq:stressfluxbudget}.
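A short Python sketch of the piecewise $E_{ww}(k)$ in equation \ref{eq:ewweq} follows; the values of $\epsilon$, $z$, $H$, and $\nu$ are assumed for illustration.
\begin{verbatim}
# Minimal sketch (Python): piecewise E_ww(k) of equation (ewweq);
# eps, z, H, and nu are assumed, C_o = 0.65 as in the text.
import numpy as np

def E_ww(k, z, H, eps, nu, C_o=0.65, kappa=0.41):
    k_c, k_o = 1.0 / H, 1.0 / (kappa * z)
    k_e = (nu**3 / eps) ** -0.25            # k_e = 1/eta
    E_kol = lambda q: C_o * eps ** (2.0 / 3.0) * q ** (-5.0 / 3.0)
    k = np.asarray(k, dtype=float)
    out = np.where(k <= k_c, E_kol(k_o) * (k / k_c) ** 2,   # Saffman k^2
          np.where(k <= k_o, E_kol(k_o),                    # flat transition
                   E_kol(k)))                               # inertial k^-5/3
    return np.where(k > k_e, 0.0, out)                      # viscous cutoff

k = np.logspace(0.0, 4.0, 9)
print(E_ww(k, z=0.05, H=0.5, eps=1e-4, nu=1e-6))
\end{verbatim}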
The only remaining term needed to describe the magnitude of $E_{ww}(k)$ at all $k$ is $\epsilon(z)$. A model of maximum simplicity is to relate $\epsilon(z)$ to the mechanical production $P_{wu}(z)$ of the turbulent kinetic energy budget using \cite{pope2001turbulent}
\begin{linenomath*}
\begin{equation}
\epsilon (z)=\frac{P_{wu}(z)}{\phi(z_n)}
=\phi^{-1}(z_n)\left(-\overline{u'w'}\frac{d\overline{u}}{dz}\right)
=\phi^{-1}(z_n)u_*^2\left(1-\frac{z}{H}\right)\frac{d\overline{u}}{dz},
\label{eq:dissipation}
\end{equation}
\end{linenomath*}
where $\phi(z_n)$ is a modification function to account for the imbalance between the local mechanical production and local dissipation terms in the turbulent kinetic energy budget. For stationary and planar-homogeneous flow conditions without any mean vertical advection and in the absence of any transport terms, $\epsilon(z)\approx P_{wu}(z)$ and $\phi(z_n)\approx1$. While this estimate may be acceptable in the log-region describing $\overline{u}(z)$, deviations near the channel bottom ($\phi(z_n)>1$) and near the water surface ($\phi(z_n)<1$) are expected. Hence, $\phi(z_n)$ must be viewed as a depth-dependent function \cite{kim1987turbulence, pope2001turbulent} though its variation from unity is not considered here to maintain maximum simplicity. A plausibility argument for ignoring its variation from unity is that $\overline{w'C'} \propto \left[\phi(z_n)\right]^{-1/3}$ (shown later), which makes the SSC calculations less sensitive to $\phi(z_n)$ deviations from unity. This point is considered later in the context of modeling $\sigma_{w}^2(z_n)$ based on the assumed $E_{ww}(k)$ shape.
Returning to the choice of $t_{wc}=\min(t_{ww}, f_o~\tau_{K,b})$ and the choice of $f_o$: as $z_n\rightarrow1$, $\overline{w'u'}\rightarrow0$, $P_{wu}(z_n)\rightarrow0$, and thus $\epsilon\rightarrow0$ (i.e. no turbulence) near the free water surface. With $\epsilon\rightarrow0$, $t_{ww}(k)\rightarrow\infty$ (along with $\tau_K\rightarrow\infty$ and $\eta\rightarrow\infty$). That $t_{ww}(k)\rightarrow\infty$ is not problematic for the closure scheme of $\pi_{wu}(k)$ and $\pi_{wc}(k)$ as those terms are expected to decay near the free water surface and this decay remains compatible with $t_{ww}(k)\rightarrow\infty$. The problem of $\epsilon\rightarrow0$ arises in maintaining a finite $Sc^{-1}(k)$ dominated by turbulent processes, thereby necessitating a finite $\epsilon$ in the calculation of $Sc^{-1}(k)$ that cannot be readily inferred from $P_{wu}(z_n)$. To ensure that the particle interaction time scale $t_{wc}$ remains bounded in $Sc^{-1}(k)$, an ad hoc minimal value of $\epsilon$, set to $0.1\%$ of $\epsilon_b$, is proposed. This minimal $\epsilon$ prevents $\epsilon\rightarrow0$ as $z_n\rightarrow1$ in the $Sc(k)$ formulation only. This minimal threshold set to ensure a finite $\epsilon$ in $Sc(k)$ (mainly near the free water surface) leads to $f_o=\sqrt{1000}\approx 31$.
\section{Results and Discussion}
\subsection{Co-spectral Budget Model}
By scale-wise integrating $\phi_{uw}(k)$ and using $u_*^2(1-z_n)=\int_0^{k_e}\phi_{uw}(k)\,dk$, the velocity gradient $d\overline{u}/dz$ at $z$ is obtained as
\begin{linenomath*}
\begin{eqnarray}
\frac{d\overline{u}}{dz}=A_{\pi}^{-3/4}\phi^{1/4}(z_n)\left(1-\frac{z}{H}\right)^{1/2}
\left[
\frac{15}{4}-\frac{8}{3}\left(\frac{k_c}{k_o}\right)^{1/3}-\frac{3}{4}\left(\frac{k_o}{k_e}\right)^{4/3}
\right]^{-3/4}\left(k_o u_*\right),
\label{eq:sscb3}
\end{eqnarray}
\end{linenomath*}
where $A_{\pi}=({1-C_I})\sqrt{C_o}/{A_R}\approx0.18$, and the vertical velocity variance can be derived by scale-wise integrating $E_{ww}(k)$ as,
\begin{linenomath*}
\begin{eqnarray}
\frac{\sigma_{w}^2}{u_*^2}=
\frac{5}{2}C_oA_{\pi}^{-1/2}\phi^{-1/2}(z_n)
\left[1-\frac{4}{15}\frac{k_c}{k_o}
-\frac{3}{5}\left(\frac{k_e}{k_o}\right)^{-2/3}
\right]
\left[\frac{15}{4}-\frac{8}{3}\left(\frac{k_c}{k_o}\right)^{1/3}
-\frac{3}{4}\left(\frac{k_e}{k_o}\right)^{-4/3}
\right]^{-1/2}
(1-z_n).
\label{eq:sigw}
\end{eqnarray}
\end{linenomath*}
Likewise, the SSC turbulent flux is solved as
\begin{linenomath*}
\begin{eqnarray}
-\overline{w'C'}=A_{\pi}\phi^{-1/3}(z_n)\Omega(z)u_*^{2/3}
\left[\left(1-\frac{z}{H}\right)\frac{d\overline{u}}{dz}\right]^{1/3}\frac{d\overline{C}}{dz}
=-w_s\overline{C},
\label{eq:sscb1}
\end{eqnarray}
\end{linenomath*}
with $\Omega(z_n)$ given by
\begin{linenomath*}
\begin{eqnarray}
\Omega(z_n)=\int_{0}^{k_c}Sc^{-1}(k_c)k_c^{-8/3}k_o^{-5/3}k^{2}dk
+\int_{k_c}^{k_o}k_o^{-5/3}Sc^{-1}(k)k^{-2/3}dk+
\int_{k_o}^{k_e}Sc^{-1}(k)k^{-7/3}dk.
\label{eq:sscb11}
\end{eqnarray}
\end{linenomath*}
Therefore, the turbulent Schmidt number $Sc(z_n)$ can be determined from the CSB model as
\begin{linenomath*}
\begin{eqnarray}
Sc(z_n)=\frac{\nu_t}{D_s}=\Omega^{-1}(z_n)\left[
\frac{15}{4}-\frac{8}{3}\left(\frac{k_c}{k_o}\right)^{1/3}-\frac{3}{4}\left(\frac{k_o}{k_e}\right)^{4/3}
\right]k_o^{-4/3}.
\label{eq:sc}
\end{eqnarray}
\end{linenomath*}
Because the determination of $k_e=1/\eta$ (where $\eta=(\nu^3/\epsilon)^{1/4}$) requires an estimate of $\epsilon(z_n)=P_{wu}(z_n)$ and thus an estimate of $d\overline{u}/dz$, an iterative scheme is needed to determine $d\overline{u}/dz$ and $k_e$ at every $z_n$ from equation \ref{eq:sscb3}. Once determined, $E_{ww}(k)$, $Sc(z_n)$, $\overline{w'C'}$, and the subsequent SSC profile can be computed at each $z_n$ by solving equations \ref{eq:sigw}, \ref{eq:sscb1}, and \ref{eq:sc} for $\sigma_w^2$, $\overline{w'C'}$ and $Sc$. Since there is no analytical solution to this system, a numerical integration using a 3rd-order Adams--Bashforth method is employed.
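To make this concrete, a minimal Python sketch of the fixed-point iteration for $d\overline{u}/dz$ and $k_e$ at a single $z_n$ (with $\phi(z_n)=1$ and illustrative inputs) is given below; the full SSC solution would then step equations \ref{eq:sigw}--\ref{eq:sc} in $z_n$.
\begin{verbatim}
# Minimal sketch (Python): fixed-point iteration for du/dz and k_e
# from equation (sscb3) at one z_n; phi = 1 and inputs are assumed.
def dudz_and_ke(z, H, u_star, nu, kappa=0.41, A_pi=0.18, n_iter=50):
    z_n = z / H
    k_c, k_o = 1.0 / H, 1.0 / (kappa * z)
    eps = u_star**3 / (kappa * z)            # log-law initial guess
    for _ in range(n_iter):
        k_e = (nu**3 / eps) ** -0.25         # k_e = 1/eta
        bracket = (15.0 / 4.0
                   - (8.0 / 3.0) * (k_c / k_o) ** (1.0 / 3.0)
                   - (3.0 / 4.0) * (k_o / k_e) ** (4.0 / 3.0))
        dudz = (A_pi ** -0.75 * (1.0 - z_n) ** 0.5
                * bracket ** -0.75 * k_o * u_star)
        eps = u_star**2 * (1.0 - z_n) * dudz # production = dissipation
    return dudz, k_e

print(dudz_and_ke(z=0.05, H=0.5, u_star=0.03, nu=1e-6))
\end{verbatim}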
Before proceeding to the analysis of SSC, an assessment of the assumed shape of $E_{ww}(k)$, its transition wavenumbers, as well as the consequence of the assumption of $\phi(z_n)\approx 1$ is conducted in Figure \ref{fig:etke}. The predicted $\sigma_w^2/u_*^2$ and its simplified version using $E^1_{ww}(k)$ without the Saffman spectrum and assuming $k_e\rightarrow\infty$ are compared against two sets of experiments: (i) wind tunnel experiments conducted over a wide range of surface roughness types \cite{Raupach81} and (ii) field experiments \cite{nikora2002fluctuations} of the sediment flow in the Balmoral Irrigation Canal (New Zealand). The wind-tunnel experiments used a hot-wire probe whereas the field experiments used acoustic Doppler velocimetry (ADV) measurements that do not resolve the viscous dissipation regime. As expected, the predicted $\sigma_w^2/u_*^2$ here exceeds the measurements because the spectral shapes assumed in $E_{ww}(k)$ account for a much broader range of eddy sizes than the experiments interrogate. Specifically, the Saffman and dissipation ranges are not resolved by the flume experiments, whereas the wind tunnel experiments resolve a limited dissipation range but are not conducted over a sufficiently long sampling period to cover the Saffman spectrum. Nonetheless, the model recovers key features of the $(\sigma_w/u_*)^2$ profile: a rapid increase with $z_n$ near the channel bottom, a peak at $(\sigma_w/u_*)^2=1.9$, and a quasi-linear decline as $z_n\rightarrow1$. The peak $(\sigma_w/u_*)^2=1.9$ is compatible with near-neutral atmospheric surface layer measurements ($\approx 1.8$) where lateral confinements of the flow are absent (unlike flumes and wind tunnels) and where $H/\eta$ far exceeds those obtained in laboratory studies.
Comparisons of the CSB results, with $\alpha$ temporarily set as a 'free' parameter, against (i) Prandtl's power-law solution and (ii) Rouse's formula are shown in Figure \ref{fig:SSC_CSB}.
\begin{figure} [ht]
\centerline{\includegraphics[angle=0,width=1.2\linewidth]{CSB_sim.eps}}
\caption{The predicted SSC, $Sc$, and $D_s$ profiles based on the CSB model when setting $d_s$=1 mm, $\rho_s$=1.2 g cm$^{-3}$, $u_*$=3 cm s$^{-1}$, and $U_b/u_*$=$10$. The reference position is at $z_{n,b}$=$0.01$. Different $\alpha$ values and Reynolds numbers ($Re_*$=$u_*H/\nu$) are featured to illustrate overall sensitivity of the normalized SSC profile to these parameters. The Prandtl and Rouse model predictions of SSC are shown for reference in the top-left panel. The Reynolds number is varied by altering $\nu$.}
\label{fig:SSC_CSB}
\end{figure}
The computed SSC and $Sc$ profiles are also presented when the flow conditions and sediment properties are externally supplied. For Prandtl's power-law and Rouse's formula, the bulk Schmidt number was set to unity. However, the CSB model allows for a depth-dependent $Sc(z_n)$, which is set by $\alpha$. When $\alpha=0$, $Sc(z_n)=1$ in the entire channel, consistent with equation \ref{eq:wc1}. When $\alpha>0$, $Sc(z_n)$ varies with depth and is generally greater in the near-bed region and becomes smaller with increasing $z_n$. However, because of the imposition of a finite $\epsilon$ near the water surface ($=0.001\epsilon_b$), $Sc(z_n)$ increases back to near unity when $z_n \rightarrow 1$. Rouse's equation and the CSB model exhibit different behavior near the water surface: Rouse's equation yields a zero concentration at $z_n=1$ whereas the CSB model does not. One advantage of the CSB approach is its ability to resolve the dependence of $\overline{C}/\overline{C_b}$ on Reynolds number. Using different $\nu$, variations in $Re_*=u_*H/\nu$ can be generated and their effects on CSB model predictions tracked. Recall that $H/\eta_b$ (modeled in the CSB) scales as $Re_*^{3/4}$, and the effects of this scale separation on the shape of the vertical velocity spectrum, sediment flux co-spectrum, and the resulting $\overline{C}/\overline{C_b}$ profiles are explicitly determined. The effects of $\alpha$ are much more significant than the effects of $Re_*$, which is heuristically supportive of using Direct Numerical Simulation runs (lower $Re_*$) to further explore the CSB approach. As earlier noted, the implications of setting $t_{wc}=\min(t_{ww}, f_o ~\tau_{K,b})$ with $f_o=\sqrt{1000}$ are most visible on the $Sc(z_n)$ profile near the free water interface. Altering $f_o$ primarily modifies the thickness of the region near the water interface impacted by the imposed finite $t_{wc}$ (or finite $\epsilon$ in the $Sc(k)$ determination). However, the CSB model itself is not expected to be valid in this zone as the assumed shape of $E_{ww}(k)$ is not realistic, the flux transport terms can be finite, and turbophoretic effects may also be large in this vicinity. In sum, predictions from the CSB model near the free water surface must be treated with skepticism and caution.
\subsection{Recovery of the Rouse and Prandtl equations}
Whether a Rouse equation can be recovered from the CSB model under certain simplifications is now examined. Any explicit model must include $Sc$ and approximations to equation \ref{eq:sc}. Assuming $k_c/k_o \rightarrow 0$ and $k_o/k_e \rightarrow 0$ in $\Omega (z_n)$ only (i.e. setting the area under the Saffman spectrum to zero, which is then partially compensated for by extending the inertial subrange to $k_e\rightarrow\infty$), the Schmidt number derived from equation \ref{eq:sc} can be approximated as
\begin{linenomath*}
\begin{equation}
Sc^{-1} \approx
1+B_{\pi}\left(\frac{w_s}{u_*}\right)^2,
~\textrm{with}~
B_{\pi}= \frac{\sqrt{15A_{\pi}}}{3C_o}\alpha \approx 0.84 \alpha,
\label{eq:wc2}
\end{equation}
\end{linenomath*}
which directly recovers the quadratic model for $Sc^{-1}$ reported elsewhere \cite{rijn1984sediment, bombardelli2012exchange} as expected. With $R \approx 0$, equation \ref{eq:wc2} indicates $\beta=Sc^{-1}\rightarrow 1$, thereby recovering Rouse's original assumption (i.e. SS resemble passive scalars in this case). This estimate of $\beta$ also allows for the determination of the model coefficient $\alpha$ using a separate data set and model runs shown in Figure \ref{fig:alpha_beta}.
\begin{figure} [ht]
\centerline{\includegraphics[angle=0,width=0.87\linewidth]{f4.eps}}
\caption{The model coefficient $\beta$ based on different formulae, experiments, and model runs. The experiments and model runs presented here are described elsewhere \cite{jha2009two}.}
\label{fig:alpha_beta}
\end{figure}
Figure \ref{fig:alpha_beta} shows different predictions of $\beta$, including $\beta=1+2(w_s/u_*)^2$ \cite{rijn1984sediment} and $\beta=1.3+3(w_s/u_*)^2$ \cite{jha2009two} for model results that explicitly consider particle-fluid interactions. Moreover, with $Sc$ provided in equation \ref{eq:wc2}, the SS diffusivity is derived as,
\begin{linenomath*}
\begin{equation}
\frac{D_s(z)}{\kappa z u_*}=\frac{1}{Sc}\left(1-z_n \right)=\left[1
+B_{\pi}\left(\frac{w_s}{u_*}\right)^2 \right] \left(1-z_n \right)
\label{eq:mg}
\end{equation}
\end{linenomath*}
where $\kappa z u_*$ is the eddy viscosity in the log-region of $\overline{u}(z)$. Depending on choices made for $\alpha$ or $B_{\pi}$, a number of empirical relations can be recovered including the widely used Rouse's equation and variants on it \cite{hunt1954turbulent}. For a given $\alpha$, an analytical solution for the SSC can be derived and compared with published experiments. The SSC solution for an arbitrary $\alpha$ is given as
\begin{linenomath*}
\begin{equation}
\frac{\overline{C}(z_n)}{\overline{C_b}}
=\left(\frac{z_n}{1-z_n}\frac{1-z_{n,b}}{z_{n,b}}\right)^{-R_+},
\label{eq:gs1}
\end{equation}
\end{linenomath*}
where the power exponent $R_+$ is defined as
\begin{linenomath*}
\begin{equation}
R_+=\frac{1}{1
+B_{\pi}\left({w_s}/{u_*}\right)^2 }
{\frac{w_s}{\kappa u_*}}.
\label{eq:rf}
\end{equation}
\end{linenomath*}
When $\alpha=0$ (or $B_{\pi}=0$), a quadratic diffusivity profile \cite{obrien1933review} as well as Rouse's formula \cite{rouse1939analysis,rouse1937modern} for SSC given in equation \ref{eq:geS} are recovered. Furthermore, in the limit $z_n \ll 1$, a linear diffusivity profile \cite{vonKarman1934} along with the classic power law solution are also recovered from equation \ref{eq:gs1}. The consequences of setting the Saffman spectrum to zero and extending the inertial subrange to $k\rightarrow\infty$ on $\sigma_w^2$ are briefly discussed using Figure \ref{fig:etke}. As expected, these approximations over-estimate $(\sigma_w/u_*)^2$ in the near-wall region and underestimate $(\sigma_w/u_*)^2$ in the outer layer when compared to an $E_{ww}(k)$ that accommodates the Saffman spectrum (i.e. large scale effects) but truncates the inertial subrange at $1/k_e$. These effects cannot be readily ignored and may influence the choices made about $\alpha$.
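The closed-form profile of equations \ref{eq:gs1} and \ref{eq:rf} is easily evaluated; the short Python sketch below uses $B_{\pi}\approx0.84\alpha$ from equation \ref{eq:wc2}, with all inputs assumed for illustration.
\begin{verbatim}
# Minimal sketch (Python): closed-form SSC profile of equations
# (gs1)-(rf) with B_pi = 0.84*alpha (equation wc2); inputs assumed.
def ssc_profile(z_n, z_nb, w_s, u_star, alpha, kappa=0.41):
    B_pi = 0.84 * alpha
    R_plus = (w_s / (kappa * u_star)) / (1.0 + B_pi * (w_s / u_star) ** 2)
    return ((z_n / (1.0 - z_n)) * ((1.0 - z_nb) / z_nb)) ** (-R_plus)

for z_n in (0.1, 0.5, 0.9):
    print(z_n, ssc_profile(z_n, z_nb=0.01, w_s=0.02, u_star=0.05, alpha=1.0))
\end{verbatim}
With $\alpha=0$, this reduces to the Rouse formula with $\beta=1$.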
\subsection{Comparison with Experiments}
The CSB model given by equations \ref{eq:sscb1} and \ref{eq:sscb3} and its simplified version featured in equation \ref{eq:gs1} are compared with published experiments \cite{greimann2001two, vanoni1984fifty, tsengtwo} summarized in Table \ref{tb:tt1}. We assume $\Phi(z)=0$ thereby neglecting inertial effects for compatibility with operational models (e.g. the Rouse model). The comparisons are shown in Figure \ref{fig:SSC_comp}. For these experiments, all the reported parameters including measured $u_*$, $d_s$, and $\rho_s$ and the fitted $\beta$ (needed for assessing the fitted Rouse equation) and $\alpha$ (needed for evaluating the numerical CSB model) are presented in Table \ref{tb:tt1}.
\begin{table}
\caption{Summary of published experiments and parameters used in model-data comparisons. When setting $Sc$ =$1$, not all runs are classified as SS (or $0.8$ $\leq$ ${R^*}$ $\leq$ $2.5$) even though sediments were reported as suspended. While $St_b$ is not very small for (b) and (c), $St_+$ $\ll$ $1$. Calculated densimetric and critical Froude numbers ($Fr_d$ and $Fr_{dc}$) are presented along with the roughness Reynolds number $Re_{pa}$. All experiments lie in the fully-rough ($Re_{pa}>100$) or transitional ($3<Re_{pa}<100$) regimes in fully-developed turbulence ($Re_b \geq 500$).}
\centering
\begin{tabular}{lcccccc}
\hline
\textbf{Run} & \textbf{(a)}& \textbf{(b)} & \textbf{(c)} & \textbf{(d)} & \textbf{(e)} & \textbf{(f)} \\ \hline
& \multicolumn{6}{c} {\textbf{Flow Properties}} \\
\hline
$H$ (m) & 0.10 & 0.52 & 0.50 & 0.10 & 0.10 & 0.10 \\
$B$ (m) & $-$ & 0.84 & 0.84 & 0.15 & 0.15 & 0.15 \\
$U_b$ (measured, m s$^{-1}$) & 1.98 & 3.95 & 3.63 & 0.31 & 0.22 & 0.17 \\
$Re_b=U_b H/\nu \times 10^{-4}$ & 18.7 & 193 & 170 & 2.9 & 2.1 & 1.6 \\
$Re_{pa}=u_*d_s/\nu$ &103 &169 &166 &16 &13 &7 \\
$u_*$ (cm s$^{-1}$) & 7.67 & 20.0 & 20.0 & 1.7 & 1.4 & 0.8 \\
$z_{n,b} \times 10^{2}$ (measured) & 6.3 & 2.6 & 3.0 & 5.0 & 3.8 & 5.6 \\
$z^+_{n,b}(= u_*z_{b}/\nu)$ & 456 & 2747 & 2988 & 85
& 53 &45 \\
$U_b/u_*$ (measured) & 25.8 & 19.4 & 18.2 & 18.2 & 15.7 & 21.3 \\
\hline
& \multicolumn{6}{c} {\textbf{Sediment Properties}} \\
\hline
$\rho_s/\rho$ & 1.05 & 2.65 & 2.65 & 1.20 & 1.20 & 1.20 \\
$d_s$ (mm) & 1.42 & 0.88 & 0.88 & 1.00 & 1.00 & 1.00 \\
$w_s$ (cm s$^{-1}$) & 1.7 & 10 & 10 & 2.9 & 2.9 & 2.9 \\
\hline
& \multicolumn{6}{c} {\textbf{Dimensionless Model Parameters}} \\
\hline
$R^*=w_s/(\kappa u_*)$ & 0.5 & 1.2 & 1.2 & 4.3 & 5.2 & 9.1 \\
$\alpha$ &27.7 &14.5 &16.3 &0.6 &0.4 &0.2
\\
$\beta$ (Rouse) & 1.3 &1.6 &1.8 & 2.2 & 2.2 & 2.8 \\
$R=w_s/(\beta\kappa u_*)$ & 0.4 & 0.8 & 0.7 &1.9 & 2.4 & 3.2 \\
$\beta$ (Prandtl) & 0.9 & 1.1 & 1.2 & 1.4 & 1.5 & 1.9 \\
$R=w_s/(\beta\kappa u_*)$ & 0.6 & 1.1 & 1.0 &3.0 & 3.5 & 4.8 \\
$St_b\left(={w_s}/{g} \sqrt{{g S_o U_b}/{\nu}}\right)$ & 0.57 & 5.63 & 5.4 & 0.09 & 0.06 & 0.03
\\
$St_+\left(=\tau_pu_*/H\right) \times 10^{3}$ & 1.3 & 4.0 & 4.1 &0.5 & 0.4 & 0.2 \\
$Fr_d\left(=U_b/\sqrt{(\rho_s/\rho-1)gd}\right)$ & 75 & 33 & 30 &7 & 5 & 4 \\
$Fr_{dc}$ & 3 & 4 & 4 &3 & 3 & 3 \\
$U_b/u_*$ (CSB rough bed) &12.4 &15.2 &15.6 &13.0 &14.0 &13.5 \\
$U_b/u_*$ (CSB smooth bed) &27.2 &33.1 &33.5 &23.5 &23.7 &22.4
\\ \hline
\end{tabular}
\label{tb:tt1}
\end{table}
In the experiments, the sediments covered the bed and were assumed to have reached an equilibrium state where equation \ref{eq:fgov} applies \cite{tsengtwo}. The densimetric Froude number $Fr_d$ and the critical densimetric Froude number $Fr_{dc}$, whose formulation is described elsewhere \cite{ali2017origin,ali2018impact,li2019cospectral}, are also presented in Table \ref{tb:tt1}. In all cases, the $U_b$, $\rho_s/\rho$, and $d/H$ result in $Fr_d>Fr_{dc}$, meaning that sediments can be released from the bed and must be balanced by sediments depositing onto the bed. Thus, the experiments do not strictly abide by HAE's definition of SS as sediments here do not remain permanently suspended. Across the experiments, the flow variables $U_b$ and $u_*$ varied from 0.17 m s$^{-1}$ to 3.95 m s$^{-1}$ and from 0.8 to 20 cm s$^{-1}$, respectively. However, $U_b/u_*=(8/f_{dw})^{1/2}$, related to the Darcy-Weisbach friction factor $f_{dw}$, varied much less (15-26) as may be anticipated in fully rough flow over a channel bed covered by grains of similar $d_s$. The particle properties $\rho_s/\rho$ and $d_s$ varied from 1.05 to 2.65 and 0.88 to 1.42 mm, respectively. The consequence of these variations is that the empirically derived settling velocity $w_s$ is much smaller than the Stokes settling velocity as shown in the inset of Figure \ref{fig:ws}. Collectively, these experiments span wide-ranging particle sizes (in the SS range) and flow properties from different sources. The lowest measured sediment concentration is close to the channel bottom ($z_{n,b}\in [0.026, 0.063]$) but remains above the buffer region ($z^+=u_*z_{b}/\nu>30$) as shown in Table \ref{tb:tt1}. For some runs, $z^+<100$ and wall-blockage effects (not considered here) can impact $E_{ww}(k)$ and $d\overline{u}(z)/dz$ \cite{MccollEA16}, which introduces obvious uncertainties. As shown in Table \ref{tb:tt1}, experiments (a)-(c) are characterized by $St_b>0.5$, which may be indicative that $\Phi(z)$ is not small. Experiments (d)-(f) are characterized by a small $St_b$ as assumed by the CSB and Rouse's formula.
\begin{figure} [ht]
\centerline{\includegraphics[angle=0,width=0.99\linewidth]{f5.pdf}}
\caption{The predicted SSC profiles normalized by $C_b$ selected at the measurement height with the highest reported concentration. The panel labeling follows Table \ref {tb:tt1} with the top panel showing the comparisons from earlier measurements, i.e. (a) \cite{xingkui1992velocity, greimann2001two} and (b)-(c) \cite{vanoni1984fifty}, and the bottom panel showing the comparisons using recent experiments (d)-(f) \cite{tsengtwo}. For experiments in panels (a)-(c), $St_b$ is not small ($>0.5$).}
\label{fig:SSC_comp}
\end{figure}
Figure \ref{fig:SSC_comp} confirms that the fitted Rouse formula and fitted Prandtl formula (i.e. the $R$ model, allowing $\beta$ to be fitted) offer good agreement with some measurements (for (a)-(b) and (d)-(f), respectively) at all depths. Given that the simplified CSB model is identical to Rouse's formula, agreement between the fitted Rouse formula and the measurements also carries over to the simplified CSB model. However, the numerical CSB model provides reasonable agreement for all the runs when allowing $\alpha$ to vary. Allowing $\alpha$ to be a free parameter has several advantages when compared to $\beta$ in the fitted Rouse equation. Setting $\beta$ as constant implies $Sc$ is constant at all $z_n$, while setting $\alpha$ as constant incorporates some of the local variations in $Sc$ with $z_n$ (albeit near the free water surface, maintaining a finite $\epsilon$ can be problematic without adjustments). The impact of minor variations in particle sizes is shown in the shaded area: the particle sizes are increased/decreased by 20\% to illustrate model sensitivity to $d_s$. Uncertainty in sediment composition (and thus $d_s$ and $w_s$) can be a factor in determining SSC uncertainty but not in all cases (runs d, e, f).
While the SSC model does not require $\overline{u}$ (only $d\overline{u}/dz$), the predicted $U_b$ from the CSB turbulent stress budget can be compared against measured $U_b$ for a plausibility check. The modeled $U_b$ requires $u_*$ along with a boundary condition specified here as $\overline{u}(z_{n,b})/u_*$ at $z_{n,b}$. A number of choices can be made about this boundary condition. Given that $z_{n,b}$ is sufficiently distant from the wall, the most direct of those choices is the log-law for two end-member cases: (i) fully rough with an externally imposed surface roughness and (ii) hydrodynamically smooth. In both cases, the mean velocity at $z_{n,b}$ is approximated as
\begin{linenomath*}
\begin{equation}
\frac{\overline{u}(z_b)}{u_*}=\frac{1}{\kappa} \ln\left(\frac{z_b}{z_o} \right); \quad \quad \frac{\overline{u}(z_b)}{u_*}=\frac{1}{\kappa} \ln\left(z^+_{n,b}\right)+5,
\label{eq:loglaw_BC}
\end{equation}
\end{linenomath*}
where $z_o$ is the momentum roughness length. The $z_o$ can be related to $d_s$ by $z_o \approx d_s/30$ where the grain diameter is assumed constant. In all cases, the roughness Reynolds number $Re_{pa}=u_* d_s/\nu>3$ but in some cases, the flow is not fully rough (i.e. transitional with $3<Re_{pa}<100$). For this reason, the CSB model forced by both rough and smooth surface boundary conditions at $z_{n,b}$ is featured in Table \ref{tb:tt1}. The agreement between measured and the range of CSB modeled $U_b/u_*$ for these two end-member cases appears reasonable. Runs (a) and (f) are closer to a smooth-wall case whereas runs (b), (c), and (e) are better approximated by a rough-wall boundary condition. Run (d) falls in-between these two end-member cases. While run (f) had the smallest $Re_{pa}=7$ and a near-smooth wall approximation may be justifiable, run (a) had $Re_{pa}>100$. We do not have a clear explanation as to why $U_b$ in run (a) is better approximated by a smooth wall boundary condition.
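For reference, the two boundary conditions in equation \ref{eq:loglaw_BC} can be evaluated as below; the $z_b$, $d_s$, and $u_*$ values are assumed, chosen only to fall within the range of Table \ref{tb:tt1}.
\begin{verbatim}
# Minimal sketch (Python): rough- and smooth-wall log-law boundary
# conditions of equation (loglaw_BC); all input values are assumed.
import numpy as np

def u_over_ustar_rough(z_b, d_s, kappa=0.41):
    return np.log(z_b / (d_s / 30.0)) / kappa   # z_o ~ d_s/30

def u_over_ustar_smooth(z_b, u_star, nu=1e-6, kappa=0.41):
    return np.log(u_star * z_b / nu) / kappa + 5.0

print(u_over_ustar_rough(z_b=0.015, d_s=1e-3),
      u_over_ustar_smooth(z_b=0.015, u_star=0.017))
\end{verbatim}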
An investigation of the relation between fitted $\alpha$ (and $\beta$) and $w_s/u_*$ is undertaken and shown in Figure \ref{fig:Sc_beta}. A near-linear relation between $\alpha^{-1}$ and $w_s/u_*$ indirectly supports the heuristic closure adopted for $\overline{C'{\partial w'}/{\partial z}}$ with some caveats.
\begin{figure} [ht]
\centerline{\includegraphics[angle=0,width=0.63\linewidth]{ab1.eps}}
\caption{The dependence of the fitted $\alpha$ and $\beta$ on $w_s/u_*$. The red, blue, and black dashed lines show the fitted trend-lines of $\alpha^{-1}$ and of $\beta$ from the Rouse and Prandtl equations, respectively. The cyan dashed line is $\beta$=$1+$ $2(w_s/u_*)^2$ \cite{rijn1984sediment} extrapolated for large $w_s/u_*$.}
\label{fig:Sc_beta}
\end{figure}
In the regime $w_s/u_*\gg 1$, the closure model with $b_1\sim \mathrm{sgn}(A_F)\, u_*/w_s$ leads to $\alpha^{-1}\sim -\mathrm{sgn}(A_F)(1-C_I)(w_s/u_*)$ and $\beta\sim -\mathrm{sgn}(A_F)/(1-C_I)(w_s/u_*)$, both of which are negative unless $\mathrm{sgn}(A_F)$ is negative. The relation in Figure \ref{fig:Sc_beta} indicates a positive slope between the fitted $\alpha^{-1}$ and $w_s/u_*$, suggesting that the coefficient $A_F$ in the flux-variance similarity closure (i.e. equation \ref{eq:close1}) is negative. More broadly, to what extent this closure is general and how robust its results are in the context of SSC profile predictions cannot be unpacked from the experiments here and is better left for a future research topic.
\section{Model Limitations}
The treatment of suspended sediments as a dilute mixture is an obvious model limitation. This assumption requires that particles settle independently and that the solid volume can be ignored relative to the water volume. For the experiments considered here, this assumption is reasonable. Another restrictive assumption is setting $\Phi=0$ \cite{kind1992one,chamecki2007concentration}. A $\Phi=0$ also leads to $\overline{C}=\overline{w'C'}/w_s\rightarrow0$ as $z_n\rightarrow1$, which may not be general. Given the large vertical gradients in $\sigma_w^2$ near the channel bottom and near the free water surface, turbophoretic effects can be significant in these two regions \cite{caporaloni1975transfer,guha1997unified,marchioli2002mechanisms,zhao2006modeling,katul2010predicting,chamecki2007concentration}. The turbophoretic effect acts to increase the SS concentration near the water surface; however, the measurements here (runs a-c) suggest that for the $St_b>1$ cases, the SS concentrations near the water surface decline as $z_n\rightarrow1$ instead of increasing. This finding suggests that $\Phi=0$ may be plausible, as the turbophoretic term was shown to dominate $\Phi$ near the water surface \cite{richter2018inertial,bragg2021mechanisms}. The CSB formulation here (i.e. equation \ref{eq:vscg}) ignored the flux transfer terms and their vertical variation. In the case of the turbulent stress, ignoring the flux transfer term (and its vertical gradients) altogether guarantees that the co-spectrum between $w'$ and $u'$ in the inertial subrange maintains a $k^{-7/3}$ scaling. This $k^{-7/3}$ scaling has been observed in numerous boundary layer studies reporting co-spectra, thereby offering indirect justification for this assumption. The flux transport terms (i.e. the vertical gradients of triple moments in the Reynolds averaged equations) have also been ignored. These terms have been studied less for stress and sediment flux turbulent budgets compared to their turbulent kinetic energy budget counterparts. The work here highlights the need for an assessment of these terms relative to their mechanical production terms. The CSB model also assumes that the linear Rotta scheme (slow component) with an isotropization of production (rapid component) applies equally to SS and momentum fluxes without adjustments in constants (i.e. $A_R=1.8$ and $C_I=3/5$). Hence, any departure from these established constants must be absorbed by $t_{ww}(k)/t_r(k)$, which manifests itself as a Schmidt number effect (or $\alpha$ variations).
The assumed shape of $E_{ww}(k)$ is also over-simplified and certainly not reflective of what is known about the energetics near the surface ($z^+<100$) such as wall-blockage. Moving away from the wall region itself, other 'shape issues' arise. For example, near the spectral transition from inertial to viscous regimes, usually occurring at around $k\eta \approx 0.1$, $E_{ww}(k)$ experiences a bottleneck that is absent here \cite{saddoughi1994local,katul2015bottlenecks}. Likewise, as $k\eta>0.1$ and increases further into the viscous regime, $E_{ww}(k)$ decays exponentially \cite{pope2001turbulent}. Hence, extending the inertial subrange to $k\eta=1$ is not intended to capture all such mechanisms impacting the vertical velocity spectrum. Instead, it allows for some compensation of loss in energy due to censoring $E_{ww}(k)$ at $k\eta =1$ while introducing extra energy due to an expected overestimation of the extrapolated inertial subrange spectrum in this vicinity. On a more positive note, while the full details of the turbulent kinetic energy cascade across scales are not explicitly considered, their effects remain implicitly contained in the assumed shape of $E_{ww}(k)$. As such, some of these effects can be accommodated (e.g. the bottleneck, viscous cutoff, etc...) by various revisions to $E_{ww}(k)$ (e.g. including a bump around $k\eta =0.1$, resolving the viscous cutoff region using the Pao spectral shape or variants \cite{pope2001turbulent} on it, etc...).
It is to be noted that the co-spectral budget is integrated scale-wise, which means that the precise shape of $E_{ww}(k)$ in the vicinity of $k\eta \approx 1$ is less crucial. Moving beyond the shape issues of $E_{ww}(k)$ and focusing on its primary input variable $\epsilon(z_n)$, the approach assumes turbulent kinetic energy production is balanced by its dissipation at every $z_n$ (i.e. $\phi(z_n)=1$), which is certainly not realistic for all $z_n$. However, as previously mentioned, deviations from unity in $\phi(z_n)$ may be ameliorated by the sub-unity exponent ($-1/3$) dependence in the SSC budget. An exception to this statement is the particle time scale $t_{wc}(k)$ in $Sc(k)$. A $\phi(z_n)=1$ as $z_n\rightarrow1$ leads to an unbounded $Sc^{-1}(k)$ and thus an uncertain $D_s$ shape in the vicinity of the free surface. A plausible adjustment to the $Sc^{-1}(k)$ calculations based on maintaining a minimal $\epsilon$ ($=0.001\epsilon_b$) was introduced here, though this correction remains ad hoc. Last, the turbulent SS flux from the CSB model(s) follows the same form as a gradient-diffusion closure upon ignoring both turbulent flux transport and scale-wise transfer terms. However, a key advantage here is that the effective diffusion coefficient $D_s$ from the CSB model contains contributions from turbulent eddies and Schmidt numbers at all scales. The proposed Schmidt number (or $\alpha$) is consistent with bulk Schmidt number formulations such as van Rijn's and other one-way coupling schemes (i.e. particle transport does not impact the flow) when $Sc<1$ \cite{bombardelli2012exchange}. For dense mixtures or other aeolian particles in the atmosphere, the particle Schmidt number can be larger than unity \cite{csanady1963turbulent}, implying other particle-fluid interaction models are required.
When using the CSB model, the $\alpha$ used for the determination of the Schmidt number is treated as a single fitted parameter. Hence, the CSB model offers the same number of free parameters as the fitted Rouse equation. What was found here is that $\alpha^{-1}$ varies linearly with $w_s/u_*$ when combining all the experiments. A plausibility argument as to why $\alpha$ depends on $w_s/u_*$ was also offered. In some instances, the addition of a single fitted parameter may be desirable in hydraulic models as discussed elsewhere \cite{papke2013reduced, battiato2014single, rubol2018universal, li2019mean}, but an increasing number of free model parameters does not necessarily lead to a better physical understanding. The sediment settling velocity estimated in equation \ref{eq:ws} is commonly based on a mass-median diameter from particle size distribution measurements, which, however, may not be an optimal characteristic size as shown by some in-situ measurements \cite{williams2007high}. Large variations in $d_s$ can have a substantial impact on SSC profiles, which may be more significant than models for $\alpha$.
\section{Conclusion}
Operational modeling of SSC in turbulent flows continues to be a formidable challenge in hydraulics, hydrology, ecology, and water quality control. The work here establishes a new link between the spectrum of vertical velocity and SS turbulent flux, which was then used to arrive at expressions for the SSC profile. The spectrum of vertical velocity is characterized by multiple scaling regimes that include the Saffman spectrum ($E_{ww}(k) \sim k^{+2}$), the 'energy splashing' effect due to the presence of a wall ($E_{ww}(k) \sim k^{0}$), and the much-studied inertial subrange regime ($E_{ww}(k) \sim k^{-5/3}$). Finite Reynolds effects are accommodated through a scale separation between $z$ and the Kolmogorov microscale $\eta$ terminating the scale-wise extent of the inertial subrange (as a first approximation). This dependence can be noted when considering the scaling argument $k_e/k_o = z/\eta \sim (z u_*/\nu)^{3/4}$ \cite{tennekes2018first}. Hence, increasing $Re_s=(z u_*/\nu)$ by either increasing $z$ or $u_*$ leads to a widening of the scale-wise extent of the inertial subrange, which then impacts all subsequent expressions such as $\Omega(z_n)$ and $d\overline{u}/dz$. As such, the proposed model is responsive to finite Reynolds number, Schmidt number, and Rouse number effects. Prior \textit{ad-hoc} efforts such as correcting $l_o$ by $V_n$ (i.e. the van Driest damping function) can now be interpreted from this new spectral perspective (i.e. $Re_s$ effects become large for small $z$ or $u_*$). A simplified solution to the CSB model in which the Saffman spectrum is truncated but the inertial subrange is now extended to infinite wave-numbers (i.e. $Re_s\rightarrow\infty$) was shown to recover earlier theories (e.g. Rouse's formula). The fitted Rouse's equation (and by extension the simplified CSB solution) also describes the measured SSC profiles in all the experiments considered here provided $\alpha$ (or $\beta$) is allowed to vary with $w_s/u_*$. Thus, one of the main novelties here is to provide a spectral link between the energy distribution in eddies and the SSC shape. Interactions between turbulent eddies and suspended sediment grains at various heights were also proposed, resulting in a scale-dependent $Sc$ captured by a single parameter $\alpha$ that varies with $w_s/u_*$. Such $Sc$ variations were formulated in spectral space but recover expected bulk relations between $R$ and $Sc$ identified by other models, experiments, and simulation studies. When all these findings are taken together, future extension of this work must focus on upgrading the particle-turbulence interaction scheme and its signature in a scale-dependent Schmidt number. Such extension will benefit from targeted DNS runs where all the terms in the particle co-spectrum as well as $E_{ww}(k)$ can be computed or determined. Likewise, an exploration of where the sediment flux transport term is significant relative to the mechanical production term and how to incorporate its effects can be undertaken from the aforementioned DNS runs.
\subsection*{Data Availability and Acknowledgements}
All the data used were digitized from the published literature \cite{greimann2001two, vanoni1984fifty, jha2009two, tsengtwo}. SL was supported by a fellowship from the Nicholas School of the Environment at Duke University. GK and ADB acknowledge support from the U.S. National Science Foundation (NSF-AGS-1644382, NSF-AGS-2028633, NSF-IOS-1754893, and NSF-CBET-2042346).
\newcommand{\Abs}[1]{\left\vert#1\right\vert}
\newcommand{\bgu}{\mathring{{\bf u}}}
\newcommand{\bgB}{\mathring{{\bf B}}}
\newcommand{\bfomg}{\boldsymbol{\omega}}
\newcommand{\bfPsi}{\boldsymbol{\Psi}}
\newcommand{\bfSgm}{\boldsymbol{\Sigma}}
\newcommand{\err}{\boldsymbol{\epsilon}}
\newcommand{\errh}{\boldsymbol{\delta}}
\newcommand{\errwp}{\boldsymbol{e}}
\newcommand{\tu}{\tilde{u}}
\newcommand{\tb}{\tilde{b}}
\newcommand{\tc}{\tilde{c}}
\newcommand{\tn}{\tilde{n}}
\newcommand{\tsgm}{\tilde{\sigma}}
\newcommand{\tpsi}{\tilde{\psi}}
\newcommand{\tphi}{\tilde{\phi}}
\newcommand{\tomg}{\tilde{\omega}}
\newcommand{\ttht}{\tilde{\theta}}
\newcommand{\trho}{\tilde{\rho}}
\newcommand{\dlta}{\delta_{1}}
\newcommand{\nrm}[1]{\left\Vert#1\right\Vert}
\newcommand{\dfrm}[1]{{}^{(#1)} \pi}
\newcommand{\mean}[1]{\bar{#1}}
\vfuzz2pt
\hfuzz2pt
\begin{document}
\title{Loss of regularity for the 2D Euler equations}
\author{In-Jee Jeong\thanks{Department of Mathematical Sciences and RIM of Seoul National University. E-mail: injee\_j@snu.ac.kr} }
\date{\today}
\maketitle
\begin{abstract}
In this note, we construct solutions to the 2D Euler equations which belong to the Yudovich class but lose $W^{1,p}$ regularity continuously with time.
\end{abstract}
\section{Introduction}
The dynamics of an inviscid, incompressible fluid is described by the Euler equations: on an $n$-dimensional domain $\Omega$, the system reads \begin{equation} \label{eq:Euler}
\left\{
\begin{aligned}
\partial_t u + u\cdot\nabla u +\nabla p & = 0, \\
\nabla\cdot u &= 0,
\end{aligned}
\right.
\end{equation} where $u(t,\cdot):\Omega\rightarrow\mathbb R^n$ and $p(t,\cdot):\Omega\rightarrow\mathbb R$ denote the velocity and pressure of the fluid at time $t$, respectively. When $\Omega$ has a boundary, \eqref{eq:Euler} should be supplemented with the slip boundary condition $u(t,x)\cdot n(x) = 0$ for $x\in \partial\Omega$, where $n(x)$ is the unit normal vector.
We shall be concerned with the two-dimensional Euler equations on $\mathbb T^2 = (\mathbb R/\mathbb Z)^2$; introducing the vorticity $\omega = \nabla\times u$ and taking the curl of \eqref{eq:Euler}, we have the vorticity form of the 2D Euler equations: \begin{equation}\label{eq:Euler-vort}
\begin{split}
\partial_t \omega +u\cdot\nabla\omega = 0, \quad u = \nabla^\perp\Delta^{-1}\omega.
\end{split}
\end{equation} Here, $\nabla^\perp = (-\partial_{x_2},\partial_{x_1})^\top$. The goal of this note is to construct solutions to \eqref{eq:Euler-vort} which belong to a well-posedness class yet lose Sobolev regularity with time.
To motivate our results, we briefly review the well-posedness theory for the Euler equations \eqref{eq:Euler}. For sufficiently nice $n$-dimensional domains, \eqref{eq:Euler} is locally well-posed in $C^{k,\alpha}\cap L^2$ with $k\ge 1$ and $0<\alpha<1$, and in $W^{s,p}$ with $s>\frac{n}{p}+1$. That is, for $u_0$ belonging to such a space, there exist $T>0$ and a unique local-in-time solution of \eqref{eq:Euler} such that $u(t)$ belongs to the same space for all $0\le t<T$ and $u(0)=u_0$. It is known that these regularity requirements are sharp; for less regular $u_0$, the regularity may not propagate in time for a solution of \eqref{eq:Euler}. To be more precise, for $n=3$, $s=1$, and any $p\ge 1$, there exist examples of solutions $u(t)$ such that \begin{equation}\label{eq:loss-vel}
\begin{split}
u(0) \in W^{s,p}(\mathbb T^n) \quad \mbox{and} \quad u(t) \notin W^{s,p}(\mathbb T^n) \quad \mbox{for any}\quad t>0.
\end{split}
\end{equation} Similar examples exist which do not propagate initial $C^\alpha$ regularity of $u(0)$ for any $\alpha<1$. We shall recall the constructions below; for now, let us just note that the examples are based on the so-called $2+\frac{1}{2}$-dimensional flow, which gives the restriction that $n\ge 3$ (Bardos-Titi \cite{BT,BT2}). It seems that in the two-dimensional case, solutions satisfying \eqref{eq:loss-vel} have not been constructed before.
Recently, Bourgain-Li \cite{BL1,BL2,BL3D} established ill-posedness of \eqref{eq:Euler} at critical regularity; roughly speaking, they were able to prove that \eqref{eq:loss-vel} occurs with $s=\frac{n}{p}+1$ and $p>1$, in dimensions two and three. (See also \cite{EJ,EM1,JY2,MY,JY,JKi} for simpler proofs and further developments.) This does not imply that \eqref{eq:loss-vel} also occurs in less regular Sobolev spaces; in principle, this question could be harder since one expects less control over weaker solutions.
Our main result shows that an $L^\infty \cap W^{1,p}$-vorticity with $p<2$ may continuously lose integrability with time. For bounded initial vorticity, the uniqueness and existence of the solution $\omega(t,x)\in L^\infty([0,\infty);L^\infty(\mathbb T^2))$ is provided by the celebrated Yudovich theory \cite{Y1}. Moreover, we shall take the initial data $\omega_0$ to be Lipschitz continuous away from a single point, which is a property preserved by Yudovich solutions (see \cite{EJ} for a proof). In particular, for any $t>0$, $\nabla\omega$ is well-defined almost everywhere in space and bounded away from a single point.
\begin{theorem}\label{thm:torus}
There exist $1\le p^*<2$ and $c_0>0$ such that for any $p^*<p_0<2$, we can find an initial data $\omega_0 \in L^\infty \cap \left( \cap_{p<p_0} W^{1,p}\right) (\mathbb{T}^2)$ such that the unique solution $\omega(t)$ satisfies, with some {$T^*=T^*(p_0)>0$}, \begin{equation*}
\begin{split}
\nrm{\omega(t,\cdot)}_{W^{1, {q(t)}}(\mathbb T^2)} = + \infty~,
\end{split}
\end{equation*} for \begin{equation*}
\begin{split}
{q(t):=} 1 + \frac{1}{\frac{1}{p_0-1} + c_0t},\quad 0\le t\le T^*.
\end{split}
\end{equation*}
\end{theorem}
\begin{remark}
Let us give a few remarks on the statement.
\begin{itemize}
\item For any $t>0$, we may define the index $p(t)=\sup\{0< q: \omega(t)\in W^{1,q}(\mathbb T^2) \} $. While the statement of Theorem \ref{thm:torus} does not exclude the possibility that $p(t)$ jumps downwards in time, this behavior is impossible due to the bound given in Proposition \ref{prop:E}. Hence, for the data $\omega_0$ given in Theorem \ref{thm:torus}, the index $p(t)$ is continuous in time and satisfies $p(t)<p_0$ for any $t\in (0,T^*]$.
\item In terms of the velocity, Theorem \ref{thm:torus} says that \eqref{eq:loss-vel} occurs, at least for a small interval of time, for $u_0\in W^{2,p}(\mathbb T^2)$ with $p^*<p<2$.
\item The restriction $p^*<p_0$ should be technical but it could take significant work to remove it; we shall see in the proof that $p^*$ depends only on the constant $C$ in Lemma \ref{lem:key}.
\item Our data can be localized to any small ball and hence the specific choice of domain $\mathbb T^2$ is not important.
\end{itemize}
\end{remark}
\subsection*{Organization of the paper}
The rest of this paper is organized as follows. In Section \ref{sec:example}, we present several examples and discussions which illustrate the delicate nature of the Euler equations at low regularity. The proofs are then presented in Section \ref{sec:proofs}.
\section{Examples}\label{sec:example}
We present several examples in order to give a sense of the behavior described in the main results. In Section \ref{subsec:BT}, we recall the well-known construction of Bardos-Titi in three dimensions and make a comparison with our result. Then in Section \ref{subsec:BC}, we present a time-independent vector field which causes continuous loss of Sobolev regularity for the advected scalar. In the same section we comment on some difficulties in actually using this vector field in the context of the Euler equations.
\subsection{Loss of regularity in shear flows}\label{subsec:BT}
Any solution of the 2D Euler equations can be lifted to solutions in 3D (and higher); given $(u^{2D},p^{2D})$ solving \eqref{eq:Euler} in $\mathbb T^2$, define \begin{equation*}
\begin{split}
u(t,x) = (u^{2D}(t,x_1,x_2),u_3(t,x_1,x_2))
\end{split}
\end{equation*} where $u_3$ is any solution to \begin{equation*}
\begin{split}
\partial_t u_{3} + u^{2D}\cdot\nabla u_3 = 0.
\end{split}
\end{equation*} Then one can see that $u$ defines a solution to the 3D Euler equations with $p = p^{2D}$. This is sometimes referred to as the $2+\frac{1}{2}$-dimensional flow (see \cite{MB}). A special class of $2+\frac{1}{2}$-dimensional flows is given by the following \textit{shear flows}: \begin{equation}\label{eq:shear}
\begin{split}
u(t,x) = (u_1(x_2),0,u_3(x_1-tu_1(x_2),x_2)).
\end{split}
\end{equation} Note that this defines a solution to the 3D Euler equations in $\mathbb T^3$ with zero pressure for any reasonably smooth functions $u_1$ and $u_3$ of one variable.\footnote{For $u_1$ and $u_3$ belonging to $L^2(\mathbb T)$, it can be shown that $u$ defined in \eqref{eq:shear} is a weak solution to the 3D Euler equations; see \cite[Theorem 1.2]{BT}.} The form \eqref{eq:shear} was introduced in a work of DiPerna-Majda \cite{DiPM} to provide an example of weak solution sequence in 3D Euler whose limit is \textit{not} a solution to 3D Euler. For more applications and references regarding this flow, one can see the illuminating papers of Bardos-Titi \cite{BT,BT2}.
\begin{proposition}[{{see \cite[Proposition 3.1]{BT2} and \cite[Theorem 2.2]{BT}}}]
There exists initial data $u_0 \in W^{1,p}(\mathbb T^3)$ with any $1\le p$ ($u_0\in C^\alpha(\mathbb T^3)$ with any $0<\alpha<1$, resp.) of the form \begin{equation*}
\begin{split}
u_0(x) = (u_1(x_2),0,u_3(x_1,x_2))
\end{split}
\end{equation*} such that the corresponding shear flow solution $u(t)$ given in \eqref{eq:shear} does not belong to $W^{1,p}(\mathbb T^3)$ ($C^\alpha(\mathbb T^3)$, resp.) for any $t>0$.
\end{proposition}
\begin{proof}
We only consider the $W^{1,p}$-case with $1\le p<\infty$. Note that \begin{equation*}
\begin{split}
\nrm{u_0}_{W^{1,p}}^p \lesssim \int_0^1 |\partial_{x_2}u_1(x_2)|^p dx_2 + \int_0^1\int_0^1 |\partial_{x_1}u_3(x_1,x_2)|^p + |\partial_{x_2}u_3(x_1,x_2)|^p dx_1dx_2
\end{split}
\end{equation*} and {computing $\partial_{x_2}u(t)$ using \eqref{eq:shear}, we obtain that \begin{equation}\label{eq:comp}
\begin{split}
\nrm{u(t)}_{W^{1,p}}^p\gtrsim -\nrm{\partial_{x_2}u_3}_{L^p}^{p} + t^p\int_0^1\int_0^1 |\partial_{x_2}u_1(x_2)|^p|\partial_{x_1}u_3(x_1,x_2)|^pdx_1dx_2.
\end{split}
\end{equation} Given the above formula, let us define $u_1(x_2) = |x_2|^{1-\frac{1}{p}+\epsilon}$ for small $\epsilon>0$ near $x_2=0$ and smooth otherwise. Next, define $u_3(x_1,x_2)=|x|^{1-\frac{2}{p}+\epsilon}$ near $|x|=0$, where $|x|=\sqrt{x_1^2+x_2^2}$. Since $|\partial_{x_i}u_3| \lesssim |x|^{-\frac{2}{p}+\epsilon}$ for $i=1,2$, it is clear that $u_0 \in W^{1,p}$. In particular, the first term on the right hand side of \eqref{eq:comp} is bounded.} Furthermore, employing polar coordinates $(r,\theta)$, \begin{equation*}
\begin{split}
\int_0^1\int_0^1 |\partial_{x_2}u_1(x_2)|^p|\partial_{x_1}u_3(x_1,x_2)|^pdx_1dx_2 & \gtrsim \int_0^{r_0} r^{-1+\epsilon p} r^{-2+\epsilon p} r dr =+\infty
\end{split}
\end{equation*} for $\epsilon \le (2p)^{-1}$, since the integrand scales as $r^{-2+2\epsilon p}$. This finishes the proof.
\end{proof}
Note that for solutions of the form \eqref{eq:shear}, the integrability index may drop instantaneously at $t =0$ but $u(t_1)$ and $u(t_2)$ have the same Sobolev regularity for any $t_1,t_2>0$. (One can see that $u(t)$ belongs to at least $W^{1,\frac{p}{2}}(\mathbb T^3)$ for any $t>0$.) A similar phenomenon occurs in the scale of $C^\alpha$ spaces, see \cite{BT}. This behavior is very different from the continuous loss of integrability that our solutions exhibit. Actually, in two dimensions, conservation of $\omega\in L^\infty$ \textit{prohibits} a jump of the integrability index, as the following remarkable result due to Elgindi shows: \begin{proposition}[{{see \cite[Lemma 3]{EJ}}}]\label{prop:E}
Let $\omega_0\in (L^\infty\cap W^{1,p})(\mathbb T^2)$ with $0<p\le 2$. Then, the Yudovich solution corresponding to $\omega_0$ satisfies \begin{equation*}
\begin{split}
\nrm{\omega(t)}_{W^{1,q(t)}} \le \nrm{\omega_0}_{W^{1,p}}
\end{split}
\end{equation*} where $q(t)$ is the solution to the ODE \begin{equation*}
\begin{split}
\dot{q}(t) = -C\nrm{\omega_0}_{L^\infty} q(t)^2 ,\quad q(0) = p
\end{split}
\end{equation*} with some universal constant $C>0$ not depending on $p$.
\end{proposition}
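For concreteness, this ODE can be integrated in closed form (a standard computation, not part of the statement above): since $\frac{d}{dt}\, q(t)^{-1} = C\nrm{\omega_0}_{L^\infty}$, we have \begin{equation*}
\begin{split}
q(t) = \frac{p}{1+C\nrm{\omega_0}_{L^\infty}\, p\, t},
\end{split}
\end{equation*} so the integrability index can decay at most at an inverse-linear rate; note that the function $q(t)$ in Theorem \ref{thm:torus} exhibits exactly this type of decay (for the index $q(t)-1$).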
\subsection{Loss of regularity with the Bahouri-Chemin background}\label{subsec:BC}
We shall consider the following vector field $v$ on the positive quadrant $(\mathbb R_+)^2$: \begin{equation}\label{eq:BC-model}
\begin{split}
v_1(x_1,x_2) = -x_1 \ln \frac{1}{x_2}~, \qquad v_2(x_1,x_2) = x_2 \ln \frac{1}{x_2}~.
\end{split}
\end{equation} This is a toy model for the Bahouri-Chemin velocity field which will be introduced below. Although $v$ is not exactly divergence free (the divergence satisfies $\mathrm{div}(v)=-1$), one can add $x_2$ to $v_2(x_1,x_2)$ to make it divergence-free. (The resulting flow can be shown to demonstrate the same behavior but computations are more tedious.) We have the following
\begin{proposition}\label{prop:BC-model-flow}
Let $f_0$ be the function defined in $(\mathbb R_+)^2$ by $f_0(r,\theta)=\sin(2\theta)$ for $0\le r<1$ in polar coordinates. Then, the solution $f(t)$ to the transport equation { \begin{equation}\label{eq:transport-BC-model}
\left\{
\begin{aligned}
\partial_t f + v \cdot\nabla f = 0, & \\
f(t=0)=f_0 &
\end{aligned}
\right.
\end{equation}}satisfies \begin{equation}\label{eq:loss-model}
\begin{split}
\nrm{f(t)}_{W^{1,q(t)}} = +\infty
\end{split}
\end{equation} and \begin{equation}\label{eq:retain-model}
\begin{split}
\nrm{f(t)}_{W^{1,q(t)-\epsilon}}<\infty, \mbox{ for any } \epsilon>0
\end{split}
\end{equation} for all $t>0$, where $q(t) := \frac{2}{2-\exp(-t)}$.
\end{proposition}
Before we start the proof, let us observe a convenient lemma which gives a lower bound on the $W^{1,p}$ norm based on the distance between two level sets.
\begin{lemma}\label{lem:lvl-Sob}
Assume that $f\in L^\infty(\mathbb R^2)\cap Lip(\mathbb R^2\backslash\{0\})$ written in polar coordinates satisfies $f(r,0) = 0$ and $f(r,\theta^*(r))=1$ for all $0<r<r_0$ with some $r_0>0$ where $\theta^*:[0,r_0]\rightarrow \mathbb R/2\pi\mathbb Z$ is a measurable function. Then, \begin{equation*}
\begin{split}
\nrm{\nabla f}_{L^p}^p \ge c \int_0^{r_0} r^{1-p} (\theta^*(r))^{1-p} dr.
\end{split}
\end{equation*}
\end{lemma}
\begin{proof}
The $L^p$-norm of $\nabla f$ is equivalent, up to absolute constants, with \begin{equation}\label{eq:Lp-def}
\begin{split}
\nrm{\partial_r f}_{L^p}^p + \nrm{r^{-1}\partial_\theta f}_{L^p}^p.
\end{split}
\end{equation} Note that for any fixed $0<r<r_0$, \begin{equation*}
\begin{split}
1 = f(r,\theta^*(r))-f(r,0) = \int_0^{\theta^*(r)} \partial_\theta f (r,\theta') d\theta' \le |\theta^*(r)|^{1-\frac{1}{p}} \left( \int_0^{2\pi} |\partial_\theta f(r,\theta)|^p d\theta \right)^{\frac{1}{p}},
\end{split}
\end{equation*} which gives \begin{equation*}
\begin{split}
\int_0^{2\pi} |\partial_\theta f(r,\theta)|^p d\theta \ge |\theta^*(r)|^{1-p}.
\end{split}
\end{equation*} Hence \begin{equation*}
\begin{split}
\nrm{\nabla f}_{L^p}^p \ge c \int_0^{r_0} \int_0^{2\pi} r^{1-p} |\partial_\theta f(r,\theta)|^p d\theta dr \ge c \int_0^{r_0} r^{1-p} (\theta^*(r))^{1-p} dr. \qedhere
\end{split}
\end{equation*}
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:BC-model-flow}]
We fix some $0<a<1$ and let $\phi(t)=(\phi_1(t),\phi_2(t))$ be the trajectory of the point $(a,a)$ by the flow generated by $v$. Then, from \begin{equation*}
\begin{split}
\dot{\phi}_2 = \phi_2 \ln\frac{1}{\phi_2},
\end{split}
\end{equation*} we have \begin{equation*}
\begin{split}
\ln\frac{1}{\phi_2} = e^{-t} \ln\frac{1}{a}.
\end{split}
\end{equation*} Next, from \begin{equation*}
\begin{split}
\dot{\phi}_1 = -\phi_1\ln\frac{1}{\phi_2} = -\phi_1 e^{-t} \ln\frac{1}{a},
\end{split}
\end{equation*} we obtain that \begin{equation*}
\begin{split}
\ln\frac{1}{\phi_1} = (2-e^{-t}) \ln\frac{1}{a}.
\end{split}
\end{equation*} {This shows that the image by the flow map $\Phi(t,\cdot)$ of the diagonal segment $\{ (a,a):0<a<1 \}$ can be parameterized by $$\Gamma(t):=\{ (x_1,x_1^{\frac{\exp(-t)}{2-\exp(-t)}}) : 0<x_1<1 \}$$ for any $t>0$.} (See Figure 1.) That is, the solution $f$ with initial data $f_0(r,\theta)=\sin(2\theta)$ for $r\le1$ satisfies $f(t,\Gamma(t))\equiv 1$. Since $f(t,(0,x_2))\equiv 0$ for all $t$, \eqref{eq:loss-model} follows from a computation similar to the one given in the proof of Lemma \ref{lem:lvl-Sob}: for any fixed $t>0$ and $0<x_2<1$, \begin{equation*}
\begin{split}
\int_0^1 |\partial_{x_1}f (x_1,x_2)|^p dx_1 \ge x_2^{-\frac{1}{\gamma}(p-1)}, \quad \gamma = \frac{\exp(-t)}{2-\exp(-t)},
\end{split}
\end{equation*} and hence \begin{equation*}
\begin{split}
\int_0^1\int_0^1 |\partial_{x_1}f (x_1,x_2)|^p dx_1 dx_2\ge \int_0^1 x_2^{-\frac{1}{\gamma}(p-1)} dx_2,
\end{split}
\end{equation*} which is integrable if and only if $-\frac{1}{\gamma}(p-1)>-1$. We omit the proof of \eqref{eq:retain-model}.
\end{proof}
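The explicit trajectory formulas above are easy to sanity-check numerically. The following sketch (ours; a simple forward-Euler integration) verifies $\ln\frac{1}{\phi_2(t)} = e^{-t}\ln\frac{1}{a}$ and $\ln\frac{1}{\phi_1(t)} = (2-e^{-t})\ln\frac{1}{a}$ for the toy field \eqref{eq:BC-model}:
\begin{verbatim}
# Numerical sanity check (ours) of the trajectory formulas for the toy
# field v, integrated from (a, a) by forward Euler.
import math

def flow(a, t, steps=200000):
    phi1, phi2 = a, a
    dt = t / steps
    for _ in range(steps):
        d1 = -phi1 * math.log(1.0 / phi2)
        d2 = phi2 * math.log(1.0 / phi2)
        phi1, phi2 = phi1 + dt * d1, phi2 + dt * d2
    return phi1, phi2

a, t = 0.1, 1.0
phi1, phi2 = flow(a, t)
print(math.log(1 / phi2), math.exp(-t) * math.log(1 / a))        # agree
print(math.log(1 / phi1), (2 - math.exp(-t)) * math.log(1 / a))  # agree
\end{verbatim}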
\begin{figure}
\centering
\includegraphics[scale=0.5]{fig}
\caption{Flow of a ``rectangle'' in polar coordinates: the curves $\{ \theta = \mathrm{const} \}$ become instantaneously tangent to the $x_2$-axis for $t>0$.}
\end{figure}
In the explicitly solvable example above, continuous loss of Sobolev regularity occurs from the cusping of a level set for the advected scalar. While it is correct that the velocity corresponding to $\omega_0 = \sin(2\theta)$ has approximately the form \eqref{eq:BC-model} (cf. Lemma \ref{lem:key}), the velocity field immediately changes for $t>0$ in the 2D Euler case, which is a nonlinear problem. Indeed, cusping of the level sets weakens the velocity gradient as some chunk of vorticity moves away from the origin (the integral $I(x)$ in \eqref{eq:key} is reduced), which at the same time slows down the cusping phenomenon. To illustrate this point, one can simply compute the 2D Euler velocity gradient corresponding to $f(t)$ where $f$ is the solution to \eqref{eq:transport-BC-model}. At the origin (which is the point where $\nabla \nabla^\perp \Delta^{-1}f$ is supposedly most singular), \begin{equation*}
\begin{split}
\partial_{x_1}\partial_{x_2}\Delta^{-1}f(t,0) \simeq c\int \frac{y_1y_2}{|y|^4} f_0\circ \phi(t,y) dy \simeq c' \int \frac{y_1y_2}{|y|^4} \frac{\phi_1(y)}{\phi_2(y)} dy \simeq \frac{c''}{t}
\end{split}
\end{equation*} for $0<t\ll 1$, using that $\phi_1(t,y)\simeq y_1y_2^t$ and $\phi_2(t,y)\simeq y_2^{1-t}$. Hence, the velocity corresponding to $f$ becomes Lipschitz continuous instantaneously for $t>0$.
It is an interesting problem by itself to determine the behavior of the Yudovich solution with initial data $\omega_0\sim \sin(2\theta)$ near the origin. In \cite[Section 6.2]{EJ2}, a formal nonlinear system which models this behavior was introduced. Roughly speaking, the model equation is obtained by replacing the 2D Euler velocity gradient with the main term $I(x)$ in \eqref{eq:key}. The formal model is still not explicitly solvable, but it can be reduced to a second-order ODE system with time-dependent coefficients. The numerical solution suggests that cusping of level sets occurs, but the cusps are not algebraic (that is, of the form $(x_1,x_1^\gamma)$) but merely logarithmic, of the form $(x_1,x_1(\ln\frac{1}{x_1})^\gamma)$ for some time-dependent $\gamma$. The corresponding velocity indeed \textit{regularizes} with time; it can be argued that $\nrm{\nabla u(t)}_{L^\infty}\simeq ct^{-1}$ for $0<t\ll 1$. This does not allow for continuous-in-time loss of Sobolev regularity for $\omega$.
Closing this section, let us remark that the situation is simpler when the spatial domain has a boundary. For instance, one can consider, instead of $\mathbb T^2$, the domain $\mathbb{T} \times [-1,1]$ which has the exact same Biot-Savart law.\footnote{Note that any smooth solution on $\mathbb T^2$ can be regarded as a smooth solution on $\mathbb{T} \times [-1,1]$, but the converse holds only when the solution vanishes at the boundary $\mathbb T\times (\{-1\}\cup\{1\})$.} We now have that $f_0 = \cos(\theta) \in W^{1,p}(\mathbb{T} \times [-1,1])$ for any $p<2$ and that under the flow given in \eqref{eq:BC-model}, $f(t)$ converges pointwise to $\mathrm{sgn}(x_1)$. In the context of the 2D Euler equation, this guarantees that the velocity corresponding to $\omega(t)$ with $\omega_0 = f_0$ retains the asymptotic form of \eqref{eq:BC-model} for all $t>0$. For this reason, a very short proof of Theorem \ref{thm:torus} in the $\mathbb{T} \times [-1,1]$-case can be obtained (see the author's thesis \cite[Corollary 2.2.7]{Jthesis}). Alternatively, one can adopt the argument of Zlatos \cite{Zm} who showed merging of level sets for 2D Euler solutions in the disc.
\section{Proof}\label{sec:proofs}
\subsection{Preliminaries}
We recall the basic log-Lipschitz estimate for the velocity. For a proof, see \cite{MB,MP}. \begin{lemma}\label{lem:log-Lip}
Let $u = \nabla^\perp\Delta^{-1}\omega$ with $\omega\in L^\infty(\mathbb T^2)$. Then, for any $x,x' \in \mathbb T^2$ with $|x-x'| < 1/2$, \begin{equation}\label{eq:log-Lip}
\begin{split}
|u(x)- u(x')| \le C\nrm{\omega}_{L^\infty} |x-x'| \ln \left(\frac{1}{|x-x'|}\right)~.
\end{split}
\end{equation}
\end{lemma}
The key technical tool is the following lemma (see Kiselev--Sverak \cite{KS}, Zlatos \cite{Z}, Elgindi--Jeong \cite{EJ2,EJSVP2}), which is in some sense complementary to the previous log-Lipschitz estimate. \begin{lemma}\label{lem:key}
Let $\omega \in L^\infty( \mathbb{T}^2 )$ be odd with respect to both axes. For any $x = (x_1,x_2)$ with $x_1,x_2 \in (0,1/2]$, we have \begin{equation}\label{eq:key}
\begin{split}
(-1)^j\frac{ u_j(x)}{x_j} = I(x)+ B_j(x)
\end{split}
\end{equation} where \begin{equation}\label{eq:I-def}
\begin{split}
I(x):= \frac{4}{\pi} \int_{[2x_1,1]\times[2x_2,1]} \frac{y_1y_2}{|y|^4} \omega(y) dy
\end{split}
\end{equation} and \begin{equation}\label{eq:B-est}
\begin{split}
\left| B_j(x) \right| \le C \nrm{\omega}_{L^\infty} \ln\left( 10 + \frac{x_{3-j}}{x_j} \right) , \qquad j = 1, 2.
\end{split}
\end{equation}
\end{lemma}
\begin{comment}
\subsection{Case of the infinite strip}
\begin{proof}
The initial data can be given explicitly; in polar coordinates, define on $[0,1]^2$ \begin{equation*}
\begin{split}
\omega_0(r,\theta) := \chi(r) \cdot \begin{cases}
1 & 0 \le \theta \le \pi/4 \\
\frac{4}{\pi} (\frac{\pi}{2} - \theta) & \pi/4 < \theta \le \pi/2
\end{cases}
\end{split}
\end{equation*} where $\chi(\cdot)$ is some smooth cutoff function equals $1$ on $[0,1/2]$ then vanishes outside of $[0,2/3]$. Then we extend $\omega_0$ as an odd function on $x_1$. To check that $\omega_0 \in W^{1,p}(\mathbb{H})$ for all $p<2$, it suffices to check $\partial_{x_i}\theta \in L^p(\mathbb{H})$ for $i = 1,2$: in the case of $i = 1$, \begin{equation*}
\begin{split}
\int_{\mathbb{H}} |\partial_{x_1}\theta |^p \le C \int\int \left| \frac{-\sin\theta}{r} \right|^p r d\theta dr \le C \int_0^1 \frac{1}{r^{p-1}} dr < + \infty~.
\end{split}
\end{equation*}
To check the blow-up statement one first runs an exactly same argument as in Theorem \ref{thm:odd_odd_Hoelder} with the above initial data $\omega_0$, to deduce in particular that for each $0\le t\le T^*$ and $0 < x_1 < \delta(t)$ small enough, \begin{equation*}
\begin{split}
\omega_t(x_1, x_1^{\alpha(t)}) \equiv 1~.
\end{split}
\end{equation*} Then, since the solution $\omega_t$ stays as a Lipschitz function away from the origin, we simply estimate for $0 < x_1,x_2 \ll 1$ \begin{equation*}
\begin{split}
\left(\int |\partial_{x_1} \omega_t(y,x_2)|^p dy\right)^{1/p} x_2^{1/(\alpha(t) p')} \ge \int_0^{x_2^{1/\alpha}} \partial_{x_1} \omega_t(y,x_2)dy = \omega_t(0,x_2) - \omega_t(x_2^{1/\alpha(t)},x_2) = 1~,
\end{split}
\end{equation*} where $1/p+1/p' = 1$. Integrating in $x_1$, \begin{equation*}
\begin{split}
\int\int |\partial_{x_1} \omega_t(y_1,y_2)|^pdy_1dy_2 \ge \int_0^{\delta(t)} y_2^{-p/(\alpha(t)p')} dx_2 = +\infty
\end{split}
\end{equation*} as long as $p \ge 1+\alpha(t)$. This finishes the proof.
\end{proof}
\end{comment}
\subsection{Proof of Theorem \ref{thm:torus}}
In the proof, we shall take the initial vorticity $\omega_0\in L^\infty$ to be non-negative on $[0,1]^2$ and to have odd symmetry with respect to both axes; that is, \begin{equation*}
\begin{split}
\omega_0(x_1,x_2) = -\omega_0(-x_1,x_2) = -\omega_0(x_1,-x_2), \qquad x_1, x_2 \in \mathbb T = [-1,1).
\end{split}
\end{equation*} The unique solution retains both the sign property and odd symmetry. For this reason, it suffices to specify the solution (and the data) only in the region $[0,1]^2$. Moreover, we shall normalize $\nrm{\omega_0}_{L^\infty}=1$.
Our initial data on $[0,1]^2$ will be even symmetric with respect to $x_1=x_2$, and in the region $0<x_1<x_2$, we define using polar coordinates {with a small $\beta>0$ to be specified below:} \begin{equation*}
\begin{split}
\omega_0(r,\theta) = \chi(r) \begin{cases}
r^{-\beta}\theta &\quad 0\le\theta <r^\beta,\\
1 &\quad r^\beta\le \theta.
\end{cases}
\end{split}
\end{equation*} Here, $\chi(r)$ is a smooth cutoff function satisfying $\chi(r)=1$ for $0<r<\frac{1}{2}$. Using \eqref{eq:Lp-def}, it is easy to see that $\omega_0 \in W^{1,p}$ if and only if $p<1+\frac{1}{1+\beta}$. Therefore, we {may define} $\beta$ to satisfy $p_0 = 1+\frac{1}{1+\beta}$, where $p_0<2$ is given in the statement of Theorem \ref{thm:torus}.
To begin with, {we introduce the flow map: given $x$, $\Phi(t,x)$ is defined by the solution to the ODE \begin{equation*}
\begin{split}
\frac{d}{dt}\Phi(t,x) = u(t,\Phi(t,x)), \qquad \Phi(0,x)=x.
\end{split}
\end{equation*}
}
Now, integrating \eqref{eq:log-Lip} in time, one can obtain the following flow estimate which is valid for $0<t<t_0$ and $|x-x'|<\frac{1}{4}$ with some absolute constants {$t_0, c>0$}: \begin{equation*}
\begin{split}
|x-x'|^{1+ct} \le |\Phi(t,x)-\Phi(t,x')|\le |x-x'|^{1-ct}
\end{split}
\end{equation*} and in particular, by taking $x' = 0$, \begin{equation}\label{eq:radial-bds}
\begin{split}
|x |^{1+ct} \le |\Phi(t,x) |\le |x |^{1-ct}.
\end{split}
\end{equation}
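For completeness, let us sketch how these bounds are obtained (normalising $\nrm{\omega}_{L^\infty}=1$): setting $\delta(t) = |\Phi(t,x)-\Phi(t,x')|$, the estimate \eqref{eq:log-Lip} gives $|\frac{d}{dt}\delta| \le C\delta\ln\frac{1}{\delta}$, hence $|\frac{d}{dt}\ln\ln\frac{1}{\delta}| \le C$ and \begin{equation*}
\begin{split}
e^{-Ct}\ln\frac{1}{|x-x'|} \le \ln\frac{1}{\delta(t)} \le e^{Ct}\ln\frac{1}{|x-x'|}.
\end{split}
\end{equation*} Since $|x-x'|<1$, the elementary bounds $e^{-Ct} \ge 1-Ct$ and $e^{Ct}\le 1+ct$ (valid for $t\le t_0$ with $c$ depending only on $C$ and $t_0$) convert this into the displayed estimates.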
We now prove that for $0<t<t_0$, $\omega(t)\equiv 1$ on the region $$\mathbb A:=\{ (r,\theta): 0<r<\delta_0, \frac{\pi}{20}<\theta<\frac{\pi}{4} \}$$ for $t_0$ and $\delta_0$ taken sufficiently small. In the following we shall take them (in a way depending only on other absolute constants) smaller whenever it becomes necessary. Towards a contradiction, assume that there is a point $x_0\in[0,1]^2$ with $\omega_0(x_0)<1$ and $0<t<t_0$ such that $\Phi(t,x_0)\in \mathbb A$. For $\omega_0(x_0)<1$ to hold, $x_0=(r_0,\theta_0)$ (in polar coordinates) must satisfy at least one of the following: (i) $r_0>\frac{1}{2}$, (ii) $\theta_0<r_0^\beta$, (iii) $\frac{\pi}{2} - \theta_0<r_0^\beta$. By arranging $\delta_0,t_0$ small, we can exclude the possibility of (i) using \eqref{eq:radial-bds}. Now assume (ii) holds. We may further assume that $r_0<\sqrt{\delta_0}$ by taking $t_0$ smaller if necessary. Writing $\Phi(t,x_0) = (r(t),\theta(t))$ in polar coordinates and applying \eqref{eq:log-Lip} with $x = \Phi(t,x_0)$ and $x' = (r(t),0)$, \begin{equation*}
\begin{split}
|u^\theta(t,\Phi(t,x_0))|\le Cr(t)\theta(t) \ln \left( \frac{1}{r(t)\theta(t)}\right).
\end{split}
\end{equation*} Here $u^\theta = u \cdot e^\theta$ is the angular component of the velocity and we have used that it vanishes on the axes, due to the odd symmetry of $\omega$. Hence we can bound \begin{equation*}
\begin{split}
\left|\frac{d}{dt} \theta(t) \right| = \frac{|u^\theta(t,\Phi(t,x_0))|}{|\Phi(t,x_0)|} \le C\theta(t) \ln \left( \frac{1}{r(t)\theta(t)}\right)\le C\theta(t) \ln \left( \frac{1}{r_0^{1+ct}\theta(t)}\right).
\end{split}
\end{equation*} Now we may take $t_0$ small enough that $1+ct<2$ for all $t\le t_0$ and then \begin{equation*}
\begin{split}
\frac{d}{dt} \ln\left( \frac{1}{\theta(t)} \right) \ge -C\left( \ln\left(\frac{1}{r_0}\right) + \ln\left(\frac{1}{\theta(t)}\right) \right),
\end{split}
\end{equation*} which gives \begin{equation*}
\begin{split}
\ln\left( \frac{1}{\theta(t)} \right) & \ge e^{-Ct} \ln\left( \frac{1}{\theta_0} \right) + (e^{-Ct}-1) \ln\left( \frac{1}{r_0} \right) \ge (1-Ct) \ln\left( \frac{1}{\theta_0} \right) -Ct \ln\left( \frac{1}{r_0} \right) \\
&\ge (1-Ct-\frac{Ct}{\beta}) \ln\left( \frac{1}{\theta_0} \right),
\end{split}
\end{equation*} where in the last step we have used that $\theta_0< r_0^\beta$. Therefore we can guarantee that for all $t\le t_0$ and $r_0<\delta_0$, \begin{equation*}
\begin{split}
\theta(t) \le \theta_0^{1-Ct(1+\frac{1}{\beta})} < r_0^{\beta(1-Ct(1+\frac{1}{\beta}) )} < \delta_0^{\frac{\beta}{2}(1-Ct(1+\frac{1}{\beta}) )} <\frac{\pi}{20}
\end{split}
\end{equation*} by taking $t_0,\delta_0$ smaller if necessary. This gives the desired contradiction. The proof that (iii) is impossible can be done similarly.
Let us follow the trajectory of the {curve} $\{ (r_0,\frac{\pi}{2}-r_0^\beta)_* : 0<r_0< \delta_0^4 \}$, on which $\omega_0\equiv 1$. {Here and in the following, let us use the polar coordinates with notation $(r,\theta)_* = (r\cos\theta, r\sin\theta)$. For convenience, let us define}
$$(r(t),{\theta}(t))_* :=\Phi(t,(r_0,\frac{\pi}{2}-r_0^\beta)_*).$$ We can assume that on $[0,t_0]$, $r(t)<\delta_0^2$ for any $r_0<\delta_0^4$. Moreover, on the diagonal $\{ (a,a) : a < {\delta_0^2} \}$, we may compute that \begin{equation*}
\begin{split}
I(t,(a,a)) &= \frac{4}{\pi}\int_{ [2a,1]\times[2a,1]} \frac{y_1y_2}{|y|^4} \omega(t,y)dy \ge \frac{4}{\pi}\int_{\mathbb A \cap ([{2\delta_0^2},1]\times[ {2\delta_0^2},1] ) } \frac{y_1y_2}{|y|^4}dy \\
&\ge c \int_{2\delta_0^2}^{\delta_0} \int_{2\delta_0^2}^{y_1} \frac{y_2}{|y_1|^3} dy_2 dy_1 \ge c\ln\frac{1}{\delta_0} - C,
\end{split}
\end{equation*} {with some constants $c, C>0$,} using that $\mathbb A \cap ([{2\delta_0^2},1]\times[ {2\delta_0^2},1] ) $ contains the region $ \{ (y_1,y_2): 2\delta_0^2<y_1<\delta_0, 2\delta_0^2<y_2<y_1 \}$. Hence, for $\delta_0>0$ sufficiently small, the term $I(t,(a,a))$ can {``dominate''} $B_j(t,(a,a))$ for $j = 1, 2$ in \eqref{eq:key}. {To be precise, taking $\delta_0>0$ smaller if necessary, it follows from the error estimate \eqref{eq:B-est} with $x = (a,a)$ that \begin{equation*}
\begin{split}
|B_j(t,(a,a))| \le \frac{1}{10}I(t,(a,a)), \qquad j=1, 2.
\end{split}
\end{equation*}} This implies that the velocity is pointing northwest on the diagonal, which guarantees in particular that $ {\theta}(t) > \frac{\pi}{4}$. Now, along the trajectory $(r(t),\theta(t))_*$, we compute using \eqref{eq:key} that \begin{equation*}
\begin{split}
\dot{\theta}(t) = \frac{u^\theta(t,(r(t),\theta(t))_*)}{|r(t)|} & = \frac{1}{2}\sin(2\theta(t))\left( 2I + B_1 + B_2 \right) \\
& \ge \sin(2\theta(t))\left( I(t,(r(t),\theta(t))_*) - C(1+\ln \frac{1}{\frac{\pi}{2}-\theta(t)}) \right).
\end{split}
\end{equation*} Introducing $\bar{\theta}(t) = \frac{\pi}{2}-\theta(t)$, the above inequality becomes \begin{equation}\label{eq:lb}
\begin{split}
\dot{\bar{\theta}}(t) \le - 2\bar{\theta}(t) \left( {I(t,(r(t),\theta(t))_*) } - C(1 + \ln\frac{1}{\bar{\theta}(t)}) \right).
\end{split}
\end{equation} We now estimate {$I(t,(r(t),\theta(t))_*)$} from below, using $\omega(t) \equiv 1$ on $\mathbb A$. Note that \begin{equation*}
\begin{split}
\mathbb A \cap ([{2r(t)\cos(\theta(t))},1]\times[ {2r(t)\sin(\theta(t))},1] )\supset \{ (y_1,y_2): 3r(t)<y_1<\delta_0, 3r(t)<y_2<y_1 \}
\end{split}
\end{equation*} and using $r(t)\le r_0^{1-ct}$, \begin{equation*}
\begin{split}
I(t,(r(t),\theta(t))_*) \ge c(1-ct)\ln\left( \frac{1}{r_0} \right)= \frac{c}{\beta}(1-ct)\ln\left( \frac{1}{\bar{\theta}_0} \right)
\end{split}
\end{equation*} where the constant $c>0$ depends on $\delta_0$. Hence, as long as we have \begin{equation}\label{eq:ansatz}
\begin{split}
\frac{c}{\beta}(1-ct)\ln\left( \frac{1}{\bar{\theta}_0} \right) > 2C(1 + \ln\frac{1}{\bar{\theta}(t)})
\end{split}
\end{equation} where $C>0$ is the constant from \eqref{eq:lb}, we have \begin{equation*}
\begin{split}
\dot{\bar{\theta}}(t) \le -\frac{c}{\beta} {(1-ct)}\bar{\theta}(t) \ln \frac{1}{\bar{\theta}_0}
\end{split}
\end{equation*} for $t\le t_0$ by taking $t_0$ smaller and therefore \begin{equation*}
\begin{split}
\bar{\theta}(t) \le \bar{\theta}_0^{1+\frac{ct}{\beta}} = r_0^{\beta+ct} \le (r(t))^{(1-ct)(\beta+ct)}.
\end{split}
\end{equation*} Note that for $\beta>0$ sufficiently small, \eqref{eq:ansatz} is satisfied for $t\in[0,t_0]$, again by taking $t_0$ smaller {to satisfy $ct\le ct_0<\frac18$,} if necessary. Moreover, {$(1-ct)(\beta+ct) \ge \beta + c_0t$ with $c_0 = \frac{c}{8}$ if $0<\beta<\frac12$ and $ct_0<\frac18$}. Recall that \begin{equation*}
\begin{split}
p_0 = 1 + \frac{1}{1+\beta} > p^*
\end{split}
\end{equation*} and by taking $p^*<2$ sufficiently close to $2$, we can guarantee \eqref{eq:ansatz}. Recalling that $\omega(t,\cdot)\equiv 1$ at $(r(t),\frac{\pi}{2}-\bar{\theta}(t))_*$ and applying Lemma \ref{lem:lvl-Sob} gives that \begin{equation*}
\begin{split}
\omega(t)\notin W^{1,q(t)}(\mathbb T^2),\quad q(t) = 1 + \frac{1}{1+\beta+c_0t} .
\end{split}
\end{equation*} This finishes the proof.
\subsection{Discussions}\label{subsec:disc}
Let us close the paper with commenting on related issues that we have not touched upon so far.
\subsubsection*{Loss of regularity with $H^1$ initial vorticity.} It is not clear to us whether it is possible for $\omega_0\in H^1\cap L^\infty$ to lose $W^{1,p}$ regularity with time. Let us briefly discuss the main difficulty. Note that as a consequence of Lemma \ref{lem:lvl-Sob}, $\omega_0 \notin H^1(\mathbb T^2)$ if $\omega_0$ takes on different constant values along two half-lines emanating from the origin. That is, insisting on the odd-odd scenario forces the vorticity to vanish near the origin. A natural choice, which was used in \cite{EJ}, is to take roughly $\omega_0(r,\theta)\sim (\ln\frac{1}{r})^{-\gamma}\sin(2\theta)$. Then one can check that the corresponding velocity gradient satisfies $|\nabla u_0(r,\theta)|\sim (\ln\frac{1}{r})^{1-\gamma}$, and even the passive transport with $u_0$ is not strong enough to remove the vorticity from $W^{1,p}$ with any $p<2$.
After the completion of this work, we have learned about an interesting recent paper \cite{Hung} which proves that (see Theorem 3.1 therein) for $\omega_0\in H^1\cap C^0$, we have $\omega(t) \in W^{1,p} \cap W^{\alpha,2}$ for any $p<2$ and $\alpha<1$. That is, continuity (opposed to mere boundedness) of the vorticity makes the situation completely different.
\subsubsection*{Loss of regularity in negative H\"older spaces.}
Another natural question to ask is whether there exists initial vorticity $\omega_0\in C^{-\alpha}(\mathbb T^2)$ with a solution $\omega(t) \notin C^{-\alpha}(\mathbb T^2)$ for $t>0$. There are serious difficulties in proving such a statement; now the initial vorticity is necessarily unbounded and the uniqueness of solutions is not guaranteed (see very recent progress in \cite{V1,V2,Bre,BV,Elling2016}). On the other hand, all $L^{p,q}$ norms of the vorticity are preserved for any solution, which makes it hard to lose H\"older regularity.
{\subsubsection*{Loss of regularity for active scalars.}
It would be an interesting problem to extend the loss of regularity to active scalar equations, most notably to the SQG (surface quasi-geostrophic) equation. In this case, an analogue of the Yudovich theorem is not available and therefore one should develop a new strategy.
}
\subsection*{Acknowledgement}
\noindent IJ has been supported by the New Faculty Startup Fund from Seoul National University, the Science Fellowship of POSCO TJ Park Foundation, and the National Research Foundation of Korea grant (No. 2019R1F1A1058486). {We sincerely thank the anonymous referees for their kind words and numerous suggestions which significantly improved the readability of the paper.} The author states that there is no conflict of interest.
\bibliographystyle{amsplain}
\section{Key concepts}
\label{Sec_Intro}
The preliminary material covered in this section draws from the work of Bombin~\cite{bombin2015single,bombin2016resilience} and was influenced by Brueckmann's thesis~\cite{breuckmann2018phd}, though our presentation is less topological and has some new ideas.
\subsection{Stabiliser codes}
An $n$-qubit error correcting code storing $k$ logical qubits can be represented by a projector $\Pi$ onto the codespace. Stabiliser codes are an important class where $\Pi$ can be described in terms of the code stabiliser $\mathcal{S}$. That is, $\mathcal{S}$ is an abelian subgroup of the Pauli group such that for all $S \in \mathcal{S}$ we have $S \Pi = \Pi S = \Pi$. To perform error correction we measure some set of checks $\mathcal{M} \subset \mathcal{S}$ that generate $\mathcal{S}$ under multiplication. We require that $\mathcal{M}$ suffices to generate the whole stabiliser of the codespace but we allow for the possibility of $\mathcal{M}$ being overcomplete. We define the weight $\mathrm{wt}(\cdot)$ of a Pauli operator $P$ as the number of qubits on which $P$ acts nontrivially (the identity is the only trivial Pauli). Given a family of check sets $\mathcal{M}_n$ with index $n$, which we will call a \textit{check family}, we find there is a corresponding code family $\Pi_n$. For a given code family, there may be many different choices of check family, so many statements are more precisely defined with respect to check families. For instance, we have a notion of low-density parity check (LDPC) codes and we say a check family is LDPC if there exists a constant $C$ such that for every $n$
\begin{enumerate}
\item For all $S \in \mathcal{M}_n$ we have $\mathrm{wt}(S)\leq C$;
\item For every physical qubit in the code, there are no more than $C$ checks in $\mathcal{M}_n$ that act non-trivially on that qubit.
\end{enumerate}
It is crucial that the constant $C$ is the same for every member of the family. One practical consequence is that for codes with an LDPC check family, the complexity of measuring checks does not increase with the code size. Crudely, one can say a code family is LDPC if there exists at least one corresponding LDPC check family. Note that topological code families are always LDPC.
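As an elementary illustration (ours, not part of the formal development), the two LDPC conditions above are mechanical to test when each check is represented by its support:
\begin{verbatim}
# Sketch (ours): testing the two LDPC conditions for checks given as
# supports, i.e. sets of qubit indices on which each check acts.
def is_ldpc(check_supports, n_qubits, C):
    # Condition 1: every check has weight at most C.
    if any(len(s) > C for s in check_supports):
        return False
    # Condition 2: every qubit is touched by at most C checks.
    counts = [0] * n_qubits
    for s in check_supports:
        for q in s:
            counts[q] += 1
    return all(c <= C for c in counts)

# Toy example: 3-qubit repetition code checks Z1Z2, Z2Z3 with C = 2.
assert is_ldpc([{0, 1}, {1, 2}], n_qubits=3, C=2)
\end{verbatim}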
Also important is the code distance $d_Q$. We use the subscript $Q$ to distinguish this from the single-shot distance (denoted $d_{ss}$) that we define later. The distance $d_Q$ is simply the minimum $\mathrm{wt}(P)$ over all $P$ such that $P \Pi = \Pi P$ but $P \notin \mathcal{S}$. It is useful to also define the min-weight $\ensuremath{\mathrm{wt}}_{\mathrm{min}}$ of a Pauli operator, which is
\begin{equation}
\ensuremath{\mathrm{wt}}_{\mathrm{min}} (P) := \min \{ \mathrm{wt} (PS) : S \in \mathcal{S} \}.
\end{equation}
To summarise, an $[[n,k,d_Q ]]$ code has parameters $n$ (number of physical qubits), $k$ (number of logical qubits) and $d_Q$ (qubit code distance).
The measurement syndrome is the result of measuring $\mathcal{M}=(M_1, M_2, \ldots , M_m)$. Given a physical Pauli error $E$ we can denote $\sigma(E)$ as the syndrome due to $E$ assuming perfect measurements. We use the convention that $\sigma(E)$ is a binary column vector with elements
\begin{equation}
\label{Eq_syndrome_def}
[\sigma(E)]_i = \begin{cases}
1 & \mbox{ if } E M_i = - M_i E \\
0 & \mbox{ if } E M_i = M_i E \\
\end{cases}
\end{equation}
We will be interested in the weight of the syndrome and always use $| \ldots |$ to denote the Hamming weight of binary vectors. The Hamming weight is the number of nonzero elements.
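In the standard binary symplectic representation (used here only for illustration), a Pauli operator on $n$ qubits is a pair of bit-vectors $(x,z)$, and $E$ anticommutes with $M_i$ precisely when the symplectic product $x_E\cdot z_{M_i}+z_E\cdot x_{M_i}$ is odd, so \eqref{Eq_syndrome_def} can be computed as in the following sketch (ours):
\begin{verbatim}
# Sketch (ours): computing the syndrome in the binary symplectic
# representation, where a Pauli is a pair (x-bits, z-bits).
def symplectic_product(p, q):
    (x1, z1), (x2, z2) = p, q
    return (sum(a * b for a, b in zip(x1, z2))
            + sum(a * b for a, b in zip(z1, x2))) % 2

def syndrome(error, checks):
    return [symplectic_product(error, m) for m in checks]

# 3-qubit repetition code: checks Z1Z2 and Z2Z3. An X error on the
# middle qubit anticommutes with both, giving syndrome (1, 1).
Z1Z2 = ([0, 0, 0], [1, 1, 0])
Z2Z3 = ([0, 0, 0], [0, 1, 1])
X2 = ([0, 1, 0], [0, 0, 0])
assert syndrome(X2, [Z1Z2, Z2Z3]) == [1, 1]
\end{verbatim}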
\subsection{Single-shot error correction}
\label{Intro_soundness}
A decoder is an algorithm that takes a measurement syndrome $s \in \mathbb{Z}_2^m$ and outputs a recovery Pauli operator $E_{\mathrm{rec}}$. We model measurement errors as introducing an additional syndrome vector $u$ so that we physically observe the syndrome $s=\sigma(E)+u$ where $E$ is the physical error. Good decoder design would ensure that, given $s$, the recovery is such that the residual error $E_{\mathrm{rec}} E$ has low min-weight. We propose the following definition
\begin{defin}[Single-shot error correction]
\label{single_shot_def}
Let $p$ and $q$ be integers and $f : \mathbb{Z} \rightarrow \mathbb{R}$ be some function with $f(0)=0$. We say a check set is $(p, q, f)$ single-shot if there exists a decoder such that for all $u$ and $E$ such that
\begin{enumerate}
\item $|u| < p$ ; and
\item $f(2|u|) + \mathrm{wt}(E) < q $
\end{enumerate}
the decoder takes syndrome $s=\sigma(E)+u$ and outputs recovery operation $E_{\mathrm{rec}}$ such that $\ensuremath{\mathrm{wt}}_{\mathrm{min}}(E_{\mathrm{rec}} \cdot E) \leq f(2 |u|)$.
\end{defin}
This captures all instances of single-shot error correction known to the author. We are interested in good cases where $p$ and $q$ are large and $f$ is in some sense small. A very bad case is when $p=1$ so that no measurement errors ($|u|<1$) can be tolerated. A more rigorous notion of good single-shot properties requires us to consider not just a single instance but an infinite check-family.
\begin{defin}[Good single-shot families] \label{GoodSSfamiliesDEF}
Consider an infinite check family $\mathcal{M}_n$ of $n$-qubit codes. We say the family is a good single-shot family if each $\mathcal{M}_n$ is $(p ,q , f)$ single-shot where
\begin{enumerate}
\item $p$ and $q$ grow with $n$ such that $p,q \geq a n^b$ for some positive constants $a,b$. That is, $p,q \in \Omega( n^b )$ with $b>0$;
\item and $f(x)$ is some polynomial that is monotonically increasing with $x$ and independent of $n$.
\end{enumerate}
\end{defin}
We need $p$ and $q$ to grow so that we can tolerate more errors as the code size grows. We want $f$ to be independent of $n$ so that the residual errors remain contained.
Single-shot error correction is defined for a single round but it is informative to see what the consequences are for $N$ rounds of error correction. We use a label $\tau \in \{ 1, \ldots , N \}$ for the round number. On round $\tau$, we denote $u_{\tau}$ for the measurement errors and $E_{\tau}$ for the new physical errors. We must combine $E_{\tau}$ with the residual error from the previous round $R_{\tau-1}$ to obtain the total error $E_{\tau}R_{\tau-1}$. For the $\tau^{\mathrm{th}}$ round to satisfy the conditions in Def.~\ref{single_shot_def} we need that $|u_{\tau}| < p$ and
\begin{equation}
f(2|u_{\tau}|) + \mathrm{wt}(E_{\tau} R_{\tau-1} ) < q .
\end{equation}
Assuming similar conditions were satisfied on the previous round, we may upper bound $\mathrm{wt}( R_{\tau-1} )$ using Def.~\ref{single_shot_def} and have
\begin{equation}
f(2|u_{\tau}|) + f(2|u_{\tau-1}|) + \mathrm{wt}(E_{\tau}) < q .
\end{equation}
Therefore, provided the measurement errors and new physical errors are small for every round, the residual error will be kept under control over many rounds and not grow in size.
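For a concrete feel of these bounds (our arithmetic, using the soundness function $f(x)=x^3/4$ that appears later in Thm.~\ref{THM_construct}): if every round obeys $|u_\tau| \le u$ and $\mathrm{wt}(E_\tau)\le w$, the sustainability condition becomes \begin{equation*}
f(2u)+f(2u)+w = 4u^3 + w < q ,
\end{equation*} and the residual error after every round remains bounded by $f(2u)=2u^3$, uniformly in the number of rounds $N$.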
The above definition of single-shot error correction is difficult to analyse since it contains the clause ``if there exists a decoder'' and there are many possible decoders. Therefore, we also consider a complementary concept called soundness which will be shown to entail single-shot error correction. Roughly, this extra property is that for low-weight syndromes there exists a low-weight physical error producing the syndrome. More formally,
\begin{defin}[Soundness] \label{RoughssDEF}
Let $t$ be an integer and $f : \mathbb{Z} \rightarrow \mathbb{R}$ be some function called the soundness function with $f(0)=0$. Given some set of Pauli checks $\mathcal{M}$, we say it is $( t, f)$-sound if for all Pauli errors $E$ with $| \sigma(E) | = x < t$, it follows that there exists an $E^\star$ with $\sigma(E^\star) = \sigma(E)$ such that $\ensuremath{\mathrm{wt}}(E^\star) \leq f(x)$.
\end{defin}
The phrase soundness comes from the literature on locally testable codes~\cite{aharonov2015quantum,hastings2016quantum}. In particular, the above definition is similar to Def 14 of Ref.~\cite{aharonov2015quantum} though this earlier work did not allow for the $| \sigma(E) | < t$ clause.
Again, good soundness would mean ``small'' $f$. More rigorously, we define the following notion of goodness
\begin{defin}[Good soundness] \label{GoodssDEF}
Consider an infinite check family $\mathcal{M}_n$. We say the family has good soundness if each $\mathcal{M}_n$ is $(t, f)$-sound where:
\begin{enumerate}
\item $t$ grows with $n$ such that $t \geq a n^b$ for some positive constants $a,b$. That is, $t \in \Omega( n^b )$ with $b>0$;
\item and $f(x)$ is some polynomial that is monotonically increasing with $x$ and independent of $n$.
\end{enumerate}
\end{defin}
The intuition behind $f$ being a polynomial is that we are formalising an algebraic version of an area or volume law that is encountered in topological codes. For instance, in the classical 2D Ising model we know that the area within a boundary follows a quadratic scaling (you may wish to look ahead to Fig.~\ref{Fig_ToricIsing}b3). Ultimately, $f$ will govern the size of residual errors after performing single-shot error correction, so we do not want it to grow with the number of qubits. In contrast, $t$ captures the scale at which this boundary law breaks down and so it must grow with the code size to enable single-shot error correction of larger errors as the code grows.
It is clear that not all check families have good soundness. For 2D toric codes with the standard choice of checks, an error violating only 2 checks can be of arbitrarily large size.
\subsection{Energy barriers}
\label{Intro_energy}
Energy barriers play an important role in the design of passive quantum memories~\cite{brown2016review,terhal2015review}. While passive quantum memories are a distinct topic from active single-shot error correction, the two topics are intertwined. Earlier work~\cite{aharonov2015quantum} has commented on the relationship between soundness and energy barriers, though they used a more restrictive notion of soundness. For a stabiliser code with checks $\mathcal{M}$ we define a Hamiltonian
\begin{equation}
H =- \sum_{S \in \mathcal{M}} S .
\end{equation}
We are interested in walks of quantum states $W=\{ \psi_0 , \psi_1 , \psi_2, \ldots , \psi_L \}$ that fulfil
\begin{enumerate}
\item groundstates: $\psi_0 $ and $\psi_L$ are groundstates of $H$;
\item orthogonality: $\psi_0 $ and $\psi_L$ are orthogonal;
\item local errors: for every $j \in [1,L]$ there exists a single-qubit Pauli $P_j$ such that $\ket{\psi_j}=P_j \ket{\psi_{j-1}}$.
\end{enumerate}
For every such walk we associate an energy penalty
\begin{equation}
ep(W) = \mathrm{max}_{\psi_j \in W} \bra{\psi_j} H \ket{\psi_j} - E_{gs} ,
\end{equation}
where $E_{gs}$ is the ground state energy. The energy barrier of check set $\mathcal{M}$ and associated Hamiltonian is then the minimum $ep(W)$ over all walks $W$ satisfying the above conditions. Less formally, the energy barrier is the minimum energy required to go from one ground state to another.
Every quantum code will have some size energy barrier. We are really interested in the scaling with code size. Given an infinite check family $\mathcal{M}_n$ of $n$-qubit codes, if the energy barrier scales as $\Omega(n^c)$ for some positive constant $c$, then we say the family has a \textit{macroscopic} energy barrier.
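As a standard illustration, consider the $n$-qubit quantum repetition code with checks $\mathcal{M}_n = \{ Z_i Z_{i+1} : 1 \le i \le n-1 \}$. Flipping qubits one at a time from left to right takes $\ket{0^n}$ to the orthogonal groundstate $\ket{1^n}$ while violating at most one check at each intermediate step, so this walk has $ep(W)=2$; the energy barrier is therefore constant in $n$ and this check family does not have a macroscopic energy barrier.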
\subsection{Measurement redundancy and single-shot distance}
We have allowed for some redundancy so that the checks $\mathcal{M}$ may be overcomplete. This is pivotal for us to capture the single-shot properties of the 4D toric codes since they are only known to exhibit good soundness when an overcomplete set of checks is used. We quantify the amount of redundancy in a measurement scheme as the ratio between the number of measurements performed and the minimum number required to generate the stabiliser of the code, and use $\upsilon$ to denote this ratio. Good soundness can always be achieved by allowing $\upsilon$ to grow with $n$ by simply repeating the same measurements. Rather, the most interesting cases are check families where $\upsilon$ is no more than a small constant factor. There may also be interesting intermediate cases where $\upsilon$ grows but slowly (e.g. sublinearly), though a constant factor is more desirable and is what we prove later in our constructions. Since topological codes can use redundancy to achieve good soundness, it is reasonable to ask whether redundancy is necessary for good soundness. We will see later that redundancy is not always essential for good soundness (see Thm.~\ref{THM_soundness_simple} and Sec.~\ref{Sec_Simple}). However, it seems that redundancy does play an important role when one attempts to marry good soundness with LDPC properties.
Check redundancy provides consistency conditions that one can inspect for evidence of measurement errors. These are checks on checks and we call them metachecks. They do not represent a physical measurement but classical postprocessing on the measurement outcomes. It is essentially a classical error correcting code that can be represented by a parity check matrix $H$. Given a binary string $s$ representing the outcome of syndrome measurements, we say $Hs$ is the metacheck syndrome, where $Hs$ is evaluated modulo 2. If there are no measurement errors then $s=\sigma(E)$ where $E$ is the physical error. Recall that we model measurement errors as introducing an additional error $u$ so that $s=\sigma(E)+u$. Since the metachecks are intended to look for measurement errors, we require that $H \sigma (E) =0$ for all $E$. It follows that the metasyndrome $H s = H(\sigma(E)+u)=Hu$ depends only on the measurement error $u$. There will always exist a maximal set of metachecks $H_{\mathrm{max}}$ such that $H_{\mathrm{max}}s = 0$ if and only if there exists an error $E$ such that $s=\sigma(E)$. However, we are flexible and allow for $H$ to contain fewer checks than $H_{\mathrm{max}}$, so that not all check redundancies are metachecked. While it might seem odd to not use the maximum information present, this occurs naturally in some local decoders for topological codes where local metachecks are used but non-local metachecks are ignored by the decoder (see for instance the discussion on ``energy-barrier limited decoding" in Ref.~\cite{breuckmann2016local}). Given a non-maximal set of meta-checks, there are syndromes $s$ that pass all metachecks ($Hs=0$) and yet there is no error $E$ satisfying $ s=\sigma(E)$. This motivates the following definition.
\begin{defin}[Single-shot distance]
For a code with checks $\mathcal{M}$ and metacheck matrix $H$ we define the single-shot distance as
\begin{equation}
d_{ss} = \mathrm{min} \{ | s | : H s =0 , s \notin \mathrm{im}(\sigma) \}.
\end{equation}
We use the convention that $d_{ss} = \infty$ if for all $s$ there exists some $E$ such that $s = \sigma(E)$.
\end{defin}
Here, $\mathrm{im}(\sigma)$ is the image of the map $\sigma$, which is the set of $s$ such that $s=\sigma(E)$ for some $E$. An equivalent definition is that $d_{ss}$ is the minimal weight $s$ such that $H s=0$ but $H_{\mathrm{max}}s \neq 0$.
The single-shot distance relates to how many measurement errors can be tolerated before a failure occurs that we call a metacheck failure. In a metacheck failure, the syndrome has no explanation in terms of qubit errors.
We remark that for any $\mathcal{M}$ we can always choose $H=H_{\mathrm{max}}$ and then $d_{ss}$ is infinite. However, sometimes a finite single-shot distance may be preferred to ensure that the metacheck decoding process can be implemented using a local decoder~\cite{breuckmann2016local}. For a code with metachecks we extend the notation $[[n,k,d_Q]]$ to $[[n,k,d_Q, d_{ss}]]$.
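As a toy illustration (ours): suppose three checks obey the redundancy relation $M_1 M_2 M_3 = I$. Commutation signs multiply, so every error syndrome satisfies $[\sigma(E)]_1+[\sigma(E)]_2+[\sigma(E)]_3 = 0 \pmod 2$ and $H=(1 \, 1 \, 1)$ is a valid metacheck row. An observed syndrome $s=(1,0,0)^\top$ then fails the metacheck, $Hs=1$, flagging at least one measurement error even though no single measurement outcome is suspicious on its own.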
\section{Summary of results}
\label{Sec_summary}
Here we prove the following:
\begin{theorem}[Single-shot success]
\label{THM_main}
Consider a quantum error correcting code with parameters $[[n,k,d_Q, d_{ss}]]$ and check set that is $( t, f)$-sound. It is also $(p,q,f)$ single-shot where
\begin{align}
p & = \frac{1}{2} \mathrm{min}[d_{ss}, t] \\
q & = d_{Q}/2.
\end{align}
\end{theorem}
For the above bounds to be useful, the code must have a soundness function $f$ that is fairly gentle (e.g. some polynomial). The proof is mostly linear algebra and is given in Sec.~\ref{Sec_Conditions}.
Our second result is an observation on the connection between soundness and energy barriers.
\begin{theorem}
\label{THM_soundness_implies}
Any LDPC check family with good soundness and code distance $d_Q$ growing as $\Omega(n^c)$ for some constant $0 < c$ will also have a macroscopic energy barrier.
\end{theorem}
This is proved in Sec.~\ref{Sec_Energy}. We remark that Aharonov and Eldar made a similar observation~\cite{aharonov2015quantum} though using a much stronger notion of soundness. Since Bravyi and Terhal proved that no 2D topological code can have a macroscopic energy barrier~\cite{bravyi2009no}, it follows immediately that
\begin{corollary}
\label{No_go_corollary} Any 2D topological check family with code distance $d_Q$ growing as $\Omega(n^c)$ for some constant $0 < c$ will not have good soundness.
\end{corollary}
We thank Michael Beverland for pointing out that this corollary follows directly from Thm.~\ref{THM_soundness_implies} and the Bravyi and Terhal result.
Next, we show that
\begin{theorem}
\label{THM_soundness_simple}
For any $n$-qubit quantum error correcting code we can find a set of checks generating the code stabiliser (without any redundancy) such that these checks are $( \infty, f(x)=x)$-sound.
\end{theorem}
The proof is elementary and given in Sec.~\ref{Sec_Simple}. While this is a simple result, it carries important implications for our understanding of soundness. It shows that any code family can be bestowed with good soundness by appropriate choice of checks, but in the process the LDPC property may be lost. Therefore, the interesting question is for which code families we can find checks that are simultaneously LDPC and of good soundness.
Our last main result is a recipe for quantum codes with the required properties. We show that
\begin{theorem}[Construction of single-shot codes]
\label{THM_construct}
Given a classical error correcting code with parameters $[n,k,d]$ we can construct a quantum error correcting code with parameters $[[n_Q, k^4, d_Q \geq d, d_{ss}=\infty ]]$ where
\begin{align}
n_Q & = n^4 + 4 n^2 (n-k)^2 + (n-k)^4 .
\end{align}
Furthermore, the resulting checks are $( d , f)$-sound and also $(\frac{d}{2} , \frac{d}{2} ,f)$ single-shot, with $f(x)=x^3 / 4$ or better. The check redundancy is bounded as $\upsilon < 2$. Given a classical LDPC check family, this construction gives a quantum LDPC check family. Given a classical check family where $d \in \Omega(n^a)$, we have a good single-shot family.
\end{theorem}
We remark that the distance bound $d_Q \geq d$ and the soundness properties are loosely bounded. Indeed, very recently Zeng and Pryadko~\cite{zeng2018higher} considered the same code family and showed that $d_Q = d^2 $.
Before giving the proof of Thm.~\ref{THM_construct}, we establish how the mathematics of homology theory and chain complexes can be used to define quantum codes with metachecks. As such, we provide a pedagogical interlude in Sec.~\ref{Sec_Homology_theory} that introduces this correspondence. The proof is then given in Sec.~\ref{Sec_Constructions} and uses the homological product on chain complexes. Where possible we have converted abstract homological proofs into linear algebra. The constructions of Thm.~\ref{THM_construct} will emerge as a simple, special case of the techniques explored in Sec.~\ref{Sec_Constructions}, and we will see that codes with finite single-shot distance are also possible. An important metric is the encoding rate, the number of logical qubits per physical qubit $k_Q/n_Q$. The expressions for the inverse rate are neater to write
\begin{align}
\frac{n_Q}{k_Q} & = \frac{n^4 + 4 n^2 (n-k)^2 + (n-k)^4 }{k^4} \\ \nonumber
& = 6 \left( \frac{n}{k} \right)^4 - 12 \left( \frac{n}{k} \right)^3 + 10 \left( \frac{n}{k} \right)^2 - 4 \left( \frac{n}{k} \right) + 1 .
\end{align}
From this, it is clear that any family of classical codes with bounded inverse rate $n/k \leq A$ will yield a family of quantum codes with bounded inverse rate $n_Q/k_Q \leq A_Q \in O( A^4 )$.
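As a worked instance (our arithmetic; this particular code is not analysed in the paper), taking the classical Hamming code with $[n,k,d]=[7,4,3]$ in Thm.~\ref{THM_construct} gives \begin{equation*}
n_Q = 7^4 + 4\cdot 7^2\cdot 3^2 + 3^4 = 2401 + 1764 + 81 = 4246, \qquad k_Q = 4^4 = 256,
\end{equation*} that is, a $[[4246, 256, d_Q \geq 3, d_{ss}=\infty ]]$ code with rate $k_Q/n_Q \approx 0.06$.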
\section{Conditions for successful single-shot error correction}
\label{Sec_Conditions}
This section proves that soundness leads to single-shot error correction as stated in Thm.~\ref{THM_main}. Our analysis will use a minimum weight decoder defined as follows:
\begin{defin}[MW single-shot error decoding] \label{Def_SS_decode}
Given measurement outcomes $s = \sigma(E) +u$, a minimum weight decoder performs the following 2 steps
\begin{enumerate}
\item Syndrome decode: find $s_{rec}$ with minimal $|s_{rec}|$ such that $s+s_{rec}$ passes all metachecks (so $H(s+s_{rec})=0$);
\item Qubit decode: find $E_{rec}$ with minimal $\ensuremath{\mathrm{wt}}( E_{rec})$ such that $\sigma( E_{rec} )=s + s_{rec}$;
\end{enumerate}
We call $R= E \cdot E_{rec}$ the residual error.
\end{defin}
This is the most common notion of weight minimisation and for instance was suggested by Bombin~\cite{bombin2015single}. Other decoders may correct more errors or may be more efficient to implement. However, the minimum weight decoder is especially useful in the following analysis.
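To make Def.~\ref{Def_SS_decode} concrete, the following is a minimal brute-force sketch in Python (our own illustrative helper names; exponential-time and suitable for toy sizes only). It specialises to bit-flip errors on a code whose syndrome map is a binary matrix delta0 and whose metacheck matrix is delta1, anticipating the notation of Sec.~\ref{Sec_Homology_theory}:
\begin{verbatim}
import numpy as np
from itertools import combinations

def min_weight_solution(A, target):
    # Smallest-weight binary x with A x = target (mod 2), or None.
    # Brute force over Hamming weights: exponential time, toy sizes only.
    m, n = A.shape
    target = np.asarray(target) % 2
    for w in range(n + 1):
        for support in combinations(range(n), w):
            x = np.zeros(n, dtype=int)
            x[list(support)] = 1
            if np.array_equal(A @ x % 2, target):
                return x
    return None

def mw_single_shot_decode(delta0, delta1, s):
    # Step 1: lightest syndrome repair passing all metachecks H = delta1,
    # i.e. minimal s_rec with delta1 @ (s + s_rec) = 0 (mod 2).
    s_rec = min_weight_solution(delta1, delta1 @ s % 2)
    # Step 2: lightest physical error explaining the repaired syndrome.
    e_rec = min_weight_solution(delta0, (s + s_rec) % 2)
    return s_rec, e_rec   # e_rec = None signals a metacheck failure
\end{verbatim}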
Note that it is not always possible to find solutions to the above problem. For instance, one may find a minimising $s_{rec}$ but then there is no $E_{rec}$ satisfying the second condition. We call such an event a metacheck failure, but we do have the following guarantee.
\begin{lem}[Meta-check success]
We can find a solution to MW single-shot decoding provided that $|u| < d_{ss}/2$.
\end{lem}
The proof is essentially the same as standard proofs for correcting adversarial qubit errors. Metacheck failures correspond to cases where there exists a minimal weight $s_{rec}$ where $H(s+s_{rec})=0$ but there is no physical Pauli error $E$ such that $\sigma(E)=s+s_{rec}$. Note that whenever we use ``$+$'' between two binary vectors it should be read as addition modulo 2. First, we note that $H(s+s_{rec})=H(\sigma(E)+u+s_{rec})$ and using $H\sigma(E)=0$ we get that $s_{rec}$ must satisfy $H(u+s_{rec})=0$. Since $s_{rec}=u$ would satisfy this requirement and $s_{rec}$ is minimum weight, we infer that $|s_{rec}|\leq |u|$. Using the triangle inequality we get $|s_{rec}+u| \leq 2 |u| < d_{ss} $. By the definition of single-shot distance, it follows that there exists a physical error $E'$ such that $\sigma(E')=s_{rec}+u$. Using the syndrome relation $\sigma(E \cdot E') = \sigma (E)+ \sigma(E')$ we obtain
\begin{equation}
\sigma(E \cdot E') = s +u + s_{rec}+u = s + s_{rec}.
\end{equation}
Therefore, there is always a physical error (e.g. $E_{rec}=E \cdot E'$) consistent with the repaired syndrome $s + s_{rec}$ and the lemma is proved.
The above proof shows that the code can tolerate up to $d_{ss}/2-1$ adversarial measurement errors and still provide a solution to single-shot decoding. However, the story is not finished: even if a metacheck failure does not occur, a conventional logical failure might still occur. Therefore, we next address the question of how we can ensure the residual error $R=E_{rec} \cdot E$ has bounded size. From $\sigma( E_{rec} )= s + s_{rec}$ we deduce $\sigma(R)=u+s_{rec}$ and so
\begin{equation}
\label{Eq_small_residual_synd}
|\sigma(R)| \leq 2 |u| < d_{ss} .
\end{equation}
This prompts the question: given a small syndrome (consistent with metachecks), does there even exist a small-weight physical error generating this syndrome? Indeed, this is not always the case unless the code has nice soundness properties. Using our notion of soundness we can prove the following.
\begin{lem}[An upper bound on residual error]
\label{ResErrLem}
Consider a quantum error correcting code with parameters $[[n,k,d_Q, d_{ss}]]$ that is $( t, f)$-sound, and let $u$ be a measurement error and $E$ a physical error. If
\begin{enumerate}
\item $|u| < d_{ss}/2$ : the measurement error is small enough to ensure no metacheck failures;
\item $|u| < t/2$ : the measurement error is small enough to use soundness properties;
\item $f(2 |u|) + \ensuremath{\mathrm{wt}}(E)< d_Q/2$ : the combined errors are sufficiently small;
\end{enumerate}
then a solution to MW single-shot decoding will yield a residual error $R=E \cdot E_{rec}$ with $\ensuremath{\mathrm{wt}}_{\mathrm{min}}( R ) \leq f( 2|u| )$.
\end{lem}
We know from above (Eq.~\ref{Eq_small_residual_synd}) that the residual error $R$ satisfies $|\sigma(R)|\leq 2 |u| < d_{ss}$. By using the definition of $(t,f)$-soundness, we know that provided $2 |u| < t$ there exists an $R^\star$ such that $\sigma (R)=\sigma (R^\star)$ and $\ensuremath{\mathrm{wt}}( R^{\star}) \leq f( 2 |u| )$. It remains to show that $S=R R^\star$ is a stabiliser of the code as this would entail that $\ensuremath{\mathrm{wt}}_{\mathrm{min}}( R ) \leq \ensuremath{\mathrm{wt}} ( R^\star ) \leq f( 2 |u| )$. Clearly, $\sigma(R R^\star)=\sigma(S)=0$ so $S$ is either a stabiliser or a nontrivial logical operator. It can only be a nontrivial logical operator if $ d_Q \leq \ensuremath{\mathrm{wt}}(R R^{\star}) $. The rest of the proof shows that we instead have $\ensuremath{\mathrm{wt}}(R R^{\star}) < d_Q$ and so $S$ is a stabiliser. We start with
\begin{align}
R \cdot R^\star = E \cdot E_{rec} \cdot R^\star ,
\end{align}
and
\begin{align}
\ensuremath{\mathrm{wt}}( R \cdot R^\star ) = \ensuremath{\mathrm{wt}}( E \cdot E_{rec} \cdot R^\star ).
\end{align}
Using the triangle inequality
\begin{align}
\ensuremath{\mathrm{wt}}( R \cdot R^\star ) \leq \ensuremath{\mathrm{wt}}( E_{rec} ) +\ensuremath{\mathrm{wt}}( E \cdot R^\star ) .
\end{align}
Since $E_{rec}$ is a minimum weight solution and $E \cdot R^\star$ generates the same syndrome $s+s_{rec}$, we have $\ensuremath{\mathrm{wt}}( E_{rec} ) \leq \ensuremath{\mathrm{wt}}( E \cdot R^\star )$, and hence
\begin{align}
\ensuremath{\mathrm{wt}}( R \cdot R^\star ) \leq 2 \ensuremath{\mathrm{wt}}( E \cdot R^\star ) \leq 2 \ensuremath{\mathrm{wt}}( E ) + 2 \ensuremath{\mathrm{wt}}( R^\star ).
\end{align}
Using again that $\ensuremath{\mathrm{wt}}( R^{\star}) \leq f( 2 |u| )$ we obtain
\begin{align}
\ensuremath{\mathrm{wt}}( R \cdot R^\star )\leq 2 ( f(2 |u|) + \ensuremath{\mathrm{wt}}( E ) ) .
\end{align}
We are interested in when the LHS is strictly smaller than $d_Q$, which follows from the RHS being strictly smaller than $d_Q$; this is precisely the third condition of the lemma. Therefore, $\ensuremath{\mathrm{wt}}( R \cdot R^\star ) < d_Q$ and consequently $R=S \cdot R^\star$. This proves the lemma, and Thm.~\ref{THM_main} follows by simply rephrasing the lemma into the language of Def.~\ref{single_shot_def}.
\section{Soundness and energy barriers}
\label{Sec_Energy}
Here we discuss the relationship between the concept of code soundness and energy barriers in physical systems, resulting in a proof of Thm.~\ref{THM_soundness_implies}. The reader ought to ensure familiarity with the introductory material in subsections~\ref{Intro_soundness} and \ref{Intro_energy}. Aharonov and Eldar remarked in Ref.~\cite{aharonov2015quantum} that codes with good soundness lead to large energy barriers, though they were interested in a strictly stronger definition of soundness.
A key lemma is the following
\begin{lem}
\label{Claim_EnergyB}
Consider an $[[n,k,d_Q]]$ quantum code with checks $\mathcal{M}$ that is $(t,f)$-sound and where every qubit is involved in no more than $C$ checks. It follows that the energy barrier is at least $f^{-1}(w)$ where $w=\mathrm{min}[ (t-1)/C , (d_Q-1)/2 ]$ and $f^{-1}$ is the inverse of the soundness function.
\end{lem}
For any walk of states $\{\psi_0, \psi_1 , \psi_2, \ldots \psi_L \}$ we have a sequence of Pauli operators $\{ {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}} , E_1, E_2, \ldots E_L \}$, so that $\ket{\psi_j}=E_j\ket{\psi_0}$ and
$E_j E_{j-1}^{\dagger}=E_j E_{j-1}=P_j$ is a one qubit Pauli error (the local error condition). For every $E_j$ in the sequence we consider the reduced weight
\begin{equation}
\ensuremath{\mathrm{wt}}_{\mathrm{red}} (E) := \mathrm{min}_V \{ \mathrm{wt}(E V ) : V \in \mathcal{P} , \sigma(V)=0 \} ,
\end{equation}
where the minimisation is over all Pauli $V$ with trivial syndrome. Note that reduced weight is slightly different from min-weight since the minimisation is over a bigger group than the code stabiliser. Herein we use $V_j$ to denote Pauli operators that achieve the above minimisation. Since $\sigma(V_j)=0$, every $V_j$ is either a stabiliser or a nontrivial logical operator. By the groundstates and orthogonality property, it follows that $V_0={\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}$ and $V_L=E_L$. So the sequence starts with a stabiliser and ends with a nontrivial logical operator. Therefore, there must exist a $j^{\star}$ such that $V_{j^{\star}}$ is a stabiliser and $V_{j^{\star}+1}$ is a nontrivial logical operator. Therefore, $V_{j^{\star}} V_{j^{\star}+1}$ must also be a nontrivial logical operator and so
\begin{equation}
\label{Eq_Fstar}
d_Q \leq \ensuremath{\mathrm{wt}}( V_{j^{\star}} V_{j^{\star}+1} ) .
\end{equation}
Furthermore, we have
\begin{align} \nonumber
\ensuremath{\mathrm{wt}}( V_{j^{\star}} V_{j^{\star}+1} ) = & \ensuremath{\mathrm{wt}}( V_{j^{\star}} V_{j^{\star}+1} E_{j^{\star}} E_{j^{\star}}^\dagger E_{j^{\star}+1 } E_{j^{\star}+1 }^\dagger ) \\
= & \ensuremath{\mathrm{wt}}( V_{j^{\star}} E_{j^{\star}} V_{j^{\star}+1} E_{j^{\star}+1 } E_{j^{\star}} E_{j^{\star}+1 }) ,
\end{align}
and using the triangle inequality twice we have
\begin{align} \nonumber
\ensuremath{\mathrm{wt}}( V_{j^{\star}} V_{j^{\star}+1} ) \leq & \ensuremath{\mathrm{wt}}( V_{j^{\star}} E_{j^{\star}} ) + \ensuremath{\mathrm{wt}}( V_{j^{\star}+1} E_{j^{\star}+1 } ) \\ \nonumber
& + \ensuremath{\mathrm{wt}}( E_{j^{\star}} E_{j^{\star}+1} ) \\
= & \ensuremath{\mathrm{wt}}_{\mathrm{red}}(E_{j^{\star}}) + \ensuremath{\mathrm{wt}}_{\mathrm{red}} (E_{j^{\star}+1}) + 1.
\end{align}
We have used $\ensuremath{\mathrm{wt}}_{\mathrm{red}} (E_j)= \ensuremath{\mathrm{wt}} ( V_jE_j)$ on the first two terms and the local error condition on the last term. Combining this with Eq.~\eqref{Eq_Fstar} leads to
\begin{align}
d_Q & \leq 2 \mathrm{max}[\ensuremath{\mathrm{wt}}_{\mathrm{red}}(E_{j^{\star}}) ,\ensuremath{\mathrm{wt}}_{\mathrm{red}} (E_{j^{\star}+1}) ] + 1 ,
\end{align}
and so
\begin{align}
\frac{d_Q -1}{2}& \leq \mathrm{max}[ \ensuremath{\mathrm{wt}}_{\mathrm{red}}(E_{j^{\star}}) , \ensuremath{\mathrm{wt}}_{\mathrm{red}}(E_{j^{\star}+1}) ] .
\end{align}
Consider the sequence of reduced weights $\{ \ensuremath{\mathrm{wt}}_{\mathrm{red}}(E_{0}) , \ensuremath{\mathrm{wt}}_{\mathrm{red}}(E_{1}) , \ldots , \ensuremath{\mathrm{wt}}_{\mathrm{red}}(E_{L}) \}$. The sequence starts and ends with zero and at some point must reach $(d_Q-1)/2$ or higher. Furthermore, the local error condition entails that $|\ensuremath{\mathrm{wt}}_{\mathrm{red}}(E_{j+1}) - \ensuremath{\mathrm{wt}}_{\mathrm{red}}(E_{j}) |$ is either 0 or 1 and so the sequence of reduced weights must include every integer from 0 to $(d_Q-1)/2$. Therefore, we can set $w$ equal to $\min[(t-1)/C , (d_Q-1)/2]$ and there must exist an $E_{j}$ with $\ensuremath{\mathrm{wt}}_{\mathrm{red}}(E_{j})=w$. Next, we consider the syndrome $\sigma (E_j)$ and note that $\sigma( E_j ) = \sigma (E_j V_j) $ where $\ensuremath{\mathrm{wt}}( E_j V_j )=\ensuremath{\mathrm{wt}}_{\mathrm{red}}(E_{j})$. The LDPC condition of the code ensures that for any $E$ we have $|\sigma(E)| \leq C \ensuremath{\mathrm{wt}}(E)$. Therefore, for the $E_j$ with $\ensuremath{\mathrm{wt}}_{\mathrm{red}}(E_{j})=w$ we have $|\sigma (E_j) | \leq C w$. Since $w \leq (t-1)/C$ we have $|\sigma(E_j) | \leq t-1 < t$ and the soundness property can be deployed to conclude that $f^{-1}(w) \leq |\sigma(E_j)|$. Since this holds for every possible walk, $f^{-1}(w)$ gives a lower bound on the energy barrier and we have proved Lem.~\ref{Claim_EnergyB}.
From Lem.~\ref{Claim_EnergyB} we can quickly obtain a proof of Thm.~\ref{THM_soundness_implies}. We consider an infinite family of $[[n,k,d_Q]]$ codes with an LDPC check family $\mathcal{M}$ with good soundness. That is, the codes are $(t_n,f)$-sound such that: the soundness function $f \in O(x^a)$ is independent of $n$; and $t_n$ grows as $\Omega(n^b)$ for some constants $a$ and $b$. We further assume that the code distance $d_Q$ grows as $\Omega(n^c)$ for some constant $c$. Since $d_Q \in \Omega(n^c)$ and $t_n \in \Omega(n^b)$, we can choose $w = \min[(t_n-1)/C , (d_Q-1)/2] \in \Omega(n^{\mathrm{min}[c,b]})$ in Lem.~\ref{Claim_EnergyB}. It follows that the energy barrier scales as $ \Omega(n^{\mathrm{min}[c,b] / a})$ since $f \in O(x^a)$ and so $f^{-1} \in \Omega(x^{1/a})$. Therefore this check family has a macroscopic energy barrier. Notice that soundness is not the only ingredient in the proof; the LDPC condition is also crucial. It is unclear whether a similar result can be shown without the LDPC condition.
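To make the interplay concrete, consider a classical toy example (an illustration, not part of the proof). For the repetition code with the standard nearest-neighbour checks, sweeping a domain wall across the code connects two codewords while never violating more than one check, so the energy barrier is constant in $n$; this is consistent with the code lacking good soundness, since a weight-1 syndrome can require an explaining error of weight roughly $n/2$:
\begin{verbatim}
import numpy as np

n = 16
# n-bit repetition code on an open chain: checks z_i z_{i+1}.
delta0 = np.array([[1 if j in (i, i + 1) else 0 for j in range(n)]
                   for i in range(n - 1)])

# Sweep a domain wall from the all-zeros codeword to the all-ones
# codeword, flipping one bit per step (the local error condition).
barrier = 0
e = np.zeros(n, dtype=int)
for j in range(n):
    e[j] = 1
    barrier = max(barrier, int((delta0 @ e % 2).sum()))

print(barrier)   # 1 for every n: no macroscopic energy barrier.
# Yet the weight-1 syndrome seen after j flips needs an error of
# weight min(j, n - j), so the soundness function must grow with n.
\end{verbatim}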
We remark that the converse statement would be that any LDPC check family with a macroscopic energy barrier has good soundness. We have neither proof nor counterexample and so the status of this converse statement remains open.
Bravyi and Terhal proved that no 2D topological stabiliser codes have a macroscopic energy barrier~\cite{bravyi2009no}. Therefore, such codes cannot have good soundness, as we stated in corollary~\ref{No_go_corollary}. This is nearly a statement that single-shot error correction is impossible in 2D topological stabiliser codes, and we believe this to be the case. However, one must be cautious: we have shown good soundness to be a sufficient condition for single-shot error correction but not a necessary one. Clearly, if a code does not have good soundness then minimum weight decoding (in the sense of Def.~\ref{Def_SS_decode}) can lead to a large weight residual error. However, if one deviates from the minimum weight decoding strategy then the picture becomes less clear. For instance, one strategy might be that when the minimum weight solution is high weight, we do not attempt to return the system to the codespace but instead apply a partial recovery. For instance, if we observe two far apart checks with ``$-1$'' outcomes in the 2D toric code, then we could apply a partial recovery that reduces the distance between these checks. Indeed, there are cellular automata decoders for the 2D toric code that behave just like this~\cite{Har04,herold2015cellular,breuckmann2016local,herold2017cellular}. These fail to qualify as single-shot decoders in the usual sense as they rely on the syndrome history (partially stored in a cellular automaton). But they highlight that single-shot error correction might be possible using an imaginative decoder approach based on partial recoveries.
\section{Good soundness for all codes}
\label{Sec_Simple}
It is common to conflate a quantum error correction code with a set of checks $\mathcal{M}$ that generate the stabiliser. But there are many choices of checks for any given code. Crucially, the soundness properties depend on the set of checks. Here we prove Thm.~\ref{THM_soundness_simple}, which roughly states that for any code we can find a check set with good soundness properties. The proof follows from the following lemma.
\begin{lem}
\label{Lem_PureErrors}
Given an $[[n,k,d_Q]]$ quantum error correction code with stabiliser $\mathcal{S}$ there exists a minimal set of generators $\mathcal{M}=\{ M_1, M_2, \ldots , M_{n-k} \}$ and associated Pauli errors $\mathcal{E}=\{ E_1, E_2, \ldots , E_{n-k} \}$ such that: (1) $[ M_i , E_j] \neq 0$ if and only if $i=j$; and (2) every $E_j$ acts non-trivially on only a single qubit and so $\ensuremath{\mathrm{wt}}(E_j)=1$.
\end{lem}
We first consider the consequence of this lemma. Given such a set of checks, it follows that if $s$ is a syndrome unit vector (so $|s|=1$) with a 1 entry in the $j^{\mathrm{th}}$ location, then $s=\sigma(E_j)$ (recall Eq.~\eqref{Eq_syndrome_def}). More generally, $s$ can be written as a sum of $|s|$ unit vectors and therefore $s=\sigma(E)$ where
\begin{equation}
E = \prod_{j : s_j =1} E_j.
\end{equation}
Since $\ensuremath{\mathrm{wt}}(E_j)=1$ we have $\ensuremath{\mathrm{wt}}(E) \leq |s|$ (with more work one can prove equality). Therefore, the checks are $(t,f)$-sound with $t=\infty$ and $f(x)=x$ since: the argument holds for any weight syndrome, and so the value of $t$ is unbounded; and the weight of the physical error is no more than the weight of the syndrome, so we have $f(x)=x$.
The proof of Lem.~\ref{Lem_PureErrors} is essentially a step in the proof of Lem.~2 of Ref.~\cite{Camp10a}. In Ref.~\cite{Camp10a}, it is shown that up to qubit relabelling and local Clifford unitaries, the generators $M_j$ can be brought into a diagonalised form inspired by the graph state formalism. In this form, $M_j$ acts on the $j^{\mathrm{th}}$ qubit with Pauli $X$. On all other qubits with labels 1 through to $n-k$, the operator $M_j$ acts as either Pauli $Z$ or the identity. Therefore, Pauli $Z$ acting on qubit $j$ anticommutes with generator $M_j$ and commutes with all other generators. Accounting for local Cliffords and the original qubit labelling, the required $E_j$ may act on a different qubit and may be different from Pauli $Z$, but it will be a single qubit Pauli. This completes the proof.
The soundness properties proven above are extremely strong. This leads to the counter-intuitive result that single-shot error correction is possible for any code and without any check redundancy. The price to pay is that one must use a certain set of checks such as the diagonalised form above. As such, if the checks are initially low weight (e.g. part of an LDPC check family) then this property may be lost as the diagonalisation process leads to high weight checks. Indeed, we can prove the following strong limitation on diagonalisation methods.
\begin{claim}
Consider a family of codes with checks in the diagonalised form used in the proof of Lem.~\ref{Lem_PureErrors}. Assume also the diagonalised check family is LDPC, such that in every code no qubit is acted on by more than $C$ checks. It follows that the distance is bounded $d_Q \leq C+1$ for all codes in the family.
\end{claim}
We prove this by constructing an explicit error $F$ that is not in the code stabiliser but $\sigma(F)=0$ and $\ensuremath{\mathrm{wt}}(F) \leq C+1$. First, we let $P$ be some single qubit Pauli ($\ensuremath{\mathrm{wt}}(P) = 1$) acting on a qubit with label exceeding $n-k$. By the LDPC property $|\sigma(P)| \leq C$. Furthermore, following previous arguments there exists an $E$ acting on the first $n-k$ qubits such that $\sigma(E)=\sigma(P)$ and $\ensuremath{\mathrm{wt}}(E) \leq |\sigma(P)|$. Combining $\ensuremath{\mathrm{wt}}(E) \leq |\sigma(P)|$ with $|\sigma(P)| \leq C$ entails $\ensuremath{\mathrm{wt}}(E) \leq C$. Setting $F=EP$, we have that
\begin{equation}
\sigma(F)=\sigma(E)+\sigma(P)=2 \sigma(E) = 0
\end{equation}
and
\begin{equation}
\ensuremath{\mathrm{wt}}(F) \leq \ensuremath{\mathrm{wt}}(E)+\ensuremath{\mathrm{wt}}(P) \leq C + 1.
\end{equation}
Lastly, we need to show that $F$ is not an element of the stabiliser. First we note that $F \neq {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}$ since $E$ and $P$ act on disjoint sets of qubits. Next, let us assume to the contrary that $F$ is a non-trivial element of the stabiliser. Then there is some non-empty set $J \subseteq \{ 1, \ldots, n-k \}$ such that
\begin{equation}
F = \prod_{j \in J} M_j.
\end{equation}
Following the argument in the proof of Lem.~\ref{Lem_PureErrors}, let us assume that each $M_j$ acts with Pauli $X$ on the $j^{\mathrm{th}}$ qubit. But all $M_{k \neq j}$ act on the $j^{\mathrm{th}}$ qubit with either Pauli $Z$ or the identity. Therefore, for every $j \in J$ we have that $F$ acts on the $j^{\mathrm{th}}$ qubit with either $X$ or $Y$. Since $J$ is non-empty there is at least one qubit with index between 1 and $n-k$ such that $F$ acts as either $X$ or $Y$. However, $F=EP$ where $E$ acts on the first $n-k$ qubits with either $Z$ or ${\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}$. Since $P$ acts on one of the last $k$ qubits, we see that $F$ cannot be a stabiliser and must instead be a non-trivial logical operator.
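A classical toy analogue illustrates both sides of this trade-off (our own illustrative example, mirroring the diagonalised form in a simpler setting). For the $n$-bit repetition code, replacing the standard checks $z_i z_{i+1}$ with the `star' generating set $z_1 z_{i+1}$ gives every syndrome an explaining error whose weight equals the syndrome weight, i.e. $(\infty, f(x)=x)$-soundness, but bit 1 now participates in every check and the LDPC property is lost:
\begin{verbatim}
import numpy as np

n = 8
# Star generating set: check i compares bit 0 with bit i + 1.
star = np.array([[1 if j in (0, i + 1) else 0 for j in range(n)]
                 for i in range(n - 1)])

# Column j is the syndrome of a bit flip on bit j. Bits 1..n-1 give
# unit-vector syndromes, so any syndrome s is explained by an error
# of weight |s|: these checks are (infinity, f(x)=x)-sound.
# The price: bit 0 appears in all n - 1 checks, so LDPC is lost.
print(star.sum(axis=0))   # column weights: [7, 1, 1, 1, 1, 1, 1, 1]
\end{verbatim}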
The LDPC property is highly desirable and so too is growing code distance. Therefore, we need an alternative route to good soundness.
\section{Tanner graphs, chain complexes and homology theory}
\label{Sec_Homology_theory}
From here on we specialise to codes with checks $\mathcal{M}$ that can be partitioned into checks in the $Z$ and $X$ Pauli bases. For such codes, we describe quantum codes in a graphical language that extends the classical use of Tanner graphs. We will explain the correspondence between the graphical representation and a linear algebra description in terms of concepts from algebraic topology.
\begin{figure*}
\includegraphics{Hasse.pdf}
\caption{A graphical representation of some example classical and quantum error correcting codes, including scheme for parity check measurements and metachecks. (a) the 4 bit classical repetition code; (b) the 4 bit classical repetition code with an additional check and corresponding metachecks; (c) the 4 bit classical repetition code with repeated checks and corresponding metachecks; (d) the 7-qubit Steane code; (e) the 7-qubit Steane code with additional checks and corresponding metachecks. The symbol $\delta_{j}$ is a matrix describing the connectivity between vertices in set $C_j$ and $C_{j+1}$. It can also be considered as a linear map known as the boundary map in homology theory.}
\label{Fig_Hasse}
\end{figure*}
Several example graphs are given in Fig.~\ref{Fig_Hasse}. In every case, the graph breaks up into $D+1$ partitions and we will refer to $D$ as the length of the graph. Each partition comes with a set of vertices $C_j$. We use a binary matrix $\delta_j$ to describe the adjacency between vertices in $C_j$ and $C_{j+1}$. Specifically, matrix $\delta_j$ has a ``1'' in entry $(a,b)$ if and only if the $b^{\mathrm{th}}$ vertex in $C_j$ is connected to the $a^{\mathrm{th}}$ vertex in $C_{j+1}$. Therefore, $\delta_0$ is the well-known parity check matrix of a classical code. Furthermore, $\delta_0$ is the parity check matrix for bit-flip ($X$) errors in a quantum code. Using superscript $T$ for transpose, the matrix $\delta_{-1}^T$ is the parity check matrix for phase-flip ($Z$) errors in a quantum code.
We think of $C_j$ both as a set of vertices and as a binary vector space $\mathbb{Z}_2^{n_j}$ where $n_j$ denotes the number of vertices in $C_j$. A unit vector $\hat{u}$ has only a single entry with value 1 and identifies a single vertex in $C_j$. Therefore, given a pair of unit vectors $\hat{u} \in C_j$ and $\hat{v} \in C_{j+1}$, we have $\hat{v}^T \delta_j \hat{u}=1$ if and only if the corresponding vertices are connected. Therefore, given a unit vector $\hat{u} \in C_1$ identifying a measurement (or check) for bit-flip errors, the vector $\delta_0^T \hat{u} $ identifies the (qu)bits involved in that check. We use the notation
\begin{align}
X[u] & :=\otimes_j X_j^{u_j} , \\
Z[v] & :=\otimes_j Z_j^{v_j} ,
\end{align}
where $u$ and $v$ are binary vectors. The graph should be read as not just defining a code but also the measurement scheme. So for every unit vector $\hat{u}$ in $C_1$, the graphical formalism stipulates that we measure the operator $Z[\delta_0^T \hat{u}]$. So in our earlier notation $Z[\delta_0^T \hat{u}]$ would be a member of $\mathcal{M}$ and is a stabiliser of the code. Since the stabiliser is a group, we have that $Z[\delta_0^T u]$ is a stabiliser for any vector $u \in C_1$. Similarly, $X[\delta_{-1} v]$ is a stabiliser of the code for every $v \in C_{-1}$. Operators $X[\delta_{-1} v]$ and $Z[\delta_0^T u]$ will commute if and only if $(\delta_0^T u)^T\delta_{-1} v=u^T \delta_0 \delta_{-1} v = 0$ where all such equations should be read using addition modulo 2. Since we need all such operators to commute, we require that $\delta_0 \delta_{-1}=0$. Conversely, if $X[e]$ with $e \in C_0$ is an error, the vector $\delta_0e$ is the $Z$-measurement syndrome assuming ideal measurements.
\begin{figure*}
\includegraphics{IsingVToric.pdf}
\caption{In (a) we illustrate the 2D toric code. Part (a1) describes the toric code using the vertex labelling from Fig.~\ref{Fig_Hasse} with grey curved lines highlighting the periodic boundary conditions of the torus. Part (a2) shows the relationship between errors and syndromes. Notice that a weight 2 syndrome (two endpoints) could require an arbitrarily long string to produce the syndrome. Therefore, the code does not have good soundness. In (b) we illustrate the 2D Ising model as a classical error correction code. Part (b1) again uses the vertex labelling from Fig.~\ref{Fig_Hasse}. Notice that (b1) represents the same graph as (a1) but with the different types of vertex changing role. Part (b2) shows a measurement error that is detected by metachecks. Part (b3) shows a measurement syndrome that passes all metachecks (i.e. it would be the corrected syndrome of (b2)). The red region shows an error pattern that generates the syndrome. Notice that the size of the physical error scales at most quadratically with the size of the syndrome. Therefore, the code does have good soundness. Part (b4) shows a metacheck failure. There is a syndrome that spans the code and forms a non-trivial cycle. Due to periodic boundary conditions there is no error region with this syndrome as its boundary.}
\label{Fig_ToricIsing}
\end{figure*}
In homology theory, this whole structure is called a chain complex and the operators $\delta_j$ are called boundary maps provided the relation $\delta_{j+1} \delta_j = 0$ holds for all $j$. Therefore, given a homological chain complex the commutation relations are automatically satisfied since $\delta_0 \delta_{-1}=0$. Remarkably, requiring $\delta_{j+1} \delta_j = 0$ not only gives us the required commutation relations but also ensures that the metachecks are suitably defined. We will show this formally. Consider a physical error $X[e]$. It will generate $Z$-syndrome $\delta_0e$ assuming no measurement errors. Since there are no measurement errors, the metasyndrome $x=\delta_1 \delta_0 e$ ought to be the all zero vector, which is ensured if $\delta_1 \delta_0 =0$.
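A minimal sketch of this condition (the matrices below are our own toy reconstruction of the redundant 4-bit repetition code in the style of Fig.~\ref{Fig_Hasse}b, assuming the cyclic check set $z_1z_2$, $z_2z_3$, $z_3z_4$, $z_4z_1$ whose product is the identity):
\begin{verbatim}
import numpy as np

# 4-bit repetition code with a redundant, cyclic set of checks.
delta0 = np.array([[1, 1, 0, 0],
                   [0, 1, 1, 0],
                   [0, 0, 1, 1],
                   [1, 0, 0, 1]])

# The four checks multiply to the identity, giving one metacheck:
# the four measured parities must sum to 0.
delta1 = np.array([[1, 1, 1, 1]])

# Chain-complex condition: metachecks annihilate every error-free
# syndrome delta0 @ e, i.e. delta1 @ delta0 = 0 (mod 2).
assert not np.any(delta1 @ delta0 % 2)
\end{verbatim}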
Let us connect this back to the notation used in the first part of this paper. The check set is
\begin{equation}
\mathcal{M}=( Z[\delta_0^T \hat{u}_1], \ldots , Z[ \delta_0^T \hat{u}_{n_{1}}] , X[\delta_{-1} \hat{v}_1], \ldots , X[\delta_{-1} \hat{v}_{n_{-1}}] )
\end{equation}
where $\hat{u}_j$ and $\hat{v}_j$ are unit vectors with the unit in the $j^{\mathrm{th}}$ location. Any Pauli error can be expressed as $E = X[e]Z[f]$ for some vectors $e$ and $f$. The syndrome of this Pauli is then the combination of the $Z$ and $X$ syndromes, so that
\begin{equation}
\sigma( X[e]Z[f] ) = \left( \begin{array}{c}
\delta_0e \\
\delta_{-1}^T f
\end{array} \right) .
\end{equation}
Furthermore, the whole metasyndrome matrix has block matrix form
\begin{equation}
H = \left( \begin{array}{cc}
\delta_1 & 0 \\
0 & \delta_{-2}^T
\end{array} \right) .
\end{equation}
From this we see that the condition required earlier (that $H \sigma(E)=0$ for all Pauli $E$) follows from the fundamental property of chain complexes, specifically $ \delta_1 \delta_0 = 0$ and $ \delta^T_{-2} \delta^T_{-1} = 0$.
Next, we study some parameters of chain complexes. We use $n_j$ to denote the number of vertices in $C_j$, and equivalently the dimension of the associated vector space $\mathbb{Z}_2^{n_j}$. The matrix $\delta_j$ will have $n_j$ columns and $n_{j+1}$ rows. An important parameter is the $j^{\mathrm{th}}$ Betti number, which we denote $k_j$. For our purposes, it suffices to define
\begin{equation}
\label{Eq_Betti}
k_j := \mathrm{nullity}(\delta_j) - \mathrm{rank}(\delta_{j-1}) .
\end{equation}
Here, $\mathrm{nullity}$ is the dimension of the kernel, denoted $\ker(\delta_j)$, which is the space of vectors $u$ such that $\delta_j u=0$. The $\mathrm{rank}$ is the number of linearly independent rows in a matrix. Alternatively, the $\mathrm{rank}$ is equal to the dimension of the image, denoted $\mathrm{im}(\delta_{j-1})$, which is the space of vectors $v$ such that there exists a $u$ satisfying $v=\delta_{j-1}u$. Those familiar with homology theory may prefer to think of $k_j$ as the dimension of the $j^{\mathrm{th}}$ homology group $\mathcal{H}_j= \mathrm{ker}(\delta_j) / \mathrm{im}(\delta_{j-1})$. This counts the number of different homology classes at a particular level of the chain complex. Let $c$ be an element of $C_j$. If $c \in \mathrm{ker}(\delta_j)$ then we say $c$ is a cycle. However, for any $c \in \mathrm{im}(\delta_{j-1})$ it immediately follows from $\delta_j \delta_{j-1}=0$ that also $c \in \mathrm{ker}(\delta_j)$ and such a cycle is said to be a trivial cycle. On the other hand, if $c \in \mathrm{ker}(\delta_j)$ but $c \notin \mathrm{im}(\delta_{j-1})$ then $c$ is a non-trivial cycle. If any non-trivial cycles exist then $k_j>0$, and the value of $k_j$ counts the number of different non-trivial cycles (factoring out homological equivalence). Note that for $k_j$ with the lowest value of $j$ in the chain complex, the matrix $\delta_{j-1}$ is not defined and so Eq.~\eqref{Eq_Betti} should be read with $\delta_{j-1}$ substituted by the zero matrix. Similarly, for the largest possible $j$ value we must take $\delta_j$ as the zero matrix.
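For small examples, Eq.~\eqref{Eq_Betti} can be evaluated directly with mod-2 Gaussian elimination. The following is a minimal sketch (our own helper names, illustrative only):
\begin{verbatim}
import numpy as np

def gf2_rank(M):
    # Rank of a binary matrix over GF(2) by Gaussian elimination.
    M = (np.array(M) % 2).astype(int)
    rank = 0
    for c in range(M.shape[1]):
        pivots = [i for i in range(rank, M.shape[0]) if M[i, c]]
        if not pivots:
            continue
        M[[rank, pivots[0]]] = M[[pivots[0], rank]]
        for i in range(M.shape[0]):
            if i != rank and M[i, c]:
                M[i] ^= M[rank]
        rank += 1
    return rank

def betti(delta_j, delta_jm1):
    # k_j = nullity(delta_j) - rank(delta_{j-1}), as in the Betti formula.
    nullity = delta_j.shape[1] - gf2_rank(delta_j)
    return nullity - gf2_rank(delta_jm1)

# Cyclic 4-bit repetition code (checks z1z2, z2z3, z3z4, z4z1):
delta0 = np.array([[1,1,0,0], [0,1,1,0], [0,0,1,1], [1,0,0,1]])
delta_m1 = np.zeros((4, 0), dtype=int)   # no level below the bits
print(betti(delta0, delta_m1))           # 1 encoded bit
\end{verbatim}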
One can similarly look at the cohomologies
\begin{equation}
\label{Eq_CoBetti}
k_j^T := \mathrm{nullity}(\delta^T_{j-1}) - \mathrm{rank}(\delta^T_{j}) .
\end{equation}
Poincar\'{e} duality entails that $k_j^T=k_j$ and for completeness we give a simple proof in App.~\ref{App_Poincare} using only linear algebra. For quantum codes, $k_0$ is important as it gives the number of logical qubits encoded by the code. It is useful for us also to consider $k_j$ for other values of $j$. For instance, in a code with metachecks, $k_1$ is the number of classes of syndromes $x$ such that they pass all the metachecks ($\delta_1 x =0 $) but there does not exist an explanation in terms of qubit errors ($\nexists e$ such that $x=\delta_0 e $).
In the context of error correction, we are interested not just in the number of non-trivial cycles, but also their minimum distance. As such, we define
\begin{align} \label{Eq_distances}
d_j & := \mathrm{min} \{ |c| : c \in \mathrm{ker}(\delta_j) , c \notin \mathrm{im}(\delta_{j-1}) \} , \\ \nonumber
d_j^T & := \mathrm{min} \{ |c| : c \in \mathrm{ker}(\delta^T_j) , c \notin \mathrm{im}(\delta^T_{j+1}) \} ,
\end{align}
where $|c|:=\sum_j c_j$ is the Hamming weight. We use the convention that $d_j=\infty $ whenever $k_j=0$ and similarly $d_j^T=\infty$ whenever $k_{j+1}^T=0$. We know of no simple relationship between $d_j$ and $d_j^T$. This is enough for us to define the usual parameters of the corresponding $[[n,k,d_Q]]$ quantum code as $n=n_0$, $k=k_0$ and $d_Q=\mathrm{min}[d_0, d_{-1}^T]$. However, we also introduce a new parameter that we call the single-shot distance as follows.
\begin{defin}[Single-shot distance]
Given a length-4 chain complex we define the single-shot distance as $d_{ss}:=\mathrm{min}[d_1, d_{-2}^T]$ where $d_1$ and $d_{-2}^T$ are special cases of Eq.~\eqref{Eq_distances}.
\end{defin}
The single-shot distance relates to how many measurement errors can be tolerated before a failure occurs that we call a metacheck failure. In a metacheck failure, the syndrome has no explanation in terms of qubit errors. See Fig.~\ref{Fig_ToricIsing}b4 for an example of metacheck failure in the 2D Ising model with periodic boundary conditions.
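On toy examples, the distances of Eq.~\eqref{Eq_distances} can be found by exhaustive search, testing membership of the image via a rank condition. A sketch (exponential-time, reusing the gf2_rank helper from the sketch above):
\begin{verbatim}
from itertools import product
import numpy as np

def distance(delta_j, delta_jm1):
    # Brute-force d_j: lightest cycle of delta_j not in im(delta_{j-1}).
    n_j = delta_j.shape[1]
    rank_im = gf2_rank(delta_jm1)
    best = None
    for bits in product([0, 1], repeat=n_j):
        c = np.array(bits)
        if not c.any() or np.any(delta_j @ c % 2):
            continue   # zero vector, or not a cycle
        # c is in im(delta_{j-1}) iff adjoining it leaves the rank fixed.
        if gf2_rank(np.column_stack([delta_jm1, c])) == rank_im:
            continue   # trivial cycle
        w = int(c.sum())
        best = w if best is None else min(best, w)
    return best   # None encodes d_j = infinity (k_j = 0)

# For the cyclic 4-bit repetition code above this returns d_0 = 4.
\end{verbatim}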
Let us review different ways we can use this formalism. Consider a length-1 chain complex $C_0 \rightarrow_{\delta_0} C_1$. We can consider the vertices in the zeroth level as bits and the first level as parity checks. Thus a length-1 chain complex can be regarded as a classical code. Consider a length-2 chain complex $C_{-1} \rightarrow_{\delta_{-1}} C_0 \rightarrow_{\delta_0} C_1$. This could represent either a quantum code (without any metachecks) or alternatively a classical code equipped with metachecks. In the classical case, our convention is to increment all the indices by one to have $C_{0} \rightarrow_{\delta_{0}} C_1 \rightarrow_{\delta_1} C_2$. We choose this convention such that $C_{0}$ always labels the physical bits or qubits. In Fig.~\ref{Fig_ToricIsing}a1 and Fig.~\ref{Fig_ToricIsing}b1 we show two graphs representing length-2 chain complexes. The graphs are identical, except that Fig.~\ref{Fig_ToricIsing}a1 represents a quantum code while Fig.~\ref{Fig_ToricIsing}b1 represents a classical code with metachecks.
Given a length-4 chain complex, the additional layers of homology describe metachecks on the $X$ and $Z$ checks. Note that the additional layers of the chain complex have no direct effect on the code parameters.
We could also consider length-3 chain complexes with metachecks on either the $X$ or $Z$ checks. It is also plausible that a length-3 chain complex could support single-shot error correction of both error types by using a form of gauge fixing such as proposed in 3D colour codes~\cite{bombin2015single}. However, we will not explore this here.
We also need to translate the notion of soundness into the language of chain complexes
\begin{defin}[Soundness of maps] \label{ssDEF}
Let $t$ be an integer and $f : \mathbb{Z} \rightarrow \mathbb{R}$ be some function called the soundness function. Given a linear map $\delta$, we say it is $( t, f)$-sound if for all $r$ such that $| \delta r| < t$, it follows that:
\begin{align}
x = | \delta r| & \implies \mathrm{min} \{ | r' | : \delta r' = \delta r \} \leq f(x) .
\end{align}
Furthermore, we say a quantum error correcting code is $( t, f)$-sound if the above holds for both $\delta_0$ and $\delta_{-1}^T$. For a classical error correcting code this is required for just $\delta_0$.
\end{defin}
This is less general than the earlier Def.~\ref{RoughssDEF} since the above only applies to CSS codes whereas our earlier definition was valid for any stabiliser code. However, it should be clear that any CSS code satisfying Def.~\ref{ssDEF} will also satisfy Def.~\ref{RoughssDEF}. We saw earlier that 2D topological codes cannot have good soundness and we illustrate this in Fig.~\ref{Fig_ToricIsing}a. Whereas, for the 4D toric code, with an appropriate choice of checks, geometric arguments show that low weight syndromes can always be generated by small weight errors. To visualise this, it is easier to instead think of the 2D Ising model as a classical error correcting code. In such a code, syndrome cycles have a weight equal to their perimeter and the error generating the syndrome has weight equal to the area (see Fig.~\ref{Fig_ToricIsing}b3). The area of a 2D region with perimeter length $x$ can be no more than $x^2/8$, and so the Ising model has a quadratic soundness function. Therefore, it can be helpful to think of soundness as describing the geometric area law relationship between syndromes and errors, albeit in purely algebraic terms.
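Def.~\ref{ssDEF} can likewise be tested exhaustively on toy examples. The sketch below (our own helper, exponential-time) tabulates the lightest preimage of every syndrome and then checks the soundness condition; as expected from the discussion above, the repetition code (the 1D relative of Fig.~\ref{Fig_ToricIsing}a) fails even $(2, f(x)=x)$-soundness once the chain is long:
\begin{verbatim}
from itertools import product
import numpy as np

def is_sound(delta, t, f):
    # Exhaustively test (t, f)-soundness of a binary map delta.
    n = delta.shape[1]
    lightest = {}   # syndrome -> weight of lightest explaining vector
    for bits in product([0, 1], repeat=n):
        r = np.array(bits)
        s = tuple(delta @ r % 2)
        lightest[s] = min(lightest.get(s, n + 1), int(r.sum()))
    return all(w <= f(sum(s)) for s, w in lightest.items() if sum(s) < t)

# Open-chain repetition code: a weight-1 syndrome deep in the chain
# needs an explaining error of weight ~ n/2.
n = 8
delta0 = np.array([[1 if j in (i, i + 1) else 0 for j in range(n)]
                   for i in range(n - 1)])
print(is_sound(delta0, t=2, f=lambda x: x))   # False
\end{verbatim}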
Check redundancy provides consistency conditions that one can inspect for evidence of measurement errors. These checks on checks are illustrated in Fig.~\ref{Fig_Hasse} using diamonds. We call these metachecks. They do not represent a physical measurement but classical postprocessing on the measurement outcomes. That is, for a given metacheck node we calculate the parity of all the checks it is connected to. If this parity is odd, a measurement error must have occurred on one of the adjacent nodes. Recall that we quantify the amount of redundancy in a measurement scheme as the ratio between the number of measurements performed (which equals $n_1 + n_{-1}$) and the minimum number required to generate the stabiliser of the code (which equals $n_0-k_0$). We use $\upsilon$ to denote this ratio, so that
\begin{equation}
\label{Eq_Redundancy_Def}
\upsilon = \frac{n_1 + n_{-1}}{n_0-k_0},
\end{equation}
with $\upsilon=1$ indicating no redundancy. In Fig.~\ref{Fig_Hasse} we give examples of codes with such redundancy (Fig.~\ref{Fig_Hasse}b, Fig.~\ref{Fig_Hasse}c and Fig.~\ref{Fig_Hasse}e). We are interested in check families where $\upsilon$ is no more than a small constant factor.
\section{Constructing single-shot codes}
\label{Sec_Constructions}
Here we show how the homological product can be used to construct new codes supporting single-shot error correction. This will culminate in a proof of Thm.~\ref{THM_construct} though the techniques allow for a broader range of constructions, including codes where the single-shot distance is finite.
\subsection{A single application of the homological product}
\label{Sec_firstHomo}
As a warm-up, we begin by considering a single application of the homological product. Our approach is to take a length-1 chain complex (e.g. a conventional classical code) and use the homological, or hypergraph, product to build a length-2 chain complex with the desired properties. In general, one could take two different input classical codes and combine them together using these techniques, but for simplicity we take both input codes to be the same. Furthermore, there are a few different notions of the homological product. For instance, Bravyi and Hastings use a simplified variant that they call the single sector homological product, whereas we will use a more standard textbook variant that Bravyi and Hastings would call a multi sector homological product~\cite{bravyi2014homological}. Furthermore, there is some freedom in the notation and we use a convention such that the homological product in this section is manifestly equivalent to the hypergraph product of Tillich and Zemor~\cite{tillich2014quantum}.
Given a chain complex $C_0 \rightarrow_{\delta_0} C_1$ we can define a new chain complex $ \tilde{C}_{-1} \rightarrow_{\tilde{\delta}_{-1}} \tilde{C}_{0} \rightarrow_{\tilde{\delta}_0} \tilde{C_{1}} $ of the form
\begin{equation}
C_{0} \otimes C_{1} \rightarrow_{\tilde{\delta}_{-1}} (C_{0} \otimes C_{0}) \oplus (C_{1} \otimes C_{1}) \rightarrow_{\tilde{\delta}_0} C_{1} \otimes C_{0}.
\end{equation}
The notation $\otimes$ represents the tensor product. For example, if $a \in C_{0}$ and $b \in C_{1}$ then $a \otimes b \in C_{0} \otimes C_{1} $, and the space $C_{0} \otimes C_{1} $ further contains any linear combinations of such vectors. The symbol $ \oplus$ represents a direct sum. For instance, vectors in $(C_{0} \otimes C_{0}) \oplus (C_{1} \otimes C_{1})$ can be written as $w =u \oplus v$ where $u \in (C_{0} \otimes C_{0})$ and $v \in (C_{1} \otimes C_{1})$. All vectors should be read as column vectors and so the direct sum of vectors can also be read as stacking these vectors
\begin{equation}
u \oplus v = \left( \begin{array}{c}
u \\
v \end{array}
\right) .
\end{equation}
We will use the weight identities $| u \otimes v| = |u| \cdot |v|$ and $| u \oplus v| = |u| + |v|$. The boundary map $\tilde{\delta}_{-1}$ is defined such that for product vectors $a \otimes b \in C_{0} \otimes C_{1} $, we have
\begin{equation}
\label{BoundMap}
\tilde{\delta}_{-1} ( a \otimes b ) = (a \otimes (\delta_0^T b) ) \oplus ((\delta_0a) \otimes b ),
\end{equation}
and it extends linearly to non-product vectors. This is often more concisely denoted as $\tilde{\delta}_{-1} = ({\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}} \otimes \delta_0^T) \oplus (\delta_0 \otimes {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}})$. The boundary map $\tilde{\delta}_0$ is defined such that for product vectors $a \otimes b \in C_{0} \otimes C_{0} $ and $c \otimes d \in C_{1} \otimes C_{1} $, we have
\begin{equation}
\tilde{\delta}_0 ( (a \otimes b) \oplus (c \otimes d) ) = (( \delta_0 a) \otimes b )+ (c \otimes (\delta_0^T d) ) ,
\end{equation}
and again extending linearly to non-product vectors. Both the new boundary maps can also be represented in block matrix form
\begin{align}
\tilde{\delta}_{-1} & = \left( \begin{array}{c}
{\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}} \otimes \delta_0^T \\
\delta_0 \otimes {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}
\end{array} \right) , \\ \nonumber
\tilde{\delta}_0 & = \left( \begin{array}{cc}
\delta_0 \otimes {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}} & {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}} \otimes \delta_0^T
\end{array} \right) .
\end{align}
From here it is easy to verify that they satisfy the requirement that $ \tilde{\delta}_0 \tilde{\delta}_{-1} = 2( \delta_0 \otimes \delta_0^T) = 0$, where we have used that all mathematics is being performed modulo 2. These matrices fully characterise the new chain complex and from them we can find graphs of the sort shown in Fig.~\ref{Fig_Hasse}. We give a graphical overview in Fig.~\ref{Fig_SingleHP}.
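The construction is short enough to state as code. The following minimal sketch (our own helper, using numpy's kron for the tensor product) builds $\tilde{\delta}_{-1}$ and $\tilde{\delta}_0$ from an input $\delta_0$ and verifies the chain-complex condition; applied to the cyclic 3-bit repetition code it reproduces the boundary maps of the familiar $3\times 3$ toric code with parameters $[[18,2,3]]$:
\begin{verbatim}
import numpy as np

def hypergraph_product(delta0):
    # Single homological product of delta0 : C_0 -> C_1 with itself.
    n1, n0 = delta0.shape
    I0, I1 = np.eye(n0, dtype=int), np.eye(n1, dtype=int)
    d_m1 = np.vstack([np.kron(I0, delta0.T),    # 1 (x) delta0^T
                      np.kron(delta0, I1)])     # delta0 (x) 1
    d_0 = np.hstack([np.kron(delta0, I0),       # delta0 (x) 1
                     np.kron(I1, delta0.T)])    # 1 (x) delta0^T
    assert not np.any(d_0 @ d_m1 % 2)           # chain-complex condition
    return d_m1, d_0

# Cyclic 3-bit repetition code in, 3x3 toric code out.
delta0 = np.array([[1, 1, 0], [0, 1, 1], [1, 0, 1]])
d_m1, d_0 = hypergraph_product(delta0)
print(d_0.shape[1])   # 18 qubits = n_0^2 + n_1^2
\end{verbatim}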
\begin{figure*}
\includegraphics{HomoProduct1.pdf}
\caption{An overview of a single application of the homological product to generate a length-2 chain complex from a length-1 chain complex (that can be viewed as a classical code). In (a) we label the chain complex under the assumption that it defines a quantum code, and where the subscripts are consistent with the main text. In (b) we label the chain complex under the assumption that it defines a classical code. In order that $\tilde{C}_0$ denotes the bits, we have incremented all the new subscripts by 1. Throughout we use rectangles to show a collection of bit/qubit vertices; we use ovals to show a collection of checks; and diamonds to show a collection of metachecks.}
\label{Fig_SingleHP}
\end{figure*}
Now we discuss the parameters of this new structure, with some of these results obtained in Ref.~\cite{tillich2014quantum}. Simple dimension counting tells us that the new chain complex has
\begin{align}
\label{Ntidle}
\tilde{n}_{-1} & = n_0 n_1 , \\ \nonumber
\tilde{n}_0 & = n_0^2 + n_1^2 , \\ \nonumber
\tilde{n}_1 &= n_0 n_1 .
\end{align}
The dimensions of the homology classes are more involved, but a well known result from homology theory (the K\"{u}nneth formula~\cite{hatcher2002algebraic,bravyi2014homological}) tells us that
\begin{align}
\label{Ktidle}
\tilde{k}_{-1}& = k_0 k_1 , \\ \nonumber
\tilde{k}_0 & = k_0^2 + k_1^2 , \\ \nonumber
\tilde{k}_1 &= k_1 k_0 .
\end{align}
The distances are trickier again to establish and are not standard quantities in homology theory. Nevertheless, one can show that
\begin{align} \label{Eq_distances1}
\tilde{d}_{-1} & = d_0 d_0^T , \\ \label{Eq_distances2}
\tilde{d}_0^T & = d_0 d_0^T , \\ \label{Eq_distances3}
\tilde{d}_0 & \geq \mathrm{min} ( d_0 , d_0^T) , \\ \label{Eq_distances4}
\tilde{d}_{-1}^T & \geq \mathrm{min} ( d_0 , d_0^T) .
\end{align}
We provide proofs in App.~\ref{App_distances} for Eq.~\eqref{Eq_distances1} and Eq.~\eqref{Eq_distances2}. The results of Eq.~\eqref{Eq_distances3} and Eq.~\eqref{Eq_distances4} were shown by Tillich and Zemor~\cite{tillich2014quantum} but we give an independent proof in the homological formalism in App.~\ref{App_distances}.
Here we instead focus on the following lemma
\begin{lem}[First soundness lemma]
\label{Lem_Coexpand}
Let $C_0 \rightarrow_{\delta_0} C_1$ be a chain complex. Applying the above homological product we obtain a new chain complex where the map $\tilde{\delta}_{0}^T$ is $(t ,f)$-sound and $\tilde{\delta}_{-1}$ is $(t ,f)$-sound with $f(x)=x^2/4$ and $t=\mathrm{min}(d_0, d_0^T)$.
\end{lem}
We make no assumptions about the soundness properties of the original chain complex but find that soundness emerges due to the nature of the homological product. However, if one knows that the original chain complex is sound, one could prove a stronger soundness result (with $f$ growing slower than $x^2/4$) for the new chain complex. We prove this lemma in App.~\ref{App_Coexpand1} and next discuss its implications.
Using the above homological product, we can construct a quantum code with parameters $[[ \tilde{n}_0, \tilde{k}_0, d_Q ]]$ where $d_Q=\mathrm{min}[\tilde{d}_0^T, \tilde{d}_0]$. These codes will not necessarily support single-shot error correction because the soundness property in Lem.~\ref{Lem_Coexpand} is not the property required by Thm.~\ref{THM_main}, which requires that $\tilde{\delta}_{0}$ and $\tilde{\delta}_{-1}^T$ have good soundness properties.
Why prove Lem.~\ref{Lem_Coexpand} if it does not directly provide quantum codes with single-shot capabilities? First, in the next section we will make a second application of the homological product and Lem.~\ref{Lem_Coexpand} will be used, and so it is a stepping stone result. Second, Lem.~\ref{Lem_Coexpand} is highly instructive as it gives a way to construct classical codes that exhibit single-shot error correction. Let us explore this second point further. A classical code with metachecks needs three layers of structure (recall Fig.~\ref{Fig_Hasse}) and our convention is that the subscript $0$ in $C_0$ always denotes the bits or qubits. So for a classical code with metachecks, we want a chain complex of the form $ \tilde{C_{0}} \rightarrow_{\tilde{\delta}_{0}} \tilde{C_{1}} \rightarrow_{\tilde{\delta}_1} \tilde{C_{2}} $. We can use the chain complex generated by the homological product by simply increasing all the subscripts by 1. With these incremented subscripts, Lem.~\ref{Lem_Coexpand} tells us that $\tilde{\delta}_{0}$ is $(d_0^T ,f)$-sound with $f(x)=x^2/4$. It is easy to get lost in subscripts, so we emphasize that the important feature is that soundness runs in the direction from bits/qubits to checks. This is illustrated in Fig.~\ref{Fig_SingleHP} where it clearly runs the correct way for the classical code but not the quantum code. For instance, the 2D toric code and 2D Ising code can both be obtained by applying the homological product to a classical repetition code, but only the 2D Ising code exhibits good soundness (recall Fig.~\ref{Fig_ToricIsing}).
Next, we comment on the redundancy of the new quantum code.
\begin{claim}[Updated redundancy]
\label{redundancyUpdate1}
Let $C_0 \rightarrow_{\delta_0} C_1$ be a chain complex associated with an $[[n,k,d]]$ classical code with check redundancy $\upsilon=n_1/(n_0-k_0)$. Applying the above homological product we obtain a new chain complex and associated quantum code with check redundancy
\begin{equation}
\tilde{\upsilon} = \upsilon \frac{ n}{\upsilon(n-k) + k} < 2 \upsilon.
\end{equation}
Notice that if $\upsilon=1$ then $\tilde{\upsilon}=1$.
\end{claim}
To prove this, we begin with the definition of redundancy and apply Eqs.~\eqref{Ntidle} and \eqref{Ktidle}:
\begin{align}
\tilde{\upsilon} & =\frac{\tilde{n}_1+\tilde{n}_{-1}}{\tilde{n}_0-\tilde{k}_{0}} \\
& =\frac{2 n_0 n_1 }{n_0^2 + n_1^2 - k_0^2 - k_1^2} \\
& =\frac{2 n_0 n_1 }{(n_0-k_0)(n_0+k_0) + (n_1-k_1)(n_1+k_1) }.
\end{align}
Using that for a length-1 chain complex $n_1-k_1=n_0-k_0$ and the definition of $\upsilon$, we find
\begin{align}
\tilde{\upsilon} & = \frac{2 n_0 n_1 }{(n_0-k_0)(n_0+k_0+ n_1+k_1) } \\ \nonumber
& = 2 \upsilon \frac{n_0 }{n_0+k_0+ n_1+k_1} .
\end{align}
Since the fraction is clearly less than 1, we have that $\tilde{\upsilon} < 2 \upsilon$. Furthermore, using $n_1-k_1=n_0-k_0$ to eliminate $k_1$ and $\upsilon=n_1/(n_0-k_0)$ to eliminate $n_1$, we obtain
\begin{align}
\tilde{\upsilon} & = \upsilon \frac{n_0 }{\upsilon (n_0-k_0) + k_0} ,
\end{align}
and the identification $n=n_0$ and $k=k_0$ gives the final expression for $\tilde{\upsilon}$.
We conclude this section by considering a simple application of the above homological product. Given a classical $[n,k,d]$ code, we can associate many different length-1 chain complexes, depending on whether there is redundancy in the check operators. However, for any code there always exists a minimal chain complex where there is no redundancy ($\upsilon=1$). For such a minimal chain complex, we have $n_1 = n - k$, $k_1=0$ and $d_0^T=\infty$. This is useful as it allows us to make statements that depend only on well known code properties.
\begin{corollary}[Quantum code constructions]
\label{Cor_QC_constructions}
Consider a classical $[n,k,d]$ code. Applying the above homological product to the minimal chain complex of this code, we obtain a $[[ 2n(n-k)+k^2 , k^2 , d ]]$ quantum code with no check redundancy.
\end{corollary}
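As a worked instance (our own illustrative choice of input): feeding in the $[7,4,3]$ Hamming code should give a $[[2 \cdot 7 \cdot 3 + 4^2, 4^2, 3]] = [[58,16,3]]$ quantum code. A sketch reusing the hypergraph_product and betti helpers from the earlier sketches:
\begin{verbatim}
import numpy as np

# Parity check matrix of the [7,4,3] Hamming code (no redundancy).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

d_m1, d_0 = hypergraph_product(H)
print(d_0.shape[1], betti(d_0, d_m1))   # 58 qubits, 16 logical qubits
\end{verbatim}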
\subsection{A second application of the homological product}
\label{Sec_secondHomo}
\begin{figure*}
\includegraphics{HomoProduct2.pdf}
\caption{An overview of the second application of the homological product to generate a length-4 chain complex from a length-2 chain complex (that can be viewed as a quantum code).}
\label{Fig_DoubleHP}
\end{figure*}
For a quantum error correcting code with metachecks we need a length-4 chain complex, which can be constructed by applying the homological product to a length-2 chain complex. We use breve ornaments over symbols in this section to identify matrices, variables and vector spaces associated with the length-4 chain complex, as follows
\begin{equation}
\label{fourDimChain}
\breve{C}_{-2} \rightarrow_{\breve{\delta}_{-2}} \breve{C}_{-1} \rightarrow_{\breve{\delta}_{-1}} \breve{C}_{0} \rightarrow_{\breve{\delta}_0} \breve{C}_{1} \rightarrow_{\breve{\delta}_1} \breve{C}_{2} .
\end{equation}
The homological product between a pair of 2-dimensional chain complexes will generate a length-4 chain complex according to the general rule that
\begin{equation}
\breve{C}_{m} = \bigoplus_{i-j =m} \tilde{C_{i}} \otimes \tilde{C_{j}} .
\end{equation}
The boundary maps are illustrated in Fig.~\ref{Fig_DoubleHP} and can be written as block matrices as follows
\begin{align}
\breve{\delta}_{-2} & = \left( \begin{array}{c}
{\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}} \otimes \tilde{\delta}_0^T \\
\tilde{\delta}_{-1} \otimes {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}
\end{array} \right) , \\
\breve{\delta}_{-1} & = \left( \begin{array}{ccc}
{\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}} \otimes \tilde{\delta}_{-1}^T & & 0 \\
\tilde{\delta}_{-1} \otimes {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}} & & {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}} \otimes \tilde{\delta}_0^T \\
0 & & \tilde{\delta}_{0} \otimes {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}
\end{array} \right) , \\
\breve{\delta}_{0} & = \left( \begin{array}{ccccc}
\tilde{\delta}_{-1}\otimes {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}} & & {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}} \otimes \tilde{\delta}_{-1}^T & & 0 \\
0 & & \tilde{\delta}_0 \otimes {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}} && {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}} \otimes \tilde{\delta}_0^T
\end{array} \right) , \\
\breve{\delta}_{1} & = \left( \begin{array}{ccc}
\tilde{\delta}_{0}\otimes {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}} & & {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}} \otimes \tilde{\delta}_{-1}^T \\
\end{array} \right) .
\end{align}
One can verify that $\breve{\delta}_{j+1}\breve{\delta}_{j}=0$ for all $j$ follows from the same condition on the $\tilde{\delta}$ matrices. As before, one obtains the relations
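These block matrices translate directly into code. The sketch below (our own helper, assuming a valid input length-2 complex such as the output of the hypergraph_product sketch above) assembles the four breve boundary maps and verifies $\breve{\delta}_{j+1}\breve{\delta}_{j}=0$ for all $j$:
\begin{verbatim}
import numpy as np

def double_product(dm1, d0):
    # Breve boundary maps of the length-4 complex, Eq. (fourDimChain).
    nm1, n0, n1 = dm1.shape[1], d0.shape[1], d0.shape[0]
    I = lambda n: np.eye(n, dtype=int)
    Z = lambda r, c: np.zeros((r, c), dtype=int)
    b_m2 = np.vstack([np.kron(I(nm1), d0.T), np.kron(dm1, I(n1))])
    b_m1 = np.block([
        [np.kron(I(nm1), dm1.T),  Z(nm1 * nm1, n0 * n1)],
        [np.kron(dm1, I(n0)),     np.kron(I(n0), d0.T)],
        [Z(n1 * n1, nm1 * n0),    np.kron(d0, I(n1))]])
    b_0 = np.block([
        [np.kron(dm1, I(nm1)),  np.kron(I(n0), dm1.T), Z(n0*nm1, n1*n1)],
        [Z(n1*n0, nm1*nm1),     np.kron(d0, I(n0)),    np.kron(I(n1), d0.T)]])
    b_1 = np.hstack([np.kron(d0, I(nm1)), np.kron(I(n1), dm1.T)])
    for a, b in [(b_m1, b_m2), (b_0, b_m1), (b_1, b_0)]:
        assert not np.any(a @ b % 2)   # breve{delta}_{j+1} breve{delta}_j = 0
    return b_m2, b_m1, b_0, b_1

# e.g. double_product(*hypergraph_product(delta0)) builds the length-4
# complex underlying the construction of Thm. (single-shot codes).
\end{verbatim}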
\begin{align}
\label{NandKbreve}
\breve{n}_m & = \sum_{i-j =m} \tilde{n}_i \tilde{n}_j , \\ \nonumber
\breve{k}_m & = \sum_{i-j =m} \tilde{k}_i \tilde{k}_j ,
\end{align}
where the first is simple dimension counting and the second line follows from the K\"{u}nneth formula.
The distances are lower bounded as follows
\begin{align}
\label{breveDistances}
\breve{d}_0 , \breve{d}_{-1}^T & \geq \mathrm{min}[ \tilde{d}_{-1} , \mathrm{max}[ \tilde{d}_{0} , \tilde{d}_{-1}^T ] , \tilde{d}_{0}^T ] , \\ \nonumber
\breve{d}_{1} , \breve{d}_{-2}^T & \geq \mathrm{min}[ \tilde{d}_{0} , \tilde{d}_{-1}^T ] ,
\end{align}
which we prove in App.~\ref{App_distances2}. Note that the distance will often be significantly larger than these lower bounds.
Our main technical goal is to prove the following soundness result.
\begin{lem}[Second soundness lemma]
\label{Thm_Coexpand}
Let $\tilde{C}_{-1} \rightarrow_{\tilde{\delta}_{-1}} \tilde{C}_0 \rightarrow_{\tilde{\delta}_0} \tilde{C}_1$ be a chain complex such that $\tilde{\delta}_{0}^T$ is $(t ,f)$-sound and $\tilde{\delta}_{-1}$ is $(t ,f)$-sound with $f(x)=x^2/4$. Applying the above homological product we obtain a new length-4 chain complex (as in Eq.~\ref{fourDimChain}) where the map $\breve{\delta}_{0}$ is $(t ,g)$-sound and $\breve{\delta}_{-1}^T$ is $(t ,g)$-sound with soundness function $g(x)=x^3/4$.
\end{lem}
We show the direction of the resulting soundness in Fig.~\ref{Fig_DoubleHP} and this should be contrasted with the direction of the soundness arrows in Fig.~\ref{Fig_SingleHP}. We will only prove the results for $\breve{\delta}_{0}$ with the proof for $\breve{\delta}_{-1}^T$ being essentially identical.
Let us first discuss how the problem can be divided into three subproblems. Let $s \in \mathrm{im}( \breve{\delta}_0)$ so there must exist at least one $r \in \breve{C}_0$ such that $\breve{\delta}_0r=s $. We divide $r$ into components
\begin{equation}
r= \left( \begin{array}{c} r_a \\
r_b \\ r_c
\end{array} \right) ,
\end{equation}
and consider two distinct images
\begin{align}
\label{Eq_images}
s_{L} & = (\tilde{\delta}_{-1} \otimes {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}) r_a + ({\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}} \otimes \tilde{\delta}_{-1}^T)r_b , \\ \label{Eq_images2}
s_{R} & = ( {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}} \otimes \tilde{\delta}_0^T ) r_c + (\tilde{\delta}_{0} \otimes {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}) r_b ,
\end{align}
where
\begin{align}
\label{Eq_imageRelations}
s = \breve{\delta}_0 r = \left( \begin{array}{c}
s_{L} \\
s_{R}
\end{array} \right) .
\end{align}
One always has the weight relations $|r|=|r_a|+|r_b|+|r_c|$ and $|s|=|s_{L}|+ |s_{R}|$.
For a syndrome that passes all metachecks we have that
\begin{align}
\breve{\delta}_1 s & = (\tilde{\delta}_{0} \otimes {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}) s_L + ({\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}} \otimes \tilde{\delta}_{-1}^T ) s_R = 0 ,
\end{align}
which entails that
\begin{align}
m:=(\tilde{\delta}_{0} \otimes {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}) s_L & = ({\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}} \otimes \tilde{\delta}_{-1}^T ) s_R ,
\end{align}
where we have defined this new quantity to be $m$. Given a physical error pattern $r$ that generates the syndrome (as in Eqs.~\eqref{Eq_images}-\eqref{Eq_images2}) the metachecks are always passed and one finds that
\begin{align}
\label{mEq}
m = (\tilde{\delta}_0 \otimes \tilde{\delta}_{-1}^T)r_b .
\end{align}
It is interesting that this depends only on the $r_b$ component of $r$. We can first try to find a low weight $r_b$ that solves Eq.~\eqref{mEq}. This leads to the following partial solution to the problem.
\begin{lem}[Partial soundness result]
\label{Lem_MiniCoexpand}
Let $\tilde{C}_{-1} \rightarrow_{\tilde{\delta}_{-1}} \tilde{C}_0 \rightarrow_{\tilde{\delta}_0} \tilde{C}_1$ be a chain complex. Applying the above homological product we obtain a new length-4 chain complex (as in Eq.~\ref{fourDimChain}) with the following property. For any $s \in \mathrm{im} ( \breve{\delta}_0)$ there exists an $r_b$ with the following properties
\begin{enumerate}
\item correctness:
$(\tilde{\delta}_0 \otimes \tilde{\delta}_{-1}^T)r_b = m = (\tilde{\delta}_{0} \otimes {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}) s_L = ({\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}} \otimes \tilde{\delta}_{-1}^T ) s_R$;
\item low weight: $|r_b| \leq |s_L| \cdot |s_R|$;
\item small $s_L$ remainder: $ s_{L} - ({\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}} \otimes \tilde{\delta}_{-1}^T)r_b = \sum_i \alpha_i \otimes \hat{a}_i$ where $\hat{a}_i$ are unit vectors and $\alpha_i \in \ker(\tilde{\delta}_0)$. There are at most $|s_L|$ nonzero $\alpha_i $ and these are bounded in size $|\alpha_i| \leq |s_L|$;
\item small $s_R$ remainder: $ s_{R} - (\tilde{\delta}_{0} \otimes {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}) r_b = \sum_i \hat{b}_i \otimes \beta_i $ where $\hat{b}_i$ are unit vectors and $\beta_i \in \ker{\delta}_{-1}^T$. There are at most $|s_R|$ nonzero $\beta_i $ and these are bounded in size $|\beta_i| \leq |s_R|$.
\end{enumerate}
\end{lem}
The proof has a similar flavour to the earlier soundness result and is deferred until App.~\ref{App_Partial_coexpand}. Notice that the lemma does not require any soundness of the initial chain complex. Next, we want to find low-weight $r_a$ and $r_c$ that provide the remaining elements of the syndrome as follows
\begin{align}
\label{Eq_remainder}
(\tilde{\delta}_{-1} \otimes {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}) r_a & = s_{L} - ({\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}} \otimes \tilde{\delta}_{-1}^T)r_b , \\
( {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}} \otimes \tilde{\delta}_0^T ) r_c & = s_{R} - (\tilde{\delta}_{0} \otimes {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}) r_b .
\end{align}
Fortunately, Lem.~\ref{Lem_MiniCoexpand} ensures that these remainder syndromes are ``small'' in the defined sense. We may next use the following observation
\begin{claim}[Inheritance of soundness]
\label{Claim_inherit}
If $\tilde{\delta}_{-1}$ is $(t,f)$-sound then $\tilde{\delta}_{-1} \otimes {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}$ is also sound in the following strong sense. Let $q \in \mathrm{im} ( \tilde{\delta}_{-1} \otimes {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}} )$ with decomposition $q = \sum_i \alpha_i \otimes \hat{a}_i$ such that $|\alpha_i| < t$; then there exists an $r_a$ such that $(\tilde{\delta}_{-1} \otimes {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}})r_a = q$ and $| r_a | \leq \sum_i f( |\alpha_i| )$. A similar result holds when we interchange the order of tensor products and consider $\tilde{\delta}_0^T$.
\end{claim}
The proof is fairly straightforward. Since $|\alpha_i| < t$ for all $i$ and by assumption $\tilde{\delta}_{-1}$ is $(t,f)$-sound, there must exist $\gamma_i$ such that $\tilde{\delta}_{-1} \gamma_i = \alpha_i$ and $|\gamma_i| \leq f( |\alpha_i| )$. By linearity, there exists $r_a=\sum_i \gamma_i \otimes \hat{a}_i$ such that $(\tilde{\delta}_{-1} \otimes {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}})r_a = q$ and $|r_a| \leq \sum_i |\gamma_i | \leq \sum_i f( |\alpha_i| )$.
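For concreteness, the following \texttt{python} sketch (purely illustrative; the toy boundary map, preimage and unit vector are arbitrary choices rather than objects from our construction) verifies the linearity identity $(\tilde{\delta}_{-1} \otimes \mathrm{id})(\gamma_i \otimes \hat{a}_i) = (\tilde{\delta}_{-1}\gamma_i) \otimes \hat{a}_i$ over GF(2) that underpins this argument:
\begin{verbatim}
import numpy as np

# Toy boundary map over GF(2); any binary matrix serves here.
delta = np.array([[1, 1, 0],
                  [0, 1, 1]])
gamma = np.array([1, 0, 1])       # candidate preimage
alpha = delta.dot(gamma) % 2      # alpha = delta gamma

I = np.eye(4, dtype=int)
a_hat = I[:, 2]                   # unit vector

# (delta (x) id)(gamma (x) a_hat) == (delta gamma) (x) a_hat  (mod 2)
lhs = np.kron(delta, I).dot(np.kron(gamma, a_hat)) % 2
rhs = np.kron(alpha, a_hat) % 2
assert np.array_equal(lhs, rhs)
\end{verbatim}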
Next, we put these pieces together. Combining Lem.~\ref{Lem_MiniCoexpand} and Claim.~\ref{Claim_inherit} together with the assumption that $|s| < t$ one immediately obtains that there exist $r_a$ and $r_c$ solving Eq.~\eqref{Eq_remainder} with weights upper bounded by
\begin{align}
|r_a| & \leq |s_L| f( |s_L|) , \\
|r_c| & \leq |s_R| f( |s_R|) .
\end{align}
Therefore, we have the total weight
\begin{align}
|r| & \leq |s_L| f( |s_L|) + |s_L| \cdot |s_R|+ |s_R| f( |s_R|) .
\end{align}
We take $f(x)=x^2/4$ as stated in Thm.~\ref{Thm_Coexpand}, which leads to
\begin{align}
|r|& \leq \frac{1}{4}|s_L|^3 + |s_L| \cdot |s_R|+ \frac{1}{4} |s_R|^3 \\
&\leq \frac{1}{4}( |s_L| + |s_R|)^3 \\
&= \frac{1}{4} |s|^3 .
\end{align}
Here the second inequality holds because either $|s_L| \cdot |s_R| = 0$ or $|s_L| + |s_R| \geq 2$, so that $|s_L| \cdot |s_R| \leq \tfrac{3}{4} |s_L| \cdot |s_R| (|s_L| + |s_R|)$, which is the cross term in the expansion of $\tfrac{1}{4}(|s_L|+|s_R|)^3$. Therefore, we have proven $(t,g)$-soundness of $\breve{\delta}_0$ with $g(x)=x^3/4$. This completes the proof that Thm.~\ref{Thm_Coexpand} follows from Lem.~\ref{Lem_MiniCoexpand}.
Next, we comment on the check redundancy of these codes.
\begin{claim}[Updated redundancy part 2]
\label{redundancyUpdate2}
Consider a length-2 chain complex and associated quantum code with check redundancy $\tilde{\upsilon}$. Applying the above homological product we obtain a length-4 chain complex and new quantum code with check redundancy $\breve{\upsilon} < 2 \tilde{\upsilon}$.
\end{claim}
To prove this we recall the definition of redundancy and then use Eqs.~\eqref{NandKbreve} to obtain
\begin{align}
\breve{\upsilon} & = \frac{\breve{n}_1 + \breve{n}_{-1}}{\breve{n}_0 - \breve{k}_0 } \\
&= \frac{2 \tilde{n}_0 (\tilde{n}_1 + \tilde{n}_{-1}) }{ (\tilde{n}^2_{-1} + \tilde{n}^2_0+ \tilde{n}^2_{1} ) - (\tilde{k}^2_{-1} + \tilde{k}^2_0+ \tilde{k}^2_{1} ) } .
\end{align}
Since $\tilde{n}_j \geq \tilde{k}_j$ for all $j$, the denominator is greater than $\tilde{n}^2_0-\tilde{k}^2_0$, which itself can be factorised as $(\tilde{n}_0-\tilde{k}_0)(\tilde{n}_0+\tilde{k}_0)$ and so
\begin{align}
\breve{\upsilon} & \leq \frac{2 \tilde{n}_0 (\tilde{n}_1 + \tilde{n}_{-1}) }{ (\tilde{n}_0-\tilde{k}_0)(\tilde{n}_0+\tilde{k}_0) } , \\
& = 2 \left(\frac{\tilde{n}_1 + \tilde{n}_{-1} }{ \tilde{n}_0-\tilde{k}_0 } \right) \left( \frac{\tilde{n}_0}{ \tilde{n}_0+\tilde{k}_0 } \right) , \\
& = 2 \tilde{\upsilon} \left(\frac{ \tilde{n}_0 }{ \tilde{n}_0+\tilde{k}_0 } \right) .
\end{align}
Last, we use the loose bound that the fraction is strictly less than 1 (since $\tilde{k}_0 > 0$ for any code encoding at least one logical qubit) to conclude that $\breve{\upsilon} < 2 \tilde{\upsilon} $ as claimed.
\subsection{Combining homological products}
Here we combine the results of the preceding two subsections. Parameters carrying a breve are first expressed in terms of parameters carrying a tilde, and then the tilde parameters are replaced with unornamented parameters.
\begin{align}
\breve{n}_{0} & = \tilde{n}^2_1+ \tilde{n}_0^2 + \tilde{n}^2_{-1} = (n_0^2+ n_1^2)^2 + 2n_0^2n_1^2 , \\ \nonumber
\breve{n}_{1} = \breve{n}_{-1} & = \tilde{n}_0(\tilde{n}_1+ \tilde{n}_{-1}) = 2 (n_0^2+n_1^2)n_0 n_1 , \\ \nonumber
\breve{k}_{0} & = \tilde{k}^2_1+ \tilde{k}_0^2 + \tilde{k}^2_{-1} = (k_0^2+ k_1^2)^2 + 2k_0^2k_1^2 , \\ \nonumber
\breve{k}_{1} = \breve{k}_{-1} & = \tilde{k}_0(\tilde{k}_1+\tilde{k}_{-1})= 2 (k_0^2+k_1^2)k_0 k_1 , \\ \nonumber
\breve{d}_{0} = \breve{d}_{-1}^T & \geq \mathrm{min}[ d_0 , d_0^T ] , \\ \nonumber
\breve{d}_{1} = \breve{d}_{-2}^T & \geq \mathrm{min}[ d_0 , d_0^T ] .
\end{align}
Furthermore, by combining Claim.~\ref{redundancyUpdate1} and Claim.~\ref{redundancyUpdate2} we obtain an upper bound on the check redundancy
\begin{align}
\breve{\upsilon} < 2 \tilde{\upsilon}= 2 \upsilon \frac{ n}{\upsilon(n-k) + k} ,
\end{align}
where $ \upsilon$ is the check redundancy of the $[n,k,d]$ classical code associated with the initial length-1 chain complex.
The simplest case is when we use a minimal chain complex representing the initial $[n,k,d]$ classical code. Then $\upsilon=1$, $k_1=0$ and $n_1=n-k$ and the above equations simplify to
\begin{align}
\breve{n}_{0} & = n^4 + 4 n^2 (n-k)^2 + (n-k)^4 , \\ \nonumber
\breve{n}_{1} = \breve{n}_{-1} & = 2n (n-k) (n^2+(n-k)^2) , \\ \nonumber
\breve{k}_{0} & = k^4 , \\ \nonumber
\breve{k}_{1} = \breve{k}_{-1} & = 0 , \\ \nonumber
\breve{\upsilon} & < 2 , \\ \nonumber
\breve{d}_{0} = \breve{d}_{-1}^T & \geq d .
\end{align}
We also know that $\breve{d}_{1} = \breve{d}_{-2}^T = \infty$ as a consequence of $\breve{k}_{1} = \breve{k}_{-1} = 0$. We make the following identifications: $\breve{n}_{0}$ gives the number of physical qubits $n_Q$; $\breve{k}_{0}$ is the number of logical qubits $k_Q$; $ \breve{d}_{0} $ and $\breve{d}_{-1}^T$ give the qubit error distance $d_Q$; and $\breve{d}_{1}$ and $\breve{d}_{-2}^T$ give the single-shot distance $d_{ss}$. This proves Thm.~\ref{THM_construct}. We remark that in the final stages of this research, Zeng and Pryadko posted a preprint~\cite{zeng2018higher} that shows that the distance is much better than suggested by our bounds, in particular $\breve{d}_{0} = \breve{d}_{-1}^T = d^2 $.
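As a sanity check on these closed-form expressions, the following short \texttt{python} sketch (illustrative only; it simply evaluates the equations above) reproduces the parameters and redundancy quoted in the first two rows of Table~\ref{tab_examples}, which use minimal chain complexes:
\begin{verbatim}
# Parameters of the double homological product built from a minimal
# chain complex of an [n, k, d] classical code (so upsilon = 1).
def double_product_params(n, k):
    r = n - k                                  # number of checks
    n_q = n**4 + 4 * n**2 * r**2 + r**4        # physical qubits
    n_1 = 2 * n * r * (n**2 + r**2)            # checks per X/Z sector
    k_q = k**4                                 # logical qubits
    redundancy = 2 * n_1 / (n_q - k_q)         # check redundancy
    return n_q, k_q, redundancy

print(double_product_params(3, 1))   # (241, 1, 1.3)
print(double_product_params(4, 1))   # (913, 1, 1.3157894736842106)
\end{verbatim}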
\begin{table*}
\centering
\begin{tabular}{|c|c|c|c|c|c||c|c|c|c|c|c|c|}
\multicolumn{6}{|c||}{\textbf{Input classical code}} & \multicolumn{7}{|c|}{\textbf{Double homological product code}} \\ &
\multicolumn{3}{|c|}{parameters} & max. check & redundancy & \multicolumn{4}{c|}{parameters} & max. check & mean check & redundancy \\
$\delta$ & $n$ & $k$ & $d$ & weight & $\upsilon$ & $n_Q$ & $k_Q$ & $d_Q$ & $d_{ss}$ & weight & weight & $\breve{\upsilon} $ \\ \hline
$\left( \begin{array}{ccc}
1 & 1 & 0 \\
0 & 1 & 1 \\
\end{array} \right)$ & 3 & 1 & 3 & 2 & 1 & 241 & 1 & 9 & $\infty$ & 6 & 4.87179 & 1.3 \\
$\left( \begin{array}{cccc}
1 & 1 & 0 & 0 \\
0 & 1 & 1 & 0 \\
0 & 0 & 1 & 1 \\
\end{array} \right)$ & 4 & 1 & 4 & 2 & 1 & 913 & 1 & 16 & $\infty$ & 6 & 5.18 & 1.31579 \\
$\left( \begin{array}{ccc}
1 & 1 & 0 \\
0 & 1 & 1 \\
1 & 0 & 1 \\
\end{array} \right)$ & 3 & 1 & 3 & 2 & 1.5 & 486 & 6 & 9 & 3 & 6 & 6 & 1.33884 \\
$\left( \begin{array}{cccccc}
1 & 1 & 0 & 0 & 0 & 0 \\
0 & 1 & 1 & 0 & 1 & 0 \\
0 & 0 & 1 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 1 \\
\end{array} \right)$ & 6 & 2 & 4 & 3 & 1 & 3856 & 16 & 16 & $\infty$ & 8 & 5.48077 & 1.3 \\
\end{tabular}
\caption{Some example small classical codes used to generate a quantum code with good soundness through a double application of the homological product. Many of the parameters come directly from equations in the main text. The mean check weight and redundancy are calculated exactly by constructing the explicit parity check matrices. Our table uses the improved distance $d_Q$ results of Zeng and Pryadko~\cite{zeng2018higher}.} \label{tab_examples}
\end{table*}
In Table~\ref{tab_examples} we provide some concrete examples. These are the smallest examples since they use very small initial classical codes, though the resulting quantum codes are much larger. The first three examples correspond to 4D toric codes with cubic tiling, either with closed boundary conditions (examples 1 and 2) or periodic boundary conditions (example 3). The last example does not correspond to any previous code that we know of. We have deliberately chosen codes that have low check weight as these will be the most experimentally feasible. Our constructions could potentially be slightly improved using a generalisation of the hypergraph improvements analogous to the use of rotated toric lattices~\cite{kovalev12}.
\section{Discussion \& Conclusions}
This is a paper of two halves. The first half was conceptual and gave a presentation of single-shot error correction. We found an intimate connection between single-shot error correction and a property called good soundness. We saw that good soundness in LDPC codes entails a macroscopic energy barrier, which further confirms a relationship between passive quantum memories and single-shot error correction. However, our results leave open whether there exist any codes with a macroscopic energy barrier that lack good soundness. Michael Beverland suggested in discussion that it would be interesting to look at whether Haah's cubic code~\cite{haah2011cubic,bravyi2013quantum} has good soundness. The Haah cubic code is notable because it does have a macroscopic energy barrier but is not a good passive quantum memory at all scales due to entropic effects. Also curious is the role of metachecks and redundancy. We saw that good soundness can be achieved by any code without any check redundancy, but the proof used a diagonalised form of the stabiliser generators that typically destroys any LDPC properties.
The second half of this paper was more technical and focused on specific code constructions capable of providing both good soundness and LDPC properties. It has long been known that homology theory provides a natural mathematical framework for CCS codes, but we saw that homology theory is especially useful when metachecks (checks on measurements) are added to the picture. It is well known that for topological codes the energy barrier and single-shot error correction are intimately related to the dimensionality of the code. We abstract away the topological structure and instead work with algebraic homological structure. While these codes no longer have a dimensionality in the geometric sense, we saw that using the homological product can imbue codes with a sort of effective dimensionality. More precisely, a double application of the homological product resulted in single-shot properties similar to 4-dimensional topological codes. Many readers will feel more comfortable with topological codes because of the conceptual and visual crutches they provide. However, topological codes are significantly limited in terms of the code parameters they can achieve due to trade-off bounds~\cite{bravyi2010tradeoffs,delfosse2013tradeoffs}. So by freeing ourselves from the constraints of topological codes and pursuing their more abstract cousins, we can seemingly benefit from many of the advantages of high-dimensional topological codes (e.g. single-shot error correction) but with improved code parameters. This prompts the question what other topological code properties might hold for homological product codes. We know that 3D and 4D topological codes can support transversal non-Clifford gates~\cite{bombin2006topological,bombin2013self,watson2015qudit,kubica2015unfolding,kubica2015universal,vasmer2018universal,campbell2017roads}, which suggests that a similar property might hold for suitably defined homological product codes.
Our code constructions married good soundness and LDPC properties, through the use of check redundancy and associated metachecks. But do any codes exist without check redundancy that are useful for single-shot error correction? A related question is whether our soundness properties are necessary conditions for single-shot error correction as well as being sufficient conditions. While finishing this research, work on quantum expander codes~\cite{leverrier18} has shown that they can perform single-shot error correction without any check redundancy. Initially, we speculated (in an early preprint) that the quantum expander codes would have good soundness, but Leverrier has shown (in private correspondence) that they do not have this property! Therefore, there is more work to be done on this topic to find a code property more permissive than soundness that encompasses all of our codes and also the quantum expander codes.
The main limitation of this work is that we restrict our attention to adversarial noise. Stochastic noise models instead distribute errors according to some probability distribution and assign a non-zero probability to every error configuration. If the probability of a high weight error is low, then we can still leverage proofs from the adversarial noise setting. However, in an independent noise model where each qubit is affected with probability $p$, a code with $n$ qubits will typically suffer around $pn$ errors. For all known quantum LDPC code families, the distance scales sublinearly, and so there is some scale at which the code is likely to suffer an error considerably larger than the code distance. Nevertheless, one is often able to prove the existence of an error correcting threshold. The crucial point is that even though some errors of weight $pn$ might not be correctable, these represent a small fraction of all weight $pn$ errors and so happen with small probability. At this point, proof techniques diverge. We can prove that this works for concatenated codes, topological codes and low-density parity check codes~\cite{kovalev2013fault}. As such, while there is a single theoretical framework for adversarial noise, there is no single theory for stochastic noise in all settings. The situation is likely the same in the setting of single-shot error correction. The pioneering work of Bombin demonstrated that three dimensional colour codes can perform single-shot error correction against a stochastic noise model~\cite{bombin2015single}, and so in this sense our results are strictly weaker. On the other hand, our approach is strictly more general as it applies to a broad range of codes, including many new code constructions such as those presented here. It is then natural to wonder what the sufficient and necessary conditions are for single-shot error correction to work against stochastic noise. It is reasonable to conjecture that any concatenated or LDPC code that meets our criteria for adversarial noise will also perform single-shot error correction against stochastic noise.
Acknowledgements.- This work was supported by the EPSRC (EP/M024261/1) and the QCDA project which has received funding from the QuantERA ERA-NET Cofund in Quantum Technologies implemented within the European Union's Horizon 2020 Programme. I would like to thank Nicolas Delfosse for his tutorial on hypergraph product codes during the FTQT 2016 workshop at the Centro de Ciencias de Benasque Pedro Pascual. Thank you to Simon Willerton, Michael Beverland, Mike Vasmer, Anthony Leverrier, Barbara Terhal and Ben Brown for conversations and comments on the manuscript. Referee 2 is thanked for their diligent attention to detail.
\section{INTRODUCTION}
The search for high redshift objects has rapidly developed in the last decades as astronomers attempt to understand the evolution of galaxies throughout the history of the Universe, with the current frontier being at redshift $z\sim 10$, or $\sim13.4$ Gyr lookback time {\citep{oesch2016, zitrin2014, coe2013}}. Since the large majority of these distant sources are very faint {($m_{AB}\sim26$ for a typical $L_*$ galaxy at $z\sim6$)} deep images of the sky are needed. The \textit{Hubble Space Telescope} has carried out a number of surveys that had the detection of high-redshift galaxies as a key science motivation, starting from the pioneering \textit{Hubble} Deep Field survey \citep[HDF; ][]{hdf}, and then continuing to improve depth and area covered thanks to technological progress offering newer instrumentation, with the \textit{Hubble} Ultra Deep Field survey \citep[HUDF; ][]{hudf}, the Great Observatories Origins Deep Survey \citep[GOODS; ][]{goods}, {the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey \citep[CANDELS; ][]{candels}, the \textit{HST} Frontier Fields \citep{hstff}, and the Brightest of Reionizing Galaxies Survey \citep[BoRG;][]{trenti2011}, among others.}
The most common techniques used to identify high redshift galaxies from broadband imaging are the Lyman-break method \citep{steidel96}, which has been widely applied to the highest redshift ($z\gtrsim4$) samples {\citep[e.g.][]{bouwens2015}}, and other photometric redshift selection methods \citep[e.g.][]{coe2006}. Due to the ubiquitous presence of hydrogen, which has a large ionization cross section, photons with $\lambda < 912$\AA{} are heavily absorbed by neutral hydrogen in its ground state, and only have a low probability of escaping from a galaxy without being absorbed. Hydrogen in the intergalactic medium also contributes to the Lyman-break, effectively absorbing a large fraction of photons emitted by a high-redshift source at $\lambda < 1216$\AA{} for sources at $z\gtrsim 4$. Although generally highly effective, the Lyman-break method has some limitations as it may preferentially select only certain subsets of the galaxy population at high-$z$, such as relatively unobscured, actively star-forming galaxies (e.g., see \citealt{stanway2008}). Recent examples of the application of this technique include \citet{calvi2016}, \citet{bouwens2016}, and \citet{hathi2010}. The Lyman break selection is a special case of multi-color photometric selection, which is most effective when a spectral break is present in the sources targeted by the survey. However, spectra of galaxies can have other characteristic features in addition to the Lyman-break, which can be observed at different wavelengths and can improve candidate selection. For instance, infrared data can be used to detect the Balmer break in $z\gtrsim5$ galaxies \citep{mobasher2005}, and photometric redshift accuracy and reliability improve when there is a large number of bands available.
Arguably, one of the most fundamental observables from high-redshift surveys is the measurement of the galaxy luminosity function (LF). Generally, studies of the LF at cosmological distances are carried out with galaxy candidates from photometric catalogs (either using photometric redshift estimations or a dropout technique) as spectroscopic samples are significantly more challenging to construct and thus limited in numbers. Even after accounting for the most recent advancements in the field, which yielded catalogs of photometric sources at $z\gtrsim 4$ including more than $10,000$ sources \citep{bouwens2015}, the LF shape is still debated, and the topic is a very active research area \citep[e.g.][]{ishigaki2017, bouwens2015, atek2015, bowler2014, bradley2012}. To go from counting galaxies to the construction of the LF, it is imperative to understand completeness and efficiency, i.e., what fraction of all the existing galaxies with a given spectral energy distribution, morphological properties and redshift range is identified in an observed sample. Accordingly, a tool able to estimate the recovery fraction is critically needed for robust LF estimations. Yet, despite the large number of high-redshift galaxy surveys carried out in the last 20 years since the original Hubble Deep Field \citep{williams96}, there is not a unified publicly available tool to estimate their completeness and source recovery. Such a software tool is not only important for the estimation of volume and luminosity functions, but also to investigate the properties of the galaxies a survey fails to detect, and reasons for missing them.
The classic approach to completeness estimates is to insert simulated galaxies in the observed images and quantify the recovery efficiency. There are two main methods typically used to create these simulated sources. One is based on starting from images of galaxies acquired in similar observations (for example at lower redshift), that are modified/re-scaled to fit the desired properties of the sample to simulate. The other one is the creation of artificial light profiles from theoretical models of the expected surface brightness profiles. Examples of LF studies utilizing the former approach are \citet{bershady98}, \citet{imai2007}, and \citet{hornillos2009}. The latter approach is applied in \citet{bowler2015}, \citet{oesch2014}, \citet{jian2011}, among others.
This is also the approach taken in this paper, primarily because of its flexibility in the definition of shape, size and the spectral energy distribution of the artificial sources, which make it well suited for a broader range of applications.
This paper presents a \texttt{python} based tool to estimate the completeness of galaxy surveys, the \emph{GaLAxy survey Completeness AlgoRithm} (\texttt{GLACiAR} hereafter). The software produces a photometric output catalog of the simulated sources as main output, and associated higher level products to easily quantify source completeness and recovery. In particular, two main analyses are automatically performed: the first is the calculation of the fraction of sources recovered as a function of magnitude in the detection band (i.e., the survey completeness); and the second one is a more comprehensive characterization of the recovery efficiency taking into account all survey bands allowing the user to implement multi-color selection criteria to identify high-redshift galaxies (i.e., the survey source selection efficiency as a function of both input magnitude and redshift).
The current version of the software is limited to handle blank (non-lensed) fields, but the code structure has been designed with the idea of introducing, in a future release, the capability to load a user-defined lensing magnification map and add artificial objects in the source plane. This would allow natural application of the code to quantify completeness for lensing surveys, which is a powerful complementary method to find high-redshift galaxies as we can observe intrinsically faint galaxies that are magnified by foreground objects. Surveys such as the Cluster Lensing And Supernova survey with \textit{Hubble} \citep[CLASH;][]{clash} and the Herschel Lensing Survey \citep[HLS;][]{hls} are some examples.
This paper is organized as follows: Section~\ref{overview} discusses the principles of the code, with our specific algorithmic implementation presented in Section~\ref{implementation}. Section~\ref{results} illustrates the application of the code to part of the BoRG survey and compares the results obtained to previous determinations of the survey completeness and selection functions. Finally, we summarize and conclude in Section~\ref{discussion}.
\section{GENERAL OVERVIEW}\label{overview}
\texttt{GLACiAR} is structured modularly for maximum efficiency and flexibility. First, it creates artificial galaxies and adds them to the observed science images. Then, a module to identify sources is called, which builds catalogs with photometric information of the detected objects. The output catalogs from the original science images are compared with the ones from the new frames in order to identify the artificial sources recovery and multi-band photometric information. Finally, another module is available to automatically calculate their recovered fraction as a function of input magnitude and simulated redshift. Figure~\ref{fig:diagram} provides a high-level summary of the algorithm.
To identify sources, we limit ourselves to the use of \texttt{SExtractor} \citep{sextractor} for the current release, but we expect to expand the functionality of \texttt{GLACiAR} to allow the use of \texttt{photutils} \citep{photutils} in future versions.
\begin{figure*}
\centering
\includegraphics[scale=0.3]{diagram_completeness1.jpg}
\caption{Logic diagram of \texttt{GLACiAR}'s code structure. User-defined parameters and a science image (with its associated RMS map) are taken as input, with the code then generating simulated galaxy stamps, which are added to the science image at random positions, sampled from a uniform distribution. A detection algorithm is run on these images, and its output is used to determine statistics on source recovery.}
\label{fig:diagram}
\end{figure*}
A set of galaxy stamps are generated with sources that follow a S\'ersic luminosity distribution \citep{sersic68} with parameters defined by the user. These artificial galaxies are placed at random positions on the images of the survey. In order to run the code, a parameters file (described in Section~\ref{input_parameters}) must be completed by the user to define the features of the simulated galaxies, such as magnitude, size, redshift, among others. Along with this, \texttt{GLACiAR} requires other files: the science images, a list with the names of the fields (for one or more than one), the \texttt{SExtractor} parameters, frames with noise intensity maps (RMS or weight maps, depending on which ones are used to run the source identification), and the point spread functions (PSFs) in the filter(s) used to acquire the image(s). These inputs are described in more detail in Section~\ref{files_needed}.
\subsection{S\'ersic profiles for artificial galaxies}
For the characterization of the artificial galaxy's surface brightness, we use the S\'ersic luminosity profile \citep{sersic68} which has been widely shown to be a good fit for different types of galaxies given its flexibility \citep[e.g.:][]{peng2002, graham2005, haussler2013}. This profile is defined as:
\begin{equation}
I(R) = I_{e}\exp\left \{-b_{n}\bigg[\bigg(\frac{R}{R_e}\bigg)^{\frac{1}{n}}-1\bigg]\right \},
\label{eq1}
\end{equation}
with $I_{e}$ being the intensity at the radius that encloses half of the total light, $R_{e}$; $n$ is the S\'ersic index, which describes the shape of the profile; and $b_{n}$ is a constant defined in terms of this index, which follows from our choice to normalize the profile with $I_e$.
To obtain the luminosity of a galaxy within a certain radius, we follow the approach by \citet{graham2005} integrating equation (\ref{eq1}) over a projected area $A=\pi R^{2}$, ending up with:
\begin{equation}
L(<R) = I_{e}R_{e}^{2}2\pi n\frac{e^{b_{n}}}{(b_{n})^{2n}}\gamma(2n,x)\\
\label{eq2}
\end{equation}
where $\gamma(2n,x)$ is the incomplete gamma function, and
\begin{equation}
x = b_{n} \bigg(\frac{R}{R_{e}}\bigg)^{\frac{1}{n}}.
\end{equation}
To calculate $b_{n}$ we follow \citet{ciotti91}, and taking the total luminosity we obtain:
\begin{equation}
\Gamma(2n) = 2\gamma(2n,b_{n})
\end{equation}
where $\Gamma$ is the complete gamma function. From here, the value of $b_{n}$ can be obtained.
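In practice, $b_{n}$ can be found by inverting the regularized lower incomplete gamma function at one half. A minimal \texttt{python} sketch of this step (one possible implementation, not necessarily the one used internally by the code) is:
\begin{verbatim}
from scipy.special import gammaincinv

def sersic_bn(n):
    # gamma(2n, b_n) / Gamma(2n) = 1/2, so b_n is the inverse of the
    # regularized lower incomplete gamma function evaluated at 1/2.
    return gammaincinv(2.0 * n, 0.5)

print(sersic_bn(1.0))   # ~1.678, exponential profile
print(sersic_bn(4.0))   # ~7.669, de Vaucouleurs profile
\end{verbatim}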
\subsection{Artificial galaxy data}\label{artificialgalaxy}
We create the stamp of an artificial galaxy according to a set of input (user-specified) parameters, which describe the free parameters of the S\'ersic profile described in equation~\ref{eq2}. $n$ is the S\'ersic index and it can be defined arbitrarily in \texttt{GLACiAR}. For the effective radius $R_{e}$, the input is the proper size in kiloparsecs at a redshift $z=6$, which is converted into arcseconds and scaled by the redshift with $(1+z)^{-1}$. The default value is $R_{e}=1.075$ kpc, chosen according to previous completeness simulations for the BoRG survey \citep{bradley2012, bernard}. This is converted into arcseconds by using the scale of the images. The intensity $I_{e}$ is calculated from equation $\ref{eq2}$ considering $L(<R)$ as the total flux, which depends on the magnitude assigned to the object. Each magnitude can be converted into flux using
\begin{equation}
f_{b} = 10^{\frac{(zp_{b}-m_{b})}{2.5}},
\end{equation}
with $f_{b}$, $zp_{b}$, and $m_{b}$ being the flux, zeropoint, and magnitude of a $``b"$ band, respectively. The user specifies the value for the magnitude in the detection band (which is also chosen by the user). The flux in the other bands is calculated according to the redshift of the simulated galaxy and its spectrum. To calculate the flux in each filter and for each object, we assume a power-law spectrum with a Lyman break as a function of the wavelength $\lambda$:
\begin{equation}
F(\lambda) = \left\{
\begin{array}{ll}
0 & \quad \lambda < 1216 \\
a\lambda^{\beta} & \quad 1216\leq \lambda
\end{array}
\right.,
\label{eq_spectrum}
\end{equation}
where $a$ is the normalization, and $\beta$ is the slope of the flux. In our code, the value of $\beta$ follows a Gaussian distribution, whose mean and standard deviation can be chosen by the user. For the default case, we adopt a mean of $-2.2$ and a standard deviation of $0.4$, which is suitable for high redshift galaxies \citep{bouwens2015}.
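A minimal \texttt{python} sketch of this spectrum generation (our own illustrative implementation: the break is imposed at the redshifted Lyman-$\alpha$ wavelength $1216(1+z)$\,\AA{} in the observed frame, and the normalization $a$ is set to unity) is:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def mock_spectrum(wl, z, beta, a=1.0):
    # Power law F = a * wl**beta with zero flux blueward of the
    # redshifted Lyman-alpha break; wl is the observed wavelength (AA).
    flux = a * wl**beta
    flux[wl < 1216.0 * (1.0 + z)] = 0.0
    return flux

beta = rng.normal(-2.2, 0.4)               # default beta distribution
wl = np.linspace(3000.0, 20000.0, 2000)
flux = mock_spectrum(wl, z=10.0, beta=beta)
\end{verbatim}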
In the top panel of Figure~\ref{spectrum} we show the spectrum of a simulated galaxy with $\beta = -2.0$ at $z=10.0$ with the filters F098M, F125W, F160W, and F606W from \textit{HST} used in the BoRG survey (described in section~\ref{borg}). The bottom panel shows that same source added to the science images in those filters. It can be seen that there is no image of the artificial galaxy in the F606W and F098M bands, as no flux is expected at these wavelengths, while the artificial source is present in the F125W and F160W bands with different intensities, since the Lyman-break falls in the F125W filter.
\begin{figure*}
\centering
\includegraphics[scale=0.6]{spectrum_paper.pdf}
\caption{\textit{Top:} Spectrum of a simulated galaxy at $z=10$ and with $\beta=-2.0$ produced by \texttt{GLACiAR} in arbitrary units of flux as a function of wavelength, with four \textit{HST} filter transmission curves superimposed (F098M, F125W, F160W, and F606W). \textit{Bottom:} Source from above inserted into the F606W, F098M, F125W, and F160W science images (from left to right) from field BoRG-0835+2456 assuming a $n=4$ surface brightness profile and $m_{AB}=24.0$ with no inclination and circular shape. The stamps have a size of 3.6''$\times$3.6''.}
\label{spectrum}
\end{figure*}
The user can choose different S\'ersic indexes for the simulated galaxies as well as the fraction of each type. The default values are $50\%$ of the sources with $n=1$, and $50\%$ with $n=4$.
In terms of morphology, the galaxies can have different inclinations and eccentricities. The inclinations can vary from $0^{\circ}$ to $90^{\circ}$, and the user can specify the sampling sequence in the angular coordinate space. For example, if 10 values are chosen, the sampling spacing will be $9^{\circ}$. The same principle applies to eccentricities, whose values vary from 0 (circle) to almost 1 (highly elliptical). Furthermore, we allow for a special case: a S\'ersic index of $n=4$. This profile \citep{devaucouleurs} is commonly associated with elliptical galaxies, which tend to have a circular shape. Accordingly, if one of the S\'ersic indexes required by the user is $n=4$, there is a boolean parameter which indicates whether these galaxies will have only a circular shape, or an elliptical shape (which allows different inclination and eccentricity values).
Figure~\ref{galaxy_stamps} shows examples of simulated galaxies with different features.
For each redshift bin, we create a set of stamps each representing an artificial galaxy with total flux given by equation~\ref{eq2}. The value of the flux in each individual pixel at position $(x_i,y_i)$ and size $\Delta r$ is calculated numerically by integrating the surface brightness profile:
\begin{equation}
L(x_i,y_i) = \int_{x_i-\Delta r/2}^{x_i+ \Delta r/2} \int_{y_i -\Delta r/2}^{y_i + \Delta r/2} I(r) dx dy
\end{equation}
where $r^2=(x^2+y^2)$.
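A sketch of this per-pixel integration using \texttt{scipy} quadrature (illustrative only; units and overall normalization are omitted) follows:
\begin{verbatim}
import numpy as np
from scipy.integrate import dblquad
from scipy.special import gammaincinv

def sersic_intensity(r, n, r_e, i_e=1.0):
    b_n = gammaincinv(2.0 * n, 0.5)
    return i_e * np.exp(-b_n * ((r / r_e)**(1.0 / n) - 1.0))

def pixel_flux(x_i, y_i, dr, n, r_e):
    # Integrate I(r) over a square pixel of side dr centred on (x_i, y_i).
    f = lambda y, x: sersic_intensity(np.hypot(x, y), n, r_e)
    val, _ = dblquad(f, x_i - dr / 2.0, x_i + dr / 2.0,
                     lambda x: y_i - dr / 2.0, lambda x: y_i + dr / 2.0)
    return val

print(pixel_flux(0.0, 0.0, 1.0, n=1.0, r_e=3.0))
\end{verbatim}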
We note that previous approaches to completeness simulations have resorted to oversampling the inner pixels of the artificial sources as a balance between accuracy and computational speedup \citep[e.g.:][]{peng2002, haussler2007}. However, as \texttt{GLACiAR} is tailored for high redshift galaxies, which are typically marginally resolved, we prefer to implement a highly accurate calculation of the flux in each individual pixel.
The artificial sources generated by \texttt{GLACiAR} do not include Poisson noise. This is motivated by the fact that Poisson noise becomes dominant over other components (background, readout, and dark current noise) only in a regime where the source is detected at high confidence ($S/N\gtrsim 50$). Under these conditions, completeness simulations are not required. For example, we verified from the Hubble Space Telescope WFC3 Exposure Time Calculator that for a compact source in the F160W filter, Poisson noise becomes greater than the sky background at $S/N>80$.
For each set of parameters, subsets with all the possible galaxies in terms of inclination and eccentricity for each S\'ersic index are generated. All the simulated galaxies in each subset have the same redshift, meaning that the parameters that change, apart from $n$, are the slope $\beta$, the input magnitude, eccentricity, and inclination angle. Both $\beta$ and the input magnitude only modify the flux, i.e. the shape of the surface brightness profile of the simulated galaxy remains the same except for a scaling factor. Hence, we do not need to recalculate the flux in each pixel for these galaxies as we can just apply a global re-scaling. In the case of the eccentricity and inclination angle, these parameters change the shape of the source and distribution of its flux. Given that, we generate all possible combinations for each subset with the same S\'ersic index and redshift. Note that the redshift also changes the distribution of the flux as we define $R_{e}$ as a function of $z$.
\subsection{Point Spread Function Convolution}\label{psf}
The PSF describes the imaging system response to a point input, and we take it into account to properly include the instrumental response into our model mock galaxies. In order to do this, we need a user-supplied PSF image, which is convolved with the artificial galaxy images through the \texttt{python} module \texttt{convolution.convolve} from \texttt{Astropy}. For commonly-used \textit{HST} filters, we already include Tiny Tim PSF data\footnote{http://www.stsci.edu/hst/observatory/focus/TinyTim} in the `$psf$' folder. If the user desires to apply \texttt{GLACiAR} to filters not listed in the code, the corresponding files can be added to that folder.
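A minimal sketch of this convolution step (the PSF file name below is a hypothetical placeholder for any normalised PSF image, e.g. one generated with Tiny Tim) is:
\begin{verbatim}
import numpy as np
from astropy.io import fits
from astropy.convolution import convolve

stamp = np.zeros((35, 35))
stamp[17, 17] = 1.0                  # placeholder galaxy stamp

# 'psf_f160w.fits' is a hypothetical file name; any odd-sized PSF
# image can be supplied as the convolution kernel.
psf = fits.getdata('psf_f160w.fits')
convolved = convolve(stamp, psf, normalize_kernel=True)
\end{verbatim}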
\subsection{Positions}
After generating the simulated galaxy stamps, their position $(x,y)$ is assigned within the science image. These coordinates $(x,y)$ correspond to the pixel where the center of the stamp will be placed, and are generated as pairs of uniform random numbers across the pixel range in the science image. Two conditions are required to accept the pair: first, for physical reasons, a simulated galaxy cannot be blended with another simulated galaxy (but no limitation is imposed on blending with sources in the original science image); and second, the center of the simulated source must fall inside the science image boundaries (technically implemented by requiring the pixel to have a value different from zero in the science image).
The artificial source positions generated are saved for comparison in the subsequent steps.
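A minimal sketch of this rejection sampling (with illustrative names and a hypothetical half-stamp size; the running map \texttt{sim\_footprints} is assumed to mark pixels already occupied by inserted mock sources) is:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng()

def draw_position(science, sim_footprints, half=17):
    # Draw uniform pixel coordinates until the centre lies on valid
    # (nonzero) science data and the stamp does not overlap any
    # previously inserted simulated galaxy.
    ny, nx = science.shape
    while True:
        x = rng.integers(half, nx - half)
        y = rng.integers(half, ny - half)
        if science[y, x] == 0:
            continue                 # centre outside the image footprint
        if np.any(sim_footprints[y - half:y + half + 1,
                                 x - half:x + half + 1]):
            continue                 # would blend with a mock source
        return x, y
\end{verbatim}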
\begin{figure*}
\centering
\includegraphics[scale=0.55]{stamps.pdf}
\caption{Example of different types of galaxies produced by \texttt{GLACiAR}. The left panels show a zoom of the galaxies placed on a constant background (box size 35$\times$35 pixels), while the middle and right panels show them inserted in a typical science image (F160W for the field BoRG-0835+2456) with box sizes (2.8''$\times$2.8'' and 5.0''$\times$2.8'' respectively). From top to bottom we see an artificial galaxy with a S\'ersic index of 4, and total input magnitude $m_{AB}=23.8$; an artificial galaxy with S\'ersic index of 4, and magnitude $m_{AB}=25.8$; an artificial galaxy with S\'ersic index of 1, magnitude $m_{AB}=23.8$, eccentricity of 0.5, and inclination angle of 0$^{\circ}$; and an artificial galaxy with S\'ersic index of 1, magnitude $m_{AB}=25.8$, eccentricity of 0.5, and inclination angle of 0$^{\circ}$. The first two have a circular shape, while the latter two are elliptical.}
\label{galaxy_stamps}
\end{figure*}
\subsection{Multi-band data}
\texttt{GLACiAR} is structured to handle multiple, user-specified photometric bands. Depending on the redshift of the simulated source and the slope of its spectrum $\beta$, synthetic images will have different magnitudes in different bands. To calculate them, the code starts from the spectrum defined in Equation~\ref{eq_spectrum} (see Figure~\ref{spectrum} for an example), and it convolves it with the relevant filter transmission curve using the function \texttt{pysysp} from the package \texttt{PyPI}. Input files for a set of default HST filters are included in our release. If the user requires a different filter that is not part of the \texttt{GLACiAR}'s sample, they can add it by adding the transmission file in the folder `filters'.
After calculating the flux of the simulated source in each filter, the postage stamp image of the artificial galaxy is rescaled to that total flux. In order to save time, we let all the simulated galaxies in a single recovery simulation iteration have the same value of $\beta$, so there is no need to repeat the filter convolution for sources at the same redshift, and we sample instead a different value of $\beta$ in each iteration. This saves computational resources without impacting the end results since (1) we employ a sufficient number of iterations ($n_{iter}=100$ by default) to sample the $\beta$ distribution reasonably well, and (2) changes in $\beta$ produce only relatively small differences in colours ($\Delta m < 0.1$) for default input choices, so it is not necessary to sample a different $\beta$ value for each galaxy.
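For illustration, the following sketch computes a bandpass-averaged flux density by direct trapezoidal integration (a simplified, photon-weighted stand-in for the \texttt{pysysp} convolution; function and argument names are our own):
\begin{verbatim}
import numpy as np
from scipy.integrate import trapezoid

def band_flux(wl, flux, filt_wl, filt_t):
    # Photon-weighted mean flux density of a spectrum through a
    # filter transmission curve sampled at (filt_wl, filt_t).
    t = np.interp(wl, filt_wl, filt_t, left=0.0, right=0.0)
    return trapezoid(flux * t * wl, wl) / trapezoid(t * wl, wl)
\end{verbatim}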
Finally, the artificial galaxy stamps are added to the science images in the corresponding bands, if their total magnitude in that band is below a critical threshold ($m_{AB}\leq 50$ by default).
\subsection{Source Identification}\label{recovery}
We run a source identification software (\texttt{SExtractor} in this case) on the original images, as well as on the new images with the simulated galaxies, to create source catalogs. In order to do that, the user must provide a configuration file under the folder `SExtractor\_files'. If no file is provided, the software uses the default one, `parameters.sextractor'. The filter file also needs to be copied here. We provide one example with the filter `gauss\_2.0\_5x5.conv'.
\texttt{GLACiAR} calls \texttt{SExtractor} to run over all the science images with added artificial sources generated in each iteration; it produces new catalogs and new segmentation maps for each of them. To ease storage space requirements, segmentation maps are deleted after use by default.
To study the recovery fraction, the segmentation map of the original image is compared with the segmentation map of the image containing the simulated galaxies. The positions where the simulated galaxies were placed have been recorded, therefore the new segmentation map values in that position can be checked. It is possible that the new source is not found by \texttt{SExtractor} at the exact position it was placed in, thus we allow a certain margin, examining the values of the new segmentation map over a grid of 3$\times$3 pixels centered on the original input position. If any of the values of this grid is not zero, the ID number of the object that is there is saved (i.e. the value of that pixel in the segmentation map). To determine whether that object is blended, we check in the original segmentation map the values of the pixels where the simulated object lies. If any of the pixel values are different from zero, the object is flagged as blended. If the real source blended with the simulated galaxy has an original magnitude fainter than the simulated galaxy input magnitude, we still consider the simulated object successfully recovered. On the other hand, if the original science source is brighter, an extra test is performed. If less than 25\% of the pixels of the new object overlap with the original object, and there is a difference smaller than 25\% between recovered and input flux of the simulated object, we still consider it as recovered, while if either of these two requirements is not met, we flag the artificial source as not recovered. This is a conservative (and moderately computationally intensive) approach to assessing blending, but it has the advantage of fully taking into account the arbitrary shape of foreground sources and the extent of the overlap of the segmentation maps when compared to a distance-based approach. We also note that 25\% overlap is an arbitrary threshold that we fine-tuned based on experimentation, which users are free to modify.
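A sketch of the core of this blending test on the segmentation maps (the 25\% thresholds are those described above; function and variable names are illustrative) is:
\begin{verbatim}
import numpy as np

def recovered_despite_blending(seg_new, seg_old, new_id, f_in, f_rec):
    # A simulated source blended with a *brighter* real object is
    # still counted as recovered only if <25% of its pixels overlap
    # the original segmentation map and the recovered flux differs
    # from the input flux by <25%.
    pix = (seg_new == new_id)
    overlap = np.count_nonzero(seg_old[pix]) / np.count_nonzero(pix)
    flux_ok = abs(f_rec - f_in) / f_in < 0.25
    return overlap < 0.25 and flux_ok
\end{verbatim}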
To summarize this process, Figure~\ref{fig:diagram_detailed} shows a flow chart with a detailed explanation of, in particular, the blending and recovering of sources. Furthermore, Figure~\ref{maps} shows an example of the identification of the simulated galaxies in one of the fields of the BoRG survey.
\begin{figure*}
\centering
\includegraphics[scale=0.3]{diagram_completeness.jpg}
\caption{Diagram with a detailed explanation of \texttt{GLACiAR}'s algorithm structure, focusing in particular on the blending classification.}
\label{fig:diagram_detailed}
\end{figure*}
\subsection{Multi-band photometric output}
The ultimate output of \texttt{GLACiAR} is a multi-band photometric catalog that lists input and output properties of the artificial objects, including a flag to indicate whether entries have been marked as successfully recovered. This catalog naturally allows the user to run a customized data analysis to measure completeness and source recovery using the same criteria that the user would apply to actual science data (whether a dropout technique or a photometric redshift estimation is desired). For convenience of Lyman-break selection users, \texttt{GLACiAR} has a module that performs statistical analysis of the recovery as a function of input redshift and magnitude.
\section{IMPLEMENTING AND RUNNING THE CODE}\label{implementation}
The code is in the \texttt{Github} repository https://github.com/danielacarrasco/GLACiAR. The user should download the code, change the input parameters, and add any files if needed. Detailed instructions are provided in a README file. A brief description of the parameters and required files follows below.
\subsection{Input Parameters}\label{input_parameters}
The parameters needed to run the simulation are found in the file `\textit{parameters.yaml}'. Some of them need to be specified by the user, while others can be either inputted or left blank, in which case they take a default value. A description of all the parameters is given in Appendix \ref{appendix}.
\begin{figure*}
\centering
\includegraphics[scale=0.3]{fig1_nsci.jpg}
\includegraphics[scale=0.3]{fig1_ssci.jpg}\\
\includegraphics[scale=0.3]{fig1_nsegm.jpg}
\includegraphics[scale=0.3]{fig1_ssegm.jpg}
\caption{Illustration of \texttt{GLACiAR}'s application to BoRG field $borg\_$0835+2456. \textit{Top left:} Original science image. \textit{Top right:} Science image plus simulated galaxies with an input magnitude of $m_{H}=26.0$ indicated by colored circles. \textit{Bottom left:} \texttt{SExtractor} segmentation map for the original science image. \textit{Bottom right:} Segmentation map after running \texttt{SExtractor} on the image that includes simulated galaxies. The color of the circles encodes detection of the simulated sources, with green indicating recovery for an isolated galaxy, and blue indicating recovery for a source blended with a fainter object. Detection failures are shown in red.}
\label{maps}
\end{figure*}
\subsection{Required Files}\label{files_needed}
The files required for the algorithm are described below. More details on their format and location can be found in the README file on \texttt{Github}.
\begin{itemize}
\item[] \textsf{Science images:} All the images on which the simulation is going to be run. They must include all the different fields and bands as well.
\item[] \textsf{List:} Text file with the names of the fields from the survey. This list is given as an input parameter (see section~\ref{input_parameters}).
\item[] \textsf{\texttt{SExtractor} parameters:} As discussed in section~\ref{recovery}, \texttt{GLACiAR} invokes instances of \texttt{SExtractor} on the images (original and with simulated galaxies). To run that external software, a file defining the parameters is needed. There is an example provided under the folder `SExtractor\_files' (based on the BoRG survey source detection pipeline), which will be used if no other file is provided, but we recommend the user to customize this input to optimize their specific analysis.
\item[] \textsf{RMS maps or weight maps:} Frames having the same size as the science image that describe the noise intensity at each pixel. They are necessary only if required for the \texttt{SExtractor} parameters. They are defined as:
\begin{equation}
weight = \frac{1}{variance} = \frac{1}{rms^{2}}
\end{equation}
\item[] \textsf{PSF}: PSF data for filters/instruments not currently included in the release can be added in this folder by the user (see~\ref{psf} for more details).
\end{itemize}
\section{EXAMPLE APPLICATION}\label{results}
To illustrate \texttt{GLACiAR}'s use, we apply it to estimate the completeness and source recovery of a large HST imaging program, the Brightest of Reionizing Galaxies Survey (BoRG), focused on identifying $L>L_*$ galaxies at $z\gtrsim 8$ along random lines of sights \citep{trenti2011,trenti2012,bradley2012,schmidt2014,calvi2016,bernard}. Specifically, we focus on characterizing the J-dropout source recovery (galaxies at $z\sim 10$) and compare our results with those in \citet{bernard}. The results are discussed throughout this section, and they can be seen in Figures~\ref{fig:completeness_borg} and~\ref{fig:dropouts_borg}.
\subsection{Data}\label{borg}
The dataset considered here is the BoRG[z8] subset, consisting of core BoRG pointings (GO11700, 12572, 12905), augmented by other pure parallel archival data (GO 11702, PI Yan, \citet{yan2011}) and COS GTO coordinated parallel observations. For a detailed description of the survey, we refer to \citet{trenti2011}; \citet{bradley2012}; \citet{schmidt2014}. We use the 2014 (DR3) public release of the data\footnote{https://archive.stsci.edu/prepds/borgy/}, which consists of 71 independent pointings covering a total area of $\sim350$ arcmin$^{2}$. All fields were imaged in 4 bands: F098M ($Y_{098}$), F125W ($J_{125}$), F160W ($H_{160}$), and an optical band F606W ($V_{606}$) or F600LP ($V_{600}$). The BoRG[z8] public data release consists of reduced and aligned science images produced with \texttt{MultiDrizzle} \citep{multidrizzle}, a pixel scale of 0.08'', and associated weight maps \citep{bradley2012,schmidt2014}. The 5$\sigma$ limiting magnitudes for point sources and aperture $r=0.2''$ vary between $m_{AB}=25.6-27.4$, with a typical value of $m_{AB}\sim26.7$.
\subsection{Redshift Selection/Dropouts criteria}
We use \texttt{GLACiAR} for recovery of simulated sources in the redshift range of $z\sim 10$. In order to do this, we apply selection criteria to find $J_{125}$ dropouts following \citet{bernard}:
\begin{itemize}
\item[-]$S/N_{160}\geq 8.0$
\item[-]$S/N_{V}<1.5$
\item[-]$S/N_{098}<1.5$
\item[-]$J_{125}-H_{160}>1.5$
\end{itemize}
Note that while these criteria are set as default in the code, their selection is fully customizable by the user.
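A minimal sketch of these cuts applied to catalog arrays (column/argument names are illustrative) is:
\begin{verbatim}
import numpy as np

def j_dropout_mask(sn_h160, sn_v, sn_y098, j125, h160):
    # Boolean mask implementing the default J-dropout criteria above.
    return ((sn_h160 >= 8.0) & (sn_v < 1.5) & (sn_y098 < 1.5)
            & (j125 - h160 > 1.5))
\end{verbatim}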
\subsection{Completeness and Source Recovery Output}
The main results produced by the program can be summarized in three tables described below, including an example for the first two (see Tables~\ref{ResultsStats} and~\ref{ResultsGalaxiesBand}).
First, a table with the statistics of the fraction of inserted galaxies that were identified, and of how many were recovered at the corresponding redshift with the selection technique. Table~\ref{ResultsStats} shows an example of its structure for our BoRG dataset.
\begin{table*}
\centering
\begin{tabular}{cccccccccccc}
\hline \hline
z$^{a}$ & m$^{b}$ & N\_Obj$^{c}$ & $S=0^{d}$ & $S=1,2^{e}$ & $S=-1^{f}$ & $S=-2^{g}$ & $S=-3^{h}$ & N\_Rec$^{i}$ & N\_Drop$^{j}$ & Rec$^{k}$ & Drops$^{l}$\\ \hline
9.0 & 24.1 & 300 & 218 & 26 & 50 & 4 & 2 & 268 & 0 & 0.89 & 0.0 \\
9.0 & 24.3 & 1000 & 751 & 62 & 169 & 13 & 5 & 920 & 0 & 0.92 & 0.0\\
9.0 & 24.5 & 1500 & 1112 & 94 & 257 & 26 & 11 & 1369 & 0 & 0.91 & 0.0\\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots\\
10.0 & 24.1 & 300 & 211 & 17 & 63 & 5 & 4 & 274 & 101 & 0.91 & 0.34\\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots\\
11.8 & 27.9 & 600 & 0 & 72 & 0 & 34 & 494 & 0 & 0 & 0.0 & 0.0\\\hline
\end{tabular}
\caption{Example of the file produced by the simulation with the statistics for each redshift and magnitude.}
\tabnote{$^{a}$ Input redshift of the simulated galaxy.}
\tabnote{$^{b}$ Magnitude bin that represents the median value of the bins.}
\tabnote{$^{c}$ Number of objects inputted for that redshift and magnitude bin in all the iterations.}
\tabnote{$^{d}$ Number of galaxies recovered by \texttt{SExtractor} that were isolated.}
\tabnote{$^{e}$ Number of artificial sources recovered that were blended with a fainter object.}
\tabnote{$^{f}$ Number of artificial sources recovered that were blended with a brighter object.}
\tabnote{$^{g}$ Number of artificial sources that were detected by \texttt{SExtractor} but with a $S/N$ under the required threshold.}
\tabnote{$^{h}$ Number of artificial sources that were not detected by \texttt{SExtractor}.}
\tabnote{$^{i}$ Number of recovered artificial sources: $(d+e)$.}
\tabnote{$^{j}$ Number of artificial sources that passed the dropout selection criteria.}
\tabnote{$^{k}$ Fraction of recovered artificial sources: $\frac{i}{c}$.}
\tabnote{$^{l}$ Fraction of artificial sources that passed the selection criteria: $\frac{j}{c}$.}
\label{ResultsStats}
\end{table*}
Second, to provide more detail about the inserted galaxies and the recovery results, several tables (one for each redshift step) are produced listing all the galaxies that were placed in the simulations at that redshift. They include the recovered magnitude in the detection band, the identification status, and the ID given by \texttt{SExtractor}, among others. The structure is shown in Table~\ref{ResultsGalaxiesBand}.
\begin{table*}
\centering
\begin{tabular}{cccccc}
\hline \hline
Initial Mag$^{a}$ & iteration$^{b}$ & ID Number$^{c}$ & Input Magnitude$^{d}$ & Output Magnitude$^{e}$ & Identification Status$^{f}$\\ \hline
24.1 & 1 & 319 & 25.922 & 26.255 & 0\\
24.1 & 1 & 213 & 25.922 & 26.088 & 0\\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots\\
27.9 & 10 & 39 & 26.952 & 23.627 & -1\\
27.9 & 10 & 0 & 26.952 & -99.000 & -3\\\hline
\end{tabular}
\caption{Example of the file produced by the \texttt{GLACiAR} with information of all the simulated galaxies.}
\tabnote{$^{a}$ Magnitude corresponding to the input flux of the simulated source. This is not the same as $^{d}$ as the input magnitude changes depending on the $\beta$ value and size of the object.}
\tabnote{$^{b}$ Iteration number.}
\tabnote{$^{c}$ Identification number given by \texttt{SExtractor} after it runs on the image with the simulated galaxies. This number is unique for every iteration for a given magnitude and redshift.}
\tabnote{$^{d}$ Magnitude corresponding to the added flux inside all the pixels that the source includes.}
\tabnote{$^{e}$ Magnitude of the source found with \texttt{SExtractor} after it runs on the image with the simulated galaxies.}
\tabnote{$^{f}$ Integer number that indicates whether a source has been recovered and/or is blended.}
\label{ResultsGalaxiesBand}
\end{table*}
Third, one last table is produced, which is useful for redshift selection. Given that the number of bands is variable and can be large, this table is released in a Python-specific compact binary representation (using the \texttt{pickle} module). It contains the ID of the object, the input magnitude, the status, the magnitudes in all bands, and the $S/N$ in each band as well.
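As an illustration of how these outputs can be post-processed, the following sketch (assuming a hypothetical plain-text export of the statistics table with the column order of Table~\ref{ResultsStats}) computes $C(m)$ and $S(z,m)$ at a fixed redshift:
\begin{verbatim}
import numpy as np

# Hypothetical plain-text dump of the statistics table; columns:
# z, m, N_Obj, S=0, S=1,2, S=-1, S=-2, S=-3, N_Rec, N_Drop, Rec, Drops
stats = np.loadtxt('recovery_stats.dat')

sel = stats[stats[:, 0] == 10.0]                # fix a redshift slice
c_m = sel[:, 8] / sel[:, 2]                     # C(m) = N_Rec / N_Obj
s_zm = sel[:, 9] / np.maximum(sel[:, 8], 1.0)   # S(z,m) among recovered
\end{verbatim}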
\subsection{Results and Comparisons}
We run the simulation for the whole BoRG[z8] survey. As an example, the results for one field ($borg\_$0440-5244) are shown in Figure~\ref{fig:completeness_borg}; the top panel shows the completeness fraction $C(m)$ for different redshifts as a function of the input magnitude, while the bottom panel is a slice of $C(m)$ at a fixed redshift ($z=10.0$). As we can see, the completeness is around $C(m)\sim90\%$ up to a magnitude of $m_{AB}\sim25.0$, and it drops to $C(m)=0.0\%$ for $m_{AB}\gtrsim27.1$, while at $m_{AB}\sim25.98$ we find a completeness of $C(m)=50\%$. This is expected when comparing with the results from the \textit{HST} exposure time calculator\footnote{http://etc.stsci.edu/etc/input/wfc3ir/imaging/}: a galaxy at $z=10.0$ in an image with the characteristics of the field on which we run our simulations gives a $S/N$ ratio of $\sim8.0$ at a magnitude of $m_{AB}=26.1$ in the $H_{160}$ band for a point source with circular radius of 0.2'' and a power-law $F(\lambda) = \lambda^{-1}$ spectrum.
The results of the dropout selection for the same field are shown in Figure~\ref{fig:dropouts_borg}. We can compare our results with the ones from \citet{bernard} (bottom panel of Figure 4 in their paper), where we can see the selection function $C(m)S(z,m)$ for the field $borg\_$0440-5244. Our results achieve a maximum of $\sim64\%$ recovery, to be compared against the maximum $\sim75\%$ recovery reported in their paper. As we have full access to the code used to produce both sets of results, we can attempt to understand the origin of this discrepancy. First of all, there is a difference in $C(m)$ in the range of $m_{AB} = 25.5 - 26.0$, which is most likely attributable to the definition of successful recovery for blended or potentially blended sources. In fact, when comparing the results for recovery of isolated objects, \texttt{GLACiAR} obtains the same results. The completeness analysis in \citet{bernard} considers sources as blended based on the distance from the center of the objects, i.e. if the detected object is closer than a certain distance (in pixels) from the center of an object in the original science catalog, then it classifies the artificial source as blended. In this respect, \texttt{GLACiAR} improves upon the previous analysis by comparing the segmentation maps, which takes into account the actual spatial extent of the sources, instead of limiting the analysis to catalog output.
Another key difference originates from how our galaxies are simulated: we simulate images in all the bands, even when the expected flux is negligible given the spectrum of the artificial source. In the case of \citet{bernard}, the V-band ($F600LP$ or $F606W$) non-detection requirement was not simulated since it was assumed that artificial sources had no flux in that band. To account for this, the selection function computed excluding the V-band non-detection requirement was reduced by $6.2\%$, which derives from the assumption that the S/N distribution in the V-band photometry would follow Gaussian statistics. \texttt{GLACiAR} performs instead a full color simulation and our results indicate that non-Gaussian tails contribute to exclude a larger fraction of objects at bright magnitudes. Indeed, if we replicate the approach by \citet{bernard} we obtain instead results consistent with that study (for isolated sources). Thus all differences are understood and the comparison contributes to validating the accuracy of \texttt{GLACiAR}.
Note that \texttt{GLACiAR} results for $C(m)$ and $S(z,m)$ are provided as a function of the intrinsic magnitude of the simulated sources. Previous studies, including \citet{bernard}, may present completeness as a function of the recovered output magnitude instead. Since in the latter case a specific LF for the simulated sources has to be assumed to map intrinsic to observed completeness through a transfer function, we opted to set up the output of \texttt{GLACiAR} to provide only the fundamental quantity, and leave the derivation of an observed completeness to the user if needed.
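For concreteness, the following minimal sketch illustrates how a completeness fraction like $C(m)$ can be tabulated from the injection/recovery bookkeeping described above; the function names and the interpolation for the $50\%$ magnitude are our own illustration, not the actual \texttt{GLACiAR} implementation.
\begin{verbatim}
import numpy as np

def completeness(m_in, recovered, m_edges):
    """Fraction of injected sources recovered, per input-magnitude bin.

    m_in      : intrinsic magnitudes of all injected sources (array)
    recovered : boolean flags (True if detected and isolated, or
                blended with a fainter object)
    m_edges   : magnitude bin edges
    """
    n_inj, _ = np.histogram(m_in, bins=m_edges)
    n_rec, _ = np.histogram(m_in[recovered], bins=m_edges)
    return np.where(n_inj > 0, n_rec / np.maximum(n_inj, 1), 0.0)

def m_50(m_centers, C):
    """Magnitude at which C(m) crosses 50%, by linear interpolation;
    assumes C decreases monotonically with magnitude."""
    return np.interp(0.5, C[::-1], m_centers[::-1])
\end{verbatim}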
\begin{figure}[ht!]
\centering
\includegraphics[scale=.75]{Completeness_Field0440-5244.pdf}
\includegraphics[scale=.58]{Cz_Field0440-5244.pdf}
\caption{Completeness selection plots produced by our simulation for the BoRG field $borg\_$0440-5244 in F160W. The top panel shows the completeness for a range of redshifts $z=9.6-12.0$, and the bottom panel shows a slice of those results at $z=10$. The completeness is around $\sim90\%$ up to $m_{AB}\sim25.0$, and it drops to $0.0\%$ for $m_{AB}\gtrsim27.0$. The blue dashed line shows the $50\%$ completeness magnitude calculated by \texttt{GLACiAR} ($m_{AB}=25.98$). The red dashed line shows the limiting magnitude at which a point source with circular radius of 0.2'' and a spectrum following a power law $F(\lambda) = \lambda^{-1}$ is detected at $S/N = 8$ according to the \textit{HST} exposure time calculator ($m_{AB}=26.10$).}
\label{fig:completeness_borg}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[scale=.8]{Dropouts_Field0440-5244.pdf}
\includegraphics[scale=.8]{CS_Field0440-5244.pdf}
\caption{Dropout selection plots produced by our code for the BoRG field $borg\_$0440-5244 for redshift $z\sim10$. The top panel shows the dropouts found among all the inserted galaxies ($C(m)S(z,m)$), while the bottom panel shows the fraction of recovered dropouts ($S(z,m)$) for artificial sources that are successfully identified in the detection band. Note that the bottom panel becomes noisy for $m_{AB}>27.0$, since $S(z,m)$ is computed using only the small number of faint artificial galaxies that are successfully identified. The top panel, by contrast, does not suffer from such noise.}
\label{fig:dropouts_borg}
\end{figure}
\section{DISCUSSION AND SUMMARY}\label{discussion}
In this paper, we present \texttt{GLACiAR}, a new tool to estimate the completeness in galaxy surveys. The algorithm creates artificial galaxy stamps that follow a S\'ersic profile, with parameters such as the size, S\'ersic index, input magnitude, input redshift, and filters, among others, chosen by the user. The simulated galaxies are then added to the science images. A source identification algorithm is run on the original science images and on the images with the simulated galaxies, in order to study the recovery of these mock galaxies.
After the source catalogs are produced, we match the newly found objects with the positions at which the simulated galaxies were originally inserted, and we cross-match the area of the segmentation maps corresponding to these new sources with those from the original catalogs, so that the status of these galaxies can be determined. The statuses fall into four groups: detected and isolated, blended with a fainter object, blended with a brighter object, and not detected. If a source falls into one of the first two categories, it is considered detected (an example can be seen in Figure~\ref{maps}). The final products of the algorithm are three types of tables, containing the recovery statistics, the detected galaxies, and all the galaxies, respectively.
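The segmentation-based status assignment can be summarized by the following minimal sketch; the function and variable names are illustrative and do not reproduce the actual \texttt{GLACiAR} implementation.
\begin{verbatim}
import numpy as np

def classify_status(seg_new, seg_old, label, m_input, old_mags):
    """Classify one artificial source by comparing segmentation maps.

    seg_new : segmentation map of the image with the simulated source
    seg_old : segmentation map of the original science image
    label   : label of the artificial source in seg_new (0 if not found)
    m_input : input magnitude of the artificial source
    old_mags: dict mapping labels in seg_old to catalog magnitudes
    """
    if label == 0:
        return "not detected"
    overlap = np.unique(seg_old[seg_new == label])
    overlap = overlap[overlap != 0]        # drop background pixels
    if overlap.size == 0:
        return "detected, isolated"
    if all(old_mags[l] > m_input for l in overlap):
        return "blended with fainter object"   # still counts as detected
    return "blended with brighter object"
\end{verbatim}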
To illustrate the use of the new tool, and to validate it against previous literature analyses, we applied \texttt{GLACiAR} to the analysis of the selection function for $z\sim 10$ galaxies in the BoRG[z8] survey, comparing our results to the recent work by \citet{bernard}. Section~\ref{results} discusses the comparison in detail; the key summary is that while (minor) differences are present, these can be attributed to improvements introduced by \texttt{GLACiAR} and are fully understood. In particular, the improved completeness analysis is more realistic in its treatment of non-Gaussian noise for all survey bands, and includes a sophisticated comparison between segmentation maps to identify blended objects with high reliability.
This initial application demonstrates that \texttt{GLACiAR} is a valuable tool to unify the completeness estimation in galaxy surveys. So far, the code is limited to surveys where the detection of the sources is done by \texttt{SExtractor}, but its structure has been designed to allow a future upgrade of capabilities by inclusion of \texttt{photutils} as well.
More broadly, the code is flexible, allowing, for instance, the modification of the redshift selection criteria along with the fraction of galaxies that follow different values of the S\'ersic index $n$. This makes \texttt{GLACiAR} suitable for a range of different applications in galaxy formation and evolution observations, including studies of LFs, contamination rates in galaxy surveys, and the characteristics of galaxies in redshift selections, among others. A future release of the code will also incorporate a module to account for weak and strong lensing magnification maps, with applications to galaxy cluster surveys such as the \textit{Frontier Fields} initiative.
\begin{acknowledgements}
We thank the referee for helpful suggestions and comments that improved the paper. This work was partially supported by grants HST/GO 13767, 12905, and 12572, and by the ARC Centre of Excellence for All-Sky Astrophysics in 3 Dimensions (ASTRO-3D).
\end{acknowledgements}
\begin{appendix}
\section{Description of input parameters}\label{appendix}
Below is a list with a brief description of all the parameters used to run \texttt{GLACiAR} (an illustrative parameter set is sketched after the list):
\begin{itemize}
\item[] \textsf{n\_galaxies:} Number of galaxies per image to place in each iteration ($\texttt{default} = 100$).
\item[] \textsf{n\_iterations:} Number of iterations, i.e. the number of times the simulation is going to be run on each image for galaxies with the same redshift and magnitude ($\texttt{default} = 100$).
\item[] \textsf{mag\_bins:} The number of desired magnitude bins. For a simulation run from $m_{1} = 24.0$ to $m_{2} = 25.0$ in steps of 0.2 magnitudes, there will be 6 bins ($\texttt{default} = 20$).
\item[] \textsf{min\_mag:} Brightest magnitude of the simulated galaxies ($\texttt{default} = 24.1$).
\item[] \textsf{max\_mag:} Faintest magnitude of the simulated galaxies ($\texttt{default} = 27.9$).
\item[] \textsf{z\_bins:} The number of desired redshift bins. For a simulation run from $z_{1} = 9.5$ to $z_{2} = 10.5$ in steps of 0.2, there will be 6 bins ($\texttt{default} = 15$).
\item[] \textsf{min\_z:} Minimum redshift of the simulated galaxies ($\texttt{default} = 9.0$).
\item[] \textsf{max\_z:} Maximum redshift of the simulated galaxies ($\texttt{default} = 11.9$).
\item[] \textsf{n\_bands:} Number of filters the survey images have been observed in. If not specified, it will raise an error.
\item[] \textsf{detection\_band:} Band in which objects are identified. If not specified, it will raise an error.
\item[] \textsf{lambda\_detection:} Central wavelength in angstroms of the detection band. If not specified, it will raise an error.
\item[] \textsf{bands:} Name of the bands from \textsf{n\_bands}. The detection band has to be the first entry in the list. If not specified, it will raise an error.
\item[] \textsf{zeropoints:} Zeropoint values corresponding to each band. The entries must follow the same order as \textsf{bands}. Default values are set as 25.0.
\item[] \textsf{gain\_values:} Gain values for each band. The entries must follow the same order as \textsf{bands}. If not specified, it will raise an error.
\item[] \textsf{list\_of\_fields:} Text file containing a list with the names of the fields the simulation will run for, which can be one or more. If not specified, it will raise an error.
\item[] \textsf{R\_eff:} Effective radius in kpc for a simulated galaxy at $z=6$. It is the half-light radius, i.e. the radius within which half of the emitted light is enclosed. This value changes with redshift as $(1+z)^{-1}$ ($\texttt{default} = 1.075$ kpc).
\item[] \textsf{beta\_mean:} Mean value for a Gaussian distribution of the UV spectral slope (Section~\ref{artificialgalaxy}). ($\texttt{default} = -2.2$).
\item[] \textsf{beta\_sd:} Standard deviation for the Gaussian distribution of the UV spectral slope, as explained in Section~\ref{artificialgalaxy} ($\texttt{default} = 0.4$).
\item[] \textsf{size\_pix:} Pixel scale for the images in arcsec ($\texttt{default} = 0.08$).
\item[] \textsf{path\_to\_images:} Directory where the images are located. The program will create a folder inside it with the results. If not specified, it will raise an error.
\item[] \textsf{image\_name:} Name of the images. They all should have the same name with the name of the field (list\_of\_fields) and band written at the end, as follows: `image$\_$name+field+band.fits'. If not specified, it will raise an error.
\item[] \textsf{types\_galaxies:} Number of different S\'ersic indexes used ($\texttt{default} = 2$).
\item[] \textsf{sersic\_indexes:} Value of the S\'ersic index parameter $n$ for the number of \textsf{types\_galaxies} ($\texttt{default} = [1,4]$).
\item[] \textsf{fraction\_type\_galaxies:} Fraction of galaxies corresponding to each of the S\'ersic indexes given ($\texttt{default} = [0.5,0.5]$).
\item[] \textsf{ibins:} Number of bins for the inclination angle. The inclination can vary from $0^{\circ}$ to $90^{\circ}$, i.e., if 10 bins are chosen, it will vary in steps of $9^{\circ}$. One bin indicates no variation of the inclination angle ($\texttt{default} = 1$).
\item[] \textsf{ebins:} Number of bins for the eccentricity. The values can vary from 0 to 1, i.e., if 10 bins are chosen, the eccentricity will vary in steps of 0.1. One bin indicates only circular shapes ($\texttt{default} = 1$).
\item[] \textsf{min\_sn:} Minimum $S/N$ ratio in the detection band for an object to be considered detected by \texttt{SExtractor}. ($\texttt{default} = 8.0$)
\item[] \textsf{dropouts:} Boolean that indicates whether the user desires to run a dropout selection ($\texttt{default} = False$).
\item[] \textsf{de\_Vacouleur:} Boolean that indicates whether the user wants to make an exception for de Vaucouleurs galaxies. If true, galaxies with $n=4$ will only have circular shapes ($\texttt{default} = False$).
\end{itemize}
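As a reference, the following illustrative parameter set collects the defaults listed above into a Python dictionary; the band names, wavelength, zeropoints, gains, and paths are placeholders, not values tied to any particular survey.
\begin{verbatim}
params = {
    "n_galaxies": 100, "n_iterations": 100,
    "mag_bins": 20, "min_mag": 24.1, "max_mag": 27.9,
    "z_bins": 15, "min_z": 9.0, "max_z": 11.9,
    "n_bands": 2,
    "detection_band": "F160W",          # placeholder band name
    "lambda_detection": 15400.0,        # placeholder wavelength in angstroms
    "bands": ["F160W", "F098M"],        # detection band listed first
    "zeropoints": [25.0, 25.0],
    "gain_values": [2.5, 2.5],          # placeholder gains
    "list_of_fields": "fields.txt",
    "path_to_images": "/path/to/images/",
    "image_name": "image_",
    "R_eff": 1.075, "beta_mean": -2.2, "beta_sd": 0.4,
    "size_pix": 0.08,
    "types_galaxies": 2, "sersic_indexes": [1, 4],
    "fraction_type_galaxies": [0.5, 0.5],
    "ibins": 1, "ebins": 1, "min_sn": 8.0,
    "dropouts": False, "de_Vacouleur": False,
}
\end{verbatim}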
\end{appendix}
\nocite*{}
\bibliographystyle{pasa-mnras}
\section{Brief introduction to Langevin Dynamics}
Non-equilibrium systems evolving towards equilibrium exist everywhere in nature and have a wide range of applications, from soft matter and biophysics to strongly coupled phenomena in the most extreme phases of matter. The point of convergence of such phenomena is Brownian motion \cite{brown01}, the irregular motion of mesoscopic particles that are large enough to admit a hydrodynamic type of coarse-graining, but small enough to exhibit thermal fluctuations in a liquid environment, caused by random microscopic interactions with the particles of the medium. The implications of Brownian motion were very important even from the initial stages of its theoretical study, verifying at the time the atomic nature of matter and providing a good estimate of Avogadro's constant \cite{ein1905,sutherland1, smol1,perrin1}. Shortly after these developments, the probabilistic description was introduced to lay down the foundations of non-equilibrium statistical mechanics with the Langevin and Fokker-Planck equations. Since then there has been continuous interest in theoretical developments and new applications in all quantitative sciences, making the topic a slowly evolving revolution \cite{brownianII,brownian100,brownianrel,brownian111}.
The fluctuation and dissipation effects are captured by the classical Langevin equation
\bea \label{lang}
\dot{p}_i(t)=-\eta_D p_i(t) +\xi_i(t)~,
\eea
where $p$ is the momentum of the particle, $\eta_D$ is the so-called friction coefficient or momentum drag coefficient, and $\xi$ is the random force. These two competing phenomena are realized in nature by the friction force, corresponding to dissipation and representing the energy transfer to the environment, and by the noise of the environment, which contributes the right amount of energy to the system through fluctuations so that it evolves towards equilibrium. The physical picture described can be formulated as a theorem: the fluctuation-dissipation theorem relates the magnitudes of these phenomena at equilibrium.
The theoretical developments are based on natural assumptions, mainly on the separation of the timescales of the phenomena involved: the relaxation time needed for the particle to thermalize is large compared to the collision timescale. The random force then has the following properties: i) for timescales larger than the collision time the force is stochastic with zero mean value; ii) the stochastic force is uncorrelated with itself and is translationally invariant, a property ensured by the equilibrated homogeneous environment; iii) the statistical properties of the random force obey time-translational invariance. These are summarized in the following relations
\bea
\vev{\xi(t)}=0~,\qquad \vev{\xi(t)\xi(t')}=\kappa \delta(t-t')~,
\eea
where $\kappa$ is a constant measuring the degree of correlation and represents the mean squared momentum transfer per unit of time. A particle experiencing such a random force undergoes random independent displacements generated by the integral
\bea
\int_0^t \xi(t')dt'= \int_0^{t_1} \xi(t')dt'+\int_{t_1}^{t_2} \xi(t')dt'+\ldots~,
\eea
which can be thought of as a summation of independent terms, each one drawn from the same distribution, resulting in a total integral obeying a normal distribution with zero mean. By applying the central limit theorem of statistics one directly obtains $\left< \dx^2 \right>\sim t$. Alternatively, the solution of the stochastic equation for $\tau\gg \eta_D^{-1}$ is
\bea\label{p2}
p_i(t)=\int_{-\infty}^t dt' e^{\eta_D(t'-t)}\xi_i(t')~,\qquad \vev{p^2}=\int^t dt_1 dt_2\, e^{\eta_D(t_1+t_2-2t)} \vev{\xi_i(t_1)\xi_i(t_2)}~=\frac{3\kappa}{2 \eta_D}~.
\eea
From the equipartition theorem the typical thermal momentum is $p\sim \sqrt{M T}$, and therefore the drag coefficient is related to the temperature through \eq{p2} as
\bea\label{eta}
\eta_D=\frac{\kappa}{2 M T}~.
\eea
To introduce the diffusion coefficient we compute the mean squared position of the particle from \eq{lang} at late times, and by using the equipartition theorem again we get
\bea
\vev{x_i(t)x_j(t)}=2 D t \delta_{ij}~, \qquad \vev{x^2(t)}=\frac{1}{M^2} \int^t dt_1 dt_2\vev{p(t_1)p(t_2)}= \frac{6 T t}{M \eta_D}~,
\eea
relating the diffusion constant with respect to the drag coefficient \eq{eta} and to the mean squared momentum transfer as
\bea\label{eqdn}
D=\frac{T}{M\eta_D}=\frac{2 T^2}{\kappa}~.
\eea
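The relations above are easy to verify numerically. The following minimal sketch integrates \eq{lang} with an Euler-Maruyama scheme, with the noise strength fixed by \eq{eta}, and checks the late-time behavior $\vev{x^2}\simeq 6Dt$ of \eq{eqdn}; all numerical values are illustrative choices in units with $k_B=1$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M, T, eta = 1.0, 1.0, 0.5               # illustrative units with k_B = 1
kappa = 2.0*M*T*eta                     # fluctuation-dissipation relation
D = T/(M*eta)                           # Einstein relation

n_part, n_steps, dt = 2000, 5000, 0.04
p = np.zeros((n_part, 3))
x = np.zeros((n_part, 3))
for _ in range(n_steps):
    noise = rng.normal(0.0, np.sqrt(kappa*dt), size=p.shape)
    p += -eta*p*dt + noise              # Euler-Maruyama step of the Langevin eq.
    x += p/M*dt

t = n_steps*dt                          # t >> 1/eta: diffusive regime
print(np.mean(np.sum(x**2, axis=1))/t, 6*D)   # the two should roughly agree
\end{verbatim}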
\subsection{Justification of the Heavy Quark Diffusion in the Quark-Gluon Plasma}
The task of modeling the heavy quark interaction with a thermal environment is amenable to a diffusion treatment similar to the one we have just described \cite{Svetitsky:1987gq,vanHees:2004gq,Moore:2004tg,Mustafa:2004dr}. The role of the heavy test particle of the previous paragraph undergoing Brownian motion is played by a heavy quark in an environment of a light-particle fluid. The charm and bottom quark masses are much larger than the temperature and the constituent masses of the equilibrated Quark-Gluon-Plasma (QGP) environment, providing a good separation between the relaxation and the collision times.
Let us justify this by the following estimations.
The typical non-relativistic thermal momentum of a heavy quark with mass $M$, with $M\gg T$, is $p^2\sim M T\gg T^2$, resulting in the low velocity $v^2\sim T/M\ll 1$. The typical squared momentum transfer from the medium for hard collisions is of order $Q^2\sim T^2$ from the equipartition theorem, and therefore we have $p\gg Q$. As a result, a large number of collisions, of order $M/T$, is required to change the momentum by a factor of order one. Therefore, the interaction of the heavy quark with the medium can be formulated in terms of uncorrelated momentum kicks, and the Boltzmann equation can be expanded in the momentum transfer to give the Fokker-Planck equation of heavy quark diffusion in the QGP medium. We can further take $\a_S\simeq 0.5$ and $M/T=7$ to estimate the drag coefficient with respect to the diffusion constant as $\eta_D^{-1}\simeq 7 D$.
Therefore, Brownian motion and dissipation phenomena provide direct observables for the heavy quarks in strongly coupled theories, capable of revealing potentially interesting properties of the fundamental interactions. In a sense, the heavy quark interactions can be thought of as encoded in the transport coefficients, which in principle are related to the scattering matrix elements on light partons in the QGP.
One way to study the heavy quark diffusion is perturbation theory \cite{Svetitsky:1987gq}, where the medium is approximated as a weakly interacting system of quarks and gluons, a treatment that is not reliable under the conditions realized at heavy ion colliders. Non-perturbative interactions can be captured by effective resonance models \cite{vanHees:2004gq}, allowing one to compute certain resonances related to the diffusion dynamics, or by methods of holography, which we discuss in this review.
\section{Fluctuation and Dissipation in Holography: A Unified Approach}
In this section we focus mostly on the description of heavy quark diffusion in the context of the gauge/gravity duality, reviewing mainly the unified study scheme developed in \cite{Giataganas:2018ekx}. Previous works initializing these ideas in AdS and Lifshitz spacetimes include \cite{deBoer:2008gu,Son:2009vu,CaronHuot:2011dr,Sonner:2012if,Tong:2012nf}, while other relevant works include \cite{Hubeny:2010ry,Fischler:2014tka,Yeh:2014mfa,Yeh:2015cra,Banerjee:2015vmo,Moerman:2016wpv,Lee:2016wcn, Kiritsis:2013iba, Ho:2013rra,Fischler:2012ff,Roychowdhury:2015mta}. In the next section we will study the fluctuations and the energy loss of a moving heavy quark.
To present the general picture, let us introduce the gravity dual theory in string frame
\bea\label{gen1}
ds^2=g_{00}(r) dx_0^2+ g_{rr}(r)dr^2 +\sum_{i=1}^{d} g_{ii}(r) dx_i^2 ~,
\eea
where $g_{00}(r)<0$ in our conventions and $\lim_{r\to \infty} g_{ii}(r)=\infty$, such that the boundary is at $r=\infty$. Here $d$ is the number of spatial dimensions and the metric is diagonal. The dual field theory lives on the spacetime spanned by $(x_0,x_i)$, and $r$ is the holographic direction.
The massive heavy particle is represented by a string that starts from the boundary spacetime, thus introducing extra degrees of freedom to the field theory, and extends into the bulk of the space in the IR until the point $r= r_h$. For a thermal quantum field theory $r_h\neq 0$, corresponding to the horizon of the black hole, while for a field theory at zero temperature $r_h=0$ and the string terminates in the deep IR. The dynamics of such strings is described by solutions of the Nambu-Goto (NG) action. Let us consider a worldsheet which extends along the $x_1$ direction, parametrized by $x_1=x_1(\tau,\sigma),~ r=\sigma,~ x_0=\tau,$ where $(\tau,\sigma)$ are the worldsheet coordinates. The action is equal to
\bea\label{actiona1}
S=-\frac{1}{2 \pi \alpha'} \int d\sigma d\tau \sqrt{-\prt{g_{00}+g_{11} \dx_1^2} \prt{g_{rr}+g_{11} x_1'^2}}~.
\eea
For a static particle one expects, by symmetry arguments, a straight string solution. Indeed, it can easily be found that the solution to the equations of motion of the above action is $x_1=0$, where a spatial coordinate transformation has been implemented to bring the origin to the position of the string.
The fluctuations of the heavy quark are realized by the dynamics of the string fluctuations. For the static quark we need to consider the fluctuations around $x_1=0$. However, the length of the string worldsheet in the bulk is infinite, due to the infinite distance from the boundary of the space to its interior. Since the length is proportional to the particle's mass, we need to introduce a Neumann boundary condition at the location of a flavor brane $r_b$ close to the boundary. The boundary condition then reads $x_1'(r_b)=0$. The fluctuations $\delta x_1(t,r)$ give the Nambu-Goto action in terms of the metric elements \cite{Giataganas:2018ekx}
\bea\label{oactionorder2}
S=~c~-\frac{1}{4 \pi \alpha'} \int d\sigma d\tau \left(-\frac{g_{11} \sqrt{-g_{00}}}{\sqrt{g_{rr}}}\delta x_1'^2+ \frac{g_{11}\sqrt{g_{rr}}}{\sqrt{-g_{00}}}\delta \dx_1^2\right)~.
\eea
The Fourier decomposed fluctuations then take the form
\bea\label{flucgen}
\delta x_1(t,r)=\int_0^\infty d\omega\, h_\omega (r) \left(\alpha(\omega)e^{-i\omega \tau}+\alpha(\omega)^\dagger e^{i\omega \tau}\right)~,
\eea
with $\alpha(\omega)^\dagger$ and $ \alpha(\omega)$ being the creation and annihilation operators, and the mode equation reads
\bea\label{modes0}
\frac{\pp}{\pp r}\prt{\frac{g_{11} \sqrt{-g_{00}}}{\sqrt{g_{rr}}}h_\omega(r)'}+\omega^2\frac{g_{11}\sqrt{g_{rr}}}{\sqrt{-g_{00}}}h_{\omega}(r)=0~.
\eea
\subsection{Heavy Quark Fluctuation at Zero Temperature}
It is known that in quantum physics the fluctuation and dissipation phenomena are present even at zero temperature, due to the vacuum fluctuations of the environment fields and the uncertainty principle. Even in the simplest case of the zero-point energy of the electromagnetic field it has been shown that the fluctuation and dissipation phenomena occur and the relevant theorems hold \cite{landausp}. Perturbative methods analyzing these phenomena have also been developed, integrating out the environmental degrees of freedom by modeling them as an infinite number of simple harmonic oscillators \cite{caldeira1983,Schwinger,Feynman:1963fq,Grabert:1988yt, Hu:1993qa, Hu:1986jj}. Ohmic, sub-ohmic, and supra-ohmic environmental effects have been analyzed in such approaches; for example, works on the latter include \cite{Hsiang:2005pz, Hsiang:2007zb}.
The quantum fluctuations of the heavy quark in a strongly coupled environment follow the same logic. The test particle fluctuations are induced by its coupling to the gluonic fields, resulting in a non-uniform motion. The dissipation is realized by gluonic radiation back into the medium, induced by the non-canonical motion of the quark. Here we analyze the resummed physical effects of such theories using techniques of the gauge/gravity duality, reviewing the general methodology developed in \cite{Giataganas:2018ekx}.
To proceed to the solution of the fluctuation equations, it is necessary to consider a certain general class of theories with dual backgrounds that belong to \eq{gen1}, which we choose as
\bea\label{polymetric1}
g_{00}=-r^{a_0} f(r)~,\quad g_{rr}=\frac{1}{r^{a_u} f(r)}~,\quad g_{ii}=r^{a_i}~,\quad f(r)=1~,\quad r_h=0~,
\eea
where the indices $i$ count the spatial directions and $a_i$ are constant powers. The class of dual field theories accommodated by the above metric includes the hyperscaling-violating Lifshitz theories \cite{Kachru:2008yh,Dong:2012se,Narayan:2012hk}, the anisotropic theories \cite{Azeyanagi:2009pr,Mateos:2011ix,Mateos:2011tv,Giataganas:2017koz,Jain:2014vka,Donos:2016zpf}, and several others. Features of the current analysis are applicable to backgrounds with the asymptotics of \eq{polymetric1}, like certain RG flow gravity dual solutions.
There are several reasons for choosing the form of the metric as in \eq{polymetric1}. A certain rescaling could bring it to a form with a smaller number of constants $a_i$; however, this would not make our presentation simpler, since our results are formulated in terms of a single constant $\nu$ incorporating all the scalings of the background. Moreover, the formulas derived below for the chosen form of the metric are directly applicable to any gravity background it accommodates, without the need of any coordinate transformation. Finally, such a choice is convenient for building on the finite temperature string fluctuation analysis and for the mapping of the string-brane fluctuations we study in later sections.
Notice that the analysis of string fluctuations in hyperscaling violation theories at zero temperature \cite{Edalati:2012tc} guarantees that our zero temperature methods go through all the way for particular metrics, although the theories considered here are more generic than the hyperscaling ones. They include, for example, the anisotropic theories, with different stability and physical ranges of the parameters compared to the hyperscaling theories, allowing new features of the string fluctuations. Our notation also offers a powerful advantage: since each scaling is unique, we can track accurately how the different metric elements affect the observables. This is the crucial point that will allow us to identify the order of the Bessel function of the fluctuations as the central quantity on which the stochastic observables exclusively depend. This observation holds even at finite temperature, as we will show in later sections.
For this class of backgrounds, the mode equation \eq{modes0} becomes
\bea\label{mawtw1}
\frac{\partial}{\partial r}\left(r^{a_1+\frac{a_0+a_u}{2}} h_\omega(r)'\right)+\omega^2 r^{a_1-\frac{a_0+a_u}{2}}h_{\omega}(r) = 0 ~ ,
\eea
with a solution of type \cite{Giataganas:2018ekx}
\bea\label{solutionmodes1}
h_\omega(r) = r^{-\nu \kappa} A_\omega \prtt{J_\nu\prt{\omega \tilde{r}}+B_\omega Y_\nu\prt{\omega \tilde{r}}}~,
\eea
where $A_\omega$ and $B_\omega$ are constants and
\bea\label{definitionn1}
\tilde{r} := \frac{2 r^{\frac{1}{2}\left(2-a_0-a_u\right)}}{a_0+a_u-2}=\frac{r^{-\kappa}}{\kappa}~,\qquad \kappa:= \frac{a_0+a_u}{2}-1~,\qquad \nu:=\frac{a_0+2 a_1+a_u-2}{2\prt{a_0+a_u-2}}~
\eea
and $J_\nu(\tilde{r}),~Y_\nu(\tilde{r})$ are the Bessel functions of the first and second kind. The integration constants are found from the canonical commutation relations for theories in curved spacetimes and from the Neumann boundary condition, to obtain (see Figure 1)
\bea
A_\om=\sqrt{\frac{ \pi \a'}{|\kappa|\prt{1+B_\omega^2}}}~, \qquad B_\omega=- \frac{J_{\n-1}\prt{\omega \tilde{r}_b}}{Y_{\n-1}\prt{ \omega\trr_b}}~,
\eea
where in the above and following computations we need to use the Bessel function properties presented in the Appendix of \cite{Giataganas:2018ekx}.
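As a quick illustration, the modes \eq{solutionmodes1} and the constant $B_\omega$ can be evaluated directly with standard Bessel routines; the sketch below, with illustrative parameter values, also checks that the AdS$_5$ choice $a_0=a_u=a_1=2$ gives $\kappa=1$ and $\nu=3/2$.
\begin{verbatim}
import numpy as np
from scipy.special import jv, yv

def nu(a0, a1, au):                 # order of the Bessel function
    return (a0 + 2*a1 + au - 2)/(2*(a0 + au - 2))

def kappa(a0, au):
    return (a0 + au)/2 - 1

def h_mode(r, omega, a0, a1, au, r_b):
    """Mode function h_omega(r), up to the normalization A_omega."""
    k, n = kappa(a0, au), nu(a0, a1, au)
    rt, rtb = r**(-k)/k, r_b**(-k)/k
    B = -jv(n - 1, omega*rtb)/yv(n - 1, omega*rtb)   # Neumann condition
    return r**(-n*k)*(jv(n, omega*rt) + B*yv(n, omega*rt))

print(kappa(2, 2), nu(2, 2, 2))     # AdS_5: kappa = 1, nu = 3/2
print(h_mode(5.0, 0.3, 2, 2, 2, 10.0))   # sample evaluation of a mode
\end{verbatim}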
\begin{figure}
\centerline{\includegraphics[width=60mm,angle =90]{x2dp.pdf}}
\caption{\small{The real part of the string worldsheet fluctuations in a fixed Lifshitz background for a particular frequency and time. The large density of excitations in the deep IR (small $r$) is quickly smoothed out, resulting in the corresponding Brownian motion on the boundary at large $r$.}}
\end{figure}
The two-point function of the fluctuations in terms of the energy of the string in the low frequency limit
\bea\label{smallo}
\omega \ll \trr_b =\frac{r_b^{-\kappa}}{\kappa}~,
\eea
takes a very compact form
\bea\label{twopointfun01}
\vev{X_1(t) X_1(0)}~\sim~
E^\frac{4\kappa\prt{1-\nu}}{ \prt{a_0-\kappa}} ~|t|^{3-2\nu}~,\quad ~\rm{when}~\quad \nu \ge 1~,
\eea
and
\bea\label{twopointfun02}
\vev{X_1(t) X_1(0)}\sim E^0 ~|t|^{ 2\nu-1}~,\quad ~\rm{when}\quad~ \nu \le 1~,
\eea
where $X_1(t):=\delta x_1(t,r_b)$. The energy of the string is obtained by trading the cut-off $r_b$ at the boundary, using the action of the static straight string
\bea\label{energy0}
E=\frac{1}{2\pi \a'}\int_{r_h}^{r_b} dr \sqrt{ -g_{00} g_{rr}}= \frac{r_b^{a_0-\kappa}}{2 \pi\a' \prt{a_0-\kappa}}~.
\eea
To summarize the analysis: after an involved computation, the two-point functions \eq{twopointfun01}, \eq{twopointfun02} turn out to depend only on a single parameter, the order of the Bessel function. This is a surprising and elegant result. We mention that in the unrelated study of chaos in non-relativistic theories, it has been found that the order of the Bessel function of closed string solutions controls whether the theory is chaotic or not \cite{Giataganas:2014hma}. This is another example where the order of the Bessel function plays such a significant role.
Depending on the properties of the dual field theory of \eq{polymetric1}, the two-point function of the particle fluctuations is characterized by one of the two branches we derived, one of which is always independent of the mass of the quark. The null energy condition (NEC) is satisfied in regions of both branches of the two-point function \eq{twopointfun01} and \eq{twopointfun02}, ensuring that both of them can be physical. For theories giving $\nu=3/2$ and $\nu=1/2$, to which the AdS background belongs, we have a minimal rate of logarithmic growth, while for theories giving $\n=1$ we have the maximum rate of growth.
\subsubsection{Linear Response Function Analysis and Fluctuation-Dissipation Relation}
The response function of the system due to an applied external force $F(t)= e^{-i\omega t} F(\omega)$ can be found from
\bea\label{response}
\vev{X_1(\omega)}=\chi(\omega) F(\omega)~,
\eea
where $\chi(\omega)$ is the admittance of the system. The force corresponds to a new boundary term
\bea\label{bboundf1}
\frac{g_{11} \sqrt{-g_{00}}}{\sqrt{g_{rr}}}\delta x_1'(r_b)= 2\pi\alpha' F(t)~,
\eea
which does not modify the solution of the string fluctuations, only its constants. Making a tortoise coordinate transformation $r \rightarrow \tilde{r}$, we see that the fluctuation dynamics in the IR limit $r\rightarrow 0 \Rightarrow\tilde{r} \rightarrow \infty$ for $\kappa>0$ behaves like a wave equation in flat space, where by choosing ingoing boundary conditions we get
\bea\label{solamodes3}
\delta x_1(t,r)=e^{-i\omega t} g_\omega(r)~,\qquad g_\omega(r)=r^{-\n\kappa} H_\n\prt{\om \tilde{r}}~,
\eea
with $H:=H^{(1)}=J+iY$ being the Hankel function. Therefore the response function can be found to be
\bea\label{response00}
\chi(\omega) = \frac{2\pi\a'}{\omega} r_b^{-a_1} \frac{H_{\nu }(\omega \tilde{r}_b)}{H_{\n-1}(\omega \tilde{r}_b)}~.
\eea
All the information about the system and the theory is incorporated in the response function in an elegant way, through the definitions of $\n$ and $\trr$. The fluctuation-dissipation theorem can also be shown to be satisfied,
\bea\label{flds}
\mbox{Im}\chi(\omega)=\frac{4 \alpha'}{ \omega^2}\frac{r_b^{-a_1}}{\tilde{r}_b}~\prt{J_{\nu-1}^2\prt{\omega \tilde{r}_b}+Y_{\nu-1}^2\prt{\omega \tilde{r}_b}}^{-1}=\frac{1}{2} \vev{X_1(\omega)X_1(0)}~,
\eea
for the theories fitting in our study.
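The relation \eq{flds} is straightforward to check numerically from \eq{response00}; the following sketch, with illustrative parameter values, compares the two sides at low frequency (the comparison is of magnitudes, up to the overall sign convention of the response function).
\begin{verbatim}
import numpy as np
from scipy.special import jv, yv, hankel1

alpha_p, a1, nu_, kap = 1.0, 2.0, 1.5, 1.0   # illustrative, AdS-like values
r_b = 10.0
rt_b = r_b**(-kap)/kap

def chi(omega):                              # response function
    x = omega*rt_b
    return (2*np.pi*alpha_p/omega)*r_b**(-a1)*hankel1(nu_, x)/hankel1(nu_ - 1, x)

omega = 0.5                                  # omega*rt_b = 0.05 << 1
x = omega*rt_b
lhs = abs(chi(omega).imag)
rhs = (4*alpha_p/omega**2)*r_b**(-a1)/rt_b/(jv(nu_-1, x)**2 + yv(nu_-1, x)**2)
print(lhs, rhs)                              # agree up to the sign convention
\end{verbatim}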
By expanding the response function in the positive $\nu$ region
\bea
\chi(\omega)\sim -c_1\prtt{ m\prt{i\omega}^2 +\cO(\omega^4) +\gamma \prt{-i\omega}^{2\nu}+\cO(\omega^{2\nu+2})}^{-1}~,
\eea
we obtain the inertial mass $m$ and the self-energy $\g$ of the particle
\bea\label{massin1}
m=\frac{r_b^{2\kappa\prt{\nu-1}}} {2 \kappa \prt{\nu-1}}~,~\qquad ~ \g=\frac{1-i\tan\prtt{\prt{\nu-\frac{1}{2}}\pi}}{\prt{ \prt{2 i \kappa}^{2\nu-1}\G\prt{\nu}^2}} \pi ~,
\eea
and we find that the inertial mass is not simply equal to the energy of the string given by \eq{energy0}. The order of the Bessel function plays a significant role in the result, indicating for example which term dominates in the expansion. For $\n>1$ the inertial mass dominates over the self-energy at low frequency; otherwise the self-energy is the dominant contribution.
\subsection{Thermal Diffusion of a Heavy Quark}
Let us now consider the Brownian motion in a thermal environment. The metric \eq{polymetric1} has to contain a black hole
\bea\label{polymetric2}
g_{00}=-r^{a_0} f(r)~,\quad g_{rr}=\frac{1}{r^{a_u} f(r)}~,\quad g_{ii}=r^{a_i}~,\quad f(r):=1-\frac{r_h^{a_f}}{r^{a_f}}~,\quad r_h\neq 0~,\quad T=\frac{a_f}{4\pi}r_h^{\kappa}~,
\eea
where $a_f$ is a constant and $T$ is the temperature of the heat bath. The blackening factor does not necessarily need to be chosen as the function above; for most of our analysis it is enough to assume a single simple zero at the position of the horizon $r_h$.
The equations of motion for the thermal fluctuations are generated by \eq{modes0} and can be brought to a Schr\"{o}dinger-like form by the coordinate transformation presented in the Appendix of \cite{Giataganas:2018ekx}, to get
\bea\label{schrodinger}
\frac{\partial^2 y}{\partial r_\star^2}+\prt{\omega^2-V(r)} y=0~,
\eea
where
\bea\label{rrstar}
&& y=h_\omega(r) r^\frac{a_1}{2}~ ,\qquad r_\star= -\frac{r_h^{-\kappa}}{a_f} ~B_{\prt{\frac{r_h}{r}}^{a_f}}\prtt{\frac{\kappa}{ a_f},0} ~,\\\label{yeq1}
&& V(r)=\frac{a_1}{2} r^{2\kappa} f(r)\prt{\left(a_0+a_u+\ff{a_1}{2}-1\right)f(r)+r f'(r) }~,
\eea
where $B_z\prtt{a,b}$ is the incomplete beta function. The monodromy patching procedure can be applied to obtain an approximate solution, patching solutions in regions extending from the black hole horizon all the way to the boundary \cite{Motl:2003cd,Maldacena:1996ix,Harmark:2007jy,deBoer:2008gu,Tong:2012nf}. The three regions are chosen as
\bea
\label{rr1}
&&A)\qquad r\sim r_h\qquad\mbox{and}\qquad V(r)\ll \omega^2~ \Longleftrightarrow~ f(r)\ll \omega~.\\
\label{rr2}
&&B)\qquad V(r)\gg \omega^2~ \Longleftrightarrow~ f(r)\gg \omega~.\\
&&C)\qquad r\rightarrow \infty ~\Longleftrightarrow~ r\gg r_h~, \label{rr3}
\eea
giving the solutions for the Fourier modes $\delta x_1(t,r)=e^{-i\omega t} h_{\omega}(r)$ as \cite{Giataganas:2018ekx}
\bea\nn
&& h_{Ah}(r)=c_1\left(1- \frac{i\omega r_h^{-\kappa}}{a_f} \log\prt{\frac{r}{r_h}-1}\right)~,\\\label{monodromy1}
&& h_{Bh}(r)=c_3+c_4~ c_0+\frac{c_4}{a_f} r_h^{-2\kappa\n} \log\left(\frac{r}{r_h}-1\right)~,\\ \nn
&& h_{Bb}(r)=c_3-\frac{ c_4 }{2\kappa\n} r^{-2\kappa\n}~,\\ \nn
&& h_{Cb}(r)=c_5+c_6 ~r^{-2\kappa\nu}\left(\frac{\omega ~ \mbox{sign}\prt{1+2\kappa}}{2\kappa}\right)^{2\nu}~,
\eea
where the subscripts identify which part of the space each solution covers: the region (A, B or C) and the proximity to the (h)orizon or the (b)oundary. By patching the solutions we eventually obtain for their constants
\bea\label{combineall}
c_5=c_1\prt{1+ i c_0 \om r_h^{a_1} }~,\qquad c_6 = \frac{ i\om r_h^{a_1}c_1}{2\kappa\n} \prt{ \frac{2\kappa}{\om ~ \mbox{sign}\prt{1+2\kappa}}}^{2\n}~,
\eea
to give the diffusive nature of the solution near the boundary
\bea \label{hcresult}
h_\omega(r)=c_1\left(1+i\omega c_0 r_h^{a_1}+\frac{i \omega r_h^{a_1}}{2\kappa\nu} r^{-2\kappa\nu}\right)~.
\eea
\subsubsection{Response Function Analysis, Fluctuation-Dissipation and Nature of the Thermal Noise}
The response function expansion for low frequency is given by \cite{Giataganas:2018ekx}
\bea
\chi(\omega)=2\pi \a'\prt{\frac{i}{\g \omega}-\frac{m}{\g^2}+\cO(\omega)}~,
\eea
where the damping coefficient and the inertial mass read
\bea\label{thermalcore1}
\g= r_h^{a_1}~,\qquad m= r_h^{2 a_1} \prt{-c_0+ \frac{r_b^{-2\kappa\n}}{2\kappa\n}}+ m_0~.
\eea
The mass receives a thermal correction compared with the zero temperature result $m_0$ \eq{massin1}. The diffusion constant in terms of the response function reads
\bea \label{diffusion1}
D=T \lim_{\om\rightarrow 0}\prt{-i~\om \chi(\om)}~,
\eea
and is found to depend on powers of temperature specified by the order of the Bessel function \cite{Giataganas:2018ekx}
\bea\label{ordiffusiont}
D=2\pi\a' \prt{\frac{4\pi}{a_f}}^{2\n-1}~ T^{2\prt{1-\n}}~.
\eea
The diffusion coefficient therefore increases with temperature for $\nu<1$ and decreases for $\n>1$, depending on the characteristics of the dual field theory.
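The scaling \eq{ordiffusiont} is simple enough to tabulate directly; the short sketch below, with illustrative parameter values, displays the two monotonicity regimes.
\begin{verbatim}
import numpy as np

def D_of_T(T, nu, a_f, alpha_p=1.0):
    """Diffusion constant D(T) with the temperature scaling T^(2(1-nu))."""
    return 2*np.pi*alpha_p*(4*np.pi/a_f)**(2*nu - 1)*T**(2*(1 - nu))

T = np.array([0.5, 1.0, 2.0])
print(D_of_T(T, nu=0.8, a_f=4.0))   # grows with T for nu < 1
print(D_of_T(T, nu=1.5, a_f=4.0))   # falls with T for nu > 1
\end{verbatim}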
An interesting observation in the low frequency limit is that the response function takes a universal form, depending only on the metric element along the spatial direction of the fluctuations evaluated at the horizon, given by the simple formula \cite{Giataganas:2018ekx}
\bea\label{responsemet1}
\chi(\om)=\frac{2\pi \a'}{-i\om g_{11}(r_h)}~.
\eea
The fluctuation-dissipation theorem is found to hold when the density of states is $\sim \log\e/(4\pi^2 T)$. Moreover, it can be seen that the correlator of the random force $\xi$ is independent of the frequency $\omega$, indicating white noise, which depends on the temperature with a power set by the Bessel function order, $\vev{\xi~ \xi}\sim T^{2\nu}$.
\subsection{Dp-brane Fluctuations in a $d+1$-spacetime}
In \cite{Giataganas:2015ksa} it has been noticed that the dynamics of the $k$-strings, corresponding holographically to certain Dp-branes, can be mapped to the dynamics of fundamental strings in theories that have certain quantities preserved under T-duality. Here we consider another type of special branes, the rigid ones, and study the fluctuation and dissipation of these branes.
The brane is parametrized in the radial gauge as: $x_0=\tau,~r=\sigma_1:=\sigma,~ x_2=\sigma_2,\ldots,~ x_p=\sigma_p, ~ x_j=x_j(\tau,\sigma_i)$, $j=1, p+1,\ldots,d$. The fluctuations along the spatial transverse direction $x_1$ are given by the perturbation of the Dirac-Born-Infeld (DBI) action as \cite{Giataganas:2018ekx}
\bea
S_{\mbox{DBI},2}=\frac{T_p }{2} \int d\tau d\sigma_i \left(-\frac{g_{11} \sqrt{-g_{00}}\sqrt{g_{22}\ldots g_{p p}}}{\sqrt{g_{rr}}}\delta x_1'^2+ \frac{g_{11}\sqrt{g_{rr}}\sqrt{g_{22}\ldots g_{pp}}}{\sqrt{-g_{00}}}\delta \dx_1^2\right)~,
\eea
giving the mode equation for $\delta x_1 (\tau,\sigma)=e^{-i\omega \tau} h_\omega(r)$
\bea \label{bbmodes01}
\partial_r\prt{-\frac{g_{11} \sqrt{-g_{00}}\sqrt{g_{22}\ldots g_{pp}}}{\sqrt{g_{rr}}}h_\omega(r)'}- \frac{g_{11}\sqrt{g_{rr}}\sqrt{g_{22}\ldots g_{p p }}}{\sqrt{-g_{00}}}\omega^2 h_\omega(r)=0~.
\eea
We observe that under a mapping of the form \cite{Giataganas:2018ekx}
\bea\label{mapping0}
g_{11} \longrightarrow g_{11}\sqrt{g_{22} g_{33}\ldots g_{dd} }~, ~~~\mbox{or equivalently } ~~a_1\longrightarrow \tilde{a}_1=a_1+\frac{1}{2}\prt{a_2+\ldots+a_p}~,
\eea
the string fluctuation equations \eq{modes0} become equivalent to the brane fluctuation equations. By defining the shifted constants
\bea\label{branedefinition}
\tilde{r}:=\frac{2 r^{\frac{1}{2}\left(2-a_0-a_u\right)}}{a_0+a_u-2}=\frac{r^{-\kappa}}{\kappa}~,\qquad \kappa:= \frac{a_0+a_u}{2}-1~,\qquad \tilde{\nu}:=\frac{a_0+2 \ta_1+a_u-2}{2\prt{a_0+a_u-2}}~,
\eea
the brane-related analysis follows in a straightforward way. For example the two-point function reads
\bea\label{branetwopointbf1}
\vev{X_1(t) X_1(0)}\sim
E^\frac{4\kappa\prt{1-\tilde{\nu}}}{ \left(a_0-\kappa\right)} ~|t|^{3-2\tilde{\nu}}~,\quad~ \rm{when}\quad \tilde{\nu} \ge 1~,
\eea
and
\bea\label{branetwopointbf2}
\vev{X_1(t) X_1(0)}\sim E^0 ~|t|^{ 2\tilde{\nu}-1}~,\quad \rm{when}\quad~ \tilde{ \nu } \le 1~.
\eea
The response function of the quantum brane fluctuations is
\bea\label{bform1}
\chi(\omega) =\frac{2\pi\alpha'}{\omega} r_b^{-a_1-\frac{1}{2}\prt{a_2+\ldots+a_p}} \frac{H_{\tilde{\nu} }(\omega \trr_b)}{H_{\tilde{\nu}-1}(\omega \tilde{r}_b)}~.
\eea
Its low-frequency expansion provides the inertial mass $m$ and the self-energy $\g$ of the state,
\bea\label{braneform}
m= \frac{r_b^{2\kappa\left(\tilde{\nu}-1\right)}} {2 \kappa \prt{\tilde{\nu}-1}}~,\qquad ~\g= \frac{1-i\tan\prtt{\prt{\tilde{\nu}-\frac{1}{2}}\pi}}{\prt{ \prt{2 i \kappa}^{2\tilde{\nu}-1}\Gamma\left(\tilde{\nu}\right)^2}} \pi ~.
\eea
The analysis of the thermal fluctuations goes along the same lines as for the string. For example, for the branes the diffusion constant in terms of the temperature reads
\bea\label{bdiffusionbf}
D=2\pi\alpha' \prt{\frac{4\pi}{a_f}}^{2\tilde{\nu}-1}~ T^{2\prt{1-\tilde{\nu}}}~,
\eea
which increases with temperature for $\tilde{\nu}<1$ and decreases otherwise.
For the branes in an arbitrary background the response function was proposed to take the form \cite{Giataganas:2018ekx}
\bea\label{bchibf}
\chi(\om)=\frac{2\pi \a'}{-i\om g_{11}(r_h)\sqrt{g_{22}(r_h)\ldots g_{pp}(r_h)}}~.
\eea
\subsection{Application of the Generic Formalism to a Particular Theory}
Let us briefly demonstrate how the methodology described above applies to theories dual to the anisotropic black hole background found recently in the Einstein-Axion-Dilaton theory, which contains as a particular case an IIB supergravity background \cite{Giataganas:2017koz}. The geometry, which accommodates a black hole, written in the form of \cite{Giataganas:2017koz,Giataganas:2018ekx}, reads
\bea\label{hyscametr}
ds^2=a^2 C_R e^{\frac{\phi(r)}{2}} r^{-\frac{2\theta}{dz}} \left(r^{2} \prt{-f(r)dt^2+dx_i^2}+C_Z r^{\frac{2}{z}} dx_3^2+\frac{dr^2}{f(r)a^2 r^2}\right) ~,
\eea
where
\bea\label{fdilela}
f(r)=1-\left(\frac{r_h}{r}\right)^{d+\prt{1-\theta}/z}~,\qquad e^{\frac{\phi(r)}{2}}=r^\frac{\sqrt{\theta^2+3 z\prt{1-\theta}-3}}{\sqrt{6}z}~
\eea
and
\bea
C_R=\frac{\prt{3z-\theta}\prt{1+3z-\theta}}{6 z^2}~,\qquad C_Z=\frac{z^2}{2\prt{z-1}\prt{1+3z-\theta}}~.
\eea
The background becomes the IIB supergravity solution for $z=3/2,~\theta=0$, reproducing the geometry of \cite{Azeyanagi:2009pr} and the IR geometry of the RG flow of \cite{Mateos:2011ix}. The Hawking temperature of the theory is found to be
\bea
T=\frac{|d+(1-\theta)/z|}{4 \pi r_h^z} ~.
\eea
Let us work with $d=3$ spatial dimensions. The first task is to determine the order of the Bessel functions for fluctuations along each direction by using \eq{definitionn1}
\bea
&&\nu_1=\frac{18z-4\theta +\sqrt{6}\sqrt{3z\prt{1-\theta}-3+\theta^2}}{12 z}~ ,\\
&&\nu_3=\frac{12+6z-4\theta +\sqrt{6}\sqrt{3z\prt{1-\theta}-3+\theta^2}}{12 z}~.
\eea
The fluctuations along the $x_1$ and $x_3$ behave in a different manner. The two-point function along $x_1$ is
\bea\label{twopointa1}
\vev{X_1(t) X_1(0)}\sim E^{-2} ~|t|^{3-2\nu_1}~,\quad \rm{when}\quad \nu_1 \ge 1 ~,
\eea
where for $\n_1<1$ there is no physical and stable theory. The $x_3$ fluctuations give
\bea\label{twopointa4}
\vev{X_3(t) X_3(0)}\sim
E^{2\frac{12-6z-4\theta +\sqrt{6}\sqrt{3z\prt{1-\theta}-3+\theta^2}}{-6z+4\theta-\sqrt{6}\sqrt{3z\prt{1-\theta}-3+\theta^2}}} ~|t|^{3-2\nu_3}~,\quad \rm{when}\quad \nu_3 \ge 1 ~,
\eea
or
\bea
\vev{X_3(t) X_3(0)}\sim E^0 ~|t|^{ 2\nu_3-1}~,\quad \rm{when}\quad \nu_3 \le 1~.
\eea
The diffusion constant depends on the direction and is given by
\bea
D_i=2\pi \alpha'\prt{\frac{4\pi}{d+\prt{1-\theta}/z}}^{2\nu_i-1}T^{2\prt{1-\nu_i}}~,
\eea
where $i=1,3$ labels the direction of the fluctuations.
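The expressions for $\nu_1$ and $\nu_3$ are easy to evaluate; the sketch below computes them for given $(z,\theta)$ and checks the IIB supergravity point $z=3/2,~\theta=0$, where $\nu_1=5/3$ and $\nu_3=4/3$, so both exceed unity and the diffusion constants decrease with temperature.
\begin{verbatim}
import numpy as np

def nu13(z, theta):
    """Bessel orders for fluctuations along x_1 and x_3 (d = 3)."""
    root = np.sqrt(6.0)*np.sqrt(3*z*(1 - theta) - 3 + theta**2)
    nu1 = (18*z - 4*theta + root)/(12*z)
    nu3 = (12 + 6*z - 4*theta + root)/(12*z)
    return nu1, nu3

print(nu13(1.5, 0.0))   # IIB point: (5/3, 4/3), both > 1
\end{verbatim}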
\section{Stochastic Motion of the Moving Heavy Quark: Theory Independent Approach}
Let us consider a heavy particle moving with a velocity $v$ in a strongly coupled environment. The Langevin coefficients $\kappa_\perp, \kappa_\parallel$, corresponding to the mean squared momentum transfer per unit of time in the transverse and parallel directions with respect to the quark's motion, are obtained by analyzing the fluctuations of the trailing Wilson line. The out-of-equilibrium relativistic heavy quarks undergo Brownian motion with a stochastic force $\xi(t)$. The methodology for generic theories, including the anisotropic ones, was developed in \cite{Giataganas:2013hwa,Giataganas:2013zaa}, providing a set of readily applicable formulas for the observables and extracting certain behaviors in a wide class of theories. The set of theories accommodated in this study includes, for example, conformal, non-relativistic, hyperscaling violation and anisotropic theories, as well as theories in certain magnetic fields. The formulas of \cite{Giataganas:2013hwa,Giataganas:2013zaa} have been applied to several particular models, for example in \cite{Finazzo:2016mhm}. The initiating works on the subject include \cite{Herzog:2006gh,Gubser:2006bz,Gubser:2006nz,CasalderreySolana:2006rq,CasalderreySolana:2007qw}, and early works that contributed to further development include \cite{Giecold:2009cg,Gursoy:2009kk,HoyosBadajoz:2009pv,Gursoy:2010aa}.
It is worth mentioning that, by studying the relativistic heavy quark diffusion in theories with rotational invariance, it has been found that a universal inequality for the Langevin coefficients exists:
$\kappa_\parallel\ge \kappa_\perp$ \cite{Giataganas:2013hwa,Gursoy:2010aa}. Namely, the longitudinal Langevin diffusion coefficient along the quark motion is larger than the one along the transverse direction. This inequality has been shown to be violated in the presence of strongly coupled anisotropies \cite{Giataganas:2013hwa,Giataganas:2013zaa}, in a similar way to the well known shear viscosity over entropy density bound \cite{Kovtun:2004de,Rebhan:2011vd,Jain:2015txa,Giataganas:2017koz} (other relevant works include \cite{Erdmenger:2010xm,Samanta:2016pic,Ge:2015owa,Kolekar:2016pnr}).
Other related works on the drag and the stochastic string motion include \cite{Gubser:2006qh,CasalderreySolana:2009rm,Giecold:2009cg,Rajagopal:2015roa,Herzog:2007kh,Akamatsu:2015kaa,Horowitz:2015dta,Nakamura:2013yqa, Arean:2016het,Roy:2009sw,Horowitz:2009pw,Caceres:2006as,Ahmadvand:2015gfi,Zhang:2018mqt}, while dragging of the particle has been observed even at zero temperature in non-relativistic theories \cite{Hartnoll:2009ns,Kiritsis:2012ta,Fadafan:2009an}. An earlier review with references therein is \cite{CasalderreySolana:2011us}.
\subsection{The Trailing String} \label{section:trailing}
Let us first review the theory-independent analysis of the trailing string, following the Appendices of \cite{Giataganas:2012zy,Giataganas:2013lga} for the drag force analysis, while for the stochastic motion analysis we follow \cite{Giataganas:2013hwa,Giataganas:2013zaa}. We use a metric similar to \eq{gen1} with $u=1/r$, and therefore a boundary at $u\rightarrow 0$ and a black hole horizon at $u=u_h$,
\bea\label{gen22}
ds^2=g_{00}(u) dx_0^2+ g_{uu}(u)du^2 + \sum_{i=1}^{d} g_{ii}(u) dx_i^2 ~.
\eea
Here $g_{00}(u)<0$ as before. The string parametrization for a quark moving along the $x_1$ direction with velocity $v$ is $x_0=\tau,~~u=\sigma,~~x_1=v~t+\xi(u)$, where $\xi$ is the profile of the string in the bulk, satisfying $\xi(u_b)=0$ at the boundary.
The equation of motion is
\bea\label{trailings}
\xi'^2=-g_{uu} C^2\,\frac{g_{00}+g_{11}\,v^2} {g_{00}g_{11}\prt{C^{2}+g_{00}g_{11}}}~,~\qquad ~ C:=2~\pi\,\alpha'\,\Pi^1_u~,
\eea
where $u_0$ is the horizon of the induced world-sheet metric, given by the solution of
\bea\label{wshorizon}
g_{00}(u_0)=-g_{11}(u_0)\,v^2~,
\eea
when $g_{uu}(u_0)\neq 0$. For $v=0$ the horizon of the worldsheet coincides with the horizon of the black hole, in agreement with natural expectations. The drag force on the particle is given by \cite{Giataganas:2012zy}
\bea\label{drag1}
F_{drag,x_1}=-\frac{1}{2\pi\alpha'}\sqrt{-g_{00}(u_0)\,g_{11}(u_0)}=-\frac{ g_{11}(u_0)~v}{2\pi\alpha'}~,
\eea
while the friction coefficient is defined by
\bea\label{drag22}
F_{drag}=\frac{dp}{dt}=-\eta_D p~,\qquad \eta_D=\frac{g_{11}(u_0)}{2\pi\alpha' M \gamma}~,
\eea
where $M$ is the mass of the heavy quark, $p=M\,v~\gamma$ and $\g$ is the Lorentz factor.
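As a concrete check of these formulas, the sketch below evaluates the profile equation \eq{trailings} and the drag force \eq{drag1} in AdS$_5$-Schwarzschild, where the worldsheet horizon and the drag are known in closed form; the numerical values of $v$, $u_h$ and $\alpha'$ are illustrative choices.
\begin{verbatim}
import numpy as np

# AdS_5-Schwarzschild in the u coordinate (boundary at u = 0):
# g00 = -f/u^2, g11 = 1/u^2, guu = 1/(u^2 f), with f = 1 - (u/u_h)^4.
v, u_h, alpha_p = 0.6, 1.0, 1.0        # illustrative values
gamma = 1.0/np.sqrt(1.0 - v**2)

u0 = u_h*(1.0 - v**2)**0.25            # worldsheet horizon: f(u0) = v^2
C = v/u0**2                            # momentum flux fixed by regularity at u0

def xi_prime(u):                       # integrand of the trailing profile
    f = 1.0 - (u/u_h)**4
    g00, g11, guu = -f/u**2, 1.0/u**2, 1.0/(u**2*f)
    return np.sqrt(-guu*C**2*(g00 + g11*v**2)/(g00*g11*(C**2 + g00*g11)))

# integrate from near the boundary down to just above u0:
us = np.linspace(1e-3, 0.95*u0, 400)
xi_shift = np.sum(xi_prime(us))*(us[1] - us[0])  # how far the string trails

F_drag = -v/(2.0*np.pi*alpha_p*u0**2)            # -g11(u0) v/(2 pi alpha')
print(xi_shift, F_drag, -gamma*v/(2*np.pi*alpha_p*u_h**2))  # last two agree
\end{verbatim}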
The worldsheet metric has a blackening factor, and therefore a temperature is associated with it. To find this temperature we diagonalize the worldsheet metric, to get \cite{Giataganas:2013hwa}
\bea\label{tws1}
T_{ws}^2=
\frac{1}{16\pi^2}\bigg|\frac{1}{g_{00} g_{uu}}\prt{g_{00}~g_{11}}' \prt{\frac{g_{00}}{g_{11}}}'\bigg|\Bigg|_{u=u_0}
~.
\eea
$T_{ws}$ should be thought of as the effective temperature that the quark feels, and it is the temperature that appears in the Einstein-like relations between the diffusion and the Langevin coefficients. The effective temperature turns out to be lower than the heat bath temperature in most theories, although in anisotropic theories it may become higher. The natural expectation for the static quark $(v=0)$ would be that it feels the heat bath temperature, and this can be verified from the above relation, which gives $T_{ws}=T$.
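A short symbolic computation confirms \eq{tws1} in AdS$_5$-Schwarzschild, reproducing the known result $T_{ws}=T/\sqrt{\gamma}$; the sketch below uses an illustrative rational velocity, and the explicit sign flip implements the absolute value in \eq{tws1}.
\begin{verbatim}
import sympy as sp

u, u_h = sp.symbols('u u_h', positive=True)
v = sp.Rational(3, 5)                          # illustrative velocity v = 0.6
f = 1 - u**4/u_h**4
g00, g11, guu = -f/u**2, 1/u**2, 1/(u**2*f)    # AdS_5-Schwarzschild

u0 = u_h*(1 - v**2)**sp.Rational(1, 4)         # worldsheet horizon
expr = (g00*g11).diff(u)*(g00/g11).diff(u)/(g00*guu)
# expr is negative at u0, so -expr implements the absolute value:
Tws = sp.sqrt(sp.simplify(-expr.subs(u, u0))/(16*sp.pi**2))

T = 1/(sp.pi*u_h)                              # heat bath temperature
print(sp.simplify(Tws/T), (1 - v**2)**sp.Rational(1, 4))  # both (1-v^2)^(1/4)
\end{verbatim}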
\subsection{Fluctuation of the Moving Trailing String}
Let us review the fluctuations around the trailing string in a generic background. The method was developed in \cite{Giataganas:2013hwa,Giataganas:2013zaa} and the action was found to be
\bea \label{actiontr22}
S_2=-\frac{1}{2\pi\alpha'}\int d\tau d\sigma \,\frac{H^{\alpha\beta}}{2}~\left[N(u)\,\pp_\alpha \delta x_1\,\partial_\beta \delta x_1+\sum_{i\neq 1}{g_{ii}}\partial_\alpha \delta x_i~\partial_\beta \delta x_i\right]~,
\eea
where $H^{\a\beta}=\sqrt{-h}{h}^{\alpha\beta}$, and $h^{\alpha\beta}$ is the inverse of the diagonalized induced world-sheet metric given by
\bea
h_{\sigma\sigma}
=\frac{g_{00}g_{uu}g_{11}}{g_{00}g_{11}+C^2}~,\qquad
h_{{\tau}{\tau}}=g_{00}+v^2\,g_{11}~.
\eea
Taking advantage of the membrane paradigm it has been found that the Langevin coefficients are computed by \cite{Giataganas:2013hwa,Giataganas:2013zaa}
\bea\label{membranek}
\kappa_\perp=\frac{1}{\pi\alpha'}\,g_{kk}\bigg|_{u=u_0} T_{ws}~,~\qquad~
\kappa_\parallel=\frac{1}{\pi\alpha'}\,\frac{\left(g_{00}g_{11}\right)'} {g_{11}\,\left(\frac{g_{00}}{g_{11}}\right)'}\Bigg|_{u=u_0} T_{ws}~,
\eea
where the index $k$ denotes a particular direction transverse to the direction of motion $x_1$, and no summation is taken. The ratio takes the surprisingly compact form
\bea\label{ratioklkt}
\frac{\kappa_\parallel}{\kappa_\perp}=\frac{\left(g_{00}g_{11}\right)'} {g_{kk}g_{11}\,\left(\frac{g_{00}}{g_{11}}\right)'}\Bigg|_{u=u_0}~.
\eea
In isotropic spaces it has been found that $\kappa_\parallel>\kappa_\perp$ for any velocity of the quark's motion \cite{Giataganas:2013hwa,Gursoy:2010aa}. In anisotropic theories the universal condition is violated \cite{Giataganas:2013hwa,Giataganas:2013zaa}: there exists a critical quark velocity $v_c$ beyond which the inequality is inverted to $\kappa_\parallel<\kappa_\perp$.
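The ratio \eq{ratioklkt} can be checked symbolically; for AdS$_5$-Schwarzschild the sketch below reproduces the known isotropic result $\kappa_\parallel/\kappa_\perp=\gamma^2\ge 1$, consistent with the inequality above.
\begin{verbatim}
import sympy as sp

u, u_h, v = sp.symbols('u u_h v', positive=True)
f = 1 - u**4/u_h**4
g00, g11 = -f/u**2, 1/u**2              # AdS_5-Schwarzschild; g_kk = g11
gkk = g11

u0 = u_h*(1 - v**2)**sp.Rational(1, 4)  # worldsheet horizon
ratio = (g00*g11).diff(u)/(gkk*g11*(g00/g11).diff(u))
print(sp.simplify(ratio.subs(u, u0)))   # 1/(1 - v**2), i.e. gamma**2
\end{verbatim}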
\subsection{Excitation of the Medium due to Heavy Quark Motion}
The quark deposits energy in the medium through its interactions with the environment, and as a result excitations occur in the medium which are expected to be well described by linearized hydrodynamics. Due to the motion of the particle through the plasma, a laminar wake is generated behind it, which has been shown to be of universal strength with respect to the total drag force exerted by the plasma. A sonic boom has been discovered for probes moving faster than the speed of sound, and a diffusion wake behind the quark's motion has also been found \cite{Gubser:2007ga,Gubser:2007ni,CasalderreySolana:2006sq,Friess:2006fk}.
The total action to compute such backreacted effects is given by
\bea
S=\frac{1}{2 \kappa^2}\int d^5 x \sqrt{-g} \prt{R+\frac{12}{L^2}}-S_{NG}~,
\eea
where the NG action is computed for the trailing string of Section~\ref{section:trailing}. The equations of motion read
\bea
R_{\mu\nu}-\frac{1}{2} g_{\mu\nu} R-\frac{6}{L^2} g_{\mu\nu}=\tau_{\mu\nu}~.
\eea
If we concentrate on the AdS spacetime
\bea
ds^2=\frac{1}{u^2}\prt{-f(u)dt^2+d\vec{x}^2+\frac{du^2}{f(u)}}~,\qquad~ f(u)=1-\frac{u^4}{u_h^4}~,
\eea
the bulk stress-energy tensor for the trailing string takes the form
\bea
\tau_{\mu\nu}=-\frac{\kappa^2}{2\pi\alpha'} u^3 \sqrt{1-v^2} \partial_\alpha x^\mu \partial^\alpha x^\nu~,
\eea
computed on the string world-sheet. The backreaction on the metric can be found by considering the fluctuations $g_{\mu\nu}=g_{\mu\nu}^{0}+h_{\mu\nu}$ around the AdS metric $g_{\mu\nu}^{0}$. For the trailing string, $h_{\mu\nu}$ depends on $x_1$ and $t$ only through the combination $x_1-v t$, and we have the freedom to take the axial gauge $h_{\mu u}=0$. The system consists of ten second order differential equations in $u$ (twenty integration constants) subject to five first order constraints, and therefore we need to specify fifteen integration constants. These are fixed by imposing conditions on the boundary of the space and at the horizon of the black hole, as in \cite{Gubser:2009sn}.
The low momentum asymptotics can be obtained analytically by formally expanding all variables in low momenta; the diffusion pole and the sound pole expected from the hydrodynamic behavior of the plasma are confirmed by the computations \cite{Gubser:2009sn}. At large momenta, on the other hand, the leading term of the stress-energy tensor is a boosted version of the stationary quark's, as expected: at scales much shorter than the typical length scale of the fluid, the quark does not see the plasma and feels a vacuum state. The next order reveals the existence of a critical velocity of the quark's motion at which one passes from a region of energy depletion behind the quark to a region of energy depletion in front of it. The full numerical analysis for various quark velocities is presented in \cite{Gubser:2009sn}.
\subsection{Non-Perturbative Monte Carlo Simulations of the Heavy Quark Diffusion}
An estimate of the heavy quark momentum diffusion coefficient from Monte-Carlo simulations using the Backus-Gilbert method \cite{bgflattice} was obtained in \cite{Francis:2015daa}, giving
\bea
\kappa=1.8 \ldots 3.4~ T^3~.
\eea
The result seems to be in agreement with a next-to-leading order computation in perturbative QCD using the hard thermal loop effective theory, with the scales and the coupling set to the values usually used in heavy ion collisions \cite{CaronHuot:2007gq}. The diffusion coefficient is then estimated to be
\bea
D T= 0.35 \ldots 1.1
\eea
which is higher than the value predicted for the light quarks in the continuum limit \cite{Ding:2010ga,Burnier:2012ts}. This can be naturally explained by the fact that the heavy quarks feel slightly weaker interactions. Using these methods, it would be interesting to show that the heavy mass limit is justified for the lighter charm quarks, and not only for the heavier bottom quarks. A promising direction would also be to estimate the effects of dynamical quarks on the heavy quark diffusion, where the screening should affect the observables. In this direction the gauge/gravity duality could also provide very insightful qualitative results. In this context, flavor is added to the correspondence with the use of Dp-branes, where the quenched limit is more easily tractable \cite{Karch:2002sh,Erdmenger:2007cm}, while the unquenched limit is computationally demanding \cite{Burrington:2004id,Nunez:2010sf}, so approximate or numerical methods have been developed and the relevant screening effects on several observables have been observed, for example \cite{Kirsch:2005uy,Giataganas:2011nz,Bigazzi:2014qsa,Alho:2012mh,Faedo:2017aoe,Li:2016gtz}.
\section{Summary}
In this brief review we have presented a theory-independent treatment of stochastic heavy quark dynamics. In the introduction we justified why the heavy quarks admit a stochastic treatment. We then presented the analysis of quantum and thermal fluctuations of a static quark by considering fluctuations of the straight string. We moved on to the analysis of the trailing string fluctuations to obtain the Langevin equations. The idea of this review is to present the model-independent holographic formulas, applicable to a wide class of theories, which have been obtained in \cite{Giataganas:2018ekx} for the static quark and in \cite{Giataganas:2013hwa,Giataganas:2013zaa} for the trailing string. We briefly summarize most of the formulas below.
\textbf{Quantum Fluctuations of the Heavy Particle:} Using the wide class of theories described by \eq{gen1} we obtain the generic form of the action \eq{oactionorder2} describing the fluctuations with the mode equation \eq{modes0}. For the theories of the form \eq{polymetric1} the fluctuations are given by a Bessel type solution \eq{solutionmodes1} with order $\nu$ \eq{definitionn1}
which depends on the background geometry. By applying the boundary conditions and an appropriate quantization we determine the constants of the solutions. The two-point function of the fluctuations has a surprisingly compact form, with the two branches \eq{twopointfun01} and \eq{twopointfun02} controlled by the order of the Bessel function and therefore by the properties of the theory we study.
The response function analysis is done by applying the generic boundary force \eq{bboundf1}. The modification of the boundary condition leads to a different solution for the fluctuations specified by \eq{solamodes3}. The response function is found in terms of the Hankel function with the same order $\nu$. Then the fluctuation-dissipation theorem is found to be satisfied \eq{flds}. By expanding the response function we determine the inertial mass and the self energy of the particle for the whole class of theories \eq{massin1}. All the results and their properties heavily depend on the order $\n$ incorporating the information of the theory.
\textbf{Thermal Diffusion of the Heavy Particle:} Including the black hole in our geometry \eq{polymetric2}, we study the thermal string fluctuations. The solution of the fluctuation equation \eq{schrodinger} is involved, and the monodromy patching method needs to be used, patching certain approximate solutions in different regions along the holographic direction \eq{monodromy1}. The solution close to the boundary is given by \eq{hcresult}, depending heavily on the asymptotics of the metric element along the fluctuations and on the Bessel function order. The response function is found to take the form \eq{responsemet1}, depending exclusively on the metric element along the direction of the fluctuations evaluated at the black hole horizon. The self-energy and the thermally corrected inertial mass are given by \eq{thermalcore1}. Interestingly, the diffusion coefficient scales with the temperature in a way that is solely controlled by the Bessel function order \eq{ordiffusiont}, showing how elegantly the information of this wide class of backgrounds is encoded in the order $\nu$.
\textbf{Diffusion of Rigid Dp-Branes:} By mapping the equations of the rigid branes \eq{bbmodes01} to the string fluctuations, we find a way to read off all the stochastic observables and coefficients from the analysis done for the strings. The mapping of the string to the brane fluctuations is implemented by \eq{mapping0}, giving the shifted Bessel function order \eq{branedefinition}. Then the two-point function of the Dp-branes \eq{branetwopointbf1}, \eq{branetwopointbf2}, the response function \eq{bchibf}, the inertial mass and self-energy \eq{braneform}, and the diffusion coefficient \eq{bdiffusionbf} can be read off in a straightforward way from the prescription explained above.
\textbf{Dragging of the Heavy Moving Particle:} We consider a heavy particle moving with velocity $v$ in a strongly coupled environment described by a metric whose elements are arbitrary functions of the holographic direction \eq{gen22}. The profile of the trailing string is given by equation \eq{trailings}, and the two-dimensional worldsheet has a black hole with a horizon given by the solution of equation \eq{wshorizon}. The drag force is expressed in terms of the metric element along the direction of the quark's motion \eq{drag1}, while the friction coefficient is found in \eq{drag22}. The moving quark feels an effective temperature, different from that of the heat bath, given by equation \eq{tws1}.
\textbf{Langevin Coefficients of the Heavy Moving Particle:} The action of the fluctuations is given by \eq{actiontr22}. Employing the membrane paradigm we obtain the Langevin coefficients in terms of the metric elements and the effective temperature felt by the quark \eq{membranek}. These are generic, powerful formulas directly applicable to thermal holographic theories. The ratio of the two coefficients takes a very simple form, given by \eq{ratioklkt}, and it turns out that for isotropic theories it satisfies a universal inequality of the form $\kappa_\parallel>\kappa_\perp$ for the whole range of quark velocities. In anisotropic theories the universal inequality is inverted above a critical quark speed, and the universality is violated in a way similar to the violation of the viscosity-over-entropy bound in anisotropic theories.
~\newline~\newline
\textbf{Acknowledgements:} The work of the author is supported by the National Center of Theoretical Science (NCTS) and the grants 101-2112-M-007-021-MY3 and 104-2112-M-007-001-MY3 of the Ministry of Science and Technology of Taiwan (MOST). This review was submitted to the Proceedings of the Corfu Summer Institute 2017 `School and Workshops on Elementary Particle Physics and Gravity', 2--28 September 2017, Corfu, Greece.
\bibliographystyle{JHEP}
\section{Introduction}
In standard cosmology, most of the matter in the Universe is in the form of an elusive substance dubbed ``dark matter" (DM), which must be fundamentally different from the particles contained in the Standard Model of particle physics~\cite{Bertone:2016nfn,Bertone:2010zza}. In virtually all proposed formation scenarios, black holes form in environments characterized by a high dark matter density. Potentially large dark matter overdensities are expected to form around supermassive \cite{Gondolo:1999ef}, intermediate-mass \cite{Bertone:2005xz}, and so-called ``primordial'' \cite{2008ApJ...680..829R} black holes, as a consequence of their formation and evolution.
We focus here on primordial black holes (PBHs), i.e.~compact objects that may have formed in the early Universe from small-scale, large-amplitude density fluctuations generated during inflation, or via a variety of other mechanisms (see e.g. \cite{Hawking:1971ei,Carr:1974nx,ChaplineNature1975}; for recent reviews, \cite{Green:2014faa,Sasaki:2018dmp}). The recent discovery of several gravitational wave signals from merger events of massive binary-black-hole (BBH) systems has prompted a renewed debate on the contribution of PBHs in the mass range $M_\mathrm{PBH}$ $\sim$ $1$ - $10^2$ M$_\odot$ to dark matter \cite{Bird:2016dcv,ClesseBellido2016}.
If one considers PBH pairs forming in virialized structures, the PBH binary merger rate is compatible with the one inferred by the LIGO and Virgo collaborations ($\mathcal{R} \simeq$ 12 - 213 Gpc$^{-3}$ yr$^{-1}$ \cite{Abbott:2017vtc}) assuming that all of the DM is in the form of PBHs \cite{Bird:2016dcv}. However, a significant number of PBH pairs decouple from the Hubble flow deep in the radiation era, so PBH pairs can also form copiously in the early Universe \cite{Nakamura:1997sm,Ioka:1998nz}. A recent calculation of the associated merger rate at present time \cite{Sasaki:2016jop} -- extended and refined in \cite{Ali-Haimoud:2017rtz} -- provides a much larger estimate of the PBH merger rate. This can be translated into a bound on the fraction of DM in the form of PBHs, which is potentially much stronger than any other constraint in the same mass range (see \cite{PhysRevD.94.083504,Garcia-Bellido:2017xvr,Clesse:2017bsw,Gaggero:2016dpq} and references therein).
Here we show that the dark matter accumulated around PBHs significantly modifies the dynamical evolution of PBHs binaries.
In fact, unless PBHs contribute all of the DM in the Universe, they inevitably grow mini-halos of DM around them in the early Universe, whatever its fundamental nature is~\cite{Mack:2006gz,Ricotti:2007jk}.
These DM ``dresses'' are expected to grow until the PBHs form binary systems that decouple from the Hubble flow, and to dramatically alter the evolution of the binaries due to {\it dynamical friction} \cite{Chandrasekhar1943a,Chandrasekhar1943b,Chandrasekhar1943c}.
While the PBHs orbit around each other, they interact with their respective DM mini-halos and induce slingshot effects on the DM particles, losing energy and momentum in the process, and eventually heating up the DM halos. The effect of dynamical friction on BH binaries has been studied in the context of super-massive binary systems at the center of galaxies, and is expected to make the binaries more compact and less eccentric. This can have a potentially significant effect on the merger time, and eventually on the merger rate of those objects at the present time (see, for instance, \cite{Begelman:1980vb,Quinlan:1996vp,2017PhRvD..96f3001G}).
In order to assess the impact of DM mini-halos on the orbits of PBH binaries, we perform N-body simulations to follow the dynamics of these systems, making use of the publicly available \textsc{GADGET-2}~ code \cite{Springel:2005mi} as a gravity-only N-body solver. We follow the evolution of the PBH binary system from the time at which it decouples from the Hubble flow until the point at which most of the DM in the mini-halos has been ejected, and the semi-major axis and eccentricity of the system have stabilised. We thus self-consistently compute the merger times and merger rates of primordial BBH systems today, and compare them with those inferred by the LIGO and Virgo collaborations. We also present a simple analytical model that captures the main aspects of the numerical calculations, and offers useful insights on the physics of the problem.
The paper is organized as follows. In Sec.~\ref{sec:BBHproperties} we discuss the primordial BBH parameter space, following the formalism detailed in Ref.~\cite{Ali-Haimoud:2017rtz} and including the evolution of DM mini-halos following Refs.~\cite{Mack:2006gz,Ricotti:2007jk}; in Sec.~\ref{sec:simulations} we present the setup and results of our numerical simulations and our procedure to remap the BBH parameter space under the effects of dynamical friction. In Sec.~\ref{sec:results} we present our estimate of the merger rate that takes these effects into account; the resulting bound on the fraction of DM in the form of PBHs is shown in Fig.~\ref{fig:LIGO_limit}. Finally, we discuss these results and possible caveats in Sec.~\ref{sec:discussion}, followed by our conclusions in Sec.~\ref{sec:conclusions}. Code and results associated with this work can be found \href{https://github.com/bradkav/BlackHolesDarkDress}{here} \cite{DarkDressCode}.
\section{Formation and properties of PBH binaries formed in the early Universe}
\label{sec:BBHproperties}
\subsection{Properties of PBH binaries}
\label{sec:properties}
If PBHs make up all the DM, most of the pairs decouple from the Hubble flow before matter-radiation equality and form bound systems (see Fig. \ref{fig:fractionPBHplot}); if they contribute only a dark matter fraction $f_{\rm PBH} \ll 1$, only rare pairs with small separations form binaries.
It is possible to determine the probability distribution for these systems in a two-dimensional parameter space where the two independent variables are the semi-major axis $a$ of the binary orbit and the dimensionless angular momentum, defined as
\begin{equation}
j \equiv \frac{\ell}{\sqrt{2 G_N M_\mathrm{PBH} a}} = \sqrt{1 - e^2} \,,
\end{equation}
where $\ell$ is the angular momentum per unit reduced mass and $e$ is the eccentricity.
Following the notation and the approach described in Ref.~\cite{Ali-Haimoud:2017rtz} (see also \cite{Chen:2018czv}), it is convenient to define the dimensionless variable $X$ as follows:
\begin{equation}
X \equiv \left(\frac{x}{\bar{x}}\right)^3 \,,
\label{eq:bigX}
\end{equation}
where $x$ is the comoving separation of the PBH pair and
\begin{equation}
\bar{x} \equiv \left( \frac{3 M_{\rm PBH}}{4 \pi f_{\rm PBH} \, \rho_\mathrm{eq}} \right)^{1/3}\,,
\end{equation}
is the mean (comoving) separation between two PBHs, in terms of the PBH mass $M_{\rm PBH}$, the density at matter-radiation equality $\rho_\mathrm{eq}$, and the fraction $f_{\rm PBH}$ of DM in PBHs. Under the assumption that PBHs are uniformly distributed\footnote{The effect of clustering has recently been discussed in \cite{Ballesteros:2018swv}, and is found to be negligible for narrow mass functions, and potentially relevant for broader distributions.}, the differential probability distribution with respect to $X$ is simply
\begin{equation}
\frac{\partial P}{\partial X} \,=\, e^{- X}\,.
\end{equation}
The angular momentum distribution is trickier, as it requires us to model the tidal field in which the binaries are immersed.
A first estimate was performed in \cite{Nakamura:1997sm}, considering only the torque exerted by the tidal force caused by the PBH which is closest to the binary.
A more refined treatment, accounting for the tidal torquing exerted by {\it all other PBHs} surrounding the binary itself, was presented later in \cite{Ali-Haimoud:2017rtz}.
In the current work we adopt the latter prescription. It is useful to write explicitly the full PDF in terms of the variables $a$ and $j$ we are mostly interested in:
\begin{equation}
\label{eq:Paj}
P (a, j) |_{f, M_{\rm PBH}} \, = \, \frac{\partial X}{\partial a}\, \exp{\left(- \frac{x(a)^3}{\bar{x}^3}\right) } \, P(j)\,,
\end{equation}
with
\begin{equation}
\label{eq:measure}
\frac{\partial X}{\partial a} \,=\, \frac{\partial X}{\partial x} \frac{\partial x(a)}{\partial a} \,=\, \frac{3}{4 a^{1/4}} \left(\frac{f_{\rm PBH}}{\alpha \bar{x}}\right)^{3/4} \,,
\end{equation}
where:
\begin{itemize}
\item The relation between the semi-major axis $a$ of the decoupled binary and the initial comoving separation $x$ of the PBH pair can be computed numerically \cite{Ali-Haimoud:2017rtz} by solving the equation of motion of two point sources subject simultaneously to their mutual gravitational pull and to the Hubble flow:
\begin{equation}
\frac{{\rm d}^2 r}{{\rm d}t^2} = - \frac{2 G_N M_{\rm PBH}}{r^2} \frac{r}{|r|} + (\dot{H}(t) + H(t)^2) \, r\,,
\label{eq:Motion1}
\end{equation}
where $H(t)$ is the Hubble parameter.
The solution clearly shows a turnaround of $r(t)$ followed by an oscillatory regime, which proceeds undisturbed by the Hubble flow; the relation between the semi-major axis $a$ of the newly formed binary and the initial PBH separation $x$ is then:
\begin{equation}
x(a) \simeq \left(\frac{3\, a\, M_{\rm PBH}}{4 \pi \,\alpha \,\rho_\mathrm{eq}} \right)^{1/4}
\label{eq:x_of_a}
\end{equation}
with $\alpha \simeq 0.1$ \cite{Ioka:1998nz,Ali-Haimoud:2017rtz}.
\item The $j$ probability distribution is also estimated in the same paper, and can be written as follows:
\begin{equation}
\label{eq:Pj}
j \, P(j)|_{f, M_{\rm PBH}} \,=\, \frac{y(j)^2}{\left(\, 1 \,+\, y(j)^2 \,\right)^{3/2}}\,,
\end{equation}
where
\begin{equation}
y(j) \equiv \frac{j}{0.5\, (1+\sigma_{eq}^2/f^2)^{1/2}\, X}.
\end{equation}
In the above expression, the contribution from large-scale Gaussian density perturbations, characterized by a variance $\sigma_{eq} \approx 0.005$ at matter-radiation equality, is taken into account. Both $x(a)$ and $P(j)$ are implemented in the numerical sketch following this list.
\end{itemize}
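As a concrete illustration, the short Python sketch below (not the authors' code) evaluates the ingredients of the PDF: $\bar{x}$, $x(a)$, $\partial X/\partial a$ and $P(j)$. The numerical value of $\rho_\mathrm{eq}$ is an assumption (roughly the matter density at $z_\mathrm{eq} \simeq 3375$ for $\Omega_m h^2 \simeq 0.14$), and the example parameters are illustrative.
\begin{verbatim}
# Sketch (not the authors' code) of the binary-PDF ingredients, SI units.
# RHO_EQ is an assumed matter density at z_eq (~1e-16 kg/m^3).
import numpy as np

M_SUN, PC = 1.989e30, 3.086e16
RHO_EQ, ALPHA, SIGMA_EQ = 1.0e-16, 0.1, 0.005

def xbar(m_pbh, f_pbh):
    """Mean comoving separation between PBHs."""
    return (3.0 * m_pbh / (4.0 * np.pi * f_pbh * RHO_EQ)) ** (1.0 / 3.0)

def x_of_a(a, m_pbh):
    """Comoving pair separation that yields a binary of semi-major axis a."""
    return (3.0 * a * m_pbh / (4.0 * np.pi * ALPHA * RHO_EQ)) ** 0.25

def dX_da(a, m_pbh, f_pbh):
    """Jacobian dX/da, with X = (x / xbar)^3."""
    return 0.75 * a ** -0.25 * (f_pbh / (ALPHA * xbar(m_pbh, f_pbh))) ** 0.75

def P_j(j, X, f_pbh):
    """Angular-momentum distribution P(j) at fixed X."""
    y = j / (0.5 * np.sqrt(1.0 + SIGMA_EQ**2 / f_pbh**2) * X)
    return y**2 / (1.0 + y**2) ** 1.5 / j

m, f, a = 30.0 * M_SUN, 0.01, 0.01 * PC      # illustrative parameters
X = (x_of_a(a, m) / xbar(m, f)) ** 3
print(np.exp(-X) * dX_da(a, m, f) * P_j(3.0e-3, X, f))   # P(a, j)
\end{verbatim}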
With these prescriptions, the integral of the PDF over the full $(a, j)$ parameter space provides the fraction of PBHs that form a decoupled binary system in the early Universe, as shown in Fig.~\ref{fig:fractionPBHplot} for different values of the PBH mass and DM fraction in PBHs.
The full PDF $P(a,j)$ is displayed in Fig.~\ref{fig:ProbabilityDistribution}. In the same figure we also show the contours referring to the expected merger time of the binary due to the emission of gravitational radiation, which is given by \cite{Peters:1964zz}:
\begin{equation}
\label{eq:tmerge}
t_\mathrm{merge} \,=\, \frac{3 \, c^5}{170 \, G_N^3} \, \frac{a^4 j^7}{M_{\rm PBH}^3}\,.
\end{equation}
We remark that either a very small semi-major axis or an extreme eccentricity is required to get a merger time comparable with the age of the Universe ($t_\mathrm{univ} \sim 13.7 \,\,\mathrm{Gyr}$): wider, more circular binaries tend to merge on much longer timescales.
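To make the numbers concrete, here is a minimal sketch (illustrative constants, not taken from the paper's code) that inverts Eq.~\eqref{eq:tmerge} for the angular momentum needed to merge within a Hubble time:
\begin{verbatim}
# Sketch: Peters merger time, Eq. (tmerge), for an equal-mass PBH binary.
import numpy as np

G, C = 6.674e-11, 2.998e8                 # SI
M_SUN, PC = 1.989e30, 3.086e16
T_UNIV = 13.7e9 * 3.156e7                 # age of the Universe in seconds

def t_merge(a, j, m_pbh):
    """GW coalescence time (s) for semi-major axis a (m)."""
    return 3.0 * C**5 * a**4 * j**7 / (170.0 * G**3 * m_pbh**3)

a, m = 0.01 * PC, 30.0 * M_SUN
# invert t_merge for the j that gives a merger today
j_today = (T_UNIV * 170.0 * G**3 * m**3 / (3.0 * C**5 * a**4)) ** (1.0 / 7.0)
print(f"j for t_merge = t_univ at a = 0.01 pc: {j_today:.1e}")   # ~5e-3
\end{verbatim}
For $a = 0.01\,\mathrm{pc}$ and $M_\mathrm{PBH} = 30\,M_\odot$ this gives $j \approx 5 \times 10^{-3}$, consistent with the extreme eccentricities just discussed.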
\begin{figure}[t]
\centering
\includegraphics[width=0.95\linewidth,]{plots/FractionPBHbin.pdf}
\caption{{\bf Fraction of PBHs that belong to some binary system formed in the early Universe}. This quantity is plotted as a function of the fraction of DM in PBHs (for different values of the PBH mass). As mentioned in the text, if PBHs make all the DM, most of them belong to pairs that have a chance to decouple from the Hubble flow before matter-radiation equality and form a binary system.}
\label{fig:fractionPBHplot}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth,]{plots/PDF_mergetoday_Kamionkowski_loglog.pdf}
\caption{{\bf Probability distribution of PBH binaries that decouple in the early Universe}. The PDF, derived in \cite{Ali-Haimoud:2017rtz}, is given by Eq. \ref{eq:Paj}. We plot it as a function of the semi-major axis $a$ and dimensionless angular momentum $j = \sqrt{1-e^2}$. The red solid lines show contours of constant merger time (in Gyr).}
\label{fig:ProbabilityDistribution}
\end{figure}
\subsection{Accretion of dark matter mini-halos before binary decoupling}
\label{sec:decoupling}
Let us now add another relevant piece of information to our model.
Given the PDF described above, the authors of \cite{Ali-Haimoud:2017rtz} derived the merger rate at present time, and found that it would exceed the one observed by the LIGO and Virgo collaborations. Thus, PBHs can only be a small fraction of the DM in the Universe.
Motivated by these results, we consider a scenario characterized by a sub-dominant population of PBHs, immersed in a high-density DM-dominated environment, rapidly expanding and diluting.
In this context, the relevant effect we want to model is the progressive growth of a DM mini-halo around each PBH, governed by the competition between the gravitational pull of the PBH and the expanding Hubble flow.
The accretion of the DM halo deep in the radiation era can be computed numerically~\cite{Mack:2006gz,Ricotti:2007jk}
by solving the following equation (similar to Eq. \ref{eq:Motion1}), describing radial infall of matter in an expanding universe:
\begin{equation}
\frac{{\rm d}^2 r}{{\rm d}t^2} = - \frac{G_N M_{\rm PBH}}{r^2} + (\dot{H} + H^2) r \, ,
\label{eq:Motion2}
\end{equation}
where $H(t) = 1/(2t)$. Evolving the above equation for each shell, starting from very high redshift with the initial conditions $r=r_i$ and $\dot{r} = H_i r_i = r_i/(2t_i)$, one finds that the PBH can accrete a DM halo with $M_\mathrm{halo}^\mathrm{eq} = M_\mathrm{PBH}$ at the end of the radiation era ($z = z_\mathrm{eq}$).
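A minimal numerical sketch of this shell calculation, using \texttt{scipy} and assuming illustrative values of $t_i$ and $r_i$ (these specific numbers are not from the references):
\begin{verbatim}
# Sketch: radial infall of a DM shell onto a PBH deep in the radiation
# era, Eq. (Motion2) with H = 1/(2t), integrated up to turnaround.
import numpy as np
from scipy.integrate import solve_ivp

G, M_SUN, PC = 6.674e-11, 1.989e30, 3.086e16

def rhs(t, y, m_pbh):
    r, v = y
    # H = 1/(2t)  =>  Hdot + H^2 = -1/(4 t^2)
    return [v, -G * m_pbh / r**2 - r / (4.0 * t**2)]

def turnaround(t, y, m_pbh):      # event: radial velocity crosses zero
    return y[1]
turnaround.terminal, turnaround.direction = True, -1

m = 30.0 * M_SUN
t_i, r_i = 1.0e8, 1.0e-4 * PC     # illustrative initial time (s) and shell
sol = solve_ivp(rhs, (t_i, 1.0e16), [r_i, r_i / (2.0 * t_i)],
                args=(m,), events=turnaround, rtol=1e-8)
print(f"turnaround: t = {sol.t_events[0][0]:.2e} s, "
      f"r = {sol.y_events[0][0][0] / PC:.2e} pc")
\end{verbatim}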
The density profile of such a halo was first determined analytically in \cite{Bertschinger:1985pd} as a power law
\begin{equation}
\label{eq:rhoDM}
\rho(r) \propto r^{-3/2}.
\end{equation}
We note that the same dependence on $r$ has been obtained in recent, realistic numerical simulations \cite{Delos:2017thv} that follow the evolution of ultra-compact mini halos (UCMHs)\footnote{Such halos can form out of small-scale large-amplitude density fluctuations that are too small to form PBHs, but still large enough to originate collapsed structures. The $\rho(r) \propto r^{-3/2}$ profile can develop if the UCMHs originate from a pronounced spike in the power spectrum at some given reference scale.}. There is however evidence that UCMHs may grow much shallower profiles or indeed that the profiles may be even steeper around isolated over-densities such as PBHs \cite{Gosenca:2017ybi}. In this work, we fix the density profile to that of Eq.~\eqref{eq:rhoDM}, which is widely considered as a reference in the current literature.
Given the self-similarity of the profile, with no intrinsic length scale, it is useful to define a sharp cutoff at some {\it truncation radius}, which can be defined either as the turnaround radius, or the radius of the ``sphere of influence'' centered on the PBH and characterized by a DM density larger than the (rapidly declining) background density.
According to \cite{Mack:2006gz,Ricotti:2007jk}, both definitions provide the same result:
\begin{equation}
R_\mathrm{tr} (z) \,=\, 0.0063 \, \left( \frac{M_\mathrm{halo}^\mathrm{eq}}{M_{\odot}} \right)^{1/3} \, \left( \frac{1 + z_\mathrm{eq}}{1 + z} \right) {\rm pc}\,,
\label{eq:r_tr}
\end{equation}
as a function of the redshift and of the PBH mass. In the above expression, the subscript ``eq'' refers to quantities evaluated at matter-radiation equality (i.e.~at $z_\mathrm{eq} \simeq 3375$).
Hence, the halo mass accreted at a generic redshift $z$ can be written in terms of the truncation radius as follows:\footnote{We note that Eqs.~\eqref{eq:r_tr} and \eqref{eq:Mhalo} do not strictly apply \textit{before} matter-radiation equality. With this caveat, we apply them here. However, we also note that we will be mostly interested in binaries decoupling shortly before matter-radiation equality, where the true halo mass and size are unlikely to deviate by much from Eqs.~\eqref{eq:r_tr} and \eqref{eq:Mhalo}.}
\begin{equation}
\label{eq:Mhalo}
M_\mathrm{halo} (z) = \left(\frac{R_\mathrm{tr} (z)}{R_\mathrm{eq}}\right)^{3/2} M_\mathrm{PBH}\,.
\end{equation}
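Both closed-form relations are trivial to evaluate; the following sketch (with illustrative masses and redshifts) reproduces, for instance, the $M_\mathrm{halo} = M_\mathrm{PBH}$ normalisation at $z_\mathrm{eq}$:
\begin{verbatim}
# Sketch: truncation radius and accreted halo mass, Eqs. (r_tr), (Mhalo).
M_SUN, Z_EQ = 1.989e30, 3375.0

def r_tr_pc(z, m_pbh):
    """Truncation radius in pc, using M_halo^eq = M_PBH."""
    return 0.0063 * (m_pbh / M_SUN) ** (1.0 / 3.0) * (1.0 + Z_EQ) / (1.0 + z)

def m_halo(z, m_pbh):
    """Halo mass accreted by redshift z; equals M_PBH at z = z_eq."""
    return (r_tr_pc(z, m_pbh) / r_tr_pc(Z_EQ, m_pbh)) ** 1.5 * m_pbh

m = 30.0 * M_SUN                               # illustrative PBH mass
for z in (3.0e4, 1.0e4, Z_EQ):
    print(f"z = {z:7.0f}: R_tr = {r_tr_pc(z, m):.2e} pc, "
          f"M_halo = {m_halo(z, m) / M_SUN:5.2f} Msun")
\end{verbatim}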
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth,]{plots/decouplingRedshift_haloMass_bis.pdf}
\caption{{\bf Truncation radius and semi-major axis of the decoupled binaries}. Both quantities are plotted as a function of the decoupling redshift (see Eq. \ref{eq:r_tr} and \ref{eq:z_of_a}). As mentioned in the text, we notice that more compact binaries decouple earlier (at larger redshifts). The plot shows that the DM halos never overlap in the redshift range considered here, because the truncation radius is always smaller than the semi-major axis of the binary system.}
\label{fig:r_tr_and_a}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth,]{plots/decouplingRedshift_haloMass.pdf}
\caption{{\bf Decoupling redshift and corresponding mass of the accreted DM halo}. In this plot, we assume that the binary systems are made of $30$ M$_\odot$ PBHs. The quantities are plotted as a function of the semi-major axis, as given by Eq. \ref{eq:z_of_a} and \ref{eq:Mhalo}.}
\label{fig:decouplingRedshift}
\end{figure}
It is now important to point out that the time at which a PBH pair decouples from the Hubble flow and forms a binary system depends on the PBH separation.
The decoupling redshift of a binary with semi-major axis $a$ is given by \cite{Ali-Haimoud:2017rtz}:
\begin{equation}
z_{\rm dec} \, (a) \,=\, 3 f_{\rm PBH} \, z_{\rm eq} \, \frac{\bar{x}^3}{(x(a))^3}
\label{eq:z_of_a}
\end{equation}
where $x(a)$ is defined in Eq.~\eqref{eq:x_of_a}.
As noted in \cite{Sasaki:2016jop}, the binaries decouple in the radiation era, so we adopt the expression above until the time of matter-radiation equality, i.e. for $z > z_{eq}$. This implies a maximum value of the semi-major axis $a_\mathrm{max}$, set by $z_\mathrm{dec} = z_\mathrm{eq}$ and given by:
\begin{equation}
\label{eq:amax}
\frac{z_\mathrm{dec}}{z_\mathrm{eq}} = 3f_{\rm PBH} \left(\frac{\bar{x}}{x(a)} \right)^3 \quad\rightarrow\quad a_\mathrm{max} = \left( \frac{3^{5} \,M_{\rm PBH}}{4 \pi \rho_\mathrm{eq}}\right)^{1/3}\,.
\end{equation}
We plot in Fig.~\ref{fig:r_tr_and_a} the semi-major axis and the truncation radius as a function of the decoupling redshift, given by Eq.~\eqref{eq:r_tr} and the inverse of Eq.~\eqref{eq:z_of_a}.
We also notice that, from these relations, it is straightforward to realize that wider binaries decouple later (intuitively, for larger separations the gravitational pull takes more time to overcome the Hubble flow that tends to break the pair up), and therefore have the chance to grow a more massive DM halo around each PBH. This behavior is represented in Fig.~\ref{fig:decouplingRedshift}.
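The mapping between semi-major axis and decoupling redshift in Eq.~\eqref{eq:z_of_a} can be sketched as follows (same assumed value of $\rho_\mathrm{eq}$ as in the earlier sketch; the grid of $a$ values is illustrative):
\begin{verbatim}
# Sketch: decoupling redshift as a function of semi-major axis,
# Eq. (z_of_a). RHO_EQ is the same assumed value as above.
import numpy as np

M_SUN, PC, Z_EQ = 1.989e30, 3.086e16, 3375.0
RHO_EQ, ALPHA = 1.0e-16, 0.1

def z_dec(a, m_pbh, f_pbh):
    xbar3 = 3.0 * m_pbh / (4.0 * np.pi * f_pbh * RHO_EQ)
    x3 = (3.0 * a * m_pbh / (4.0 * np.pi * ALPHA * RHO_EQ)) ** 0.75
    return 3.0 * f_pbh * Z_EQ * xbar3 / x3

m, f = 30.0 * M_SUN, 0.01                      # illustrative parameters
for a_pc in (1e-4, 1e-3, 1e-2):
    print(f"a = {a_pc:.0e} pc: z_dec = {z_dec(a_pc * PC, m, f):.2e}")
\end{verbatim}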
\subsection{Self-consistent PBH binary orbits}
\label{sec:selfconsistent}
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth,]{plots/firstRemapping.pdf}
\caption{{\bf First remapping}. We represent the relative difference between the original PDF in the $(a,j)$ parameter space and the one which takes into account the presence of the mini-halos (as described in Sec. \ref{sec:selfconsistent}). We remark that this initial remapping is mainly based on a rescaling of the mass parameter, and does not take into account the impact of the halos on the BBH orbits, which will be addressed in the next section.}
\label{fig:firstRemapping}
\end{figure}
Before studying the impact of the mini-halos on the merger rate, let us merge the pieces of information presented so far, and estimate how the presence of the mini-halos (discussed in Sec.~\ref{sec:decoupling}) changes the probability distributions in the $(a, j)$ parameter space (described in Sec.~\ref{sec:properties}).
The presence of the halos can be taken into account, at first order, by assuming that the dressed PBHs behave as point masses with $M_{\rm tot} = M_{\rm PBH} + M_{\rm halo}$.
For a fixed comoving separation, this causes them to decouple {\it earlier} at some redshift $z'$ with a smaller semi-major axis. This implies a rescaling of $x(a)$ in Eq.~\eqref{eq:x_of_a} as follows:
\begin{equation}
M_{\rm PBH} \, \rightarrow \, M_{\rm PBH} + M_{\rm halo}(z')\,.
\end{equation}
Moreover, the PDF in Eq.~\eqref{eq:measure} must be rescaled by a factor $(M_\mathrm{tot}/M_\mathrm{PBH})^{3/4}$, arising from the new expression for $\partial x/\partial a$.
As far as $P(j)$ is concerned, the dominant contribution to the torque comes from large $z$, when the PBHs are closer together and therefore the forces between them are largest\footnote{This can be seen explicitly in the $s^{-2} \sim z^2$ scaling of the integrand in Eqs.~(13) and (14) of Ref.~\cite{Ali-Haimoud:2017rtz}.}. At large $z$, the PBHs have typically not had time to grow a large DM halo. The late-time growth of the DM halos may increase the torques from other PBHs (and their respective halos), and the DM halo accreted by the nearest neighbors could in principle exchange angular momentum with the binary. However, the mass accretion mostly happens at late times, while the torques are mostly exerted at early times, which means that the angular momentum \textit{per unit mass} (and therefore $j$) are expected to be roughly constant. We therefore assume that the DM halo does not substantially affect the time evolution of the angular momentum and use $P(j)$ given in Eq.~\eqref{eq:Pj} throughout.
In the end, the impact of these corrections is not huge in most of the parameter space: we show in Fig.~\ref{fig:firstRemapping} the relative difference between the original PDF and the ``remapped'' one. Perhaps the largest effect is that it is possible for wider binaries to form when the mass of the halo is included. For binaries decoupling close to $z_\mathrm{eq}$, we can treat the dressed PBH as a point mass $M_\mathrm{tot} \approx M_\mathrm{PBH} + M_\mathrm{halo}(z_\mathrm{eq}) \approx 2 M_\mathrm{PBH}$. This means that the maximum possible semi-major axis $a_\mathrm{max}$, given in Eq.~\eqref{eq:amax}, is increased by a factor of $2^{1/3} \approx 1.26$. This leads to a slight increase in the total number of binaries which can be formed.
\section{Impact of DM mini-halos on BBH orbits}
\label{sec:simulations}
\subsection{N-body simulations}
\label{sec:Nbody}
In order to assess the impact of DM mini-halos on the orbits of PBH binaries, we use N-body simulations to follow the dynamics of these systems. We use the publicly available \textsc{GADGET-2}~ code \cite{Springel:2005mi} as a gravity-only N-body solver. In this section, we summarise the key features of the simulations, with full details given in Appendix~\ref{sec:Gadget}. The code for setting up and analysing the simulations is publicly available \href{https://github.com/bradkav/BlackHolesDarkDress}{here} \cite{DarkDressCode}. Selected animations are also available \href{https://doi.org/10.6084/m9.figshare.6298397}{here} \cite{Animations}.
Considering first a single PBH, we set up the surrounding DM halo with a density profile similar to that of Eq.~\eqref{eq:rhoDM} but with a smooth truncation at the truncation radius. The truncation radius and mass of the halo are set based on the semi-major axis of the orbit, as discussed in Sec.~\ref{sec:BBHproperties}. We initialise each DM halo in equilibrium with an isotropic, spherically symmetric velocity distribution obtained using the Eddington inversion formula \cite{2008gady.book.....B}.
We initialise a binary with a given $(a, e)$ as if it consisted of two point masses of $M_\mathrm{tot} = M_\mathrm{PBH} + M_\mathrm{halo}$ each, and we begin the simulation during the first in-fall of the PBHs from apoapsis.
We set the softening length for Dark Matter pseudo-particles to be at least a factor of 5 smaller than the distance of closest approach of the PBHs $r_\mathrm{min} = a_i (1-e_i)$. For eccentricities smaller than $e = 0.995$, we use roughly $10^4$ equal-mass DM particles per halo. For eccentricities larger than $e = 0.995$ (requiring a finer resolution), we employ a multi-mass scheme \cite{Zemp:2007nt} using four different masses of DM particles. In this case, we use a total of roughly $4 \times 10^4$ DM particles per halo. We have checked that the density profile of the DM halo is stable down to $r_\mathrm{min}$ on time scales corresponding to the time of the first close passage of the PBH binaries under consideration. Further details are provided in Appendix~\ref{sec:Gadget}.
We follow the evolution of the binary system until the semi-major axis and eccentricity of the PBH-PBH system have stabilised.
The final eccentricity and semi-major axis are then estimated from the specific energy $\epsilon$ and specific angular momentum $h$ of the PBH-PBH binary:
\begin{equation}
e = \sqrt{1 + \frac{\epsilon h^2}{2 (G_N M_\mathrm{PBH})^2}}\,,\;\; a = -\frac{G_N M_\mathrm{PBH}}{\epsilon}\,.
\end{equation}
Here, $\epsilon = \frac{1}{2}v^2 - 2 G_N M_\mathrm{PBH}/r$ and $h = |\mathbf{r} \times \mathbf{v}|$, for PBH separation $\mathbf{r}$ and relative velocity $\mathbf{v}$ \cite{9780471146360}.
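A sketch of this reconstruction from the relative state vectors, using the conventions above (the circular-orbit test at the end is only a sanity check):
\begin{verbatim}
# Sketch: orbital elements of the PBH pair from the relative state
# vectors, with eps = v^2/2 - 2 G M_PBH / r and h = |r x v| as in the
# text (so that the total mass is 2 M_PBH).
import numpy as np

G, M_SUN, PC = 6.674e-11, 1.989e30, 3.086e16

def orbital_elements(r_vec, v_vec, m_pbh):
    r, v2 = np.linalg.norm(r_vec), np.dot(v_vec, v_vec)
    eps = 0.5 * v2 - 2.0 * G * m_pbh / r        # specific orbital energy
    h = np.linalg.norm(np.cross(r_vec, v_vec))  # specific angular momentum
    e = np.sqrt(1.0 + eps * h**2 / (2.0 * (G * m_pbh) ** 2))
    a = -G * m_pbh / eps                        # bound orbit: eps < 0
    return a, e

# sanity check: circular orbit of separation r0, v^2 = 2 G M_PBH / r0
m, r0 = 30.0 * M_SUN, 0.01 * PC
v0 = np.sqrt(2.0 * G * m / r0)
a, e = orbital_elements(np.array([r0, 0.0, 0.0]),
                        np.array([0.0, v0, 0.0]), m)
print(a / PC, e)                                # -> 0.01, ~0
\end{verbatim}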
In Figs.~\ref{fig:PBHseparation}~and~\ref{fig:PBHangmom}, we show the main properties of the binary system during a single simulation, specifically a pair of dressed $30 \,M_\odot$ PBHs with initial orbital elements $a_i = 0.01 \,\mathrm{pc}$ and $e_i = 0.995$. Figure~\ref{fig:PBHseparation} shows the separation of the PBHs as a function of simulation time (blue), as well as the DM mass enclosed within $0.1\,R_\mathrm{tr}$ around one of the PBHs (green).
During each close passage, the enclosed DM mass jumps by a factor of roughly 2, as the PBH passes through the halo of its companion.
After the close passage the remaining DM mass is reduced, as a significant fraction of the halo is ejected by the close encounter. This key feature -- feedback between the PBHs and DM halos -- drives the shrinking of the binary orbit. With successive orbits, the DM mass is gradually depleted and the semi-major axis shrinks until it eventually stabilises. This typically takes $<\mathcal{O}(10)$ orbits, on timescales of $\mathcal{O}(10\,\mathrm{kyr})$.
In Fig.~\ref{fig:PBHangmom}, we plot the angular momentum of the same system. In blue we plot the total angular momentum of the two PBHs, while in orange we plot the total angular momentum of the DM halos. During the first close passage, at $t \sim 5.8 \,\mathrm{kyr}$, there is very little exchange of angular momentum. While dynamical friction acts to slow the PBHs as they pass through the halos, the orbit is almost radial, so there is almost no resulting torque. We note, however, that the slowing of the PBHs close to periapsis slightly circularises the orbit.
As the PBHs move away from their first close passage, they then encounter the particles of the disrupted DM halo, which have been ejected with high speed. In this case, dynamical friction acts to accelerate the PBHs and they begin to regain angular momentum. With each successive close passage, however, the effects of dynamical friction with the remaining DM halo particles will slow the PBHs, inducing a torque on the (now more circular) binary.
For the eccentric $e=0.995$ binary we consider here, the angular momentum of the PBH system at late times is comparable to the initial value. In less eccentric binaries, we have observed that the DM halo can carry away a substantial fraction of the PBH angular momentum. Increasing the eccentricity, on the other hand, typically decreases the amount of angular momentum exchanged between the PBHs and DM halos.
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{plots/PBH_separation_M30_a0_01_lo_bin_e0_995_full.pdf}
\caption{\textbf{PBH separation and retained DM halo mass during a single simulation.} In blue (left axis), we show the separation of the PBHs during a single simulation while in green (right axis) we show the DM mass enclosed within 10\% of the halo truncation radius, $R_\mathrm{tr}$. Here, we simulate $M_\mathrm{PBH} = 30\,M_\odot$ and initial orbital elements $a_i = 0.01\,\mathrm{pc}$ and $e_i = 0.995$. The truncation radius is $R_\mathrm{tr} \approx 4 \times 10^{-3}\,\mathrm{pc}$ and the total DM mass per halo is $3.1\,M_\odot$.}
\label{fig:PBHseparation}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{plots/PBH_L_M30_a0_01_lo_bin_e0_995_full.pdf}
\caption{\textbf{Angular momentum of PBHs and DM during a single simulation.} The total angular momentum of the PBH (DM) particles in the simulation is shown in blue (orange). The simulation parameters are as in Fig.~\ref{fig:PBHseparation}. The times at which the PBHs undergo a close passage are marked by grey dashed lines.}
\label{fig:PBHangmom}
\end{figure}
In Fig.~\ref{fig:NbodyResults}, we show the final semi-major axis $a_f$ and final angular momentum $j_f$ for a number of simulated binary systems. We show results for three PBH masses -- $1\,\,M_\odot$, $30\,\,M_\odot$ and $1000\,\,M_\odot$ -- and in each case we select the most likely initial semi-major axis $a_i$ for binaries merging today (see Fig.~\ref{fig:ProbabilityDistribution}). We see that the final semi-major axis (left panel) is typically smaller than the initial semi-major axis by a factor of $\mathcal{O}(10)$, meaning that the final orbit is much smaller when the DM halo around each PBH is significant. The final orbit is also more circular ($j_f > j_i$), as we see in the right panel of Fig.~\ref{fig:NbodyResults}. These two changes (shrinking and circularisation of the binary) have opposing effects on the merger time of the binary, Eq.~\eqref{eq:tmerge}.
From Fig.~\ref{fig:ProbabilityDistribution}, we see that binaries merging today typically have angular momenta in the range $j = 10^{-3}\text{--}10^{-2}$. We have performed simulations down to $j \approx 0.03$ ($e = 0.9995$) but realistic simulations corresponding to binaries merging today would require around 2 orders of magnitude improvement in spatial resolution in the DM halo (owing to the much smaller close passage distances). As we outline in Appendix~\ref{sec:Gadget}, performing large numbers of such simulations would be computationally infeasible. Instead, in the next section, we use analytic arguments to understand the behaviour of binaries merging today.
\begin{figure*}[t!]
\centering
\includegraphics[width=0.49\linewidth,]{plots/FinalSemiMajorAxis.pdf}
\includegraphics[width=0.49\linewidth,]{plots/FinalAngularMomentum.pdf}
\caption{\textbf{Impact of Dark Matter halos on the orbital elements of PBH binaries.} We show the final semi-major axis $a_f$ (\textbf{left}) and final angular momentum $j_f$ (\textbf{right}) of the PBH binaries at the end of our N-body simulations, as a function of the initial angular momentum $j_i$. Each point corresponds to the result of a single simulation run while the solid lines correspond to the analytic estimates which we describe in Sec.~\ref{sec:analytic} (these curves are \textit{not} fit to the data). We show results for three different PBH masses, in each case with a different initial semi-major axis $a_i$. The grey shaded region illustrates typical values of $j$ for which the binaries are expected to merge on timescales of order the age of the Universe.
}
\label{fig:NbodyResults}
\end{figure*}
\subsection{Analytic results}
\label{sec:analytic}
Guided by the results of our numerical simulations, we now present analytic estimates which capture the key features. As we will see, the resulting expressions are rather simple, but are not trivial to derive without input and validation from N-body simulations (as presented in Sec.~\ref{sec:Nbody}).
\subsubsection{Semi-major axis}
First, we consider the evolution of the semi-major axis of the BBH orbits, incorporating the effects of the DM halos surrounding them using simple energy conservation arguments. Initially, the total orbital energy of the system is given by:
\begin{equation}
E_i^\mathrm{orb} = -\frac{G_N M_\mathrm{tot}^2}{2 a_i}\,,
\end{equation}
where $M_\mathrm{tot} = M_\mathrm{PBH} + M_\mathrm{halo}$ and we have treated each PBH and its halo as a point object. The binding energy of each DM halo, including all DM particles at a distance greater than $r_\mathrm{in}$ from the PBH, is given by:
\begin{equation}
E^\mathrm{bind}(r_\mathrm{in}) = -4 \pi G_N \int_{r_\mathrm{in}}^\infty \frac{M_\mathrm{enc}(r)}{r} \, r^2 \rho_\mathrm{DM}(r) \,\mathrm{d}r \,.
\end{equation}
From the simulations, we see that the work done by dynamical friction unbinds the DM halo, with more of the halo unbound as the distance of closest approach $r_\mathrm{peri} = a_i (1-e_i)$ decreases. We assume that each PBH maintains a halo of radius $r_\mathrm{min}/2$, with DM particles further away than this being completely unbound. The final orbital energy of the binary is then given by:
\begin{equation}
E_f^\mathrm{orb} = -\frac{G_N M_f^2}{2 a_f}\,,
\end{equation}
where $M_f = M_\mathrm{PBH} + M_\mathrm{halo}(r < r_\mathrm{min}/2)$.
The final semi-major axis $a_f$ is then obtained (for a given $r_\mathrm{min}$ and therefore a given $j_i = \sqrt{1-e_i^2}$) from energy conservation,
\begin{equation}
E_i^\mathrm{orb} + 2 \,E^\mathrm{bind}(r_\mathrm{min}/2) = E_f^\mathrm{orb}\,.
\label{eq:energy_conservation}
\end{equation}
The final semi-major axis calculated in this way can be written explicitly as follows:
\begin{equation}
a_f (a_i) \,=\, \frac{G_N M_{f}^2\, a_i}{G_N M_\mathrm{tot}^2 - 4\, a_i\, E^\mathrm{bind}(r_\mathrm{min}/2)}\,.
\label{eq:afinal}
\end{equation}
We show this result in the left panel of Fig.~\ref{fig:NbodyResults} as solid lines for the three different scenarios. For circular orbits ($j_i \rightarrow 1$) there is little change in the semi-major axis as the PBHs do not pass within each other's DM halos\footnote{Note that over longer periods, tidal effects would be expected to disrupt the two halos. We are interested in much more eccentric binaries and so we do not consider this effect further.}. For increasingly eccentric binaries, more and more of the DM halo is stripped, reducing the final orbital energy of the PBH pair and therefore the final semi-major axis. At high eccentricity ($j_i \ll 1$), almost all of the mass of each DM halo is stripped; almost all of the halo binding energy is converted to orbital energy, and decreasing $j_i$ further has no impact on the final semi-major axis.
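For the truncated $\rho \propto r^{-3/2}$ profile the binding-energy integral can be evaluated directly; the sketch below does so numerically and then applies Eq.~\eqref{eq:afinal}. The example halo parameters ($M_\mathrm{halo} = 3.1\,M_\odot$, $R_\mathrm{tr} \approx 4\times10^{-3}\,\mathrm{pc}$) are those quoted for the $a_i = 0.01\,\mathrm{pc}$, $e_i = 0.995$ simulation of Fig.~\ref{fig:PBHseparation}; note that $E^\mathrm{bind} < 0$ with the sign convention used here.
\begin{verbatim}
# Sketch: binding energy of the truncated rho ~ r^{-3/2} halo and the
# final semi-major axis from energy conservation, Eq. (afinal).
# Halo parameters below are those quoted for the a_i = 0.01 pc run.
import numpy as np
from scipy.integrate import quad

G, M_SUN, PC = 6.674e-11, 1.989e30, 3.086e16

def e_bind(r_in, m_h, r_tr):
    """Binding energy (< 0) of the halo outside r_in."""
    rho0 = 3.0 * m_h / (8.0 * np.pi * r_tr**1.5)       # rho = rho0 r^{-3/2}
    # integrand M_enc(r) rho(r) r simplifies to M_enc(r) rho0 r^{-1/2}
    integrand = lambda r: (m_h * (r / r_tr) ** 1.5) * rho0 * r**-0.5
    return -4.0 * np.pi * G * quad(integrand, r_in, r_tr)[0]

def a_final(a_i, r_min, m_pbh, m_h, r_tr):
    m_tot = m_pbh + m_h
    m_f = m_pbh + m_h * (0.5 * r_min / r_tr) ** 1.5    # halo kept in r_min/2
    eb = e_bind(0.5 * r_min, m_h, r_tr)                # eb < 0
    return G * m_f**2 * a_i / (G * m_tot**2 - 4.0 * a_i * eb)

m, mh, rt, a_i = 30.0 * M_SUN, 3.1 * M_SUN, 4.0e-3 * PC, 0.01 * PC
print(a_final(a_i, a_i * (1.0 - 0.995), m, mh, rt) / PC)
\end{verbatim}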
\begin{figure}[t!]
\centering
\includegraphics[width=0.95\linewidth,]{plots/Mapping_a.pdf}
\caption{\textbf{Impact of DM halos on the semi-major axis of highly eccentric PBH binaries.} Final semi-major axis of PBH binaries after their local DM halos have been disrupted and unbound, following the analytic prescription of Sec.~\ref{sec:analytic}. We show results for 3 different PBH masses and assume the DM density profile given in Eq.~\eqref{eq:rhoDM}. The black dashed line corresponds to $a_f = a_i$.}
\label{fig:semimajoraxis}
\end{figure}
In Fig.~\ref{fig:semimajoraxis}, we show the analytic estimate of $a_f$ as a function of $a_i$ for binaries with PBH masses of $1 \,\,M_\odot$, $30 \,\,M_\odot$ and $1000 \,\,M_\odot$. In this case, we assume a DM density profile given by Eq.~\eqref{eq:rhoDM} and assume that the entire DM halo of each PBH is stripped, which is valid for highly eccentric orbits. For small orbits ($a_i \lesssim 10^{-4} -10^{-3} \,\,\mathrm{pc}$) we find little change in the semi-major axis. This is because these binaries decouple from the Hubble flow early and have not had time to grow a substantial DM halo. The impact of the DM halo increases with increasing semi-major axis, as the binary decouples later and the size of the halo at decoupling grows.
\subsubsection{Angular Momentum}
As in the case of the semi-major axis, we can use conservation arguments to estimate the final dimensionless angular momentum $j$ of the orbits after the effects of the DM halo have been taken into account.
The dimensionful angular momentum $L$ for a binary of two point masses $M$ is given by:
\begin{equation}
\label{eq:AngMom}
L^2 = \frac{1}{2}G_N M^3 \, a \, j^2\,.
\end{equation}
As we have seen from the N-body simulations in the previous section (in particular Fig.~\ref{fig:PBHangmom}), for very eccentric orbits there is very little exchange of angular momentum between the PBHs and the DM particles. This can be understood from the fact that for large eccentricity the orbits are almost radial. This means that there is very little torque acting on the PBHs, despite the large dynamical friction force. At the distance of closest approach, the PBH velocity is perpendicular to the PBH separation and the DM density is highest, in which case we might expect a large torque. However, this is also the point in the orbit where the PBHs have the highest velocity, suppressing the dynamical friction force \cite{Chandrasekhar1943a}. As we see from our N-body results, the latter effect dominates and very little angular momentum is exchanged.
As discussed in Sec.~\ref{sec:BBHproperties}, we are interested in highly eccentric binaries $j \lesssim 10^{-2}$ (corresponding to $e \gtrsim 0.9999$) which are expected to merge today. In this case then, we may assume that there is no angular momentum exchange, in which case the angular momentum of both the PBHs and the DM halos are \textit{separately} conserved. From this, it holds that
\begin{equation}
L^2 = \frac{1}{2}G_N M_\mathrm{PBH}^3\, a\, j^2\,,
\end{equation}
is conserved and therefore that:
\begin{equation}
\label{eq:jf}
j_f = \sqrt{\frac{a_i}{a_f}}j_i \qquad \text{for } j \ll 1\,.
\end{equation}
Combined with the prescription for calculating the final semi-major axis, this allows us to calculate the final angular momentum of the PBH binaries.
In the right panel of Fig.~\ref{fig:NbodyResults}, we plot as solid lines the estimates of $j_f$ (given by Eq.~\eqref{eq:jf}), which agree well with the N-body simulation results at small $j_i$. For large $j$, the final angular momentum is smaller than this estimate would suggest. In this case, the more circular orbits lead to angular momentum exchange between the PBHs and their DM halos; the torque from dynamical friction reduces the angular momentum of the PBH binary. The conservation of the PBH binary's angular momentum is therefore not an intrinsic property of the system, but a special feature of the most eccentric orbits, relevant for mergers today.
\subsubsection{Merger times}
\label{merger_times}
With the results of the previous sections at hand, we can now calculate the final merger time for a binary (Eq.~\eqref{eq:tmerge}), given its initial orbital elements.
We note here that the merger time scales $t_\mathrm{merge} \propto a^4 j^7$, while the conserved angular momentum of the PBH binary scales as $L^2 \propto a j^2$: This indicates that, despite the strong scaling of the merger time with $a$ and $j$, the final merger time will not be changed substantially by the DM halo. Indeed, substituting Eq.~\eqref{eq:jf} into Eq.~\eqref{eq:tmerge}, we find that,
\begin{equation}
t_f = \sqrt{\frac{a_f}{a_i}}\,t_i\,,
\label{eq:merger_time_final}
\end{equation}
where $t_i$ and $t_f$ are the initial and final merger times of the binary, before and after the impact of the DM halo are taken into account. As we see in Fig.~\ref{fig:semimajoraxis}, the semi-major axis is typically not reduced by more than a factor of 10, meaning that the merger time is unlikely to be reduced by more than a factor of a few.
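The complete remapping of a binary therefore reduces to two square-root scalings once $a_f$ is known; a one-function sketch (the numbers in the final line are illustrative):
\begin{verbatim}
# Sketch: remapping of angular momentum and merger time after halo
# disruption, Eqs. (jf) and (merger_time_final); valid for j << 1.
import numpy as np

def remap(a_i, j_i, t_i, a_f):
    j_f = np.sqrt(a_i / a_f) * j_i   # L of the PBH pair is conserved
    t_f = np.sqrt(a_f / a_i) * t_i   # follows from t_merge ~ a^4 j^7
    return j_f, t_f

# illustrative: a shrink by 10 raises j by ~3.2 and cuts t_merge by ~3.2
print(remap(1.0, 3.0e-3, 13.7, 0.1))
\end{verbatim}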
\section{Merger Rates and Constraints on the PBH density}
\label{sec:results}
We can now combine the various findings described in the previous sections in order to compute the impact of DM mini-halos on the primordial BBH merger rate and the corresponding LIGO limit on the PBH fraction.
Let us recap in detail the prescription we follow:
\begin{itemize}
\item We begin with the distribution of orbital elements $(a, e)$, or equivalently $(a, j)$, for PBH binaries in the early Universe, as described in Sec.~\ref{sec:selfconsistent}.
\item For a PBH binary with a given semi-major axis, we estimate the redshift $z_\mathrm{dec}$ at which the pair decouples from the Hubble flow, and calculate the DM halo mass accreted at that redshift.
\item We compute the final semi-major axis and eccentricity of the binary adopting the relations derived above -- summarized by Eqs.~\eqref{eq:afinal}~and~\eqref{eq:jf} -- in order to calculate the new distribution of orbital elements $(a, e)$.
\item Once this remapping is performed, we calculate the corresponding distribution of merger times and, eventually, we obtain: {\bf 1)} The merger rate {\it today} of PBH binaries formed in the early Universe (to be compared to the one derived by assuming the original distribution of orbital elements derived in \cite{Ali-Haimoud:2017rtz} and given by Eq.~\eqref{eq:Paj}); {\bf 2)} The corresponding limit on the fraction of DM in PBHs.
\end{itemize}
Let us now present and discuss the details of this procedure, and the two main results of the calculation.
\subsection{Merger Rate Today}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth,]{plots/mergerRate_remapped.pdf}
\caption{
\textbf{Primordial Black Hole merger rate, averaged between $z = 0$ and $z = 1$, as a function of the DM fraction.}
{\it Dotted lines:} Merger rate for the ``naked'' PBH binary distribution derived in \cite{Ali-Haimoud:2017rtz}.
{\it Solid lines}: Merger rate for the ``dressed'' PBH binary distribution, with the effect of dynamical friction taken into account, as derived in the present work.
{\it Gray band}: Merger rate inferred by the LIGO and Virgo collaboration, from \cite{Abbott:2017vtc}.
}
\label{fig:ratePlot}
\end{figure}
The merger rate of primordial BBHs at present time\footnote{Note that $\mathcal{R}$ is the comoving merger rate density {\it in the source frame.}} is given by:
\begin{equation}
\mathcal{R}_0 = n_\mathrm{PBH} P(t_\mathrm{merge} = t_\mathrm{univ})\,,
\end{equation}
where $n_\mathrm{PBH}$ is the comoving number density of PBHs and $t_\mathrm{univ} \approx 13.7 \,\,\mathrm{Gyr}$ is the age of the Universe.
However, since LIGO probes mergers approximately in the range $z \in [0, 1]$, we consider the rate averaged over redshift:
\begin{equation}
\langle \mathcal{R} \rangle = n_\mathrm{PBH} \int_{0}^{1} P(t[z])\,\mathrm{d}z \,.
\end{equation}
We now compute the probability distribution of the merger time for both the original PDF given by Eq.~\eqref{eq:Paj}, and for the remapped one, that takes into account the impact of DM dresses.
In the former case, the computation can be carried out analytically by performing a change of variables and a marginalization over the semi-major axis as follows:
\begin{equation}
\label{eq:Pt}
P(t) \, = \, \int_{a_\mathrm{min}}^{a_\mathrm{max}}{ P(a, j(a,t)) \,\left(\frac{{\rm d} j}{{\rm d} t}\right) \,{\rm d}a }\,,
\end{equation}
where $j(a,t)$ is obtained by inverting Eq.~\eqref{eq:tmerge}.
In the latter case, we perform a numerical estimate as follows.
We first sample the original PDF by means of an affine-invariant MCMC ensemble sampler \cite{ForemanMackey:2012ig} and obtain a collection of $\sim 10^5$ points in the $(a, j)$ parameter space. We then apply the remapping prescriptions presented in the previous section (Eqs.~\ref{eq:afinal} and \ref{eq:jf}) to this set of points, and eventually determine the final distribution of merger times associated with the remapped points.
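A self-contained sketch of this Monte Carlo step is given below. Where the paper uses an affine-invariant MCMC sampler, the sketch draws exact samples by inverse-transform sampling instead, which is possible here because the distributions of $X$ and of $j$ at fixed $X$ can be inverted analytically; $\rho_\mathrm{eq}$ and the example parameters are assumptions.
\begin{verbatim}
# Sketch: exact inverse-transform draw from P(a, j) and the resulting
# merger times. (The paper uses an affine-invariant MCMC instead.)
import numpy as np

rng = np.random.default_rng(0)
G, C, M_SUN, PC = 6.674e-11, 2.998e8, 1.989e30, 3.086e16
RHO_EQ, ALPHA, SIGMA_EQ = 1.0e-16, 0.1, 0.005     # RHO_EQ assumed

m, f, n = 30.0 * M_SUN, 0.01, 100_000
xbar = (3.0 * m / (4.0 * np.pi * f * RHO_EQ)) ** (1.0 / 3.0)

X = rng.exponential(size=n)                       # dP/dX = exp(-X)
a = (ALPHA / f) * xbar * X ** (4.0 / 3.0)         # inverse of X(a)
jX = 0.5 * np.sqrt(1.0 + SIGMA_EQ**2 / f**2) * X
u = rng.uniform(size=n)
j = jX * np.sqrt(1.0 / (1.0 - u) ** 2 - 1.0)      # inverse CDF of P(j)
a, j = a[j < 1.0], j[j < 1.0]                     # drop unphysical j >= 1

# merger times, Eq. (tmerge); apply the (a_f, j_f) remapping here
t_gyr = 3.0 * C**5 * a**4 * j**7 / (170.0 * G**3 * m**3) / 3.156e16
print(np.mean((t_gyr > 13.0) & (t_gyr < 14.0)))   # fraction merging ~today
\end{verbatim}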
We show the result in Fig.~\ref{fig:ratePlot}. As argued in Sec.~\ref{merger_times}, the merger time distribution is not strongly affected by the remapping, despite the significant changes in the properties of the binaries and the strong scaling of the coalescence time with $a$ and $j$. This highly non-trivial result mainly stems from the fact that the shrinking and the circularisation of the binaries (which affect the merger time in opposite directions) are not independent, given the separate conservation of the PBH and DM-dress angular momenta derived in the previous sections.
\subsection{LIGO/Virgo upper limit}
We now turn to the upper limit on the PBH fraction, which can be obtained by comparing the merger rate predicted for a given $M_\mathrm{PBH}$ and $f_\mathrm{PBH}$ with the upper limit determined by the LIGO experiment. These upper limits are obtained assuming that the merger rate is constant as a function of comoving volume and time in the source frame \cite{Abbott:2016drs,Abbott:2017iws}. For the PBH binaries we consider, however, the merger rate is not constant in time in the source frame, as the distribution of merger times given by Eq.~\eqref{eq:Pt} is not flat.
The \textit{effective} merger rate\footnote{This is the merger rate constant in time in the source frame which would produce the same number of above-threshold events in LIGO as the true time-dependent merger rate $\mathcal{R}(z)$.} which would be measured by LIGO is therefore given by:
\begin{equation}
\label{eq:RLIGO}
\mathcal{R}_\mathrm{LIGO} = n_\mathrm{PBH} \frac{\int S(z) P(t[z])\,\mathrm{d}z}{\int S(z) \,\mathrm{d}z}\,,
\end{equation}
where $S(z) = \mathrm{d}\langle VT \rangle/\mathrm{d}z$ is the space-time sensitivity of LIGO as a function of redshift and depends on the mass of the merging BHs. For $M_\mathrm{BH} = 10,\,20,\,40\,\,M_\odot$, we obtain $S(z)$ from Fig.~7 of Ref.~\cite{Abbott:2016drs}. For $M_\mathrm{BH} = 100,\,200,\,300\,\,M_\odot$, we assume that the overall shape of $S(z)$ does not change substantially from the $40\,\,M_\odot$ case. For a given BH mass, we then rescale $S(z)$ such that the maximum redshift to which LIGO is sensitive corresponds to the horizon distance (Fig.~1 of Ref.~\cite{Abbott:2017iws}) for that mass. We then adjust the normalisation to give the correct value of the space-time volume sensitivity $\langle VT \rangle = 2.302/\mathcal{R}_{90\%}$ (Tab.~1 of Ref.~\cite{Abbott:2017iws}).\footnote{As described in Ref.~\cite{Abbott:2017iws}, the LIGO analysis to search for intermediate mass BHs ($M_\mathrm{BH} \gtrsim 100 \,M_\odot$) is different to that for lighter BH mergers. This is because for intermediate mass BH mergers, only part of the merger and ring-down appear in the LIGO frequency band. This means that rescaling $S(z)$ from $40 \,M_\odot$ up to $300 \,M_\odot$ is not strictly correct. However, the method we use should capture the broad redshift dependence of the sensitivity.}
As in the previous section, we calculate $\mathcal{R}_\mathrm{LIGO}$ by Monte Carlo sampling, in this case weighting each sample by the sensitivity $S(z)$. The LIGO upper limit on the PBH fraction is then obtained by finding the value of $f_\mathrm{PBH}$ for which $\mathcal{R}_\mathrm{LIGO} = \mathcal{R}_{90\%}$.
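Schematically, this weighting can be sketched as below. The sensitivity $S(z)$ used here is a toy placeholder for the digitised LIGO sensitivity, and \texttt{astropy}'s \texttt{Planck15} age-redshift relation stands in for the cosmology; both are assumptions of the sketch.
\begin{verbatim}
# Sketch of the effective rate, Eq. (RLIGO). S(z) is a TOY stand-in for
# the digitised LIGO sensitivity; Planck15 supplies t(z) (assumptions).
import numpy as np
from astropy.cosmology import Planck15

def effective_rate(n_pbh, t_samples_gyr):
    z = np.linspace(0.0, 1.0, 101)
    S = np.exp(-2.0 * z)                  # toy sensitivity dVT/dz
    t_of_z = Planck15.age(z).value        # cosmic time at merger, in Gyr
    # estimate P(t) from the Monte Carlo merger-time samples
    hist, edges = np.histogram(t_samples_gyr, bins=200,
                               range=(0.1, 14.0), density=True)
    idx = np.clip(np.searchsorted(edges, t_of_z) - 1, 0, hist.size - 1)
    # uniform z grid: the dz spacing cancels in the ratio of sums
    return n_pbh * np.sum(S * hist[idx]) / np.sum(S)
\end{verbatim}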
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth,]{plots/PBH_constraints_v3.pdf}
\caption{\textbf{Constraints on the fraction $f_\mathrm{PBH}$ of Dark Matter in primordial black holes (PBHs).} Constraints from the LIGO/Virgo merger rate including the effects of local DM halos on PBH binaries (this work) are shown as the dark blue shaded region and labelled `LIGO'. Complementary constraints on PBHs are also shown, with details given in the text. We assume a monochromatic PBH mass function throughout.}
\label{fig:LIGO_limit}
\end{figure}
In Fig.~\ref{fig:LIGO_limit}, we plot the limits on $f_\mathrm{PBH}$ for the PBH masses listed above, including the effects of the DM dress on the binary evolution. We also show a number of additional constraints on PBHs (assuming mono-chromatic mass functions) in the range $M_\mathrm{PBH} \in [1, 1000]\,M_\odot$. Micro-lensing observations from the MACHO \cite{Allsman:2000kg} and EROS \cite{Tisserand:2006zx} collaborations place constraints on PBH masses up to $30 \,M_\odot$. The presence of PBHs may also disrupt wide binaries \cite{Monroy-Rodriguez:2014ula} and stellar clusters in dwarf galaxies \cite{Brandt:2016aco,Koushiappas:2017chw}. In Fig.~\ref{fig:LIGO_limit}, we show the limit coming from observations of stellar clusters in Eridanus II \cite{Brandt:2016aco}. The accretion of baryons onto PBHs in the inner Milky Way would cause them to radiate: a comparison with known radio and X-ray sources yields constraints at the 1\%--10\% level for PBHs between 20 and 100 $M_\odot$ \cite{Gaggero:2016dpq}. Finally, accretion onto PBHs may distort the CMB spectrum and affect CMB anisotropies \cite{Carr1981}: we show resulting constraints from COBE/FIRAS \cite{Blum:2016cjs,Clesse:2016ajp} and PLANCK \cite{Ali-Haimoud:2016mbv}, using conservative assumptions on accretion onto PBHs in the early Universe. We note that, in general, all constraints on the PBH fraction are subject to a range of uncertainties and caveats (see e.g.~Refs.~\cite{Hawkins:2015uja,Li:2016utv,Garcia-Bellido:2017xvr,Poulin:2017bwe}).
The limits we derive here constrain the PBH fraction to be no more than $4 \times 10^{-3}$ in the mass range $M_\mathrm{PBH} \in [10, 300]\,M_\odot$. The strongest constraints are for $M_\mathrm{PBH} = 100\,M_\odot$ (where the LIGO sensitivity peaks), giving $f_\mathrm{PBH} < 8 \times 10^{-4}$. These limits on the PBH fraction from the observed merger rate are the most stringent in this mass range, improving on constraints from Galactic radio and X-ray emission \cite{Gaggero:2016dpq} by over an order of magnitude.
\section{Discussion}
\label{sec:discussion}
The presence of the dark matter dress makes the limits on $f_\mathrm{PBH}$ (dark blue shaded region labelled `LIGO' in Fig.~\ref{fig:LIGO_limit}) a factor of 2 stronger than those derived in the case of naked black holes~\cite{Ali-Haimoud:2017rtz}. Roughly 10-20\% of this improvement comes from the inclusion of the DM halos in the decoupling calculations, described in Sec.~\ref{sec:selfconsistent}, which slightly distorts the distribution of orbital elements of PBH binaries. A further $\sim$20\% comes from the impact of the DM halo on the orbit of the PBH binaries, as discussed in Sec.~\ref{sec:simulations}. Finally, around a 50\% increase in the merger rate comes from including the full redshift-dependence of the LIGO sensitivity; there are more binaries with a shorter merger time (corresponding to a higher redshift), increasing the effective merger rate which would be measured by LIGO.
These results show that local DM halos have a relatively small effect on the merger rates of PBH binaries, despite drastically changing the size and shape of their orbits. For the widest orbits the merger time may be reduced by an order of magnitude, but integrated over the entire population of binaries, the increase in the merger rate today is an $\mathcal{O}(10\%)$ effect. This effect is comparable in size to uncertainties in the detectors' amplitude calibration, which introduces an uncertainty of about 18\% in the upper limit on the merger rate \cite{Abbott:2017iws}. This work therefore increases the robustness of LIGO limits on the PBH fraction (such as those of Refs.~\cite{Sasaki:2016jop,Ali-Haimoud:2017rtz}, as well as those presented here).
A key aspect of our results is that binary systems formed in the early Universe survive the growth of local DM halos. However, there are several ways in which the properties of PBH binaries may also be altered~\cite{Ali-Haimoud:2017rtz}:
\begin{itemize}
\item They may interact with a circumbinary accretion disk of baryons \cite{Hayasaki:2008eu,Hayasaki:2009ug,Macfadyen:2006jx,Tang:2017eiz}. As we have seen in our simulations, catastrophic feedback between the PBHs and their DM halos is a key feature of the binaries. It therefore seems likely that dedicated numerical simulations will be required to understand how baryonic accretion affects PBH binaries.
\item PBH binaries may be disrupted by interactions with the smooth DM halo or with other PBHs. Binaries formed from dressed PBHs are however typically smaller in size after having unbound their DM halos. From Fig.~\ref{fig:semimajoraxis}, we see that the maximum semi-major axis for binaries of $30\,M_\odot$ PBHs is reduced from $\sim 10^{-1}\,\mathrm{pc}$ to $\sim 2 \times 10^{-3}\,\mathrm{pc}$ once the effects of local DM halos are taken into account. Since smaller binaries are less likely to be disrupted by interactions with the smooth DM halo or with other PBHs, and since naked PBH binaries are not expected to be significantly disrupted~\cite{Ali-Haimoud:2017rtz}, we conclude that dressed PBH binaries should not be affected by mergers into larger virialized DM halos.
\item The binary may experience dynamical friction from DM enclosed within the orbit or accreted after decoupling. In this case, the accretion cannot be modeled as in Sec.~\ref{sec:decoupling} as the time-dependent, non-spherical nature of the potential must be accounted for. This is likely to require dedicated simulations and we defer this to future work. However, we expect that DM infalling onto the binary will only be loosely bound and will therefore not dramatically affect the final semi-major axis and merger time of the binary.
\end{itemize}
We have focused on PBHs characterized by a monochromatic mass function and uniform distribution in space, while realistic scenarios of PBH formation would naturally lead to a PBH population with a range of masses \cite{Kuhnel:2015vtw}, and a significant clustering may be present in the spatial distribution \cite{Chisholm:2005vm}.
Concerning the former point, the formation of PBH binaries from a population with an extended mass function has recently been studied in Refs.~\cite{Chen:2018czv,Raidal:2017mfl}. In general, constraints may be weakened or strengthened relative to the monochromatic case \cite{Green:2016xgy,Bellomo:2017zsr,Kuhnel:2017pwq,Carr:2017jsz,Lehmann:2018ejc} (depending on the shape of the mass function), so a dedicated reanalysis would be required in this case.
The analytic treatment we have developed in Sec.~\ref{sec:analytic} could be straightforwardly extended to PBHs of different masses, and we leave this for future studies.
As for the latter point, we expect that significant clustering would further increase the merger rate, making the constraints even stronger (see however Ref.~\cite{Ali-Haimoud:2018dau}). Also in this case, we leave a detailed and quantitative study for future work.
\section{Conclusions}
\label{sec:conclusions}
In this work, we have explored the impact of Dark Matter (DM) halos around black holes (BHs) on the properties of binaries and their merger rates. We have focused on primordial black holes (PBHs), forming binaries in the early Universe, though our techniques are more generally applicable to other merging systems. In the case of PBHs, the growth of DM halos in the radiation-dominated era is a generic prediction and so their impact must be properly included.
We have performed N-body simulations of orbiting PBHs and their respective DM halos, finding that close passages between the BHs tend to disrupt and unbind the DM particles, exchanging orbital energy for gravitational binding energy of the halos. For the most eccentric binaries, relevant for merger events today, we find that little angular momentum is exchanged between the BHs and the DM halos. The results of these simulations have allowed us to determine simple, analytic expressions for how the semi-major axis and eccentricity of binaries change after the disruption of the DM halos.
Using these relations, we have calculated the distribution of merger times for PBH binaries and the corresponding merger rate which would be observed at LIGO, as a function of the PBH mass $M_\mathrm{PBH}$ and fraction $f_\mathrm{PBH}$. Requiring that this rate not exceed the limits on BH merger rates set by the LIGO and Virgo collaborations, we obtain the most stringent limits on the PBH fraction in the mass range $10\text{--}300\,M_\odot$. For PBHs of mass $100 \, M_\odot$, for example, we require $f_\mathrm{PBH} < 8 \times 10^{-4}$.
These constraints are stronger than the potential limits suggested in Ref.~\cite{Ali-Haimoud:2017rtz} (also based on LIGO merger rates), but still within a factor of $2$. This indicates that, while DM halos around PBHs can substantially alter the size and shape of PBH binary orbits, they lead to only an $\mathcal{O}(10\%)$ effect on the merger rate of PBH binaries today. This result strengthens the case that PBH binaries should survive until today, placing LIGO/Virgo bounds on $f_\mathrm{PBH}$ on more solid ground.
The techniques we have employed are also more generally applicable to astrophysical black holes with a dark matter dress.
A particularly interesting application concerns the analysis of the gravitational-wave signal emitted by dressed astrophysical binary black holes. It has been suggested that the presence of dark matter would modify the dynamics of the merger and induce a potentially detectable dephasing in the waveform \cite{Eda:2013gg,Eda:2014kra}. This conclusion is, however, based on the assumption that there is no dynamical effect on the dark matter, whose distribution is kept constant in time. We leave the analysis of these systems, and of the ensuing gravitational-wave emission, to a future publication.
\begin{acknowledgements}
We thank Joe Silk for stimulating discussions on PBH mergers, as well as Sarah Caudill and Christopher Berry for helpful discussions on LIGO sensitivities. We also thank Alfredo Urbano, Marco Taoso and Joe Silk for comments on a draft of this manuscript.
Where necessary, we have used the publicly available WebPlotDigitizer \cite{WebPlotDigitizer} to digitise plots.
We thank SURFsara (\href{https://www.surfsara.nl}{www.surfsara.nl}) for the support in using the Lisa Compute Cluster.
BJK acknowledges funding from the Netherlands
Organization for Scientific Research (NWO) through the VIDI research program ``Probing the Genesis of Dark Matter'' (680-47-532).
\end{acknowledgements}
\section{Introduction}
Highly energetic and/or heavy particles (hard probes) represent an excellent tool to study the quark-gluon plasma (QGP), as they are likely created in the initial stages of the collisions and do not attain thermal equilibrium.
Traversing the QGP, the hard probes in general lose energy due to interactions with the medium. These energy-loss mechanisms are mostly described theoretically by collisional or radiative interactions with the medium, or combinations thereof.
The combination of the observables of the nuclear modification factor $R_{\rm AA}$ and the elliptic flow $v_2$ makes it possible to put constraints on these processes.
However, it remains ambiguous how much collisional and radiative processes contribute to the total energy loss, since theoretical models that contain only radiative processes, as well as models that use a combination of both radiative and collisional processes, describe $R_{\rm AA}$ and $v_2$ equally well (cf.\ e.g.~\cite{Andronic:2015wma} for a recent review).
$R_{\rm AA}$ and $v_2$ are observables corresponding to the distributions of individual hard probes.
However, radiative interactions with the medium may act as an additional source for correlated particle pairs, while purely collisional processes will not.
Thus, observables for the multiple hard particles contained within jets may provide additional constraints on parton energy-loss mechanisms.
An example is two-particle correlations, which have been studied intensively at the LHC experiments.
Recently, values of the jet-shape parameter $\rho$, which provides insight into the angular structure of jets, have been obtained at the CMS experiment~\cite{Sirunyan:2018jqr}.
In these proceedings we argue that radiative and collisional energy loss mechanisms yield qualitatively different contributions to the jet shapes.
To this end, three effective model approaches were used to describe collisional as well as radiative jet-medium interactions.
While these approaches are rather simplistic, they provide an overall consistent framework for the interactions with the medium and are, thus, particularly well suited for comparisons between different types of energy-loss mechanisms.
Numerical results for these effective models were obtained by means of a Monte-Carlo algorithm. The medium effects on the jet evolution are implemented in parallel to collinear parton splitting due to bremsstrahlung, which is already present in the vacuum. The jet evolution due to bremsstrahlung in the algorithm represents a Monte-Carlo simulation of the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) evolution of jets with leading-order splitting functions, starting from an initial parton.
\section{Effective Models for in-medium jet-evolution}
In general, we focus on a description of timelike parton cascades, generated by an initial quark with a maximal virtuality of $Q_\uparrow$ and energy $E_{\rm ini}$.
The partons in the cascades undergo multiple collinear splittings, until their virtualities $Q$ reach a minimal scale $Q_\downarrow$.
In the vacuum, jet evolution is described via multiple bremsstrahlung emissions, which follow the DGLAP equations. Numerical results have been obtained in a Monte-Carlo simulation of the corresponding jets with splitting functions and Sudakov factors at leading order (LO) in perturbative QCD.
The effects of multiple scatterings with medium particles on the jet particles were summarized in a continuous change of the cascade-parton four-momenta during their in-medium propagation.
It was assumed that the jet-medium interactions yield only a small perturbation to the vacuum-jet evolution.
Thus, the parton splittings were selected by means of the LO-Sudakov factors and LO-splitting functions that had already been used for the DGLAP-evolution of timelike parton cascades in the vacuum.
The four-momenta of the produced intermediate partons were then evolved in the medium following the respective approach to jet-medium interactions (which is described in detail further below) over the parton life-time $\tau$.
In the parton rest-frame the life-time can be estimated to be of the order of $1/Q$, which yields in the lab-frame
\begin{equation}
\tau=\frac{E}{Q^2}\,,
\label{eq:timeest}
\end{equation}
where $E$ is the parton energy.
Almost identical approaches have been used before~\cite{Zapp:2008gi,Renk:2008pp}.
Since both $E$ and $Q$ can be affected by jet-medium interactions, Eq.~(\ref{eq:timeest}) has to be solved self-consistently (as noted in~\cite{Renk:2008pp}).
The parton four-momenta obtained from the self-consistent solution are used in the selection of a new parton splitting, if $Q>Q_\downarrow$.
If $Q<Q_\downarrow$ the particle is not subjected to any further interactions or splittings.
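As an illustration of this self-consistency requirement, the following Python sketch (our own toy example, not the actual simulation code; the constant-$\hat{q}_R$ evolution, the initial energy, and the virtuality are placeholder assumptions, and natural units are used throughout) iterates Eq.~(\ref{eq:timeest}) to a fixed point:
\begin{verbatim}
# Fixed-point iteration for tau = E/Q^2 (eq:timeest), where E and Q
# themselves depend on tau through the in-medium evolution.
# Toy assumption: purely radiative evolution at a constant rate qhat_R,
# i.e. dE^2/dt = dQ^2/dt = qhat_R (cf. the radiative model below).
def evolve(E0, Q0, tau, qhat_R=1.0):
    E2 = E0**2 + qhat_R * tau
    Q2 = Q0**2 + qhat_R * tau
    return E2**0.5, Q2**0.5

def self_consistent_lifetime(E0, Q0, tol=1e-6, max_iter=100):
    tau = E0 / Q0**2              # vacuum estimate as starting point
    for _ in range(max_iter):
        E, Q = evolve(E0, Q0, tau)
        tau_new = E / Q**2        # updated life-time estimate
        if abs(tau_new - tau) < tol:
            return tau_new
        tau = tau_new
    return tau

print(self_consistent_lifetime(E0=100.0, Q0=5.0))
\end{verbatim}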
The approach to radiative energy loss is largely based on an early version of YAJEM~\cite{Renk:2008pp}: it was assumed that medium-induced radiation can be simulated by a continuous increase of the squared parton virtuality at a rate $\hat{q}_R$ over time $t$.
For quarks, the increase in $Q^2$ is described by
\begin{equation}
\frac{d }{dt}Q^2=\hat{q}_R\,.
\label{eq:yajemcont}
\end{equation}
For gluons, the right hand side of Eq.~(\ref{eq:yajemcont}) is multiplied by a factor
$C_A/C_F$.
The increase in virtuality leads to an additional amount of radiated jet particles and corresponds to, on average, shorter life-times of the intermediate particles, and vice versa.
Thus, we argue that the model is well suited to describe the qualitative effects of medium-induced radiation.
The change in $Q^2$ in Eq.~(\ref{eq:yajemcont}) requires that at least one four-momentum component changes as well.
We choose $\frac{d}{dt}E^2=\frac{d}{dt}Q^2=\hat{q}_R$, since this is the only choice that leaves the parton three-momenta $\vec{p}$ completely invariant and, thus, makes it possible to simulate purely collisional effects separately.
Physically, this choice corresponds to an energy transfer from the medium to the jet.
However, due to additional parton splittings, the final particles in this medium model have smaller energies than in vacuum cascades.
For an effective description of collisional energy loss, a transverse momentum transfer $\hat{q}_C$ and a longitudinal drag-force $\vec{A}$ are used, i.e.
\begin{eqnarray}
\hat{q}_C(t):=\frac{d \langle\vec{p}_\perp^{\;2}\rangle}{dt}\,,
&&
\vec{A}(t):=-\frac{d}{dt}\langle \vec{p}_L \rangle\,,
\end{eqnarray}
where $\vec{p}_L$ is the three-momentum component in direction of the incident cascade particle and $\vec{p}_\perp$ is the component orthogonal to $\vec{p}_L$.
The medium is assumed to be locally in thermal equilibrium, when the jet-medium interactions occur.
Thus, $\hat{q}_C$ and $\vec{A}$ are related by an Einstein-Smoluchowski relation.
In particular, from Ref.~\cite{Berrehrah:2014kba},
\begin{equation}
\frac{\hat{q}_C}{A}\approx 0.56+1.44\frac{T}{T_c}\,,
\label{eq:qhatAratio}
\end{equation}
was obtained with the critical temperature $T_c=0.158$~GeV.
Following results of the JET Collaboration~\cite{Burke:2013yra}, a proportionality $\hat{q}_C\propto T^3$, more specifically for this work $\hat{q}_C=7T^3$, was assumed.
For the numerical implementation of the medium models, time was discretized into small steps $\Delta t$. Then, a direction $\vec{n}:= \vec{p}_\perp/\|\vec{p}_\perp\|$ for the action of the transverse kicks was determined by selecting an azimuthal angle in the plane orthogonal to $\vec{p}_L=\vec{p}(t)$. Per timestep $\Delta t$, the three-momenta and the virtuality change in the following way
\begin{eqnarray}
&&Q(t)\mapsto Q(t+\Delta t)=\sqrt{Q(t)^2+r\Delta t \hat{q}_R(t)}\,,\nonumber\\
&&\vec{p}(t)\mapsto\vec{p}(t+\Delta t)=\nonumber\\&&\vec{p}(t)+s\left(\vec{n}(t)\sqrt{\hat{q}_C(t)\Delta t}-A(t)\Delta t \frac{\vec{p}(t)}{\|\vec{p}(t)\|}\right)\,,
\label{eq:hybdpdqD}
\end{eqnarray}
where the parameters $r$ and $s$ specify the effective model of jet-medium interaction:
the purely radiative model A ($r=1$, $s=0$), the purely collisional model B ($r=0$, $s=1$), as well as a hybrid model C ($r=1$, $s=1$).
The changes per $\Delta t$ in Eq.~(\ref{eq:hybdpdqD}) correspond to the following changes in parton-energy $E$
\begin{eqnarray}
&&E(t)\mapsto E(t+\Delta t)=\left(E(t)^2+\right.\nonumber\\&&\left.\Delta t(r\hat{q}_R(t)+s(\hat{q}_C(t)-2\|\vec{p}(t)\|A(t)))+\mathcal{O}\left(\Delta t^2\right)\right)^{\frac{1}{2}}\!.
\label{eq:yajemDEincr}
\end{eqnarray}
For models that contain collisional energy loss, the parton energy decreases for parton momenta $\|\vec{p}\|\gg T$, but increases if
\begin{equation}
\|\vec{p}\|<\frac{\hat{q}_C+r\hat{q}_R}{2A}\,.
\label{eq:ptcomparD}
\end{equation}
For simplicity we assumed $\hat{q}:=\hat{q}_C=\hat{q}_R$ and used the fit
\begin{equation}
\hat{q}(t)=\frac{a}{(b+t)^c}\,,
\label{eq:qhatdev}
\end{equation}
from Ref.~\cite{Renk:2008pp} with $b=1.5$~fm/c and $c=2.2$. The parameter $a$ is determined by the overall transfer
\begin{equation}
\Delta Q^2:=\int_{t_0}^{t_f}\hat{q}(t)dt\,,
\label{eq:dQ2def}
\end{equation}
where $t_0=0$ and $t_f=L=10$~fm/c were assumed, which yields $a\approx\frac{\Delta Q^2}{0.47}$.
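As an illustration, one discretized update step of the three models can be sketched in Python as follows (this is our own sketch, not the actual simulation code; the temperature $T=0.3$~GeV, the step size, and the initial parton are placeholder assumptions, and the gluon factor $C_A/C_F$ is omitted, i.e.\ a quark is assumed):
\begin{verbatim}
import numpy as np

B_FIT, C_FIT = 1.5, 2.2           # b [fm/c] and c from (eq:qhatdev)
A_FIT = 3.0 / 0.47                # a for Delta Q^2 = 3 GeV^2
T, T_C = 0.3, 0.158               # assumed temperature and T_c [GeV]

def qhat(t):                      # qhat(t) = a/(b+t)^c; qhat_C = qhat_R
    return A_FIT / (B_FIT + t)**C_FIT

def drag(t):                      # Einstein-Smoluchowski (eq:qhatAratio)
    return qhat(t) / (0.56 + 1.44 * T / T_C)

def transverse_dir(p, rng):       # random unit vector orthogonal to p
    v = rng.normal(size=3)
    v -= v.dot(p) / p.dot(p) * p
    return v / np.linalg.norm(v)

def step(Q, p, t, dt, r, s, rng): # one Delta-t update, (eq:hybdpdqD)
    q = qhat(t)
    Q_new = np.sqrt(Q**2 + r * dt * q)
    kick = transverse_dir(p, rng) * np.sqrt(q * dt) \
           - drag(t) * dt * p / np.linalg.norm(p)
    return Q_new, p + s * kick

rng = np.random.default_rng(0)
Q, p = 5.0, np.array([0.0, 0.0, 50.0])   # 50 GeV quark with Q = 5 GeV
for i in range(100):                     # evolve over L = 10 fm/c
    Q, p = step(Q, p, i * 0.1, 0.1, r=1, s=1, rng=rng)  # model C
\end{verbatim}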
\section{Jet shapes}
An observable that has recently gained experimental interest~\cite{Sirunyan:2018jqr} is the so-called jet shape $\rho(\Delta r)$, defined as
\begin{equation}
\rho(\Delta r):=\frac{1}{\delta r}\frac{\sum_{\rm jets}\sum_{{\rm tracks}\in (r_a,r_b)}p_T^{\rm trk}}{\sum_{\rm jets}\sum_{{\rm tracks}}p_T^{\rm trk}}\,,
\label{eq:defrho}
\end{equation}
where the $p_T^{\rm trk}$ are the transverse momenta of jet particles and $\Delta r$ is the radial distance $\Delta r:=\sqrt{\Delta \eta^2+\Delta \phi^2}$ with regard to the jet axis. In the numerator of Eq.~(\ref{eq:defrho}), only the transverse momenta of particles whose radial distance to the jet axis lies inside the interval $(r_a,r_b)$, with $r_a=\Delta r-\delta r/2$ and $r_b=\Delta r+\delta r/2$, are summed. In the denominator, the transverse momenta of all jet particles are summed.
By definition, jet shapes are infrared- and collinear-safe observables.
This property is important for making comparisons with the results from the Monte-Carlo algorithm that was discussed in the previous section: So far, the Monte-Carlo algorithm for partonic cascades has not yet been convoluted with a hadronization mechanism. Instead the cascade evolution is cut off at the scale $Q_\downarrow$. Thus, the jet shapes $\rho$ are expected to depend only weakly on the value of $Q_\downarrow$, or on particular choices of hadronization mechanisms.
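As an illustration, Eq.~(\ref{eq:defrho}) translates directly into the following Python sketch (our own example; the track list is invented, and in practice the $(\Delta r, p_T^{\rm trk})$ pairs come from the simulated or measured jets):
\begin{verbatim}
# rho(Delta r) as in (eq:defrho); tracks are (delta_r, pT) pairs
# measured relative to the jet axis, one list of tracks per jet.
def jet_shape(tracks_per_jet, delta_r, dr=0.05):
    ra, rb = delta_r - dr / 2, delta_r + dr / 2
    num = den = 0.0
    for tracks in tracks_per_jet:     # sum over jets
        for r, pt in tracks:          # sum over all tracks
            den += pt
            if ra < r < rb:           # annulus (r_a, r_b) only
                num += pt
    return num / (dr * den)

# toy input: one jet with three tracks (delta_r, pT in GeV)
print(jet_shape([[(0.02, 40.0), (0.12, 3.0), (0.30, 0.8)]], 0.1))
\end{verbatim}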
\begin{figure}[htb]
\centerline{%
\includegraphics[width=10cm]{vacuum_experiment.pdf}}
\caption{Jet shapes from experimental data by CMS~\cite{Sirunyan:2018jqr} in p-p and central Pb-Pb collisions, and for simulated jets propagating in the vacuum, together with their respective contributions from soft jet-particles.}
\label{fig:vacuum}
\end{figure}
\begin{figure}[h!!!]
\centering
\includegraphics[width=9.5cm]{inelastic.pdf}\\[-10mm]
\includegraphics[width=9.5cm]{elastic.pdf}
\caption{Jet shapes from models A (upper panel) and B (lower panel) compared to experimental data by CMS~\cite{Sirunyan:2018jqr} in central Pb-Pb collisions and to jets propagating in the vacuum, together with their respective contributions from soft jet-particles.}
\label{fig:A+B}
\end{figure}
Fig.~\ref{fig:vacuum} shows results for $\rho$ obtained from the Monte-Carlo simulation of parton cascades that evolve in the vacuum from an initial quark with $Q_\uparrow=E_{\rm ini}=200$~GeV down to a virtuality scale of $Q_\downarrow=0.6$~GeV together with experimental data from CMS for p-p collisions and Pb-Pb collisions of a centrality below $10\%$, both for $\sqrt{s}=5.02$~TeV.
The Monte-Carlo results, just as the p-p collision data, show a strong decrease at small values of $\Delta r$ that becomes weaker at large $\Delta r$, although the decrease at small $\Delta r$ is less pronounced than in the experimental data.
Fig.~\ref{fig:vacuum} also shows the contributions to $\rho(\Delta r)$ from soft jet particles, where $0.7<p_T^{\rm trk}<1$~GeV.
For these soft contributions Monte-Carlo results and p-p collision data show a similar behavior.
The comparison between experimental data for p-p and Pb-Pb collisions in Fig.~\ref{fig:vacuum} reveals that the overall decrease of $\rho(\Delta r)$ is smaller in the case of the heavy-ion collisions, yielding also larger values of $\rho(\Delta r)$ at large $\Delta r$. This behavior corresponds to increased radiation at large $\Delta r$, as well as the additional production of soft particles with $0.7<p_T^{\rm trk}<1$~GeV (for additional experimental data, cf. Ref.~\cite{Sirunyan:2018jqr}).
Fig.~\ref{fig:A+B} shows the results for Monte-Carlo simulations of the purely radiative model A (upper panel) and the purely collisional model B (lower panel), both for $\Delta Q^2=3$~GeV$^2$, in comparison to experimental data from Pb-Pb collisions and the Monte-Carlo results for vacuum cascades.
While the Monte-Carlo results do not reproduce the experimental values, the results for all $p_T^{\rm trk}$ values from both models exhibit a smaller decrease with $\Delta r$ as compared to results from cascades in the vacuum.
For model A, this broadening effect is rather small (as compared to the behavior for model B). However, the soft contributions are largely increased for model A, in agreement with data from Pb-Pb collisions at large $\Delta r$, but corresponding to values of $\rho$ at small $\Delta r$ that are too large.
We conclude from the behavior of the $\rho(\Delta r)$ results for model A that the broadening effects in this purely radiative model are mainly due to the medium-induced radiation of very soft particles.
For the purely collisional model B the broadening effects of the medium are even larger than for model A, while the contributions from soft particles are only mildly affected by the medium. In contrast to the behavior of the results for model A, the soft contributions for model B are not enhanced at all $\Delta r$ values, but only at large $\Delta r$.
We argue that the behavior of the results for model B can be explained by properties of the stochastic transverse forces and the drag force: These forces affect hard as well as soft particles (much in contrast to medium induced radiation, which predominantly leads to the production of soft particles). Due to the deflections by the effective forces, the angles of particle three-momenta, represented by the corresponding $\Delta r$ values, are larger in the medium than in the vacuum.
\begin{figure}[htb]
\centerline{%
\includegraphics[width=11.5cm]{hybrid.pdf}
}
\caption{Same as Fig.~\ref{fig:A+B} but for the hybrid model C.}
\label{fig:C}
\end{figure}
Finally, Fig.~\ref{fig:C} shows the results for the hybrid model C, also for $\Delta Q^2=3$~GeV$^2$: As expected, this model exhibits the largest broadening effects, corresponding to an enhancement specifically at large angles, due to collisional effects, as well as a largely increased production of soft particles due to radiative effects.
\section{Summary}
We have presented a simplistic, yet consistent framework of effective models for jet-medium interactions.
Three models, a purely radiative model A, a purely collisional model B, and a hybrid model C were implemented in a Monte-Carlo algorithm for the simulation of jets in the medium, in order to study medium effects on the jet-shape observables.
While the models and the algorithms implemented neglect many details and are therefore not suitable to reproduce experimental data quantitatively, they reproduce the qualitative behavior of jet-shape data, e.g., from the measurements at CMS.
Radiative energy-loss mechanisms were found to lead to a higher production of soft particles, while collisional energy-loss mechanisms deflect particle three-momenta to larger angles $\Delta r$ with regard to the jet axis.
The qualitatively different behaviors of the jet shapes for collisional and radiative energy loss, in particular with regard to their soft contributions, may be used as a tool to disentangle collisional from radiative in-medium energy-loss mechanisms.
\section{Acknowledgements}
This research was supported in part by the Polish National Science
Centre Grant No. 2015/19/B/ST2/00937.
\section{Introduction}
\label{sec:introduction}
Open Source Software (OSS) projects play an essential role in inclusion and workforce development, where contributors join projects to learn new skills~\cite{gerosa2021shifting}, showcase their skills~\cite{von2012carrots}, or improve their career paths~\cite{jergensen2011onion}. Successful participation in OSS projects also helps newcomers gain visibility among their peers~\cite{cai2016reputation, riehle2015open}, benefits society by developing a product used by many users~\cite{parra2016making}, and improves their chances of achieving professional success~\cite{greene2016cvexplorer, riehle2015open}.
Unfortunately, newcomers to OSS face several challenges~\cite{steinmacher2015social}, and these challenges affect underrepresented populations differently, including those whose cognitive styles are ill-supported by the project's information landscape~\cite{padala2020gender, mendez2018open}. The consequences of these challenges for underrepresented populations may include a steeper learning curve, lack of community support, and obstacles to figuring out how to start, all of which add to the diversity imbalance in OSS~\cite{trinkenreich2021women}. Social diversity has been shown to positively affect productivity, teamwork, and quality of contributions~\cite{horwitz2007effects, vasilescu2015gender}. On the other hand, low diversity has unfortunate effects: (i) OSS projects miss out on the benefits of a more expansive set of contributors and the diversity of thought that these potential contributors could bring; (ii) minorities miss out on the learning and experience opportunities that OSS projects provide; and (iii) job opportunities evade minorities when OSS contributions are used to make hiring decisions~\cite{marlow2013impression, singer2013mutual}. Although the lack of diversity in OSS has been well-documented for years, there is limited progress in closing this gap~\cite{trinkenreich2021women, ford2017someone, robles2016women}.
Past work~\cite{padala2020gender, mendez2018open} has shown that the way information is provided in OSS projects (e.g., documentation, issue description) benefits certain cognitive styles (e.g., those who learn by tinkering) over others (e.g., process-oriented learners). The information architecture of OSS project pages (e.g., project description pages and descriptions of issues in the issue tracker) usually appeal to those who have high self-efficacy and are motivated by individual pursuits such as intellectual stimulation, competition, and learning technology for fun. According to \citet{burnett2010gender}, these pursuits cater to characteristics associated with men, which can neglect women and other contributors who may have different motivations and personal characteristics (see also \cite{cazan2016computer, singh2013role}).
This lack of support for diverse user characteristics leads to inclusivity bugs~\cite{chatterjee2021aid, guizani2022debug}---software behaviors that disproportionately disadvantage a particular group of users of that software.
In our study, we investigate inclusivity bugs in the GitHub platform itself that affect newcomers to this platform. Inclusivity bugs in the platform can have far-reaching impacts on thousands of OSS projects (as of today, 200 million repositories are hosted on GitHub). The following research questions guided our investigation:
\newcommand{What inclusivity bugs does GitHub pose for newcomers trying to make their first contribution?}{What inclusivity bugs does GitHub pose for newcomers trying to make their first contribution?}
\newcommand{\rqone}[2][]{
\begin{rqbox}{\textbf{Research Question 1}}{#2}
What inclusivity bugs does GitHub pose for newcomers trying to make their first contribution?
#1
\end{rqbox}
}
\rqone{}
\newcommand{What are the effects of fixing those inclusivity bugs?}{What are the effects of fixing those inclusivity bugs?}
\newcommand{\rqtwo}[2][]{
\begin{rqbox}{\textbf{Research Question 2}}{#2}
What are the effects of fixing those inclusivity bugs?
#1
\end{rqbox}
}
\rqtwo{}
We analyzed four tasks newcomers often perform to make their first pull request on GitHub and found inclusivity bugs in all of them. We redesigned the impacted interface to address the identified bugs and implemented a browser plugin to change the platform interface based on our redesign (we do not have access to change GitHub itself). We evaluated the original and the redesigned interface through a between-subject user study with 75 participants.
Our main goal is to mitigate cognitive barriers newcomers face due to inclusivity bugs. As we show in this paper, GitHub, a platform newcomers use to contribute to OSS, creates contribution barriers for users with different characteristics, which is particularly harmful to underrepresented populations. This paper provides insights into how newcomers' performance can be improved when their cognitive styles are supported. Providing adequate support for minorities can be particularly important in improving the overall community diversity. These barriers may discourage newcomers and help enlarge the existing diversity gaps as these tools and infrastructure are the main channels through which OSS newcomers interact with the community.
\section{Related Work}
\label{sec:relatedwork}
This section discusses work related to newcomers' onboarding in OSS, diversity and bias in OSS, and cognitive styles.
\textbf{Newcomer's Onboarding:} Previous work has investigated OSS contribution challenges~\cite{hannebauer2017relationship, jensen2011joining, steinmacher2015social,steinmacher2014preliminary}. \citet{steinmacher2015social} conducted a mixed-method study and identified 58 barriers faced by newcomers. Researchers have also investigated specific types of challenges. For example, toxic environments have been studied in the literature~\cite{bosu2019diversity, prana2021including,guizani2021long}, which evidenced situations in which OSS project members were unfriendly, unhelpful, or elitist~\cite{storey2016social}. \citet{jensen2011joining} analyzed the speed at which emails sent by newcomers are answered, the role played by gender or nationality in the kinds of answers newcomers receive, and the reception newcomers face. A better understanding of the barriers enables communities and researchers to design and produce tools and conceive strategies to better support newcomers~\cite{balali2018newcomers}. Our work complements these studies by focusing on making social coding platforms more inclusive by supporting the onboarding of newcomers with different cognitive styles.
\textbf{Diversity/Bias in OSS:} Low diversity in OSS is a concern raised by different studies in the literature when considering gender~\cite{guizani2022debug, bosu2019diversity, terrell2016gender, vasilescu2015gender, trinkenreich2021women}, language~\cite{storey2016social}, and location~\cite{storey2016social}. Past work has shown that diverse teams are more productive~\cite{vasilescu2015gender}. However, minorities face challenges in becoming a part of an OSS community~\cite{trinkenreich2021women}. Most OSS communities function as meritocracies~\cite{feller2000framework}, in which minorities report experiencing ``imposter syndrome''~\cite{vasilescu2015gender}. These competitive settings have been known to discourage minorities such as women in OSS~\cite{miller2012toward, vugt2007gender}. Participant observation of OSS contributors found that ``men monopolize code authorship and simultaneously de-legitimize the kinds of social ties necessary to build mechanisms for women's inclusion''~\cite{nafus2012patches}. Generally, cultures that describe themselves as meritocracies tend to be male-dominated ones that women experience as unfriendly~\cite{turkle2005second}.
In our work, we aim to reduce the bias found in social coding platforms used by a wide range of users to support them regarding their different cognitive styles to interact with OSS projects.
\textbf{Cognitive styles:} Research has shown that developers have different cognitive styles~\cite{burnett2016gendermag} and motivation~\cite{gerosa2021shifting}, and that cognition plays an essential role in software engineering activities~\cite{fagerholm2022cognition}. For example, more women are task-oriented, whereas more men are motivated to learn a new technology for fun~\cite{mendez2018open, mendez2018gender, padala2020gender}.
These differences in cognitive styles may negatively impact how women and men contribute to OSS, and it mainly happens when OSS projects and the underlying infrastructure support certain cognitive styles (e.g., selective information processing or learning by tinkering) and impede others (e.g., comprehensive information processing or process-oriented learning). Our work considers a variety of cognitive styles to propose changes to GitHub to support diverse newcomers.
\section{Research Method}
\label{sec:methodology}
We followed a three-step method, as illustrated in Figure~\ref{fig:methodologyactivity}: (i) we conducted a GenderMag analysis on the GitHub platform to identify inclusivity bugs (GenderMag has been extensively used to detect gender biases in commercial and OSS products~\cite{burnett2010gender, burnett2016finding, cunningham2016supporting, mendez2018open, shekhar2018cognitive, vorvoreanu2019gender}); (ii) we proposed fixes to the identified inclusivity bugs and developed a browser plugin to implement these changes in the GitHub interface; and (iii) we conducted an experiment to compare the original GitHub interface with the interface enriched by the plugin.
\begin{figure}[!ht]
\centering
\includegraphics[width=8cm]{images/newmethodseis.pdf}
\vspace{-2.5mm}
\caption{Research method overview.}
\label{fig:methodologyactivity}
\end{figure}
\vspace{-2.5mm}
\subsection{Step 1 - Identifying GitHub Inclusivity Bugs}\label{sec:gendermag}
To identify inclusivity bugs, we followed GenderMag~\cite{burnett2016gendermag}, a systematic inspection method that captures individual differences in how people solve problems and use software features. The method is used to evaluate problem-solving features in software from an inclusiveness perspective, which is based on five cognitive facets. The output of a GenderMag analysis is a set of inclusivity bugs, a special type of usability bug that disproportionately impacts users with a specific cognitive style~\cite{chatterjee2022inclusivity}. The literature has shown that the cognitive styles statistically cluster by gender, with GenderMag Abi's styles being more common among women and Tim's styles statistically more common among men~\cite{burnett2016gendermag}. For example, a study with men and women using a search product showed that women's action failure rates were over twice as high as men's. However, after the product owners fixed the gender-inclusivity bugs GenderMag revealed using the Abi and Tim personas, failure rates of both the participating genders went down, and the difference between these two genders' failure rates completely disappeared~\cite{vorvoreanu2019gender}.
The method encapsulates the facets into personas, which are embedded into a process based on Cognitive Walkthrough~\cite{spencer2000streamlined}.
The five facets used by the GenderMag method are: (i) \textbf{Motivation:} women tend (statistically) to be motivated to use technology for what they can accomplish with it, whereas men are often motivated by their enjoyment of technology per se~\cite{margolis2002unlocking, burnett2010gender, burnett2011gender}; (ii) \textbf{Information processing styles:} women are more likely (statistically) to process new information comprehensively—gathering fairly complete information before proceeding—but men are more likely to use selective styles—following the first promising information, then backtracking if needed~\cite{riedl2010there, meyers2015revisiting}; (iii) \textbf{Computer self-efficacy:} it relates with a person's confidence about succeeding at a specific task, which influences their use of cognitive strategies, persistence, and strategies for coping with obstacles. Empirical data have shown that women often have lower computer self-efficacy than men, which can affect their behavior with technology~\cite{margolis2002unlocking, beckwith2006tinkering, burnett2010gender, burnett2011gender, singh2013role, huffman2013using}; (iv) \textbf{Risk aversion:} women statistically tend to be more risk-averse than men~\cite{dohmen2011individual, charness2012strong}. Risk aversion with software usage can impact users' decisions as to which feature sets to use; and (v) \textbf{Learning: by Process vs. by Tinkering:} research reports women being statistically less likely to playfully experiment (``tinker'') with software features new to them, compared to men~\cite{beckwith2006tinkering, burnett2010gender, cao2010debugging}.
The facets presented above are used to define personas (e.g., Abi and Tim) as part of the GenderMag method. GenderMag highlights that differences relevant to inclusiveness lie not in a person's gender identity but in the facet values themselves~\cite{hill2017gender}. Nevertheless, Abi's facet values are more frequent in women than in other genders, and Tim's facet values are more frequent in men than in other genders. Each cognitive style has advantages, but either is at a disadvantage when not supported by the software.
The GenderMag procedures start with the persona selection. In this work, we focus on the Abi and Tim personas~\cite{burnett2018gendermag} as they represent opposite ends of the GenderMag facet value ranges. We customized our persona profile to represent our target users: newcomers who are willing to make their first contribution using GitHub and have never before performed a pull request (PR). We defined a set of contribution goals and subgoals, as described in Table~\ref{tab:gendermagscen} (e.g., edit a file, submit a pull request, fork a repository, upload a new file). These goals were chosen because they are often part of the process of making a first contribution to an OSS project~\cite{steinmacher2016overcoming}. In this paper, we differentiate between \emph{task} and \emph{goal}: task refers to the actions performed by the participants in the experiment, while goal refers to how we organized the GenderMag analysis. Although these terms differ, they are closely associated (e.g., Task 1 is associated with Goal 1).
\begin{table}[!ht]\scriptsize
\centering
\vspace{-2.5mm}
\caption{GenderMag analysis goals and subgoals}
\label{tab:gendermagscen}
\begin{tabular}{m{32mm}|m{43mm}}
\hline
\multicolumn{1}{c|}{\textbf{Goal}} & \multicolumn{1}{c}{\textbf{Subgoals}} \\ \hline \hline
\multirow{2}{*}{\parbox{32mm}{Goal \#1 - Submit a pull request}} & \#1.1 - Make a change to a README file \\ \cline{2-2}
& \#1.2 - Submit the pull request \\ \hline
Goal \#2 - View changed files in PR & \#2.1 - Find the changed files in the interface \\ \hline
Goal \#3 - Request help to solve the pull request & \#3.1 - Find an experienced contributor in the project to ask for help to solve the PR \\ \hline
\multirow{2}{*}{\parbox{32mm}{Goal \#4 - Upload file}} & \#4.1 - Discover how to upload a file \\ \cline{2-2}
& \#4.2 - Request push access to upload file \\ \hline \hline
\end{tabular}
\end{table}
Given these personas, goals, and subgoals, six members of our research group conducted the GenderMag walkthroughs on GitHub-hosted projects using the procedures defined by \citet{burnett2018gendermag}. The group had prior training and experience in conducting GenderMag analyses. We conducted one walkthrough for each goal, focusing on Abi's facets to answer the following questions about each subgoal and action:
\begin{itemize}
\item \textbf{SubgoalQ:} Will Abi have formed this subgoal as a step to their overall goal? (Yes/no/maybe, why, facets involved).
\item \textbf{\textit{ActionQ1:}} Will Abi know what to do at this step? (Yes/no/maybe, why, facets involved).
\item \textbf{\textit{ActionQ2:}} If Abi does the right thing, will s/he know s/he did the right thing and is making progress toward their goal? (Yes/no/maybe, why, facets involved).
\end{itemize}
Using this process, we identified 12 inclusivity bugs in different parts of the GitHub interface.
\subsection{Step 2 - Fixing GitHub Inclusivity Bugs}
We used the GenderMag analysis results to redesign the GitHub interface to support the GenderMag facets whose lack of support led to the inclusivity bugs identified in Step 1. As stated by \citet{vorvoreanu2019gender}, the GenderMag analysis outcomes can identify not only inclusivity bugs but also why the bugs might arise and what specific problem-solving facet(s) are implicated.
As an example of redesign, for Goal \#1, we identified an issue related to Abi's process-oriented learning style and self-efficacy facets that would affect her ability to edit a file in an OSS project.
The redesign focused on Abi's process-oriented learning facet to give explicit guidance on submitting a pull request, by leveraging the design principle of ``visibility''. We did so by presenting the README file information to users more explicitly through a new tab called home (Figure~\ref{fig:gitoriginterfacepost}), highlighting the importance of the README file, presenting its content, and including a tooltip to explain that the user can edit the file: \textit{To edit this file, go to the ``code'' tab above, and select the file you want to edit.} Our proposed solution also addressed Abi's self-efficacy facet by showing that she is on the right track to completing the subgoal.
\begin{figure}[!h]
\centering
\vspace{-1.7mm}
\includegraphics[width=8cm]{images/vector/gitinterpost.pdf}
\caption{GitHub interface modified by the developed plugin.}
\label{fig:gitoriginterfacepost}
\end{figure}
Once our research team agreed on the redesign solutions proposed for each issue identified in Step 1, we started the development of a plugin to change GitHub's interface. The plugin was developed as a Chrome extension that changes the original GitHub interface. The plugin is written in JavaScript and uses the GitHub API to collect data about a user in JSON format. It is available on GitHub\footnote{https://github.com/NAU-OSL/ResearchPlugin} for anyone interested in using it and making contributions, as well as in the supplementary material\footnote{\url{https://figshare.com/s/4e7724bde0b1d47ecaeb}}.
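For illustration, the kind of GitHub REST API request the plugin relies on can be sketched as follows (shown here in Python rather than in the plugin's JavaScript; the username is a placeholder, and authentication and error handling are omitted):
\begin{verbatim}
import json
import urllib.request

# Fetch public profile data for a user as JSON from the GitHub API.
def fetch_user(username):
    url = "https://api.github.com/users/" + username
    req = urllib.request.Request(
        url, headers={"Accept": "application/vnd.github+json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

user = fetch_user("octocat")                # placeholder username
print(user["login"], user["public_repos"])  # fields of the response
\end{verbatim}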
\subsection{Step 3 - Assessing the Inclusivity Bug Fixes}
Finally, we conducted an experiment to evaluate whether the modified interface improved the user experience for Abi-like users. Even though inclusivity bugs can be fixed in multiple ways, we expected to reduce any performance gap between Abi and Tim users of the modified interface, since the modifications were guided by the analytic, theory-based method.
We follow the guidelines provided in~\cite{wohlin2012experimentation} to report our experiment. We conducted an experiment that aims to \textit{\textbf{analyze}} how the proposed plugin supports newcomers with different cognitive styles. We compared users of GitHub's original version (i.e., control group) to users of the GitHub plugin implemented in this study (i.e., plugin group), \textit{\textbf{for the purpose of}} evaluation, \textit{\textbf{with respect to their}} effectiveness in completing the tasks, \textit{\textbf{from the point of view of the}} researchers, \textit{\textbf{in the context of}} the GitHub environment when a newcomer attempts to make their first contribution.
In the experiment, the participants interacted with a copy of a community-based OSS project named JabRef\footnote{\url{https://github.com/JabRef/jabref}}. Participants executed the following tasks (based on the goals defined in Section~\ref{sec:gendermag}): \textbf{\textit{Task \#1 - Submit a pull request:}} The newcomer needs to edit a file in the project and submit the changes via a pull request (PR); \textbf{\textit{Task \#2 - View changed file:}} In this task, we asked participants to analyze an open pull request and find which files were changed when this pull request was created; \textbf{\textit{Task \#3 - Request help to solve the PR:}} The participant needs to find an experienced contributor of the project and invite them to work together to solve the pull request; and \textbf{\textit{Task \#4 - Upload a file:}} The participant should try to upload a new file to the repository.
We conducted a pilot study with five researchers outside our group to collect feedback about the instruments and research design. The pilot study helped to improve our instruments (questionnaires and task definitions). We used an iterative process to apply the required changes after each pilot session. This resulted in more detailed scripts and documentation about the tasks. We ran new pilot sessions until we reached a consensus that the instruments were reliable enough to start the actual study. A replication package with all material used in the experiment is available online (see the previous subsection). The replication package also includes the developed GitHub plugin and the installation instructions.
We recruited 75 undergraduate students from diverse STEM majors from 5 different universities in the US and Brazil. The majority of participants were pursuing Computer Science majors. Our recruiting criterion was students who knew how to program but had never opened a pull request on GitHub, so previous experience with the interface would not bias them. We opted to recruit undergraduate students for our study because the literature mentions that educators have been using OSS to train students, and these students are potential OSS project contributors~\cite{steinmacher2016overcoming}. We asked the students if they had previous experience with GitHub and OSS. Some of them responded that they had used GitHub once (Plugin = 10 and Control = 7), but when we asked what they had used GitHub for, they said that they had just created an account but never contributed to any project, so they fit our criterion (never having opened a pull request). We also asked about their experience with OSS, and a few participants answered that they had some experience (Plugin = 4 and Control = 3). When we asked what kind of experience they had, they informed us that they had studied OSS concepts in previous courses in college.
We used a between-subject design, balancing participants between the original version (i.e., control group) and the GitHub plugin version (i.e., plugin group) by GenderMag facets~\cite{montoya2022selecting, vorvoreanu2019gender}. Because some participants were absent during the experiment sessions, the number of participants in the control and plugin groups is unbalanced (Table~\ref{tab:numberofpart}). However, we have an almost proportional number of participants considering the GenderMag facets. At first, we conducted each session with one participant at a time, with a facilitator and an observer. The participants were asked to perform the four scenarios described in Table~\ref{tab:gendermagscen}. We collected audio recordings and observation notes from the sessions and qualitatively analyzed participants' data. We conducted these individual sessions with 50\% of our participants; we then optimized the data collection by conducting the experiment with students of two classes, providing an online questionnaire with all the instructions they had to follow. A researcher was present the whole time to assist the students in case they needed any help or had any questions.
We performed a quantitative analysis by collecting the percentage of tasks achieved by participants in each group and applied a self-efficacy survey to measure newcomers' confidence in using GitHub.
We employed GenderMag's questionnaire to assess participants' facets with 9-point Likert items inspired by \citet{burnett2018gendermag}.
Based on those answers, we characterized the participants' cognitive styles according to the facets of the GenderMag personas Tim and Abi and attempted to balance our sample according to the number of participants with Abi and Tim's cognitive styles in both groups. Table~\ref{tab:numberofpart} presents the participants' characteristics in each group.
As noted, although Abi's cognitive style is more common among women, some men also have this cognitive style.
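As an illustration of how the resulting completion counts can be compared between groups, consider the sketch below (our own illustration; this particular test was not part of our study design, and the counts are those reported for Task \#4 in Section~\ref{sec:results}):
\begin{verbatim}
from scipy.stats import fisher_exact

# Completed/failed counts for Task #4 (control vs. plugin group).
control = (11, 25)
plugin = (37, 2)
odds, p = fisher_exact([control, plugin])
print(f"odds ratio = {odds:.3f}, p = {p:.2e}")
\end{verbatim}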
\begin{table}[!ht]\scriptsize
\centering
\vspace{-2.5mm}
\caption{Number of participants in the experiment}
\label{tab:numberofpart}
\begin{tabular}{cc|cc|cc}
\cline{2-6}
& & \multicolumn{2}{c|}{\textbf{Facets}} & \multicolumn{2}{c}{\textbf{Gender}} \\ \cline{3-6}
\multirow{-2}{*}{\textbf{}} & \multirow{-2}{*}{\textbf{Subjects}} & \multicolumn{1}{c|}{\textbf{Tim}} & \textbf{Abi} & \multicolumn{1}{c|}{\textbf{Man}} & \textbf{Woman} \\ \hline \hline
\multicolumn{1}{c|}{\textbf{Control}} & {\textbf{36}} & \multicolumn{1}{c|}{\textbf{18}} & {\textbf{18}} & \multicolumn{1}{c|}{\textbf{30}} & {\textbf{6}} \\ \hline
\multicolumn{1}{c|}{\textbf{Plugin}} & {\textbf{39}} & \multicolumn{1}{c|}{\textbf{20}} & {\textbf{19}} & \multicolumn{1}{c|}{\textbf{27}} & {\textbf{12}} \\ \hline
\multicolumn{1}{c|}{\textbf{Total}} & {\textbf{75}} & \multicolumn{1}{c|}{\textbf{38}} & {\textbf{37}} & \multicolumn{1}{c|}{\textbf{57}} & {\textbf{18}} \\ \hline \hline
\end{tabular}
\end{table}
We also administered a questionnaire in which participants provided their self-perception about their ability to complete tasks using GitHub, i.e., self-efficacy to complete specific tasks. The questionnaire was based on the work of \citet{bandura2014social} and had 5 items. Participants answered those questions before and after the experiment with the following items using a 5-point Likert scale ranging from strongly disagree to strongly agree (with a neutral option). The goal was to capture the students' self-perceived efficacy about the provided task, before and after going through the task. The questions were prefixed with ``I am confident that I can:'' followed by: (i) \ldots use GitHub to contribute to projects; (ii) \ldots open a pull request using the GitHub web interface; (iii) \ldots change a file and submit the changes to the project using GitHub; (iv) \ldots find someone to help me using the GitHub web interface; and (v) \ldots submit a new file to a project using GitHub.
In addition to the quantitative analysis, we qualitatively analyzed participants' comments to the open questions of the survey following open coding procedures~\cite{strauss1998basics}. We asked participants after each task to explain any difficulties they experienced in accomplishing the task and what in the interface helped them. Our goals were to understand (i) students' difficulties in using the regular and the modified interfaces; and (ii) what in the interfaces helped students the most to complete each task. The analysis was conducted by two authors and validated by a third author, and it took around one month to analyze the qualitative data.
For our empirical study, we considered the following variables: (i) the \textbf{dependent variables} are the completion of each task by the participants (Y/N), and (ii) the \textbf{independent variables} that can affect the outcomes are the use of the plugin (use or non-use of the plugin proposed in our study) and the GenderMag persona (whether the participant is Tim- or Abi-like).
\section{Results}
\label{sec:results}
\begin{table*}[!ht]\scriptsize
\centering
\vspace{-2.5mm}
\caption{GitHub Inclusivity Bugs and Proposed Fixes}
\label{tab:inclusivitybugs}
\newcommand{\pb}[1]{\parbox[t][][t]{1.0\linewidth}{#1} \vspace{-2pt}}
\begin{tabular}{m{16mm}|c|m{35mm}|m{28mm}|m{73mm}}
\hline
\multicolumn{1}{c|}{\textbf{Goal}} & \multicolumn{1}{c|}{\textbf{\# Bugs}} & \multicolumn{1}{c|}{\textbf{Bug Description}} & \multicolumn{1}{c|}{\textbf{GenderMag Facets}} & \multicolumn{1}{c}{\textbf{Bug Fixes}} \\ \hline \hline
\multirow{14}{*}{\pb{\centering\#1 \\ Submit pull request}}
& 1 & Difficulty in finding Readme file to edit; & \pb{- Learning: Process vs. Tinkering;\\ - Computer self-efficacy.} & \pb{- Add ``Home'' link to the navbar to highlight the importance of the Readme File in the repository hierarchy. This link presents the content of this file and also includes a tooltip to explain that the user can edit the file.} \\ \cline{2-5}
& 2 & \pb{After clicking to edit the file, difficulty in finding the options to edit the file;} & \pb{- Learning: Process vs. Tinkering;\\ - Computer self-efficacy;\\ - Attitude Towards Risk.} & \pb{- Include a progress bar, to indicate the steps of the workflow related to this task;\\ - Include a tooltip to explain what happens in case the user changes the original filename.} \\ \cline{2-5}
& 3 & Difficulty in understanding the commit form; & \pb{- Learning: Process vs. Tinkering;\\ - Computer self-efficacy;\\ - Attitude Towards Risk.} & \pb{- Put tooltips and field labels explaining form fields to help the user understand the importance of informing a commit message.} \\ \cline{2-5}
& 4 & \pb{Difficulty in understanding the workflow after the file is edited;} & \pb{- Motivations;\\ - Learning: Process vs. Tinkering;\\ - Information Processing Style.} & \pb{- We have the progress bar, to indicate that this step is important to complete the task;\\ - Include a tooltip to explain the conflict message that appears;\\ - Include a tooltip to explain the code that is related to the changes.} \\ \cline{2-5}
& 5 & \pb{Lack of feedback indicating if the creation of the pull request was successful;} & \pb{- Learning: Process vs. Tinkering;\\ - Computer Self-Efficacy;\\ - Information Processing Style.} & \pb{- After the click on the create pull request button, redirect to a page with a success message and a progress bar showing that the pull request is completed;\\ - Include a tooltip to explain what does the ``Close pull request'' button do.} \\ \hline \hline
\multirow{1}{*}{\pb{\centering\#2 View \\changed files}}
& 6 & \pb{Difficulty in understanding the workflow after the user opens a pull request;} & \pb{- Learning: Process vs. Tinkering;\\ - Computer Self-Efficacy.} & \pb{- Include a tooltip to highlight the navbar that describes some actions that can be made in the pull request.} \\ \hline \hline
\multirow{3}{*}{\pb{\centering\#3 \\ Request help to solve the PR}}
& 7 & \pb{Difficulty in finding the option to mention another contributor;} & \pb{- Learning: Process vs. Tinkering;\\ - Computer Self-Efficacy.} & \pb{- Include a tooltip in the @ symbol icon to say "Use it to mention a contributor".} \\ \cline{2-5}
& 8 & \pb{Lack of feedback about the action of mentioning another contributor;} & \pb{- Information Processing Style;\\ - Computer Self-Efficacy.} & \pb{- Add a confirmation message to let the user know that the mentioned contributor will receive a notification and may help the newcomer in this issue.} \\ \hline \hline
\multirow{7}{*}{\pb{\centering\#4 \\ Upload file}} & 9 & \pb{Difficulty in understanding the steps needed to upload a file;} & \pb{- Information Processing Style;\\ - Attitude Towards Risk;\\ - Motivations.} & \pb{- Change the message to inform that it is necessary to fork; \\ - Make the fork button green to highlight that it is enabled.} \\ \cline{2-5}
& 10 & \pb{Lack of feedback indicating if the action of forking the repository is completed;} & \pb{- Information Processing Style;\\ - Computer Self-Efficacy.} & \pb{- Add a success message to the page that appears after the click on the fork button.} \\ \cline{2-5}
& 11 & \pb{Difficulty in understanding the commit form;} & \pb{- Learning: Process vs. Tinkering;\\ - Motivations.} & \pb{- Include tooltips explaining the form fields to make the newcomer understand the importance of informing a commit message.} \\ \cline{2-5}
& 12 & \pb{Lack of feedback indicating if the action of uploading the file is completed;} & \pb{- Information Processing Style;\\ - Learning: Process vs. Tinkering;} & \pb{- Add a success message to the repository page that appears after the click on the commits changes button.} \\ \hline \hline
\end{tabular}
\end{table*}
\subsection{Discovering and Fixing inclusivity bugs on GitHub}
To answer RQ1, we conducted a GenderMag analysis to investigate inclusivity bugs on the GitHub interface and found 12 inclusivity bugs. Table~\ref{tab:inclusivitybugs} summarizes the problems, associated GenderMag facets, and how we fixed them. The fixes leveraged the design principles of visibility and feedback, along with clarity in instructions to reduce information overload. The specific UI design changes were inspired by successful fixes to inclusivity bugs compiled in the GenderMag design catalog~\footnote{\url{https://gendermag.org/dc/}}. The parts of the GitHub interface where these bugs were found can be accessed in the supplementary material\footnote{\url{https://figshare.com/s/4e7724bde0b1d47ecaeb}}.
In \textit{Goal \#1 - Submit pull request}, we investigated the GitHub interface that an average user interacts with to edit a file and open a pull request. For this goal, we found five inclusivity bugs. Among the reported bugs, we found difficulty in understanding the workflow after the file was edited (bug \#4). This can impact Abi's facet of \textit{information processing style}. Comprehensive information processors tend to gather information in full to form a complete understanding of a problem before attempting to solve it, while selective information processors pursue knowledge selectively; they take the first promising path, and if they do not find what they need, they back out and try the next path~\cite{meyers2015revisiting}. However, remembering the extensive information gathered may be difficult as it increases cognitive load~\cite{meyers2015revisiting}. To address this problem, we proposed the inclusion of a progress bar indicating the steps of the workflow (improved \textit{feedback}) and a tooltip (improved \textit{visibility}) to explain what happens when the file is edited to reduce the cognitive load of information to the user (Figure~\ref{fig:commit}).
\begin{figure}[!ht]
\centering
\vspace{-1.7mm}
\includegraphics[width=8cm]{images/vector/gitcommitnew.pdf}
\caption{Goal \#1 / Bugfix \#2 - plugin interface: inclusion of progress bar and tooltip.}
\label{fig:commit}
\end{figure}
In \textit{Goal \#2 - View changed files}, we found one inclusivity bug, in which participants with Abi's cognitive style face barriers in understanding the workflow. After the user opens a pull request, they are redirected to another page, which does not indicate what can be done next. This affects users with Abi's facets of \textit{learning (by process vs. by tinkering)} and \textit{computer self-efficacy}. A user with Abi's style would be lost, not knowing what to do next. Our solution adds a tooltip to the navbar that describes some actions that can be performed in the pull request (improved \textit{visibility}) (Figure~\ref{fig:fileschanged}).
\begin{figure}[!ht]
\centering
\includegraphics[width=8cm]{images/vector/fileschangedcropped.pdf}
\vspace{-1.7mm}
\caption{Goal \#2 / Bugfix \#6 - plugin interface: inclusion of tooltip to guide users.}
\label{fig:fileschanged}
\end{figure}
In \textit{Goal \#3 - Request help to solve the PR}, we found 2 inclusivity bugs that could affect users' performance with Abi's cognitive style. The pull-request interface is not straightforward. Once the user opens the pull request, it is not clear that it is possible to mention someone in the comment box to ask for help. This lack of information affects users with Abi's facets of \textit{learning (by process vs. by tinkering)} and \textit{computer self-efficacy}. To address this bug, we included a tooltip in the @ symbol icon to display ``\textit{Use @ to mention a contributor to help},'' as illustrated in Figure~\ref{fig:comentario}.
\begin{figure}[!ht]
\centering
\includegraphics[height=3.3cm]{images/vector/gitcomentario.pdf}
\vspace{-1.7mm}
\caption{Goal \#3 / Bugfix \#7 - plugin interface: inclusion of tooltip to guide users.}
\label{fig:comentario}
\vspace{-3mm}
\end{figure}
Moreover, after the mention is made, the GitHub interface does not give any feedback about what happens next, affecting users with Abi's facets of \textit{information processing style} and \textit{computer self-efficacy}. This can impair the user's ability to continue with the pull request: even if they asked for help, they would be unsure whether the mentioned developer would receive a notification to help them. To fix this bug, we proposed the addition of a confirmation message (improved \textit{feedback}) to the top of the page informing the user that \textit{The mentioned user will receive a notification and may help you to work on the pull request}, as illustrated in Figure~\ref{fig:message}.
\begin{figure}[!ht]
\centering
\includegraphics[width=8cm]{images/vector/gitcropcomment.pdf}
\caption{Goal \#3 / Bugfix \#8 - plugin interface: inclusion of confirmation message to provide feedback to users.}
\label{fig:message}
\vspace{-1.7mm}
\end{figure}
In \textit{Goal \#4 - Upload a file}, to upload an image to an OSS project, the user needs push access to it. For this goal, we found four inclusivity bugs. The major bug is related to the second subgoal: it is not possible to upload a file because the newcomer has neither a fork of the repository nor push access to the original repository. The interface only presents a message saying that the user needs push access to the repository, but gives no direction to help the user. This bug impacts the GenderMag facets of \textit{information processing style}, \textit{attitude towards risk}, and \textit{motivations}.
We proposed the following fixes to address this bug: we changed the message to give better \textit{feedback} informing the user that it is necessary to fork the repository and made the fork button green to highlight that it is enabled on the page. The new message states \textit{In order to upload files, click the fork button in the upper right} (see Figure~\ref{fig:forktwo}).
\begin{figure}[!ht]
\centering
\includegraphics[width=7cm]{images/vector/cropgitfork2.pdf}
\vspace{-1.7mm}
\caption{Goal \#4 / Bugfix \#9 - plugin interface: change of message and color of the fork button.}
\label{fig:forktwo}
\end{figure}
\rqone[
\tcblower
\textbf{Answer:} We found 12 inclusivity bugs after applying the GenderMag inspection method to the four tasks described in our research method. These bugs generally correlate with two or more GenderMag facets and collectively cover the five facets described by the GenderMag method (i.e., motivation, self-efficacy, attitude towards risk, information processing, and learning).
]{}
\subsection{Effects of removing GitHub inclusivity bugs}
In RQ2, we investigate how mitigating the inclusivity bugs found in Step 1 affects users with Abi's and Tim's cognitive styles. Table~\ref{tab:matrixcompleted} presents the number of users who correctly completed the tasks and the number of failures, comparing the groups and the different personas.
\begin{table*}[!bp]\scriptsize
\centering
\vspace{-2.5mm}
\caption{Number of tasks completed or failed by participants}
\label{tab:matrixcompleted}
\begin{tabular}{lcc|cc|cc|cc|cc}
\cline{2-11}
& \multicolumn{2}{c|}{\textbf{Task \#1}} & \multicolumn{2}{c|}{\textbf{Task \#2}} & \multicolumn{2}{c|}{\textbf{Task \#3}} & \multicolumn{2}{c|}{\textbf{Task \#4}} & \multicolumn{2}{c}{\textbf{All Tasks}} \\ \cline{2-11}
\multicolumn{1}{c}{\textbf{}} & \multicolumn{1}{c|}{\textbf{Completed}} & \textbf{Failed} & \multicolumn{1}{c|}{\textbf{Completed}} & \textbf{Failed} & \multicolumn{1}{c|}{\textbf{Completed}} & \textbf{Failed} & \multicolumn{1}{c|}{\textbf{Completed}} & \textbf{Failed} & \multicolumn{1}{c|}{\textbf{Completed}} & \textbf{Failed} \\ \hline \hline
\multicolumn{1}{l|}{\textbf{Control}} & \multicolumn{1}{c|}{\cellcolor[HTML]{65C194}33} & \cellcolor[HTML]{FDF3F3}3 & \multicolumn{1}{c|}{\cellcolor[HTML]{61BF91}34} & \cellcolor[HTML]{FEF7F7}2 & \multicolumn{1}{c|}{\cellcolor[HTML]{7DCBA4}28} & \cellcolor[HTML]{F9DDDD}8 & \multicolumn{1}{c|}{\cellcolor[HTML]{CCEBDC}11} & \cellcolor[HTML]{EA9595}25 & \multicolumn{1}{c|}{\cellcolor[HTML]{84CDA9}106} & \cellcolor[HTML]{F7D7D7}38 \\ \hline
\multicolumn{1}{l|}{\textbf{Plugin}} & \multicolumn{1}{c|}{\cellcolor[HTML]{5CBD8D}38} & \cellcolor[HTML]{FFFCFC}1 & \multicolumn{1}{c|}{\cellcolor[HTML]{57BB8A}39} & \cellcolor[HTML]{FFFFFF}0 & \multicolumn{1}{c|}{\cellcolor[HTML]{60BF90}37} & \cellcolor[HTML]{FEF8F8}2 & \multicolumn{1}{c|}{\cellcolor[HTML]{60BF90}37} & \cellcolor[HTML]{FEF8F8}2 & \multicolumn{1}{c|}{\cellcolor[HTML]{5DBE8E}151} & \cellcolor[HTML]{FFFBFB}5 \\ \hline
\multicolumn{1}{l|}{\textbf{Abi - Control}} & \multicolumn{1}{c|}{\cellcolor[HTML]{77C8A0}15} & \cellcolor[HTML]{FAE3E3}3 & \multicolumn{1}{c|}{\cellcolor[HTML]{62C092}17} & \cellcolor[HTML]{FEF6F6}1 & \multicolumn{1}{c|}{\cellcolor[HTML]{96D5B6}11} & \cellcolor[HTML]{F4C6C6}7 & \multicolumn{1}{c|}{\cellcolor[HTML]{CBEADB}5} & \cellcolor[HTML]{EA9696}13 & \multicolumn{1}{c|}{\cellcolor[HTML]{8FD2B1}48} & \cellcolor[HTML]{F5CDCD}24 \\ \hline
\multicolumn{1}{l|}{\textbf{Tim - Control}} & \multicolumn{1}{c|}{\cellcolor[HTML]{57BB8A}18} & \cellcolor[HTML]{FFFFFF}0 & \multicolumn{1}{c|}{\cellcolor[HTML]{60BF90}17} & \cellcolor[HTML]{FEF8F8}1 & \multicolumn{1}{c|}{\cellcolor[HTML]{68C296}17} & \cellcolor[HTML]{FCF0F0}1 & \multicolumn{1}{c|}{\cellcolor[HTML]{CDEBDC}6} & \cellcolor[HTML]{EA9494}12 & \multicolumn{1}{c|}{\cellcolor[HTML]{7BCAA3}58} & \cellcolor[HTML]{F9DFDF}14 \\ \hline
\multicolumn{1}{l|}{\textbf{Abi - Plugin}} & \multicolumn{1}{c|}{\cellcolor[HTML]{57BB8A}18} & \cellcolor[HTML]{FFFFFF}1 & \multicolumn{1}{c|}{\cellcolor[HTML]{57BB8A}19} & \cellcolor[HTML]{FFFFFF}0 & \multicolumn{1}{c|}{\cellcolor[HTML]{63C093}18} & \cellcolor[HTML]{FDF5F5}1 & \multicolumn{1}{c|}{\cellcolor[HTML]{6FC59B}17} & \cellcolor[HTML]{FBEAEA}2 & \multicolumn{1}{c|}{\cellcolor[HTML]{5BBD8D}72} & \cellcolor[HTML]{FEF7F7}4 \\ \hline
\multicolumn{1}{l|}{\textbf{Tim - Plugin}} & \multicolumn{1}{c|}{\cellcolor[HTML]{5EBE8F}20} & \cellcolor[HTML]{FEF9F9}0 & \multicolumn{1}{c|}{\cellcolor[HTML]{57BB8A}20} & \cellcolor[HTML]{FFFFFF}0 & \multicolumn{1}{c|}{\cellcolor[HTML]{5EBE8F}19} & \cellcolor[HTML]{FEF9F9}1 & \multicolumn{1}{c|}{\cellcolor[HTML]{57BB8A}20} & \cellcolor[HTML]{FFFFFF}0 & \multicolumn{1}{c|}{\cellcolor[HTML]{5BBD8D}79} & \cellcolor[HTML]{FFFCFC}1 \\ \hline
\hline
\end{tabular}
\end{table*}
We evaluated the effectiveness of both groups in completing the tasks using the \textit{Chi-Square test}, which checks the independence of two categorical variables~\cite{sureiman2013conceptual}. Our results indicate an effectiveness gap between users with Tim's and Abi's cognitive styles in tasks \#2 and \#3 (Table~\ref{tab:difference}). Abis were statistically significantly worse than Tims in \textit{Task \#2 - View changed file} (61\% (Abi) vs. 94\% (Tim), \textit{p-value = 0.016}) and \textit{Task \#3 - Request help to solve the PR} (61\% (Abi) vs. 94\% (Tim), \textit{p-value = 0.016}). In the plugin group, by contrast, there were no statistically significant differences, and users with Tim's and Abi's facets achieved similar performance (Task \#2 - 100\% (Abi) vs. 100\% (Tim); Task \#3 - 94\% (Abi) vs. 95\% (Tim)).
\begin{table*}[!hbtp]\scriptsize
\centering
\vspace{-2.5mm}
\caption{Effectiveness of tasks completed and comparison among groups.}
\label{tab:difference}
\begin{tabular}{c|cc|cc|cccc}
\hline
& \multicolumn{2}{c|}{\textbf{Abi}} & \multicolumn{2}{c|}{\textbf{Tim}} & \multicolumn{4}{c}{\textbf{Differences}} \\ \cline{2-9}
\multirow{-2}{*}{\textbf{Tasks}} & \multicolumn{1}{c|}{\textbf{Control}} & \textbf{Plugin} & \multicolumn{1}{c|}{\textbf{Control}} & \textbf{Plugin} &
\multicolumn{1}{c|}{{\color[HTML]{F56B00} \textbf{Abi}}-C x {\color[HTML]{3166FF} \textbf{Tim}}-C} &
\multicolumn{1}{c|}{{\color[HTML]{F56B00} \textbf{Abi}}-P x {\color[HTML]{3166FF} \textbf{Tim}}-P} &
\multicolumn{1}{c|}{{\color[HTML]{F56B00} \textbf{Abi}}-P x {\color[HTML]{F56B00} \textbf{Abi}}-C} &
\multicolumn{1}{c}{{\color[HTML]{3166FF} \textbf{Tim}}-P x {\color[HTML]{3166FF} \textbf{Tim}}-C} \\ \hline \hline
\textbf{\#1} & \multicolumn{1}{c|}{\cellcolor[HTML]{77C8A0}83.3\%} & \cellcolor[HTML]{57BB8A}94.7\% & \multicolumn{1}{c|}{\cellcolor[HTML]{57BB8A}100\%} & \cellcolor[HTML]{5EBE8F}100\% &
\multicolumn{1}{c|}{
-} & \multicolumn{1}{c|}{-} & \multicolumn{1}{c|}{-} & - \\ \hline
\textbf{\#2} & \multicolumn{1}{c|}{\cellcolor[HTML]{62C092}61.1\%} & \cellcolor[HTML]{57BB8A}100\% & \multicolumn{1}{c|}{\cellcolor[HTML]{60BF90}94.4\%} & \cellcolor[HTML]{57BB8A}100\% &
\multicolumn{1}{c|}{
{\color[HTML]{FE0000} \textbf{$\Downarrow$}} -33.3\%
*
}
& \multicolumn{1}{c|}{-} & \multicolumn{1}{c|}{-} & - \\ \hline
\textbf{\#3} & \multicolumn{1}{c|}{\cellcolor[HTML]{96D5B6}61.1\%} & \cellcolor[HTML]{64C093}94.7\% & \multicolumn{1}{c|}{\cellcolor[HTML]{68C296}94.4\%} & \cellcolor[HTML]{5EBE8F}95\% & \multicolumn{1}{c|}{
{\color[HTML]{FE0000} \textbf{$\Downarrow$}} -33.3\%
*
} & \multicolumn{1}{c|}{-} & \multicolumn{1}{c|}{
{\color[HTML]{32CB00} \textbf{$\Uparrow$}} +36.6\%
*
} & - \\ \hline
\textbf{\#4} & \multicolumn{1}{c|}{\cellcolor[HTML]{CBEADB}27.7\%} & \cellcolor[HTML]{70C59B}89.4\% & \multicolumn{1}{c|}{\cellcolor[HTML]{CDEBDC}33.3\%} & \cellcolor[HTML]{57BB8A}100\% & \multicolumn{1}{c|}{-} & \multicolumn{1}{c|}{-} & \multicolumn{1}{c|}{
{\color[HTML]{32CB00} \textbf{$\Uparrow$}} +61.7\%
**
} &
{\color[HTML]{32CB00} \textbf{$\Uparrow$}} +66.6\%
**
\\ \hline
\multicolumn{9}{l}{(* p$\leq$0.05; ** p$\leq$0.01)}
\\ \hline \hline
\end{tabular}
\end{table*}
In the control group, participants with both Tim's and Abi's cognitive styles struggled to complete \textit{Task \#4 - Upload file}. The bug fixes implemented in the plugin statistically significantly helped both---Tim (100\% (plugin) vs. 33\% (control), \textit{p-value $<$ 0.001}) and Abi (89\% (plugin) vs. 27\% (control), \textit{p-value = 0.00013}).
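To make these comparisons concrete, the Python sketch below shows how such a Chi-Square test can be computed with \texttt{scipy}. The $2\times2$ counts are hypothetical values chosen to be consistent with the reported 61.1\% (Abi) vs. 94.4\% (Tim) effectiveness figures; they are not the study's raw data.
\begin{verbatim}
# Sketch of the effectiveness comparison via a Chi-Square test.
# The counts are hypothetical, matching the reported percentages.
from scipy.stats import chi2_contingency

#              completed  failed
contingency = [[11, 7],   # Abi - Control (11/18 = 61.1%)
               [17, 1]]   # Tim - Control (17/18 = 94.4%)

chi2, p, dof, expected = chi2_contingency(contingency, correction=False)
print(f"chi2 = {chi2:.2f}, p-value = {p:.3f}")  # p close to 0.016
\end{verbatim}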
\begin{figure}[!t]
\centering
\includegraphics[width=8cm]{images/ControlPluginText.pdf}
\vspace{-2.5mm}
\caption{Self-efficacy results.}
\label{fig:selfefficacy}
\end{figure}
In our study, we also asked the participants to answer self-efficacy questionnaires at the beginning and end of the study. Figure~\ref{fig:selfefficacy} presents the results of the self-efficacy questionnaire for both control and plugin participants, grouped by cognitive style. Given the low self-efficacy values in the control group, our results suggest that the GitHub environment can be difficult for every newcomer, regardless of their GenderMag cognitive style (i.e., Abi or Tim). Both control and plugin participants increased their self-efficacy; however, the increase in the plugin group was larger for users with both Abi's and Tim's cognitive styles. We applied the Wilcoxon signed-rank test, a frequently used nonparametric test for paired data (e.g., pre- and post-treatment measurements)~\cite{rosner2006wilcoxon}, which indicates that the difference is significant for Tim plugin Pre vs. Tim plugin Post (\textit{p-value = 0.0009}) and for Abi plugin Pre vs. Abi plugin Post (\textit{p-value = 0.005}). Furthermore, we calculated Cliff's delta effect size~\cite{cliff1993dominance} for the self-efficacy questionnaire to quantify the magnitude of the difference between the two distributions. The comparison between Tim plugin Pre vs. Tim plugin Post has a large effect size (\textit{delta = 0.682}), and the comparison between Abi plugin Pre vs. Abi plugin Post also has a large effect size (\textit{delta = 0.542}).
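For transparency about this procedure, the Python sketch below shows how the paired Wilcoxon test and Cliff's delta can be computed. The \texttt{pre} and \texttt{post} arrays are placeholders, not the study's questionnaire data.
\begin{verbatim}
# Sketch of the paired pre/post self-efficacy analysis.
import numpy as np
from scipy.stats import wilcoxon

pre = np.array([3.0, 3.5, 2.5, 4.0, 3.0])   # hypothetical pre-study scores
post = np.array([4.0, 4.5, 3.5, 4.5, 4.0])  # hypothetical post-study scores

statistic, p_value = wilcoxon(pre, post)  # paired, nonparametric test
print(f"Wilcoxon p-value = {p_value:.4f}")

def cliffs_delta(a, b):
    """Cliff's delta: P(a > b) - P(a < b) over all pairs."""
    greater = sum(x > y for x in a for y in b)
    less = sum(x < y for x in a for y in b)
    return (greater - less) / (len(a) * len(b))

print(f"Cliff's delta = {cliffs_delta(post, pre):.3f}")
\end{verbatim}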
We also collected qualitative data in our experiment by asking two questions after each task: (i) \textit{what did you find most difficult in the process?} and (ii) \textit{what in the interface helped you the most in the process?}
\textbf{Participants' difficulties.} Participants from the control group reported facing more challenges than the plugin group. In \textit{Task \#1 - Submit pull request} - We asked users to edit the README file, since it is simpler to understand and modify than a programming file (e.g., XML or Java). Moreover, the README is an essential file for a new contributor who wants to get an overview of an OSS project and start contributing to it. Users with Abi's cognitive style in the control group faced difficulties in figuring out how to make the pull request. Tim users in the control group mentioned, among other challenges, difficulty finding the editor and the README file. As mentioned by P44, ``\textit{Finding the README file, definitely, because I didn't know where to look for all these files, I didn't think you would be like in the middle of those files.}'' This difficulty is related to bug \#1 (Table~\ref{tab:inclusivitybugs}) and was not mentioned in the plugin group, which suggests that the proposed solution was beneficial for users with both personas. On the other hand, in the plugin group, users with Abi's persona had difficulty figuring out how to edit the file and save it after editing. This comment highlights Abi's risk-averse facet when users have to use new technologies. P1 said, ``\textit{Starting the Edit process was really hard. And once you have a little computer knowledge and you actually get into the Edit tab, you can look at the various files you want to edit and then go through the process}''.
In \textit{Task \#2 - View changed file} - One difficulty that arose in the control group was finding the changed files, which is related to bug \#6 (Table~\ref{tab:inclusivitybugs}). Both Abi and Tim users in the control group mentioned having difficulty finding the changed files in the pull request interface. In the plugin group, two Tim users and no Abi users mentioned facing this problem. This suggests that the solution proposed in our work reduced the impact of this bug for both groups of participants.
In \textit{Task \#3 - Request help to solve the PR} - Tim users in the control group mentioned that they had difficulty finding how to request help, and their first idea was to contact the experienced user directly. P43 mentioned: ``\textit{I thought there would be a way that I could just like leave them a personal message and ask for help rather than posting. It looks like a public comment.}'' Other participants tried to contact the user directly by going to their GitHub profile page and looking for a direct-message option, which GitHub does not offer. In the plugin group, only one Abi user reported facing challenges in finding an option to request help. Another Abi user mentioned that the task was intuitive (P40): ``\textit{Once I recognized that I needed to do this task as well, it was pretty intuitive.}''
In \textit{Task \#4 - Upload file} - Users in the control group faced more difficulty with the push-access requirement: one Abi user and ten Tim users mentioned this difficulty. Of these ten users, only three overcame the challenge and completed the task successfully. This problem is described in Table~\ref{tab:inclusivitybugs} as bug \#9. With the solutions proposed in the plugin group, no plugin user mentioned that difficulty, which highlights the contribution of the plugin in helping users with different cognitive styles.
\textbf{Interface help.} We also investigated what in the interface helped participants. In \textit{Task \#1 - Submit pull request} - Users with Abi's cognitive style in the plugin group mentioned that button colors and the tooltips helped. Tooltips were included in the interface to help users who need to gather information before starting to use the technology (\textit{information processing} style). It also improved their \textit{self-efficacy} facet by letting users know they were on the right path. Indeed, P29 mentioned that ``\textit{the tooltip guides me into the execution of the task}''.
Abi users in the control group said that the interface did not help and was confusing: P40 mentioned: ``\textit{I need to stay on code to do the Edit. So, I scrolled down and found the proposed changes. And I thought, `you know that I don't want to lose my progress on the code.'}''. Tim users in the control group mentioned the button colors, the interface labels, and the icons helped in general.
In \textit{Task \#2 - View changed file}, both Tim and Abi users in the plugin group mentioned that the changed files in the navigation menu and the tooltips helped them to complete the task.
As in Task \#1, Tim users in the control group showed more satisfaction with regular GitHub interface elements.
In \textit{Task \#3 - Request help to solve the PR}, plugin users noted that the mention icon (@) helped. This fix supported users who \textit{learn by process} and have lower \textit{computer self-efficacy}. The fix was designed to make the action of mentioning someone self-explanatory to start interacting in the pull request interface.
In \textit{Task \#4 - Upload file}, Abi users in the plugin group mentioned that the green fork button and the message helped. As mentioned by P26, ``\textit{interface messages when trying to upload the file helps a lot}''. Tim users in the plugin group said the same, exemplified by P5: ``\textit{So when I went back, I saw that the fork was highlighted in like the same green color. (...) It really just puts me back in the right direction}''.
\rqtwo[
\tcblower
\textbf{Answer:} We were able to remove the gap between users with Tim's versus Abi's cognitive styles in \textit{Task \#2} and \textit{Task \#3}. In the control group, Abi users performed worse than Tim users, while in the plugin group, people with both cognitive styles achieved similar performance. For \textit{Task \#4}, people with both cognitive styles struggled, and our changes helped both. Hence, fixing inclusivity bugs can help all users. The qualitative data highlighted that participants from the control group faced more challenges than those in the plugin group, and that the interface elements introduced by the bug fixes helped participants complete the tasks.
]{}
\section{Discussion}
\label{sec:discussion}
Many factors can cause cognitive biases. These factors relate to how individuals think and can be associated with limited cognitive capacity (e.g., availability bias) or individuals' problem-solving styles (e.g., present bias). Disregarding cognitive biases can result in negative outcomes~\cite{chattopadhyay2020tale}; likewise, neglecting cognitive styles can harm users' performance. Our study investigated how users with different cognitive styles interact with the original GitHub interface and with the modified version proposed in our research. We wanted to examine how the inclusivity bugs in software can impact the performance of users with different cognitive styles (i.e., Abi's and Tim's).
The GenderMag method considers inclusivity bugs to be issues tied to one or more cognitive facets. These bugs are not only cognitive inclusivity bugs but also gender-inclusivity bugs, because the facets capture statistical gender differences in how people problem-solve~\cite{beckwith2006tinkering, burnett2011gender, burnett2010gender, burnett2016gendermag}.
We found 12 inclusivity bugs after applying the GenderMag method on GitHub. We proposed a plugin to address these inclusivity bugs, aiming to improve users' performance. While participants in the control group mentioned these bugs as difficulties, those in the plugin group did not mention several of them, indicating that it was possible to fix them with tooltips (Bugfixes \#1, \#3), a progress bar (Bugfix \#4), and colors and messages (Bugfix \#9).
Among the bugs found in our study, some also appeared in the work of~\citet{chatterjee2021aid}, which focused on analyzing project pages: (i) ``README: Unclear path to the readme'', Bug \#1 in our study; and (ii) ``Template: Pull request: not enough instructions'', related to Bugs \#3 and \#4 in our study. Their study also identified the bugs ``Not enough information: to choose options'' and ``Not enough information: to take action''. We also identified these problems in different parts of the interface through our analysis. Our bug fixes include various tooltips to help users choose the best option to complete the required actions (bugs \#1, \#2, \#3, \#4, \#5, \#6, and \#11).
Although we mitigated some of the bugs, others were still present in the plugin group. Participants still had difficulties editing the file (Bug \#2): our fixes to this bug only tackled the page that appears after clicking on the edit button. Similarly, participants still had difficulty finding the option to request help (Bug \#7). We added a tooltip to the @ symbol icon, but participants with both cognitive styles wanted to send direct messages to experienced developers, which GitHub does not support. The icons in the comment box mostly relate to text formatting, and the mention icon is mixed in with them.
Our results show a gap between users with Tim's and Abi's cognitive styles in the original GitHub platform in \textit{Task \#2} and \textit{Task \#3}, with Abi users performing worse than Tim users. These differences are mitigated with changes to the interface oriented to better support Abi's cognitive style. This is a promising result that can be further extended to improve performance and facilitate the onboarding of different users.
When the gap between newcomers' skills and those needed to accomplish the task is too broad, it demotivates newcomers, causing them to drop out~\cite{balali2020recommending, steinmacher2015understanding}. This can particularly impact students, who typically have limited skills, time, and experience when first contributing to an OSS project. It also highlights the importance of the GenderMag method to find inclusivity bugs that could help a diverse set of users of a software product overcome cognitive style barriers that make them feel insecure about how to complete a task in an OSS environment.
Previous studies have reported more inclusivity bugs when using Abi than when using the other GenderMag personas~\cite{burnett2016finding, mendez2018open}. Thus, removing the inclusivity bugs makes users with Abi's cognitive styles face fewer challenges, as we observe in our study. Concerning users with Tim's persona, removing the inclusivity bugs also helped them face fewer challenges in some situations. These results highlight that by attempting to make software less gender-biased, we help improve the performance of all users with different cognitive styles.
\citet{padala2020gender} investigated in depth how the three most frequently occurring GenderMag facets in their work (i.e., \textit{Information Processing}, \textit{Self-efficacy}, and \textit{Learning Style}) came together with the top reported barrier categories. We found several similarities with our results. First, they presented how \textit{information processing style} biases make Abi feel disoriented due to insufficient upfront information. In our study, Abi participants in the control group also reported feeling lost. Second, concerning \textit{computer self-efficacy}, their participants were worried and described a lack of knowledge of the technologies. These findings also appeared in our results---participants in the control group felt scared by the GitHub interface. This indicates that it is essential to improve the GitHub platform to prevent users with lower \textit{self-efficacy} from being negatively impacted.
Finally, in the \textit{learning style} facet, the participants in their study mentioned a lack of clear instructions on how to contribute. We observed that Abi participants in the control group got stuck in completing some tasks. This occurred because the GitHub interface did not provide any instructions on how to move forward to make progress and make a contribution.
We noticed in our results that the men (most of whom matched the Tim persona) felt more empowered to talk about the challenges they faced, complaining about things and saying how they would improve the GitHub interface. In contrast, the women (most of whom matched the Abi persona) did not complain as much. These results can be explained by the facets of behavior described in the literature~\cite{padala2020gender}, wherein women tend to have lower \textit{computer self-efficacy} (confidence) than men within their peer sets. This can affect their behavior with technology~\cite{wang2018competence, burnett2011gender, cazan2016computer, huffman2013using}, indicating that women feel less comfortable sharing their opinions, tending to think that it is their fault for not being able to use a certain technology. We point out that it is essential to make technology inclusive for all users, regardless of how comfortable they are in expressing their opinions.
\section{Implications}
\label{sec:implications}
\textbf{\textit{Implications for social coding platforms}}. For the designers and developers of GitHub and other social coding platforms, our results highlight the importance of developing software that encompasses the diversity of users. Social coding platforms can introduce inclusivity biases that cut across a large number of projects. Social coding platform designers should consider newcomers' cognitive styles to understand how they process information or use the technology itself and how they can accomplish tasks, in order to help them reach their main goals. A more inclusive design means including more users and making it easier for them to contribute to OSS projects.
\textbf{\textit{Implications for Maintainers of OSS projects}}. Our work reports inclusivity bugs that newcomers can face and the parts of a task where they can get stuck. Maintainers can use this information to consider how they could help mitigate those challenges. One suggestion would be to provide more information in the README/Contributing.md files. We also hope our work can foster the interest of OSS communities in investigating inclusivity bugs.
\textbf{\textit{Implications for newcomers (Abis and Tims)}}. Our results are important for newcomers. We showed the difficulties they face, where they struggle most, and how the interface can help them.
Abi users, who typically report lower computer self-efficacy, should be aware that the interface was not designed for their cognitive style, and that poor performance can be a consequence of inclusivity bugs. Tim users should be aware that developers with diverse cognitive styles exist and should respect these differences.
\textbf{\textit{Implications for educators}}. Familiarizing students with the OSS contribution process is becoming more common~\cite{pintoFSG17}. Contributing to a real project helps students gain valuable real-life experience, allows them to add this experience to their resumes, and makes them more likely to find jobs. Our results highlight that, depending on students' cognitive styles, they can face more challenges when interacting with the GitHub platform. Educators should understand those challenges and teach students how to overcome them. They can also explore other ways to facilitate students' use of the GitHub platform.
\section{Limitations}
\label{sec:threatstovalidity}
Our investigation has threats to validity and limitations. We focused our analysis on finding inclusivity bugs for newcomers based on GenderMag's Abi persona. We followed the guidelines suggested by \citet{hilderbrand2020engineering} and focused on this persona because its facet values tend to be more undersupported in software than those of the other personas~\cite{guizani2022debug, burnett2016finding}. However, fixing problems from only this persona's perspective could leave non-Abi-like newcomers less supported. We mitigated this risk by empirically evaluating the fixes with both Abi and Tim persona participants. Tim participants also showed performance improvements for some tasks.
\balance
Despite our best efforts to recruit women for the experiment, there is a gender imbalance in the sample. Although the number of women is lower than the number of men, we have almost the same number of Abi (37) and Tim (38) participants. Moreover, the goal of the paper is to investigate the cognitive facets, and some men also present facets associated with Abi's persona.
In the GenderMag analysis, we carefully conducted the GenderMag walkthroughs on GitHub following the procedures described by \citet{burnett2016gendermag}. We held several meetings to review the GenderMag analysis and the solutions proposed to fix the inclusivity bugs, and the members of our research group had previous experience in conducting GenderMag analyses. Another possible concern is that the GenderMag method relies solely on participants' gender; however, that is not the case. \citet{vorvoreanu2019gender} state that the keys to more inclusive software lie not in someone's gender, but in the facet values themselves. Accordingly, GenderMag can be used to find and fix inclusiveness issues without ever speaking of gender.
We recruited 75 undergraduate students from diverse STEM majors at 5 different universities in the US and Brazil. The majority of participants were pursuing Computer Science majors. Future studies may investigate whether newcomers from different countries or with different education levels would obtain similar results.
Regarding the plugin development and evaluation, we ran tests during development to assess the plugin's usability and correctness. However, the plugin could behave differently depending on the browser. To mitigate this threat, we made a pre-configured computer available in case the plugin did not behave as expected during the experiment.
We collected the time participants spent completing the tasks. However, the high number of participants who did not complete the tasks made it hard to compare time differences between groups. Future studies with larger samples may help investigate time differences.
Concerning the analysis of the qualitative results, we are aware that the data interpretation can lead to bias. To mitigate subjectivity, we employed two researchers who independently coded the answers and conducted meetings to discuss and resolve conflicts.
\section{Conclusions}
\label{sec:conclusion}
Making software products usable to people regardless of their differences has practical importance. If a project's development tools or products fail to achieve inclusiveness, not only does its adoption fall but so does the involvement of underrepresented populations in the teams themselves~\cite{ford2016paradise, mendez2018open}. In this work, we found 12 inclusivity bugs in four tasks that are common for OSS newcomers. These bugs mainly affect users with cognitive styles that are more common to women---defined in the Abi persona~\cite{burnett2016gendermag}. We proposed fixes to the inclusivity bugs, implemented them in a plugin that changed the GitHub interface, and tested them in an experiment with 75 newcomers.
We found that Abi participants in the control group (regular GitHub) underperformed Tim participants in some tasks, with Abi users in the control group being able to complete only 67\% of the tasks. Implementing the fixes proposed in the GenderMag analysis reduced these differences and improved the performance of Abi participants to 95\%, indicating that the redesign improves ease of use. In one of our tasks, both Tim and Abi participants faced challenges, and the bug fixes implemented in the plugin statistically significantly helped both groups. We also noticed an overall increase in self-efficacy perception for both Abi and Tim persona users in the plugin group, highlighting how solving inclusivity bugs for minorities can also help the majority population.
In future work, we plan to use this study's results to continue exploring the gender-biased barriers in tools and infrastructure, to improve newcomers' performance, and to make the GitHub platform and its projects more friendly for newcomers who want to engage in OSS projects.
\section*{Acknowledgment}
This work is partially supported by the National Science Foundation under grant numbers 1815486, 1815503, 1900903, 1901031, 2236198, 2235601, CNPq \#313067/2020-1, CNPq/MCTI/FNDCT \#408812/2021-4, and MCTIC/CGI/FAPESP \#2021/06662-1. We also thank the students for participating in our study and Zachary Spielberger for helping with the plugin development.
\bibliographystyle{IEEEtranN}
\section{Introduction}
This work introduces Yggdrasil Decision Forests (YDF), a library for the training, serving, and interpretation of decision forests\footnote[2]{Yggdrasil Decision Forests, whose logo is illustrated in Figure~\ref{fig:logo}, is available at \href{https://github.com/google/yggdrasil-decision-forests}{https://github.com/google/yggdrasil-decision-forests}.}. Decision forests are a rich class of Machine Learning (ML) models that utilize decision trees as weak learners to learn powerful functions. Decision forests stand out for their ease of use, owing to a relatively small number of hyper-parameters with intuitive and stable default values; their competitive and often superior performance on tabular data; their native support for numerical and categorical features without the need for preprocessing; their robustness to noisy data; their sample efficiency; and their lightweight training and fast inference. Popular decision forest learning algorithms include Random Forests~\cite{breiman_2001}, Gradient Boosted Decision Trees~\cite{friedman_2001}, and AdaBoost~\cite{schapire_2013}, and popular decision tree learning algorithms include C4.5~\cite{quinlan_1994} and CART~\cite{breiman_1984}.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.1]{ydf_1024.png}
\caption{Logo of the YDF library. Yggdrasil is a mythological holy tree in Nordic culture. While not as ambitious as the Yggdrasil tree, our library aims to reach and support all relevant domains of machine learning.}\label{fig:logo}
\end{figure}
In spite of the advances in other areas of ML, and the growth of neural networks in particular over the past decade, decision forests remain competitive solutions to many real-world problems and, as a technology, are integral to many production systems to date, as evidenced by the many open-source libraries~\cite{ke_2017_lightgbm,chen_2016}. As such, software libraries that facilitate the use of decision forests are central to software engineering in industry as well as academic research, with the design choices behind them having significant ramifications for research and development. While we acknowledge that this is a challenge faced by software libraries in general, it is particularly acute in ML due to its many moving parts and the sheer volume of novel methods and algorithms that make up a training or inference algorithm. If a library proves rigid and hard to extend in the face of rapid and often unforeseen developments in the literature, then, by continually building atop that stack, developers and researchers alike run the risk of creating overly complex and hard-to-maintain software, which may ultimately constrain product quality and the scope of research.
With that in mind, we developed YDF based on four design principles: simplicity of use; safety of use; modularity and high-level abstraction; and compatibility and integration with other ML libraries. These guidelines have proved consequential for the development of YDF, which is what motivated us to share them with the community for the benefit of software engineers and researchers. We discuss the four pillars of YDF in Section~\ref{sec:principles}. To explain how these guidelines determined the design choices in the development of YDF, we discuss its architecture in detail in Section~\ref{sec:learner_and_models}. In Section~\ref{sec:usage_examples}, we present an application of YDF to classical ML problems, followed by a complete benchmark to compare YDF with existing solutions in Section~\ref{sec:benchmarks}. Section~\ref{sec:conclusion} concludes this work.
\section{Four design principles}
\label{sec:principles}
\subsection{Simplicity of use}
\emph{Simplicity of use} refers to the ease and efficiency with which a typical user of the library can deploy and maintain a solution. This is an increasingly difficult objective because, with the democratization of ML, the users of ML libraries now include not just researchers and experienced software engineers, but also a diverse population of non-ML researchers, students, and hobbyists, among others. In addition to standard software library development best practices, therefore, simplicity in the context of an ML library entails the following properties:
\begin{description}
\item[High-level interactions and messages:] The API, documentation, error messages, and results (such as model evaluation reports), should communicate with enough abstraction that is easily digested by the user but that nonetheless provides enough context for troubleshooting. This often means communicating ideas at a high and intuitive level with as much detail as necessary to allow the user to build a mental model of the library and its workflows. In addition, error messages must provide directions and guidelines to resolve the underlying issues. To illustrate this point, we write in Table~\ref{tab:error_message} two messages that describe the same underlying error from a supervised binary classification pipeline.
\item[Sensible default values and behavior:]
The library should rely on meaningful and documented default parameters and behaviors. Those default values and behaviors should be explicit to and adjustable by the user so that they can be tailored to their specific use case. For instance, while the quality of a model can often be improved by optimizing the hyper-parameters of the learning algorithm, some hyper-parameter values give sufficiently satisfactory results in most cases (e.g., a shrinkage rate of 0.1 is reasonable for most cases in gradient boosted trees learning). As another example, YDF can automatically determine the characteristics and semantic (e.g., numerical vs. categorical) of an input feature---as detailed in Section~\ref{sec:automated_feature_injestion}. The output of this automated system is then presented to the user, which gives the user an opportunity to rectify an incorrectly determined type or to modify it arbitrarily.
In practice, YDF takes this rule one step further by adopting the following philosophy: Any operation that can be automated should be automated, the user should be made aware of the automation, and should be given control over it. This is summarized in YDF's motto: \emph{``With only five lines of configuration, you can produce a functional, competitive, trained and tuned, fully evaluated and analysed machine learning model. With four more lines of configuration, you can compare this model to any other machine learning model. With three more lines, you can deploy your model in production.''}.
\item[Clarity and transparency:] The user should have an accurate mental model of what the library does. To that end, the library must concretely define its concepts and terminology, and must accurately document metrics and algorithms with citations to the literature where appropriate. The library should explicitly note any heuristic or approximation that it uses for efficiency reasons on large datasets (e.g., evaluation on a random subset of the dataset). Reports should be self-contained, readable, and exhaustive.
\end{description}
\begin{table*}
\caption{Example of (a) poor and (b) well-written messages for an error.}
\label{tab:error_message}
\noindent\fbox{
\parbox{\textwidth}{
(a) \texttt{\small
Invalid tensor shape name="internal\_tensor\_1" shape={[}None, 4{]},
dtype=int64, {[}large stack...{]}}
}
}
\noindent\fbox{
\parbox{\textwidth}{
(b) \texttt{\small
Binary classification training (task=BINARY\_CLASSIFICATION) requires
a training dataset with a label having 2 classes, however, 4 class(es)
were found in the label column "color". Those 4 class(es) are {[}blue,
red, green, yellow{]}. Possible solutions: (1) Use a training dataset
with two classes, or (2) use a learning algorithm that supports
single-class or multi-class classification e.g. learner='RANDOM\_FOREST'}
}
}
\end{table*}
\subsection{Safety of use}
Applied ML is rather unusual in that errors can lead to suboptimal yet entirely functional models! As an obvious example, tuning the hyper-parameters of a model on a held-out test dataset can lead to great offline but poor live model performance. This effect can be hard to distinguish from the impact of a distributional shift. Other common ML mishaps include modeling features according to an incorrect semantic (e.g., numerical features interpreted as categorical, thereby preventing the model from using order and scale information), or comparing trained models without accounting for the training and evaluation uncertainties.
The \emph{safety of use} principle aims to reduce the likelihood for both experienced and inexperienced users to introduce such errors and increase the chances of catching them during development. For an ML library this entails the following:
\begin{description}
\item[Warning and error messages]: Just as a compiler warns the user of potential mistakes, an ML library should look for possible errors and alert the user accordingly. When an error seems likely, or the impact of a potential error is catastrophic, the error should interrupt the operation by default, with an option to ignore it explicitly. When an error seems less likely, a non-interrupting warning will do instead.
Furthermore, the note on \emph{high-level interactions and messages} stated in the previous section applies to these warnings as well. For instance, when training a multi-class classification model on a label that looks like a numerical value (i.e., with a large number of unique values), the error could state: \texttt{\small The classification label column ``revenue'' looks like a regression column (4,123 unique values out of 50,000 examples, 99\% of the values look like numbers). Solutions: (1) Configure the training as a regression with task=REGRESSION, or (2) disable the error with disable\_error.classification\_ look\_like\_regression=true}.
\item[Easily accessible, correct methods]: The library should make it easy to (automatically) execute what are considered ML best practices. Explanations of those practices should be well documented and easily available to the user. For example, model evaluation should contain confidence bounds with a sufficiently detailed description of how they are computed (e.g., bootstrapping, closed-form solution, approximation) and how they may be used. Similarly, model comparison should include the results of appropriate statistical tests.
\end{description}
\subsection{Modularity and high level abstraction}
Modularity is a well-understood but informal blueprint for providing adaptability in software design~\cite{sullivan_2001}. In YDF, modularity implies that any sufficiently complex part of the code may be understood as an independent module, with the interface between the various modules relying on clearly and concretely defined high-level concepts that do not expose their internals, and that are extensible and interchangeable. The initial version of every module can be as simple and generic as possible, even at the expense of execution efficiency, to facilitate readability.
At the start of development, modularity incurs a development cost and may therefore seem unwarranted. In fact, overly generic and slow code can be counterproductive. In YDF, however, we observed that modularity brought about many advantages as we elaborate shortly. As the library grows with newly published techniques, some modules become overly complex or inefficient. In cases like that, modularity allows for the re-writing of a specific module in isolation without having to understand the library in its entirety. It also allows recycling unit- and integration-tests of the previous version of the module. A re-write may often involve breaking up a single module into sub-modular parts.
Consider, as an example, the \emph{splitter} routines that are at the core of every decision tree learning algorithm and are responsible for selecting split conditions for an intermediate node in the tree. In YDF, the initial splitter implementation was a single module handling only numerical features on classification labels. The splitter was implemented using an ``in-sorting'' approach (i.e., sorting feature values for each node) making it simple to implement and test, usable for both deep and shallow trees, but slower than more advanced or specialized splitters. A few other splitters supporting other common feature types were later added as separate modules.
As support for other types of features (e.g., categorical, pre-sorted numerical, categorical-set~\cite{guillame_bert_2020_catset}, with or without missing values), other types of labels (e.g., regression, ranking), parameters (e.g., divide and conquer growth, global growth~\cite{shi_2007}, oblique splits~\cite{tomita_2020}), and shape of trees (e.g., shallow, deep) became necessary, the splitters were refactored and sub-modularized. YDF splitter code is now organized into three types of modules responsible for label type, feature type, and the specific splitting logic, with the resulting organization favoring code reuse and reducing the cost of extension and maintenance. For example, the module handling binary classification (label type) is used for all feature types, and the module handling categorical features (feature type) is used for all label types. This new design incorporated the engineering experience acquired during the implementation of the first generation of splitters.
Modularity also allows for the cohabitation of both generic-and-slow and specialized-and-fast code, where the initial simple modules serve as the ground truth in unit testing of specialized or optimized modules. For example, as mentioned above, the first YDF splitters used a simple in-sorting approach. Later, more efficient but complex splitters (e.g., pre-sorting approach, distributed approach) used the in-sorting approach as unit tests. In the case of deep trees (e.g., trees trained by the Random Forest learning algorithm), in-sorting is sometimes more efficient than pre-sorting. The modularity thus allows YDF to dynamically choose the most efficient splitter implementation for each node.
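To make the in-sorting approach concrete, the Python sketch below finds the best threshold for a numerical feature under a regression objective by sorting the feature values at the node and scanning candidate thresholds. It is a didactic simplification with our own naming, not the library's actual C++ splitter.
\begin{verbatim}
import numpy as np

def best_numerical_split(values, labels):
    """In-sorting splitter sketch (regression): sort the node's feature
    values, then scan thresholds and keep the one minimizing the
    weighted variance of the two children. `values` and `labels` are
    NumPy arrays; names are illustrative, not YDF's API."""
    order = np.argsort(values)
    v, y = values[order], labels[order]
    n = len(y)
    # Prefix sums give each child's variance in O(1) per threshold.
    csum, csum2 = np.cumsum(y), np.cumsum(y ** 2)
    best_score, best_threshold = np.inf, None
    for i in range(1, n):
        if v[i] == v[i - 1]:
            continue  # no threshold separates identical values
        nl, nr = i, n - i
        sl, sr = csum[i - 1], csum[-1] - csum[i - 1]
        s2l, s2r = csum2[i - 1], csum2[-1] - csum2[i - 1]
        var_l = s2l / nl - (sl / nl) ** 2
        var_r = s2r / nr - (sr / nr) ** 2
        score = nl * var_l + nr * var_r  # weighted child variance
        if score < best_score:
            best_score, best_threshold = score, (v[i - 1] + v[i]) / 2
    return best_threshold, best_score
\end{verbatim}
More efficient splitters (e.g., pre-sorting or distributed ones) compute the same quantity with different memory access patterns, which is precisely what makes a simple in-sorting version a convenient ground truth for unit tests.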
\subsection{Integration with other ML libraries}
ML libraries and frameworks should easily interact with each other and allow compositions. The possibility of \emph{interaction} between libraries reduces the risk of the ``framework trapping effect,'' in which the user is limited to the methods available in the library they are most familiar with, or on which the project relies, possibly missing some more suitable methods available in other libraries, resulting in sub-optimal solutions. For example, while R~\cite{R_2022} contains a rich variety of data mining and decision forest libraries, it is not trivial for TensorFlow~\cite{tensorflow_2015} users to use them.
\emph{Composability} is important for both research and advanced production work. For example, the composition of decision forests and neural networks can lead to improvements in model quality~\cite{bruch_2020_differentiable,li_2019_combine,guillame_bert_2020_catset,guolin_2019_deep_gbm}. But neural network libraries often show poor efficiency when executing the branching algorithms used to train decision forests, making the composability of neural networks and decision forests libraries a prerequisite for such hybrid research.
\section{Structure of YDF}
In this section, we present the design decisions behind YDF and show the role the principles of Section~\ref{sec:principles} played in the formation of these decisions. Figure~\ref{fig:structure} depicts the high-level structure of YDF. We explain each of the components in this figure in the remainder of this section.
\begin{figure}[t]
\centering
\includegraphics{structure}
\caption{High level modules in YDF.}\label{fig:structure}
\end{figure}
\subsection{\textsc{Learner}-\textsc{Model} abstraction}
\label{sec:learner_and_models}
YDF's inter-module communication and user API relies on two model-agnostic ML concepts: \textsc{Learner}s and \textsc{Model}s. A \textsc{Model} is a function, in the mathematical sense, that takes an observation as input, and returns a prediction.
A \textsc{Learner} is a function that takes a list of examples as input and returns a \textsc{Model}.
An \textsc{example} is a pair \{\textsc{observation}, \textsc{label}\}.
The \textsc{Learner}-\textsc{Model} abstraction is simple but generic enough for the integration of any new learning algorithm because it makes few assumptions about how the learning algorithm or model work internally. \textsc{Learner}s, for example, are not required to rely on stochastic gradient descent for optimization.
The \textsc{Learner}-\textsc{Model} abstraction is commonly used in the non-parametric learning R~\cite{R_2022} packages~\cite{wiener_2002,therneau_2002,wright_2017}. In contrast, in Python ML libraries, we more routinely encounter the \textsc{Estimator-Predictor} paradigm. For example, in scikit-learn~\cite{sklearn_api} both the training and inference logic are encoded into the same object using the \emph{model.fit} and \emph{model.predict} functions. A similar design choice exists in XGBoost~\cite{chen_2016} and TensorFlow~\cite{tensorflow_2015}. Note that, for consistency, the port of YDF in TensorFlow, called \emph{TensorFlow Decision Forests}, uses the \textsc{Estimator-Predictor} abstraction.
We argue that the distinction between \textsc{Learner}s and \textsc{Model}s allows for the separation of training and inference logic (the inference logic is generally simpler than training) as well as code reuse (i.e., different \textsc{Learner}s can train the same type of \textsc{Model}, and a given \textsc{Learner} can produce different types of \textsc{Model}s). For example, \citet{breiman_1984} and \citet{guillame_bert_2018} are two algorithms to train Random Forest models. While the algorithms are different in the way they learn random forests, the models they produce have a similar structure, and the same post-training Random Forest-specific tools are applicable to the outputs of both \textsc{Learner}s.
Finally, the separation of the learning and inference logic facilitates the development of technology-agnostic tools such as hyper-parameter tuners, cross-validation learner evaluators, model ensemblers, feature selection algorithms, and model agnostic interpreters.
To illustrate the benefit of separating the learning and inference logic for library integration and efficiency, consider the following example.
Suppose a \textsc{Learner} trains a linear and a decision tree \textsc{Model} using two separate external libraries and returns the best performing one.
The \textsc{Model} returned by the learner is either a linear model \emph{or} a decision tree, compatible with the tooling and the respective external library.
Deploying the model to a production service so as to generate predictions only requires loading the inference logic of one of the models.
By comparison, if the learning and inference logic are packed into the same object, this object is not directly compatible with the external libraries. Moreover, loading the model in a production setting to generate predictions requires loading (or at least making accessible) the inference and training logic of both models as well as the model selection logic.
In YDF, \textsc{Models} and \textsc{Learners} are implemented by deriving abstract model and learner C++ classes respectively. This abstraction is independent of the task at hand: YDF includes \textsc{Learners} with support for classification, regression, ranking, and uplifting tasks. The abstract classes expose various additional functionality common to many learners and models, such as (de-)serialization, determining variable importance, and human-readable summary.
A new \textsc{Learner} can be integrated into YDF using a C++ registration mechanism (e.g., \texttt{REGISTER\_AbstractLearner(MyLearner)})---see Section~\ref{sec:modularity} for details on the YDF registration mechanism. YDF comes with a few official \textsc{Learner}s such as CART~\cite{breiman_1984_cart}, Random Forest~\cite{breiman_2001} and Gradient Boosted Trees~\cite{friedman_2001}, as well as \emph{meta}-learners that we will describe in Section~\ref{sec:meta_learners}. Additionally, YDF offers learners that are effectively wrappers to other ML libraries.
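To summarize the abstraction in code, the Python sketch below shows a minimal rendering of the \textsc{Learner}-\textsc{Model} pair. The class and method names are illustrative and do not mirror YDF's actual C++ or Python APIs.
\begin{verbatim}
from abc import ABC, abstractmethod

# Illustrative sketch of the Learner-Model abstraction; all names are
# hypothetical and do not reflect YDF's actual interfaces.
class Model(ABC):
    @abstractmethod
    def predict(self, observation):
        """Maps one observation to a prediction."""

class Learner(ABC):
    @abstractmethod
    def train(self, examples) -> Model:
        """Maps a list of (observation, label) examples to a Model."""

class ConstantModel(Model):
    def __init__(self, value):
        self.value = value
    def predict(self, observation):
        return self.value  # ignores the observation

class MeanLearner(Learner):
    """Trivial regression Learner: predicts the mean training label."""
    def train(self, examples) -> Model:
        labels = [label for _, label in examples]
        return ConstantModel(sum(labels) / len(labels))
\end{verbatim}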
\subsection{Meta-learners}
\label{sec:meta_learners}
One of the interesting properties of the \textsc{Learner}-\textsc{Model} abstraction is that it allows for the composition of algorithms. To illustrate this point, consider a hyper-parameter tuner which is code that is responsible for finding the optimal hyper-parameters of a learner. It turns out that a hyper-parameter tuner can itself also be thought of as a \textsc{Learner}: It returns a model trained with a base \textsc{Learner} but using the optimal hyper-parameter values. To make matters more interesting, the method used by the hyper-parameter tuner to assess the optimality of candidate hyper-parameters (e.g., cross-validation, train-validation) is itself a hyper-parameter of the hyper-parameter tuner! We call all such \textsc{Learner}s that use another or multiple other \textsc{Learner}s, \textsc{Meta-Learner}s.
Other \textsc{Meta-Learner}s include: the ``calibrator'' which calibrates the predictions of a \textsc{Model}; the ``ensembler'' which ensembles a set of \textsc{Model}s; and the ``feature selector'' which determines the optimal subset of input features for a \textsc{Learner} on a given dataset.
\textsc{Meta-Learners} too can be composed together. Figure~\ref{fig:example_meta_learner} shows an example of a calibrator \textsc{Meta-Learner} containing an ensembler, which itself contains both a hyper-parameter tuner optimising a Random Forest \textsc{Learner}, and a vanilla (i.e., without hyper-parameter tuning) Gradient Boosted Tree \textsc{Learner}.
\begin{figure}[t]
\centering
\includegraphics{meta_learner}
\caption{Representation of the three imbricated \textsc{Meta-Learners}. }\label{fig:example_meta_learner}
\end{figure}
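A minimal Python sketch of this composition pattern, reusing the illustrative \texttt{Learner} and \texttt{Model} classes from the previous sketch (all names remain hypothetical), is shown below.
\begin{verbatim}
# Sketch of a Meta-Learner: a Learner parameterized by sub-Learners.
class EnsembleModel(Model):
    def __init__(self, models):
        self.models = models
    def predict(self, observation):
        preds = [m.predict(observation) for m in self.models]
        return sum(preds) / len(preds)  # average the sub-model outputs

class EnsemblerLearner(Learner):
    """Trains every sub-learner and ensembles the resulting models."""
    def __init__(self, sub_learners):
        self.sub_learners = sub_learners
    def train(self, examples) -> Model:
        return EnsembleModel([l.train(examples)
                              for l in self.sub_learners])

# Pseudo-usage mirroring the composition in the figure above:
# learner = CalibratorLearner(EnsemblerLearner([
#     HyperParameterTunerLearner(RandomForestLearner()),
#     GradientBoostedTreesLearner()]))
\end{verbatim}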
\subsection{Model validation}
By default, YDF \textsc{Learners} do not take validation datasets as input. Instead, if the learning algorithm requires a validation dataset (e.g., for early-stopping), it is extracted once (for train-validation) or multiple times (for cross-validation) from the training dataset by the \textsc{Learner} implementation itself. The number of examples to extract is a hyper-parameter of the \textsc{Learner}. For applications where distribution shift is a potential issue, YDF \textsc{Learners} support an optional validation dataset as input. Each \textsc{Learner} can use this validation dataset as desired.
\subsection{Automated feature ingestion}
\label{sec:automated_feature_injestion}
One of the more important facets of an input feature is its semantic, which directly determines its mathematical properties. Using the appropriate feature semantics is critical to train good models. YDF defines several model agnostic feature semantics: \emph{Numerical features} (i.e., features with numerical semantics) are values in a continuous or discrete space with total ordering and scale significance. These generally represent quantities or counts, such as the age of a person, or the number of items in a bag. \emph{Categorical features} are values in a discrete space without order. They are generally used to represent types such as the color $red$ in the set $\{red, blue, green\}$. Other semantics include structural information like \emph{categorical sets} (i.e., where a value is a set of categories), \emph{categorical lists}, \emph{numerical lists}, \emph{numerical sets}, \emph{text}, \emph{boolean}, and \emph{hashes}.
It is worth noting that the value of a feature may be \emph{missing} or \emph{unavailable} and that, for multi-valued features such as categorical sets in particular, a missing value is semantically different from the empty set. Learning algorithms take great care in handling missing values, and different algorithms can handle them differently; the approach taken is referred to as the imputation strategy. The decision tree learning algorithm used in decision forest \textsc{Learner}s, for example, supports \emph{global} and \emph{local} imputation, where missing values are replaced by the mean (numerical features) or most frequent (categorical features) value estimated on the whole dataset (global) or on the examples in the same tree node (local).
Generally speaking, the semantics of an input feature cannot be determined reliably from its values or representation. For example, the string ``2'' in a CSV file could be a numerical value, a categorical value, free text, or a numerical list with only one element. YDF uses a number of heuristics to assist the user in automatically determining feature semantics and builds auxiliary data structures and metadata as required for the given feature type (such as dictionaries for categorical features).
As we noted previously, while basic heuristics often yield reasonable results, YDF insists that the user validate and optionally modify the automatically determined feature semantics.
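As an illustration, the toy Python heuristic below guesses a semantic from raw string values. It is greatly simplified relative to YDF's actual dataspec inference; the thresholds, rules, and names are ours.
\begin{verbatim}
def infer_semantic(values, max_categories=50):
    """Toy heuristic for guessing a feature's semantic from raw string
    values. Illustrative only; not YDF's actual inference logic."""
    non_missing = [v for v in values if v not in ("", None)]
    if not non_missing:
        return "UNKNOWN"
    def is_number(s):
        try:
            float(s)
            return True
        except (TypeError, ValueError):
            return False
    if all(is_number(v) for v in non_missing):
        # A few distinct numbers may still be categorical (e.g., zip
        # codes), which is why the user should validate the result.
        return "NUMERICAL"
    if len(set(non_missing)) <= max_categories:
        return "CATEGORICAL"
    return "TEXT"
\end{verbatim}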
\subsection{Modularity}
\label{sec:modularity}
YDF is organized into interchangeable modules.
Most modules are implemented using object-oriented inheritance, that is, by deriving and registering an abstract class.
The \textsc{Learners} and \textsc{Models}, ``inference engines'', and ``distributed computation backend'' modules are presented respectively in Sections~\ref{sec:learner_and_models}, \ref{sec:infernce_engine}, and \ref{sec:distributed_training}. Here, we describe other noteworthy modules.
\begin{description}
\item[\textsc{Reader}s] read a stream of examples. Different dataset readers support different file formats.
\item[\textsc{Writer}s] write a stream of examples as a dataset with support for many file formats.
\item[Decision tree IO] imports and exports a decision tree from and to a file stored on disk. This module is used by all models made of trees.
\item[\textsc{Splitter}s] are algorithms that find the splitting conditions in a decision tree.
\end{description}
Official modules are directly part of the YDF code base, but custom modules can be hosted outside of it and pulled in as a Bazel~\cite{bazel_2015} third-party dependency.
\subsection{Model self evaluation}
\label{sec:model_self_evaluation}
It is often essential to validate the quality of a model (i.e., to determine if its quality is satisfactory) and use this information to direct model development (i.e., select the most performant model). A typical method to estimate model quality is to evaluate metrics of interest on held-out examples that are not seen by the learning algorithm. While simple, this method can be problematic and unstable in applications with a small amount of labeled data.
Other approaches to obtaining a fair estimate of model quality include out-of-bag evaluation and cross-validation methods. In YDF, we abstract out all such model validation methods as a model \textsc{Self-Evaluation} module, leading to a powerful model-agnostic abstraction that can be utilized by \textsc{Learner}s and \textsc{Meta-Learner}s alike. For example, the feature-selector \textsc{Meta-Learner} can choose the optimal input features for a Random Forest \textsc{Model} using out-of-bag \textsc{Self-Evaluation}.
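To make out-of-bag evaluation concrete, the following Python sketch (not YDF code) scores a bagged binary classifier; it assumes fitted \texttt{trees} predicting labels in $\{0,1\}$ and boolean \texttt{bootstrap\_masks} recording which examples each tree saw during training.
\begin{small}
\begin{python}
import numpy as np

def out_of_bag_accuracy(trees, bootstrap_masks, X, y):
    # Each example is scored only by the trees that
    # did not see it during training.
    votes = np.zeros(len(y))
    counts = np.zeros(len(y))
    for tree, in_bag in zip(trees, bootstrap_masks):
        oob = ~in_bag
        votes[oob] += tree.predict(X[oob])
        counts[oob] += 1
    scored = counts > 0
    preds = (votes[scored] / counts[scored]) >= 0.5
    return np.mean(preds == y[scored])
\end{python}
\end{small}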
\subsection{Inference engine}
\label{sec:infernce_engine}
The most na\"ive algorithm to compute the prediction of a decision tree is made up of a single \emph{while} loop that iterates from the root node of the tree, taking the left or right branch according to the node condition, and terminating at one of the leaves. This is shown in the Algorithm~\ref{alg:default_serving}.
This simple algorithm is, however, inefficient on modern CPUs due to its slow and unpredictable random memory access pattern and branching mispredictions~\cite{nima_2014}. This observation inspired a line of research to optimize tree inference by using more complex but more efficient tree traversal logic. A prominent example of tree inference algorithms is QuickScorer~\cite{lucchese_2015}. It can efficiently infer decision trees with up to $64$ nodes on a $64$-bit CPU, with the obvious caveat that it does not extend to larger trees such as those generated by the Random Forest algorithm~\cite{breiman_2001}.
In addition to the tree traversal algorithm, another factor that contributes to the latency of tree inference is the types of conditions in decision nodes. An inference algorithm that only supports one type of condition will inevitably be faster than an algorithm that supports many types. Additionally, instruction-level parallelism when available often has an outsize impact on latency. Finally, hardware accelerators (e.g. GPU, TPU, FPGA) allow for significant optimization as well.
\begin{algorithm}[tb]
\caption{Simple tree inference algorithm}
\label{alg:default_serving}
\begin{algorithmic}
\STATE {\bfseries Input:} example $x$
\STATE $c \leftarrow $ root node of the tree
\WHILE{$c$ is not a leaf}
\STATE $e \leftarrow $ evaluate the condition of $c$ on $x$
\IF{$e$ \bf{is} \textsc{True}}
\STATE $c \leftarrow $ \textsc{Positive Child of} $c$
\ELSE
\STATE $c \leftarrow $ \textsc{Negative Child of} $c$
\ENDIF
\ENDWHILE
\STATE \textbf{return} $c$'s value
\end{algorithmic}
\end{algorithm}
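For reference, a direct Python transcription of Algorithm~\ref{alg:default_serving} could read as follows; the \texttt{Node} record is a hypothetical representation, not YDF's internal one.
\begin{small}
\begin{python}
class Node:
    # A binary tree node; `value` is set on leaves only.
    def __init__(self, condition=None, pos=None,
                 neg=None, value=None):
        self.condition, self.pos, self.neg = condition, pos, neg
        self.value = value

def predict(root, example):
    node = root
    while node.value is None:        # `node` is not a leaf
        if node.condition(example):  # evaluate the condition
            node = node.pos          # positive child
        else:
            node = node.neg          # negative child
    return node.value
\end{python}
\end{small}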
To handle this diversity of solutions and to maximize model inference speed, while shielding the user from this complexity, YDF introduces the concept of an \emph{inference engine}, or \emph{engine} for short. An engine is the result of a possibly \emph{lossy} compilation of a \textsc{Model} for a specific inference algorithm. In other words, we compile a \textsc{Model} into an engine, which is chosen based on the model structure and available hardware. In this way, space, complexity, and latency can be traded off depending on which factors are important to a particular production environment. For example, when program size is important, such as on embedded or Internet-of-Things devices, and when the model is known in advance, YDF can be compiled with only the required engine.
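To illustrate why such compilation pays off, the sketch below stores a tree of axis-aligned numerical conditions in flat, contiguous arrays and traverses it without pointer chasing. The encoding (negative indices denote leaves) is one possible choice and not necessarily the one used by YDF engines.
\begin{small}
\begin{python}
def predict_flat(feature_idx, threshold, left, right,
                 leaf_value, x):
    # Node i tests x[feature_idx[i]] >= threshold[i].
    # Child indices are stored in `left`/`right`; a
    # negative index -k-1 encodes leaf number k.
    node = 0
    while node >= 0:
        if x[feature_idx[node]] >= threshold[node]:
            node = right[node]
        else:
            node = left[node]
    return leaf_value[-node - 1]
\end{python}
\end{small}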
\subsection{Splitters}
\label{sec:splitters}
Splitters are modules that find the optimal decision for a given node according to a splitting criterion. Their complexity, therefore, is tied to the number and type of features as well as the cardinality of their space of values. By default, YDF's splitters are \emph{exact} for numerical features, in the sense that numerical values are taken at face value and are in no way transformed (e.g., by discretization). This naturally leads to more candidate splits being considered, an approach that can prove slow when the values of a feature cover a large range. This exact approach is similar to XGBoost~\cite{chen_2016} but different from LightGBM~\cite{ke_2017_lightgbm}. Like those libraries, YDF also supports \emph{approximate} splitting by discretization, leading to a significant speed-up at the cost of a potential degradation to model quality.
In addition to numerical features, YDF natively supports categorical features with exact splitting~\cite{fisher_1958_exact} (similar to LightGBM), random categorical projection~\cite{breiman_2001}, and one-hot encoding (similar to XGBoost). Finally, YDF has special support for oblique numerical splits~\cite{tomita_2020} and categorical-set splits~\cite{guillame_bert_2020_catset}.
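The difference between exact and approximate numerical splitting can be sketched as follows; the snippet only enumerates candidate thresholds (the gain computation is omitted), and the default bin count is an arbitrary placeholder.
\begin{small}
\begin{python}
import numpy as np

def exact_candidate_splits(values):
    # One candidate threshold between every pair of
    # consecutive distinct feature values.
    v = np.unique(values)
    return (v[:-1] + v[1:]) / 2.0

def approximate_candidate_splits(values, num_bins=255):
    # Thresholds restricted to quantile bin boundaries:
    # faster, at the cost of some split quality.
    qs = np.linspace(0.0, 1.0, num_bins + 1)[1:-1]
    return np.unique(np.quantile(values, qs))
\end{python}
\end{small}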
\subsection{Distributed training}
\label{sec:distributed_training}
Distributed training is essential for training on large datasets and computationally-intensive learning algorithms such as hyper-parameter tuners. To facilitate the development of distributed algorithms, YDF defines an API with the primitives necessary for decision forest distributed training, along with a distributed training framework for common \textsc{Meta-Learner}s, all with built-in fault-tolerance.
Implementations of this API are modular, with two particular implementations available, based on gRPC and on the TensorFlow Parameter Server distribution strategy. Because the development and testing of distributed algorithms can be cumbersome, YDF also contains a third implementation specialized for development, debugging, and unit-testing. This implementation simulates multi-worker computation in a single process, making it easy to use breakpoints or execute the distributed algorithm step by step. The user selects which distributed implementation to use through a single piece of configuration.
The YDF implementation of decision forests distributed training relies on both ``feature parallel'' and ``example parallel'' distribution based on the work of~\citet{guillame_bert_2018}. Each \emph{training worker} is responsible for a subset of input features. Communication between workers is optimized with a delta-bit encoding and multi-round hierarchical synchronization so as to minimize the maximum network IO among workers. The type and number of features allocated to each worker is dynamically adjusted to handle fluctuation in worker availability due to concurrent execution. Workers evaluate the quality of the model on a validation dataset, and possibly trigger early stopping of the training.
\subsection{Multi API and integration with other ML frameworks}
\label{sec:other_framework_interraction}
C and C++ code is generally well supported by other languages. YDF is available in C++, in JavaScript using WebAssembly, in Go, and as a command-line interface (CLI). YDF is also available in Python through TensorFlow~\cite{tensorflow_2015} under the name TensorFlow Decision Forests, making it compatible with the TensorFlow ecosystem. YDF supports NumPy~\cite{harris_2020} and Pandas~\cite{mckinney_2010}, making it easy to use on small datasets in Python. YDF can read scikit-learn~\cite{sklearn_api} decision forest models.
It is worth noting that models and training configurations are cross-API compatible. For example, a model trained with the Python API can be run with the JavaScript API.
\subsection{Backwards compatibility and default values}
\label{sec:backward_compatibility}
YDF models are fully backwards compatible---as an anecdote, models trained in 2018 are still usable today. Additionally, the YDF training logic is deterministic: The same \textsc{Learner} on the same dataset always returns the same \textsc{Model}. This last rule may only be violated by changes in the underlying pseudo-random number generator implementation.
An important property of YDF, and a constraint we impose on the development of the library, is that hyper-parameters are backwards compatible: Running a \textsc{Learner} configured with a given set of hyper-parameters always returns the same \textsc{Model}---modulo changes to the pseudo-random number generators. This implies that default hyper-parameters cannot change and that all newer methods of learning are disabled by default.
By construction, the default values of all hyper-parameters are set to the values recommended in the paper that introduces the algorithm or in the authors' implementation of it. For example, by default, classification Random Forest uses an attribute sampling ratio of the square root of the total number of features as recommended by Breiman~\cite{breiman_2001}.
To simplify the use of the library, particularly for users who would like to use the latest algorithms in YDF but who are not well-versed in the literature and may not fully understand the hyper-parameters involved, YDF offers a hyper-parameter template system. For example, a learner configured with the \texttt{benchmark\_rank1} parameter template will be trained with the best hyper-parameters according to our benchmark. The \texttt{benchmark\_rank1@v1} template for the gradient boosted trees learner~\cite{friedman_2001} uses global tree growth~\cite{shi_2007} (i.e., best-first or leaf-wise growth), sparse oblique splits~\cite{tomita_2020}, and random categorical splits~\cite{breiman_2001}. As new versions of YDF are released, those hyper-parameters can change, but YDF retains version information. For example, a learner configured with \texttt{benchmark\_rank1@v1} will be trained with the best hyper-parameters of version $1$ of this template.
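Through the TensorFlow Decision Forests API presented in Section~\ref{sec:other_framework_interraction}, such a template can, to the best of our understanding, be requested as follows:
\begin{small}
\begin{python}
import tensorflow_decision_forests as tfdf
# Train with the benchmark hyper-parameters, pinned
# to version 1 of the template for reproducibility.
model = tfdf.keras.GradientBoostedTreesModel(
    hyperparameter_template="benchmark_rank1@v1")
\end{python}
\end{small}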
\section{Usage examples}
\label{sec:usage_examples}
This section demonstrates a use case where we apply YDF for binary classification on the Adult dataset (also known as the Census Income dataset). This dataset is stored in two CSV files containing the training and test examples. Input features are either numerical or categorical, with some feature values missing.
We first highlight how one may use YDF with the CLI API to train, evaluate, and analyse a classical gradient boosted trees model. This API is similar to (but less verbose than) the C++ interface. We then show how to use the TensorFlow Decision Forests API to do the same task. In both cases, the hyper-parameters of the learner are left at their default values. In addition, the input features are not explicitly fed to YDF; instead, YDF will use all available features (excluding labels) with automated semantic detection.
\subsection{The CLI API}
\label{sec:cli_api}
The following is the CLI usage example. The resulting artefacts---dataspec, model information, model evaluation report, and model inference benchmark report---are included in Appendix~\ref{sec:artefacts_cli} for completeness.
\begin{small}
\begin{verbatim}
# Detect feature semantics, producing dataspec
infer_dataspec --dataset=csv:train.csv \
--output=dataspec.pbtxt
# Print details of the inferred semantics
show_dataspec --dataspec=dataspec.pbtxt
# Configure the learner
cat <<EOF > learner.pbtxt
task: CLASSIFICATION
label: "income"
learner: "GRADIENT_BOOSTED_TREES"
EOF
# Train the model
train --dataset=csv:train.csv \
--dataspec=dataspec.pbtxt \
--config=learner.pbtxt \
--output=model_path
# Display information about the model
show_model --model=model_path
# Evaluate the model
evaluate --dataset=csv:test.csv \
--model=model_path
# Generate model predictions
predict --dataset=csv:test.csv \
--model=model_path \
--output=csv:predictions.csv
# Benchmark the model inference speed
benchmark_inference --dataset=csv:test.csv \
--model=model_path
\end{verbatim}
\end{small}
\subsection{The Python and Tensorflow API}
In this section, we showcase the TensorFlow Decision Forests API.
The port of YDF to TensorFlow does not use the YDF model evaluation logic demonstrated in Section~\ref{sec:cli_api}. Instead, TensorFlow's native metric implementations are used for evaluation.
\begin{small}
\begin{python}
import tensorflow_decision_forests as tfdf
import pandas as pd
# Load datasets as Pandas dataframes
train_df = pd.read_csv("train.csv")
test_df = pd.read_csv("test.csv")
# Convert the datasets to TensorFlow format
train_ds = tfdf.keras.pd_dataframe_to_tf_dataset(
    train_df, label="income")
test_ds = tfdf.keras.pd_dataframe_to_tf_dataset(
    test_df, label="income")
# Train a model
model = tfdf.keras.GradientBoostedTreesModel()
model.fit(train_ds)
# Summary of the model structure.
model.summary()
# Evaluate the model.
model.compile(metrics=["accuracy"])
print(model.evaluate(test_ds,
return_dict=True))
\end{python}
\end{small}
\section{Experiments}
\label{sec:benchmarks}
We compare the accuracy of YDF v0.2.5 to four popular decision forests learning libraries: XGBoost v1.5.1 (XGB)~\cite{chen_2016}, scikit-learn v1.0.2 (SKLearn)~\cite{sklearn_api}, LightGBM v3.0.0.99 (LGBM)~\cite{ke_2017_lightgbm}, and TensorFlow BoostedTrees Estimators v2.9.1 (TF BTE)~\cite{ponomareva_2017_tfb}. We also include a linear classifier (TF Linear). The evaluation is run on 70 small tabular (binary and multi-class) classification datasets from the OpenML repository~\cite{vanschoren_2013_open_ml}. The list of datasets used in our evaluation appears in Appendix~\ref{tab:dataset_stats}. The number of examples ranges from $150$ to $96,320$, with a mean of $8,647$ examples per dataset. The number of features ranges from $5$ to $1,777$, with a mean of $119$ input features per dataset.
\subsection{Learners}
For each library, we evaluate learners using both their default hyper-parameters as well as hyper-parameter values tuned using an automated tuner. The default hyper-parameters of each library might differ, except for the ``number of trees'', which is universally fixed to $500$. Learners that use the default hyper-parameter values are tagged with ``\emph{(default hp)}.''
As noted in Section~\ref{sec:backward_compatibility}, YDF sets the default values for all hyper-parameters to reflect the configurations in the original publication. To complement these results, we also evaluate a setting in which hyper-parameters are drawn from our benchmark configuration \texttt{benchmark\_rank1@v1}, and tag these learners with \emph{(benchmark hp)}. The definitions of the \emph{default} and \texttt{benchmark\_rank1@v1} hyper-parameters are presented in Appendix~\ref{sec:default_hyper_paramters}.
Tuned learners are tagged with \emph{Autotuned}. We conduct hyper-parameter tuning by aggregating results from $300$ unique random trials. Trials are scored either by log loss (noted \emph{(opt loss)}) or accuracy (noted \emph{(opt acc)}). For Random Forest models we use out-of-bag evaluation for validation, whereas for Gradient Boosted models we set aside $10$\% of the training data for validation.
The hyper-parameters tuned by each library are listed in Appendix~\ref{sec:benchmark_hyper_paramters}.
Finally, we note that Scikit-learn, XGBoost and TensorFlow BoostedTrees Estimators libraries do not offer native support for categorical features. As such, for these learners, we encode all categorical features using one-hot encoding.
\subsection{Metrics}
We evaluate all pairs of dataset and learner using a $10$-fold cross-validation protocol, where fold splits are consistent across learners to facilitate a fair comparison. Note that the hyper-parameter tuning is applied independently on each of the $10$ folds for each library.
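A minimal sketch of how such consistent folds can be produced with scikit-learn, where \texttt{X} is a placeholder for the dataset features, is:
\begin{small}
\begin{python}
from sklearn.model_selection import KFold
# A fixed seed makes the 10 fold assignments identical
# for every learner, so per-fold scores are comparable.
folds = KFold(n_splits=10, shuffle=True, random_state=0)
splits = list(folds.split(X))  # reused for all learners
\end{python}
\end{small}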
We measure accuracy, AUC (for binary classification), training time, and inference latency of each model. We further report the overall mean and median rank of each learner across all datasets, the number of wins or losses between pairs of learners, and the mean training and inference time of each learner.
\subsection{Computing resources}
We train each model on a single machine without using distributed training and with a limit of $20$ threads. The inference of the model is evaluated with a single thread for YDF, and with a number of threads selected by the library for the other learners. The reported training and inference times exclude dataset reading. In aggregate, we trained a total of $1.3$ million models consisting of $840$ million trees.
\subsection{Results}
Due to space constraints, we have included the mean cross-validation accuracy of all learners on every dataset in Appendix~\ref{sec:benchmark_accuracy}, and a pairwise comparison of learners in Appendix~\ref{sec:benchmark_pairwise}. Here, we present a summary of these results in Figure~\ref{fig:benchmark_mean_rank}, where we render the \emph{mean rank} of each learner---equivalent to the ``mean rank'' column in the table reported in Appendix~\ref{sec:benchmark_accuracy}.
The mean rank is the average, over all datasets, of the rank of the learner (between 1 and 16) compared to other learners.
We also show in Table~\ref{tab:durations} the average training and inference duration for each learner.
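A small sketch of the mean-rank computation, assuming an \texttt{accuracy} matrix with one row per dataset and one column per learner (higher is better):
\begin{small}
\begin{python}
import numpy as np
from scipy.stats import rankdata

def mean_ranks(accuracy):
    # Rank learners within each dataset (1 = best),
    # then average the ranks over all datasets.
    ranks = np.vstack([rankdata(-row) for row in accuracy])
    return ranks.mean(axis=0)
\end{python}
\end{small}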
\begin{figure}[t]
\centering
\includegraphics{comp.pdf}
\caption{Mean learner ranks: The rank of the 16 learners averaged over the $70$ datasets. The smaller, the better.}\label{fig:benchmark_mean_rank}
\end{figure}
\begin{table}
\caption{Training and inference duration in seconds of (un-tuned) learners, using the default hyper-parameters of the respective libraries, averaged over $10$ cross-validation folds and $70$ datasets. Learners are ordered according to their quality rank reported in Figure~\ref{fig:benchmark_mean_rank}.}
\label{tab:durations}
\center
\begin{tabular}{@{}lll@{}}
\toprule
Learner & \multicolumn{1}{l}{training (s)} & \multicolumn{1}{l}{inference (s)} \\ \midrule
YDF GBT (benchmark hp) & 39.99 & 0.108 \\
SKLearn RF (default) & 7.01 & 0.250 \\
YDF RF (benchmark hp) & 29.86 & 0.598 \\
LGBM GBT (default) & 4.91 & 0.061 \\
YDF RF (default) & 4.41 & 0.326 \\
YDF GBT (default) & 34.79 & 0.044 \\
TF Linear (default) & 55.59 & 8.050 \\
XGB GBT (default) & 20.72 & 0.015 \\
TF EBT (default) & 212.06 & 1.949 \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Observations}
On the mean accuracy computed over $70$ datasets, we make the following observations:
\begin{itemize}
\item No one learner is always better than another learner. The largest difference is between auto-tuned YDF and XGBoost, with $612$ wins and $88$ losses. The effective difference in quality between closely-ranked learners is rather small. For example, the average difference in accuracy between auto-tuned YDF and LightGBM is about $0.5$\% with a standard deviation of $1.9$\%---see Appendix~\ref{sec:benchmark_accuracy}.
\item YDF performs better than other candidate libraries both with and without automatically tuned hyper-parameters. The YDF setting where the tuner optimizes for loss has rank $1$, and YDF with benchmark hyper-parameters has rank $4$, standing above all other non-tuned learners.
\item Automatically tuned LightGBM comes in close second place with respect to mean rank, but ranks first in terms of the number of pairwise wins and losses.
\item For Random Forests and Gradient Boosted Trees learners, YDF with all its features enabled---see Section~\ref{sec:backward_compatibility}---performs significantly better than YDF with the default hyper-parameters.
\item Tuning the hyper-parameters increases the quality of YDF and LightGBM learners significantly ($+4$ and $+6$ rank change respectively).
\item Surprisingly, scikit-learn and XGBoost without hyper-parameter tuning offer slightly better results (respectively $+2$ and $+1$ rank change) than with hyper-parameter tuning. We believe this is due to the relatively small size of the datasets and the one-hot encoding of categorical features: tuning the hyper-parameters makes the model more prone to overfitting. Note that the same hyper-parameter tuning library is used for all the learners.
\item XGBoost and TF Boosted Trees on average yield a lower accuracy than a linear model. Gradient Boosted Trees models perform better than Random Forest models in terms of accuracy.
\end{itemize}
Moving on to the training speed of the models, we observe that LightGBM is significantly faster to train than YDF and XGBoost models for the GBT algorithm. This comparative performance can be explained by the difference in the configuration of the splitter algorithms in the three libraries (see Section~\ref{sec:splitters}). YDF is slightly faster than scikit-learn for the RF algorithm but much faster than TensorFlow-based models.
On the inference speed of the models, we observe that:
\begin{itemize}
\item Ignoring the different number of threads used, the inference speed of gradient boosted tree models trained with default hyper-parameters is the fastest with XGBoost, followed by YDF, followed by LightGBM. Gradient boosted tree models trained with TF Estimator Boosted Trees are two orders of magnitude slower.
\item As expected, the much larger Random Forest models are slower than Gradient Boosted Tree models. However, YDF Random Forest inference is slightly slower than scikit-learn's.
\item YDF with ``benchmark hp'', which notably includes oblique splits, is significantly slower to train and to infer than the ``default'' version.
\item The algorithmic complexity of inference of the linear model is significantly less than that of the decision forest models. Surprisingly, the inference of linear models executed through TensorFlow is the slowest.
\end{itemize}
\section{Conclusion}
\label{sec:conclusion}
We presented Yggdrasil Decision Forests, a new library for the training, inference, and interpretation of decision forest models. The library is designed around four principles to ensure extensibility to new methods and efficient usage. While designed for this library, we believe these principles carry over to other machine learning libraries. We showed how to use the CLI and Python APIs of Yggdrasil and empirically compared its quality and speed with existing libraries.
\section{Introduction}
Undoubtedly, deep learning techniques perform exceptionally well in many domains. From simple to highly complex neural networks, the quest to achieve state-of-the-art performance has led to the development of giant models that are computationally expensive to train and difficult to deploy. This restricts their application in many domains. Consequently, model compression is an active area of research that aims to reduce the configuration complexity of state-of-the-art deep networks to enable their deployment in resource-limited domains without significant reduction in performance.
It is a misconception that only large and highly complex models achieve the best performance \cite{jimmy}. Many state-of-the-art approaches have shown that strategically designed lightweight models can provide similar performance \cite{buci}. It is now known that a significant percentage of nodes in these models are redundant, and pruning these connections has minimal effect on the performance. Several model compression techniques have been developed to simplify highly complex networks and substantially reduce the requirement for resources \cite{ali,fahad}. These include Knowledge Distillation (KD) \cite{hinton}, Deep Mutual Learning (DML) \cite{dml}, Network Pruning \cite{yu}, Quantization \cite{han}, Low-Rank Matrix Approximation \cite{zhou}, and their variants. The ultimate goal of all these techniques is to optimize the network configuration without compromising the model’s performance.
The Knowledge Distillation paradigm leverages the knowledge gained by a highly parameterized teacher network by sharing it with a lightweight student network such that the compact model approximates the performance of the teacher network. As an extension of the KD concept, the FitNet \cite{romeo} trains thinner and deeper student networks by supplementing the soft target outputs with intermediate representations learned by the teacher from the hidden layers to improve the training process. Other variants of KD like Evolutionary Knowledge Distillation \cite{ekd}, and Self Distillation \cite{sd} were also introduced to enhance the performance of the student networks further. In contrast, pruning involves the elimination of nodes, weights, or layers that do not significantly influence performance. It speeds up the inference process by considering the parameters that matter the most for a specific task.
Alternative approaches like quantization reduce the precision of numbers used to represent the neural network weights, and Low-rank approximation uses tensor decomposition to estimate the informative parameters of a network \cite{cheng}. Tiny machine learning is another fast-growing field that strives to bring the transformative power of machine learning to resource-constrained devices. It has shown significant potential for low-power applications in the vision and audio space \footnote{https://www.tinyml.org/about}.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\textwidth]{model3.png}
\caption{ Illustration of a person's different learning style employed for training the deep learning model. }
\label{fig:-1}
\end{figure}
In this work, we extend the knowledge distillation paradigm with the concept of diverse learning styles from classroom dynamics. A learning style refers to a type of training mechanism that an individual prefers to use to gain new knowledge. For example, as depicted in Figure \ref{fig:-1}, the VARK model assumes four types of learners – Visual, Auditory, Reading \& Writing, and Kinesthetic. Unlike conventional KD techniques, where the teacher shares the same knowledge with all the students, we propose to train individual student networks with different information from the teacher. This information can be in the form of final predictions, intermediate layer features, saliency maps, etc. The main contributions of this paper include:
\begin{itemize}
\item Adapting the concept of different learning styles to neural networks using a knowledge distillation framework. Here the teacher shares the knowledge gained with individual students in different formats.
\item We investigate the benefits of mixed information sharing using a single teacher, two-student network configuration, where the teacher shares final predictions with one student and intermediate layer features with another.
\item We explore the advantage of different learning styles in the context of mutual learning between two students of equal capacity (in the absence of a teacher network).
\item We demonstrate the efficacy of our proposed approach on two benchmark biomedical datasets selected to perform classification and segmentation tasks.
\end{itemize}
\section{Background}
\subsection{Knowledge Distillation}
Knowledge Distillation is an approach introduced to transfer the knowledge in terms of probability outputs, $p_{i}$, from a highly parameterized pre-trained teacher network $f(X,\phi)$ to a simple and compact student network $g(X,\theta)$ to achieve model compression while retaining performance. \\
Given a training set with $N$ samples
${X}=\left\{\boldsymbol{x}_{i}\right\}_{i=1}^{N}$ with corresponding labels ${Y}=\left\{y_{i} \right\}_{i=1}^{N}$, the teacher network {$f(X,\phi)$}, is trained on the ground truth labels. The probabilistic output of a teacher network for a sample $x_{i}$ is defined as $p_{i}$ given by the extended softmax as:
\begin{equation}
p_{i}^{c} = \frac{e^{\mathbf{z}^{c}/T}}{\sum_{c=1}^C e^{\mathbf{z}^{c}/T}} \ \ \ for\ c=1,2,\dots,C
\end{equation}
\noindent where $\mathbf{z}^{c}$ corresponds to the logits, $C$ is the number of classes, and $T$ is the temperature parameter to get a smoother output probability distribution of the classes. Generally, the objective function for the teacher network is the standard \emph{Cross-Entropy (CE) error} defined as:
\begin{equation}
L_{\phi}=L_{CE}=-{\sum_{i=1}^N(y_{i}\log(p_{i}) + (1 - y_{i})\log(1 - p_{i}))}
\end{equation}
Now, the student networks are trained on the combined loss of \emph{Cross-Entropy (CE)}, and \emph{Knowledge Distillation (KD)}, where the \emph{CE} helps the student networks to adhere to the ground truth labels and \emph{KD} assists them to align their learning with that of the teacher.
Hence, the loss function for the student network is the weighted $(\alpha)$ summation of the cross-entropy $(L_{CE})$ and knowledge distillation $(L_{KD_p})$ terms. Here, the \emph{Kullback-Leibler (KL)} divergence is used for $L_{KD_p}$ to measure the correspondence between the teacher and student predictions $p_{i}$ and $s_{i}$, respectively.
\begin{equation}
\label{eqn:KD}
L_{\theta}= \alpha \ L_{CE}\left({s}_{i} , {y}_{i}\right) + (1-\alpha) \ L_{KD_p}\left({s}_{i} , {p}_{i}\right)
\end{equation}
The knowledge can be transferred from the teacher to the student networks in an online or offline manner. In offline $KD$, training is done in two steps: first, the teacher network is pre-trained on a dataset, and then the knowledge is distilled to train the student network. In online $KD$ \cite{kdcl}, both teacher and student networks are trained simultaneously in a single training step.
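As a minimal PyTorch sketch of the student objective in equation \ref{eqn:KD} (the $T^2$ rescaling of the distillation term is a customary implementation detail \cite{hinton} not spelled out above):
\begin{verbatim}
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits,
            labels, alpha, T=2.0):
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean") * (T * T)
    return alpha * ce + (1.0 - alpha) * kd
\end{verbatim}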
\subsection{Deep Mutual Learning}
Unlike knowledge distillation, mutual learning is a two-way sharing of information as both networks are treated as student networks. They teach each other collaboratively throughout the entire training process. The loss functions $L_{\theta_{1}}$ and $L_{\theta_{2}}$ for the two student networks {$g_{1}(X,\theta_{1})$} and {$g_{2}(X,\theta_{2})$} respectively are defined as
\begin{equation}
\label{eqn:ML}
\begin{array}{l}
L_{\theta_1}=L_{CE}\left(\boldsymbol{s}_1 ,\boldsymbol{y}\right)+L_{ML_p}\left(\boldsymbol{s}_1 , \boldsymbol{s}_2\right) \\
\\
L_{\theta_2}=L_{CE}\left(\boldsymbol{s}_2 ,\boldsymbol{y}\right)+L_{ML_p}\left(\boldsymbol{s}_2 , \boldsymbol{s}_1\right)
\end{array}
\end{equation}
where $\boldsymbol{s}_{k}$ is the prediction of the \emph{kth} student network, and similar to \emph{KD}, $L_{ML_p}$ defined as the \emph{Mutual Learning loss} is the \emph{KL} divergence between the predictions of two students. Therefore, $L_{ML_p}(\boldsymbol{s}_{k'},\boldsymbol{s}_{k}) = D_{KL}(\boldsymbol{s}_{k'} || \boldsymbol{s}_{k})$, where the \emph{KL} distance from $\boldsymbol{s}_{k'}$ to $\boldsymbol{s}_{k}$ is computed as:
\begin{equation}
D_{KL}\left(\boldsymbol{s}_{k'} \| \boldsymbol{s}_{k}\right)=\sum_{i=1}^{N} {s}_{k'}\left({x}_{i}\right) \log \frac{{s}_{k'}\left({x}_{i}\right)}{{s}_{k}\left({x}_{i}\right)}
\end{equation}
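The two objectives of equation \ref{eqn:ML} can be sketched in PyTorch as follows; detaching the peer's predictions, so that each loss only updates its own student, is one common implementation choice:
\begin{verbatim}
import torch.nn.functional as F

def mutual_learning_losses(s1_logits, s2_logits, labels):
    def ml(a_logits, b_logits):
        # KL divergence from the (frozen) peer prediction.
        return F.kl_div(
            F.log_softmax(a_logits, dim=1),
            F.softmax(b_logits.detach(), dim=1),
            reduction="batchmean")
    loss1 = F.cross_entropy(s1_logits, labels) \
            + ml(s1_logits, s2_logits)
    loss2 = F.cross_entropy(s2_logits, labels) \
            + ml(s2_logits, s1_logits)
    return loss1, loss2
\end{verbatim}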
\begin{figure}
\centering
\captionsetup{labelfont=bf}
\includegraphics[width=0.9\textwidth]{model133.png}
\caption{\textbf{KD} - transfers knowledge in terms of soft labels from a pre-trained teacher to a student network; \textbf{ML} - both networks are considered as students for information exchange; \textbf{KD+ML} - training students on different representations from the teacher network as well as from each other. Conventionally, the same type of knowledge was shared with the student networks, with $L_{KD} = L_{KD'}$ and $L_{ML} = L_{ML'}$. }
\label{fig:0}
\end{figure}
\section{Proposed Methodology}
Fig.~\ref{fig:0} (a) and (b) depict the standard network configurations used for knowledge distillation and mutual learning. The extension of KD to a single teacher and multiple student networks is also quite standard. Recently, \cite{UNiyaz} proposed a combination of KD and ML where, in addition to the teacher sharing information, students also exchange information, as shown in Fig.~\ref{fig:0} (c). In such configurations, the ensemble of student networks is used for the final prediction. In the current work, we investigate the benefit of mixed information sharing in three different configurations: (i) the KD-only framework (Fig.~\ref{fig:0} (a)), (ii) the ML-only framework with two students (Fig.~\ref{fig:0} (b)), and (iii) the combined KD + ML framework with one teacher and two students (Fig.~\ref{fig:0} (c)).
Leveraging the idea that different learning styles can improve the understanding of learners, we propose to train individual student networks with different information from the teacher. The teacher shares final predictions with one student and intermediate layer features with another. Similarly, in the ML framework, students exchange different information with each other. To demonstrate the importance of mixed information sharing, we train each of the above-mentioned network configurations with different information sharing strategies, as shown in Table \ref{tab:2}. Furthermore, we use student networks with identical architectures to emphasize the influence of different learning styles.
We hypothesize that while mixed information sharing will be beneficial for all three configurations, the combined KD and ML configuration with the ensemble of two student networks trained with different information from teachers as well as exchanging different information between them will exhibit the best overall performance.
\begin{table}
\captionsetup{labelfont=bf}
\caption{Combinations of network configurations and learning strategies employed: (a) KD – knowledge distillation only, (b) ML – mutual learning only, (c) combined KD and ML; V1 – sharing of final predictions only, V2 – sharing of features only, V3 – sharing of predictions and features together.}
\begin{subtable}{.5\linewidth}
\centering
\caption{}
\begin{tabular}{|l|c|c|c|}
\hline \textbf{KD} & V1 &V2 &V3\\
\hline
T $\rightarrow$ $S_{1}$ & Predictions & Features & Predictions \\
T $\rightarrow$ $S_{2}$ & Predictions & Features & Features \\
\hline
\end{tabular}
\end{subtable}%
\begin{subtable}{.5\linewidth}
\centering
\caption{}
\begin{tabular}{|l|c|c|c|}
\hline \textbf{ML} & V1 &V2 &V3\\
\hline
$S_{1}$ $\rightarrow$ $S_{2}$ & Predictions & Features & Predictions \\
$S_{2}$ $\rightarrow$ $S_{1}$ & Predictions & Features & Features \\
\hline
\end{tabular}
\end{subtable}
\begin{subtable}{1\linewidth}
\centering
\caption{}
\begin{tabular}{|l|c|c|c|}
\hline \textbf{KD + ML} & V1 &V2 &V3\\
\hline
T $\rightarrow$ $S_{1}$, $S_{2}$ $\rightarrow$ $S_{1}$ & Predictions, Predictions & Features, Features & Features, Predictions \\
T $\rightarrow$ $S_{2}$, $S_{1}$ $\rightarrow$ $S_{2}$ & Predictions, Predictions & Features, Features & Predictions, Features \\
\hline
\end{tabular}
\end{subtable}
\label{tab:2}
\end{table}
\subsection{Classification Task}
With the assumption that the last layers of a deep neural network encode high-level semantic information, we use the output of the teacher network's last convolution layer as the feature information to share with the student networks. For a convolutional block with kernel size $N$ and $D$ filters, the dimension of the output feature map is given as $\mathcal{M} = N \times N \times D$. In KD configurations where the teacher and student networks do not have layers with matching dimensions to share or compare the feature map information, an additional convolutional block can be added to the teacher network with an output dimension matching that of the student. This ensures the compactness of the student networks. A similar approach can also be adopted for ML configurations with non-identical student networks.
For our proposed approach that uses a combined KD + ML configuration and mixed information sharing strategy for knowledge distillation from the teacher as well as mutual learning between students, the corresponding loss functions are:
\\
\begin{equation}
\label{eqn:KDMLV3C}
\begin{array}{l}
L_{\theta_1}=\alpha \ L_{CE}\left(\boldsymbol{s}_1 , \boldsymbol{y}\right)+ \beta \ L_{KD_f}\left(\mathcal{M}_{s_1} , \mathcal{M}_{t}\right) + \gamma \ L_{ML_p}\left(\boldsymbol{s}_1 , \boldsymbol{s}_2\right)
\\
\\
L_{\theta_2}=\alpha' \ L_{CE}\left(\boldsymbol{s}_2 , \boldsymbol{y}\right) + \beta' \ L_{KD_p}\left(\boldsymbol{s}_2,\boldsymbol{p} \right)+ \gamma' \ L_{ML_f}\left(\mathcal{M}_{s_2} , \mathcal{M}_{s_1}\right)
\\
\end{array}
\end{equation}
\\
\noindent where $L_{KD_p}$ and $L_{ML_p}$ represent the same loss terms based on predictions as defined in equations \ref{eqn:KD} and \ref{eqn:ML}, respectively. To support the mixed information sharing strategy, we introduce two additional loss terms, $L_{KD_f}$ and $L_{ML_f}$, based on features shared by the teacher as $\mathcal{M}_{t}$ and by the other student as $\mathcal{M}_{s_k}$, respectively. We define the feature map-based loss function as the \emph{Mean Square Error (MSE)} between the feature maps of the corresponding networks. In general, for feature maps with $n$ elements, the \emph{MSE} between two feature maps is defined as $\frac{1}{n} \sum_{i=1}^{n} (\hat{\mathcal{M}}_i-\mathcal{M}_i)^2$.
Using equation \ref{eqn:KDMLV3C}, we can derive the loss functions for KD-only configurations that use the mixed information strategy by setting $\gamma = \gamma' = 0$, $\beta=1-\alpha$, and $\beta'=1-\alpha'$. Similarly, in the ML-only configuration, we set $\beta = \beta' = 0$, $\gamma=1-\alpha$, and $\gamma'=1-\alpha'$. As each student learns from different information, we use separate weighting parameters for the individual terms of the loss function and optimize them using a grid search. For a test sample $x_{i}$, we consider the ensemble classification probability, $\hat{s}(x_i)$, as the highest probability for a particular class across all student network predictions. This is given as ${\hat{s}(x_{i})}=\max\{{{s}_k(x_{i})}\}_{k=1}^{K}$.
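A PyTorch sketch of equation \ref{eqn:KDMLV3C} follows; temperature scaling is omitted for brevity, the weights abbreviate $(\alpha,\beta,\gamma)$ and $(\alpha',\beta',\gamma')$, and the feature maps are assumed to have matching shapes as discussed above:
\begin{verbatim}
import torch.nn.functional as F

def mixed_losses(s1_logits, s2_logits, t_logits,
                 m_s1, m_s2, m_t, labels,
                 a, b, g, a2, b2, g2):
    kl = lambda p, q: F.kl_div(
        F.log_softmax(p, dim=1), F.softmax(q, dim=1),
        reduction="batchmean")
    # Student 1: teacher features + peer predictions.
    l1 = (a * F.cross_entropy(s1_logits, labels)
          + b * F.mse_loss(m_s1, m_t)               # L_KD_f
          + g * kl(s1_logits, s2_logits.detach()))  # L_ML_p
    # Student 2: teacher predictions + peer features.
    l2 = (a2 * F.cross_entropy(s2_logits, labels)
          + b2 * kl(s2_logits, t_logits)            # L_KD_p
          + g2 * F.mse_loss(m_s2, m_s1.detach()))   # L_ML_f
    return l1, l2
\end{verbatim}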
\subsection{Segmentation Task}
In general, U-Net is the preferred and most commonly used architecture for image segmentation. To extract feature information to share with the student networks, we use the output feature map of the first convolution layer of the teacher's decoder network. This layer is close to the encoder, and we found empirically that it contains more semantic information. It helps to generate more precise predictions when combined with the information from previous layers of the encoder through skip connections. Unlike the classification task, which uses the cross-entropy loss ($L_{CE}$), we use a combination of \emph{Focal loss (FL)} and \emph{Dice loss (DL)}, denoted $L_{FD}$, to train the networks with ground truth segmentation labels. The loss function $L_{FD}$ between the predicted mask $\boldsymbol{\hat{g}}$ and the ground truth $\boldsymbol{g}$ is defined as:
\begin{center}
$L_{FD}(\boldsymbol{\hat{g}},\boldsymbol{g})=\underbrace{-\sum_{i=1}^M \left(1-\hat{g}_{i}\right)^\tau \log \left(\hat{g}_{i}\right)}_{\textcolor{black}{\text{Focal Loss}}}+
\underbrace{1-\frac{2 \sum_{i=1}^M \hat{g}_i g_i}{\sum_{i=1}^M \hat{g}_i^2+\sum_{i=1}^M g_i^2}}_{\textcolor{black}{\text{Dice Loss}}}$
\end{center}
where ${\hat{g}_{i}}$ and ${g}_{i}$ are the corresponding predicted probability and ground truth of the \emph{i-th} pixel, and $\tau$ is the focusing parameter. The $L_{FD}$ loss is most effective for segmentation, as the \emph{Dice loss} handles the unequal distribution of foreground-background elements, whereas the \emph{Focal loss} addresses class imbalance by giving more weight to hard examples.
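A compact PyTorch sketch of $L_{FD}$ is given below; following the usual binary focal loss, $\hat{g}_i$ is interpreted as the predicted probability of the true class of pixel $i$:
\begin{verbatim}
import torch

def focal_dice_loss(pred, target, tau=2.0, eps=1e-7):
    # pred: predicted foreground probabilities,
    # target: binary ground-truth mask (same shape).
    p = pred.clamp(eps, 1.0 - eps).flatten()
    g = target.flatten()
    pt = torch.where(g > 0.5, p, 1.0 - p)
    focal = -(((1.0 - pt) ** tau) * torch.log(pt)).sum()
    dice = 1.0 - 2.0 * (p * g).sum() \
           / ((p * p).sum() + (g * g).sum() + eps)
    return focal + dice
\end{verbatim}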
Consequently, the loss functions for the segmentation task using KD + ML configuration and mixed information strategy are formulated as follows:
\begin{equation}
\label{eqn:KDMLV3S}
\begin{array}{l}
L_{\theta_1}=\alpha \ L_{FD}\left(\boldsymbol{s}_1 , \boldsymbol{y}\right)+ \beta \ L_{KD_f}\left(\mathcal{M}_{s_1} , \mathcal{M}_{t}\right) + \gamma \ L_{ML_p}\left(\boldsymbol{s}_1 , \boldsymbol{s}_2\right)
\\
\\
L_{\theta_2}=\alpha' \ L_{FD}\left(\boldsymbol{s}_2 , \boldsymbol{y}\right) + \beta' \ L_{KD_p}\left(\boldsymbol{s}_2,\boldsymbol{p} \right) + \gamma' \ L_{ML_f}\left(\mathcal{M}_{s_2} , \mathcal{M}_{s_1}\right)
\end{array}
\end{equation}
Using equation \ref{eqn:KDMLV3S}, we can derive the segmentation loss functions for KD-only and ML-only configurations that use a mixed information strategy by setting the hyperparameters in the same way as we did for the classification task. Finally, the ensemble prediction mask is calculated as the union of the individual student predictions, ${\hat{y}(x_{i})}=\bigcup_{k=1}^{K} {s}_k(x_{i})$.
\section{Experimental Results}
\subsection{Datasets}
For the classification task, we used the Histopathologic Cancer Detection Dataset \footnote{ https://www.kaggle.com/competitions/histopathologic-cancer-detection/data} consisting of 220k images for identifying metastatic tissue in histopathologic scans of lymph node sections. Due to limited computational resources, a balanced subset of 20,000 histological images was randomly selected from the repository for our experiment.
For the segmentation task, we chose the LGG Segmentation Dataset \footnote{https://www.kaggle.com/datasets/mateuszbuda/lgg-mri-segmentation}. This dataset consists of 7858 MR brain images and manual FLAIR abnormality segmentation masks obtained from The Cancer Imaging Archive (TCIA) \footnote{https://www.cancerimagingarchive.net/}.
\subsection{Experimental Settings}
We used the ResNet50 and ResNet18 architectures for the classification task as our teacher and student models, respectively. For segmentation, the state-of-the-art U-Net architecture is utilized, where the backbones for the teacher and student networks are the same ResNet50 and ResNet18 models. For this particular configuration, the combined student networks achieve a $15\%$ reduction in the number of parameters compared to the teacher network. The datasets are split into training, validation, and testing with a 75:10:15 ratio. We have also performed standard data augmentation techniques to improve the generalization of the models. The optimal values of all the hyperparameters for different network configurations and learning strategies were identified using a grid search. All the models were trained with the Adam optimizer with a learning rate of $0.0001$, a batch size of $8$, and the value of the temperature parameter $T$ set to 2. These parameters were selected empirically. For a more robust evaluation, we report the average and standard deviation over 3 runs of each model.
\subsection{Experiments}
We conducted experiments using four different network configurations – (i) ML, (ii) KD online, (iii) KD offline, and (iv) Combined KD and ML. Furthermore, we explored the performance of these network configurations using three different learning strategies – (V1) predictions only, (V2) features only, and (V3) mix of predictions and features. Various combinations of network configurations and learning strategies lead to a total of 12 different models. Furthermore, the performance of these 12 models is evaluated for both classification and segmentation tasks. While the effectiveness of classification models is evaluated using accuracy, the segmentation models are assessed using both Intersection-Over-Union (IoU) and F-score.
\begin{table}[H]
\captionsetup{labelfont=bf}
\centering
\captionsetup{font=small}
\caption{Performance comparison of baseline models (ResNet50 \& ResNet18) for classification and segmentation tasks }\label{tab:4}
\begin{tabular}{|l||c||c|c|}
\hline
\multirow{2}*{Model}&{Classification} &\multicolumn{2}{c|}{{Segmentation}} \\
\cmidrule{2-4}
&{Accuracy} &{IOU} &{F Score} \\
\hline
ResNet50 &$94.35\pm0.763$ &$76.86\pm0.791 $&$86.93\pm0.537$ \\
\hline
ResNet18 &$94.07\pm0.436$ & $75.06\pm0.792$& $85.13\pm 0.561$\\
\hline
\end{tabular}
\end{table}
\subsection{Results}
For a baseline comparison, we first evaluated the performance of standalone ResNet50 and ResNet18 models for both classification and segmentation tasks, as shown in Table \ref{tab:4}.
The results of training four different network configurations with three different learning strategies for the classification task are shown in Table \ref{tab:5}. We observe that for each of the individual network configurations, the mixed information sharing strategy (V3), where both predictions and features are shared, provides the best performance in terms of classification accuracy. Similarly, a comparison across different network configurations shows that the combined KD + ML approach is superior, corroborating the findings of \cite{UNiyaz}.
As expected, the combined KD + ML model trained with the mixed information strategy (V3) outperforms all other models. When compared with conventional KD-only or ML-only models that share only predictions, the proposed model provides an average improvement of $2\%$ in classification accuracy. Lastly, it can also be observed that, in addition to the ensemble accuracy, the V3 learning strategy also improves the performance of the individual student networks.
\begin{table}
\captionsetup{labelfont=bf}
\centering
\captionsetup{font=small}
\caption{Performance comparison of classification accuracy for Histopathologic Cancer Detection dataset using four different network configurations: ML - Mutual Learning, KD (on) - online Knowledge Distillation, KD (off) - offline Knowledge Distillation, and KD + ML – combined KD \& ML; and three different learning strategies - (V1) predictions only, (V2) features only and (V3) mix of predictions and features.}
\begin{subtable}{\linewidth}
\centering
\resizebox{10.42cm}{!}{
\begin{tabular}{|c|c|c|c|}
\hline
\multirow{2}*{ML}&{V1} & V2 & V3 \\
& ($\alpha=0.2,\alpha'=0.2$) & ($\alpha=0.2,\alpha'=0.2$) & ($\alpha=0.1,\alpha'=0.2$) \\
\hline
\textbf{S1} & $94.25\pm0.042$ &$92.52\pm0.487$ &$95.04\pm0.095$ \\
\hline
\textbf{S2} &$94.15\pm0.442$ &$93.52\pm0.219$ &$94.89\pm0.245$\\
\hline
\textbf{Ensemble} &$94.22\pm0.155$ &$94.19\pm0.106$ &$\mathbf{95.36\pm0.239}$\\
\hline
\hline
\end{tabular}
}
\label{tab:51}
\end{subtable}
\begin{subtable}{\linewidth}
\centering
\resizebox{10.4cm}{!}{
\begin{tabular}{|c|c|c|c|}
\hline
\multirow{2}*{KD(on)}&{V1} & V2 & V3 \\
& ($\alpha=0.2,\alpha'=0.2$) & ($\alpha=0.2,\alpha'=0.2$) & ($\alpha=0.2,\alpha'=0.2$)\\
\hline
\textbf{T} &$94.35\pm 0.763$ &$94.35\pm0.763$ &$94.35\pm0.763$\\
\hline
\textbf{S1} &$94.02\pm0.576$ &$94.32\pm0.176$ &$94.59\pm0.127$ \\
\hline
\textbf{S2} &$94.31\pm0.134$ &$94.25\pm0.245$ &$94.87\pm0.530$\\
\hline
\textbf{Ensemble} &$94.75\pm0.954$ &$94.68\pm0.176$ &$\mathbf{95.65\pm0.490} $ \\
\hline
\hline
\end{tabular}
}
\label{tab:52}
\end{subtable}{}
\begin{subtable}{\linewidth}
\centering
\resizebox{10.4cm}{!}{
\begin{tabular}{|c|c|c|c|}
\hline
\multirow{2}*{KD(off)}&{V1} & V2 & V3 \\
& ($\alpha=0.2,\alpha'=0.2$) & ($\alpha=0.2,\alpha'=0.2$) & ($\alpha=0.1,\alpha'=0.2$)\\
\hline
\textbf{T} &$94.43\pm0.353$ &$94.15\pm1.312$ &$95.43\pm0.707$ \\
\hline
\textbf{S1} &$94.74\pm0.191$ &$93.23\pm0.869$ &$95.75\pm1.079$ \\
\hline
\textbf{S2} &$94.55\pm0.281$ &$94.38\pm0.912$ &$95.23\pm0.438$ \\
\hline
\textbf{Ensemble} &$95.18\pm0.162$ &$95.06\pm0.776$ &$\mathbf{96.15\pm0.707}$ \\
\hline
\hline
\end{tabular}
}
\label{tab:53}
\end{subtable}{}
\begin{subtable}{\linewidth}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
\multirow{5}*{KD + ML}&{V1} & V2 & V3 \\
& ($\alpha=0.1,\alpha'=0.2$) & ($\alpha=0.2,\alpha'=0.2$) & ($\alpha=0.1,\alpha'=0.4$)\\
& ($\beta=0.45,\beta'=0.4$) & ($\beta=0.4,\beta'=0.4$)& ($\beta=0.45,\beta'=0.3$) \\
& ($\gamma=0.45,\gamma'=0.4$) & ($\beta=0.4,\beta'=0.4$) & ($\gamma=0.45,\gamma'=0.3$)\\
\hline
\textbf{T} &$93.21\pm2.341$ &$94.46\pm0.309$ &$95.75\pm0.756$ \\
\hline
\textbf{S1} &$94.37\pm0.883$ &$95.46\pm0.487$ &$95.85\pm0.968$ \\
\hline
\textbf{S2} &$94.71\pm0.518$ &$95.06\pm0.353$ &$95.37\pm0.353$ \\
\hline
\textbf{Ensemble} &$95.25\pm0.360 $&$96.15\pm0.219 $ &$\textcolor{blue}{\bm{96.68\pm0.584}}$ \\
\hline
\hline
\end{tabular}
\label{tab:54}
\end{subtable}{}
\label{tab:5}
\end{table}
\begin{table}
\captionsetup{labelfont=bf}
\renewcommand{\arraystretch}{1.2}
\captionsetup{font=small}
\centering
\caption{Performance comparison for segmentation task using LGG dataset with IoU and F-score metrics using four different network configurations: ML - Mutual Learning, KD (on) - online Knowledge Distillation, KD (off) - offline Knowledge Distillation, and KD + ML – combined KD \& ML; and three different learning strategies - (V1) predictions only, (V2) features only and (V3) mix of predictions and features.}
\begin{subtable}{\linewidth}
\resizebox{12cm}{!}{
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\multirow{2}*{ML} &\multicolumn{2}{c|} {V1} &\multicolumn{2}{c|} {V2} &\multicolumn{2}{c|} {V3} \\
& \multicolumn{2}{c|}{$\alpha=0.1,\alpha'=0.1$} & \multicolumn{2}{c|}{$\alpha=0.2,\alpha'=0.2$} & \multicolumn{2}{c|}{$\alpha=0.2,\alpha'=0.2$}\\
\hline
&IOU &F Score &IOU &F Score &IOU &F Score \\
\hline
\textbf{S1} &$76.69\pm0.989$ &$86.44\pm0.622$ &$74.08\pm1.812$ &$85.51\pm0.121$ &$77.93\pm0.537$ &$87.33\pm0.509$ \\
\hline
\textbf{S2} &$75.28\pm1.661$ &$85.18\pm1.060$ &$75.18\pm1.541$ &$85.90\pm0.782$ &$77.26\pm0.756$ &$87.13\pm0.381$ \\
\hline
\textbf{Ensemble} &$77.26\pm0.438$ &$87.49\pm0.593$ &$75.70\pm1.503$ &$86.02\pm0.974$ &$\mathbf{78.24\pm0.494}$ &$\mathbf{87.32\pm0.919}$ \\
\hline
\hline
\end{tabular}
}
\label{tab:61}
\end{subtable}
\begin{subtable}{\linewidth}
\resizebox{12cm}{!}{
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
{KD} &\multicolumn{2}{c|} {V1} &\multicolumn{2}{c|} {V2} &\multicolumn{2}{c|} {V3} \\
(off) & \multicolumn{2}{c|}{$\alpha=0.2,\alpha'=0.2$} & \multicolumn{2}{c|}{$\alpha=0.2,\alpha'=0.2$} & \multicolumn{2}{c|}{$\alpha=0.2,\alpha'=0.2$} \\
\hline
&IOU &F Score &IOU &F Score &IOU &F Score \\
\hline
\textbf{T} &$76.86\pm0.791$ &$86.93\pm0.537$ &$76.86\pm0.791$ &$86.93\pm0.537$ &$76.86\pm0.791$ &$86.93\pm0.537$\\
\hline
\textbf{S1} &$75.81\pm0.643$ &$86.22\pm0.381$ &$75.45\pm0.410$ &$86.01\pm0.261$ &$76.81\pm1.432$ &$86.65\pm1.381$ \\
\hline
\textbf{S2} &$75.77\pm0.410$ & $86.11\pm0.956$&$76.29\pm1.576$ &$86.50\pm0.989$ &$77.59\pm0.931$ &$86.99\pm0.728$ \\
\hline
\textbf{Ensemble} &$77.28\pm0.356$ &$87.83\pm0.254$&$77.79\pm1.283$ &$87.29\pm1.576$ &$\mathbf{78.87\pm0.452}$ &$\mathbf{87.71\pm0.381}$ \\
\hline
\hline
\end{tabular}
}
\label{tab:62}
\end{subtable}
\begin{subtable}{\linewidth}
\resizebox{12cm}{!}{
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
{KD} &\multicolumn{2}{c|} {V1} &\multicolumn{2}{c|} {V2} &\multicolumn{2}{c|} {V3} \\
(on)& \multicolumn{2}{c|}{$\alpha=0.1,\alpha'=0.1$} & \multicolumn{2}{c|}{$\alpha=0.2,\alpha'=0.2$} & \multicolumn{2}{c|}{$\alpha=0.1,\alpha'=0.2$} \\
\hline
&IOU &F Score &IOU &F Score &IOU &F Score \\
\hline
\textbf{T} &$75.09\pm0.848$ &$85.11\pm0.558$ &$76.64\pm0.593$ &$86.77\pm0.381$ &$76.27\pm0.516$ &$86.71\pm0.190$\\ \hline
\textbf{S1} &$76.85\pm0.945$ &$86.91\pm0.601$ &$77.74\pm0.254$ &$87.47\pm0.162$ &$78.51\pm0.473$ &$88.09\pm0.452$ \\ \hline
\textbf{S2} &$77.52\pm0.466$ &$86.65\pm0.360$ &$77.51\pm0.367$ &$87.33+0.212$ &$78.04\pm0.226$ &$87.01\pm0.967$ \\ \hline
\textbf{Ensemble} &$78.76\pm0.565$ &$88.23\pm0.763$ &$78.41\pm0.141$ &$87.90\pm0.130$ &$\mathbf{79.23\pm0.494}$ &$\mathbf{88.13\pm0.212}$ \\
\hline
\hline
\end{tabular}
}
\label{tab:63}
\end{subtable}
\begin{subtable}{\linewidth}
\resizebox{12cm}{!}{
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\multirow{5}*{KD + ML} &\multicolumn{2}{c|} {V1} &\multicolumn{2}{c|} {V2} &\multicolumn{2}{c|} {V3} \\
& \multicolumn{2}{c|}{$\alpha=0.2,\alpha'=0.2$} & \multicolumn{2}{c|}{$\alpha=0.2,\alpha'=0.2$} & \multicolumn{2}{c|}{$\alpha=0.1,\alpha'=0.1$} \\
& \multicolumn{2}{c|}{$\beta=0.4,\beta'=0.4$} & \multicolumn{2}{c|}{$\beta=0.4,\beta'=0.4$} & \multicolumn{2}{c|}{$\beta=0.45,\beta'=0.45$} \\
& \multicolumn{2}{c|}{$\gamma=0.4,\gamma'=0.4$} & \multicolumn{2}{c|}{$\gamma=0.4,\gamma'=0.4$} & \multicolumn{2}{c|}{$\gamma=0.45,\gamma'=0.45$} \\
\hline
&IOU &F Score &IOU &F Score &IOU &F Score \\
\hline
\textbf{T} &$76.20\pm0.247$ &$86.08\pm0.749$ &$78.03\pm0.982$ &$87.70\pm0.685$ &$78.06\pm1.130$ &$87.61\pm1.330$\\ \hline
\textbf{S1} &$76.31\pm0.883$ &$86.56\pm0.565$ &$78.20\pm0.792$ &$87.76\pm0.459$ &$78.66\pm0.542$ &$88.46\pm0.506$ \\ \hline
\textbf{S2} &$77.00\pm0.707$ &$87.01\pm0.473$ &$78.94\pm1.440 $ &$88.23\pm0.905$ &$79.39\pm0.491$ &$88.84\pm0.367$ \\ \hline
\textbf{Ensemble} &$78.83\pm0.275$ &$88.09\pm0.070$ &$79.63\pm0.700$ &$88.31\pm0.638$ &\textcolor{blue}{\bm{$80.12\pm0.597$}} &\textcolor{blue}{\bm{$89.52\pm0.556$}}\\
\hline
\hline
\end{tabular}
}
\label{tab:64}
\end{subtable}
\label{tab:seg}
\end{table}
\begin{figure}
\captionsetup{labelfont=bf}
\hspace*{-1.3cm}
\includegraphics[width=1.2\textwidth]{output44.png}
\caption{Visualization to show the significance of the KD + ML approach using mixed information sharing: The individual students suffer in segmenting the tumor in KD only and ML only for some hard-to-segment examples, whereas in our proposed approach, both the students perform better. (The red circle denotes the missed regions.) }
\label{fig:6}
\end{figure}
The corresponding results for the segmentation task are reported in Table \ref{tab:seg}. These results depict similar trends as observed for the classification task, further emphasizing the importance of combining KD with ML as well as the influence of mixed information sharing. Moreover, it establishes the generalizability of the proposed approach to more than one type of task.
To better appreciate the significance of the proposed approach, we provide a qualitative comparison of different network configurations and information-sharing strategies using individual student predictions for some hard-to-segment test samples. Fig. \ref{fig:6} extensively compares all the models for three test samples. In general, it can be observed that KD-only and ML-only models struggle with the segmentation of small regions of interest. It can also be noticed that for conventional models, only one of the two students is able to predict these small regions. In the proposed approach, an ensemble of student predictions is used for the final prediction, enabling the students to consistently predict the region of interest and providing the best performance. Furthermore, student networks in this approach can discern finer structures compared to other models.
We demonstrate the importance of combining KD with ML by comparing all models trained with a mixed information strategy in Fig. \ref{fig:7}. It can be noticed from these hard sample images that the combined KD + ML model helps detect these small regions of interest where KD-only and ML-only models fail. Likewise, to establish the importance of a mixed information sharing strategy, we show sample predictions from the combined KD + ML model trained with predictions only (V1), features only (V2), and mixed information (V3) strategies in Fig. \ref{fig:8}. Again, we observe that the V3 strategy helps detect small and fine regions of interest better than V1 and V2. Finally, in Fig. \ref{fig:9}, we have compared the prediction outputs of all the distillation methods, and it's quite evident that the combined KD + ML approach with mixed information sharing outperforms all other approaches.
\begin{figure}
\captionsetup{labelfont=bf}
\centering
\includegraphics[width=0.9\textwidth]{output55.png}
\caption{Visualization to depict the importance of combining KD with ML: Heatmaps of individual student predictions for some hard-to-segment examples with KD only, ML only, and KD + ML with all models using different learning styles (predictions and features)(The green circle denotes the detected regions).}
\label{fig:7}
\end{figure}
\begin{figure}
\centering
\captionsetup{labelfont=bf}
\includegraphics[width=0.9\textwidth]{outputt2.png}
\caption{Visualization to depict the importance of using different learning styles: Heatmaps of individual student predictions for some hard-to-segment examples with KD + ML model trained using predictions only (V1), features only (V2), and combined predictions and features (V3)}
\label{fig:8}
\end{figure}
\begin{figure}
\captionsetup{labelfont=bf}
\centering
\includegraphics[width=1\linewidth]{output33.png}
\caption{Comparison of the quality of segmentations obtained using different knowledge-sharing models trained with combined predictions and features (V3). The proposed approach generates segmentation masks that are much finer than other models and closer to the ground truth.}
\label{fig:9}
\end{figure}
\section{Discussion}
In this work, we propose an innovative approach to improve the effectiveness of knowledge-sharing models by leveraging the idea of preferred or distinct learning styles. Unlike conventional techniques that only share final predictions, we propose to train individual networks with different types of information, such as predictions and feature maps, to enrich and diversify the learning process. Our comprehensive experiments with four different knowledge-sharing network configurations and three different learning styles demonstrate that our proposed approach outperforms conventional distillation methods. The combined knowledge distillation and mutual learning model, enhanced with a mixed information sharing strategy, provides improved performance on benchmark biomedical classification and segmentation datasets. As a novel approach, our work also has limitations. While the classification and segmentation results show great potential, preliminary experiments indicate that a mixed information sharing strategy does not benefit regression tasks such as object detection; the nature of shared information that benefits such tasks requires further investigation. Our experiments were also limited to one teacher and two identical student networks with selected architectures. Further exploration of different architectures and configurations is required to generalize our observations.
\bibliographystyle{splncs04}
\section{Introduction}
\label{sec:introduction}
Numerical integration is an omnipresent task in mathematics and myriad applications.
While these are too numerous to list fully, prominent examples include numerical differential equations \cite{hesthaven2007nodal,quarteroni2008numerical,ames2014numerical}, machine learning \cite{murphy2012machine}, finance \cite{glasserman2013monte}, and biology \cite{manly2006randomization}.
In many cases, the problem can be formulated as follows.
Let $\Omega \subset \mathbb{R}^D$ be a bounded domain with positive volume, $|\Omega| > 0$.
Given $N$ distinct data pairs $\{ (\mathbf{x}_n,f_n) \}_{n=1}^N \subset \Omega \times \mathbb{R}$ with $f: \Omega \to \mathbb{R}$ and $f_n := f(\mathbf{x}_n)$, the aim is to approximate the weighted integral
\begin{equation}\label{eq:I}
I[f] := \int_\Omega f(\boldsymbol{x}) \omega(\boldsymbol{x}) \, \mathrm{d} \boldsymbol{x}
\end{equation}
by an \emph{$N$-point cubature formula (CF)}.
That is, by a weighted finite sum over the given function of the form
\begin{equation}\label{eq:CR}
C_N[f] = \sum_{n=1}^N w_n f(\mathbf{x}_n).
\end{equation}
Here, the distinct points $\{ \mathbf{x}_n \}_{n=1}^N$ are called \emph{data points} and the $\{ w_n \}_{n=1}^N$ are referred to as \emph{cubature weights}.
Many CFs are derived based on the idea of first approximating the (unknown) function $f$ and then integrating this approximation exactly \cite{haber1970numerical,stroud1971approximate,engels1980numerical,cools1997constructing,krommer1998computational,cools2003encyclopaedia,krylov2006approximate,davis2007methods,brass2011quadrature,trefethen2017cubature}.
Arguably, most of the existing CFs have been derived to be exact for polynomials up to a certain degree.
See \cite{maxwell1877approximate,mysovskikh1980approximation,cools2001cubature,mysovskikh2001cubature,cools2003encyclopaedia,trefethen2021exactness}, in addition to the above references.
That said, in recent years CFs based on the exact integration of radial basis functions (RBFs) have received a growing amount of interest \cite{sommariva2005integration,sommariva2006numerical,sommariva2006meshless,punzi2008meshless,aziz2012numerical,fuselier2014kernel,reeger2016numericalA,reeger2016numericalB,watts2016radial,reeger2018numerical,sommariva2021rbf}.
The increased use of RBFs for numerical integration, as well as for numerical differential equations \cite{kansa1990multiquadrics,iske1996structure,fasshauer1996solving,kansa2000circumventing,larsson2003numerical,iske2003radial,shu2007integrated,fornberg2015solving,flyer2016enhancing,glaubitz2021stabilizing,glaubitz2021towards}, seems only logical, considering their success story over the last few decades.
In fact, since their introduction in Hardy’s work \cite{hardy1971multiquadric} on cartography in 1971, RBFs have become a powerful tool in numerical analysis, including multivariate interpolation and approximation theory \cite{buhmann2000radial,buhmann2003radial,wendland2004scattered,fasshauer2007meshfree,iske2011scattered,fornberg2015primer}.
Even though RBF-CFs have been proposed and applied in numerous works by now, their stability theory can still be considered under-developed,
especially when compared to more traditional---e.\,g., polynomial-based---methods.
Stability of RBF-CFs was broached, for instance, in \cite{sommariva2005integration,sommariva2006numerical,punzi2008meshless}.
However, to the best of our knowledge, an exhaustive stability theory for RBF-CFs is still missing in the literature.
In particular, theoretical results providing clear conditions---e.\,g., on the kernel, the data points, the weight function, the degree of potentially added polynomial terms---under which stability of RBF-CFs is ensured are rarely encountered.
\subsection{Our Contribution}
The present work strives to at least partially fill this gap in the RBF literature.
This is done by providing a detailed theoretical and numerical investigation on stability of RBF-CFs for different families of kernels.
These include compactly supported and Gaussian RBFs as well as polyharmonic splines (PHS).
In particular, we report on the following findings.
(1) Stability of RBF-CFs is connected to the Lebesgue constant of the underlying RBF interpolant.
Consequently, it is demonstrated that a low stability measure for RBF-CFs is promoted by a low Lebesgue constant.
That said, it is also shown that in many cases RBF-CFs have significantly better stability properties than one might expect based on the underlying RBF interpolant.
(2) We provide a provable sufficient condition for compactly supported RBFs to yield stable RBF-CFs (see \cref{thm:main} in \cref{sec:compact}).
The result is independent of the degree of the polynomial term that is included in the RBF interpolant and assumes the data points to come from an equidistributed (space-filling) sequence.
This result is obtained by leveraging a beautiful connection to discrete orthogonal polynomials and is partially motivated by arguments that frequently occur in least-squares quadrature/cubature formulas \cite{huybrechs2009stable,migliorati2018stable,glaubitz2020stableQFs}.
(3) At least numerically, we find the aforementioned sufficient condition to also be close to necessary in many cases.
This might be considered as a discouraging result for compactly supported RBF-CFs since the sufficient condition makes some harsh restrictions on the shape parameter.
(4) Finally, the asymptotic stability of pure RBF-CFs is connected to the asymptotic stability of the same RBF-CF but augmented with polynomials of a fixed arbitrary degree.
Essentially, we are able to show that for a sufficiently large number of data points, stability of RBF-CFs is independent of the presence of polynomials in the RBF interpolant.
While there are certainly further stability results desired, in addition to the ones presented here, we believe this work to be a valuable step towards a more mature stability theory for RBF-CFs.
\subsection{Outline}
The rest of this work is organized as follows.
We start by collecting some preliminaries on RBF interpolants and CFs in \cref{sec:prelim}.
In \cref{sec:stability} a few initial comments on stability of (RBF-)CFs are offered.
Building on these, it is demonstrated in \cref{sec:initial} that RBF-CFs in many cases have superior stability properties compared to RBF interpolation.
Next, \cref{sec:compact} contains our theoretical main result regarding stability of RBF-CFs based on compactly supported kernels.
Furthermore, in \cref{sec:connection} it is proven that, under certain assumptions, asymptotic stability of RBF-CFs is independent of the polynomial terms that might be included in the RBF interpolant.
The aforementioned theoretical findings are accompanied by various numerical tests in \cref{sec:numerical}.
Finally, concluding thoughts are offered in \cref{sec:summary}.
\section{Preliminaries}
\label{sec:prelim}
We start by collecting some preliminaries on RBF interpolants (\cref{sub:prelim_RBFs}) as well as RBF-CFs (\cref{sub:prelim_CFs}).
\subsection{Radial Basis Function Interpolation}
\label{sub:prelim_RBFs}
RBFs are often considered a powerful tool in numerical analysis, including multivariate interpolation and approximation theory \cite{buhmann2000radial,buhmann2003radial,wendland2004scattered,fasshauer2007meshfree,iske2011scattered,fornberg2015primer}.
In the context of the present work, we are especially interested in RBF interpolants.
Let $f: \mathbb{R}^D \supset \Omega \to \mathbb{R}$ be a scalar valued function.
Given a set of distinct \emph{data points} (in context of RBFs sometimes also referred to as \emph{centers}), the \emph{RBF interpolant} of $f$ is of the form
\begin{equation}\label{eq:RBF-interpol}
(s_{N,d}f)(\boldsymbol{x})
= \sum_{n=1}^N \alpha_n \varphi( \varepsilon_{n} \| \boldsymbol{x} - \mathbf{x}_n \|_2 ) + \sum_{k=1}^K \beta_k p_k(\boldsymbol{x}).
\end{equation}
Here, $\varphi: \mathbb{R}_0^+ \to \mathbb{R}$ is the \emph{RBF} (also called \emph{kernel}), $\{p_k\}_{k=1}^K$ is a basis of the space of all algebraic polynomials up to degree $d$, $\mathbb{P}_d(\Omega)$, and the $\varepsilon_{n}$'s are nonnegative shape parameters.
Furthermore, the RBF interpolant \cref{eq:RBF-interpol} is uniquely determined by the conditions
\begin{alignat}{2}
(s_{N,d}f)(\mathbf{x}_n)
& = f(\mathbf{x}_n), \quad
&& n=1,\dots,N, \label{eq:interpol_cond} \\
\sum_{n=1}^N \alpha_n p_k(\mathbf{x}_n)
& = 0 , \quad
&& k=1,\dots,K. \label{eq:cond2}
\end{alignat}
In this work, we shall focus on the popular choices of RBFs listed in \cref{tab:RBFs}.
A more complete list of RBFs and their properties can be found in the monographs \cite{buhmann2003radial,wendland2004scattered,fasshauer2007meshfree,fornberg2015primer} and references therein.
\begin{remark}[Implementation of $\varphi(r) = r^{2k} \log r$]
The polyharmonic splines (PHS) of the form $\varphi(r) = r^{2k} \log r$ are usually implemented as $\varphi(r) = r^{2k-1} \log( r^r )$ to avoid numerical problems at $r=0$, where ``$\log (0) = -\infty$''.
\end{remark}
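In code, this trick can be realized, for instance, with scipy's numerically safe \texttt{xlogy}; a short sketch, where the function name \texttt{phs\_even} is our illustrative choice:
\begin{verbatim}
import numpy as np
from scipy.special import xlogy

def phs_even(r, k):
    # phi(r) = r^{2k} log r, evaluated as r^{2k-1} * (r log r);
    # xlogy(0, 0) = 0, so the value at r = 0 is 0 without warnings.
    return r**(2*k - 1) * xlogy(r, r)

print(phs_even(np.array([0.0, 0.5, 1.0, 2.0]), k=1))
\end{verbatim}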
\begin{table}[t]
\centering
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{c|c|c|c}
RBF & $\varphi(r)$ & parameter & order \\ \hline
Gaussian & $\exp( -(\varepsilon r)^2)$ & $\varepsilon>0$ & 0 \\
Wendland's & $\varphi_{D,k}(r)$, see \cite{wendland1995piecewise} & $D, k \in \mathbb{N}_0$ & 0 \\
Polyharmonic splines & $r^{2k-1}$ & $k \in \mathbb{N}$ & $k$ \\
& $r^{2k} \log r$ & $k \in \mathbb{N}$ & $k+1$
\end{tabular}
\caption{Some popular RBFs}
\label{tab:RBFs}
\end{table}
Note that \cref{eq:interpol_cond} and \cref{eq:cond2} can be reformulated as a linear system for the coefficient vectors $\boldsymbol{\alpha} = [\alpha_1,\dots,\alpha_N]^T$ and $\boldsymbol{\beta} = [\beta_1,\dots,\beta_K]^T$.
This linear system is given by
\begin{equation}\label{eq:system}
\begin{bmatrix} \Phi & P \\ P^T & 0 \end{bmatrix}
\begin{bmatrix} \boldsymbol{\alpha} \\ \boldsymbol{\beta} \end{bmatrix}
=
\begin{bmatrix} \mathbf{f} \\ \mathbf{0} \end{bmatrix}
\end{equation}
where $\mathbf{f} = [f(\mathbf{x}_1),\dots,f(\mathbf{x}_N)]^T$ as well as
\begin{equation}\label{eq:Phi_P}
\Phi =
\begin{bmatrix}
\varphi( \varepsilon_{1} \| \mathbf{x}_1 - \mathbf{x}_1 \|_2 ) & \dots & \varphi( \varepsilon_{N} \| \mathbf{x}_1 - \mathbf{x}_N \|_2 ) \\
\vdots & & \vdots \\
\varphi( \varepsilon_{1} \| \mathbf{x}_N - \mathbf{x}_1 \|_2 ) & \dots & \varphi( \varepsilon_{N} \| \mathbf{x}_N - \mathbf{x}_N \|_2 )
\end{bmatrix},
\quad
P =
\begin{bmatrix}
p_1(\mathbf{x}_1) & \dots & p_K(\mathbf{x}_1) \\
\vdots & & \vdots \\
p_1(\mathbf{x}_N) & \dots & p_K(\mathbf{x}_N)
\end{bmatrix}.
\end{equation}
It is well-known that \cref{eq:system} is ensured to have a unique solution---corresponding to existence and uniqueness of the RBF interpolant---if the kernel $\varphi$ is conditionally positive definite of order $m$ and the set of data points is $\mathbb{P}_{m}(\Omega)$-unisolvent.
See, for instance, \cite[Chapter 7]{fasshauer2007meshfree} and \cite[Chapter 3.1]{glaubitz2020shock} or references therein.
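To make the assembly of \cref{eq:system} concrete, the following Python sketch builds $\Phi$ and $P$ in one dimension for the Gaussian kernel with a single shape parameter and solves for $\boldsymbol{\alpha}$ and $\boldsymbol{\beta}$; the monomial basis and all function names are illustrative choices of ours.
\begin{verbatim}
import numpy as np

def rbf_interpolant_coeffs(x, f, eps, d):
    # x: (N,) data points, f: (N,) values, eps: shape parameter,
    # d: polynomial degree (d = -1 gives a pure RBF interpolant).
    N, K = len(x), d + 1
    Phi = np.exp(-(eps * (x[:, None] - x[None, :]))**2)
    P = x[:, None]**np.arange(K)          # monomials 1, x, ..., x^d
    A = np.block([[Phi, P], [P.T, np.zeros((K, K))]])
    rhs = np.concatenate([f, np.zeros(K)])
    sol = np.linalg.solve(A, rhs)
    return sol[:N], sol[N:]               # alpha, beta
\end{verbatim}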
The set of all RBF interpolants \cref{eq:RBF-interpol} forms an $N$-dimensional linear space, denoted by $\mathcal{S}_{N,d}$.
This space is spanned by the basis elements
\begin{equation}\label{eq:cardinal}
c_m(\boldsymbol{x})
= \sum_{n=1}^N \alpha_n^{(m)} \varphi( \varepsilon_{n} \| \boldsymbol{x} - \mathbf{x}_n \|_2 ) + \sum_{k=1}^K \beta^{(m)}_k p_k(\boldsymbol{x}),
\quad m=1,\dots,N,
\end{equation}
that are uniquely determined by
\begin{equation}\label{eq:cond_cardinal}
c_m(\mathbf{x}_n) = \delta_{mn} :=
\begin{cases}
1 & \text{if } m=n, \\
0 & \text{otherwise},
\end{cases}
\quad m,n=1,\dots,N,
\end{equation}
and condition \cref{eq:cond2}.
The functions $c_m$ are the so-called \emph{cardinal functions}.
They provide us with the following representation of the RBF interpolant \cref{eq:RBF-interpol}:
\begin{equation}
(s_{N,d}f)(\boldsymbol{x}) = \sum_{n=1}^N f(\mathbf{x}_n) c_n(\boldsymbol{x}).
\end{equation}
This representation is convenient to subsequently derive cubature weights based on RBFs that are independent of the function $f$.
\subsection{Cubature Formulas Based on Radial Basis Functions}
\label{sub:prelim_CFs}
A fundamental idea behind many CFs is to first approximate the (unknown) function $f: \Omega \to \mathbb{R}$ based on the given data pairs $\{(\mathbf{x}_n,f_n)\}_{n=1}^N \subset \Omega \times \mathbb{R}$ and to exactly integrate this approximation.
In the case of RBF-CFs this approximation is chosen as the RBF interpolant \cref{eq:RBF-interpol}.
Hence, the corresponding RBF-CF is defined as
\begin{equation}\label{eq:RBF-CRs_def}
C_N[f] := I[s_{N,d}f] = \int_{\Omega} (s_{N,d}f)(\boldsymbol{x}) \omega(\boldsymbol{x}) \, \mathrm{d} \boldsymbol{x}.
\end{equation}
When formulated w.\,r.\,t.\ the cardinal functions $c_n$, $n=1,\dots,N$, we get
\begin{equation}\label{eq:RBF-CRs}
C_N[f] = \sum_{n=1}^N w_n f(\mathbf{x}_n)
\quad \text{with} \quad w_n = I[c_n].
\end{equation}
That is, the RBF cubature weights $\mathbf{w}$ are given by the moments corresponding to the cardinal functions.
This formulation is often preferred over \cref{eq:RBF-CRs_def} since the cubature weights $\mathbf{w}$ do not have to be recomputed when another function is considered.
In our implementation, we compute the RBF cubature weights by solving the linear system
\begin{equation}\label{eq:LS_weights}
\underbrace{\begin{bmatrix} \Phi & P \\ P^T & 0 \end{bmatrix}}_{= A}
\begin{bmatrix} \mathbf{w} \\ \mathbf{v} \end{bmatrix}
=
\begin{bmatrix} \mathbf{m}^{\text{RBF}} \\ \mathbf{m}^{\text{poly}} \end{bmatrix},
\end{equation}
where $\mathbf{v} \in \mathbb{R}^K$ is an auxiliary vector.
Furthermore, the vectors ${\mathbf{m}^{\text{RBF}} \in \mathbb{R}^N}$ and ${\mathbf{m}^{\text{poly}} \in \mathbb{R}^K}$ contain the moments of the translated kernels and polynomial basis functions, respectively.
That is,
\begin{equation}
\begin{aligned}
\mathbf{m}^{\text{RBF}} & = \left[ I[\varphi_1], \dots, I[\varphi_N] \right]^T, \\
\mathbf{m}^{\text{poly}} & = \left[ I[p_1], \dots, I[p_K] \right]^T,
\end{aligned}
\end{equation}
with $\varphi_n(\boldsymbol{x}) = \varphi( \varepsilon_{n} \| \boldsymbol{x} - \mathbf{x}_n \|_2 )$.
The moments of different RBFs can be found in \cref{sec:app_moments} and references listed there.
The moments of polynomials for different domains $\Omega$ can be found in the literature, e.\,g., \cite[Appendix A]{glaubitz2020stableCFs} and \cite{folland2001integrate,lasserre2021simple}.
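As an illustration of \cref{eq:LS_weights} in one dimension, the sketch below computes Gaussian RBF-CF weights on $\Omega = [0,1]$ with $\omega \equiv 1$; the kernel moments follow from the error function, while the function names and parameter choices are ours.
\begin{verbatim}
import numpy as np
from scipy.special import erf

def rbf_cf_weights(x, eps, d):
    N, K = len(x), d + 1
    Phi = np.exp(-(eps * (x[:, None] - x[None, :]))**2)
    P = x[:, None]**np.arange(K)
    A = np.block([[Phi, P], [P.T, np.zeros((K, K))]])
    # moments: int_0^1 exp(-eps^2 (t - x_n)^2) dt and int_0^1 t^k dt
    m_rbf = np.sqrt(np.pi)/(2*eps) * (erf(eps*(1 - x)) + erf(eps*x))
    m_poly = 1.0 / (np.arange(K) + 1)
    sol = np.linalg.solve(A, np.concatenate([m_rbf, m_poly]))
    return sol[:N]                        # discard the auxiliary vector v

w = rbf_cf_weights(np.linspace(0, 1, 20), eps=10.0, d=1)
print(w.sum())                            # close to |Omega| = 1
\end{verbatim}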
\section{Stability and the Lebesgue Constant}
\label{sec:stability}
In this section, we address stability of RBF interpolants and the corresponding RBF-CFs.
In particular, we show that both can be estimated in terms of the famous Lebesgue constant.
That said, we also demonstrate that RBF-CFs often come with improved stability compared to RBF interpolation.
\subsection{Stability and Accuracy of Cubature Formulas}
\label{sub:stability_CFs}
We start by addressing stability and accuracy of RBF-CFs.
To this end, let us denote the best approximation of $f$ from $\mathcal{S}_{N,d}$ in the $L^\infty$-norm by $\hat{s}$.
That is,
\begin{equation}\label{eq:Lebesgue}
\hat{s} = \argmin_{s \in \mathcal{S}_{N,d}} \norm{ f - s }_{L^{\infty}(\Omega)}
\quad \text{with} \quad
\norm{ f - s }_{L^{\infty}(\Omega)} = \sup_{\mathbf{x} \in \Omega} | f(\mathbf{x}) - s(\mathbf{x}) |.
\end{equation}
Note that this best approximation w.\,r.\,t.\ the $L^\infty$-norm is not necessarily equal to the RBF interpolant.
Still, the following error bound holds for the RBF-CF \cref{eq:RBF-CRs}, that corresponds to exactly integrating the RBF interpolant from $\mathcal{S}_{N,d}$:
\begin{equation}\label{eq:L-inequality}
\begin{aligned}
| C_N[f] - I[f] |
\leq \left( \| I \|_{\infty} + \| C_N \|_{\infty} \right) \inf_{ s \in \mathcal{S}_{N,d} } \norm{ f - s }_{L^{\infty}(\Omega)}
\end{aligned}
\end{equation}
Inequality \cref{eq:L-inequality} is commonly known as the Lebesgue inequality; see, e.\,g., \cite{van2020adaptive} or \cite[Theorem 3.1.1]{brass2011quadrature}.
It is most often encountered in the context of polynomial interpolation \cite{brutman1996lebesgue,ibrahimoglu2016lebesgue}, but straightforwardly carries over to numerical integration.
In this context, the operator norms $\| I \|_{\infty}$ and $\|C_N\|_{\infty}$ are respectively given by $\| I \|_{\infty} = I[1]$ and
\begin{equation}\label{eq:stab_measure}
\| C_N \|_{\infty}
= \sum_{n=1}^N |w_n|
= \sum_{n=1}^N | I[c_n] |.
\end{equation}
Recall that the $c_n$'s are the cardinal functions (see \cref{sub:prelim_RBFs}).
In fact, $\| C_N \|_{\infty}$ is a common stability measure for CFs.
This is because the propagation of input errors, e.\,g., due to noise or rounding errors, can be bounded as follows:
\begin{equation}
| C_N[f] - C_N[\tilde{f}] |
\leq \| C_N \|_{\infty} \| f - \tilde{f} \|_{L^\infty}
\end{equation}
That is, input errors are amplified at most by a factor that is equal to the operator norm $\| C_N \|_{\infty}$.
At the same time, we have a lower bound for $\| C_N \|_{\infty}$ given by
\begin{equation}
\| C_N \|_{\infty}
\geq C_N[1],
\end{equation}
where equality holds if and only if all cubature weights are nonnegative.
This is the reason why the construction of CFs is mainly devoted to nonnegative CFs.
\begin{definition}[Stability]
We call the RBF-CF $C_N$ \emph{stable} if $\| C_N \|_{\infty} = C_N[1]$ holds.
This is the case if and only if $I[c_n] \geq 0$ for all cardinal functions $c_n$, $n=1,\dots,N$.
\end{definition}
It is also worth noting that $C_N[1] = \| I \|_{\infty}$ if the CF is exact for constants.
For RBF-CFs, this is the case if at least constants are included in the underlying RBF interpolant ($d \geq 0$).
Summarizing the above discussion originating from the Lebesgue inequality \cref{eq:L-inequality}, we have a two-fold goal when using RBF-CFs.
On the one hand, the data points, the kernel, the shape parameter, and the basis of polynomials should be chosen such that $\mathcal{S}_{N,d}$ provides a best approximation to $f$ in the $L^\infty$-norm that is as accurate as possible.
On the other hand, to ensure stability, $\|C_N\|_{\infty}$ should be as small as possible.
That is, $I[c_n] \geq 0$ for all cardinal functions $c_n \in \mathcal{S}_{N,d}$.
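In practice, checking stability of a computed weight vector is a one-liner; a small sketch, where the weights shown are hypothetical:
\begin{verbatim}
import numpy as np

def stability_measure(w):
    # ||C_N||_inf = sum_n |w_n|; equals C_N[1] = sum_n w_n
    # if and only if all weights are nonnegative.
    return np.abs(w).sum()

w = np.array([0.030, 0.052, -0.001, 0.048])   # hypothetical weights
print(stability_measure(w), w.sum())          # 0.131 vs. 0.129
\end{verbatim}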
\subsection{Stability of RBF Approximations}
\label{sub:stability_RBFs}
We now demonstrate how the stability of RBF-CFs can be connected to the stability of the corresponding RBF interpolant.
Indeed, the stability measure $\| C_N \|_{\infty}$ can be bounded from above by
\begin{equation}
\| C_N \|_{\infty}
\leq \| I \|_{\infty} \Lambda_N,
\quad \text{with} \quad
\Lambda_N := \sup_{\mathbf{x} \in \Omega} \sum_{n=1}^N | c_n(\mathbf{x}) |.
\end{equation}
Here, $\Lambda_N$ is the Lebesgue constant corresponding to the recovery process $f \mapsto s_{N,d}f$ (RBF interpolation).
Obviously, $\Lambda_N \geq 1$.
Note that if $1 \in \mathcal{S}_{N,d}$ (the RBF-CF is exact for constants), we therefore have
\begin{equation}\label{eq:stab_eq1}
\| I \|_{\infty}
\leq \| C_N \|_{\infty}
\leq \| I \|_{\infty} \Lambda_N.
\end{equation}
Hence, the RBF-CF is stable ($\| C_N \|_{\infty} = \| I \|_{\infty}$) if $\Lambda_N$ is minimal ($\Lambda_N=1$).
We briefly note that the inequality $\| C_N \|_{\infty} \leq \| I \|_{\infty} \Lambda_N$ is sharp, as the following \cref{ex:sharp} demonstrates.
\begin{example}[$\|C_N\|_{\infty} = \Lambda_N$]\label{ex:sharp}
Let us consider the one-dimensional domain $\Omega = [0,1]$ with $\omega \equiv 1$, which immediately implies $\| I \|_{\infty} = 1$.
In \cite{bos2008univariate} it was shown that for the linear PHS $\varphi(r) = r$ and data points $0 = x_1 < x_2 < \dots < x_N = 1$ the corresponding cardinal functions $c_m$ are simple hat functions.
In particular, $c_m$ is the ordinary ``connect the dots'' piecewise linear interpolant of the data pairs $(x_n,\delta_{nm})$, $n=1,\dots,N$.
Thus, $\Lambda_N = 1$.
At the same time, this yields $\|C_N\|_{\infty} = 1$ and therefore
$\|C_N\|_{\infty} = \Lambda_N$.
\end{example}
Looking for minimal Lebesgue constants is a classical problem in recovery theory.
For instance, it is well known that for polynomial interpolation even near-optimal sets of data points yield a Lebesgue constant that grows as $\mathcal{O}(\log N)$ in one dimension and as $\mathcal{O}(\log^2 N)$ in two dimensions; see \cite{brutman1996lebesgue,bos2006bivariate,bos2007bivariate,ibrahimoglu2016lebesgue} and references therein.
In the case of RBF interpolation, the Lebesgue constant and appropriate data point distributions were studied in \cite{iske2003approximation,de2003optimal,mehri2007lebesgue,de2010stability} and many more works.
That said, the second inequality in \cref{eq:stab_eq1} also tells us that in some cases we can expect the RBF-CF to have superior stability properties compared to the underlying RBF interpolant.
In fact, this might not come as a surprise since integration is well-known to have a smoothing (stabilizing) effect in a variety of different contexts.
Finally, it should be stressed that \cref{eq:stab_eq1} only holds if $1 \in \mathcal{S}_{N,d}$.
In general,
\begin{equation}\label{eq:stab_eq2}
C_N[1]
\leq \| C_N \|_{\infty}
\leq \| I \|_{\infty} \Lambda_N.
\end{equation}
Still, this indicates that a recovery space $\mathcal{S}_{N,d}$ is desired that yields a small Lebesgue constant as well as the RBF-CF potentially having superior stability compared to RBF interpolation.
\section{Theoretical Stability, Numerical Conditioning, and Robustness}
\label{sec:initial}
In this section, we report on two important observations.
The first being that in many cases we find RBF-CFs to have superior stability properties compared to the corresponding RBF interpolants.
That is, we show that most often a strict inequality, $\| C_N \|_{\infty} < \| I \|_{\infty} \Lambda_N$, holds for the second inequality in \cref{eq:stab_eq1}.
Second, we emphasize the importance of distinguishing between \emph{theoretical stability} (the CF having nonnegative weights only) and overall \emph{robustness} of the CF.
The latter is not just influenced by the theoretical stability---assuming exact arithmetic---but also incorporates the effect of numerical conditioning.
In particular, the cubature weights $\mathbf{w}$ are computed by numerically solving the linear system \cref{eq:LS_weights}.
On a computer, this is always done in some finite arithmetic which inevitably results in rounding errors.
Such rounding errors can also propagate into the cubature weights $\mathbf{w}$ and, depending on the conditioning of the coefficient matrix $A$, might cause the RBF-CF to decrease in robustness.
That said, our findings below indicate that despite the matrix $A$ often having potentially prohibitively high condition numbers, the numerical computation of the cubature weights $\mathbf{w}$ still yields accurate results.
Henceforth, for sake of simplicity, we assume $\omega \equiv 1$.
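The flavor of this conditioning issue is easy to reproduce; a sketch for the Gaussian kernel with a constant term, where all parameter choices are illustrative:
\begin{verbatim}
import numpy as np

x = np.linspace(0, 1, 20)                 # N = 20 equidistant points
for eps in [100.0, 10.0, 1.0, 0.1]:
    Phi = np.exp(-(eps * (x[:, None] - x[None, :]))**2)
    P = np.ones((len(x), 1))              # d = 0: constant term only
    A = np.block([[Phi, P], [P.T, np.zeros((1, 1))]])
    print(eps, np.linalg.cond(A))         # grows rapidly as eps -> 0
\end{verbatim}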
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{%
plots/stab_G_N20_equid_noPol}
\caption{Pure RBF interpolant/CF ($d=-1$)}
\label{fig:stab_G_noPol}
\end{subfigure}%
~
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{%
plots/stab_G_N20_equid_d0}
\caption{Augmented by a constant ($d=0$)}
\label{fig:stab_G_d0}
\end{subfigure}%
\\
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{%
plots/stab_G_N20_equid_d1}
\caption{Augmented by a linear term ($d=1$)}
\label{fig:stab_G_d1}
\end{subfigure}%
~
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{%
plots/cond_G_N20_equid}
\caption{Condition numbers}
\label{fig:stab_G_cond}
\end{subfigure}%
%
\caption{
A comparison of the stability measure $\|C_N\|_{\infty}$, the Lebesgue constant $\Lambda_N$, and the condition number $\cond(A)$ for the Gaussian kernel.
$N=20$ equidistant data points were considered, while the shape parameter $\varepsilon$ was allowed to vary.
Note that for the pure RBF interpolant/CF ($d=-1$), the optimal stability measure is $C_N[1]$ rather than $\|I\|_{\infty} = 1$.
}
\label{fig:stab_G}
\end{figure}
We start by demonstrating that RBF-CFs can, in many cases, have superior stability properties compared to RBF interpolants.
\cref{fig:stab_G} illustrates this for $\Omega = [0,1]$ and a Gaussian kernel $\varphi(r) = \exp( - \varepsilon^2 r^2 )$.
The corresponding RBF approximation was augmented with no polynomial terms (\cref{fig:stab_G_noPol}), a constant term (\cref{fig:stab_G_d0}), or a linear term (\cref{fig:stab_G_d1}).
See the caption of \cref{fig:stab_G} for more details.
The following observations can be made based on the results presented in \cref{fig:stab_G}:
(1) RBF-based integration can be distinctly more stable than RBF-based interpolation.
This is indicated by the stability measure $\| C_N \|_{\infty}$ often being smaller than the Lebesgue constant $\Lambda_N$.
(2) Finding stable (nonnegative) RBF-CFs is a nontrivial task.
Even though, in the tests presented here, we can observe certain regions of stability w.\,r.\,t.\ the shape parameter $\varepsilon$, it is not clear how to theoretically quantify the boundary of this region.
A first step towards such an analysis is presented in \cref{sec:compact} for compactly supported RBFs.
Further results in this direction would be of great interest.
(3) There are two potential sources for negative weights, causing $\| C_N \|_{\infty} > C_N[1]$ and the RBF-CF to become sensitive towards input errors.
On one hand, this can be caused by one (or multiple) of the cardinal functions having a negative moment.
This is what we previously referred to as ``theoretical instability''.
On the other hand, negative weights might also be caused by numerical ill-conditioning of the coefficient matrix $A$ in the linear system \cref{eq:LS_weights} that is numerically solved to compute the cubature weights.
In fact, we can observe such numerical ill-conditioning in \cref{fig:stab_G_noPol} and \cref{fig:stab_G_d0}.
In these figures, we have $\| C_N \|_{\infty} > \|I\|_{\infty} \Lambda_N$ (note that $\|I\|_{\infty} = 1$) for $\varepsilon \approx 10^{-2}$.
Theoretically---assuming error-free computations---this should not happen.
In accordance with this, \cref{fig:stab_G_cond} illustrates that in all cases ($d=-1,0,1$) the condition number of the matrix $A$, $\cond(A)$, reaches the upper bound of double precision arithmetic ($\approx 10^{16}$) for $\varepsilon$ close to $10^0$.
\begin{remark}[The Uncertainty Principle for Direct RBF Methods]
Severe ill-conditioning of $A$ for flat RBFs (small shape parameters $\varepsilon$) is a well-known phenomenon in the RBF community.
At the same time, one often finds that the best accuracy for an RBF interpolant is achieved when $\varepsilon$ is small.
This so-called \emph{uncertainty} or \emph{trade-off principle} of (direct) RBF methods was first formulated in \cite{schaback1995error}.
Unfortunately, it has contributed to a widespread misconception that numerical ill-conditioning is unavoidable for flat RBFs.
It should be stressed that the uncertainty principle is specific to the direct RBF approach \cite{driscoll2002interpolation,fornberg2004some,larsson2005theoretical,schaback2005multivariate}.
That is, when $A$ is formulated w.\,r.\,t.\ the basis consisting of the translated RBFs, as described in \cref{eq:Phi_P}.
Indeed, by now, numerous works have demonstrated that severe ill-conditioning of $A$ for flat RBFs can be remedied by formulating $A$ and the linear system \cref{eq:LS_weights} w.\,r.\,t.\ certain more stable bases spanning the RBF space $\mathcal{S}_{N,d}$.
See \cite{muller2009newton,pazouki2011bases,fornberg2011stable,fasshauer2012stable,de2013new,fornberg2013stable,wright2017stable} and references therein.
However, it should be noted that the linear system \cref{eq:LS_weights} used to determine the cubature weights of the RBF-CF requires knowledge of the moments of the basis that is used to formulate $A$.
This might be a potential bottleneck for some of the above-listed approaches.
A detailed discussion of how the moments of stable bases of $\mathcal{S}_{N,d}$ can be determined would therefore be of interest.
\end{remark}
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/precision_G_N20_equid_noPol}
\caption{Pure RBF ($d=-1$)}
\label{fig:precision_G_noPol}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/precision_G_N20_equid_d0}
\caption{Constant ($d=0$)}
\label{fig:precision_G_d0}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/precision_G_N20_equid_d1}
\caption{Linear term ($d=1$)}
\label{fig:precision_G_d1}
\end{subfigure}%
%
\caption{
Comparison of the stability measure $\|C_N\|_{\infty}$ for different computational precisions.
Considered are double (64 bits), quadruple (128 bits), and octuple (256 bits) precision.
In all cases $N=20$ equidistant data points and the Gaussian kernel were used.
The corresponding RBF interpolant either included no polynomial terms ($d=-1$), a constant ($d=0$) or a linear ($d=1$) term.
}
\label{fig:precision_G}
\end{figure}
The results presented in \cref{fig:stab_G} were obtained by the direct RBF method.
One may therefore wonder to which extent the observed instabilities are influenced by numerical ill-conditioning.
To address this question, we have repeated the same test with increased computational precision using the function \emph{vpa} in MATLAB.
\cref{fig:precision_G} provides a comparison of the stability measure $\|C_N\|_{\infty}$ computed in double (64 bits), quadruple (128 bits), and octuple (256 bits) precision.
Despite $A$ being highly ill-conditioned, the results for quadruple precision might be considered ``close'' to the ones for usual double precision.
In addition, further increasing the precision from quadruple to octuple precision does not seem to change the results---at least not by the naked eye.
These results agree with the often reported observation that using stable solvers leads to useful results and well-behaved RBF interpolants even in the case of unreasonably large condition numbers.
Indeed, the observed instabilities of RBF-CFs cannot be explained by numerical ill-conditioning alone.
Rather, our results indicate that numerical ill-conditioning only amplifies already existing (theoretical) instabilities in the RBF-CF.
\section{Compactly Supported Radial Basis Functions}
\label{sec:compact}
There is a rich body of literature on stability results for CFs based on (algebraic and trigonometric) polynomials, including \cite{haber1970numerical,stroud1971approximate,brass1977quadraturverfahren,engels1980numerical,cools1997constructing,krommer1998computational,krylov2006approximate,davis2007methods,brass2011quadrature} and the many references therein.
In comparison, provable results on the stability of RBF-CFs are rarely encountered in the literature, despite their increased use in applications.
Here, our goal is to pave the way towards a more mature stability theory for these.
As a first step in this direction, we next prove stability of RBF-CFs for compactly supported kernels with nonoverlapping supports.
To be more precise, we subsequently consider RBFs $\varphi: \mathbb{R}_0^+ \to \mathbb{R}$ satisfying the following restrictions:
\begin{enumerate}[label=(R\arabic*)]
\item \label{item:R1}
$\varphi$ is nonnegative, i.\,e., $\varphi \geq 0$.
\item \label{item:R2}
$\varphi$ is uniformly bounded.
W.\,l.\,o.\,g.\ we assume $\max_{r \in \mathbb{R}_0^+} |\varphi(r)| = 1$.
\item \label{item:R3}
$\varphi$ is compactly supported.
W.\,l.\,o.\,g.\ we assume $\operatorname{supp} \varphi = [0,1]$.
\end{enumerate}
Already note that \ref{item:R3} implies $\operatorname{supp} \varphi_n = B_{\varepsilon_{n}^{-1}}(\mathbf{x}_n)$, where
\begin{equation}
B_{\varepsilon_{n}^{-1}}(\mathbf{x}_n) := \{ \, \mathbf{x} \in \Omega \mid \| \mathbf{x}_n - \mathbf{x} \|_2 \leq \varepsilon_{n}^{-1} \, \},
\quad
\varphi_n(\boldsymbol{x}) := \varphi( \varepsilon_{n} \| \mathbf{x}_n - \boldsymbol{x} \|_2 ).
\end{equation}
Clearly, the $\varphi_n$'s will have nonoverlapping supports if the shape parameters $\varepsilon_{n}$ are sufficiently large.
In particular, the following condition ensures that every data point lies in the support of its own basis function only, i.\,e., $\varphi_n(\mathbf{x}_m) = \delta_{mn}$:
\begin{equation}\label{eq:R4}
\varepsilon_{n}^{-1} \leq h_{n}
:= \min\left\{ \, \| \mathbf{x}_n - \mathbf{x}_m \|_2 \mid \mathbf{x}_m \in X \setminus \{\mathbf{x}_n\} \, \right\},
\quad n=1,\dots,N
\end{equation}
Here, $X$ denotes the set of data points.
The different basis functions having nonoverlapping supports might seem to be a fairly restrictive sufficient condition.
However, our numerical tests presented in \cref{sec:numerical} indicate that this condition does not seem to be ``far away'' from being necessary as well.
This might be considered as a discouraging result for the utility of compactly supported RBFs in the context of numerical integration.
Finally, it should be pointed out that throughout this section, we assume $\omega \equiv 1$.
This assumption is made for the main result, \cref{thm:main}, to hold.
Its role will become clearer after consulting the proof of \cref{thm:main} and is revisited in \cref{rem:omega}.
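A small helper illustrating \cref{eq:R4}: it computes the distances $h_n$ to the nearest neighbor and returns the smallest admissible shape parameters $\varepsilon_n = 1/h_n$; the function name and the random point set are our illustrative choices.
\begin{verbatim}
import numpy as np

def min_shape_parameters(X):
    # X: (N, D) data points; returns the smallest eps_n with
    # eps_n^{-1} <= h_n, i.e., eps_n = 1 / h_n.
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)
    h = dist.min(axis=1)              # distance to nearest neighbor
    return 1.0 / h

X = np.random.default_rng(0).random((50, 2))   # 50 points in [0,1]^2
print(min_shape_parameters(X)[:5])
\end{verbatim}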
\subsection{Main Results}
\label{sub:compact_main}
Our main result is the following \cref{thm:main}.
It states that RBF-CFs are conditionally stable for any polynomial degree $d \in \mathbb{N}$, provided that the number of (equidistributed) data points, $N$, is sufficiently large relative to $d$.
\begin{theorem}[Conditional Stability of RBF-CFs]\label{thm:main}
Let $(\mathbf{x}_n)_{n \in \mathbb{N}}$ be an equidistributed sequence in $\Omega$ and $X_N = \{ \mathbf{x}_n \}_{n=1}^N$.
Furthermore, let $\omega \equiv 1$, let $\varphi: \mathbb{R}_0^+ \to \mathbb{R}$ be a RBF satisfying \ref{item:R1} to \ref{item:R3}, and choose the shape parameters $\varepsilon_n$ such that the corresponding functions $\varphi_n$ have nonoverlapping support and equal moments ($I[\varphi_n] = I[\varphi_m]$ for all $n,m=1,\dots,N$).
For every polynomial degree $d \in \mathbb{N}$ there exists an $N_0 \in \mathbb{N}$ such that for all $N \geq N_0$ the corresponding RBF-CF \cref{eq:RBF-CRs} is stable.
That is, $I[c_m] \geq 0$ for all $m=1,\dots,N$.
\end{theorem}
The proof of \cref{thm:main} is given in \cref{sub:compact_proof} after collecting a few preliminarily results.
Note that a sequence $(\mathbf{x}_n)_{n \in \mathbb{N}}$ is \emph{equidistributed in $\Omega$} if and only if
\begin{equation}
\lim_{N \to \infty} \frac{|\Omega|}{N} \sum_{n=1}^N g(\mathbf{x}_n)
= \int_{\Omega} g(\boldsymbol{x}) \, \mathrm{d} \boldsymbol{x}
\end{equation}
holds for all measurable bounded functions $g: \Omega \to \mathbb{R}$ that are continuous almost everywhere (in the sense of Lebesgue), see \cite{weyl1916gleichverteilung}.
For details on equidistributed sequences, we refer to the monograph \cite{kuipers2012uniform}.
Still, it should be noted that equidistributed sequences are dense sequences with a special ordering.
In particular, if $(\mathbf{x}_n)_{n \in \mathbb{N}} \subset \Omega$ is equidistributed, then for every $d \in \mathbb{N}$ there exists an $N_0 \in \mathbb{N}$ such that $X_N$ is $\mathbb{P}_d(\Omega)$-unisolvent for all $N \geq N_0$; see \cite{glaubitz2021construction}.
This ensures that the corresponding RBF interpolant is well-defined.
It should also be noted that if $\Omega \subset \mathbb{R}^D$ is bounded and has a boundary of measure zero (again in the sense of Lebesgue), then an equidistributed sequence in $\Omega$ is induced by every equidistributed sequence in the $D$-dimensional hypercube.
Since $\Omega$ is bounded, we can find an $R > 0$ such that $\Omega \subset [-R,R]^D$.
Let $(\mathbf{y}_n)_{n \in \mathbb{N}}$ be an equidistributed sequence in $[-R,R]^D$.\footnote{Examples for such sequences include certain equidistant, (scaled and translated) Halton \cite{halton1960efficiency} or some other low-discrepancy points \cite{hlawka1961funktionen,niederreiter1992random,caflisch1998monte,dick2013high}.}
Next, define $(\mathbf{x}_n)_{n \in \mathbb{N}}$ as the subsequence of $(\mathbf{y}_n)_{n \in \mathbb{N}} \subset [-R,R]^D$ that only contains the points inside of $\Omega$.
It was shown in \cite{glaubitz2021construction} that this results in $(\mathbf{x}_n)_{n \in \mathbb{N}}$ being equidistributed in $\Omega$ if $\partial \Omega$ is of measure zero.
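This construction is straightforward to implement; a sketch using scipy's Halton sequence with the unit disk as $\Omega$, where the domain and function names are our illustrative choices:
\begin{verbatim}
import numpy as np
from scipy.stats import qmc

def equidistributed_in_disk(N):
    # Keep the Halton points of [-1,1]^2 that fall in the unit disk.
    sampler = qmc.Halton(d=2, scramble=False)
    pts = []
    while len(pts) < N:
        y = qmc.scale(sampler.random(1), [-1, -1], [1, 1])[0]
        if np.linalg.norm(y) <= 1.0:
            pts.append(y)
    return np.array(pts)

X = equidistributed_in_disk(200)
# sanity check: (|Omega|/N) sum_n g(x_n) -> int_Omega g, here g(x,y) = x^2
print(np.pi / len(X) * np.sum(X[:, 0]**2))     # int = pi/4 ~ 0.785
\end{verbatim}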
\subsection{Explicit Representation of the Cardinal Functions}
\label{sub:compact_explicit}
In preparation of proving \cref{thm:main} we derive an explicit representation for the cardinal functions $c_n$ under the restrictions \ref{item:R1} to \ref{item:R3} and \cref{eq:R4}.
In particular, we make use of the concept of discrete orthogonal polynomials.
Let us define the following discrete inner product corresponding to the data points $X_N = \{\mathbf{x}_n\}_{n=1}^N$:
\begin{equation}\label{eq:discrete_scp}
[u,v]_{X_N} = \frac{|\Omega|}{N} \sum_{n=1}^N u(\mathbf{x}_n) v(\mathbf{x}_n)
\end{equation}
Recall that the data points $X_N$ come from an equidistributed sequence and are therefore ensured to be $\mathbb{P}_d(\Omega)$-unisolvent for any degree $d \in \mathbb{N}$ if a sufficiently large number of data points is used.
In this case, \cref{eq:discrete_scp} is ensured to be positive definite on $\mathbb{P}_d(\Omega)$.
We say that the basis $\{p_k\}_{k=1}^K$ of $\mathbb{P}_d(\Omega)$, where $K = \dim \mathbb{P}_d(\Omega)$, consists of \emph{discrete orthogonal polynomials (DOPs)} if they satisfy
\begin{equation}
[p_k,p_l]_{X_N} = \delta_{kl} :=
\begin{cases}
1 & \text{ if } k=l, \\
0 & \text{ otherwise},
\end{cases}
\quad k,l=1,\dots,K.
\end{equation}
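Numerically, a DOP basis can be generated from a QR factorization of the Vandermonde matrix; a one-dimensional sketch with the monomials as initial basis, where both choices and the function name are ours:
\begin{verbatim}
import numpy as np

def dop_coefficients(x, d, vol=1.0):
    # Returns C such that p_k = sum_j C[j, k] x^j satisfies
    # (vol/N) sum_n p_k(x_n) p_l(x_n) = delta_kl (vol = |Omega|).
    N = len(x)
    V = x[:, None]**np.arange(d + 1)      # Vandermonde matrix
    Q, R = np.linalg.qr(V)                # Q^T Q = I
    return np.linalg.solve(R, np.eye(d + 1)) * np.sqrt(N / vol)

x = np.linspace(0, 1, 100)
C = dop_coefficients(x, d=2)
P = (x[:, None]**np.arange(3)) @ C
print(np.round((1.0 / len(x)) * P.T @ P, 12))  # identity matrix
\end{verbatim}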
We now come to the desired explicit representation for the cardinal functions $c_m$.
\begin{lemma}[Explicit Representation for $c_m$]\label{lem:rep_cm}
Let the RBF $\varphi: \mathbb{R}_0^+ \to \mathbb{R}$ satisfy \ref{item:R2} and \ref{item:R3}.
Furthermore, choose the shape parameters $\varepsilon_n$ such that the corresponding functions $\varphi_n$ have nonoverlapping support and let the basis $\{p_k\}_{k=1}^K$ consists of DOPs.
Then, the cardinal function $c_m$, $m=1,\dots,N$, is given by
\begin{equation}\label{eq:rep_cm}
\begin{aligned}
c_m(\boldsymbol{x})
= \varphi_m(\boldsymbol{x})
- \frac{|\Omega|}{N} \sum_{n=1}^N \left( \sum_{k=1}^K p_k(\mathbf{x}_m) p_k(\mathbf{x}_n) \right) \varphi_n(\boldsymbol{x})
+ \frac{|\Omega|}{N} \sum_{k=1}^K p_k(\mathbf{x}_m) p_k(\boldsymbol{x}).
\end{aligned}
\end{equation}
\end{lemma}
\begin{proof}
Let $m,n \in \{1,\dots,N\}$.
The restrictions \ref{item:R2} and \ref{item:R3} together with the assumption of the $\varphi_n$'s having nonoverlapping support yield $\varphi_n(\mathbf{x}_m) = \delta_{mn}$.
Hence, \cref{eq:cardinal} and \cref{eq:cond_cardinal} imply
\begin{equation}\label{eq:alpha}
\alpha_n^{(m)} = \delta_{mn} - \sum_{k=1}^K \beta^{(m)}_k p_k(\mathbf{x}_n).
\end{equation}
If we substitute \cref{eq:alpha} into \cref{eq:cond2}, we get
\begin{equation}
p_l(\mathbf{x}_m) - \frac{N}{|\Omega|} \sum_{k=1}^K \beta^{(m)}_k [p_k,p_l]_{X_N} = 0,
\quad l=1,\dots,K.
\end{equation}
Thus, if $\{p_k\}_{k=1}^K$ consists of DOPs, this gives us
\begin{equation}\label{eq:beta}
\beta^{(m)}_l = \frac{N}{|\Omega|} p_l(\mathbf{x}_m), \quad l=1,\dots,K.
\end{equation}
Finally, substituting \cref{eq:beta} into \cref{eq:alpha} yields
\begin{equation}
\alpha_n^{(m)} = \delta_{mn} - \frac{N}{|\Omega|} \sum_{k=1}^K p_k(\mathbf{x}_m) p_k(\mathbf{x}_n)
\end{equation}
and therefore the assertion.
\end{proof}
We stress that using a basis consisting of DOPs is not necessary for the implementation of RBF-CFs.
In fact, the cubature weights are, ignoring computational considerations, independent of the polynomial basis elements w.\,r.\,t.\ which the matrix $P$ and the corresponding moments $\mathbf{m}^{\text{poly}}$ are formulated.
We only use DOPs as a theoretical tool---a convenient perspective on the problem at hand\footnote{For example, many properties of interpolation polynomials are shown by representing these w.\,r.\,t.\ the Lagrange basis, while this representation is often not recommended for actual computations.}---to show stability of RBF-CFs.
\subsection{Some Low Hanging Fruits}
\label{sub:compact_low}
Using the explicit representation \cref{eq:rep_cm} it is trivial to prove stability of RBF-CFs when no polynomial term or only a constant is included in the RBF interpolant.
\begin{lemma}[No Polynomials]
Let the RBF $\varphi: \mathbb{R}_0^+ \to \mathbb{R}$ satisfy \ref{item:R1} to \ref{item:R3} and choose the shape parameters $\varepsilon_n$ such that the corresponding functions $\varphi_n$ have nonoverlapping support.
Assume that no polynomials are included in the corresponding RBF interpolant ($K=0$).
Then, the associated RBF-CF is stable, i.e., $I[c_m] \geq 0$ for all $m=1,\dots,N$.
\end{lemma}
\begin{proof}
It is obvious that $c_m(\boldsymbol{x}) = \varphi_m(\boldsymbol{x})$.
Thus, by restriction \ref{item:R1}, $c_m$ is nonnegative and therefore $I[c_m] \geq 0$.
\end{proof}
\begin{lemma}[Only a Constant]
Let the RBF $\varphi: \mathbb{R}_0^+ \to \mathbb{R}$ satisfy \ref{item:R1} to \ref{item:R3} and choose the shape parameters $\varepsilon_n$ such that the corresponding functions $\varphi_n$ have nonoverlapping support.
Assume that only a constant is included in the corresponding RBF interpolant ($d=0$ or $K=1$).
Then, the associated RBF-CF is stable, i.e., $I[c_m] \geq 0$ for all $m=1,\dots,N$.
\end{lemma}
\begin{proof}
Let $m \in \{1,\dots,N\}$.
If we choose $p_1 \equiv |\Omega|^{-1/2}$, \cref{lem:rep_cm} yields
\begin{equation}
c_m(\boldsymbol{x})
= \varphi_m(\boldsymbol{x})
+ \frac{1}{N} \left( 1 - \sum_{n=1}^N \varphi_n(\boldsymbol{x}) \right).
\end{equation}
Note that by \ref{item:R2}, \ref{item:R3}, and \cref{eq:R4}, we have $c_m(\boldsymbol{x}) \geq \varphi_m(\boldsymbol{x})$.
Hence, \ref{item:R1} implies the assertion.
\end{proof}
\subsection{Proof of the Main Results}
\label{sub:compact_proof}
The following technical lemma will be convenient to the proof of \cref{thm:main}.
\begin{lemma}\label{lem:technical}
Let $(\mathbf{x}_n)_{n \in \mathbb{N}}$ be equidistributed in $\Omega$, $X_N = \{ \mathbf{x}_n \}_{n=1}^N$, and let $[\cdot,\cdot]_{X_N}$ be the discrete inner product \cref{eq:discrete_scp}.
Furthermore, let $\{ p_k^{(N)} \}_{k=1}^K$ be a basis of $\mathbb{P}_d(\Omega)$ consisting of DOPs w.\,r.\,t.\ $[\cdot,\cdot]_{X_N}$.
Then, for all $k=1,\dots,K$,
\begin{equation}
p_k^{(N)} \to p_k \quad \text{in } L^{\infty}(\Omega), \quad N \to \infty,
\end{equation}
where $\{ p_k \}_{k=1}^K$ is a basis of $\mathbb{P}_d(\Omega)$ consisting of continuous orthogonal polynomials satisfying
\begin{equation}
\int_{\Omega} p_k(\boldsymbol{x}) p_l(\boldsymbol{x}) \, \mathrm{d} \boldsymbol{x}
= \delta_{kl}, \quad k,l=1,\dots,K.
\end{equation}
Moreover, it holds that
\begin{equation}
\lim_{N \to \infty} \int_{\Omega} p_k^{(N)}(\boldsymbol{x}) p_l^{(N)}(\boldsymbol{x}) \, \mathrm{d} \boldsymbol{x} = \delta_{kl}, \quad k,l=1,\dots,K.
\end{equation}
\end{lemma}
\begin{proof}
The assertion is a direct consequence of the results from \cite{glaubitz2020stableCFs}.
\end{proof}
Essentially, \cref{lem:technical} states that if a sequence of discrete inner products converges to a continuous one, then the corresponding DOPs---assuming that the ordering of the basis elements does not change---converge to a basis of continuous orthogonal polynomials.
Furthermore, this convergence also holds in a uniform sense.
We are now able to provide a proof for \cref{thm:main}.
\begin{proof}[Proof of \cref{thm:main}]
Let $d \in \mathbb{N}$ and $m \in \{1,\dots,N\}$.
Under the assumptions of \cref{thm:main}, we have $I[\varphi_n] = I[\varphi_m]$ for all $n=1,\dots,N$.
Thus, \cref{lem:rep_cm} implies
\begin{equation}
I[c_m]
= I[\varphi_m] \left[ 1 - \frac{|\Omega|}{N} \sum_{n=1}^N \sum_{k=1}^K p^{(N)}_k(\mathbf{x}_m) p^{(N)}_k(\mathbf{x}_{n}) \right]
+ \frac{|\Omega|}{N} \sum_{k=1}^K p^{(N)}_k(\mathbf{x}_m) I[p^{(N)}_k].
\end{equation}
Let $\{ p_k^{(N)} \}_{k=1}^K$ be a basis of $\mathbb{P}_d(\Omega)$ consisting of DOPs.
That is, $[p_k^{(N)},p_l^{(N)}]_{X_N} = \delta_{kl}$.
In particular, $p_1^{(N)} \equiv |\Omega|^{-1/2}$.
With this in mind, it is easy to verify that
\begin{equation}\label{eq:omega_proof1}
\begin{aligned}
\frac{|\Omega|}{N} \sum_{n=1}^N \sum_{k=1}^K p^{(N)}_k(\mathbf{x}_m) p^{(N)}_k(\mathbf{x}_n)
= \sum_{k=1}^K p^{(N)}_k(\mathbf{x}_m) |\Omega|^{1/2} [p^{(N)}_k,p^{(N)}_1]_{X_N}
= 1.
\end{aligned}
\end{equation}
Thus, we have
\begin{equation}
I[c_m] \geq 0 \iff
\sum_{k=1}^K p_k^{(N)}(\mathbf{x}_m) I[p_k^{(N)}] \geq 0.
\end{equation}
Finally, observe that
\begin{equation}
\sum_{k=1}^K p_k^{(N)}(\mathbf{x}_m) I[p_k^{(N)}]
= |\Omega|^{1/2} \sum_{k=1}^K p_k^{(N)}(\mathbf{x}_m) \int_{\Omega} p_k^{(N)}(\boldsymbol{x}) p_1^{(N)}(\boldsymbol{x}) \, \mathrm{d} \boldsymbol{x},
\end{equation}
under the assumption that $\omega \equiv 1$.
\cref{lem:technical} therefore implies
\begin{equation}\label{eq:omega_proof2}
\lim_{N \to \infty} \sum_{k=1}^K p_k^{(N)}(\mathbf{x}_m) I[p_k^{(N)}] = 1,
\end{equation}
which completes the proof.
\end{proof}
\begin{remark}[On the Assumption that $\omega \equiv 1$]\label{rem:omega}
The assumption that $\omega \equiv 1$ in \cref{thm:main} is necessary for \cref{eq:omega_proof1} and \cref{eq:omega_proof2} to both hold true.
On the one hand, \cref{eq:omega_proof1} is ensured by the DOPs being orthogonal w.\,r.\,t.\ the discrete inner product \cref{eq:discrete_scp}.
This discrete inner product can be considered as an approximation to the continuous inner product ${\scp{u}{v} = \int_{\Omega} u(\boldsymbol{x}) v(\boldsymbol{x}) \, \mathrm{d} \boldsymbol{x}}$.
This also results in \cref{lem:technical}.
On the other hand, in general, \cref{eq:omega_proof2} only holds if the DOPs converge to a basis of polynomials that is orthogonal w.\,r.\,t.\ the weighted continuous inner product ${\scp{u}{v}_{\omega} = \int_{\Omega} u(\boldsymbol{x}) v(\boldsymbol{x}) \omega(\boldsymbol{x}) \, \mathrm{d} \boldsymbol{x}}$.
Hence, for \cref{eq:omega_proof1} and \cref{eq:omega_proof2} to both hold true at the same time, we have to assume that $\omega \equiv 1$.
In this case, the two continuous inner products are the same.
\end{remark}
\section{On the Connection Between RBF-CFs With and Without Polynomials}
\label{sec:connection}
A natural question in the context of RBFs is what influence the polynomial terms have on the quality of the RBF interpolant and the RBF-CF, beyond ensuring existence of the RBF interpolant.
In particular, in the context of the present work, one might ask ``how do polynomial terms influence stability of the RBF-CF?''.
In what follows, we address this question by showing that---under certain assumptions specified below---at least the asymptotic stability of RBF-CFs is independent of polynomial terms.
We hope this result is another step towards a more mature stability theory for RBF-CFs.
Recently, the following explicit formula for the cardinal functions was derived in \cite{bayona2019insight,bayona2019comparison}.
Let us denote ${\mathbf{c}(\boldsymbol{x}) = [c_1(\boldsymbol{x}),\dots,c_N(\boldsymbol{x})]^T}$, where $c_1,\dots,c_N$ are the cardinal functions spanning $\mathcal{S}_{N,d}$; see \cref{eq:cardinal} and \cref{eq:cond_cardinal}.
Provided that $\Phi$ and $P$ in \cref{eq:Phi_P} have full rank\footnote{
$P$ having full rank means that $P$ has full column rank, i.\,e., the columns of $P$ are linearly independent.
This is equivalent to the set of data points being $\mathbb{P}_d(\Omega)$-unisolvent.
},
\begin{equation}\label{eq:formula_Bayona}
\mathbf{c}(\boldsymbol{x})
= \hat{\mathbf{c}}(\boldsymbol{x})
- B \boldsymbol{\tau}(\boldsymbol{x})
\end{equation}
holds.
Here, $\hat{\mathbf{c}}(\boldsymbol{x}) = [\hat{c}_1(\boldsymbol{x}),\dots,\hat{c}_N(\boldsymbol{x})]^T$ are the cardinal functions corresponding to the pure RBF interpolation without polynomials.
That is, they span $\mathcal{S}_{N,-1}$.
At the same time, $B$ and $\boldsymbol{\tau}$ are defined as
\begin{equation}
B := \Phi^{-1} P \left( P^T \Phi^{-1} P \right)^{-1},
\quad
\boldsymbol{\tau}(\boldsymbol{x}) := P^T \hat{\mathbf{c}}(\boldsymbol{x}) - \mathbf{p}(\boldsymbol{x})
\end{equation}
with ${\mathbf{p}(\boldsymbol{x}) = [p_1(\boldsymbol{x}),\dots,p_K(\boldsymbol{x})]^T}$.
Note that $\boldsymbol{\tau}$ can be interpreted as a residual measuring how well pure RBFs can approximate polynomials up to degree $d$.
Obviously, \cref{eq:formula_Bayona} implies
\begin{equation}\label{eq:relation_weights}
\mathbf{w}
= \hat{\mathbf{w}} - B I[\boldsymbol{\tau}],
\end{equation}
where $\mathbf{w}$ is the vector of cubature weights of the RBF-CF with polynomials ($d \geq 0$).
At the same time, $\hat{\mathbf{w}}$ is the vector of weights corresponding to the pure RBF-CF without polynomial augmentation ($d=-1$).
Moreover, $I[\boldsymbol{\tau}]$ denotes the componentwise application of the integral operator $I$.
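A numerical check of \cref{eq:relation_weights} in one dimension, using the Gaussian kernel so that $\Phi$ is symmetric positive definite; all parameter choices are our illustrative assumptions.
\begin{verbatim}
import numpy as np
from scipy.special import erf

x = np.linspace(0, 1, 15)
eps = 10.0
Phi = np.exp(-(eps * (x[:, None] - x[None, :]))**2)
P = x[:, None]**np.arange(2)              # d = 1: basis {1, x}

# pure RBF weights from Phi w_hat = m_rbf (Phi is symmetric)
m_rbf = np.sqrt(np.pi)/(2*eps) * (erf(eps*(1 - x)) + erf(eps*x))
w_hat = np.linalg.solve(Phi, m_rbf)

# correction B I[tau] with I[tau] = P^T w_hat - m_poly
B = np.linalg.solve(Phi, P) @ np.linalg.inv(P.T @ np.linalg.solve(Phi, P))
I_tau = P.T @ w_hat - np.array([1.0, 0.5])
w = w_hat - B @ I_tau
print(abs(w.sum() - 1.0))    # near zero: augmented CF exact for constants
\end{verbatim}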
It was numerically demonstrated in \cite{bayona2019insight} that for fixed $d \in \mathbb{N}$
\begin{equation}\label{eq:observation_Bayona}
\max_{\boldsymbol{x} \in \Omega} \| B \boldsymbol{\tau}(\boldsymbol{x}) \|_{\ell^\infty} \to 0
\quad \text{as} \quad N \to \infty
\end{equation}
if PHS are used.
Note that, for fixed $\boldsymbol{x} \in \Omega$, $B \boldsymbol{\tau}(\boldsymbol{x})$ is an $N$-dimensional vector and $\| B \boldsymbol{\tau}(\boldsymbol{x}) \|_{\ell^\infty}$ denotes its $\ell^{\infty}$-norm.
That is, the maximum absolute value of the $N$ components.
It should be pointed out that while \cref{eq:observation_Bayona} was numerically demonstrated only for PHS, the relations \cref{eq:formula_Bayona} and \cref{eq:relation_weights} hold for general RBFs, assuming that $\Phi$ and $P$ have full rank.
Please see \cite[Section 4]{bayona2019insight} for more details.
We also remark that \cref{eq:observation_Bayona} implies the weaker statement
\begin{equation}\label{eq:cond}
\| B \boldsymbol{\tau}(\cdot) \|_{\ell^1} \to 0
\ \ \text{in } L^1(\Omega) \quad \text{as} \quad N \to \infty.
\end{equation}
Here, $B \boldsymbol{\tau}(\cdot)$ denotes a vector-valued function, $B \boldsymbol{\tau}: \Omega \to \mathbb{R}^N$.
That is, for a fixed argument $\boldsymbol{x} \in \Omega$, $B \boldsymbol{\tau}(\boldsymbol{x})$ is an $N$-dimensional vector in $\mathbb{R}^N$ and $\| B \boldsymbol{\tau}(\boldsymbol{x}) \|_{\ell^1}$ denotes the usual $\ell^1$-norm of this vector.
Thus, \cref{eq:cond} means that the integral of the $\ell^1$-norm of the vector-valued function $B \boldsymbol{\tau}(\cdot)$ converges to zero as $N \to \infty$.
The above condition is not just weaker than \cref{eq:observation_Bayona} (see \cref{rem:assumption2}), but also more convenient for investigating stability of CFs.
Indeed, we have the following results.
\begin{lemma}\label{lem:connection}
Let $\omega \in L^{\infty}(\Omega)$.
Assume $\Phi$ and $P$ in \cref{eq:Phi_P} have full rank and assume \cref{eq:cond} to hold.
Then the two following statements are equivalent:
\begin{enumerate}
\item[(a)]
$\| \hat{\mathbf{w}} \|_{\ell^1} \to \|I\|_{\infty}$ for $N \to \infty$
\item[(b)]
$\| \mathbf{w} \|_{\ell^1} \to \|I\|_{\infty}$ for $N \to \infty$
\end{enumerate}
That is, either both the pure and the polynomially augmented RBF-CF are asymptotically stable or neither is.
\end{lemma}
A short discussion on the term ``asymptotically stable" is subsequently provided in \cref{rem:asymptotic_stable}.
\begin{proof}
Assume $\Phi$ and $P$ in \cref{eq:Phi_P} have full rank and assume \cref{eq:cond} to hold.
Then \cref{eq:relation_weights} follows and therefore
\begin{equation}\label{eq:connection_proof}
\begin{aligned}
\| \mathbf{w} \|_{\ell^1}
& \leq \| \hat{\mathbf{w}} \|_{\ell^1} + \| BI[\boldsymbol{\tau}] \|_{\ell^1}, \\
\| \hat{\mathbf{w}} \|_{\ell^1}
& \leq \| \mathbf{w} \|_{\ell^1} + \| BI[\boldsymbol{\tau}] \|_{\ell^1}.
\end{aligned}
\end{equation}
Next, note that $BI[\boldsymbol{\tau}] = I[ B \boldsymbol{\tau}]$ and thus
\begin{equation}
\begin{aligned}
\| BI[\boldsymbol{\tau}] \|_{\ell^1}
= \sum_{n=1}^N \left| I[ (B \boldsymbol{\tau})_n ] \right|
\leq I \left[ \sum_{n=1}^N | (B \boldsymbol{\tau})_n | \right]
= I \left[ \| B \boldsymbol{\tau} \|_{\ell^1} \right].
\end{aligned}
\end{equation}
Since $\omega \in L^{\infty}(\Omega)$ it follows that
\begin{equation}
\| BI[\boldsymbol{\tau}] \|_{\ell^1}
\leq \| \omega \|_{L^{\infty}(\Omega)} \int_{\Omega} \| B \boldsymbol{\tau}(\boldsymbol{x}) \|_{\ell^1} \, \mathrm{d} \boldsymbol{x}.
\end{equation}
Thus, by assuming that \cref{eq:cond} holds, we get $\| BI[\boldsymbol{\tau}] \|_{\ell^1} \to 0$ for fixed $d \in \mathbb{N}$ and $N \to \infty$.
Finally, substituting this into \cref{eq:connection_proof} yields the assertion.
\end{proof}
Essentially, \cref{lem:connection} states that---under the listed assumptions---it is sufficient to consider asymptotic stability of the pure RBF-CF.
Once asymptotic (in)stability is established for the pure RBF-CF, by \cref{lem:connection}, it also carries over to all corresponding augmented RBF-CFs.
Interestingly, this is in line with our findings for compactly supported RBFs reported in \cref{thm:main}.
There, conditional stability was ensured independently of the degree of the augmented polynomials.
\begin{remark}[Asymptotic Stability]\label{rem:asymptotic_stable}
We call a sequence of CFs with weights $\mathbf{w}_N \in \mathbb{R}^N$ for $N \in \mathbb{N}$ asymptotically stable if $\| \mathbf{w}_N \|_{\ell^1} \to \| I \|_{\infty}$ for $N \to \infty$.
Recall that $\| \mathbf{w}_N \|_{\ell^1} = \|C_N\|_{\infty}$ if the weights $\mathbf{w}_N$ correspond to the $N$-point CF $C_N$.
It is easy to note that this is a weaker property than every single CF being stable, i.\,e., $\| \mathbf{w}_N \|_{\ell^1} = \| I \|_{\infty}$ for all $N \in \mathbb{N}$.
That said, consulting \cref{eq:L-inequality}, asymptotic stability is sufficient for the CF to converge for all functions that can be approximated arbitrarily well by RBFs w.\,r.\,t.\ the $L^{\infty}(\Omega)$-norm.
Of course, the propagation of input errors might be suboptimal for every single CF.
\end{remark}
\cref{lem:connection} essentially makes two assumptions:
(1) $\Phi$ and $P$ have full rank for the given set of data points; and
(2) the condition \cref{eq:cond} holds.
In the two following remarks, we comment on these assumptions.
\begin{remark}[On the First Assumption of \cref{lem:connection}]\label{rem:assumption1}
Although it might seem restrictive to require $A$ and $P$ to have full rank, there are often even more restrictive constraints in practical problems.
For instance, when solving partial differential equations, the data points are usually required to be smoothly scattered in such a way that the distance between data points is kept roughly constant.
For such data points, it seems unlikely that $A$ and $P$ (for $N$ sufficiently larger than $d$) are singular.
See \cite{bayona2019insight} for more details.
\end{remark}
\begin{remark}[On the Second Assumption of \cref{lem:connection}]\label{rem:assumption2}
The second assumption for \cref{lem:connection} to hold is that \cref{eq:cond} is satisfied.
That is, the integral of $\| B \boldsymbol{\tau}(\cdot) \|_{\ell^1}: \Omega \to \mathbb{R}_0^+$ converges to zero as $N \to \infty$.
This is a weaker condition than the maximum value of $\| B \boldsymbol{\tau}(\cdot) \|_{\ell^1}$ converging to zero, which was numerically observed to hold for PHS in \cite{bayona2019insight}.
The relation between these conditions can be observed by applying H\"older's inequality (see, for instance, \cite[Chapter 3]{rudin1987real}).
Let $1 \leq p,q \leq \infty$ with $1/p + 1/q = 1$ and assume that $\omega \in L^q(\Omega)$.
Then we have
\begin{equation}
\int_{\Omega} \| B \boldsymbol{\tau}(\boldsymbol{x}) \|_{\ell^1} \omega(\boldsymbol{x}) \, \mathrm{d} \boldsymbol{x}
\leq \left( \int_{\Omega} \| B \boldsymbol{\tau}(\boldsymbol{x}) \|_{\ell^1}^p \, \mathrm{d} \boldsymbol{x} \right)^{1/p}
\left( \int_{\Omega} \omega(\boldsymbol{x})^q \, \mathrm{d} \boldsymbol{x} \right)^{1/q}.
\end{equation}
Hence, $\| B \boldsymbol{\tau}\|_{\ell^1}$ converging to zero in $L^p(\Omega)$ as $N \to \infty$ for some $p \geq 1$ immediately implies \cref{eq:cond}.
The special case of $p = \infty$ corresponds to \cref{eq:observation_Bayona}.
\end{remark}
\section{Numerical Results}
\label{sec:numerical}
We present a variety of numerical tests in one and two dimensions to demonstrate our theoretical findings.
In particular, a stability and error analysis for CFs based on different RBFs is presented.
Thereby, compactly supported RBFs are discussed in \cref{sub:num_compact}, Gaussian RBFs in \cref{sub:num_Gaussian}, and PHS in \cref{sub:num_PHS}.
For the sake of simplicity, a constant weight function $\omega \equiv 1$ is used in all test cases.
All numerical tests presented here were generated by the open-access MATLAB code \cite{glaubitz2021stabilityCode}.
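To make the quantities in this section concrete, the following MATLAB sketch (a minimal illustration of ours, independent of the implementation in \cite{glaubitz2021stabilityCode}) computes the weights of a Gaussian RBF-CF on $\Omega = [0,1]$ with $\omega \equiv 1$ by matching the moments of the augmented basis and then evaluates the stability measure $\|C_N\|_{\infty} = \|\mathbf{w}\|_{\ell^1}$.
\begin{verbatim}
% Minimal sketch: weights and stability measure of an RBF-CF on [0,1]
% with weight function omega = 1, a Gaussian kernel, and a constant
% polynomial term (d = 0). All names are illustrative.
N   = 20; ep = 5;                    % data points and shape parameter
x   = linspace(0,1,N).';             % equidistant data points
phi = @(r) exp(-(ep*r).^2);          % Gaussian RBF
A   = phi(abs(x - x.'));             % interpolation matrix
% exact moments m_n = int_0^1 phi(|x - x_n|) dx via the error function
m   = sqrt(pi)/(2*ep) * (erf(ep*(1 - x)) + erf(ep*x));
P   = ones(N,1);                     % constant polynomial block
sol = [A, P; P.', 0] \ [m; 1];       % moment-matching (saddle) system
w   = sol(1:N);                      % cubature weights
fprintf('||C_N||_inf = %.4f (optimal value: 1)\n', norm(w,1));
\end{verbatim}
Here, stability corresponds to $\|\mathbf{w}\|_{\ell^1} = \|I\|_{\infty} = 1$; varying \texttt{ep} reproduces the qualitative stability behavior discussed in the following subsections.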
\subsection{Compactly Supported RBFs}
\label{sub:num_compact}
Let us start with a demonstration of \cref{thm:main} in one dimension.
To this end, we consider Wendland's compactly supported RBFs in $\Omega = [0,1]$.
\begin{figure}[tb]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S611_demonstration_N100_equid_noPol}
\caption{$d=-1$ (pure RBF)}
\label{fig:demo_noPol}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S611_demonstration_N100_equid_d0}
\caption{$d=0$ (constant term)}
\label{fig:demo_d0}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S611_demonstration_N100_equid_d1}
\caption{$d=1$ (linear term)}
\label{fig:demo_d1}
\end{subfigure}
%
\caption{
The stability measure $\|C_N\|_{\infty}$ of Wendland's compactly supported RBF $\varphi_{1,k}$ with smoothness parameters $k=0,1,2$.
In all cases, $N=100$ equidistant data points were considered, while the reference shape parameter $\varepsilon$ was allowed to vary.
$1/h$ denotes the threshold above which the basis functions have nonoverlapping support.
}
\label{fig:demo}
\end{figure}
\cref{fig:demo} illustrates the stability measure $\|C_N\|_{\infty}$ of Wendland's compactly supported RBF $\varphi_{1,k}$ with smoothness parameters $k=0,1,2$ as well as the optimal stability measure.
The latter is given by $C_N[1]$ if no constants are included and by $\|I\|_\infty = 1$ if constants are included in the RBF approximation space, meaning that the RBF-CF is exact for constants.
Furthermore, $N=100$ equidistant data points in $\Omega = [0,1]$ were used, including the end points, $x_1 = 0$ and $x_N = 1$, and the (reference) shape parameter $\varepsilon$ was allowed to vary.
Finally, $1/h$ denotes the threshold above which the compactly supported RBFs all have nonoverlapping support.
We start by noting that the RBF-CFs are observed to be stable for sufficiently small shape parameters.
This can be explained by all the basis functions, $\varphi_n$, converging to a constant function for $\varepsilon \to 0$.
At the same time, we can also observe the RBF-CF to be stable for $\varepsilon \geq 1/h$.
It can be argued that this is in accordance with \cref{thm:main}.
Recall that \cref{thm:main} essentially states that for $\varepsilon \geq 1/h$, and assuming that all basis functions have equal moments ($I[\varphi_n] = I[\varphi_m]$ for all $n,m$), the corresponding RBF-CF (including polynomials of any degree) is stable if a sufficiently large number of equidistributed data points is used.
Here, the equal moments condition was ensured by choosing the shape parameter as $\varepsilon_n = \varepsilon$ for the interior data points ($n=2,\dots,N-1$) and as $\varepsilon_1 = \varepsilon_N = \varepsilon/2$ for the boundary data points.
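The following MATLAB sketch illustrates this equal-moment construction; the closed polynomial forms of $\varphi_{1,k}$ are taken, up to positive constant factors, from Wendland's classical table and should be regarded as an assumption of the sketch.
\begin{verbatim}
% Sketch of the equal-moment construction for Wendland's phi_{1,k}.
% Polynomial forms (up to constants) as in Wendland's table.
wend = {@(r) max(1-r,0), ...                      % k = 0
        @(r) max(1-r,0).^3 .* (3*r+1), ...        % k = 1
        @(r) max(1-r,0).^5 .* (8*r.^2+5*r+1)};    % k = 2
N   = 100; x = linspace(0,1,N).';
ep0 = 1.2*(N-1);                          % eps >= 1/h, h = 1/(N-1)
ep  = ep0*ones(N,1); ep([1 N]) = ep0/2;   % eps/2 at x_1 and x_N
phi = wend{2};                            % k = 1
mom = zeros(N,1);                         % moments I[phi_n] on [0,1]
for n = 1:N
    mom(n) = integral(@(t) phi(ep(n)*abs(t - x(n))), 0, 1);
end
fprintf('max |I[phi_n] - I[phi_1]| = %.2e\n', max(abs(mom - mom(1))));
\end{verbatim}
For $\varepsilon \geq 1/h$ all interior supports lie inside $[0,1]$, and the halved boundary shape parameters compensate for the half of the support that is cut off by the boundary, so all moments coincide up to quadrature accuracy.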
\begin{figure}[tb]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{%
plots/S612_momentCond_N100_equid_d0}
\caption{$d=0$ (constant term)}
\label{fig:momentCond_d0}
\end{subfigure}%
~
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{%
plots/S612_momentCond_N100_equid_d1}
\caption{$d=1$ (linear term)}
\label{fig:momentCond_d1}
\end{subfigure}
%
\caption{
The stability measure $\|C_N\|_{\infty}$ of Wendland's compactly supported RBF $\varphi_{1,k}$ with smoothness parameters $k=0,1,2$.
In all cases, $N=100$ equidistant data points were considered.
The same shape parameter $\varepsilon$ was used for all basis functions, so that (at least) the moments corresponding to the boundary data points $x_1 = 0$ and $x_N=1$ differ from the others.
$1/h$ denotes the threshold above which the basis functions have nonoverlapping support.
}
\label{fig:momentCond}
\end{figure}
\begin{figure}[tb]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{%
plots/S613_nonequid_N100_Halton_d0}
\caption{$d=0$ (constant term)}
\label{fig:nonequid_d0}
\end{subfigure}%
~
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{%
plots/S613_nonequid_N100_Halton_d1}
\caption{$d=1$ (linear term)}
\label{fig:nonequid_d1}
\end{subfigure}
%
\caption{
The stability measure $\|C_N\|_{\infty}$ of Wendland's compactly supported RBF $\varphi_{1,k}$ with smoothness parameters $k=0,1,2$.
In all cases, $N=100$ Halton points and a constant shape parameter $\varepsilon$ were considered.
$1/h$ denotes the threshold above which the basis functions have nonoverlapping support.
}
\label{fig:nonequid}
\end{figure}
That said, at least numerically, we observe that it is possible to drop this equal moment condition.
This is demonstrated in \cref{fig:momentCond}, where we perform the same test as in \cref{fig:demo}, except that all shape parameters are chosen to be equal ($\varepsilon_n = \varepsilon$, $n=1,\dots,N$).
This results in the two basis functions corresponding to the boundary points $x_1 = 0$ and $x_N = 1$ having smaller moments than the basis functions corresponding to interior data points for all $\varepsilon$.
Nevertheless, we can see in \cref{fig:momentCond} that for $\varepsilon \geq 1/h$ the RBF-CFs are still stable.
Moreover, the same observation is also made in \cref{fig:nonequid} for the same test using Halton points.
Once more, we find the corresponding RBF-CFs to be stable for $\varepsilon \geq 1/h$ as well as for sufficiently small shape parameter $\varepsilon$.
\begin{figure}[tb]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S614_error_N100_equid_k0_d0}
\caption{$k=0$ and $d=0$ (constant term)}
\label{fig:error_1d_k0_d0}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S614_error_N100_equid_k1_d0}
\caption{$k=1$ and $d=0$ (constant term)}
\label{fig:error_1d_k1_d0}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S614_error_N100_equid_k2_d0}
\caption{$k=2$ and $d=0$ (constant term)}
\label{fig:error_1d_k2_d0}
\end{subfigure}%
\\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S614_error_N100_equid_k0_d1}
\caption{$k=0$ and $d=1$ (linear term)}
\label{fig:error_1d_k0_d1}
\end{subfigure}
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S614_error_N100_equid_k1_d1}
\caption{$k=1$ and $d=1$ (linear term)}
\label{fig:error_1d_k1_d1}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S614_error_N100_equid_k2_d1}
\caption{$k=2$ and $d=1$ (linear term)}
\label{fig:error_1d_k2_d1}
\end{subfigure}%
%
\caption{
Error analysis for the one-dimensional test function $f(x) = c/(1 + (x-0.25)^2 )$ on $\Omega = [0,1]$, where $c$ is chosen such that $I[f] = 1$.
Illustrated are the error $| I[f] - C_N[f] |$ and the stability measure $\|C_N\|_{\infty}$ of Wendland's compactly supported RBF $\varphi_{1,k}$ with smoothness parameters $k=0,1,2$.
In all cases, $N=100$ equidistant data points were considered, while the reference shape parameter $\varepsilon$ was allowed to vary.
$1/h$ denotes the threshold above which the basis functions have nonoverlapping support.
}
\label{fig:error_1d}
\end{figure}
To also provide an error analysis, \cref{fig:error_1d} compares the stability measure $\|C_N\|_{\infty}$ with the error of the RBF-CF for the Runge-like test function $f(x) = c/(1 + (x-0.25)^2 )$ on $\Omega = [0,1]$, where $c$ is chosen such that $I[f] = 1$.
Once more, we considered Wendland's compactly supported RBF $\varphi_{1,k}$ with smoothness parameters $k=0,1,2$, $N=100$ equidistant data points, and a varying shape parameter $\varepsilon$, which is the same for all basis functions.
There are a few observations that can be made based on the results reported in \cref{fig:error_1d}.
Arguably most importantly, the smallest error seems to be obtained for a shape parameter that yields the RBF-CF to be stable ($\|C_N\|_{\infty} = \|I\|_{\infty}$).
\begin{figure}[tb]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S715_error_2d_Genz1_N400_equid_k1_d0}
\caption{Equidistant, $d=0$}
\label{fig:error_2d_Genz1_N400_equid_k1_d0}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S715_error_2d_Genz1_N400_Halton_k1_d0}
\caption{Halton, $d=0$}
\label{fig:error_2d_Genz1_N400_Halton_k1_d0}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S715_error_2d_Genz1_N400_random_k1_d0}
\caption{Random, $d=0$}
\label{fig:error_2d_Genz1_N400_random_k1_d0}
\end{subfigure}%
\\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S715_error_2d_Genz1_N400_equid_k1_d1}
\caption{Equidistant, $d=1$}
\label{fig:error_2d_Genz1_N400_equid_k1_d1}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S715_error_2d_Genz1_N400_Halton_k1_d1}
\caption{Halton, $d=1$}
\label{fig:error_2d_Genz1_N400_Halton_k1_d1}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S715_error_2d_Genz1_N400_random_k1_d1}
\caption{Random, $d=1$}
\label{fig:error_2d_Genz1_N400_random_k1_d1}
\end{subfigure}%
%
\caption{
Error analysis for Wendland's compactly supported RBF $\varphi_{2,k}$ in two dimensions with smoothness parameter $k=1$.
Considered is the first Genz test function $g_1$ on $\Omega = [0,1]^2$; see \cref{eq:Genz}.
In all cases, $N=400$ data points (equidistant, Halton, or random) were considered, while the reference shape parameter $\varepsilon$ was allowed to vary.
$1/h$ denotes the threshold above which the basis functions have nonoverlapping support.
}
\label{fig:error_2d_Genz1_k1}
\end{figure}
\begin{figure}[tb]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S715_error_2d_Genz4_N400_equid_k1_d0}
\caption{Equidistant, $d=0$}
\label{fig:error_2d_Genz4_N400_equid_k1_d0}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S715_error_2d_Genz4_N400_Halton_k1_d0}
\caption{Halton, $d=0$}
\label{fig:error_2d_Genz4_N400_Halton_k1_d0}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S715_error_2d_Genz4_N400_random_k1_d0}
\caption{Random, $d=0$}
\label{fig:error_2d_Genz4_N400_random_k1_d0}
\end{subfigure}%
\\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S715_error_2d_Genz4_N400_equid_k1_d1}
\caption{Equidistant, $d=1$}
\label{fig:error_2d_Genz4_N400_equid_k1_d1}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S715_error_2d_Genz4_N400_Halton_k1_d1}
\caption{Halton, $d=1$}
\label{fig:error_2d_Genz4_N400_Halton_k1_d1}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S715_error_2d_Genz4_N400_random_k1_d1}
\caption{Random, $d=1$}
\label{fig:error_2d_Genz4_N400_random_k1_d1}
\end{subfigure}%
%
\caption{
Error analysis for Wendland's compactly supported RBF $\varphi_{2,k}$ in two dimensions with smoothness parameter $k=1$.
Considered is the fourth Genz test function $g_4$ on $\Omega = [0,1]^2$; see \cref{eq:Genz}.
In all cases, $N=400$ data points (equidistant, Halton, or random) were considered, while the reference shape parameter $\varepsilon$ was allowed to vary.
$1/h$ denotes the threshold above which the basis functions have nonoverlapping support.
}
\label{fig:error_2d_Genz4_k1}
\end{figure}
Next, we extend our numerical stability and error analysis to two dimensions, considering the domain $\Omega = [0,1]^2$ and the following Genz test functions \cite{genz1984testing} (also see \cite{van2020adaptive}):
\begin{equation}\label{eq:Genz}
\begin{aligned}
g_1(\boldsymbol{x})
& = \cos\left( 2 \pi b_1 + \sum_{i=1}^q a_i x_i \right) \quad
&& \text{(oscillatory)}, \\
g_2(\boldsymbol{x})
& = \prod_{i=1}^q \left( a_i^{-2} + (x_i - b_i)^2 \right)^{-1} \quad
&& \text{(product peak)}, \\
g_3(\boldsymbol{x})
& = \left( 1 + \sum_{i=1}^q a_i x_i \right)^{-(q+1)} \quad
&& \text{(corner peak)}, \\
g_4(\boldsymbol{x})
& = \exp \left( - \sum_{i=1}^q a_i^2 ( x_i - b_i )^2 \right) \quad
&& \text{(Gaussian)}
\end{aligned}
\end{equation}
Here, $q$ denotes the dimension under consideration and is henceforth chosen as $q=2$.
These functions are designed to exhibit different characteristics that are challenging for numerical integration routines.
The vectors $\mathbf{a} = (a_1,\dots,a_q)^T$ and $\mathbf{b} = (b_1,\dots,b_q)^T$ respectively contain (randomly chosen) shape and translation parameters.
For each case, the experiment was repeated $100$ times.
For each experiment, the vectors $\mathbf{a}$ and $\mathbf{b}$ were drawn randomly from $[0,1]^2$.
For reasons of space, we only report the results for $g_1$ and $g_4$ as well as $k=1$.
These can be found in \cref{fig:error_2d_Genz1_k1} and \cref{fig:error_2d_Genz4_k1}, respectively.
As before, the smallest errors are found for shape parameters that correspond to the RBF-CF being stable.
The results for $g_2, g_3$ and $k=0,2$ are similar and can be found as part of the open-access MATLAB code \cite{glaubitz2021stabilityCode}.
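For reference, a possible MATLAB realization of the four Genz test functions for $q=2$ reads as follows (a sketch of ours; the cited implementations may differ in details).
\begin{verbatim}
% Genz test functions for q = 2; x is an n-by-q array of points,
% one evaluation point per row. a and b are drawn from [0,1]^2.
q = 2; a = rand(q,1); b = rand(q,1);
g1 = @(x) cos(2*pi*b(1) + x*a);                    % oscillatory
g2 = @(x) prod(1./(a.'.^(-2) + (x - b.').^2), 2);  % product peak
g3 = @(x) (1 + x*a).^(-(q+1));                     % corner peak
g4 = @(x) exp(-((x - b.').^2) * (a.^2));           % Gaussian
x = rand(5,q);                          % five test points
vals = [g1(x), g2(x), g3(x), g4(x)]     % 5-by-4 array of values
\end{verbatim}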
\begin{table}[tb]
\renewcommand{\arraystretch}{1.4}
\centering
\begin{tabular}{c c c c c c c c c}
\toprule
& & \multicolumn{3}{c}{$g_1$} & & \multicolumn{3}{c}{$g_4$} \\ \hline
& & $e_{\text{min}}$ & $\varepsilon$ & $\|C_N\|_{\infty}$
& & $e_{\text{min}}$ & $\varepsilon$ & $\|C_N\|_{\infty}$ \\ \hline
& & \multicolumn{7}{c}{Equidistant Points} \\ \hline
$d=0$ & & 1.4e-06 & 1.7e+00 & 1.0e+00
& & 5.6e-06 & 1.7e+00 & 1.0e+00 \\
$d=1$ & & 1.7e-06 & 1.7e+00 & 1.0e+00
& & 6.2e-06 & 1.7e+00 & 1.0e+00 \\ \hline
& & \multicolumn{7}{c}{Halton Points} \\ \hline
$d=0$ & & 5.0e-05 & 5.5e-01 & 1.0e+00
& & 2.0e-05 & 5.5e-01 & 1.0e+00 \\
$d=1$ & & 1.1e-05 & 5.5e-01 & 1.0e+00
& & 1.4e-05 & 5.5e-01 & 1.0e+00 \\ \hline
& & \multicolumn{7}{c}{Random Points} \\ \hline
$d=0$ & & 4.1e-04 & 7.7e-01 & 1.0e+00
& & 1.6e-04 & 7.7e-01 & 1.0e+00 \\
$d=1$ & & 2.3e-04 & 2.9e-01 & 1.0e+00
& & 1.8e-04 & 4.0e-01 & 1.0e+00 \\
\bottomrule
\end{tabular}
\caption{Minimal errors, $e_{\text{min}}$, for the first and fourth Genz test function, $g_1$ and $g_4$, together with the corresponding shape parameter, $\varepsilon$, and stability measure, $\|C_N\|_{\infty}$.
In all cases, Wendland's compactly supported RBF with smoothness parameter $k=1$ was used.}
\label{tab:min_error_Wendland}
\end{table}
It might be hard to identify the smallest errors as well as the corresponding shape parameter and stability measure from \cref{fig:error_2d_Genz1_k1} and \cref{fig:error_2d_Genz4_k1}.
Hence, these are listed separately in \cref{tab:min_error_Wendland}.
\subsection{Gaussian RBF}
\label{sub:num_Gaussian}
Here, we perform a similar investigation of stability and accuracy as in \cref{sub:num_compact} for the Gaussian RBF, given by $\varphi(r) = \exp( - \varepsilon^2 r^2 )$.
\begin{figure}[tb]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S72_G_Genz14_N400_equid_noPol_noise0}
\caption{Equidistant, $d=-1$}
\label{fig:G_Genz14_N400_equid_noPol}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S72_G_Genz14_N400_Halton_noPol_noise0}
\caption{Halton, $d=-1$}
\label{fig:G_Genz14_N400_Halton_noPol}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S72_G_Genz14_N400_random_noPol_noise0}
\caption{Random, $d=-1$}
\label{fig:G_Genz14_N400_random_noPol}
\end{subfigure}%
\\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S72_G_Genz14_N400_equid_d0_noise0}
\caption{Equidistant, $d=0$}
\label{fig:G_Genz14_N400_equid_d0}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S72_G_Genz14_N400_Halton_d0_noise0}
\caption{Halton, $d=0$}
\label{fig:G_Genz14_N400_Halton_d0}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S72_G_Genz14_N400_random_d0_noise0}
\caption{Random, $d=0$}
\label{fig:G_Genz14_N400_random_d0}
\end{subfigure}%
\\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S72_G_Genz14_N400_equid_d1_noise0}
\caption{Equidistant, $d=1$}
\label{fig:G_Genz14_N400_equid_d1}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S72_G_Genz14_N400_Halton_d1_noise0}
\caption{Halton, $d=1$}
\label{fig:G_Genz14_N400_Halton_d1}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S72_G_Genz14_N400_random_d1_noise0}
\caption{Random, $d=1$}
\label{fig:G_Genz14_N400_random_d1}
\end{subfigure}%
%
\caption{
Error analysis for the Gaussian RBF $\varphi(r) = \exp( - \varepsilon^2 r^2 ) $ in two dimensions for the first and fourth Genz test function $g_1, g_4$ on $\Omega = [0,1]^2$; see \cref{eq:Genz}.
In all cases, $N=400$ data points (equidistant, Halton, or random) were considered, while the reference shape parameter $\varepsilon$ was allowed to vary.
}
\label{fig:error_Gauss_2d_Genz14}
\end{figure}
In particular, \cref{fig:error_Gauss_2d_Genz14} reports on the stability measure $\|C_N\|_\infty$ for the Gaussian RBF-CF and the corresponding errors for the first and fourth Genz test function on $\Omega = [0,1]^2$ for $N=400$ data points.
These are given as equidistant, Halton and random points, respectively.
Furthermore, the shape parameter was allowed to vary from $10^{-4}$ to $10^3$ and the RBF-CF was computed by augmenting the RBF basis with no polynomial term ($d=-1$), a constant ($d=0$), or a linear term ($d=1$).
Also for the Gaussian RBFs, we observe the RBF-CFs to be stable for a sufficiently large shape parameter.
It might be argued that this is because the Gaussian RBF can be considered as being ``close" to a compactly supported RBF for large shape parameter.\footnote{Of course, strictly speaking, the Gaussian RBF does not have compact support. Yet, for large $\varepsilon^2 r^2$ its function value will lie below machine precision, making it compactly supported in a numerical sense.}
At the same time, however, the Gaussian RBF-CFs are observed to become unstable for decreasing shape parameter $\varepsilon$.
Furthermore, we observe the smallest error to occur in a region of instability in this case.
Roughly speaking, this shape parameter---providing a minimal error---usually lies slightly below the smallest shape parameter that yields a stable RBF-CF.
This might be explained by this shape parameter balancing out the two terms in \cref{eq:L-inequality}.
On the one hand, the RBF space $\mathcal{S}_{N,d,\varepsilon}$ should provide a best approximation that is as close as possible to the underlying function, $f$.
This is reflected in the term $\norm{ f - s }_{L^{\infty}(\Omega)}$ on the right hand side of \cref{eq:L-inequality}.
On the other hand, the stability measure of the corresponding RBF-CF should be as small as possible.
This is reflected in the term $\| I \|_{\infty} + \| C_N \|_{\infty}$, by which $\norm{ f - s }_{L^{\infty}(\Omega)}$ is multiplied in \cref{eq:L-inequality}.
While for Gaussian RBFs the best approximation becomes more accurate for a decreasing shape parameter, the stability measure benefits from increasing shape parameters.
In this case, the balance between these two objectives---and therefore the smallest error---is found outside of the region of stability.
\begin{figure}[tb]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S72_G_Genz14_N400_equid_noPol_noise4}
\caption{Equidistant, $d=-1$}
\label{fig:G_Genz14_N400_equid_noPol_noise4}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S72_G_Genz14_N400_Halton_noPol_noise4}
\caption{Halton, $d=-1$}
\label{fig:G_Genz14_N400_Halton_noPol_noise4}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S72_G_Genz14_N400_random_noPol_noise4}
\caption{Random, $d=-1$}
\label{fig:G_Genz14_N400_random_noPol_noise4}
\end{subfigure}%
\\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S72_G_Genz14_N400_equid_d0_noise4}
\caption{Equidistant, $d=0$}
\label{fig:G_Genz14_N400_equid_d0_noise4}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S72_G_Genz14_N400_Halton_d0_noise4}
\caption{Halton, $d=0$}
\label{fig:G_Genz14_N400_Halton_d0_noise4}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S72_G_Genz14_N400_random_d0_noise4}
\caption{Random, $d=0$}
\label{fig:G_Genz14_N400_random_d0_noise4}
\end{subfigure}%
\\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S72_G_Genz14_N400_equid_d1_noise4}
\caption{Equidistant, $d=1$}
\label{fig:G_Genz14_N400_equid_d1_noise4}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S72_G_Genz14_N400_Halton_d1_noise4}
\caption{Halton, $d=1$}
\label{fig:G_Genz14_N400_Halton_d1_noise4}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S72_G_Genz14_N400_random_d1_noise4}
\caption{Random, $d=1$}
\label{fig:G_Genz14_N400_random_d1_noise4}
\end{subfigure}%
%
\caption{
Error analysis for the Gaussian RBF $\varphi(r) = \exp( - \varepsilon^2 r^2 ) $ in two dimensions for the first and fourth Genz test function $g_1, g_4$ on $\Omega = [0,1]^2$; see \cref{eq:Genz}.
Uniform white noise $\mathbf{n} \in \mathbb{R}^N$ with $\| \mathbf{n} \|_\infty \leq 10^{-4}$ was added to the function values.
In all cases, $N=400$ data points (equidistant, Halton, or random) were considered, while the reference shape parameter $\varepsilon$ was allowed to vary.
}
\label{fig:error_Gauss_2d_Genz14_noise4}
\end{figure}
That said, the situation changes if the data (function values) used in the RBF-CFs are perturbed by noise, which is often the case in applications.
Such a situation is reported in \cref{fig:error_Gauss_2d_Genz14_noise4}.
Here, uniform white noise $\mathbf{n} \in \mathbb{R}^N$ with $\| \mathbf{n} \|_\infty \leq 10^{-4}$ was added to the function values of the first and fourth Genz test function.
As a result, the term including the stability measure $\| C_N \|_{\infty}$ in \cref{eq:L-inequality} gains in importance.
In accordance with this, the minimal errors in \cref{fig:error_Gauss_2d_Genz14_noise4} are now attained for larger shape parameters, which correspond to the RBF-CF having a smaller stability measure $\| C_N \|_{\infty}$ than before.
Also see \cref{tab:min_error_Gauss_g1} and \cref{tab:min_error_Gauss_g4} below.
In particular, this demonstrates the increased importance of stability of CFs when they are used in real-world applications, where the presence of noise often cannot be avoided.
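For reproducibility, the noise model and the induced perturbation of the CF can be sketched as follows (the weights are placeholders; this is illustrative only).
\begin{verbatim}
% Uniform white noise with ||n||_inf <= 1e-4 and the induced
% perturbation of the CF, bounded by ||C_N||_inf * ||n||_inf.
N = 400; w = ones(N,1)/N;            % placeholder cubature weights
n = 1e-4*(2*rand(N,1) - 1);          % uniform noise in [-1e-4,1e-4]
fprintf('|C_N[n]| = %.2e <= %.2e\n', abs(w.'*n), norm(w,1)*norm(n,Inf));
\end{verbatim}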
\begin{table}[tb]
\renewcommand{\arraystretch}{1.4}
\centering
\begin{tabular}{c c c c c c c c c}
\toprule
& & \multicolumn{3}{c}{$g_1$ without noise} & & \multicolumn{3}{c}{$g_1$ with noise} \\ \hline
& & $e_{\text{min}}$ & $\varepsilon$ & $\|C_N\|_{\infty}$
& & $e_{\text{min}}$ & $\varepsilon$ & $\|C_N\|_{\infty}$ \\ \hline
& & \multicolumn{7}{c}{Equidistant Points} \\ \hline
$d=0$ & & 6.1e-10 & 2.4e+00 & 1.1e+02
& & 6.6e-06 & 7.5e+00 & 1.0e+00 \\
$d=1$ & & 5.4e-10 & 3.3e+00 & 2.6e+02
& & 6.7e-06 & 7.5e+00 & 1.0e+00 \\ \hline
& & \multicolumn{7}{c}{Halton Points} \\ \hline
$d=0$ & & 2.4e-09 & 2.8e+00 & 8.1e+01
& & 4.6e-05 & 8.9e+00 & 3.9e+00 \\
$d=1$ & & 4.1e-10 & 2.8e+00 & 1.4e+02
& & 1.3e-05 & 1.0e+01 & 2.2e+00 \\ \hline
& & \multicolumn{7}{c}{Random Points} \\ \hline
$d=0$ & & 1.5e-09 & 2.0e+00 & 6.4e+01
& & 1.9e-04 & 2.0e+00 & 6.4e+01 \\
$d=1$ & & 7.8e-10 & 2.0e+00 & 1.0e+02
& & 9.1e-05 & 1.2e+01 & 1.0e+01 \\
\bottomrule
\end{tabular}
\caption{Minimal errors, $e_{\text{min}}$, for the first Genz test function, $g_1$, with and without noise together with the corresponding shape parameter, $\varepsilon$, and stability measure, $\|C_N\|_{\infty}$.
In all cases, the Gaussian RBF was used.}
\label{tab:min_error_Gauss_g1}
\end{table}
\begin{table}[tb]
\renewcommand{\arraystretch}{1.4}
\centering
\begin{tabular}{c c c c c c c c c}
\toprule
& & \multicolumn{3}{c}{$g_4$ without noise} & & \multicolumn{3}{c}{$g_4$ with noise} \\ \hline
& & $e_{\text{min}}$ & $\varepsilon$ & $\|C_N\|_{\infty}$
& & $e_{\text{min}}$ & $\varepsilon$ & $\|C_N\|_{\infty}$ \\ \hline
& & \multicolumn{7}{c}{Equidistant Points} \\ \hline
$d=0$ & & 7.8e-10 & 2.4e+00 & 1.1e+02
& & 1.0e-05 & 6.4e+00 & 3.1e+00 \\
$d=1$ & & 4.6e-10 & 3.3e+00 & 2.6e+02
& & 1.0e-05 & 6.4e+00 & 3.1e+00 \\ \hline
& & \multicolumn{7}{c}{Halton Points} \\ \hline
$d=0$ & & 1.0e-09 & 2.8e+00 & 8.1e+01
& & 2.8e-05 & 8.9e+00 & 3.9e+00 \\
$d=1$ & & 1.0e-09 & 2.8e+00 & 1.4e+02
& & 2.0e-05 & 8.9e+00 & 3.9e+00 \\ \hline
& & \multicolumn{7}{c}{Random Points} \\ \hline
$d=0$ & & 4.8e-10 & 2.0e+00 & 6.4e+01
& & 1.3e-04 & 1.7e+01 & 3.6e+00 \\
$d=1$ & & 9.7e-10 & 9.1e-01 & 1.5e+02
& & 6.6e-05 & 1.4e+01 & 5.7e+00 \\
\bottomrule
\end{tabular}
\caption{Minimal errors, $e_{\text{min}}$, for the fourth Genz test function, $g_4$, with and without noise together with the corresponding shape parameter, $\varepsilon$, and stability measure, $\|C_N\|_{\infty}$.
In all cases, the Gaussian RBF was used.}
\label{tab:min_error_Gauss_g4}
\end{table}
It might be hard to identify the smallest errors as well as the corresponding shape parameter and stability measure from \cref{fig:error_Gauss_2d_Genz14} and \cref{fig:error_Gauss_2d_Genz14_noise4}.
Hence, these are listed separately in \cref{tab:min_error_Gauss_g1} and \cref{tab:min_error_Gauss_g4} for the first and fourth Genz test function with and without noise, respectively.
\subsection{Polyharmonic Splines}
\label{sub:num_PHS}
\begin{figure}[tb]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S73_TPS_Genz14_equid_d1}
\caption{TPS, equidistant}
\label{fig:S73_TPS_Genz14_equid_d1}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S73_TPS_Genz14_Halton_d1}
\caption{TPS, Halton}
\label{fig:S73_TPS_Genz14_Halton_d1}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S73_TPS_Genz14_random_d1}
\caption{TPS, random}
\label{fig:S73_TPS_Genz14_random_d1}
\end{subfigure}%
\\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S73_cubic_Genz14_equid_d1}
\caption{cubic, equidistant}
\label{fig:S73_cubic_Genz14_equid_d1}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S73_cubic_Genz14_Halton_d1}
\caption{cubic, Halton}
\label{fig:S73_cubic_Genz14_Halton_d1}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S73_cubic_Genz14_random_d1}
\caption{cubic, random}
\label{fig:S73_cubic_Genz14_random_d1}
\end{subfigure}%
\\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S73_quintic_Genz14_equid_d1}
\caption{quintic, equidistant}
\label{fig:S73_quintic_Genz14_equid_d1}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S73_quintic_Genz14_Halton_d1}
\caption{quintic, Halton}
\label{fig:S73_quintic_Genz14_Halton_d1}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/S73_quintic_Genz14_random_d1}
\caption{quintic, random}
\label{fig:S73_quintic_Genz14_random_d1}
\end{subfigure}%
%
\caption{
Error analysis for the TPS ($\varphi(r) = r^2 \log r$), cubic ($\varphi(r) = r^3$) and quintic ($\varphi(r) = r^5$) in two dimensions.
The first and fourth Genz test functions $g_1, g_4$ were considered on $\Omega = [0,1]^2$; see \cref{eq:Genz}.
In all cases, linear terms were incorporated, i.\,e., $d=1$.
}
\label{fig:error_PHS_Genz14}
\end{figure}
We end this section by providing a similar investigation for PHS.
Again, the first and fourth Genz test functions on $\Omega = [0,1]^2$ are considered.
However, for PHS no shape parameter is involved and we therefore consider their stability and accuracy for an increasing number of equidistant, Halton and random data points.
The results for the TPS ($\varphi(r) = r^2 \log r$), cubic ($\varphi(r) = r^3$) and quintic ($\varphi(r) = r^5$) PHS RBFs can be found in \cref{fig:error_PHS_Genz14}.
In all cases, the corresponding PHS basis was augmented with a linear term ($d=1$).
We can observe from \cref{fig:error_PHS_Genz14} that all RBF-CFs converge (with the rate of convergence depending on the order of the PHS) while also remaining stable or at least being asymptotically stable.
It would be of interest to provide a theoretical investigation of the (asymptotic) stability of PHS-CFs and of the conditions under which it can be ensured.
This might be addressed in future works.
\section{Concluding Thoughts}
\label{sec:summary}
In this work, we investigated stability of RBF-CFs.
We started by showing that stability of RBF-CFs can be connected to the famous Lebesgue constant of the underlying RBF interpolant.
While this indicates that RBF-CFs might benefit from low Lebesgue constants, it was also demonstrated that RBF-CFs often have superior stability properties compared to RBF interpolation.
Furthermore, stability was proven for RBF-CFs based on compactly supported RBFs under the assumption of a sufficiently large number of (equidistributed) data points and the shape parameter(s) lying above a certain threshold.
Finally, we showed that under certain conditions asymptotic stability of RBF-CFs is independent of polynomial terms that are usually included in RBF approximations.
The above findings were accompanied by a series of numerical tests.
While we believe this work to be a valuable step towards a more mature stability theory of RBF-CFs, the present work also demonstrates that further steps in this direction would be highly welcome.
\section{Moments}
\label{sec:app_moments}
In what follows, we provide the moments for different RBFs.
The one-dimensional case is discussed in \cref{sub:app_moments_1d}, while two-dimensional moments are derived in \cref{sub:app_moments_2d}.
\subsection{One-Dimensional Moments}
\label{sub:app_moments_1d}
Let us consider the one-dimensional case of $\Omega = [a,b]$ and distinct data points $x_1,\dots,x_N \in [a,b]$.
\subsubsection{Gaussian RBF}
For $\varphi(r) = \exp( - \varepsilon^2 r^2 )$, the moment of the translated Gaussian RBF,
\begin{equation}\label{eq:moment_1d_G}
m_n = m(\varepsilon,x_n,a,b) = \int_a^b \exp( - \varepsilon^2 | x - x_n |^2 ) \, \mathrm{d} x,
\end{equation}
is given by
\begin{equation}
m_n = \frac{\sqrt{\pi}}{2 \varepsilon} \left[ \mathrm{erf}( \varepsilon(b-x_n) ) - \mathrm{erf}( \varepsilon(a-x_n) ) \right].
\end{equation}
Here, $\mathrm{erf}(x) = 2/\sqrt{\pi} \int_0^x \exp( -t^2 ) \, \mathrm{d} t$ denotes the usual \emph{error function}, \cite[Section 7.2]{dlmf2021digital}.
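This closed form can be checked against adaptive quadrature; a MATLAB sketch with illustrative parameters:
\begin{verbatim}
% Closed-form Gaussian moment on [a,b] versus adaptive quadrature.
ep = 3; a = 0; b = 1; xn = 0.25;
m_exact = sqrt(pi)/(2*ep)*(erf(ep*(b-xn)) - erf(ep*(a-xn)));
m_quad  = integral(@(x) exp(-ep^2*(x-xn).^2), a, b);
fprintf('closed form %.12f vs quadrature %.12f\n', m_exact, m_quad);
\end{verbatim}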
\subsubsection{Polyharmonic Splines}
For $\varphi(r) = r^k$ with odd $k \in \mathbb{N}$, the moment of the translated PHS,
\begin{equation}
m_n = m(x_n,a,b) = \int_a^b \varphi( x - x_n ) \, \mathrm{d} x,
\end{equation}
is given by
\begin{equation}
m_n = \frac{1}{k+1} \left[ (a-x_n)^{k+1} + (b-x_n)^{k+1} \right],
\quad n=1,2,\dots,N.
\end{equation}
For $\varphi(r) = r^k \log r$ with even $k \in \mathbb{N}$, on the other hand, we have
\begin{equation}
m_n
= (x_n - a)^{k+1} \left[ \frac{\log( x_n - a )}{k+1} - \frac{1}{(k+1)^2} \right]
+ (b - x_n)^{k+1} \left[ \frac{\log( b - x_n )}{k+1} - \frac{1}{(k+1)^2} \right].
\end{equation}
Note that for $x_n = a$ the first term is zero, while for $x_n = b$ the second term is zero.
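Both closed forms can be verified numerically; in the following sketch the integration interval is split at $x_n$ to avoid evaluating the (integrable) logarithmic singularity at that point.
\begin{verbatim}
% Closed-form PHS moments on [a,b] versus quadrature (illustrative).
a = 0; b = 1; xn = 0.3;
k = 3;                                    % odd k: phi(r) = r^k
m_odd = ((a-xn)^(k+1) + (b-xn)^(k+1))/(k+1);
q_odd = integral(@(x) abs(x-xn).^k, a, b);
k = 2;                                    % even k: phi(r) = r^k log r
m_tps = (xn-a)^(k+1)*(log(xn-a)/(k+1) - 1/(k+1)^2) ...
      + (b-xn)^(k+1)*(log(b-xn)/(k+1) - 1/(k+1)^2);
f     = @(x) abs(x-xn).^k .* log(abs(x-xn));
q_tps = integral(f, a, xn) + integral(f, xn, b);  % split at x_n
fprintf('r^3:       %.10f vs %.10f\n', m_odd, q_odd);
fprintf('r^2 log r: %.10f vs %.10f\n', m_tps, q_tps);
\end{verbatim}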
\subsection{Two-Dimensional Moments}
\label{sub:app_moments_2d}
Here, we consider the two-dimensional case, where the domain is given by a rectangle of the form $\Omega = [a,b] \times [c,d]$.
\subsubsection{Gaussian RBF}
For $\varphi(r) = \exp( - \varepsilon^2 r^2 )$, the two-dimensional moments can be written as products of one-dimensional moments.
In fact, we have
\begin{equation}
\int_a^b \int_c^d \exp( - \varepsilon^2 \|(x-x_n,y-y_n)\|_2^2 ) \, \mathrm{d} y \, \mathrm{d} x
= m(\varepsilon,x_n,a,b) \cdot m(\varepsilon,y_n,c,d).
\end{equation}
Here, the multiplicands on the right-hand side are the one-dimensional moments from \cref{eq:moment_1d_G}.
\subsubsection{Polyharmonic Splines and Other RBFs}
If it is not possible to trace the two-dimensional moments back to one-dimensional ones, another approach is needed.
This is, for instance, the case for PHS.
We start by noting that for a data point $(x_n,y_n) \in [a,b] \times [c,d]$ the corresponding moment can be rewritten as follows:
\begin{equation}
m(x_n,y_n)
= \int_{a}^b \int_{c}^d \varphi( \| (x-x_n,y-y_n)^T \|_2 ) \, \mathrm{d} y \, \mathrm{d} x
= \int_{\tilde{a}}^{\tilde{b}} \int_{\tilde{c}}^{\tilde{d}} \varphi( \| (x,y)^T \|_2 ) \, \mathrm{d} y \, \mathrm{d} x
\end{equation}
with translated boundaries $\tilde{a} = a - x_n$, $\tilde{b} = b - x_n$, $\tilde{c} = c - y_n$, and $\tilde{d} = d - y_n$.
For most popular RBFs, we are not aware of an explicit formula for such integrals that is readily available in the literature.
That said, such formulas were derived in \cite{reeger2016numericalA,reeger2016numericalB,reeger2018numerical} (also see \cite[Chapter 2.3]{watts2016radial}) for the integral of $\varphi$ over a \emph{right triangle} with vertices $(0,0)^T$, $(\alpha,0)^T$, and $(\alpha,\beta)^T$.
Assuming $\tilde{a} < 0 < \tilde{b}$ and $\tilde{c} < 0 < \tilde{d}$, we therefore partition the shifted domain ${\tilde{\Omega} = [\tilde{a},\tilde{b}] \times [\tilde{c},\tilde{d}]}$ into eight right triangles.
Denoting the corresponding integrals by $I_1, \dots, I_8$, the moment $m(x_n,y_n)$ corresponds to the sum of these integrals.
The procedure is illustrated in \cref{fig:moments}.
\begin{figure}[tb]
\centering
\begin{tikzpicture}[domain = -6.5:6.5, scale=0.8, line width=1.25pt]
\draw[->,>=stealth] (-4.5,0) -- (7,0) node[below] {$x$};
\draw[->,>=stealth] (0,-2.75) -- (0,4.5) node[right] {$y$};
\draw (-4,0.1) -- (-4,-0.1) node [below] {$\tilde{a}$ \ \ \ \ };
\draw (6,0.1) -- (6,-0.1) node [below] {\ \ \ \ $\tilde{b}$};
\draw (-0.1,-2) -- (0.1,-2) node [below] {\ \ $\tilde{c}$};
\draw (-0.1,3) -- (0.1,3) node [above] {\ \ $\tilde{d}$};
\draw[blue] (-4,-2) rectangle (6,3);
\draw[red,dashed] (0,0) -- (6,3) {};
\draw[red,dashed] (0,0) -- (-4,3) {};
\draw[red,dashed] (0,0) -- (-4,-2) {};
\draw[red,dashed] (0,0) -- (6,-2) {};
\draw[red] (4.5,1) node {\Large $I_1$};
\draw[red] (1,2) node {\Large $I_2$};
\draw[red] (-1,2) node {\Large $I_3$};
\draw[red] (-3,1) node {\Large $I_4$};
\draw[red] (-3,-0.5) node {\Large $I_5$};
\draw[red] (-0.5,-1.15) node {\Large $I_6$};
\draw[red] (1,-1.25) node {\Large $I_7$};
\draw[red] (4.5,-0.75) node {\Large $I_8$};
\end{tikzpicture}
%
\caption{Illustration of how the moments can be computed on a rectangle in two dimensions}
\label{fig:moments}
\end{figure}
The special cases where one (or two) of the edges of the rectangle align with one of the axes can be treated similarly.
However, in this case, a smaller subset of the triangles is considered.
We leave the details to the reader, and note the following formula for the moments:
\begin{equation}
\begin{aligned}
m(x_n,y_n)
& = \left[ 1 - \delta_0\left(\tilde{b} \tilde{d}\right) \right] \left( I_1 + I_2 \right)
+ \left[ 1 - \delta_0\left(\tilde{a} \tilde{d}\right) \right] \left( I_3 + I_4 \right) \\
& + \left[ 1 - \delta_0\left(\tilde{a} \tilde{c}\right) \right] \left( I_5 + I_6 \right)
+ \left[ 1 - \delta_0\left(\tilde{b} \tilde{c}\right) \right] \left( I_7 + I_8 \right)
\end{aligned}
\end{equation}
Here, $\delta_0$ denotes the usual Kronecker delta defined as $\delta_0(x) = 1$ if $x = 0$ and $\delta_0(x) = 0$ if $x \neq 0$.
The above formula holds for general $\tilde{a}$, $\tilde{b}$, $\tilde{c}$, and $\tilde{d}$.
Note that all the right triangles can be rotated or mirrored in a way that yields a corresponding integral of the form
\begin{equation}\label{eq:refInt}
I_{\text{ref}}(\alpha,\beta)
= \int_0^{\alpha} \int_0^{\frac{\beta}{\alpha}x} \varphi( \| (x,y)^T \|_2 ) \, \mathrm{d} y \, \mathrm{d} x.
\end{equation}
More precisely, we have
\begin{equation}
\begin{alignedat}{4}
& I_1 = I_{\text{ref}}(\tilde{b},\tilde{d}), \quad
&& I_2 = I_{\text{ref}}(\tilde{d},\tilde{b}), \quad
&& I_3 = I_{\text{ref}}(\tilde{d},-\tilde{a}), \quad
&& I_4 = I_{\text{ref}}(-\tilde{a},\tilde{d}), \\
& I_5 = I_{\text{ref}}(-\tilde{a},-\tilde{c}), \quad
&& I_6 = I_{\text{ref}}(-\tilde{c},-\tilde{a}), \quad
&& I_7 = I_{\text{ref}}(-\tilde{c},\tilde{b}), \quad
&& I_8 = I_{\text{ref}}(\tilde{b},-\tilde{c}).
\end{alignedat}
\end{equation}
Finally, explicit formulas of the reference integral $I_{\text{ref}}(\alpha,\beta)$ over the right triangle with vertices $(0,0)^T$, $(\alpha,0)^T$, and $(\alpha,\beta)^T$ for some PHS can be found in \cref{tab:moments}.
Similar formulas are also available, for instance, for Gaussian, multiquadric and inverse multiquadric RBFs.
\begin{table}[t]
\centering
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{c|c}
$\varphi(r)$ & $I_{\text{ref}}(\alpha,\beta)$ \\
\midrule
$r^2 \log r$ & $\frac{\alpha}{144} \left[ 24\alpha^3 \arctan\left( \beta/\alpha \right) + 6 \beta (3\alpha^2 + \beta^2) \log( \alpha^2 + \beta^2 ) - 33\alpha^2\beta - 7\beta^3 \right]$ \\
$r^3$ & $\frac{\alpha}{40} \left[ 3 \alpha^4 \arcsinh\left( \beta/\alpha \right) + \beta (5\alpha^2 + 2 \beta^2) \sqrt{ \alpha^2 + \beta^2} \right]$ \\
$r^5$ & $\frac{\alpha}{336} \left[ 15 \alpha^6 \arcsinh\left( \beta/\alpha \right) + \beta (33\alpha^4 + 26\alpha^2\beta^2 + 8 \beta^4) \sqrt{ \alpha^2 + \beta^2} \right]$ \\
$r^7$ & $\frac{\alpha}{3346} \left[ 105 \alpha^8 \arcsinh\left( \beta/\alpha \right) + \beta (279\alpha^6 + 326\alpha^4\beta^2 + 200\alpha^2\beta^4 + 48 \beta^6) \sqrt{ \alpha^2 + \beta^2} \right]$ \\
\end{tabular}
\caption{The reference integral $I_{\text{ref}}(\alpha,\beta)$---see \cref{eq:refInt}---for some PHS}
\label{tab:moments}
\end{table}
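As an illustration, the following MATLAB sketch assembles the cubic-PHS moment of an interior data point on $[0,1]^2$ from the reference integrals of \cref{tab:moments} (all Kronecker deltas vanish in this case) and compares it with two-dimensional quadrature.
\begin{verbatim}
% Cubic PHS, phi(r) = r^3: moment m(xn,yn) on [a,b]x[c,d] assembled
% from the eight reference-triangle integrals, versus 2D quadrature.
Iref = @(al,be) al/40.*(3*al.^4.*asinh(be./al) ...
              + be.*(5*al.^2 + 2*be.^2).*sqrt(al.^2 + be.^2));
a = 0; b = 1; c = 0; d = 1; xn = 0.3; yn = 0.6;  % interior point
ta = a-xn; tb = b-xn; tc = c-yn; td = d-yn;      % shifted boundaries
I  = [Iref( tb, td), Iref( td, tb), Iref( td,-ta), Iref(-ta, td), ...
      Iref(-ta,-tc), Iref(-tc,-ta), Iref(-tc, tb), Iref( tb,-tc)];
m_tri  = sum(I);
m_quad = integral2(@(x,y) ((x-xn).^2 + (y-yn).^2).^(3/2), a, b, c, d);
fprintf('triangles %.10f vs quadrature %.10f\n', m_tri, m_quad);
\end{verbatim}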
We note that the approach presented above is similar to the one in \cite{sommariva2006numerical}, where the domain $\Omega = [-1,1]^2$ was considered.
Later, the same authors extended their findings to simple polygons \cite{sommariva2006meshless} using the Gauss--Green theorem.
Also see the recent work \cite{sommariva2021rbf}, addressing polygonal regions that may be nonconvex or even multiply connected, and references therein.
It would be of interest to see if these approaches also carry over to computing products of RBFs corresponding to different centers or products of RBFs and their partial derivatives, again corresponding to different centers.
Such integrals occur as elements of mass and stiffness matrices in numerical PDEs.
In particular, they are desired to construct linearly energy stable (global) RBF methods for hyperbolic conservation laws \cite{glaubitz2020shock,glaubitz2021stabilizing,glaubitz2021towards}.
\section*{Acknowledgements}
\section{Introduction}
Heat transfer Stefan problems, such as melting, freezing, and diffusion processes, constitute a vast area with wide engineering and industrial applications. Stefan problems describe heat processes in phase transitions, which are characterized by thermal diffusion, and they have been studied widely in \cite{1}-\cite{9}. An extensive bibliography related to this study is given in \cite{10}.
The classical direct Stefan problem with a free boundary is the phase-change problem in which the temperature field in the liquid region (in a melting problem) or solid region (in a solidification problem) and the interface melting temperature at the free boundary $x=\beta(t)$ have to be determined; if, instead, the dynamics of the heat flux has to be determined, an inverse Stefan problem is considered. Such problems for materials with spherical, cylindrical and variable cross-section domains arising in electrical contact phenomena are successfully discussed in \cite{11}-\cite{18}. Mathematical modeling of non-classical Stefan problems should take into account the temperature dependence of the thermal conductivity, because it is essential for a correct description of the boiling and melting dynamics. Nonlinear Stefan problems with Dirichlet, Neumann and Robin conditions on the fixed and moving faces are considered and successfully solved in \cite{19}-\cite{24}. Bollati, Briozzo and Natale discussed an inverse non-classical Stefan problem in which unknown thermal coefficients have to be determined \cite{25}, and Briozzo and Natale with Tarzia considered an inverse non-classical Stefan problem for Storm's-type materials through a phase-change process \cite{26}. Huntul and Lesnic also discussed an inverse problem of determining the time-dependent thermal conductivity and the transient temperature satisfying the heat equation with boundary data \cite{27}.
\begin{figure}[t]
\includegraphics[width=7.5 cm]{Figure_1.png}
\caption{Mathematical model of a material with variable cross-section: $D_1$ -- metallic vapour region, $D_2$ -- melting region}\label{Fig1}
\end{figure}
Mathematical models of the heat transfer process in a material with a variable cross-section region can be represented by generalized heat equations. This kind of model is very useful for describing the temperature dynamics in the metal bridge in electrical contact phenomena, in order to prevent contact explosion. The mathematical model of the initial stage of closure of electrical contacts involves the metallic vapour and liquid domains, see Figure \ref{Fig1}. Modeling the temperature field in domain $D_1$ is a difficult problem; thus we suggest that the heat is distributed in parabolic form, and the mathematical model for the metallic vapour zone can be represented as
\begin{equation}\label{1}
\theta_1(z,t)=Az^2+Bz+C,\;\;\;0<z<\alpha(t),\;\;t>0,
\end{equation}
and the temperature in this region decreases from the temperature $\theta_{im}$, which is required for ionization of the metallic vapour,
\begin{equation}\label{2}
\theta_1(0,t)=\theta_{im},\;\;\;t>0
\end{equation}
\begin{equation}\label{4}
\theta_1(\alpha(t),t)=\theta_b,\;\;\;t>0,
\end{equation}
and the balance of heat flux on $z=\alpha(t)$ is
\begin{equation}\label{5}
-\lambda\dfrac{\partial\theta_1}{\partial z}\bigg|_{z=\alpha(t)}=\dfrac{P_0}{2\sqrt{\pi t}}-l_b\gamma_b\dfrac{d\alpha}{dt},\;\;\;t>0,
\end{equation}
where $\theta_1(z,t)$ is the temperature in the metallic vapour zone and $\theta_b$ is the boiling temperature; $P_0$ is a given positive constant, $l_b$ is the latent heat of boiling and $\gamma_b>0$ is the density of the material at boiling. The location of the boiling interface $\alpha(t)$ can be represented as
\begin{equation}\label{6}
\alpha(t)=2\alpha_0\sqrt{t}.
\end{equation}
From conditions \eqref{2},\eqref{4} we can determine
\begin{equation}\label{7}
C=\theta_{im},\;\;\;B=0,\;\;\;A=\dfrac{1}{4\alpha_0^2t}(\theta_b-\theta_{im}).
\end{equation}
Then, by using \eqref{7}, the temperature field in the metallic vapour zone \eqref{1} can be rewritten as
\begin{equation}\label{8}
\theta_1(z,t)=\dfrac{z^2}{4\alpha_0^2t}(\theta_b-\theta_{im})+\theta_{im}.
\end{equation}
With the help of \eqref{8} we can easily see that the solution of equation \eqref{5} is \eqref{6}, where $\alpha_0$ can be determined from the equation
\begin{equation}\label{9}
\alpha_0^2+D\alpha_0+E=0
\end{equation}
where
$$D=\dfrac{P_0}{2l_b\gamma_b\sqrt{\pi}},\;\;\;E=\dfrac{\lambda(\theta_b-\theta_{im})}{l_b\gamma_b}.$$
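Since \eqref{9} is an ordinary quadratic equation, $\alpha_0$ is obtained directly. The following MATLAB sketch uses purely hypothetical material values and assumes, as is physically natural here, that the positive real root is the relevant one (note that $\theta_b<\theta_{im}$ makes $E<0$, so such a root exists).
\begin{verbatim}
% Boiling interface alpha(t) = 2*alpha0*sqrt(t) from equation (9).
% All material values below are hypothetical, for illustration only.
P0 = 1e7; lb = 6e6; gb = 6e3; lam = 300;  % flux const., latent heat,
tb = 2800; tim = 5000;                    % density, conductivity, temps
D  = P0/(2*lb*gb*sqrt(pi));
E  = lam*(tb - tim)/(lb*gb);              % E < 0 since tb < tim
r  = roots([1 D E]);                      % alpha0^2 + D*alpha0 + E = 0
alpha0 = max(r(r > 0));                   % physically relevant root
alpha  = @(t) 2*alpha0*sqrt(t);
fprintf('alpha0 = %.4e\n', alpha0);
\end{verbatim}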
A mathematical model of the temperature field in the domain $D_2$ can be represented as
\begin{equation}\label{10}
c(\theta_2)\rho(\theta_2)\dfrac{\partial\theta_2}{\partial t}=\dfrac{1}{z^{\nu}}\dfrac{\partial}{\partial z}\bigg[\lambda(\theta_2)z^{\nu}\dfrac{\partial\theta_2}{\partial z}\bigg],\;\;\alpha(t)<z<\beta(t),\;\;0<\nu<1,\;\;t>0,
\end{equation}
\begin{equation}\label{11}
-\lambda(\theta_2(\alpha(t),t))\dfrac{\partial\theta_2}{\partial z}\bigg|_{z=\alpha(t)}=\dfrac{P_0e^{-\alpha_0^2}}{2\sqrt{\pi t}},\;\;\;t>0,
\end{equation}
\begin{equation}\label{12}
\theta_2(\beta(t),t)=\theta_m,\;\;\;t>0,
\end{equation}
\begin{equation}\label{13}
-\lambda(\theta_2(\beta(t),t))\dfrac{\partial\theta_2}{\partial z}\bigg|_{z=\beta(t)}=l_m\gamma_m\dfrac{d\beta}{dt},\;\;\; t>0,
\end{equation}
\begin{equation}\label{14}
\beta(0)=0
\end{equation}
where $c(\theta_2)$, $\rho(\theta_2)$ and $\lambda(\theta_2)$ are the temperature-dependent specific heat, density and thermal conductivity of the material, $\theta_2(z,t)$ is the temperature in the liquid phase, $P_0$ is a given positive constant, $\theta_m$ is the melting temperature, $l_m$ is the latent heat of melting, $\gamma_m$ is the density of the material at melting, $\alpha(t)$ is the known free boundary that can be determined from \eqref{5} and \eqref{9}, and $\beta(t)$ is the location of the melting interface, which has to be found.
We will consider one more problem, replacing the heat flux condition \eqref{11} with a convective boundary condition on the known free boundary $z=\alpha(t)$:
\begin{equation}\label{11a}
\lambda(\theta_2(\alpha(t),t))\dfrac{\partial\theta_2}{\partial z}\bigg|_{z=\alpha(t)}=\dfrac{q}{2\sqrt{\pi t}}(\theta_2(\alpha(t),t)-\theta^*),\;\;\;t>0
\end{equation}
where $q=P_0e^{-\alpha_0^2}$ is the coefficient characterizing the heat transfer at the free boundary $z=\alpha(t)$ determined from \eqref{9}, and $\theta^*$ is the reference bulk temperature arising near the free boundary $z=\alpha(t)$, with $\theta^*>\theta_2(\alpha(t),t)$.
The purpose of this paper is to provide a similarity solution of the one-phase Stefan problem for the generalized heat equation when a heat flux enters the liquid region from the known free boundary $z=\alpha(t)$, where the boiling process starts, and to determine the location of the melting interface $z=\beta(t)$. In Section 2, similarity solutions of the two problems are introduced, where in the second problem condition \eqref{11} is replaced with \eqref{11a}; this method enables us to reduce the problem \eqref{10}-\eqref{14} to a boundary value problem for a nonlinear ordinary differential equation. In Section 3, the existence and uniqueness of the similarity solutions of the two problems \eqref{10}-\eqref{14} with two free boundaries is proved by using the Banach fixed point theorem. In the last section, we provide the solutions for particular cases of thermal coefficients and discuss their existence and uniqueness.
\section{Similarity solution of the problem}
\subsection{Heat flux condition}
Using the dimensionless transformation
\begin{equation}\label{15}
T(z,t)=\dfrac{\theta(z,t)-\theta_m}{\theta_m}
\end{equation}
the problem \eqref{10},\eqref{11},\eqref{12},\eqref{13} and \eqref{14} can be rewritten as
\begin{equation}\label{16}
\Bar{N}(T_2)\dfrac{\partial T_2}{\partial t}=\dfrac{a}{z^{\nu}}\dfrac{\partial}{\partial z}\bigg[\Bar{L}(T_2)z^{\nu}\dfrac{\partial T_2}{\partial z}\bigg],\;\;\alpha(t)<z<\beta(t),\;\;0<\nu<1,\;\;t>0,
\end{equation}
\begin{equation}\label{17}
\Bar{L}(T_2(\alpha(t),t))\dfrac{\partial T_2}{\partial z}\bigg|_{z=\alpha(t)}=-\dfrac{P_0e^{-\alpha_0^2}}{2\lambda_0\theta_m\sqrt{\pi t}},\;\;\;t>0,
\end{equation}
\begin{equation}\label{18}
T_2(\beta(t),t)=0,\;\;\;t>0,
\end{equation}
\begin{equation}\label{19}
\Bar{L}(T_2(\beta(t),t))\dfrac{\partial T_2}{\partial z}\bigg|_{z=\beta(t)}=-\dfrac{l_m\gamma_m}{\lambda_0\theta_m}\dfrac{d\beta}{dt},\;\;t>0,
\end{equation}
\begin{equation}\label{20}
\beta(0)=0
\end{equation}
where
\begin{equation}\label{NL}
\Bar{N}(T_2)=\dfrac{c(\theta_m T_2+\theta_m)\rho(\theta_m T_2+\theta_m)}{c_0\rho_0},\;\;\;\Bar{L}(T_2)=\dfrac{\lambda(\theta_m T_2+\theta_m)}{\lambda_0}
\end{equation}
and $c_0,\;\rho_0,\;\lambda_0$, $a=\lambda_0/(c_0\rho_0)$ are heat capacity, density, thermal conductivity and thermal diffusivity of the material.
To solve the problem \eqref{16},\eqref{17},\eqref{18},\eqref{19} and \eqref{20} we use the similarity-type substitution
\begin{equation}\label{21}
T_2(z,t)=u_2(\eta),\;\;\;\eta=\dfrac{z}{2\sqrt{t}},
\end{equation}
and from \eqref{16},\eqref{17}, \eqref{18} and \eqref{21} the free boundaries can be represented as
\begin{equation}\label{22}
\alpha(t)=2\alpha_0\sqrt{t},\;\;\;\beta(t)=2\xi\sqrt{t},
\end{equation}
where $\alpha_0$ is a known constant determined from \eqref{9} and $\xi$ has to be determined.
Then we obtain the following problem
\begin{equation}\label{23}
[L^*(u_2)\eta^{\nu}u_2']'+\dfrac{2}{a}\eta^{\nu+1}N^*(u_2)u_2'=0,\;\;\alpha_0<\eta<\xi,\;\;\;0<\nu<1,
\end{equation}
\begin{equation}\label{24}
L^*(u_2(\alpha_0))u_2'(\alpha_0)=-q^*,
\end{equation}
\begin{equation}\label{25}
u_2(\xi)=0,
\end{equation}
\begin{equation}\label{26}
u_2'(\xi)=-M\xi
\end{equation}
where $q^*=\dfrac{P_0e^{-\alpha_0^2}}{\alpha_0\theta_m\sqrt{\pi}}$, $M=\dfrac{2l_m\gamma_m}{\lambda_0\theta_m\lambda(\theta_m)}$ and
\begin{equation}\label{NL2}
L^*(u_2)=\dfrac{\lambda(\theta_m u_2+\theta_m)}{\lambda_0},\;\;\; N^*(u_2)=\dfrac{c(\theta_m u_2+\theta_m)\rho(\theta_m u_2+\theta_m)}{c_0\rho_0}.
\end{equation}
We can deduce that $(u_2, \xi)$ is a solution of the problem \eqref{23},\eqref{24},\eqref{25} and \eqref{26} if and only if it satisfies the integral equation
\begin{equation}\label{27}
u_2(\eta)=q^*[\Phi(\xi,u_2(\xi))-\Phi(\eta,u_2(\eta))]
\end{equation}
where
\begin{equation}\label{28}
\Phi(\eta,u_2(\eta))=\alpha_0^{\nu}\int\limits_{\alpha_0}^{\eta}\dfrac{E(s,u_2(s))}{v^{\nu}L^*(u_2(v))}dv
\end{equation}
\begin{equation}\label{29}
E(\eta,u_2(\eta))=\exp\Bigg(-\dfrac{2}{a}\int\limits_{\alpha_0}^{\eta}s\dfrac{N^*(u_2(s))}{L^*(u_2(s))}ds\Bigg)
\end{equation}
and condition
\begin{equation}\label{30}
\dfrac{q^*\alpha_0^{\nu}E(\xi,u_2(\xi))}{M\lambda(\theta_m)}=\xi^{\nu+1}
\end{equation}
From expression \eqref{30} we can determine $\xi$ for the free boundary $\beta(t)$.
The solution of the free boundary problem \eqref{10}-\eqref{14} is given by \eqref{15} and
$$\theta_2(z,t)=\theta_m+\theta_m u_2(\eta)$$
where $\eta=z/(2\sqrt{t})$ and function $u_2(\eta)$ must satisfy the integral equation \eqref{27} and condition \eqref{30}.
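Numerically, the pair $(u_2,\xi)$ can be approximated by iterating between the integral equation \eqref{27} and the update of $\xi$ from \eqref{30}. The following MATLAB sketch does this in the simplest possible setting, with constant dimensionless coefficients $L^*\equiv N^*\equiv 1$ and illustrative values for $a$, $\nu$, $\alpha_0$, $q^*$ and $K:=M\lambda(\theta_m)$; it is a sketch of ours, not a method taken from the literature.
\begin{verbatim}
% Fixed-point sketch for (27) and (30) with L* = N* = 1; all
% parameter values below are illustrative.
a = 1; nu = 0.5; alpha0 = 0.2; qstar = 1.5; K = 2.0;
xi = alpha0 + 0.5;                          % initial guess
for it = 1:100
    eta  = linspace(alpha0, xi, 400).';
    Eval = exp(-(eta.^2 - alpha0^2)/a);                % (29)
    Phi  = alpha0^nu * cumtrapz(eta, Eval./eta.^nu);   % (28)
    u    = qstar*(Phi(end) - Phi);                     % (27)
    xi_new = (qstar*alpha0^nu*Eval(end)/K)^(1/(nu+1)); % (30)
    if abs(xi_new - xi) < 1e-12, break, end
    xi = xi_new;
end
fprintf('xi = %.8f after %d iterations\n', xi, it);
\end{verbatim}
For genuinely temperature-dependent $L^*$ and $N^*$, the inner integrals in \eqref{28} and \eqref{29} must be recomputed with the current iterate $u$ in each step.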
\subsection{Convective boundary condition}
If we use the dimensionless substitution
\begin{equation}\label{e1}
T(z,t)=\dfrac{\theta(z,t)-\theta^*}{\theta_m-\theta^*}>0,
\end{equation}
then the problem \eqref{10}-\eqref{14}, with the heat flux condition \eqref{11} replaced by \eqref{11a}, becomes
\begin{equation}\label{e2}
\Bar{N}(T_2)\dfrac{\partial T_2}{\partial t}=\dfrac{a}{z^{\nu}}\dfrac{\partial}{\partial z}\bigg[\Bar{L}(T_2)z^{\nu}\dfrac{\partial T_2}{\partial z}\bigg],\;\;\alpha(t)<z<\beta(t),\;\;0<\nu<1,\;\;t>0,
\end{equation}
\begin{equation}\label{e3}
\Bar{L}(T_2(\alpha(t),t))\dfrac{\partial T_2}{\partial z}\bigg|_{z=\alpha(t)}=\dfrac{q}{2\lambda_0\sqrt{\pi t}}T_2(\alpha(t),t),\;\;\;t>0,
\end{equation}
\begin{equation}\label{e4}
T_2(\beta(t),t)=1,\;\;\;t>0,
\end{equation}
\begin{equation}\label{e5}
\Bar{L}(T_2(\beta(t),t))\dfrac{\partial T_2}{\partial z}\bigg|_{z=\beta(t)}=\dfrac{\beta'(t)}{a\text{Ste}},\;\;t>0,
\end{equation}
\begin{equation}\label{e6}
\beta(0)=0
\end{equation}
where $q=P_0e^{-\alpha_0^2}$, $\text{Ste}=\frac{(\theta_m-\theta^*)c_0}{l_m}>0$ and $\Bar{N},\;\bar{L}$ are defined from \eqref{NL}.
Then using similarity transformation \eqref{22} problem \eqref{e2},\eqref{e3},\eqref{e4},\eqref{e5},\eqref{e6} can be rewritten as
\begin{equation}\label{e6ode}
[L^*(u_2)\eta^{\nu}u_2']'+\dfrac{2}{a}\eta^{\nu+1}N^*(u_2)u_2'=0,\;\;\alpha_0<\eta<\xi,\;\;\;0<\nu<1,
\end{equation}
\begin{equation}\label{e7}
L^*(u_2(\alpha_0))u_2'(\alpha_0)=p^*u_2(\alpha(t)),
\end{equation}
\begin{equation}\label{e8}
u_2(\xi)=1,
\end{equation}
\begin{equation}\label{e9}
L^*(u_2(\xi))u_2'(\xi)=\dfrac{2\xi}{a\text{Ste}}
\end{equation}
where $p^*=q/(\lambda_0\sqrt{\pi})$ and $L^*,\;N^*$ are determined from \eqref{NL2}.
We conclude that the solution of the problem \eqref{e6ode},\eqref{e7},\eqref{e8} and \eqref{e9} is
\begin{equation}\label{e10}
u_2(\eta)=\dfrac{1+\alpha_0^{\nu}p^*\Phi(\eta, u_2(\eta))}{1+\alpha_0^{\nu}p^*\Phi(\xi, u_2(\xi))}
\end{equation}
with condition
\begin{equation}\label{e11}
\dfrac{a\alpha_0^{\nu}E(\xi,u_2(\xi))\text{Ste}}{2\big[1+\alpha_0^{\nu}p^*\Phi(\xi,u_2(\xi))\big]}=\xi^{\nu+1}
\end{equation}
where $\Phi$ and $E$ are defined by \eqref{28} and \eqref{29}.
With the help of \eqref{e1} and \eqref{e10}, we conclude that the solution of the problem \eqref{10},\eqref{11a},\eqref{12},\eqref{13},\eqref{14} can be represented in the form
\begin{equation}\label{e12}
\theta_2(\eta)=\theta^*+(\theta_m-\theta^*)u_2(\eta)
\end{equation}
where $\eta=z/(2\sqrt{t})$ and $u_2(\eta)$ satisfies the integral equation \eqref{e10} and condition \eqref{e11}.
\section{Existence and uniqueness of the similarity solution}
\subsection{Problem with heat flux condition}
To prove the existence of a solution of the form \eqref{27} we assume that $\xi>0$ is a given constant. We consider the space $C^0[\alpha_0, \xi]$ of continuous real-valued functions endowed with the supremum norm
$$||u||=\max_{\eta\in[\alpha_0,\xi]}|u(\eta)|.$$
In the Banach space $(C^0[\alpha_0,\xi],||\cdot||)$ we define the operator $W: C^0[\alpha_0,\xi]\to C^0[\alpha_0,\xi]$ by
\begin{equation}\label{31}
W(u_2)(\eta):=u_2(\eta),\;\;\forall \eta\in[\alpha_0,\xi],
\end{equation}
where $u_2$ is defined by \eqref{27}. By the Banach fixed point theorem, it suffices to prove that the operator \eqref{31} is a contraction mapping, which implies that there exists a unique solution $u\in C^0[\alpha_0,\xi]$ of the integral equation \eqref{27}.
First, we suppose that $L^*$ and $N^*$ are bounded and satisfy Lipschitz conditions:
\begin{enumerate}
\item[a)] There exists $L_m=\dfrac{\lambda_m}{\lambda_0}>0$ and $L_M=\dfrac{\lambda_M}{\lambda_0}>0$ such that
\begin{equation}\label{32}
L_m\leq L^*(u)\leq L_M,\;\;\;\forall u\in C^0(\mathbb{R}_0^+)\cup L^{\infty}(\mathbb{R}_0^+).
\end{equation}
and $\bar{L}=\dfrac{\bar{\lambda}(\theta_m+1)}{\lambda_0}>0$ such that
\begin{equation}\label{33}
||L^*(u_1)-L^*(u_2)||\leq \bar{L}||u_1-u_2||,\;\;\;\forall u_1,u_2\in C^0(\mathbb{R}_0^+)\cup L^{\infty}(\mathbb{R}_0^+).
\end{equation}
\item[b)] There exists $N_m=\dfrac{\sigma_m}{c_0\gamma_0}>0$ and $N_M=\dfrac{\sigma_M}{c_0\gamma_0}>0$ such that
\begin{equation}\label{34}
N_m\leq N^*(u)\leq N_M,\;\;\;\forall u\in C^0(\mathbb{R}_0^+)\cup L^{\infty}(\mathbb{R}_0^+).
\end{equation}
and $\bar{N}=\dfrac{\bar{\sigma}(\theta_m+1)}{c_0\gamma_0}>0$ such that
\begin{equation}\label{35}
||N^*(u_1)-N^*(u_2)||\leq \bar{N}||u_1-u_2||,\;\;\;\forall u_1,u_2\in C^0(\mathbb{R}_0^+)\cup L^{\infty}(\mathbb{R}_0^+).
\end{equation}
\end{enumerate}
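To make the construction concrete, we sketch below a direct fixed-point iteration for the operator $W$. The representation \eqref{27} is here transcribed as $u_2(\eta)=q^*\big[\Phi(\xi,u_2)-\Phi(\eta,u_2)\big]$ (cf. the constant-coefficient formula \eqref{e23}), the functions $E$ and $\Phi$ are evaluated from their integral definitions by the trapezoidal rule, and all numerical values of $a$, $\nu$, $\alpha_0$, $\xi$, $q^*$ as well as the coefficient functions $L^*$, $N^*$ are purely illustrative assumptions, not the values of the physical problem.
\begin{verbatim}
# Minimal sketch of the fixed-point iteration for W (illustrative only).
# Assumed transcription of (27): u(eta) = q* [Phi(xi,u) - Phi(eta,u)], with
# E(eta,u) = exp(-(2/a) int_{alpha0}^{eta} s N*(u)/L*(u) ds) and
# Phi(eta,u) = alpha0^nu int_{alpha0}^{eta} E(s,u)/(s^nu L*(u)) ds.
import numpy as np

a, nu, alpha0, xi, qstar = 1.0, 0.5, 0.5, 1.0, 0.3   # assumed values
Lstar = lambda u: 1.0 + 0.2*u                        # assumed coefficient
Nstar = lambda u: 1.0 + 0.1*u                        # assumed coefficient
eta = np.linspace(alpha0, xi, 400)

def cumtrap(f):
    # cumulative trapezoidal integral of samples f over the grid eta
    return np.concatenate(([0.0],
        np.cumsum(0.5*(f[1:] + f[:-1])*np.diff(eta))))

def E_of(u):
    return np.exp(-(2.0/a)*cumtrap(eta*Nstar(u)/Lstar(u)))

def Phi_of(u):
    return alpha0**nu * cumtrap(E_of(u)/(eta**nu * Lstar(u)))

u = np.zeros_like(eta)                               # initial guess
for _ in range(100):
    Phi = Phi_of(u)
    u_new = qstar*(Phi[-1] - Phi)                    # W(u)
    if np.max(np.abs(u_new - u)) < 1e-12:
        break
    u = u_new
print("u(alpha0) =", u[0], " u(xi) =", u[-1])  # u(xi)=0 by construction
\end{verbatim}
Under these assumptions the iteration converges geometrically whenever the contraction condition of Theorem \ref{th1} below is met.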
We now establish some preliminary results needed to prove the existence and uniqueness of the solution to equation \eqref{27}.
\begin{lemma}\label{lem1}
For all $\eta\in[\alpha_0,\;\xi]$ the following inequality holds
\begin{equation}\label{36}
\exp\bigg(-\dfrac{N_M}{aL_m}(\eta^2-\alpha_0^2)\bigg)\leq E(\eta, u)\leq \exp\bigg(-\dfrac{N_m}{aL_M}(\eta^2-\alpha_0^2)\bigg).
\end{equation}
\end{lemma}
\begin{proof}
$E(\eta,u)\leq\exp\Bigg(-\dfrac{2N_m}{aL_M}\int\limits_{\alpha_0}^{\eta}sds\Bigg)=\exp\bigg(-\dfrac{N_m}{aL_M}(\eta^2-\alpha_0^2)\bigg)$. The lower bound follows analogously by using $N^*(u)\leq N_M$ and $L^*(u)\geq L_m$.
\end{proof}
\begin{lemma}\label{lem2}
For all $\eta\in[\alpha_0,\xi]$ the following inequality holds
$$\dfrac{1}{2L_{M}}\exp\bigg(\tfrac{N_{M}}{aL_{m}}\alpha_0^2\bigg)\bigg(\dfrac{aL_{m}}{N_{M}}\bigg)^{\frac{1-\nu}{2}}\bigg[\gamma\bigg(\dfrac{1-\nu}{2},\eta^2\dfrac{N_{M}}{aL_{m}}\bigg)-\gamma\bigg(\dfrac{1-\nu}{2}, \frac{N_{M}}{aL_{m}}\alpha_0^2\bigg)\bigg]\leq\Phi(\eta, u)$$
$$\leq\dfrac{1}{2L_{m}}\exp\bigg(\frac{N_{m}}{aL_M}\alpha_0^2\bigg)\bigg(\dfrac{aL_{M}}{N_{m}}\bigg)^{\frac{1-\nu}{2}}\bigg[\gamma\bigg(\dfrac{1-\nu}{2},\eta^2\dfrac{N_{m}}{aL_{M}}\bigg)-\gamma\bigg(\dfrac{1-\nu}{2},\frac{N_{m}}{aL_{M}}\alpha_0^2\bigg)\bigg].$$
\end{lemma}
\begin{proof}
We have $\Phi(\eta,u)\leq \dfrac{1}{L_{m}}\exp\bigg(\frac{N_{m}}{aL_{M}}\alpha_0^2\bigg)\int\limits_{\alpha_0}^{\eta}\dfrac{\exp(-N_{m}s^2/(aL_{M}))}{s^{\nu}}ds$; after using the substitution $t=s\sqrt{\frac{N_{m}}{aL_{M}}}$ we obtain
$$\Phi(\eta,u)\leq\dfrac{1}{L_{m}}\exp\bigg(\frac{N_{m}}{aL_{M}}\alpha_0^2\bigg)\bigg(\dfrac{aL_{M}}{N_{m}}\bigg)^{\frac{1-\nu}{2}}\int\limits_{\alpha_0\sqrt{N_{m}/(aL_{M})}}^{\eta\sqrt{N_{m}/(aL_{M})}}\dfrac{e^{-t^2}}{t^\nu}dt.$$
Then using substitution $z=t^{1-\nu}$ we get
$$\Phi(\eta,u)\leq \dfrac{1}{L_{m}(1-\nu)}\exp\bigg(\frac{N_{m}}{aL_{M}}\alpha_0^2\bigg)\bigg(\dfrac{aL_{M}}{N_{m}}\bigg)^{\frac{1-\nu}{2}}\int\limits_{(\alpha_0\sqrt{N_m/(aL_{M})})^{1-\nu}}^{(\eta\sqrt{N_{m}/(aL_{M})})^{1-\nu}}e^{-z^{\frac{2}{1-\nu}}}dz$$
and taking $y=z^{\frac{2}{1-\nu}}$ then inequality becomes
$$\Phi(\eta,u)\leq\dfrac{1}{L_{m}(1-\nu)}\exp\bigg(\frac{N_{m}}{aL_{M}}\alpha_0^2\bigg)\bigg(\dfrac{aL_{M}}{N_{m}}\bigg)^{\frac{1-\nu}{2}}\dfrac{1-\nu}{2}\int\limits_{\alpha_0^2N_m/(aL_{M})}^{\eta^2N_{m}/(aL_{M})}y^{\frac{1-\nu}{2}-1}e^{-y}dy.$$
Then, by using the definition of the lower incomplete gamma function $\gamma(s,x)=\int\limits_0^x t^{s-1}e^{-t}dt$, we have proved that
$$\Phi(\eta, u)\leq\dfrac{1}{2L_{m}}\exp\bigg(\frac{N_{m}}{aL_{M}}\alpha_0^2\bigg)\bigg(\dfrac{aL_{M}}{N_{m}}\bigg)^{\frac{1-\nu}{2}}\bigg[\gamma\bigg(\dfrac{1-\nu}{2},\eta^2\dfrac{N_{m}}{aL_{M}}\bigg)-\gamma\bigg(\dfrac{1-\nu}{2}, \frac{N_{m}}{aL_{M}}\alpha_0^2\bigg)\bigg].$$
The lower bound is obtained in the same manner.
\end{proof}
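The bound of Lemma \ref{lem2} can be checked numerically in the constant-coefficient limit $L^*=N^*=1$; the sketch below (with illustrative values of $a$, $\nu$, $\alpha_0$, $\eta$) compares the incomplete-gamma expression with a direct quadrature, using the fact that SciPy's \texttt{gammainc} is the regularized function, so that $\gamma(s,x)=\Gamma(s)\,\texttt{gammainc}(s,x)$.
\begin{verbatim}
# Check of Lemma 2 in the limit L* = N* = 1 (illustrative parameters):
# the incomplete-gamma expression must equal the direct quadrature of
# int_{alpha0}^{eta} exp(-(s^2-alpha0^2)/a) s^(-nu) ds.
import numpy as np
from scipy.special import gamma, gammainc

def lower_gamma(s, x):
    # gamma(s,x) = Gamma(s)*P(s,x); SciPy's gammainc is the regularized P
    return gamma(s)*gammainc(s, x)

a, nu, alpha0, eta = 1.0, 0.5, 0.5, 1.2
s = (1.0 - nu)/2.0
closed = 0.5*np.exp(alpha0**2/a)*a**((1.0 - nu)/2.0) \
         * (lower_gamma(s, eta**2/a) - lower_gamma(s, alpha0**2/a))
ss = np.linspace(alpha0, eta, 20001)
quad = np.trapz(np.exp(-(ss**2 - alpha0**2)/a)/ss**nu, ss)
print(closed, quad)   # the two values should agree
\end{verbatim}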
\begin{lemma}\label{lem3}
Let $\alpha_0, \xi \in\mathbb{R}^{+}$ be given and let assumptions \eqref{32}-\eqref{35} hold for the specific heat and the dimensionless thermal conductivity. Then for all $u, u^*\in C^0[\alpha_0, \xi]$ we have
$$|E(\eta,u)-E(\eta,u^*)|\leq\dfrac{1}{aL_{m}}\bigg(\tilde{N}+\dfrac{N_{M}\tilde{L}}{L_{m}}\bigg)(\eta^2-\alpha_0^2)||u^*-u||.$$
\end{lemma}
\begin{proof}
By using the inequality $|\exp(-x)-\exp(-y)|\leq |x-y|,\;\;\forall x,y\geq 0$, we get
$$|E(\eta,u)-E(\eta,u^*)|= \Bigg|\exp\Bigg(-\dfrac{2}{a}\int\limits_{\alpha_0}^{\eta}s\dfrac{N(u(s))}{L(u(s))}ds\Bigg)-\exp\Bigg(-\dfrac{2}{a}\int\limits_{\alpha_0}^{\eta}s\dfrac{N(u^*(s))}{L(u^*(s))}ds\Bigg)\Bigg|$$
$$\leq \dfrac{2}{a}\Bigg|\int\limits_{\alpha_0}^{\eta}s\dfrac{N(u)}{L(u)}ds-\int\limits_{\alpha_0}^{\eta}s\dfrac{N(u^*)}{L(u^*)}ds\Bigg|\leq \dfrac{2}{a}\int\limits_{\alpha_0}^{\eta}\Bigg|\dfrac{N(u)}{L(u)}-\dfrac{N(u^*)}{L(u^*)}\Bigg|sds$$
$$\leq\dfrac{2}{a}\int\limits_{\alpha_0}^{\eta}\Bigg|\dfrac{N(u)}{L(u)}-\dfrac{N(u^*)}{L(u)}+\dfrac{N(u^*)}{L(u)}-\dfrac{N(u^*)}{L(u^*)}\Bigg|sds$$
$$\leq\dfrac{2}{a}\int\limits_{\alpha_0}^{\eta}\Bigg(\dfrac{|N(u)-N(u^*)|}{|L(u)|}+\dfrac{|L(u^*)-L(u)|\cdot|N(u^*)|}{|L(u)||L(u^*)|}\Bigg)sds$$
$$\leq \dfrac{2}{aL_{m}}\bigg(\tilde{N}+\dfrac{N_{M}\tilde{L}}{L_{m}}\bigg)||u^*-u||\int\limits_{\alpha_0}^{\eta}sds=\dfrac{1}{aL_{m}}\bigg(\tilde{N}+\dfrac{N_{M}\tilde{L}}{L_{m}}\bigg)(\eta^2-\alpha_0^2)||u^*-u||.$$
\end{proof}
\begin{lemma}\label{lem4}
If $\alpha_0, \xi\in\mathbb{R}^{+}$ are given and \eqref{32}-\eqref{35} hold, then for all $u, u^*\in C^0[\alpha_0, \xi]$ we have
$$|\Phi(\eta,u)-\Phi(\eta,u^*)|\leq\Tilde{\Phi}(\alpha_0,\xi)||u^*-u||,$$
where
\begin{equation}\label{37}
\Tilde{\Phi}(\alpha_0,\eta)=\dfrac{\alpha_0^{\nu}}{L_m^2}\bigg(\dfrac{1}{a}\bigg(\tilde{N}+\dfrac{N_{M}\tilde{L}}{L_{m}}\bigg)\bigg[\dfrac{\eta^{3-\nu}}{3-\nu}-\alpha_0^2\dfrac{\eta^{1-\nu}}{1-\nu}+\dfrac{2\alpha_0^{3-\nu}}{(3-\nu)(1-\nu)}\bigg]+\tilde{L}\dfrac{\eta^{1-\nu}-\alpha_0^{1-\nu}}{1-\nu}\bigg).
\end{equation}
\end{lemma}
\begin{proof}
By using lemmas \ref{lem2} and \ref{lem3} we obtain
$$|\Phi(\eta,u)-\Phi(\eta,u^*)|\leq T_1(\eta)+T_2(\eta)$$
where
$$T_1(\eta)\equiv\alpha_0^{\nu}\int\limits_{\alpha_0}^{\eta}\dfrac{|E(s,u)-E(s, u^*)|}{s^{\nu}L(u(s))}ds\leq \dfrac{\alpha_0^{\nu}}{aL_{m}^2}\bigg(\tilde{N}+\dfrac{N_{M}\tilde{L}}{L_{m}}\bigg)||u^*-u||\int\limits_{\alpha_0}^{\eta}(s^2-\alpha_0^2)s^{-\nu}ds$$
$$\leq \dfrac{\alpha_0^{\nu}}{aL_{m}^2}\bigg(\tilde{N}+\dfrac{N_{M}\tilde{L}}{L_{m}}\bigg)\bigg[\dfrac{\eta^{3-\nu}}{3-\nu}-\alpha_0^2\dfrac{\eta^{1-\nu}}{1-\nu}+\dfrac{2\alpha_0^{3-\nu}}{(3-\nu)(1-\nu)}\bigg]||u^*-u||$$
and
$$T_2(\eta)\equiv \alpha_0^{\nu}\int\limits_{\alpha_0}^{\eta}\bigg|\dfrac{1}{L(u)}-\dfrac{1}{L(u^*)}\bigg|\dfrac{1}{s^{\nu}}\exp\Bigg(-\dfrac{2}{a}\int\limits_{\alpha_0}^{s}t\dfrac{N(u^*)}{L(u^*)}dt\Bigg)ds$$
$$\leq \alpha_0^{\nu}\int\limits_{\alpha_0}^{\eta}\dfrac{|L(u^*)-L(u)|}{|L(u)||L(u^*)|}\dfrac{ds}{s^{\nu}}\leq \dfrac{\tilde{L}\alpha_0^{\nu}}{L_{m}^2}||u^*-u||\int\limits_{\alpha_0}^{\eta}\dfrac{ds}{s^{\nu}}\leq \dfrac{\tilde{L}(\eta^{1-\nu}-\alpha_0^{1-\nu})\alpha_0^{\nu}}{L_{m}^2(1-\nu)}||u^*-u||. $$
Finally we get
$$T_1(\eta)+T_2(\eta)\leq \dfrac{\alpha_0^{\nu}}{L_{m}^2}||u^*-u||\bigg(\dfrac{1}{a}\bigg(\tilde{N}+\dfrac{N_M\tilde{L}}{L_{m}}\bigg)\bigg[\dfrac{\eta^{3-\nu}}{3-\nu}-\alpha_0^2\dfrac{\eta^{1-\nu}}{1-\nu}+\dfrac{2\alpha_0^{3-\nu}}{(3-\nu)(1-\nu)}\bigg]+\tilde{L}\dfrac{\eta^{1-\nu}-\alpha_0^{1-\nu}}{1-\nu}\bigg).$$
\end{proof}
\begin{theorem}\label{th1}
Suppose that $L^*$ and $N^*$ satisfy the conditions \eqref{32}-\eqref{35}. If $\alpha_0<\xi<\xi^*$, where $\xi^*>0$ is defined as the unique solution to $\epsilon(\alpha_0, z)=1$ with
\begin{equation}\label{38}
\epsilon(\alpha_0,z):=2q^*\tilde{\Phi}(\alpha_0,z),
\end{equation}
where $\tilde{\Phi}(\alpha_0,\eta)$ is given by \eqref{37}, then there exists a unique solution $u_2\in C^0[\alpha_0,\xi]$ for the integral equation \eqref{27}.
\end{theorem}
\begin{proof}
We have to show that the operator $W$ defined by \eqref{31} is a contraction. Let $u_2, u_2^*\in C^0[\alpha_0, \xi]$; then, by using Lemmas \ref{lem1}-\ref{lem4}, we have
\begin{equation}
\begin{split}
&|W(u_2(\eta))-W(u_2^*(\eta))|\leq q^*|\Phi(\xi, u_2(\xi))-\Phi(\eta, u_2(\eta))-\Phi(\xi, u_2^*(\xi))+\Phi(\eta,u_2^*(\eta))|\\
&\leq q^*(|\Phi(\xi,u_2(\xi))-\Phi(\xi,u_2^*(\xi))|+|\Phi(\eta,u_2(\eta))-\Phi(\eta,u_2^*(\eta))|)\\
&\leq 2q^*\Tilde{\Phi}(\alpha_0,\xi)||u_2-u_2^*||.
\end{split}
\end{equation}
It follows that
$$|W(u_2)(\eta)-W(u_2^*)(\eta)|\leq \epsilon(\alpha_0,\xi) ||u_2-u_2^*||$$
where $\epsilon(\alpha_0, \xi)$ is defined by \eqref{38}. Since $\tilde{\Phi}(\alpha_0,\cdot)$, and hence $\epsilon(\alpha_0,\cdot)$, is a continuous increasing function with $\epsilon(\alpha_0,\alpha_0)=0$, there exists a unique $\xi^*>0$ such that $\epsilon(\alpha_0,\xi^*)=1$, and we can notice that
$$\epsilon(\alpha_0,\xi)<1,\;\;\forall\xi: \alpha_0<\xi<\xi^*,\;\;\;\epsilon(\alpha_0,\xi)>1,\;\;\forall \xi: \xi>\xi^*.$$
Therefore, for $\alpha_0<\xi<\xi^*$, the operator $W$ is a contraction, and by the Banach fixed point theorem there exists a unique solution $u_2\in C^0[\alpha_0,\xi]$ to the integral equation \eqref{27}.
\end{proof}
Now we analyze the existence and uniqueness of the solution of equation \eqref{30}. We have to show that the equation
\begin{equation}\label{39}
\phi(\xi)=\xi^{\nu+1}
\end{equation}
where
$$\phi(\xi)= \dfrac{q^*\alpha_0^{\nu}E(\xi,u_2(\xi))}{M\lambda(\theta_m)},$$
$$q^*=\dfrac{P_0e^{-\alpha_0^2}}{\alpha_0\theta_m\sqrt{\pi}},$$ $$M=\dfrac{2l_m\gamma_m}{\lambda_0\theta_m\lambda(\theta_m)}$$
has a unique solution $\xi\in[\alpha_0,\xi^*]$.
\begin{lemma}\label{lem5}
Suppose assumptions \eqref{32}-\eqref{35} hold, then for all $\xi\in[\alpha_0,\xi^*]$ we have that
\begin{equation}\label{40}
\phi_1(\xi)\leq\phi(\xi)\leq\phi_2(\xi)
\end{equation}
where $\phi_1(\xi)$ and $\phi_2(\xi)$ are functions defined by
\begin{equation}\label{41}
\begin{split}
&\phi_1(\xi)=\dfrac{q^*\alpha_0^{\nu}}{M\lambda(\theta_m)}\exp\bigg(-\dfrac{N_M}{aL_m}(\xi^2-\alpha_0^2)\bigg),\;\;\;\xi\geq\alpha_0,\\
&\phi_2(\xi)=\dfrac{q^*\alpha_0^{\nu}}{M\lambda(\theta_m)}\exp\bigg(-\dfrac{N_m}{aL_M}(\xi^2-\alpha_0^2)\bigg),\;\;\;\xi\geq\alpha_0,
\end{split}
\end{equation}
which satisfy the following properties
\begin{equation}\label{42}
\begin{split}
&\phi_1(\alpha_0)=\dfrac{q^*\alpha_0^{\nu}}{M\lambda(\theta_m)}>0,\;\;\;\phi_1(+\infty)=0,\;\;\;\phi_1'(\xi)<0,\;\;\forall\xi>\alpha_0\\
&\phi_2(\alpha_0)=\dfrac{q^*\alpha_0^{\nu}}{M\lambda(\theta_m)}>0,\;\;\;\phi_2(+\infty)=0,\;\;\;\phi_2'(\xi)<0,\;\;\forall\xi>\alpha_0.
\end{split}
\end{equation}
\end{lemma}
\begin{proof}
This lemma follows directly from the bounds \eqref{36} and the definitions \eqref{41} of the functions $\phi_1$ and $\phi_2$, which immediately yield the properties \eqref{42}.
\end{proof}
\begin{lemma}\label{lem6}
If
\begin{equation}\label{eq1}
\phi_2(\xi^*)<(\xi^*)^{\nu+1}
\end{equation}
then, there exists a unique solution $\alpha_0<\xi_1<\xi^*$ to the equation
\begin{equation}\label{43}
\phi_1(\xi)=\xi^{\nu+1},\;\;\xi>\alpha_0
\end{equation}
and there exists a unique solution $\xi_1<\xi_2<\xi^*$ to the equation
\begin{equation}\label{44}
\phi_2(\xi)=\xi^{\nu+1},\;\;\xi>\alpha_0.
\end{equation}
\end{lemma}
\begin{proof}
The claim follows from the properties of $\phi_1$ and $\phi_2$ established in Lemma \ref{lem5}, since $\xi^{\nu+1}$ is a continuous increasing function of $\xi$.
\end{proof}
\begin{remark}
By using the definitions of $\phi_2$ and $M$ we obtain that assumption \eqref{eq1} is equivalent to the following inequality for the latent heat of melting
\begin{equation}\label{eq2}
l_m>\dfrac{q^*\alpha_0^{\nu}\lambda_0\theta_m}{2\gamma_m(\xi^*)^{\nu+1}}\exp\bigg(-\dfrac{N_m}{aL_M}\big((\xi^*)^2-\alpha_0^2\big)\bigg).
\end{equation}
\end{remark}
\begin{theorem}\label{th2}
Suppose \eqref{32}-\eqref{35} and \eqref{eq2} hold. Consider $\xi_1$ and $\xi_2$ determined from \eqref{43} and \eqref{44}. If $\epsilon(\alpha_0,\xi_2)<1$, where $\epsilon$ is defined by \eqref{38}, then there exists at least one solution $\bar{\xi}\in(\xi_1,\xi_2)$ to the equation \eqref{30}.
\end{theorem}
\begin{proof}
If $\epsilon(\alpha_0,\xi_2)<1$, then the hypothesis of Theorem \ref{th1} is satisfied for each $\xi_1\leq\xi\leq\xi_2\leq\xi^*$, so that $\phi$ is well defined there and the inequality \eqref{40} of Lemma \ref{lem5} holds. Since $\phi_1(\xi_1)=\xi_1^{\nu+1}$ and $\phi_2(\xi_2)=\xi_2^{\nu+1}$, we obtain $\phi(\xi_1)\geq\xi_1^{\nu+1}$ and $\phi(\xi_2)\leq\xi_2^{\nu+1}$. As the function $\phi$ is continuous, there exists at least one solution $\bar{\xi}\in[\xi_1,\xi_2]$ to the equation \eqref{30}.
\end{proof}
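The proof is constructive: once $\phi$ can be evaluated (which, for each trial $\xi$, requires solving the inner fixed-point problem of Theorem \ref{th1}), the root $\bar{\xi}$ can be located by bisection on $[\xi_1,\xi_2]$. A minimal sketch is given below, in which $\phi$ is replaced by an illustrative continuous decreasing surrogate and the bracketing interval is an assumption.
\begin{verbatim}
# Sketch of the root-finding step of Theorem th2: bisection for
# phi(xi) = xi^(nu+1) on an assumed bracket [xi1, xi2]. In practice,
# evaluating phi at a trial xi requires solving the inner fixed-point
# problem of Theorem th1; here phi is an illustrative decreasing surrogate.
import numpy as np

nu = 0.5
phi = lambda x: 2.0*np.exp(-x**2)        # placeholder for the true phi
F = lambda x: phi(x) - x**(nu + 1.0)

xi1, xi2 = 0.2, 2.0                      # assumed bracket (cf. Lemma lem6)
assert F(xi1) > 0.0 > F(xi2)
for _ in range(60):
    mid = 0.5*(xi1 + xi2)
    if F(mid) > 0.0:
        xi1 = mid
    else:
        xi2 = mid
print("xi_bar approx", 0.5*(xi1 + xi2))
\end{verbatim}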
We can now summarize the results in the following main theorem.
\begin{theorem}\label{th3}
Assume that \eqref{32}-\eqref{35} and \eqref{eq2} hold and that $\epsilon(\alpha_0,\xi_2)<1$, where $\epsilon$ is defined by \eqref{38} and $\xi_2$ is determined from \eqref{44}. Then there exists at least one solution to the problem \eqref{10}-\eqref{14}, where the unknown free boundary is given by
\begin{equation}\label{45}
\beta(t)=2\bar{\xi}\sqrt{t},\;\;t>0
\end{equation}
where $\bar{\xi}$ is determined in Theorem \ref{th2}, and the temperature is given by
\begin{equation}\label{46}
\theta(z,t)=\theta_m(u_{\bar{\xi}}(\eta)+1),\;\;\alpha_0\leq\eta\leq\bar{\xi}
\end{equation}
where $\eta=\dfrac{z}{2\sqrt{t}}$ is the similarity variable and $u_{\bar{\xi}}$ is the unique solution of the integral equation \eqref{27} established in Theorem \ref{th1}.
\end{theorem}
\subsection{Problem with convective boundary condition} In this section, analogously to the previous one, we prove existence and uniqueness of the solution of the form \eqref{e10}. We assume that $\xi>0$ is a given constant, consider the Banach space $(C^0[\alpha_0,\xi],||\cdot||)$, and define the operator $V:C^0[\alpha_0,\xi]\to C^0[\alpha_0,\xi]$ by
\begin{equation}\label{e13}
V(u_2)(\eta)=u_2(\eta),\;\;\;\alpha_0\leq \eta\leq \xi,
\end{equation}
where $u_2$ is defined by \eqref{e10}.
Let us assume that $L^*$ and $N^*$ satisfy assumptions \eqref{32}-\eqref{35}; then we obtain the following results.
\begin{theorem}\label{th4}
Suppose that \eqref{32}-\eqref{35} hold. If $\alpha_0< \xi< \xi_{c}^*$, where $\xi_c^*$ is defined as the unique solution of $\widehat{\varepsilon}(\alpha_0, z)=1$ with
\begin{equation}\label{e14}
\widehat{\varepsilon}(\alpha_0, z):=\dfrac{\tilde{\Phi}(\alpha_0, z)}{1+\frac{\alpha_0^{\nu}p^*}{2L_m}\exp\bigg(\alpha_0^2\frac{N_m}{aL_M}\bigg)\big(\frac{aL_M}{N_m}\big)^{\frac{1-\nu}{2}}h(\alpha_0,z)},
\end{equation}
where $\tilde{\Phi}(\alpha_0,z)$ defined from \eqref{37} and
$$h(\alpha_0,\eta)=\gamma\bigg(\dfrac{1-\nu}{2},\eta^2\dfrac{N_{m}}{aL_{M}}\bigg)-\gamma\bigg(\dfrac{1-\nu}{2}, \frac{N_{m}}{aL_{M}}\alpha_0^2\bigg),$$
then there exists a unique solution $u_2\in C^0[\alpha_0,\xi]$ for integral equation \eqref{e10}.
\end{theorem}
\begin{proof}
Analogously to the proof of Theorem \ref{th1}, we need to show that the operator $V$ defined by \eqref{e13} is a contraction. Let $u_2,\;u_2^*\in C^0[\alpha_0,\xi]$; then, by using Lemmas \ref{lem1}-\ref{lem4}, we get
$$||V(u_2)-V(u_2^*)||\leq \max_{\eta\in[\alpha_0,\xi]}\Bigg|\dfrac{1+\alpha_0^{\nu}p^*\Phi(\eta,u_2)}{1+\alpha_0^{\nu}p^*\Phi(\xi,u_2)}-\dfrac{1+\alpha_0^{\nu}p^*\Phi(\eta,u_2^*)}{1+\alpha_0^{\nu}p^*\Phi(\xi,u_2^*)}\Bigg|$$
$$\leq \max_{\eta\in[\alpha_0,\xi]} \dfrac{\bigg|(1+\alpha_0^{\nu}p^*\Phi(\eta,u_2))(1+\alpha_0^{\nu}p^*\Phi(\xi,u_2^*))-(1+\alpha_0^{\nu}p^*\Phi(\eta,u_2^*))(1+\alpha_0^{\nu}p^*\Phi(\xi,u_2))\bigg|}{\bigg|1+\alpha_0^{\nu}p^*\Phi(\xi,u_2)\bigg|\bigg|1+\alpha_0^{\nu}p^*\Phi(\xi,u_2^*)\bigg|}$$
$$\leq \widehat{\varepsilon}(\alpha_0, \xi)||u_2^*-u_2||,$$
where $\widehat{\varepsilon}(\alpha_0,\xi)$ is defined by \eqref{e14}, and it is easy to check that
$$\widehat{\varepsilon}(\alpha_0,\xi)<1, \;\;\forall\xi\in [\alpha_0, \xi_c^*),\;\;\;\;\widehat{\varepsilon}(\alpha_0,\xi)>1,\;\;\forall\xi\in(\xi_c^*,\infty).$$
Indeed, $\widehat{\varepsilon}(\alpha_0,\cdot)$ is a continuous increasing function with $\widehat{\varepsilon}(\alpha_0,\alpha_0)=0$, so there exists a unique positive constant $\xi_c^*$ such that $\widehat{\varepsilon}(\alpha_0, \xi_c^*)=1$; hence, for $\alpha_0<\xi<\xi_c^*$, the operator $V$ is a contraction mapping. We conclude that there exists a unique solution $u_2\in C^0[\alpha_0,\xi]$ to the equation \eqref{e10}.
\end{proof}
We have obtained that, for each given $\alpha_0<\xi<\xi_c^*$, the unique solution of \eqref{e10} is $u_2(\eta)=u_{2(\xi)}(\eta)$, and its derivative is
\begin{equation}\label{e15}
u_{2(\xi)}'(\eta)=\dfrac{\alpha_0^{\nu}p^*E(\eta, u_{2(\xi)}(\eta))}{\big[1+\alpha_0^{\nu}p^*\Phi(\xi, u_{2(\xi)}(\xi))\big]\eta^{\nu}L^*(u_{2(\xi)}(\eta))}.
\end{equation}
It remains to analyze the condition \eqref{e11} which can be rewritten as
\begin{equation}\label{e16}
\phi^c(\xi)=\xi^{\nu+1}
\end{equation}
where
$$\phi^c(\xi)=\dfrac{a\alpha_0^{\nu}E(\xi,u_2(\xi))\text{Ste}}{2\big[1+\alpha_0^{\nu}p^*\Phi(\xi,u_2(\xi))\big]}.$$
Then we can obtain the next results.
\begin{lemma}\label{lem7}
Assume that \eqref{32}-\eqref{35} hold. Then for all $\xi\in(\alpha_0,\xi_c^*)$ we have
\begin{equation}\label{e17}
0\leq \phi^c(\xi)\leq \phi_2(\xi)
\end{equation}
where $\phi_2$ is defined by \eqref{41}.
\end{lemma}
\begin{proof}
The proof follows straightforwardly by taking into account the bounds given in Lemma \ref{lem1} and the definition of $\phi_2$ in \eqref{41}.
Moreover, by the properties of $\phi_2$ established in Lemma \ref{lem5}, if \eqref{eq1} holds then there exists a unique solution $\alpha_0<\xi_2\leq\xi_c^*$ of the equation \eqref{44}.
\end{proof}
\begin{theorem}\label{th5}
Assume that \eqref{32}-\eqref{35} hold and let $\xi_2$ be determined from \eqref{44}. Then there exists a unique solution $\widetilde{\xi}_c^*\in(\alpha_0, \xi_2)$ to the equation \eqref{e16}.
\end{theorem}
\begin{proof}
It can be proved analogously to Theorem \ref{th2}.
\end{proof}
\begin{theorem}\label{th6}
Assume that \eqref{32}-\eqref{35} and \eqref{e17} hold. Then there exists at least one solution of the problem \eqref{10}-\eqref{14} with the boundary condition replaced by \eqref{11a}, where the free boundary is defined by
\begin{equation}\label{e18}
\beta(t)=2\widetilde{\xi}_c^*\sqrt{t},\;\;t>0,
\end{equation}
where $\widetilde{\xi}_c^*$ is defined in Theorem \ref{th5} and the temperature in the liquid region is given by
\begin{equation}\label{e19}
\theta_2(z,t)=(\theta_m-\theta^*)u_{2(\widetilde{\xi}_c^*)}(\eta)+\theta^*,\;\;\;\alpha_0\leq \eta\leq \widetilde{\xi}_c^*,
\end{equation}
where $\eta=z/(2\sqrt{t})$ is the similarity variable and $u_{2(\widetilde{\xi}_c^*)}$ is the unique solution to the integral equation \eqref{e10} established in Theorem \ref{th4}.
\end{theorem}
\section{Particular cases for thermal conductivity}
\subsection{Constant thermal coefficients}
In this section we analyze the solutions \eqref{27} and \eqref{e10} when the thermal coefficients are constant, namely
\begin{equation}\label{e20}
c(\theta_2)=c_0,\;\;\;\rho(\theta_2)=\rho_0,\;\;\;\lambda(\theta_2)=\lambda_0,
\end{equation}
Then, setting $N^*=L^*=1$ in \eqref{NL2}, the functions $E$ and $\Phi$ become
\begin{equation}\label{e21}
E(\eta, u_2(\eta))=\exp\bigg(-\dfrac{1}{a}(\eta^2-\alpha_0^2)\bigg),
\end{equation}
\begin{equation}\label{e22}
\Phi(\eta,u_2(\eta))=\dfrac{1}{2}\exp\bigg(\dfrac{\alpha_0^2}{a}\bigg)a^{\frac{1-\nu}{2}}\bigg[\gamma\bigg(\dfrac{1-\nu}{2},\dfrac{\eta^2}{a}\bigg)-\gamma\bigg(\dfrac{1-\nu}{2}, \dfrac{\alpha_0^2}{a}\bigg)\bigg].
\end{equation}
Substituting \eqref{e21} and \eqref{e22} into the integral equations \eqref{27} and \eqref{e10}, we obtain the solution of the problem with heat flux condition as
\begin{equation}\label{e23}
u_2(\eta)=\dfrac{q^*}{2}\exp\bigg(\dfrac{\alpha_0^2}{a}\bigg)a^{\frac{1-\nu}{2}}\bigg[\gamma\bigg(\dfrac{1-\nu}{2},\dfrac{\xi^2}{a}\bigg)-\gamma\bigg(\dfrac{1-\nu}{2},\dfrac{\eta^2}{a}\bigg)\bigg]
\end{equation}
with condition
\begin{equation}\label{e24}
\phi(\xi)=\xi^{\nu+1}
\end{equation}
where
\begin{equation}\label{e25}
\phi(\xi)=\dfrac{q^*\alpha_0^{\nu}\lambda_0\theta_m\exp\big(-\frac{1}{a}(\xi^2-\alpha_0^2)\big)}{2l_m\gamma_m}
\end{equation}
and it is easy to check that the function \eqref{e25} is decreasing, with
$$\phi(\alpha_0)>0,\;\;\;\phi(+\infty)=0,\;\;\;\phi'(\xi)<0,$$
so equation \eqref{e24} has a unique solution.
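For completeness, a short numerical sketch of this constant-coefficient case follows: it evaluates $u_2$ from \eqref{e23} and solves the free-boundary condition \eqref{e24}--\eqref{e25} with a standard root finder; all parameter values are illustrative assumptions.
\begin{verbatim}
# Constant-coefficient case: evaluate u2 from (e23) and solve (e24)-(e25).
# All parameter values are illustrative assumptions.
import numpy as np
from scipy.special import gamma, gammainc
from scipy.optimize import brentq

a, nu, alpha0 = 1.0, 0.5, 0.5
qstar, lam0, theta_m, l_m, gam_m = 2.0, 1.0, 1.0, 1.0, 1.0
s = (1.0 - nu)/2.0
lg = lambda x: gamma(s)*gammainc(s, x)        # lower incomplete gamma

def u2(eta, xi):                              # eq. (e23)
    return 0.5*qstar*np.exp(alpha0**2/a)*a**((1 - nu)/2) \
           * (lg(xi**2/a) - lg(eta**2/a))

def phi(xi):                                  # eq. (e25)
    return qstar*alpha0**nu*lam0*theta_m \
           * np.exp(-(xi**2 - alpha0**2)/a)/(2.0*l_m*gam_m)

xi_root = brentq(lambda x: phi(x) - x**(nu + 1), alpha0, 10.0)
print("free-boundary coefficient xi =", xi_root)
print("u2(alpha0) =", u2(alpha0, xi_root))    # u2(xi) = 0 by construction
\end{verbatim}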
With the help of \eqref{e21} and \eqref{e22}, the solution of the problem \eqref{10}-\eqref{14} with the boundary condition replaced by \eqref{11a} can be represented as
\begin{equation}\label{e26}
u_2(\eta)=\dfrac{1+\frac{\alpha_0^{\nu}p^*}{2}\exp\bigg(\dfrac{\alpha_0^2}{a}\bigg)a^{\frac{1-\nu}{2}}\bigg[\gamma\bigg(\dfrac{1-\nu}{2},\dfrac{\eta^2}{a}\bigg)-\gamma\bigg(\dfrac{1-\nu}{2}, \dfrac{\alpha_0^2}{a}\bigg)\bigg]}{1+\frac{\alpha_0^{\nu}p^*}{2}\exp\bigg(\dfrac{\alpha_0^2}{a}\bigg)a^{\frac{1-\nu}{2}}\bigg[\gamma\bigg(\dfrac{1-\nu}{2},\dfrac{\xi^2}{a}\bigg)-\gamma\bigg(\dfrac{1-\nu}{2}, \dfrac{\alpha_0^2}{a}\bigg)\bigg]}
\end{equation}
with condition
\begin{equation}\label{e27}
\phi_c(\xi)=\xi^{\nu+1}
\end{equation}
where
\begin{equation}\label{e28}
\phi_c(\xi)=\dfrac{a\alpha_0^{\nu}\exp\big(-\frac{1}{a}(\xi^2-\alpha_0^2)\big)\text{Ste}}{2\bigg[1+\frac{\alpha_0^{\nu}p^*}{2}\exp\big(\frac{\alpha_0^2}{a}\big)a^{\frac{1-\nu}{2}}\bigg(\gamma\big(\frac{1-\nu}{2},\frac{\xi^2}{a}\big)-\gamma\big(\frac{1-\nu}{2}, \frac{\alpha_0^2}{a}\big)\bigg)\bigg]}
\end{equation}
and here we can also see that the function \eqref{e28} is decreasing, since
$$\phi_c(\alpha_0)>0,\;\;\;\phi_c(+\infty)=0,\;\;\;\phi_c'(\xi)<0,$$
hence there exists a unique solution to equation \eqref{e27}.
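A companion sketch for the convective condition \eqref{e27}--\eqref{e28} is given below, again with purely illustrative parameter values for $a$, $\nu$, $\alpha_0$, $p^*$ and $\text{Ste}$.
\begin{verbatim}
# Convective case with constant coefficients: solve (e27) with phi_c from
# (e28); parameter values (a, nu, alpha0, p*, Ste) are illustrative.
import numpy as np
from scipy.special import gamma, gammainc
from scipy.optimize import brentq

a, nu, alpha0, pstar, Ste = 1.0, 0.5, 0.5, 1.0, 2.0
s = (1.0 - nu)/2.0
lg = lambda x: gamma(s)*gammainc(s, x)

def Phi(xi):                                  # eq. (e22)
    return 0.5*np.exp(alpha0**2/a)*a**((1 - nu)/2) \
           * (lg(xi**2/a) - lg(alpha0**2/a))

def phi_c(xi):                                # eq. (e28)
    E = np.exp(-(xi**2 - alpha0**2)/a)
    return a*alpha0**nu*E*Ste/(2.0*(1.0 + alpha0**nu*pstar*Phi(xi)))

xi_c = brentq(lambda x: phi_c(x) - x**(nu + 1), alpha0, 10.0)
print("free-boundary coefficient xi_c =", xi_c)
\end{verbatim}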
\subsection{Linear thermal coefficients}
In this subsection we analyze the case when the thermal coefficients are given by
\begin{equation}\label{e29}
c(\theta_2)=c_0\bigg(1+\alpha\dfrac{\theta_2-\theta^*}{\theta_m-\theta^*}\bigg),\;\;\;\rho(\theta_2)=\rho_0,\;\;\;\lambda(\theta_2)=\lambda_0\bigg(1+\beta\dfrac{\theta_2-\theta^*}{\theta_m-\theta^*}\bigg)
\end{equation}
where $\alpha$ and $\beta$ are given positive constants. In this paper, this particular case is considered for the problem \eqref{10},\eqref{11a},\eqref{12}-\eqref{14}.
Substituting \eqref{e29} into \eqref{NL2} we obtain
$$L^*(u_2)=1+\beta u_2,\;\;\;N^*(u_2)=1+\alpha u_2$$
and, noticing that $u_2\in C^0[\alpha_0,\xi]$ and taking $\alpha_0=1,\;\xi=2$, from assumptions \eqref{32}-\eqref{35} we get
$$1+\beta\leq L^*(u_2)\leq 1+2\beta,\;\;\;1+\alpha\leq N^*(u_2)\leq 1+2\alpha$$
with
$$L_m=1+\beta,\;\;\;L_M=1+2\beta,\;\;\;N_m=1+\alpha,\;\;\;N_M=1+2\alpha.$$
Then the functions $E$ and $\Phi$ become
\begin{equation}\label{e30}
E(\eta,u_2(\eta))=\exp\bigg(-\dfrac{1+\alpha}{a(1+\beta)}(\eta^2-\alpha_0^2)\bigg),
\end{equation}
\begin{equation}\label{e31}
\begin{split}
\Phi(\eta,u_2(\eta))&=\dfrac{1}{2(1+\beta)}\exp\bigg(\frac{1+\alpha}{a(1+2\beta)}\alpha_0^2\bigg)\bigg(\dfrac{a(1+2\beta)}{1+\alpha}\bigg)^{\frac{1-\nu}{2}}\\
&\cdot\bigg[\gamma\bigg(\dfrac{1-\nu}{2},\eta^2\dfrac{1+\alpha}{a(1+2\beta)}\bigg)-\gamma\bigg(\dfrac{1-\nu}{2}, \frac{1+\alpha}{a(1+2\beta)}\alpha_0^2\bigg)\bigg].
\end{split}
\end{equation}
By using \eqref{e30} and \eqref{e31}, the integral equation \eqref{e10} can be rewritten in the following form
\begin{equation}\label{e32}
u_2(\eta)=\dfrac{1+\frac{\alpha_0^{\nu}p^*}{2(1+\beta)}\exp\bigg(\frac{\alpha_0^2(1+\alpha)}{a(1+2\beta)}\bigg)\big(\frac{a(1+2\beta)}{1+\alpha}\big)^{\frac{1-\nu}{2}}w(\alpha_0,\eta)}{1+\frac{\alpha_0^{\nu}p^*}{2(1+\beta)}\exp\bigg(\frac{\alpha_0^2(1+\alpha)}{a(1+2\beta)}\bigg)\big(\frac{a(1+2\beta)}{1+\alpha}\big)^{\frac{1-\nu}{2}}w(\alpha_0,\xi)}
\end{equation}
where
$$w(\alpha_0,\eta)=\bigg[\gamma\bigg(\dfrac{1-\nu}{2},\eta^2\dfrac{1+\alpha}{a(1+2\beta)}\bigg)-\gamma\bigg(\dfrac{1-\nu}{2}, \frac{1+\alpha}{a(1+2\beta)}\alpha_0^2\bigg)\bigg]$$
with condition
\begin{equation}\label{e33}
\widetilde{\phi}_c(\xi)=\xi^{\nu+1}
\end{equation}
where
\begin{equation}\label{e34}
\widetilde{\phi}_c(\xi)=\dfrac{a\alpha_0^{\nu}\exp\big(-\frac{1+\alpha}{a(1+\beta)}(\xi^2-\alpha_0^2)\big)\text{Ste}}{2\big[1+\alpha_0^{\nu}p^*\Phi(\xi,u_2(\xi))\big]}
\end{equation}
with $\Phi(\xi,u_2(\xi))$ which can be defined by \eqref{e31}.
We can easily check that the function $\widetilde{\phi}_c$ is decreasing for all $\xi>\alpha_0$, which implies that equation \eqref{e33} has a unique solution.
\section*{Conclusion}
We have studied a one-phase Stefan problem for the generalized heat equation with a heat flux entering the domain $D_2$ from the metallic vapour zone through the free boundary $z=\alpha(t)$ determined from \eqref{9}. The temperature field in the liquid metal zone and the free boundary at the melting interface are determined. Existence and uniqueness of the similarity solution, with a heat flux or a convective boundary condition imposed at the known left free boundary describing the location of the boiling interface, is proved. These results are expected to be useful to electrical contact engineers for describing heat processes arising in bodies with variable cross-sections. In particular, the metal bridge between two electrical contact materials melts when an explosion occurs, and in order to prevent contact failure it is important to analyze the heat transfer in the bridge material for different material characteristics. Explicit solutions of the problem \eqref{10}-\eqref{14} with constant and linear thermal coefficients are presented, and the existence and uniqueness of the corresponding solutions are discussed.
\section*{Acknowledgment}
The author thanks Prof. S.N. Kharin for his support and valuable comments. The present work was sponsored by the grant project AP14869306 ``Special methods for solving electrical contact Stefan type problems and their application to the study of electric arc processes'' from the Ministry of Science and Education of the Republic of Kazakhstan.
\section{Introduction}
Semiconductor lasers subject to optical injection display a wide range of dynamical behavior, including stable locking, periodic oscillations, and chaos \cite{Wieczorek2005}. Stable locking is achieved when the injection is strong enough and the detuning – between the wavelength of the master laser and the free-running slave laser – is small enough. Within the locking range, the frequency and phase of the slave laser are locked to those of the master laser. Injection locking has been extensively used to improve laser performance by, e.g., reducing the linewidth \cite{Schunk1986}, suppressing mode hopping and partition noise \cite{Iwashita1982}, or increasing the modulation bandwidth \cite{Simpson1995}. Outside the locking range, different types of dynamics emerge which have been used in various applications such as THz signal generation, chaotic communication, and random bit generation \cite{Wu2017,Liu2001,Li2012}.\\
While the research on optical injection of semiconductor lasers has been mainly carried out with single-mode lasers, multi-wavelength lasers under optical injection are gaining interest over the last years as they may be key enablers of new applications such as all-optical memory \cite{Osborne2009}, reconfigurable photonic microwave generation \cite{Carpintero2018}, and all-optical signal processing \cite{Desmet2020}. In contrast to single-mode lasers, the dynamical behavior of multi-wavelength lasers is strongly influenced by the mode competition occurring through the carrier dynamics in the active medium, and the impact of this competition on the laser dynamics remains to be fully clarified. \\
\red{The self- and cross-saturation induced by spectral effects, such as spectral hole burning, were found to determine the stability of simultaneous dual-wavelength emission \cite{Agrawal1987}. They depend on the wavelength separation, and it was thus predicted that simultaneous dual-wavelength emission would be impossible in quantum well lasers for a mode separation smaller than 0.5 to 1.6~THz \cite{Agrawal1987, Chusseau2013}. A successful experimental demonstration with a 480~GHz separation, however, called for further studies \cite{Osborne2007}. Spatial effects, on the other hand, proved to be important to explain anti-phase dynamics between modes \cite{Masoller2005} and, more recently, chaos in microlasers \cite{Ma2022}. Yet, their investigation requires more complex modelling such as travelling wave models \cite{Homar1996, Serrat2006} or the inclusion of carrier gratings \cite{Viktorov2000, Lenstra2014}. In this case, the coupling does not depend on the wavelength separation but rather on the overlap between the cavity eigenmodes \cite{Viktorov2000}.}\\
When considering the effect of optical injection, this coupling between the different laser modes can naturally be expected to have a major impact on dynamical behavior. However, while the case of dual-state emitting quantum dot lasers has attracted considerable attention at least in part due to the particularly complex carrier dynamics \cite{Kelleher2021}, the case of quantum-well multi-wavelength lasers seems to have been left aside – perhaps due to their limited availability – and only a few works tackled this problem \cite{Heinricht2011,Osborneee2009,Osborne2012}.
\red{Still, in the case of quantum dot or quantum cascade lasers in which multi-wavelength emission typically involves different energy levels, the coupling mechanism could be expected to be significantly different \cite{Geiser2010, Virte2013a, Chusseau2013}, though similarities have also been reported \cite{Virte2016c}.} \\
In this work, we numerically and experimentally analyze the effect of nonlinear mode coupling in a dual-wavelength laser under optical injection. We focus mainly on the stable locking regime and consider a configuration where one mode is significantly suppressed while the other is dominant. We inject light at a wavelength corresponding to the suppressed mode and analyze the evolution of the locking range for different power balances between the injected and un-injected mode, i.e. the suppressed and dominant mode respectively when no injection is considered. We report a counter-intuitive dependence of the locking range on the power of the suppressed mode: locking appears to be facilitated when the suppressed mode has a higher output power, i.e. the suppression ratio compared to the dominant mode is lower. In addition, within the locking range of the injected mode, we report a detuning-dependent wavelength shift of the un-injected mode which increases with the injection strength and with the power of the suppressed mode, signaling a strong mode coupling in these laser structures. Numerically, we identify the cross-coupling parameter as being particularly relevant. We believe that these results bring new important insight into the mode coupling and competition taking place in multi-wavelength lasers and motivate further investigations focusing on these aspects.
\section{Experimental setup}
The dual-wavelength laser (DWL) investigated in this work was fabricated on the InP generic foundry platform of SMART Photonics \cite{Smit2014}, and its structure is schematically described in Fig.~\ref{fig:Fig1}(a). The design uses the standard building blocks of the SMART Photonics library. The gain medium consists of a semiconductor optical amplifier (SOA) with a length of 500~$\mu$m. The laser cavity is formed by a broadband two-port \red{multimode interference reflector (MIR) \cite{Kleijn2013}} on one side and, on the other side, two distributed Bragg reflectors (DBR) arranged sequentially, which act as wavelength-selective elements. The pitch of the DBRs has been set to obtain a spectral separation of 10~nm between both modes. In the investigated structure, the laser emits at $\lambda_1\approx1536.7$~nm and at $\lambda_2\approx1547.6$~nm. DBR$_1$ has a 3-dB bandwidth of 1.08~nm while DBR$_2$ has a bandwidth of 1.47~nm. The parameters of each DBR have been optimized using Lumerical Interconnect simulations \cite{Lumerical} to obtain similar gain levels for both modes of the DWL. With this design, distinct cavities are defined for each wavelength, thus leading to a different separation between longitudinal modes: 31.9~GHz and 41.2~GHz for wavelengths $\lambda_1$ and $\lambda_2$, respectively. The same laser structure has already been used and its emission properties have been partially characterized in previous works \cite{Pawlusss2019,Pawlus2019}.
The chip is electrically packaged with all-metal pads being wafer bonded to PCB boards. In addition, it has been glued on a Peltier element including a thermistor. During our experiments, the temperature of the chip is set to 22~$^{\circ}$C. To couple the light out of the photonic integrated circuit, we use a lensed fiber followed by an optical spectrum analyzer (APEX, AP2083, resolution down to 5 MHz / 40 fm) to record the spectrum and monitor the total output power. For a given injection current $I_{SOA}$, the power balance, i.e. the ratio in emitted power of each mode of the DWL, can be controlled by tuning DBR$_1$ and DBR$_2$. By increasing the current applied to the DBR structures, the reflectivity spectrum shifts toward shorter wavelengths while its 3-dB bandwidth and maximum reflectivity decrease slightly \cite{Robbe}. This approach gives a reasonable control of the power balance between the two wavelength emission processes, as can be seen in Fig.~\ref{fig:Fig1}(b) showing the evolution of the power ratio $\Delta P_s = P_{s_1} / P_{s_2}$ between the power, $P_{s_1}$, of the suppressed mode and the power, $P_{s_2}$, of the dominant mode when the current sent to DBR$_1$ and DBR$_2$ is varied. For the record, the subscript `s' stands for ``slave laser'' as opposed to the master laser output power, which will be denoted by an `m' subscript. This evolution is measured for a fixed injection current $I_{SOA}= 60$~mA, which is about three times the threshold current of 21~mA. With these parameters, a wide range of $\Delta P_s$ values is achieved. Fig.~\ref{fig:Fig1}(c) depicts the optical spectra for the three DBR current configurations identified by red stars on the map of panel (b), for which power ratios of $\Delta P_s = -42.5$~dB, -18.0~dB, and -6.2~dB are reported. It should be noted that the total output power of the DWL measured after the lensed fiber is $P_s\approx-4.3$~dBm, and remains constant in all tested configurations. Several longitudinal modes can be spotted in the optical spectra but they all remain largely suppressed during our investigations.
Optical injection is realized by sending the light from a tunable high-quality master laser (Keysight N7776C) into the cavity of the slave DWL via a fiber optic circulator as schematically described in Fig.~\ref{fig:Fig1}(a). A variable optical attenuator is used to adjust the injection strength and a fiber polarization controller to match the polarization of the injected light to that of the DWL. We define the injection strength as the square root of the power of injected light. Because we focus here on the effect of optical injection of the suppressed mode, we define the detuning $\Delta f=f_m -f_{s_1}$ as the frequency difference between the master laser frequency $f_m$ and the free-running frequency of the suppressed mode $f_{s_1}$.
\begin{figure}[tb]
\centering\includegraphics[width=\linewidth]{Fig1.pdf}
\caption{(a) Experimental setup: the light from a master laser is routed towards the DWL through a fiber optic circulator (FOC) and a lensed fiber (LF). The injection strength is adjusted by using a variable optical attenuator (VOA). A fiber polarization controller (FPC) is used to adjust the polarization of the injection. A high-resolution optical spectrum analyzer (OSA) is used for measurements. (b) Evolution of the output power ratio $\Delta P_s$ as a function of the current applied to DBR$_1$ and DBR$_2$. The red stars show the three configurations for which the optical spectra are shown in panel (c). (c) Optical spectra of the DWL for the highlighted DBR current configurations.}
\label{fig:Fig1}
\end{figure}
\section{Identification and classification of partial and full locking}
In contrast to single-mode lasers, we must distinguish two distinct locking regimes in a DWL \cite{Heinricht2011}. When the mode under injection is locked to the master laser while the un-injected mode is suppressed, the laser is in the standard locking regime. However, another regime is possible: when the DWL still emits from the un-injected mode while the mode under injection is locked, we denote this regime as partial locking. In this section, we detail how we classify these two regimes experimentally.
In Fig.~\ref{fig:Fig2}, we show the spectral evolution of the injected/suppressed and un-injected/dominant mode for a power ratio $\Delta P_s=-42.5$~dB when changing the detuning $\Delta f$ from -2.3~GHz to 1.3~GHz for four different injection strengths from $\kappa=0.1$ to 0.43. In Fig.~\ref{fig:Fig2}(a), both master and slave signals are visible in all four cases (a.1), (a.2), (a.3), and (a.4). While locking of the injected mode can be observed in all cases, for an injection strength above $\kappa>0.18$ we also observe a relatively large detuning range where the master laser is strongly influencing the frequency of the injected mode. Interestingly, when looking at the optical spectra of the un-injected mode in the second row, i.e. Fig.~\ref{fig:Fig2}(b), we see that this mode exhibits a similar redshift of its frequency with a similar trend to the one experienced by the injected mode. This is particularly visible when the injection strength is increased and the locking range is extended. We associate this frequency shift with the strong nonlinear coupling of the DWL modes under optical injection.
To further analyze the respective evolution of the injected and un-injected modes, we measure the peak power for each mode at each detuning step, as shown in the four panels of Fig.~\ref{fig:Fig2}(c), the bottom row. Here, we estimate the power of each mode by integrating the optical spectra in a range of $\pm{2.5}$~GHz around the mode frequency before injection, i.e. $f_{s_1}$ and $f_{s_2}$ respectively. For the injected/suppressed mode, with a suppression ratio of $\Delta P_s=-42.5$~dB, the master signal is significantly more powerful than the injected mode. Yet, when locking occurs, we can see a clear increase of the output power of approximately 20~dB, see the blue lines in Fig.~\ref{fig:Fig2}(c.1), (c.2), (c.3), and (c.4). Although this is, in itself, a strong indication of injection locking \cite{Hui1991}, we remark that for a weak injection $\kappa=0.10$, the un-injected mode is still emitting strongly with power levels higher than that of the injected mode. These features, therefore, correspond to what we will define as the partial locking regime. For stronger injection, however, the power of the un-injected mode experiences a significant drop. For instance, for $\kappa=0.43$, the power of the un-injected mode is suppressed by approximately 20~dB and the laser emits almost only at the frequency of the master laser. This feature is also consistent with the standard injection locking state, which we will refer to as the ``full locking'' regime as opposed to the ``partial locking'' discussed above.
\begin{figure}[tb]
\centering
\includegraphics[width=\linewidth]{Fig2_new.pdf}
\caption{Spectral evolution of the DWL modes when the suppressed mode is subject to optical injection for $\Delta P_s=-42.5$~dB while the detuning is changed from -2.3~GHz to 1.3~GHz, for four different $\kappa$ values, from left to right: 0.10, 0.18, 0.30, and 0.43 identified by numbers (1), (2), (3) and (4) respectively. (Top row, a) Spectral evolution of the injected mode $\lambda_1$ with respect to free-running frequency $f_{s_1}$. (Middle row, b) Spectral evolution of the un-injected mode $\lambda_2$ with respect to free-running frequency $f_{s_2}$. (Bottom row, c) Evolution of the peak power around the injected (blue) and un-injected (red) modes. The dotted line represents the sum of the power of the free-running injected mode and master laser. The dashed line represents the 20~dB threshold used to identify partial locking as discussed in the text. The red shaded region indicates the partial locking, and the green shaded regions indicate the full locking range where the laser solely emits at the frequency of the master laser and the un-injected/dominant mode is strongly suppressed.}
\label{fig:Fig2}
\end{figure}
In practice, to classify experimentally a given behavior as partial or full locking, we use these different figures of merit and set discriminating thresholds. Although this is, to a certain extent, rather arbitrary and likely imperfect, it gives us a robust, objective, and systematic classification method. First, to identify locking, we measure the output power around the injected mode and compare it with the power of the mode in a free-running configuration, i.e. without injection, added to the power of the injected signal, i.e. the power of the master laser $P_m$. The latter sum is constant for a given injection strength $\kappa$ and suppression ratio $\Delta P_s$. Its value is represented in Fig.~\ref{fig:Fig2}(c) by the horizontal dotted line. It can also be seen that this value naturally increases with the injection strength $\kappa$. We then set a threshold chosen heuristically at 20~dB above this dotted line, as shown in Fig.~\ref{fig:Fig2}(c) as the horizontal dashed line. When the measured power is above this threshold, we consider the injected mode to be locked to the master laser if, in addition, the side-mode suppression ratio is at least 30~dB. In this case, the side-mode suppression ratio is of course not considering emission around the other wavelength but only a limited bandwidth around the injected mode \cite{Hui1991}. Second, to identify if the locking is partial or total, we compare the power of both injected and un-injected modes. When the power of the injected mode is at least 10~dB higher than the power of the un-injected mode, we consider the DWL to be fully locked to the master laser. Otherwise, if the power difference is less than 10~dB or the un-injected mode is still the dominant one, then we consider the DWL to be partially locked. With this classification method, we can identify the full and partial locking range -- highlighted by the green and red shaded area in each panel of Fig.~\ref{fig:Fig2}(c), respectively. Similar to the reports on injection in single-mode lasers, we observe a shift of the locking range towards smaller detuning when increasing the injection strength, which is attributed to the amplitude-phase coupling \cite{Zhang2019}.
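For reproducibility, the classification logic described above can be condensed into a few lines; the threshold values are those given in the text, while the function name and the data layout are assumptions of this sketch.
\begin{verbatim}
# Sketch of the locking classification used above. Thresholds follow the
# text (+20 dB above the free-running-plus-master level, >= 30 dB SMSR near
# the injected mode, 10 dB inter-mode ratio for full locking); the function
# name and data layout are assumptions.
import numpy as np

def classify(p_inj_dBm, p_uninj_dBm, p_free_mW, p_master_mW, smsr_dB):
    # p_inj/p_uninj: power integrated over +-2.5 GHz around each mode (dBm)
    ref_dBm = 10.0*np.log10(p_free_mW + p_master_mW)  # dotted line, Fig. 2(c)
    locked = (p_inj_dBm > ref_dBm + 20.0) and (smsr_dB >= 30.0)
    if not locked:
        return "unlocked"
    return "full" if (p_inj_dBm - p_uninj_dBm) >= 10.0 else "partial"

print(classify(p_inj_dBm=0.0, p_uninj_dBm=-25.0,
               p_free_mW=1e-4, p_master_mW=5e-3, smsr_dB=35.0))  # -> full
\end{verbatim}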
\section{Evolution of the locking range with variations of the suppression ratio}
In this section, we describe the impact that the suppression ratio, $\Delta P_s$, between the injected and un-injected mode has on both the frequency shift experienced by the un-injected mode and the locking range of the DWL. We perform the same measurement as in the previous section, i.e. we inject the suppressed mode and vary the detuning $\Delta f$ from -2.3~GHz to 1.3~GHz, but we repeat it systematically for increasing values of the injection strength $\kappa$ and classify the dynamical state of the laser based on the optical spectrum of the injected mode, i.e. the suppressed mode of our DWL. In Fig.~\ref{fig:Fig3}(a.1), (a.2), and (a.3), we show these detuning-injection strength maps for three different output power ratios between the two modes: $\Delta P_s=-42.5$~dB, $\Delta P_s=-18.0$~dB, and $\Delta P_s=-6.2$~dB. In each map, we identify the partial and full locking range along with additional complex dynamics featuring broad optical spectra with multiple peaks. Locking is identified by a peak count of one, i.e. only one peak is present in the optical spectrum of the injected mode, i.e. the injected mode is locked to the master laser. This includes both partial and full locking, which, as discussed in the previous section, we distinguish based on the relative output power of each mode. In contrast, a peak count of two represents cases where locking does not occur and both slave and master signals are present in the optical spectrum. Then, higher peak counts indicate the emergence of more complex dynamics such as periodic oscillations or chaos. These states are characterized by broad optical spectra typically comprising several peaks, but, since the precise classification of these dynamics is out of the scope of this work, we only indicate here the number of detected peaks without further details.
In Fig.~\ref{fig:Fig3}(a.1), (a.2), and (a.3), irrespective of the power ratio between the two modes of the DWL, we observe, as expected, that the locking range broadens when increasing the injection strength. Interestingly, this is true for both partial and full locking. When comparing the three panels, we can clearly notice a significant effect of the suppression ratio $\Delta P_s$ on the locking range: a lower suppression of the mode leads to a large extension of the locking region. In other words, for a given injection strength -- estimated with respect to the square root of the power of the injected light -- the locking range is extended when the injected mode is emitting at higher power. \red{With a single mode laser subject to optical injection, a more powerful slave laser with a fixed injected power would lead to a decrease of the locking range.} Our observations therefore seem rather counter-intuitive as, from the perspective of the injected mode, the injection strength is, in practice, lower. This suggests that the dominant mode might have, in this context, a stabilizing effect on the DWL.
\begin{figure}[tb]
\centering\includegraphics[width=\linewidth]{fig3_final.pdf}
\caption{Stability maps of the injected mode and frequency shift experienced by the un-injected mode for three different power ratios, $\Delta P_s$ of -42.5~dB (1, left), -18.0~dB (2, middle) and -6.2~dB (3, right). (Top, a) Detuning-injection strength maps for the injected mode. The dark pink and light pink colors illustrate the full and partial locking, respectively. The gray-shaded area indicates the detection of two peaks; thus, no locking is observed, but no dynamics are detected. Higher peak counts are associated with different types of complex dynamics. (b) Frequency shift experienced by the dominant/un-injected mode with respect to its free-running frequency, i.e. without optical injection of the suppressed mode. The blue-shaded area indicates the regions where dynamics are detected, and the data are thus discarded.}
\label{fig:Fig3}
\end{figure}
Next, we focus on the evolution of the un-injected/dominant mode and analyze the frequency shift it experiences. Fig.~\ref{fig:Fig3}(b.1), (b.2), and (b.3) show the frequency shift experienced by the un-injected mode while changing the detuning and injection strength applied to the suppressed mode for the three different suppression ratios considered. These values are specified with respect to the mode frequency when no optical injection is applied. In all three cases, we can observe that the un-injected mode experiences a slight redshift whose magnitude depends on both the detuning and injection strength. \red{A similar shift has already been reported in a quantum dot laser but between two longitudinal modes from the same energy level \cite{Hurtado2013}}. When $\Delta P_s$ is increased, both its magnitude and the detuning range in which it appears, increase. The frequency shift clearly occurs when the injected mode is locked. Although it occurs for both partial and full locking regimes, it is significantly larger in the latter. While on the positive detuning side, we observe a smooth transition with the unlocked behavior, a sharp change is observed on the negative detuning side. Complex dynamics appear at the detuning values for which similar dynamical behavior is also observed around the injected mode, see the blue-shaded area in Fig.~\ref{fig:Fig3}(b.2) and (b.3).
The suppression ratio $\Delta P_s$ plays an important role in both the locking range and the frequency shift of the un-injected mode. To better analyze its effect, we re-examine the previous data for different suppression ratios but at a fixed injection strength. In Fig.~\ref{fig:fig4}(a), we compare the evolution of full and full+partial locking ranges at three different injection strengths for increasing suppression ratios. We thus confirm unambiguously that both locking ranges increase with $\Delta P_s$ regardless of the detailed definition (full or partial). Interestingly, the gap between full and full+partial locking does not appear to be fixed and is even largely reduced at high injection strength and low suppression ($\Delta P_s=-6.2$~dB). This feature seems attributable to the coupling mechanism between the two modes. Though numerical models provide a good qualitative agreement, as discussed in the next section, the detailed mechanism of this coupling and cross-mode stabilization would require further investigations. Fig.~\ref{fig:fig4}(b) illustrates the detuning-dependent frequency shift of the un-injected mode at three different power ratios. Though already visible in the mapping, this plot clearly shows that increasing $\Delta P_s$ leads to an increase in both the frequency shift magnitude and the detuning bandwidth in which the frequency shift is observed. The slope of the shift curve appears to be identical for all three $\Delta P_s$ configurations and quite close to 1. Since the frequency shift of the un-injected/dominant mode appears while the injected/suppressed mode is locked, we can conclude that the wavelength difference between the two modes of the DWL is preserved despite these shifts.
\begin{figure}[tb]
\centering\includegraphics[width=\linewidth]{fig4.pdf}
\caption{Influence of the power ratio, $\Delta P_s$, on the locking range of the injected mode and frequency shift of the un-injected mode. (a) Locking range of the injected mode for an injection strength of $\kappa=0.10$ (blue), $\kappa=0.30$ (pink), and $\kappa=0.43$ (yellow). Solid lines correspond to the full locking range, dashed lines to the full+partial locking range. (b) Frequency shift of the un-injected mode as a function of the detuning for $\Delta P_s=-42.5$~dB (brown), -18~dB (red) and -6.2~dB (green) at a given injection strength of $\kappa=0.18$. For $\Delta P_s=-18.0$~dB and $\Delta P_s=-6.2$~dB, the lines are discontinued when complex dynamics appear on the negative detuning side. }
\label{fig:fig4}
\end{figure}
\section{Numerical investigations}
In this section, we investigate numerically the locking range of the injected mode and the detuning-dependent frequency shift experienced by the un-injected mode. The model employed for this theoretical analysis is based on the multi-mode extension of the well-known single-mode Lang-Kobayashi equation \cite{Koryukin2004,Viktorov2000}. The equations for the complex fields of each mode, $E_1$ and $E_2$, are given by
\begin{equation}
\frac{dE_1}{dt} =(1+i\alpha)(g_1 N_1-\frac{1-g_1}{2})E_1+\kappa_T e^{i\Delta t},
\end{equation}
\begin{equation}
\frac{dE_2}{dt} =(1+i\alpha)(g_2 N_2-\frac{1-g_2}{2})E_2.
\end{equation}
Compared to \cite{Viktorov2000}, we have removed the term corresponding to the optical feedback but have added an injection term, $\kappa_T e^{i\Delta t}$ to Eq. (1) to account for the optical injection, where $\Delta$ is the normalized detuning and $\kappa_T$ is the injection rate. It should be noted that the injection strength $\kappa$, defined in the previous experimental sections, and injection rate $\kappa_T$ used here cannot be quantitatively compared directly. The reason is two-fold: first, in the experiment, we do not have access to the coupling losses from the lens fiber to the laser cavity on the chip; second, the equations used here are normalized by the photon lifetime as described in \cite{Koryukin2004}. In Eqs. (1) and (2), $\alpha$ represents the linewidth enhancement factor, $g_{1,2}$, the normalized gain coefficients, and $N_{1,2}$ are the carrier densities. Their evolution is described by:
\begin{equation}
\tau_s\frac{dN_1}{dt} =\eta-N_1-(1+2N_1)(g_1|E_1|^2+g_2\beta|E_2|^2),
\end{equation}
\begin{equation}
\tau_s\frac{dN_2}{dt} =\eta-N_2-(1+2N_2)(g_1\beta|E_1|^2+g_2|E_2|^2).
\end{equation}
with $\tau_s$ the normalized carrier lifetime, and $\beta$ the cross-saturation parameter which takes values between 0 and 1. A value of $\beta=0$ describes two decoupled carrier pools whereas $\beta=1$ corresponds to one single carrier pool for both wavelengths.
In this work, unless stated otherwise, we use the following parameter values. The linewidth enhancement factor is set to $\alpha=3$. The pump parameter $\eta$ corresponds to a normalized laser injection current $J$ so that $\eta={J}/{J_{th}}-1$, with $J_{th}$ being the threshold current. Here, we use $\eta=2$, corresponding to an injection current equal to three times the laser threshold consistent with the conditions of our experimental work. We set the cross-saturation parameter $\beta=0.9$, and the normalized carrier lifetime to $\tau_s=1000$. To tune the suppression ratio $\Delta P_s$ between the two modes of the DWL, we fix the gain of the second mode at $g_2=1$, and adjust the modal gain $g_1$ of mode $E_1$ accordingly to achieve the desired suppression ratio without optical injection. \red{We also consider a standard noise term, not shown, in both field equations with a spontaneous emission factor of $\beta_{sp}=10^{-10}$}. Since the equations are normalized in time with respect to the photon lifetime, all parameters are dimensionless.
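As a rough illustration of how the stability maps below are generated, the following sketch integrates Eqs.~(1)-(4) for a single (detuning, injection rate) operating point with a simple Euler-Maruyama scheme; the step size, integration length, initial conditions, and the precise form of the noise term are illustrative assumptions, with the spontaneous emission factor taken as stated above.
\begin{verbatim}
# Euler-Maruyama integration of Eqs. (1)-(4) at one (Delta, kappa_T) point;
# time is normalized to the photon lifetime. Step size, run length, initial
# conditions and the form of the noise term are illustrative assumptions.
import numpy as np

alpha, tau_s, eta_pump, beta_x = 3.0, 1000.0, 2.0, 0.9
g1, g2 = 0.9234, 1.0                     # Delta P_s ~ -6 dB configuration
kT, Delta = 0.10, -0.2                   # injection rate and detuning
beta_sp, dt, nsteps = 1e-10, 0.1, 200000

rng = np.random.default_rng(0)
E1, E2, N1, N2 = 0.1 + 0j, 1.0 + 0j, 0.0, 0.0
for n in range(nsteps):
    dE1 = (1 + 1j*alpha)*(g1*N1 - (1 - g1)/2)*E1 + kT*np.exp(1j*Delta*n*dt)
    dE2 = (1 + 1j*alpha)*(g2*N2 - (1 - g2)/2)*E2
    dN1 = (eta_pump - N1
           - (1 + 2*N1)*(g1*abs(E1)**2 + g2*beta_x*abs(E2)**2))/tau_s
    dN2 = (eta_pump - N2
           - (1 + 2*N2)*(g1*beta_x*abs(E1)**2 + g2*abs(E2)**2))/tau_s
    E1 += dt*dE1 + np.sqrt(beta_sp*dt)*(rng.standard_normal()
                                        + 1j*rng.standard_normal())
    E2 += dt*dE2 + np.sqrt(beta_sp*dt)*(rng.standard_normal()
                                        + 1j*rng.standard_normal())
    N1 += dt*dN1
    N2 += dt*dN2
# Counting peaks in the spectrum of E1 (e.g. via an FFT of a stored time
# trace) then classifies the operating point as in Fig. 5.
print("final |E1|^2 =", abs(E1)**2, " |E2|^2 =", abs(E2)**2)
\end{verbatim}
This is only a sketch of the workflow; the maps in Fig.~\ref{fig 5} aggregate many such runs over a grid of $(\Delta,\kappa_T)$ values.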
To reproduce the experimental measurements, we sweep the detuning from $\Delta=-0.6$ to 0.2 while increasing the injection rate from $\kappa_T=0$ to 0.25, for three different values of the suppression ratio $\Delta P_s=-40$~dB, -6~dB, and -1~dB. The gain coefficient associated with the mode under injection is varied to achieve the desired $\Delta P_s$ values: $g_1=0.9190$ for $\Delta P_s=-40$~dB, $g_1=0.9234$ for $\Delta P_s=-6$~dB, and $g_1=0.987$ for $\Delta P_s=-1$~dB. We thus obtain the stability maps shown in Fig.~\ref{fig 5}(a.1), (a.2), and (a.3). To identify the partial and full locking range, the same conditions as those described in Sections~2 and 3 are applied. The color code is identical to the one used in Fig.~\ref{fig:Fig3}. The light and dark pink colors indicate the partial and full locking regions, respectively, where only one peak is detected in the optical spectrum of the injected mode. The grey color depicts the regions where two peaks are detected, reflecting the fact that both master and slave lasers are present in the optical spectrum and no locking occurs. Higher peak counts represent complex dynamical behavior in which more than two peaks are detected. By increasing $\kappa_T$ or $\Delta P_s$, both partial and full locking regions broaden, i.e. the higher the injection strength or the power balance ($\Delta P_s$), the wider the locking range. This feature coincides well with our experimental observations. \red{We did not observe any sign of hysteresis or multi-stability in the partial or full locking regions}. However, we also observe large regions of complex dynamics on the positive detuning side, clearly visible for all three suppression ratio configurations, which we have not observed experimentally. Although this dynamical region is consistent with the standard behavior of a single-mode laser subject to optical injection \cite{Wieczorek2005}, it was clearly missing in our experimental measurements. Experimentally, we only achieved a $\Delta P_s$ up to -6~dB and, in this case, the dynamical region appears at relatively low injection rates, mostly below those required for full locking. Since in Fig.~\ref{fig:Fig3}(a.3) full locking is observed for all injection strength values considered, we may suppose that these complex dynamics occur for weaker injections that were out of reach for our experimental setup. Nevertheless, a more detailed analysis of these cases might be required to clarify this point.
\begin{figure}[tb]
\centering\includegraphics[width=\linewidth]{fig5.pdf}
\caption{Numerical stability maps of the injected mode and frequency shift experienced by the un-injected mode for three different power ratios, $\Delta P_s$ of -40~dB (1, left), -6~dB (2, middle) and -1~dB (3, right). (Top, a) Detuning-injection strength maps for the injected mode. The dark pink and light pink colors illustrate the full and partial locking, respectively. The gray-shaded area indicates the detection of two peaks; thus, no locking is observed, but no dynamics are detected. Higher peak counts are associated with different types of complex dynamics. (b) Frequency shift experienced by the dominant/un-injected mode with respect to its free-running frequency, i.e. without optical injection of the suppressed mode. The blue-shaded area indicates the regions where dynamics are detected, and the data are thus discarded.}
\label{fig 5}
\end{figure}
In Fig.~\ref{fig 5}(b), we monitor the spectral evolution of the un-injected mode while the suppressed mode is under injection. As in the experiment, the un-injected mode experiences a frequency shift toward lower values. Both the magnitude of the shift and the detuning range in which it occurs are affected by the power ratio between the injected and un-injected mode. The detuning range in which the frequency shift occurs aligns perfectly with the full locking range of the injected mode. In addition, we detect complex dynamical behavior -- shown with blue shaded color -- which coincides with that observed in the optical spectrum of the injected mode.
To better understand the impact of the power ratio on the locking range of the injected mode, we focus on three different injection strengths, $\kappa_T=0.10$, $\kappa_T=0.18$, and $\kappa_T=0.25$, and identify the full and full+partial locking range as done experimentally. Again, we adjust $g_1$ to tune the suppression ratio $\Delta P_s$, and compute the locking range for each value, see Fig.~\ref{fig 6}(a). We observe that, again, the full locking range increases with increasing $\Delta P_s$. However, for the full+partial locking, a slightly different behavior is observed. The full+partial locking range decreases for $\Delta P_s$ higher than -10~dB or -5~dB for $\kappa_T=0.10$ and $\kappa_T=0.18$, respectively. This variation can be connected to the emergence of the complex dynamics region that appears on the positive detuning side of the locking range, as seen in Fig.~\ref{fig 5}(a.2) and (a.3). At higher injection rates, however, the same trend can still be observed.
To investigate the dependence of the frequency shift of the un-injected mode on the suppression ratio between the two modes of the DWL, we set the injection strength to $\kappa_T=0.2$. Next, we retrieve the frequency shift experienced by the un-injected mode at three different power ratios and over the whole detuning range, see Fig.~\ref{fig 6}(b). By reducing the power ratio between the two modes of the DWL, both the magnitude of the frequency shift and the detuning range in which it occurs increase. Overall, we again obtain a good agreement with the experimental observations, but with a few interesting discrepancies. The evolution of the frequency shift appears to be less linear than in the experiment and, even though the slope of the shift is again identical for all $\Delta P_s$ configurations, the slope is not close to one. This means that, unlike in the experiment, the injection modifies the wavelength separation between the two modes. This result suggests that the wavelength separation might not be as invariant as one could have supposed based on the experimental results.
To go further, and because all these features seem to be intrinsically linked with the coupling mechanism between the two modes of the DWL, we numerically analyze the impact of the cross-saturation parameter $\beta$. In Fig.~\ref{fig 6}(c), we plot the full locking range of the injected mode with respect to the suppression ratio $\Delta P_s$ for $\kappa_T=0.12$ and different $\beta$ values. Interestingly, we observe that, by increasing the cross-coupling between the injected and un-injected modes, the dependence of the full locking range on the suppression ratio is drastically reduced, to the point of being suppressed for $\beta=0.94$. The cross-saturation between the two modes of the DWL therefore seems to play a central role in the response of the DWL to optical injection.
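To give a rough feeling for how cross-saturation enters, the steady-state power balance of a toy two-mode intensity model can be computed as follows (a generic, heavily simplified sketch under our own normalization, with an artificial spontaneous-emission floor; it is not the rate-equation model used in this work):
\begin{verbatim}
import numpy as np

def suppression_ratio(g1, g2=2.0, beta=0.3, eps_sp=1e-4,
                      steps=50000, dt=1e-2):
    """Integrate toy two-mode intensity equations with cross-saturation
    beta to steady state and return Delta P_s = 10*log10(I1/I2) in dB."""
    I1, I2 = 1e-3, 1e-3
    for _ in range(steps):
        dI1 = I1 * (g1 / (1.0 + I1 + beta * I2) - 1.0) + eps_sp
        dI2 = I2 * (g2 / (1.0 + I2 + beta * I1) - 1.0) + eps_sp
        I1, I2 = I1 + dt * dI1, I2 + dt * dI2
    return 10.0 * np.log10(I1 / I2)
\end{verbatim}
In such a toy model, raising the gain of the weak mode (here $g_1$) moves $\Delta P_s$ toward 0~dB, while increasing $\beta$ deepens the suppression for the same $g_1$, mirroring the trends discussed above.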
\begin{figure}[tb]
\centering\includegraphics[width=\linewidth]{fig6.pdf}
\caption{Numerical analysis of the impact of the suppression ratio on the locking bandwidth of the injected mode and on the frequency shift experienced by the un-injected mode, and of the influence of the cross-coupling between the two modes of the DWL on the full locking bandwidth. (a) Locking bandwidth of the injected mode with respect to the power ratio between the injected and un-injected modes at $\kappa_T=0.10$ (blue), $\kappa_T=0.18$ (yellow), and $\kappa_T=0.25$ (purple). The full locking is shown by the solid lines and the dashed lines correspond to the full+partial locking. (b) Frequency shift of the un-injected mode for an injection rate $\kappa_T=0.2$ as a function of detuning at $\Delta P_s=-40$~dB (brown), $-6$~dB (red) and $-1$~dB (green). (c) Full locking range as a function of the suppression ratio for an injection rate of $\kappa_T=0.12$ and increasing cross-coupling values $\beta=0.9$ (black), 0.92 (brown), 0.93 (yellow) and 0.94 (pink).}
\label{fig 6}
\end{figure}
\section{Summary}
In this work, we have experimentally and numerically investigated the effect of optical injection on the suppressed mode of a dual-wavelength laser. We have highlighted an important dependence of the locking range on the suppression ratio between the two modes of the laser, along with a significant frequency shift of the un-injected mode. We obtained a good agreement between our experimental observations and modeling based on a rather simple rate-equation model including cross-saturation between the two modes. We have highlighted that this cross-saturation parameter might play a leading role in shaping the dynamical behavior of DWLs and, in particular, their response to optical injection. \red{The rate equation model including carrier-grating and cross-saturation effects appears to be sufficient to qualitatively reproduce the behavior observed experimentally. Yet, at this stage, we cannot fully dismiss the role of other coupling mechanisms even though this analysis is outside the scope of this paper.} Further investigations might be required to fully uncover and understand the mode coupling in multi-wavelength lasers.
\begin{backmatter}
\bmsection{Funding}
Research Foundation - Flanders (FWO) (grants 1530318N, G0G0319N), METHUSALEM program of the Flemish Government (Vlaamse Overheid), the European Research Council (grant 948129 COLOR-UP).
\bmsection{Disclosures}
The authors declare no conflicts of interest.
\bmsection{Data availability} Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
\section{Introduction}\label{Introduction}
\IEEEPARstart{I}{N} order to determine the position and clock offset of a user device (UD) in a wireless localization system, measurements such as time-of-arrival (TOA), angle-of-arrival (AOA), received signal strength (RSS) and a combination of them \cite{kuutti2018survey,wang2012novel,zhao2020closed,shi2019blas,shao2014efficient,luo2019novel,an2020distributed,coluccia2019hybrid,katwe2020nlos,tomic2019linear,zhao2021parn} have to be obtained with respect to anchor nodes (ANs) at known coordinates. Among these measurements, TOA has high accuracy and is used in a number of real-world applications such as global navigation satellite systems (GNSSs), ultra-wide band (UWB) indoor positioning, and smart vehicle autonomous navigation \cite{zafari2019survey,lu2019overview,zhao2014kalman,conti2019soft,wu2019coordinate}.
The TOA-based localization and synchronization techniques are usually categorized into one-way and two-way TOA schemes \cite{liu2007survey,guvenc2009survey}. In one-way TOA, the TOA measurements are obtained by recording the timestamps of the one-way signal transmission and reception based on the local clock sources of the AN and the UD \cite{yan2013review,shi2020sequential}. In two-way TOA, two one-way range measurements are obtained from the timestamps of each round-trip communication between the AN and the UD. Compared with the one-way TOA scheme, two-way TOA requires more communication exchanges but has better localization and synchronization accuracy due to the larger number of available TOA measurements \cite{zafari2019survey}.
In a two-way TOA system, a UD communicates with a number of ANs in a round-trip manner to obtain a sufficient number of TOA measurements for UD localization and synchronization \cite{bialer2016two}. This communication protocol is straightforward and easy to implement. Thus, it has been extensively studied, and a variety of two-way TOA localization and synchronization methods have been presented in the literature \cite{bialer2016two,gholami2016tw,lazzari2017numerical,zheng2010joint,vaghefi2015cooperative,zou2017joint,nevat2016location,tomic2018exact,yuan2016cooperative,yin2018gnss,gao2016robust}.
These previous studies all assume that the target or UD to be located is stationary. This assumption can hold in applications such as wireless sensor networks where all sensors are placed at fixed positions. However, for other dynamic applications such as drone navigation, smart vehicle control and personnel tracking, ignoring the UD motion will result in extra position and timing errors, which seriously degrade the performance of localization and synchronization.
In this article, we develop a new optimal two-way TOA localization and synchronization method for a moving UD with clock drift, namely TWLAS, which compensates for the localization and synchronization error caused by the UD motion and has higher accuracy than the conventional two-way TOA methods. Unlike existing two-way TOA methods, which do not take the UD motion into account, we formulate the localization and synchronization problem for a moving UD by modeling the UD motion with a constant velocity during a short period. We present an iterative algorithm for the TWLAS method. We derive the Cram\'er-Rao lower bound (CRLB) of the new TWLAS in the two cases with and without known UD velocity. We show that the TWLAS outperforms the conventional two-way TOA and one-way TOA methods in localization and synchronization accuracy. Numerical simulations show that the estimation accuracy of the TWLAS reaches the CRLB. For a moving UD, the new TWLAS method compensates for the localization and synchronization error caused by the UD motion and significantly outperforms the conventional two-way TOA method. All the numerical results are consistent with the theoretical analysis.
The rest of the article is organized as follows. In Section II, the TOA measurements and the UD motion and clock are modeled, and the two-way TOA localization problem is formulated. In Section III, the optimal localization method, namely TWLAS, as well as its iterative algorithm are proposed. The estimation error of the proposed TWLAS method is analyzed in Section IV. Numerical simulations are conducted to evaluate the performance of the TWLAS method in Section V. Section VI concludes this article.
Main notations are summarized in Table \ref{table_notation}.
\begin{table}[!t]
\caption{Notation List}
\label{table_notation}
\centering
\begin{tabular}{l p{5.5cm}}
\toprule
lowercase $x$& scalar\\
bold lowercase $\boldsymbol{x}$ & vector\\
bold uppercase $\bm{X}$ & matrix\\
$\Vert \boldsymbol{x} \Vert$ & Euclidean norm of a vector\\
$\Vert \boldsymbol{x}\Vert _{\bm{W}}^2$ & square of Mahalanobis norm, i.e., $\boldsymbol{x}^T\bm{W}\boldsymbol{x}$\\
$i$, $j$ & indices of variables\\
$[\boldsymbol{x}]_{i}$ &the $i$-th element of a vector\\
$\mathrm{tr}(\bm{X})$ & trace of a matrix\\
$[\bm{X}]_{i,:}$, $[\bm{X}]_{:,j}$ &the $i$-th row and the $j$-th column of a matrix, respectively\\
$[\bm{X}]_{i,j}$ &entry at the $i$-th row and the $j$-th column of a matrix\\
$[\bm{X}]_{i:m,j:n}$ &sub-matrix from the $i$-th to the $m$-th row and from the $j$-th to the $n$-th column of a matrix\\
$\mathbb{E}[\cdot]$ & expectation operator \\
$\mathrm{diag}(\cdot)$ & diagonal matrix with the elements inside\\
$M$ & number of ANs\\
$N$ & dimension of all the position and velocity vectors, i.e., $N=2$ in 2D case and $N=3$ in 3D case\\
$\bm{I}_M$ & $M\times M$ identity matrix\\
$\bm{O}_{M\times N}$ & $M\times N$ zero matrix\\
$\boldsymbol{0}_{M}$, $\boldsymbol{1}_{M}$& $M$-element vectors with all-zero and all-one elements, respectively\\
$\boldsymbol{p}_{i}$ & position vector of AN \#$i$\\
$\boldsymbol{p}$ & unknown position vector of UD\\
$\boldsymbol{v}$ & velocity vector of UD\\
$b$, $\omega$ & unknown UD clock offset and clock drift \\
$\boldsymbol{e}$, $\boldsymbol{l}$ & unit line-of-sight (LOS) vector from the UD to the AN at the UD transmission and reception time, respectively\\
$\delta t_i$ & interval between UD signal transmission and reception from AN \#$i$ \\
$c$ & propagation speed of the signal\\
$\rho_{i}$ & request-TOA measurement at AN \#$i$ upon AN reception of the request signal from the UD\\
$\tau_{i}$ & response-TOA measurement at the UD upon reception of the response signal from AN \#$i$\\
$\boldsymbol{\theta}$ & parameter vector\\
$\varepsilon$, $\sigma^2$ & Gaussian random error and variance\\
$\mathcal{F}$ & Fisher information matrix (FIM)\\
$\bm{W}$ & weighting matrix\\
$\bm{G}$ & design matrix\\
$\mu$& estimation bias\\
$\bm{Q}$ & estimation error variance matrix\\
\bottomrule
\end{tabular}
\end{table}
\section{Problem Formulation} \label{problem}
\subsection{Two-way TOA System Model}
In the two-way TOA system as shown in Fig. \ref{fig:systemfig}, there are $M$ ANs placed at known positions. The coordinate of AN \#$i$ is denoted by $\boldsymbol{p}_i$, $i=1,\cdots,M$. The ANs are all synchronous, i.e., the clock offset and drift between any AN pair are known. This can be achieved by conducting multiple communications between ANs \cite{shi2019blas}. The UD position, denoted by $\boldsymbol{p}$, and clock offset, denoted by $b$, are unknowns to be determined. Both $\boldsymbol{p}_i$ and $\boldsymbol{p}$ are $N$-dimensional ($N=2$ for 2D cases and $N=3$ for 3D cases), i.e., $\boldsymbol{p}_{i} \text{, } \boldsymbol{p} \in \mathbb{R}^{N}$.
As shown in Fig. \ref{fig:systemfig}, during a localization and synchronization period, the UD transmits the request signal and all ANs receive it. Thus, $M$ TOA measurements, namely request-TOA measurements, are formed at the AN ends. AN \#$i$ processes the received signal and then transmits the response signal, which the UD receives to form a TOA measurement, namely a response-TOA measurement. Once all $M$ ANs finish transmission, $M$ response-TOA measurements are formed at the UD end. The communication protocol between the UD and the ANs can be arranged differently, e.g., the UD can transmit to and then receive from one AN at a time, but such variations do not affect how the method proposed in Section \ref{locmethod} works.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{img/systemfigtwtoa.pdf}
\caption{Two-way TOA localization and synchronization system. The moving UD transmits the request signal, and all ANs receive it. $M$ request-TOA measurements are formed at the ANs. Then the ANs transmit the response signals sequentially to avoid collision. $M$ sequential response-TOA measurements are formed at the UD.
}
\label{fig:systemfig}
\end{figure}
\subsection{UD Motion and Clock Model}\label{clockandmovemodel}
The clock offset and drift of the UD with respect to the synchronous ANs are denoted by $b$ and $\omega$, respectively. Following the clock model in \cite{zou2017joint,hasan2018gnss}, we model the UD drift as a constant during a short period and the clock offset as the integration of the clock drift. It is expressed by
\begin{equation} \label{eq:clockbomega}
b(t_2) =b(t_1)+\omega(t_1)\cdot (t_2-t_1)\text{,}
\end{equation}
where $t_1$ and $t_2$ are two time instants close enough to ensure $\omega$ is constant during the interval.
We model the UD motion with a constant velocity. Specifically, we assume that the UD velocity, denoted by $\boldsymbol{v}$, remains stable during a short time period. Then, the UD motion is modeled as
\begin{equation} \label{eq:posvel}
\boldsymbol{p}(t_2) = \boldsymbol{p}(t_1) + \boldsymbol{v}(t_1) \cdot (t_2-t_1) \text{.}
\end{equation}
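For reference, the two models amount to a one-line state propagation (a minimal Python sketch with our own variable names):
\begin{verbatim}
import numpy as np

def propagate(p, v, b, omega, dt):
    """Constant-velocity, constant-drift propagation of the UD state
    over a short interval dt, per the clock and motion models above."""
    return p + v * dt, b + omega * dt
\end{verbatim}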
\subsection{Two-way TOA Localization and Synchronization}\label{lasproblem}
The UD transmits the request signal and the ANs receive it. The request-TOA measurement at AN \#$i$ ($i=1,\cdots,M$), denoted by $\rho_i$, equals the difference between the true signal propagation time and the clock offset, plus measurement noise. Therefore, we have
\begin{align} \label{eq:rhoANi}
\rho_i = t_{RX}^{(i)}- t_{TX}=
\frac{\left\Vert\boldsymbol{p}_i-\boldsymbol{p}\right\Vert}{c} -b+ \varepsilon_{i} \text{, } i=1,\cdots,M \text{,}
\end{align}
where $t_{RX}^{(i)}$ is the local reception time at AN \#$i$, $t_{TX}$ is the UD local transmission time of the request signal, $\boldsymbol{p}$ and $b$ are the UD position and clock offset at $t_{TX}$, respectively, $c$ is the signal propagation speed, and $\varepsilon_{i}$ is the measurement noise for AN \#$i$, following an independent zero-mean Gaussian distribution with a variance of $\sigma_{i}^2$, i.e., $\varepsilon_{i} \sim \mathcal{N}(0,\sigma_{i}^2)$. The Gaussian distribution for measurement noise is widely adopted in the literature \cite{zheng2010joint,vaghefi2015cooperative,zou2017joint,gao2016robust}. However, in practice, the measurements may deviate from this distribution due to interference such as impulse noise, which leads to large errors in the TOA measurements. Preprocessing measures can be taken to detect and remove these erroneous measurements to ensure a correct localization and synchronization result \cite{enosh2014outlier,van2016optimizing,xiao2013robust}.
After receiving the request signal, all ANs transmit the response signal in a sequential manner. The UD receives the response signal from AN \#$i$ and forms a response-TOA measurement, denoted by $\tau_i$. The interval from the UD transmission to the reception of the response signal from AN \#$i$ is denoted by $\delta t_i$. We use the UD states including position, velocity, clock offset and clock drift at the transmission instant to express $\tau_i$ as
\begin{align} \label{eq:tauANi}
\tau_i =t_{RX}-t_{TX}^{(i)}= \frac{\left\Vert\boldsymbol{p}_i-\boldsymbol{p}-\boldsymbol{v}\cdot\delta t_i\right\Vert}{c}+ &b+\omega\cdot\delta t_i + \varepsilon, \nonumber\\
&i=1,\cdots,M \text{,}
\end{align}
where $t_{RX}$ is the local reception time of the response signal at the UD, $t_{TX}^{(i)}$ is the local transmission time at AN \#$i$, $\boldsymbol{p}$, $b$ and $\omega$ are all at the instant of $t_{TX}$, and $\varepsilon$ is the measurement noise for the UD, following a zero-mean Gaussian distribution with a variance of $\sigma^2$, i.e., $\varepsilon \sim \mathcal{N}(0,\sigma^2)$.
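As a concrete illustration, both measurement sets can be simulated as follows (a sketch in range units, i.e., with all quantities pre-multiplied by $c$, so the clock offset is carried in meters and the drift contributes $c\,\omega\,\delta t_i$; the function and variable names are our own):
\begin{verbatim}
import numpy as np

C = 299792458.0  # signal propagation speed (m/s)

def simulate_toa(p_an, p, v, b_m, omega, dt, sigma=0.1, seed=0):
    """Request-TOA rho_i and response-TOA tau_i in range units (m).
    p_an: (M,N) AN positions; p, v: UD position and velocity at t_TX;
    b_m = c*b: clock offset in meters; omega: dimensionless drift;
    dt: (M,) intervals delta t_i from UD transmission to receptions."""
    rng = np.random.default_rng(seed)
    M = p_an.shape[0]
    rho = (np.linalg.norm(p_an - p, axis=1) - b_m
           + sigma * rng.standard_normal(M))
    tau = (np.linalg.norm(p_an - p - np.outer(dt, v), axis=1)
           + b_m + C * omega * dt + sigma * rng.standard_normal(M))
    return rho, tau
\end{verbatim}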
Based on the measurements given by (\ref{eq:rhoANi}) and (\ref{eq:tauANi}), the problem of localization and synchronization for the UD is to estimate the position $\boldsymbol{p}$ and the clock offset $b$ at the instant $t_{TX}$. The estimation method will be proposed in Section \ref{locmethod}.
\section{Optimal Two-way TOA Localization and Synchronization} \label{locmethod}
In this section, we will develop a maximum likelihood (ML) method, namely TWLAS, to achieve localization and synchronization for a moving UD. The iterative algorithm of the TWLAS will be presented as well.
\subsection{ML Estimator for Localization and Synchronization} \label{estimator}
The unknown parameters we are interested in for localization and synchronization are the UD position $\boldsymbol{p}$ and its clock offset $b$ at the instant $t_{TX}$. However, by observing (\ref{eq:tauANi}), we also need to handle the UD velocity $\boldsymbol{v}$ and clock drift $\omega$. In practice, the UD velocity can be obtained if the UD is stationary or is equipped with a motion sensor such as an inertial measurement unit. Therefore, we consider two cases, one with known UD velocity and the other without. For the former case, the unknown parameters to be estimated include $\boldsymbol{p}$, $b$, and $\omega$, while for the latter case, the unknown parameters are $\boldsymbol{p}$, $b$, $\boldsymbol{v}$ and $\omega$. Correspondingly, we design two modes for the TWLAS to deal with the two cases. The unknown parameter vector is
\begin{align} \label{eq:thetadef}
\boldsymbol{\theta}=\left\{
\begin{matrix}
\left[\boldsymbol{p}^T,b,\omega\right]^T, & \text{for Mode 1,}\\
\left[\boldsymbol{p}^T,b,\omega,\boldsymbol{v}^T\right]^T, &\text{for Mode 2.}
\end{matrix}
\right.
\end{align}
We note that the unknowns to be estimated in Mode 1 are the same as in the conventional two-way TOA methods, such as those presented in \cite{zheng2010joint,vaghefi2015cooperative,zou2017joint}. Therefore, the conventional two-way TOA method that estimates $\boldsymbol{p}$, $b$, and $\omega$, ignoring the UD motion, is a special case of Mode 1 when the UD is stationary. However, without employing the UD velocity, the conventional method will produce an uncompensated error for a moving UD, as shown in Section \ref{perfromancedV}.
The two-way TOA measurements are written in the collective form as
$$
\boldsymbol{\rho}=
\left[\rho_1,\cdots,\rho_M,\tau_1,\cdots,\tau_M\right]^T \text{.}
$$
The relation between the unknown parameters and the measurements is
\begin{equation} \label{eq:rhoandtheta}
\boldsymbol{\rho} = h(\boldsymbol{\theta}) + \boldsymbol{\varepsilon} \text{,}
\end{equation}
where based on (\ref{eq:rhoANi}) and (\ref{eq:tauANi}), the $i$-th row of the function $h(\boldsymbol{\theta})$ is
\begin{align} \label{eq:funtheta}
&\left[h(\boldsymbol{\theta})\right]_{i} = \nonumber\\
&\left\{
\begin{matrix}
\frac{\left\Vert\boldsymbol{p}_i-\boldsymbol{p}\right\Vert}{c}-b, & i=1,\cdots,M, \\
\frac{\left\Vert\boldsymbol{p}_{i-M}-\boldsymbol{p}-\boldsymbol{v}\cdot\delta t_{i-M}\right\Vert}{c}+b+\omega \cdot \delta t_{i-M},&i=M+1,\cdots,2M,
\end{matrix}
\right.
\end{align}
and $\boldsymbol{\varepsilon}=\left[\varepsilon_1,\cdots,\varepsilon_{M},\varepsilon\boldsymbol{1}_M^T\right]^T$ with $\boldsymbol{1}_M$ being an all-one $M$-vector.
According to the measurement model presented in the previous section, all the error terms are independently Gaussian distributed. The ML estimation of $\boldsymbol{\theta}$ is written as a weighted least squares (WLS) minimizer as
\begin{equation} \label{eq:MLminimizer}
\hat{\boldsymbol{\theta}}=\text{arg}\min\limits_{{\boldsymbol{\theta}}} \left\Vert\boldsymbol{\rho} - \mathit{h}({\boldsymbol{\theta}})\right\Vert_{\bm{W}}^2
\text{,}
\end{equation}
where $\hat{\boldsymbol{\theta}}$ is the estimator, and $\bm{W}$ is a diagonal positive-definite weighting matrix given by
\begin{equation} \label{eq:matW}
\bm{W}=
\left[
\begin{matrix}
\bm{W}_{\rho} & \bm{O}_{M\times M}\\
\bm{O}_{M\times M}& \bm{W}_{\tau}
\end{matrix}
\right]\text{,}
\end{equation}
in which $\bm{O}_{M\times M}$ is an $M\times M$ square matrix with all entries being zero, and
\begin{equation} \label{eq:matWrho}
\bm{W}_{\rho}=\mathrm{diag}\left(\frac{1}{\sigma_1^2},\cdots,\frac{1}{\sigma_M^2}\right) \text{,}
\end{equation}
\begin{equation} \label{eq:matWtau}
\bm{W}_{\tau}= \frac{1}{\sigma^2}\bm{I}_M\text{,}
\end{equation}
with $\bm{I}_M$ being an identity matrix.
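In code, $\bm{W}$ is simply a diagonal of inverse variances (sketch, same conventions as above):
\begin{verbatim}
import numpy as np

def weight_matrix(sigma_i, sigma):
    """Diagonal W: 1/sigma_i^2 for the M request-TOA rows followed by
    1/sigma^2 for the M response-TOA rows."""
    sigma_i = np.asarray(sigma_i, dtype=float)
    w = np.concatenate([1.0 / sigma_i**2,
                        np.full(len(sigma_i), 1.0 / sigma**2)])
    return np.diag(w)
\end{verbatim}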
\subsection{Iterative Localization and Synchronization Algorithm}
In order to solve the minimization problem given by (\ref{eq:MLminimizer}), we develop an iterative algorithm for the proposed TWLAS method, following the Gauss-Newton algorithm in \cite{kaplan2005understanding,huang2015dilution,zhao2017priori}. We conduct a Taylor series expansion of (\ref{eq:rhoandtheta}) at the estimate point of
$$
\check{\boldsymbol{\theta}}=\left\{
\begin{matrix}
\left[\check{\boldsymbol{p}}^T,\check{b},\check{\omega}\right]^T & \text{for Mode 1,}\\
\left[\check{\boldsymbol{p}}^T,\check{b},\check{\omega},\check{\boldsymbol{v}}^T\right]^T &\text{for Mode 2,}
\end{matrix}
\right.
$$
where $\check{\boldsymbol{p}}$, $\check{b}$, $\check{\omega}$, and $\check{\boldsymbol{v}}$ are estimates of $\boldsymbol{p}$, $b$, ${\omega}$, and ${\boldsymbol{v}}$, respectively. We keep the first-order term, ignore the higher-order terms, and then have
\begin{equation} \label{eq:GNtaylor}
\boldsymbol{\rho} = \mathit{h}(\check{\boldsymbol{\theta}}) + \check{\bm{G}}\cdot \Delta\boldsymbol{\theta}+\boldsymbol\varepsilon \text{,}
\end{equation}
where $\check{\bm{G}}$ is the estimate of the design matrix
$$\bm{G}=\frac{\partial \mathit{h}(\boldsymbol{\theta})}{\partial \boldsymbol{\theta}} \text{,}$$
$
\check{\bm{G}}=\frac{\partial \mathit{h}(\boldsymbol{\theta})}{\partial \boldsymbol{\theta}}|_{\boldsymbol{\theta}=\check{\boldsymbol{\theta}}}
$,
and $\Delta\boldsymbol{\theta}$ is the error vector given by
$\Delta\boldsymbol{\theta} = \boldsymbol{\theta}-\check{\boldsymbol{\theta}} \text{.}
$
The design matrices for the two modes of the TWLAS, denoted by $\bm{G}_{\text{Mode 1}}$ and $\bm{G}_{\text{Mode 2}}$, respectively, are
\begin{align} \label{eq:matG}
\bm{G}=\left\{
\begin{matrix}
\bm{G}_{\text{Mode 1}}=\left[
\begin{matrix}
\bm{G}_0 & \boldsymbol{0}_M\\
\bm{G}_1 & [\delta t_1,\cdots,\delta t_M]^T
\end{matrix}
\right] \text{,} &\text{for Mode 1,}\\
\bm{G}_{\text{Mode 2}}=\left[
\begin{matrix}
\bm{G}_0 & \bm{O}_{M\times(N+1)}\\
\bm{G}_1 & \bm{G}_2
\end{matrix}
\right]
\text{,} &\text{for Mode 2,}
\end{matrix}
\right.
\end{align}
where
$$
\bm{G}_0=\left[
\begin{matrix}
-\boldsymbol{e}_1^T & -1 \\
\vdots & \vdots \\
-\boldsymbol{e}_M^T & -1
\end{matrix}
\right] \text{,}\;
\bm{G}_1=\left[
\begin{matrix}
-\boldsymbol{l}_1^T & 1 \\
\vdots & \vdots \\
-\boldsymbol{l}_M^T & 1
\end{matrix}
\right] \text{,}
$$
$$
\bm{G}_2=\left[
\begin{matrix}
\delta t_1 & -\boldsymbol{l}_1^T\delta t_1\\
\vdots & \vdots \\
\delta t_M & -\boldsymbol{l}_M^T\delta t_M
\end{matrix}
\right]\text{,}
$$
with $\boldsymbol{e}_i$ representing the unit line-of-sight (LOS) vector from the UD to AN \#$i$ at the time instant of UD transmission,
\begin{equation} \label{eq:rhoLOS}
\boldsymbol{e}_{i}=\frac{\boldsymbol{p}_i - {\boldsymbol{p}} }{\Vert \boldsymbol{p}_i - {\boldsymbol{p}}\Vert } , i=1,\cdots,M,
\end{equation}
and
$\boldsymbol{l}_i$ representing the unit LOS vector from the UD to AN \#$i$ at the UD reception time as
\begin{equation} \label{eq:tauLOS}
\boldsymbol{l}_i=\frac{\boldsymbol{p}_i - {\boldsymbol{p}}-{\boldsymbol{v}}\cdot \delta t_i}{\Vert \boldsymbol{p}_i - {\boldsymbol{p}}-{\boldsymbol{v}} \cdot\delta t_i\Vert }, i=1,\cdots,M\text{,}
\end{equation}
and $\boldsymbol{0}_M$ is an all-zero $M$-vector.
The residual vector is denoted by $\boldsymbol{r}$,
\begin{equation}\label{eq:residual}
\boldsymbol{r} = \boldsymbol{\rho} - \mathit{h}(\check{\boldsymbol{\theta}})= \check{\bm{G}} \cdot \Delta\boldsymbol{\theta}+\boldsymbol\varepsilon \text{.}
\end{equation}
We denote the WLS estimate of the error vector $\Delta\boldsymbol{\theta}$ by $\Delta \check{\boldsymbol{\theta}}$, and have
\begin{equation} \label{eq:leastsquare}
\Delta\check{\boldsymbol{\theta}}=(\check{\bm{G}}^T\bm{W}\check{\bm{G}})^{-1}\check{\bm{G}}^T\bm{W}\boldsymbol{r} \text{.}
\end{equation}
The unknown parameter vector to be estimated is thereby updated iteratively by
\begin{equation} \label{eq:esttheta}
\check{\boldsymbol{\theta}} \leftarrow \check{\boldsymbol{\theta}} + \Delta \check{\boldsymbol{\theta}} \text{.}
\end{equation}
We then update the matrix $\check{\bm{G}}$ and the residual $\boldsymbol{r}$ using the estimated parameter from (\ref{eq:esttheta}) iteratively until convergence. The iterative procedure is given by Algorithm 1.
The noise variances in the weighting matrix $\bm{W}$ are treated as known in the proposed method, similar to what is done in \cite{zheng2010joint,vaghefi2015cooperative, gao2016robust}. However, in practice, we need to take some measures to obtain the noise variance. For example, we can collect the TOA measurement data before the devices are put in use and identify the noise variance by fitting the collected data. Then the identified value can be used in the real applications. The study in \cite{coluccia2019hybrid} provides an effective way to estimate the TOA measurement noise through a calibration step using the TOA messages between ANs.
Note that the proposed iterative algorithm requires a proper initial guess to guarantee convergence to the correct solution. In practice, some prior knowledge such as a rough estimate or the previous value of the UD position can be used as the initial guess. We will evaluate the dependence of the algorithm on the initial parameter value in the next section.
\begin{algorithm}
\caption{Iterative TWLAS Algorithm}
\begin{algorithmic}[1]
\State Input: TOA measurements $\boldsymbol{\rho}$ and weighting matrix $\bm{W}$, ANs' positions $\boldsymbol{p}_i$, $i=1,\cdots,M$, UD velocity $\boldsymbol{v}$ (for Mode 1), initial parameter estimate $\check{\boldsymbol{\theta}}_{0}$, maximum iteration count $iter$, and convergence threshold $thr$
\For {$k=1:iter$}
\State Compute LOS vectors $\boldsymbol{e}_{i}$ and $\boldsymbol{l}_{i}$, $i=1,\cdots,M$, based on (\ref{eq:rhoLOS}) and (\ref{eq:tauLOS})
\State Calculate residual $\boldsymbol{r}$ using (\ref{eq:residual})
\State Form design matrix $\check{\bm{G}}$ based on (\ref{eq:matG})
\State Calculate estimated error $\Delta \check{\boldsymbol{\theta}}$ using (\ref{eq:leastsquare})
\State Update parameter estimate $\check{\boldsymbol{\theta}}_{k} = \check{\boldsymbol{\theta}}_{k-1} + \Delta \check{\boldsymbol{\theta}}$
\If {$\left\Vert [\Delta \check{\boldsymbol{\theta}}]_{1:N} \right\Vert<thr$}
\State Exit \textbf{for} loop
\EndIf
\EndFor
\State Output: $\check{\boldsymbol{\theta}}_{k}$
\end{algorithmic}
\end{algorithm}
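A compact Mode 2 implementation of Algorithm 1 might look as follows (a sketch in the same range units as before; Mode 1 is obtained by fixing $\boldsymbol{v}$ and dropping its columns from the design matrix; all names are our own):
\begin{verbatim}
import numpy as np

def twlas_gn(p_an, dt, rho, tau, W, x0, iters=10, thr=0.01):
    """Gauss-Newton TWLAS, Mode 2: x = [p (N), b, w, v (N)] with b in
    meters and w = c*omega in m/s. Iterates x <- x + (G'WG)^{-1}G'W r."""
    M, N = p_an.shape
    x = np.asarray(x0, dtype=float).copy()
    meas = np.concatenate([rho, tau])
    for _ in range(iters):
        p, b, w, v = x[:N], x[N], x[N + 1], x[N + 2:]
        d_tx = p_an - p                       # UD -> ANs at transmission
        d_rx = p_an - p - np.outer(dt, v)     # UD -> ANs at receptions
        e = d_tx / np.linalg.norm(d_tx, axis=1, keepdims=True)
        l = d_rx / np.linalg.norm(d_rx, axis=1, keepdims=True)
        h = np.concatenate([np.linalg.norm(d_tx, axis=1) - b,
                            np.linalg.norm(d_rx, axis=1) + b + w * dt])
        G = np.zeros((2 * M, 2 * N + 2))
        G[:M, :N], G[:M, N] = -e, -1.0        # request-TOA rows
        G[M:, :N], G[M:, N], G[M:, N + 1] = -l, 1.0, dt
        G[M:, N + 2:] = -l * dt[:, None]      # response-TOA rows
        r = meas - h
        dx = np.linalg.solve(G.T @ W @ G, G.T @ W @ r)
        x += dx
        if np.linalg.norm(dx[:N]) < thr:      # convergence test
            break
    return x
\end{verbatim}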
\section{Estimation Error Analysis} \label{locanalysis}
\subsection{CRLB Analysis}\label{CRLBanalysis}
CRLB is the lower bound for the covariance of an unbiased estimator. It is calculated from the inverse of the Fisher information matrix (FIM) as given by
\begin{equation} \label{eq:CRLBFisher}
\mathsf{CRLB}([\boldsymbol{\theta}]_i)=[\mathcal{F}^{-1}]_{i,i} \text{,}
\end{equation}
where $\mathcal{F}$ is the FIM, $[\cdot]_{i}$ represents the $i$-th element of a vector, and $[\cdot]_{i,j}$ represents the entry at the $i$-th row and the $j$-th column of a matrix.
The FIM of the TWLAS is written by
\begin{equation} \label{eq:FisherExpectation}
\mathcal{F}=\left(\frac{\partial h(\boldsymbol{\theta})}{\partial \boldsymbol{\theta}}\right)^T\bm{W}\frac{\partial h(\boldsymbol{\theta})}{\partial \boldsymbol{\theta}}= \bm{G}^T\bm{W}\bm{G} \text{.}
\end{equation}
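Numerically, the bound follows directly from $\bm{G}$ and $\bm{W}$ (sketch):
\begin{verbatim}
import numpy as np

def crlb_diag(G, W):
    """CRLB of each parameter: diagonal of the inverse FIM G'WG."""
    return np.diag(np.linalg.inv(G.T @ W @ G))
\end{verbatim}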
The parameters to be estimated for the two modes of the TWLAS, as defined by (\ref{eq:thetadef}) in Section \ref{estimator}, are different. We denote the FIMs of the two modes by $\mathcal{F}_{\text{Mode 1}}$ for Mode 1 and $\mathcal{F}_{\text{Mode 2}}$ for Mode 2, respectively. The terms that relate to localization and synchronization accuracy are the diagonal elements in the top-left $(N+1)\times(N+1)$ sub-matrix of the inverse of the FIMs, i.e., $\left[\mathcal{F}_{\text{Mode 1}}^{-1}\right]_{i,i}$ and $\left[\mathcal{F}_{\text{Mode 2}}^{-1}\right]_{i,i}$, $i=1,\cdots,N+1$. We have the following theorem.
\begin{theorem}\label{theorem0}
The localization and synchronization accuracy of TWLAS Mode 1 is higher than that of Mode 2, i.e.,
\begin{equation} \label{eq:Fisher1and2}
\left[\mathcal{F}_{\text{Mode 1}}^{-1}\right]_{i,i}<\left[\mathcal{F}_{\text{Mode 2}}^{-1}\right]_{i,i}, i=1,\cdots,N+1 \text{.}
\end{equation}
\end{theorem}
\textit{Proof.} See Appendix \ref{Appendix1}.
\textbf{Remark 1}: For TWLAS Mode 1, the UD motion, i.e., $\boldsymbol{v}$ is known and we only need to estimate the UD position, clock offset and drift. This helps Mode 1 to achieve better accuracy than Mode 2.
\subsection{Comparison with Conventional Two-way TOA Method} \label{compareold}
Different from the proposed TWLAS method, the conventional two-way TOA method presented in \cite{zheng2010joint,vaghefi2015cooperative,zou2017joint}, namely CTWLAS, ignores the UD movement and only estimates the UD position and clock parameters, i.e., $\boldsymbol{p}$, $b$, and $\omega$. For a moving UD, there will be estimation errors. We denote the estimation bias and the root mean square error (RMSE) of the CTWLAS by $\boldsymbol{\mu}_C$ and $RMSE_C$, respectively. They are given by
\begin{equation}\label{eq:dPvsdVold}
{\boldsymbol{\mu}}_C=(\bm{G}_{C}^T\bm{W}{\bm{G}_C})^{-1}\bm{G}_C^T\bm{W}{\boldsymbol{r}_C} \text{,}
\end{equation}
and
\begin{align} \label{eq:rmseold}
RMSE_C =\sqrt{\Vert\boldsymbol{\mu}_C\Vert^2+\mathrm{tr}\left(\left(\bm{G}_C^T\bm{W}{\bm{G}_C}\right)^{-1}\right)} \text{,}
\end{align}
where $\bm{G}_C=\bm{G}_\text{Mode 1}$, and
\begin{align}\label{eq:resold}
{\boldsymbol{r}}_C
=\left[
\begin{matrix}
\boldsymbol{0}_M\\
\left\Vert \boldsymbol{p}_1 - \boldsymbol{p} \right\Vert -\left\Vert \boldsymbol{p}_1 - {\boldsymbol{p}} - {\boldsymbol{v}} \delta t_1\right\Vert \\
\vdots\\
\left\Vert \boldsymbol{p}_M - \boldsymbol{p} \right\Vert -\left\Vert \boldsymbol{p}_M - {\boldsymbol{p}} - {\boldsymbol{v}} \delta t_M\right\Vert
\end{matrix}
\right]
\text{.}
\end{align}
We can see that the term $\Vert\boldsymbol{\mu}_C\Vert^2$ is the extra error of the CTWLAS caused by the UD velocity. With increasing $\Vert\boldsymbol{v}\Vert$ and $\delta t_i$, the estimation error grows. Therefore, for a moving UD, the localization and synchronization error of the CTWLAS is larger than that of Mode 1 of the proposed TWLAS method.
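These expressions can be evaluated directly; a sketch of the bias computation (same range-unit conventions and naming assumptions as before):
\begin{verbatim}
import numpy as np

def ctwlas_bias(G_c, W, p_an, p, v, dt):
    """mu_C = (G'WG)^{-1} G'W r_C, where r_C is zero for the request-TOA
    rows and equals the motion-induced range difference for the
    response-TOA rows."""
    M = p_an.shape[0]
    r_c = np.concatenate([
        np.zeros(M),
        np.linalg.norm(p_an - p, axis=1)
        - np.linalg.norm(p_an - p - np.outer(dt, v), axis=1)])
    return np.linalg.solve(G_c.T @ W @ G_c, G_c.T @ W @ r_c)
\end{verbatim}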
\subsection{Comparison with Conventional One-way TOA Method}
In order to obtain some insights into the estimation performance of the TWLAS Mode 2, we compare it with the commonly adopted conventional one-way TOA localization and synchronization method \cite{foy1976position,kaplan2005understanding}, namely OWLAS. The OWLAS only uses half of the measurements compared with the TWLAS. The unknown parameters to be estimated by the OWLAS include the position $\boldsymbol{p}$ and the clock offset $b$ only, also fewer than the TWLAS. Thus, it is not straightforward to obtain an intuition on which method has better estimation accuracy. We note that $\bm{G}_0$ equals the design matrix of the conventional OWLAS method. We denote the FIM of the OWLAS by $\mathcal{F}_\text{OWLAS}$, and have
\begin{equation} \label{eq:Fisheroneway}
\mathcal{F}_{\text{OWLAS}}= \bm{G}_0^T\bm{W}_{\rho}\bm{G}_0\text{.}
\end{equation}
\begin{theorem}\label{theorem1}
The localization and synchronization accuracy of TWLAS Mode 2 is higher than or equal to that of the conventional one-way TOA localization (OWLAS), i.e.,
\begin{equation} \label{eq:Fisher2andOW}
\left[\mathcal{F}_{\text{Mode 2}}^{-1}\right]_{i,i}\leq \left[\mathcal{F}_{\text{OWLAS}}^{-1}\right]_{i,i},i=1,\cdots,N+1 \text{.}
\end{equation}
\end{theorem}
\textit{Proof.} See Appendix \ref{Appendix2}.
\textbf{Remark 2}: In the case that the UD receives the response signals from all the ANs simultaneously, the equality in (\ref{eq:Fisher2andOW}) holds, as shown in Appendix \ref{Appendix2}, i.e., the estimation accuracy of the TWLAS Mode 2 is the same as that of the conventional OWLAS. However, in practice, for consumer-level devices such as IoT systems, a proper communication protocol should be designed to avoid possible over-the-air collisions of such concurrent signals and to reduce the power consumption and signal-processing complexity of the UD.
According to Theorems \ref{theorem0} and \ref{theorem1}, we have shown that the proposed TWLAS method has better estimation accuracy than that of the conventional OWLAS method.
\subsection{Estimation Error of Mode 1 caused by Deviated UD Velocity} \label{deviatedV}
In real-world applications, the obtained UD velocity information may not be accurate. For Mode 2 of the TWLAS, the position and clock offset of the UD are estimated regardless of the UD velocity. Thus, the inaccurate UD velocity information does not influence the estimation error of Mode 2. However, this inaccuracy will cause localization and synchronization error to the TWLAS Mode 1 as will be analyzed in this sub-section.
We denote the obtained UD velocity by $\tilde{\boldsymbol{v}}$. The deviation from the true velocity $\boldsymbol{v}$ is denoted by $\Delta \boldsymbol{v}=\tilde{\boldsymbol{v}}-\boldsymbol{v}$. The measurement error vector caused by the deviated velocity is denoted by $\tilde{\boldsymbol{r}}$ and is
\begin{align}\label{eq:resdV}
\tilde{\boldsymbol{r}}
=\left[
\begin{matrix}
\boldsymbol{0}_M\\
\left\Vert \boldsymbol{p}_1 - \boldsymbol{p} - \tilde{\boldsymbol{v}} \delta t_1\right\Vert -\left\Vert \boldsymbol{p}_1 - {\boldsymbol{p}} - {\boldsymbol{v}} \delta t_1\right\Vert \\
\vdots\\
\left\Vert \boldsymbol{p}_M - \boldsymbol{p} - \tilde{\boldsymbol{v}} \delta t_M\right\Vert -\left\Vert \boldsymbol{p}_M - {\boldsymbol{p}} - {\boldsymbol{v}} \delta t_M\right\Vert
\end{matrix}
\right]
\text{.}
\end{align}
We denote the estimation bias by $\tilde{\boldsymbol{\mu}}$ and have
\begin{equation}\label{eq:dPvsdV}
\tilde{\boldsymbol{\mu}}=(\tilde{\bm{G}}^T\bm{W}\tilde{\bm{G}})^{-1}\tilde{\bm{G}}^T\bm{W}\tilde{\boldsymbol{r}} \text{,}
\end{equation}
where
$\tilde{\bm{G}}=\bm{G}_\text{Mode 1}$.
We then come to
\begin{equation} \label{eq:mudvsquare}
\Vert\tilde{\boldsymbol{\mu}}\Vert^2=\tilde{\boldsymbol{r}}^T\bm{S}^T\bm{S}\tilde{\boldsymbol{r}} \text{,}
\end{equation}
where $\bm{S}=(\tilde{\bm{G}}^T\bm{W}\tilde{\bm{G}})^{-1}\tilde{\bm{G}}^T\bm{W}$.
The RMSE, denoted by $\widetilde{RMSE}$, is
\begin{equation} \label{eq:RMSEdV}
\widetilde{RMSE} =\sqrt{\Vert\tilde{\boldsymbol{\mu}}\Vert^2+\mathrm{tr}\left({\bm{Q}}\right)} \text{,}
\end{equation}
where $\bm{Q}$ is the estimation error variance and $\bm{Q}=\left(\bm{G}^T\bm{W}\bm{G}\right)^{-1}$.
The estimated position error, denoted by $\widetilde{RMSE}_p$, and the clock offset error, denoted by $\widetilde{RMSE}_b$, are
\begin{align} \label{eq:perrorvsv}
\widetilde{RMSE}_p =\sqrt{\left\Vert[\tilde{\boldsymbol{\mu}}]_{1:N}\right\Vert^2+\mathrm{tr}\left([{\bm{Q}}]_{1:N,1:N}\right)}\text{,}
\end{align}
and
\begin{align}\label{eq:berrorvsv}
\widetilde{RMSE}_b =\sqrt{[\tilde{\boldsymbol{\mu}}]_{N+1}^2+[{\bm{Q}}]_{N+1,N+1}}\text{.}
\end{align}
We note that when $\tilde{\boldsymbol{v}}=\boldsymbol{0}$, Mode 1 reduces to the conventional method that only estimates $\boldsymbol{p}$, $b$, and $\omega$, such as presented in \cite{zheng2010joint,vaghefi2015cooperative,zou2017joint}. The errors of TWLAS Mode 1 given by (\ref{eq:perrorvsv}) and (\ref{eq:berrorvsv}) are then the same as those of the CTWLAS when applied to a moving UD. Therefore, the conventional CTWLAS method that only estimates $\boldsymbol{p}$, $b$, and $\omega$ can be considered a special case of Mode 1 of the TWLAS method with the velocity input set to zero, which is exact when the UD is stationary.
\section{Numerical Evaluation} \label{simulation}
\subsection{Simulation Settings}
We conduct numerical simulations to evaluate the localization and synchronization performance of the proposed TWLAS method. We compute the RMSE of the position and clock offset estimation results as given by
\begin{align} \label{eq:RMSEpos}
\text{Position RMSE}&=\sqrt{\frac{1}{N_s}\sum_{1}^{N_s}\Vert\boldsymbol{p}-\hat{\boldsymbol{p}}\Vert^2} \text{,}
\end{align}
and
\begin{align} \label{eq:RMSEclockb}
\text{Clock offset RMSE}&=\sqrt{\frac{1}{N_s}\sum_{1}^{N_s}\left(b-\hat{b}\right)^2} \text{,}
\end{align}
where $N_s$ is the total number of positioning result samples from the simulation, and $\hat{\boldsymbol{p}}$ and $\hat{b}$ are the localization and synchronization results, respectively, given by the proposed algorithm. We use the CRLB as an accuracy metric for comparison.
We create a simulation scene to evaluate the performance of the proposed TWLAS method. Four ANs are placed at the four corners of a 600 m$\times$600 m square area. The coordinates of the four ANs are AN \#1 $(-300, -300)$ m, AN \#2 $(-300, 300)$ m, AN \#3 $(300, 300)$ m, and AN \#4 $(300, -300)$ m, respectively. The moving UD is randomly placed inside the square area with the four vertices $(-250, -250)$ m, $(-250, 250)$ m, $(250, 250)$ m, and $(250, -250)$ m.
At each simulation run, the UD transmits the request signal and the ANs receive it to form the request-TOA measurements. Then the ANs transmit the response signals in a sequential manner. The UD receives the response signal from AN \#1 after a 10 ms delay and then receives the response signals from each of the remaining ANs at 10 ms increments. The initial values of the UD clock offset and drift are set randomly at the start of each simulation run. The clock offset is drawn from the uniform distribution $b\sim \mathcal{U}(-1,1)$ s, which is a large clock offset range for localization and synchronization. The clock drift is selected from $\omega\sim \mathcal{U}(-10,10)$ parts per million (ppm), which is at the level of the frequency stability of a commonly used temperature-compensated crystal oscillator (TCXO). The UD velocity at each simulation run is randomly selected, with its norm $\Vert\boldsymbol{v}\Vert$ drawn from $\mathcal{U}(0,50)$ m/s and the direction angle drawn from $\mathcal{U}(0,2\pi)$. The clock and motion of the UD evolve during each simulated period based on the clock model and motion model given by (\ref{eq:clockbomega}) and (\ref{eq:posvel}), respectively.
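The per-run random draws can be reproduced as follows (a sketch of the sampling only; 2D case):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng()
b0 = rng.uniform(-1.0, 1.0)             # clock offset (s)
omega0 = rng.uniform(-10e-6, 10e-6)     # clock drift (+/- 10 ppm)
speed = rng.uniform(0.0, 50.0)          # UD speed (m/s)
ang = rng.uniform(0.0, 2.0 * np.pi)     # heading angle
v0 = speed * np.array([np.cos(ang), np.sin(ang)])
dt = 0.01 * np.arange(1, 5)             # response delays: 10..40 ms
\end{verbatim}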
For the iterative TWLAS algorithm, we set the maximum iteration count to $iter=10$ and the convergence threshold to $thr=\sigma/10$. In other words, the algorithm stops when the number of iterations reaches 10 or the norm of the parameter error $\left\Vert[\Delta\check{\boldsymbol{\theta}}]_{1:N}\right\Vert$ becomes smaller than $\sigma/10$.
\subsection{Localization and Synchronization Performance}
The TOA measurement noise levels $\sigma$ and $\sigma_i$ are set identical. They both vary from 0.01 m to 10 m in 6 steps. At each step, 40,000 Monte-Carlo simulation runs are performed to generate the random UD position and motion, the clock offset and drift, and the TOA measurements. The simulated data are input to the iterative TWLAS algorithm proposed in Section \ref{locmethod}. The initial parameter $\check{\boldsymbol{\theta}}_{0}$ of the iterative algorithm is set to a random point on the circumference of a 50-m radius circle centered at the true position. The weighting matrix $\bm{W}$ is set using the values of the measurement noise variances $\sigma^2$ and $\sigma_i^2$ based on \eqref{eq:matW}.
The UD position estimation errors of the two modes of the TWLAS method are shown in Fig. \ref{fig:presult}, together with their respective CRLBs. We can see that the position errors of both modes reach the theoretical lower bounds, showing their optimality. We also use the conventional one-way TOA localization method (OWLAS) \cite{foy1976position,kaplan2005understanding} to generate the localization error for comparison. Both modes of the TWLAS outperform the OWLAS, consistent with the theoretical analysis in Section \ref{locanalysis}. The clock offset estimation errors of the TWLAS are shown in Fig. \ref{fig:bresult}. Similar to Fig. \ref{fig:presult}, both modes of the TWLAS outperform the OWLAS. The conventional two-way TOA method (CTWLAS) is not compared here because it has a larger estimation error for a moving UD, as will be shown in Section \ref{mode2andold}. The results verify the theoretical analysis presented in Section \ref{locanalysis}.
\begin{figure}
\centering
\includegraphics[width=0.99\linewidth]{img/perrorvsnoise.pdf}
\caption{Localization error vs. measurement noise. The position estimation errors of the two modes of the proposed TWLAS method reach CRLB. The accuracy of Mode 1 is better than that of Mode 2. Both modes of the TWLAS method outperform the conventional one-way TOA method (OWLAS).
}
\label{fig:presult}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.99\linewidth]{img/berrorvsnoise.pdf}
\caption{Synchronization error vs. measurement noise. The clock offset estimation errors of both two modes of the proposed TWLAS method reach their respective CRLBs. The error of Mode 1 is smaller than that of Mode 2. Both modes outperform the conventional one-way TOA method (OWLAS).
}
\label{fig:bresult}
\end{figure}
We also plot the localization and synchronization errors versus the UD speed in Fig. \ref{fig:presultv} and Fig. \ref{fig:bresultv}, respectively, with a fixed measurement noise $\sigma=0.1$ m, which is at the level of a UWB localization device \cite{shi2019blas}. We can see that the estimation errors of both modes are insensitive to the UD speed, showing the superiority of the TWLAS method in localization and synchronization for a moving UD. The estimation accuracy of Mode 1 with an accurately known UD velocity is better than that of Mode 2, consistent with the error analysis presented in Section \ref{locanalysis}.
\begin{figure}
\centering
\includegraphics[width=0.99\linewidth]{img/perrorvsvnorm.pdf}
\caption{Localization error vs. norm of UD velocity. The UD velocity information for Mode 1 equals the true velocity. The measurement noise is $\sigma=0.1$ m. The localization errors of both modes reach the CRLB. The accuracy of Mode 1 is better than that of Mode 2 due to the aiding information of the UD velocity. The estimation errors of both modes are insensitive to the UD motion.
}
\label{fig:presultv}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.99\linewidth]{img/berrorvsvnorm.pdf}
\caption{Synchronization error vs. norm of UD velocity. The UD velocity information for Mode 1 is the true velocity. The measurement noise is $\sigma=0.1$ m. The clock offset estimation errors of both modes reach their respective CRLBs. The accuracy of Mode 1 is better than that of Mode 2 due to the aiding information of the UD velocity. The clock offset estimation errors of both modes are insensitive to the UD motion.
}
\label{fig:bresultv}
\end{figure}
\subsection{Comparison between TWLAS Mode 1 and CTWLAS} \label{mode2andold}
We investigate the estimation accuracy of TWLAS Mode 1 in comparison with the conventional two-way TOA method (CTWLAS) that only estimates the UD position and clock parameters, i.e., $\boldsymbol{p}$, $b$, and $\omega$ \cite{zheng2010joint,vaghefi2015cooperative,zou2017joint}. We vary the norm of the UD velocity from 0 m/s to 50 m/s with a step of 10 m/s and perform 40,000 simulation runs at each step. The direction angle is randomly drawn from $\mathcal{U}(0,2\pi)$. The measurement noise is set to $\sigma =\sigma_i=0.1$ m. The interval between successive TOA measurements is set to $\delta t=$ 5, 10, and 20 ms, and simulations are run for each interval. The simulated TOA measurements are input to both TWLAS Mode 1 and the CTWLAS.
The localization and synchronization errors of both methods are shown in Fig. \ref{fig:presultvold} and Fig. \ref{fig:bresultvold}, respectively. We can see that the estimation error of the CTWLAS grows as the UD velocity increases, while that of the proposed TWLAS Mode 1 remains stable regardless of the UD velocity. In addition, the estimation error of the CTWLAS increases with a larger interval $\delta t$. Since the position and clock offset errors of the TWLAS remain the same for all intervals, we only show one TWLAS curve in each figure. The results are consistent with the theoretical analysis presented in Section \ref{compareold} and show the superior performance of the TWLAS over the CTWLAS in localization and synchronization for a moving UD.
\begin{figure}
\centering
\includegraphics[width=0.99\linewidth]{img/perrorvsvnormoldmulti.pdf}
\caption{Localization error comparison between TWLAS Mode 1 and the CTWLAS. The measurement noise is $\sigma=0.1$ m. The localization error of the CTWLAS increases with the growing UD velocity and larger interval $\delta t$, consistent with the theoretical analysis. The error from TWLAS Mode 1 remains stable, showing its superiority.
}
\label{fig:presultvold}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.99\linewidth]{img/berrorvsvnormoldmulti.pdf}
\caption{Synchronization error comparison between TWLAS Mode 1 and the CTWLAS. The measurement noise is $\sigma=0.1$ m. The estimated clock offset error of the CTWLAS increases with the growing UD velocity and larger interval $\delta t$, consistent with the theoretical analysis. TWLAS Mode 1 remains stable when the UD velocity increases, showing its superiority over the CTWLAS.
}
\label{fig:bresultvold}
\end{figure}
\subsection{Performance of TWLAS Mode 1 with Deviated UD Velocity} \label{perfromancedV}
In the case when the known UD velocity deviates from the true value, we evaluate the impact of the deviation on the localization and synchronization error of Mode 1 of the proposed TWLAS method. In the simulation, we vary the norm of the UD velocity deviation $\Vert\Delta \boldsymbol{v}\Vert$ from 0 m/s to 20 m/s with a step of 4 m/s. We run 40,000 simulations at each step. For each simulation run, the direction of the deviated velocity $\Delta \boldsymbol{v}$ is randomly selected from $\mathcal{U}(0,2\pi)$. The velocity input to the TWLAS Mode 1 is set to $\tilde {\boldsymbol{v}}={\boldsymbol{v}}+\Delta\boldsymbol{v}$. The TOA measurement noise $\sigma$ is set to 0.1 m.
The estimated position and the clock offset error results from Mode 1 of the proposed TWLAS are shown in Fig. \ref{fig:perrorvsdVnorm} and Fig. \ref{fig:berrorvsdV}, respectively. We can see that the estimation errors increase with growing norm of the UD velocity deviation. We also compute the theoretical localization and synchronization errors based on (\ref{eq:perrorvsv}) and (\ref{eq:berrorvsv}), respectively, and plot them in the black star curves in the two figures. We can see that both the estimated position and clock offset RMSEs match the theoretical value. These results verify the theoretical analysis presented in Section \ref{locanalysis}.
The theoretical analysis in Section \ref{locanalysis} can be used as guidance to evaluate in which cases Mode 1 or Mode 2 should be adopted for UD localization and synchronization in real-world applications. For example, when the UD is moving fast and its velocity information is difficult to obtain, Mode 2 can be used to generate position and clock offset results without being affected by the UD motion.
\begin{figure}
\centering
\includegraphics[width=0.99\linewidth]{img/perrorvsdV.pdf}
\caption{Estimated position error vs. norm of the UD velocity deviation for Mode 1 of the proposed TWLAS method. The position RMSE from the TWLAS method matches the theoretical analysis presented in Section \ref{locanalysis}.
}
\label{fig:perrorvsdVnorm}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.99\linewidth]{img/berrorvsdV.pdf}
\caption{Estimated clock offset error vs. norm of the UD velocity deviation for Mode 1 of the proposed TWLAS method. The clock offset error estimated by the TWLAS method is consistent with the theoretical analysis.
}
\label{fig:berrorvsdV}
\end{figure}
\subsection{Dependency of Iterative TWLAS Algorithm on Initial Parameter}
We conduct simulations to evaluate the sensitivity of the proposed iterative algorithm to the initialization error. We set the measurement noise to $\sigma=5$ m. In the initial parameter $\check{\boldsymbol{\theta}}_{0}$, we set the initial clock offset $\check{b}_0 = \tau_1$ and the initial clock drift $\check{\omega}_0=0$. For Mode 2, we set $\check{\boldsymbol{v}}_0=\boldsymbol{0}$. The initial position $\check{\boldsymbol{p}}_0$ is of most interest. It is randomly selected from the circumference of a circle centered at the true position, whose radius is the initial position error. To evaluate the correctness of the solution from the iterative TWLAS algorithm, we use a threshold of $6\sqrt{\mathsf{CRLB}}$, which corresponds to the 6-$\sigma$ (99.9999998\%) confidence level of a Gaussian distribution. In other words, if the estimated position error is smaller than $6\sqrt{\mathsf{CRLB}}$, the result is declared correct.
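The correctness test then reduces to one comparison per run (sketch; \texttt{pos\_error} and \texttt{crlb\_pos} denote the per-run position error and its CRLB):
\begin{verbatim}
import numpy as np

def is_success(pos_error, crlb_pos):
    """6-sigma correctness test: the solution counts as correct when the
    position error stays below 6*sqrt(CRLB)."""
    return pos_error < 6.0 * np.sqrt(crlb_pos)
\end{verbatim}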
The success rate results under different initial position errors are listed in Table \ref{table_successrate}. As can be seen, with a small error in the initial parameter, the iterative TWLAS algorithm is robust and gives 100\% correct results. When the initial position error increases, the algorithm begins to output incorrect solutions with a small probability, especially in Mode 2. We can also observe that Mode 1 is more robust than Mode 2 under an increasing initialization error. The reason is that the known velocity for Mode 1 provides more information and better constrains the problem. In order to obtain the correct and optimal solution, we need an initial parameter that is as accurate as possible. Estimation methods such as those based on augmented variables that lead to a closed-form solution \cite{chan1994simple,bancroft1985algebraic} can be developed to provide a proper initial guess for the iterative algorithm.
\begin{table}[!t]
\centering
\begin{threeparttable}
\caption{Success Rate of Iterative TWLAS Algorithm with Initial Parameter Error}
\label{table_successrate}
\centering
\begin{tabular}{c p{1.4cm} p{1.4cm} p{1.4cm} p{1.4cm} }
\toprule
\multirow{2}{1cm}{TWLAS}
&\multicolumn{4}{c}{Initial position error (m)}\\
\cline{2-5}
\multirow{2}{*}{} & 10& 50& 100&200\\
\hline
Mode 1 &100\%& 100\%& 100\%& 99.98\%\\
Mode 2 &100\%& 100\%& 99.98\%& 98.83\%\\
\bottomrule
\end{tabular}
\begin{tablenotes}[para,flushleft]
Note: The TOA measurement noise is set to $\sigma=5$ m. The initial position error is the distance from the true position. For each initial position error, 100,000 simulation runs are done. The localization success rates of both TWLAS Mode 1 and Mode 2 are 100\% when the initial position is not too far from the true position. With increasing initialization error, both modes have decreasing success rates. Mode 1 is less sensitive to the increasing initial error since it has more available information (known UD velocity) than Mode 2.
\end{tablenotes}
\end{threeparttable}
\end{table}
\subsection{Computational Complexity}
It can be seen from the TWLAS algorithm as given by Algorithm 1 that each iteration has the same operations and thus has the same computational complexity. For each iteration, the major operations are the matrix multiplication and inverse in \eqref{eq:leastsquare}. The complexity of these operations is on the order of $K^3$ \cite{quintana2001note}, where $K$ is the dimension of the matrix. If there are $L$ iterations, the total complexity is on the order of $LK^3$. Note from \eqref{eq:matG} that the design matrix of Mode 1 has fewer columns than Mode 2. Therefore, the complexity of Mode 1 is lower than that of Mode 2.
We conduct numerical simulations to investigate how the computational complexity and the position RMSE change with the number of iterations. The computational platform is a PC with an Intel Core i5-10600K CPU @ 4.10 GHz and 32 GB of RAM. We set the TOA measurement noise to $\sigma=0.1$ m and the initial position error to 50 m. We vary the number of iterations $L$ from 1 to 10. For each $L$, we run 10,000 Monte-Carlo simulations and compute the average computation time of the iterative algorithm, shown in Fig. \ref{fig:computationtime} together with the position RMSE. We can see that the computation time grows linearly with the number of iterations. Even with up to 10 iterations, the computation time of the algorithm is only about 0.42 ms for TWLAS Mode 2. Such a low complexity shows that the iterative TWLAS algorithm is applicable on consumer-level electronics platforms such as IoT devices, wearables, drones and robots.
The position estimation RMSE decreases with an increasing number of iterations and remains stable after 2 iterations, as shown in Fig. \ref{fig:computationtime}. In other words, more iterations only cost more computation time but hardly bring any accuracy improvement. For this reason, we would like to exit the algorithm early once the position RMSE becomes stable.
In practice, we do not know the actual position RMSE during the operation of the algorithm. Therefore, we need a proxy variable to indicate whether to continue the iteration or not. As shown in Algorithm 1, we compare the parameter error norm $\left\Vert [\Delta \check{\boldsymbol{\theta}}]_{1:N} \right\Vert$ with $thr$ to decide whether to terminate the algorithm. Using this threshold-based criterion, we can reduce the number of iterations and thus the complexity. We plot the parameter error in Fig. \ref{fig:computationtimenormx}. It shows the same trend as the position RMSE in Fig. \ref{fig:computationtime}. After the 2nd iteration, it becomes smaller than the threshold $thr$, leading to the early termination of the algorithm, while, as shown in Fig. \ref{fig:computationtime}, the RMSE is already stable. This result validates that the parameter error norm is a suitable indicator for algorithm termination.
\begin{figure}
\centering
\includegraphics[width=0.99\linewidth]{img/complexityandRMSEms.png}
\caption{Position RMSE (dashed lines) and computation time (solid lines) vs. number of iterations. For each number of iterations, the computation time is the average of 10,000 simulation runs. The computation time grows linearly with increasing number of iterations. The position RMSE decreases with an increasing number of iterations and becomes stable quickly after 2nd iteration.
}
\label{fig:computationtime}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.99\linewidth]{img/complexityandNormxms.png}
\caption{Parameter error norm (dashed lines) and computation time (solid lines) vs. number of iterations. The convergence threshold (dash-dot line) is attached to the left $y$-axis. The parameter error $\left\Vert [\Delta \check{\boldsymbol{\theta}}]_{1:N} \right\Vert$ decreases with growing number of iterations and becomes smaller than the threshold after 2nd iteration. It becomes a stable small value afterwards.
}
\label{fig:computationtimenormx}
\end{figure}
\section{Conclusion}
In this article, we propose an optimal two-way TOA localization and synchronization method, namely TWLAS. Different from existing two-way TOA methods, the new method takes the UD motion into account to compensate for the error caused by the UD movement. We analyze its localization and synchronization error and derive the CRLB. The analysis shows that the TWLAS can reach the CRLB and has better localization and synchronization accuracy than the conventional one-way TOA method. The conventional two-way TOA method is a special case of the proposed TWLAS method when the UD is stationary. We also derive the relation between the estimation error and deviated UD velocity information. We conduct Monte-Carlo simulations to evaluate the performance of the proposed TWLAS method. Results show that, for a moving UD, the localization and synchronization error of the TWLAS reaches the CRLB provided a proper parameter initialization, and the accuracy is better than that of the conventional one-way TOA method. The estimation error caused by deviated UD velocity information is consistent with the theoretical analysis.
\appendices
\section{Proof of Theorem \ref{theorem0}}
\label{Appendix1}
The FIMs of Mode 1 and Mode 2 of the TWLAS are
\begin{align}
\mathcal{F}_\text{Mode 1}&=\bm{G}_\text{Mode 1}^T\bm{W}\bm{G}_\text{Mode 1}\nonumber\\
&=\left[
\begin{matrix}
\bm{G}_0^T\bm{W}_{\rho}\bm{G}_0+\bm{G}_1^T\bm{W}_{\tau}\bm{G}_1 & \bm{G}_1^T\bm{W}_{\tau}\boldsymbol{\lambda}\\
\boldsymbol{\lambda}^T\bm{W}_{\tau}\bm{G}_1 & \boldsymbol{\lambda}^T\bm{W}_{\tau}\boldsymbol{\lambda}
\end{matrix}
\right]\text{,}
\end{align}
and
\begin{align}
\mathcal{F}_\text{Mode 2}&=\bm{G}_\text{Mode 2}^T\bm{W}\bm{G}_\text{Mode 2}\nonumber\\
&=\left[
\begin{matrix}
\bm{G}_0^T\bm{W}_{\rho}\bm{G}_0+\bm{G}_1^T\bm{W}_{\tau}\bm{G}_1 & \bm{G}_1^T\bm{W}_{\tau}\bm{G}_2\\
\bm{G}_2^T\bm{W}_{\tau}\bm{G}_1 & \bm{G}_2^T\bm{W}_{\tau}\bm{G}_2
\end{matrix}
\right]\text{,}
\end{align}
where
$$\boldsymbol{\lambda}=[\delta t_1,\cdots,\delta t_M]^T \text{.}$$
According to (\ref{eq:matG}), we partition $\bm{G}_2$ column-wise into one column vector and one sub-matrix as $\bm{G}_2=\left[ \boldsymbol{\lambda},\bm{L}\right]$, where
$$\bm{L}=\left[
-\boldsymbol{l}_1\delta t_1,\cdots,
-\boldsymbol{l}_M\delta t_M
\right]^T \text{.}
$$
Then $\mathcal{F}_\text{Mode 2}$ is further derived as
\begin{align}
&\mathcal{F}_\text{Mode 2} \nonumber\\
&=\left[
\begin{matrix}
\bm{G}_0^T\bm{W}_{\rho}\bm{G}_0+\bm{G}_1^T\bm{W}_{\tau}\bm{G}_1 & \bm{G}_1^T\bm{W}_{\tau}\boldsymbol{\lambda} &\bm{G}_1^T\bm{W}_{\tau}\bm{L}\\
\boldsymbol{\lambda}^T\bm{W}_{\tau}\bm{G}_1 & \boldsymbol{\lambda}^T\bm{W}_{\tau}\boldsymbol{\lambda} &\boldsymbol{\lambda}^T\bm{W}_{\tau}\bm{L} \\
\bm{L}^T\bm{W}_{\tau}\bm{G}_1& \bm{L}^T\bm{W}_{\tau}\boldsymbol{\lambda} &\bm{L}^T\bm{W}_{\tau}\bm{L}
\end{matrix}
\right]\nonumber\\
&=\left[\begin{matrix}
\mathcal{F}_\text{Mode 1} &
\begin{matrix}
\bm{G}_1^T\bm{W}_{\tau}\bm{L}\\
\boldsymbol{\lambda}^T\bm{W}_{\tau}\bm{L}
\end{matrix}\\
\begin{matrix}
\bm{L}^T\bm{W}_{\tau}\bm{G}_1& \bm{L}^T\bm{W}_{\tau}\boldsymbol{\lambda}
\end{matrix} & \bm{L}^T\bm{W}_{\tau}\bm{L}
\end{matrix}
\right]
\text{.}
\end{align}
We investigate the top-left $(N+2)\times(N+2)$ sub-matrix of the inverse of $\mathcal{F}_\text{Mode 2}$, which, by the formula for the inverse of a partitioned matrix \cite{horn2012matrix}, is given by
\begin{align} \label{eq:submatinvF1}
&[\mathcal{F}_\text{Mode 2}^{-1}]_{1:(N+2),1:(N+2)} \nonumber\\
&=\left(\mathcal{F}_\text{Mode 1}-\bm{B}\left(\bm{L}^T\bm{W}_{\tau}\bm{L}\right)^{-1}\bm{B}^T\right)^{-1} \text{,}
\end{align}
where
$$
\bm{B}=\left[
\begin{matrix}
\bm{G}_1^T\bm{W}_{\tau}\bm{L}\\
\boldsymbol{\lambda}^T\bm{W}_{\tau}\bm{L}
\end{matrix}
\right] \text{.}
$$
We note that $\bm{A}^T\bm{A}$ is a positive-definite matrix for an arbitrary real matrix $\bm{A}$ with full column rank. The matrices $\bm{G}_{\text{Mode 1}}$ and $\bm{G}_{\text{Mode 2}}$ usually have full column rank when sufficiently many properly placed ANs are available. Thus, $\mathcal{F}_\text{Mode 1}$, $\mathcal{F}_\text{Mode 2}$ and $\bm{L}^T\bm{W}_{\tau}\bm{L}$ are all positive-definite, and $\bm{B}\left(\bm{L}^T\bm{W}_{\tau}\bm{L}\right)^{-1}\bm{B}^T$ is thereby positive-definite.
Note that $\bm{A}\succ\bm{B}$ if and only if $(\bm{A}-\bm{B})$ is positive-definite. We have
\begin{align} \label{eq:matgreater}
\mathcal{F}_{\text{Mode 1}}\succ\mathcal{F}_{\text{Mode 1}}-\bm{B}\left(\bm{L}^T\bm{W}_{\tau}\bm{L}\right)^{-1}\bm{B}^T \text{.}
\end{align}
Taking the inverse of both sides of (\ref{eq:matgreater}), we obtain
\begin{align} \label{eq:matsmaller}
\mathcal{F}_{\text{Mode 1}}^{-1}\prec\left(\mathcal{F}_\text{Mode 1}-\bm{B}\left(\bm{L}^T\bm{W}_{\tau}\bm{L}\right)^{-1}\bm{B}^T\right)^{-1}.
\end{align}
We are interested in the position and clock-offset related terms in the top-left $(N+1)\times(N+1)$ sub-matrix, i.e., the $(N+1)$-th leading principal submatrix. We note that every leading principal submatrix of a positive-definite matrix is positive definite, so the ordering in (\ref{eq:matsmaller}) carries over to the corresponding sub-matrices. Thus, we have
\begin{align} \label{eq:submatsmaller}
\left[\mathcal{F}_{\text{Mode 1}}^{-1}\right]_{1:(N+1),1:(N+1)}\prec\left[\mathcal{F}_{\text{Mode 2}}^{-1}\right]_{1:(N+1),1:(N+1)}.
\end{align}
Thus, for all the diagonal entries of the two matrices, we have
\begin{equation}
\left[\mathcal{F}_{\text{Mode 1}}^{-1}\right]_{i,i}<\left[\mathcal{F}_{\text{Mode 2}}^{-1}\right]_{i,i}, i=1,\cdots,N+1 \text{,}
\end{equation}
and finish the proof.
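The strict ordering of the diagonal CRLB entries can also be checked numerically. The following Python fragment builds the two FIMs from randomly generated full-rank blocks that mimic the block structure above; the values are illustrative only and do not correspond to a physical AN geometry.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M, N = 8, 3                              # 8 ANs, 3-D position
G0 = rng.standard_normal((M, N + 1))
G1 = rng.standard_normal((M, N + 1))
lam = rng.standard_normal((M, 1))        # lambda = [dt_1, ..., dt_M]^T
L = rng.standard_normal((M, N))          # rows -l_i^T * dt_i
G2 = np.hstack([lam, L])
Wr = Wt = np.eye(M) / 0.1 ** 2           # W_rho, W_tau for sigma = 0.1 m

def fim(Gb):
    top = G0.T @ Wr @ G0 + G1.T @ Wt @ G1
    return np.block([[top, G1.T @ Wt @ Gb],
                     [Gb.T @ Wt @ G1, Gb.T @ Wt @ Gb]])

C1 = np.linalg.inv(fim(lam))                       # Mode 1
C2 = np.linalg.inv(fim(G2))[:N + 2, :N + 2]        # Mode 2 sub-block
assert np.all(np.diag(C1)[:N + 1] < np.diag(C2)[:N + 1])
\end{verbatim}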
\section{Proof of Theorem \ref{theorem1}}
\label{Appendix2}
We derive the top-left $(N+1)\times(N+1)$ sub-matrix of the inverse of $\mathcal{F}_{\text{Mode 2}}$ as
\begin{align} \label{eq:invFmode2}
\left[\mathcal{F}_{\text{Mode 2}}^{-1}\right]_{1:(N+1),1:(N+1)}&=\left(\bm{G}_0^T\bm{W}_{\rho}\bm{G}_0 + \bm{D}\right)^{-1} \nonumber\\
&=\left(\mathcal{F}_{\text{OWLAS}} + \bm{D}\right)^{-1}
\text{,}
\end{align}
where
\begin{align} \label{eq:Dexpression}
\bm{D}=\bm{G}_1^T\bm{W}_{\tau}\bm{G}_1-\bm{G}_1^{T}\bm{W}_{\tau}\bm{G}_2\left(\bm{G}_2^{T}\bm{W}_{\tau}\bm{G}_2\right)^{-1}\bm{G}_2^{T}\bm{W}_{\tau}\bm{G}_1
\text{.}
\end{align}
A special case is that the UD receives the response signals from all the ANs simultaneously, i.e., $\delta t_1=\delta t_2=\cdots=\delta t_M=\delta t$. In this case, the matrix $\bm{G}_2$ becomes
$$
\bm{G}_2=\delta t\left[
\begin{matrix}
1 & -\boldsymbol{l}_1^T\\
\vdots & \vdots \\
1 & -\boldsymbol{l}_M^T
\end{matrix}
\right]=\delta t \bm{G}_1 \bm{P} \text{,}
$$
where
$$
\bm{P}=\left[
\begin{matrix}
\boldsymbol{0}_{N} &\bm{I}_N \\
1 &\boldsymbol{0}_{N}^T
\end{matrix}
\right] \text{.}
$$
We plug this expression of $\bm{G}_2$ into (\ref{eq:Dexpression}), and obtain
\begin{align} \label{eq:Dmatrix}
\bm{D}=&\bm{G}_1^T\bm{W}_{\tau}\bm{G}_1- \nonumber\\
&\bm{G}_1^{T}\bm{W}_{\tau}\delta t\bm{G}_1\bm{P}\left(\delta t\bm{P}\bm{G}_1^{T}\bm{W}_{\tau}\bm{G}_1\bm{P}\delta t\right)^{-1}\delta t\bm{P}\bm{G}_1^{T}\bm{W}_{\tau}\bm{G}_1 \nonumber\\
=&\bm{G}_1^T\bm{W}_{\tau}\bm{G}_1- \nonumber\\
&\bm{G}_1^{T}\bm{W}_{\tau}\bm{G}_1\bm{P}\bm{P}^{-1}\left(\bm{G}_1^{T}\bm{W}_{\tau}\bm{G}_1\right)^{-1}\bm{P}^{-1}\bm{P}\bm{G}_1^{T}\bm{W}_{\tau}\bm{G}_1 \nonumber\\
=&\bm{G}_1^T\bm{W}_{\tau}\bm{G}_1- \bm{G}_1^{T}\bm{W}_{\tau}\bm{G}_1\left(\bm{G}_1^{T}\bm{W}_{\tau}\bm{G}_1\right)^{-1}\bm{G}_1^{T}\bm{W}_{\tau}\bm{G}_1 \nonumber\\
=&\bm{G}_1^T\bm{W}_{\tau}\bm{G}_1- \bm{G}_1^{T}\bm{W}_{\tau}\bm{G}_1 \nonumber\\
=&\bm{O}_{(N+1)\times(N+1)}\text{,}
\end{align}
where $\bm{O}_{(N+1)\times(N+1)}$ is an all-zero matrix.
We substitute (\ref{eq:Dmatrix}) into (\ref{eq:invFmode2}) and thus have
$$\left[\mathcal{F}_{\text{Mode 2}}^{-1}\right]_{1:(N+1),1:(N+1)}=\mathcal{F}_{\text{OWLAS}}^{-1} \text{,}$$
i.e., TWLAS Mode 2 and OWLAS have identical estimation performance in such a special case with simultaneous response signals from all the ANs.
We then investigate the general case. We note that all the diagonal entries of $\bm{W}_{\tau}$ are equal, i.e., $\bm{W}_{\tau}=\sigma^{-2}\bm{I}_M$. Therefore,
\begin{align} \label{eq:D1}
\bm{D}&=\bm{G}_1^T\bm{W}_{\tau}\left(\bm{W}_{\tau}^{-1}-\bm{G}_2\left(\bm{G}_2^{T}\bm{W}_{\tau}\bm{G}_2\right)^{-1}\bm{G}_2^{T}\right)\bm{W}_{\tau}\bm{G}_1\nonumber\\
&=\frac{1}{\sigma^2}\bm{G}_1^T\left(\bm{I}_M-\bm{G}_2\left(\bm{G}_2^{T}\bm{G}_2\right)^{-1}\bm{G}_2^{T}\right)\bm{G}_1
\text{.}
\end{align}
We conduct the singular value decomposition of $\bm{G}_2$ as
$$\bm{G}_2=\bm{U}\bm{\Sigma}\bm{V}^T \text{,}$$
where $\bm{U}$ is an $M\times M$ orthogonal matrix, $\bm{\Sigma}$ is an $M\times (N+1)$ rectangular diagonal matrix with non-negative diagonal entries, and $\bm{V}$ is an $(N+1)\times (N+1)$ orthogonal matrix.
The matrix in the parenthesis in (\ref{eq:D1}) becomes
\begin{align}
&\bm{I}_M-\bm{G}_2\left(\bm{G}_2^{T}\bm{G}_2\right)^{-1}\bm{G}_2^{T}\nonumber\\
&=\bm{U}\left(\bm{I}_M-\bm{\Sigma}\bm{V}^T\left(\bm{V}\bm{\Sigma}^T\bm{U}^T\bm{U}\bm{\Sigma}\bm{V}^T\right)^{-1}\bm{V}\bm{\Sigma}^T\right)\bm{U}^T\nonumber\\
&=\bm{U}\left(\bm{I}_M-\bm{\Sigma}\bm{V}^T\bm{V}\left(\bm{\Sigma}^T\bm{\Sigma}\right)^{-1}\bm{V}^T\bm{V}\bm{\Sigma}^T\right)\bm{U}^T \nonumber\\
&=\bm{U}\left(\bm{I}_M-\bm{\Sigma}\left(\bm{\Sigma}^T\bm{\Sigma}\right)^{-1}\bm{\Sigma}^T\right)\bm{U}^T \text{.}
\end{align}
We note that
\begin{align}
&\bm{\Sigma}\left(\bm{\Sigma}^T\bm{\Sigma}\right)^{-1}\bm{\Sigma}^T=\nonumber\\
&\left[
\begin{matrix}
\bm{I}_{N+1} & \bm{O}_{(N+1)\times (M-N-1)}\\
\bm{O}_{(M-N-1)\times (N+1)} & \bm{O}_{(M-N-1)\times (M-N-1)}
\end{matrix}
\right]\text{.}
\end{align}
Therefore,
\begin{align}
&\bm{I}_M-\bm{\Sigma}\left(\bm{\Sigma}^T\bm{\Sigma}\right)^{-1}\bm{\Sigma}^T\nonumber\\
&=\left[
\begin{matrix}
\bm{O}_{(N+1)\times (N+1)} & \bm{O}_{(N+1)\times (M-N-1)}\\
\bm{O}_{(M-N-1)\times (N+1)} & \bm{I}_{M-N-1}
\end{matrix}
\right]\text{,}
\end{align}
in which all the eigenvalues are non-negative, indicating its positive semi-definiteness. Consequently, $\bm{D}$ is positive semi-definite, i.e., $\bm{D}\succeq 0$. Thus, we come to
\begin{align} \label{eq:plusD}
\mathcal{F}_{\text{OWLAS}} + \bm{D} \succeq \mathcal{F}_{\text{OWLAS}}
\text{.}
\end{align}
Taking the inverse of both sides of (\ref{eq:plusD}), where the left side is given by (\ref{eq:invFmode2}), we have
\begin{align}
\left[\mathcal{F}_{\text{Mode 2}}^{-1}\right]_{1:(N+1),1:(N+1)}\preceq\mathcal{F}_{\text{OWLAS}}^{-1}
\text{.}
\end{align}
The diagonal elements, which represent the localization and synchronization accuracy, satisfy
\begin{equation}
\left[\mathcal{F}_{\text{Mode 2}}^{-1}\right]_{i,i}\leq \left[\mathcal{F}_{\text{OWLAS}}^{-1}\right]_{i,i},\quad i=1,\cdots,N+1 \text{.}
\end{equation}
Thus, we have proved (\ref{eq:Fisher2andOW}).
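Both the positive semi-definiteness of $\bm{D}$ in the general case and its vanishing in the simultaneous-response special case can be verified numerically, e.g., with the sketch below. The row structure $[1, -\boldsymbol{l}_i^T]$ assumed here for $\bm{G}_1$ is illustrative; the column ordering is immaterial for this check.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
M, N, sigma = 8, 3, 0.1
l = rng.standard_normal((M, N))
l /= np.linalg.norm(l, axis=1, keepdims=True)    # unit LOS vectors l_i
G1 = np.hstack([np.ones((M, 1)), -l])            # rows [1, -l_i^T]

def D_matrix(dt):
    G2 = np.hstack([dt[:, None], -l * dt[:, None]])
    P = G2 @ np.linalg.inv(G2.T @ G2) @ G2.T     # projector onto col(G2)
    return (G1.T @ (np.eye(M) - P) @ G1) / sigma ** 2

D = D_matrix(rng.uniform(0.5, 1.5, M))           # general case: D is PSD
assert np.all(np.linalg.eigvalsh(D) > -1e-8)
D0 = D_matrix(np.full(M, 0.7))                   # equal dt_i: D vanishes
assert np.allclose(D0, 0)
\end{verbatim}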
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
In recent decades the importance of underwater manipulators in marine operations has grown continuously. Most industrial underwater robotic applications are conducted with Remotely Operated Vehicles (ROVs), where a human operator is tasked with the remote operation of the manipulator \cite{SIVCEV2018431}. However, due to the limited number of expert operators and the high cost of operations, the industry is migrating towards Autonomous Underwater Vehicles (AUVs) \cite{YAZDANI2020103382}. In this type of scenario, a manipulator, usually electric, is mounted on the AUV and operates autonomously; this, however, requires a robust and adaptable control system. Furthermore, in autonomous missions different types of operational constraints may appear, such as specific joint constraints that must be respected in order to avoid collisions \cite{Papageorgiou2011SwitchingMC}, or decreased joint torque due to faulty motors. These constraints increase the need for designing complex control systems for robust manipulation.
One of the most widely used low-level controllers for manipulators is the classical Proportional-Integral-Derivative (PID) controller \cite{Ziegler1993}, mostly due to its simplicity of use and low computational requirements. However, when used for controlling manipulator arms, it must cope with highly non-linear systems. This issue is aggravated in underwater environments, where unknown disturbances affect the behaviour of the arm.
Furthermore, for underwater manipulators, controllers are often used under the assumption that the arm moves slowly, so that each degree of freedom can be decoupled, an assumption that does not hold for every application \cite{Barbalata2018CoupledAD}.
Researchers have also turned to non-linear optimal control techniques as a viable option, since they allow optimizing a cost function under different metrics. One of these techniques is Model Predictive Control (MPC) \cite{GARCIA1989335}, used successfully for controlling a number of different underwater robots \cite{Shen2018,Bai2019ReviewAC}. However, one of the drawbacks of this technique is that it requires an accurate model of the plant in order to work properly, not a trivial matter in underwater robotics \cite{fossen1994guidance}.
Data-driven control techniques have appeared as an alternative for systems with complex or unknown models. One of these techniques is Reinforcement Learning (RL) \cite{Sutton1998}. In the RL framework, the robot arm control problem can be formulated as a Markov Decision Process (MDP) \cite{Monahan1982}.
Solving an RL problem consists of iteratively learning a task from interactions to achieve a goal. During learning, an artificial agent (the controller) interacts with the target system (the arm) by taking an action (a torque command) that makes the robot evolve from its current state $x_t \in \mathbb{X} \subseteq \mathbb{R}^n$ to $x_{t+1}$. The agent then receives a numerical signal $r_t$, called the reward, which provides a measure of how good (or bad) the action taken at time $t$ is in terms of the observed state transition.
Many works have used the RL paradigm for controlling AUVs in underwater environments \cite{Carlucho2018}. However, this technique has not yet been applied to underwater manipulators.
The main contribution of this work is the development of a reinforcement learning based control system for the low-level control of an electric underwater manipulator under position and torque constraints. Our reinforcement learning formulation is based on the Deep Deterministic Policy Gradient (DDPG) algorithm \cite{DDPG2016}. The proposed method uses an actor-critic structure, where the actor is a function that maps system states to actions and the critic is a function that assesses the actions chosen by the actor. Deep Neural Networks (DNN) are used as function approximators for both the actor and the critic.
Results in simulation show the advantages of our proposal when controlling a simulated version of the Reach 5 Alpha manipulator, shown in Fig. \ref{fig:reach5}. The proposed controller is compared with an MPC, showing that the RL controller is able to outperform it.
The article is structured as follows: Section~\ref{sec:related_work} presents an overview of related work, Section~\ref{sec:dd_optimal} introduces the basics of the RL control formulation utilized in this work, Section~\ref{sec:implementation} describes the details of our implementation, Section~\ref{sec:results} presents the results obtained with the proposed control scheme, and finally Section~\ref{sec:conclusions} presents the overall conclusions of the proposed work.
\section{Related Works}
\label{sec:related_work}
Designing control systems under constraint considerations for robotic manipulators arose from the need for robots to interact with the environment. Some of the most fundamental approaches focused on designing motion/interaction control systems using a hybrid control formulation \cite{mills1989force}, \cite{yoshikawa1987dynamic}. In this approach, the constraints are expressed in terms of the end-effector's working space and are used to decide the type of control law (either a motion control law or a force regulator). Nevertheless, constraints are not imposed only by the interaction with the environment, but are also required when the robotic manipulator has to adjust its working space due to obstacles in the environment or faults in the robotic system. In \cite{zhang2019passivity} a passivity-based kinematic control law is proposed under joint velocity limit considerations. The proposed method can be adapted to different feedback control laws, but can be applied only to redundant systems. An adaptive neural-network control for robotic manipulators with parametric uncertainties and motion constraints is proposed in \cite{li2016adaptive}. The simulation and experimental results with a two-degree-of-freedom (DOF) planar manipulator show the velocity constraints always being respected, but steady-state errors are present. Deep neural-network approaches have become popular in recent years. An example is given in \cite{xu2019deep}, where an obstacle avoidance control law is designed for redundant manipulators. The problem is reformulated as a Quadratic Programming (QP) problem at the velocity level, and a deep recurrent neural network is designed to solve the QP problem in an online way. Although the simulation results show that the robot is capable of avoiding obstacles while tracking the predefined trajectories, an experimental evaluation is not presented.
The requirements of adaptability and the difficulties of modeling have led research towards intelligent control methods, such as reinforcement learning. Many RL methods have previously been applied to the control of manipulators \cite{Deisenroth2011LearningTC}. In \cite{Gu2017}, asynchronous reinforcement learning was used to train a series of robotic manipulators to solve a door-opening task.
In \cite{Wang2020}, a mobile manipulator task is solved by Proximal Policy Optimization (PPO), a well-known RL algorithm \cite{Schulman2017ProximalPO}, in combination with a deep neural network.
Specifically for underwater environments, RL has been used as a control technique in several previous works \cite{CARLUCHO201871, ELFAKDI2013271, Frost2015}. However, the majority of these works focus on the control of AUVs; works utilizing RL for underwater manipulators are lacking in the literature.
\section{Reinforcement learning based control}
\label{sec:dd_optimal}
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth,valign=c]{figures/reach5.jpg}%
\label{fig:reach5real}%
\caption{Reach 5 Alpha underwater manipulator \cite{reachalpha}}
\label{fig:reach5}
\end{figure}
From the point of view of classical and modern control, the design of a control system is based on complete knowledge of the dynamics of the system under study. Assuming that there is a model with adequate capacity to describe the dynamics of the system, generally through a set of differential equations, the control problem reduces to designing a controller (or agent) capable of generating the control actions required for the system to achieve a given objective, goal, task or desired behavior. In this way, the performance of conventionally designed control systems depends heavily on the mathematical models used to describe the behavior of the dynamic systems to be controlled. However, underwater manipulation is a complex decision-making problem in which the presence of uncertainty in the dynamics is ubiquitous and, consequently, it is of paramount importance to design and use controllers with suitable adaptation capabilities.
Markov decision processes are models for sequential decision-making problems with uncertain outcomes \cite{Monahan1982}. In our formulation, we consider a finite-horizon Markov decision process with decision epochs $\{1, 2, \ldots, T\}$ and $T-1$ visited stages \cite{Hartman92}. That is, the decision (action) at time $t$ is made at the beginning of stage $t$, which corresponds to the time interval from $t$ to $t+1$. At any stage, or discrete time, $t$, the system is at a state $\mathbf{x}_t$. In this sense, we have a finite set $X$ of system states, such that $\mathbf{x}_t \, \in X, \ \forall \, t = 1, \ldots, T$. The decision maker observes state $\mathbf{x}_t \, \in X $ at stage $t$ and may choose an action $\mathbf{u}_t$ from the finite set of allowable actions $U$, generating cost $L(\mathbf{x}_t,\mathbf{u}_t)$. Moreover, we let $p(\cdot|\mathbf{x}_t,\mathbf{u}_t)$ denote the transition probability distribution of the state $\mathbf{x}'=\mathbf{x}_{t+1}$ at stage $t+1$.
A deterministic Markovian decision rule at stage $t$ is a function $\psi_t:\, X \to U$ which specifies the action choice at each state $\mathbf{x}_t$. It is called deterministic because it chooses an action with certainty, and Markovian (memoryless) since it depends only on the current system state. We let $D_t$ denote the set of possible deterministic Markovian decision rules at stage $t$. $D_t$ is a subset of more general rules in which the action may depend on the past history of the system, or may not be chosen with certainty but rather according to a probability distribution.
A policy or strategy specifies the decision rules to be used at all stages and provides the decision maker with a plan of which action to take at each stage and state. That is, a policy $\pi$ is a sequence of decision rules, and we restrict ourselves to policies $\pi_t$ belonging to the set $D_t$ of deterministic Markov policies (if randomized policies were included, the set of policies would not be countable). In some problems the decision maker focuses only on this subset of policies, e.g., because randomized policies are hard to implement in practice or due to restrictions in the management strategy. Moreover, if the states at a given time step $t$ correspond to different physical locations, implementation of policies having a single action at each location may be the only acceptable option.
Under the above framework of Markov decision processes, reinforcement learning can be formalized as follows: an RL agent situated in its environment chooses an action, from the set of available ones, at every discrete time step $t$, based on the current state of the system $\mathbf{x}_t$. In return, at each time step, the agent receives a reward signal that quantifies the quality of the action taken in terms of the goal of the control task. In this paper we consider only one criterion of optimality, namely the expected total reward criterion, so the objective is to obtain an optimal policy $\pi^*$ that satisfies:
\begin{equation}
L^* = \max_{\pi} L_\pi = \max_{\pi} \E_\pi \{ R_t \,|\, \mathbf{x}_t = \mathbf{x} \}
\end{equation}
\noindent where $r_t$ is the instantaneous reward obtained at time step $t$ and $R_t$ is the cumulative discounted reward, such that ${R_t = \sum_{k=0}^{\infty} \gamma^k r_{t+k+1}}$, with $\gamma \in [0,1)$ the discount factor.
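For clarity, the discounted return can be computed with a single backward pass over a recorded reward sequence, as in this short illustrative fragment:
\begin{verbatim}
def discounted_return(rewards, gamma=0.99):
    """R_t = sum_k gamma^k r_{t+k+1}, accumulated backwards."""
    R = 0.0
    for r in reversed(rewards):
        R = r + gamma * R
    return R
\end{verbatim}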
The basic RL algorithms discussed in the literature were largely developed to solve reinforcement learning problems without the need for a dynamic model and when the state and action spaces are finite, which means that the value functions admit a tabular representation \cite{Sutton1998,CARLUCHO2017,CARLUCHO2019}. In order to find an approximate solution to the control problem it is possible to discretize the state and/or action spaces \cite{LOVEJOY1991,CRESPOSUN2000,CRESPOSUN2003} and then apply the RL algorithms that use discrete spaces. However, as the granularity of the representation increases, the computational implementations suffer from the so-called curse of dimensionality, which consists of an exponential increase in the computational complexity of the problem due to the increase in the dimension (or number) of the state-action pairs to be selected. This makes it impracticable to construct a value function for the problem in question, since the agent has a low probability of ``visiting'' the same state, or state-action pair, more than once, depending on whether it works with the state value function or the action-value function, respectively.
In the underwater manipulation problem we have to deal with dynamic systems where the states and the applied actions are defined over real domains (continuous spaces), which imposes an important limitation on tabular representations of the value functions. To overcome this drawback, function approximation techniques have emerged, including inductive models that attempt to generalize the value function. In recent years, powerful brain-inspired deep neural networks \cite{LeCun2015} have been introduced as function approximators within the RL framework, giving rise to deep reinforcement learning methodologies \cite{Mnih2015}.
For instance, the Deep Deterministic Policy Gradient (DDPG) algorithm \cite{DDPG2016} is one of the most widespread deep RL algorithms; it utilizes an actor-critic formulation together with neural networks as function approximators to obtain a deterministic optimal policy.
In the actor-critic formulation, the role of the actor is to select an action based on the policy, such that $\mathbf{u}_t = \pi(\mathbf{x}_t)$. The critic, on the other hand, gives feedback on how good or bad the selected action was.
In the DDPG algorithm the state-action value function $Q(\mathbf{x}_t, \mathbf{u}_t)$ is used as a critic. This function is defined as:
\begin{equation}
Q^\pi(\mathbf{x}_t, \mathbf{u}_t) = \E \{ R_{t} \,|\, \mathbf{x}_t, \mathbf{u}_t\} = \E \Big\{ \sum_{k=0}^{\infty} \gamma^k r_{t+k+1} \,\Big|\, \mathbf{x}_t, \mathbf{u}_t \Big\}
\end{equation}
\noindent The update to the state-action value function can then be performed as:
\begin{equation}
Q^w(\mathbf{x}_t, \mathbf{u}_t) = \E \{ r(\mathbf{x}_t,\mathbf{u}_t) + \gamma Q^w(\mathbf{x}_{t+1}, \mathbf{u}_{t+1}) \}
\end{equation}
\noindent where \(Q^w\) is a differentiable parameterized function, so that $Q^w \approx Q^{\pi}$.
For the actor we consider a function $\pi$ that maps states directly into actions, with parameters \(\theta\), thus $\pi(\mathbf{x}_t|\theta)$. We define a performance objective function ${L(\pi_{\theta})= \E \{r^{\gamma}|\pi_{\theta} \}}$ and a discounted state distribution $\rho^{\pi}$; the performance can then be written as an expectation:
\begin{equation}
L(\pi_{\theta}) = \int \rho^{\pi}(\mathbf{x})\,r(\mathbf{x},\pi_{\theta}(\mathbf{x}))\,d\mathbf{x} = \E [ r(\mathbf{x}_t,\pi_{\theta}(\mathbf{x}_t))]
\end{equation}
and by applying the chain rule to the expected return, we can then write:
\begin{equation} \label{eq:ddpg}
\centering
\nabla_{\theta} L = \E [\nabla_{\theta}\pi_{\theta}(\mathbf{x}) \nabla_{\mathbf{u}} Q^{\pi}(\mathbf{x},\mathbf{u})\vert_{\mathbf{u}=\pi_{\theta}(\mathbf{x})}]
\end{equation}
\noindent where Eq. \eqref{eq:ddpg} is the deterministic policy gradient, as demonstrated in \cite{Silver2014}.
As indicated previously, both the actor and the critic can be represented by function approximators, where deep neural networks are commonly used since they allow working with continuous state spaces.
However, the nonlinearities of these networks and the training procedures used long made it difficult for such algorithms to converge; recent advances have addressed these problems. The main causes of the lack of convergence were the correlation between the samples used for training and the correlation between successive updates of the $Q$ network \cite{Fujimoto2018}.
The first of these issues was addressed by implementing a replay buffer that stores state transitions, the actions applied, and the rewards earned. The agent is then trained using mini-batches of transitions that are randomly sampled from the replay buffer \cite{Ioffe2015}. The second problem was solved by incorporating target networks, which are direct copies of the actor and critic networks, called $\pi'$ and $Q'$, with parameters $\theta'$ and $\omega'$ respectively, and which are periodically updated according to the parameter $\tau$, so that $\theta' \gets \tau\theta + (1-\tau)\theta'$ and $\omega' \gets \tau\omega + (1-\tau)\omega'$, where $\tau \ll 1$.
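These two stabilization mechanisms can be sketched as follows; this is a minimal Python illustration, not the implementation used in our experiments.
\begin{verbatim}
import random
from collections import deque
import numpy as np

class ReplayBuffer:
    """Uniform experience replay over (x, u, r, x') transitions."""
    def __init__(self, capacity=1000000):
        self.buf = deque(maxlen=capacity)
    def push(self, x, u, r, x_next):
        self.buf.append((x, u, r, x_next))
    def sample(self, batch_size=64):
        batch = random.sample(self.buf, batch_size)
        return [np.asarray(col) for col in zip(*batch)]

def soft_update(target_params, source_params, tau=0.001):
    """theta' <- tau*theta + (1 - tau)*theta', with tau << 1."""
    for tp, sp in zip(target_params, source_params):
        tp *= 1.0 - tau
        tp += tau * sp
\end{verbatim}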
\section{Implementation details}
\label{sec:implementation}
\begin{figure*}[t]
\centering
\subfloat[Joint Position]{%
\includegraphics[height=4.5cm,valign=c]{figures/rl_position_1.png}%
\label{fig:result1a}%
} \hfil
\subfloat[Torque Output]{%
\includegraphics[height=4.5cm,valign=c]{figures/rl_action_1.png}%
\label{fig:result1b}}%
\subfloat[Joint Errors]{%
\includegraphics[height=4.5cm,valign=c]{figures/rl_error_1.png}%
\label{fig:result1c}}%
\caption{Test 1 of the proposed RL algorithm with $\mathbf{x}_{ref} = (2.64, 0.26, -1.47, 0.82)$ [rad]}
\label{fig:result1}
\end{figure*}
\begin{figure*}[t]
\centering
\subfloat[Joint Position]{%
\includegraphics[height=4.5cm,valign=c]{figures/rl_position_2.png}%
\label{fig:result2a}%
} \hfil
\subfloat[Torque Output]{%
\includegraphics[height=4.5cm,valign=c]{figures/rl_action_2.png}%
\label{fig:result2b}}%
\subfloat[Joint Errors]{%
\includegraphics[height=4.5cm,valign=c]{figures/rl_error_2.png}%
\label{fig:result2c}}%
\caption{Test 2 of the proposed RL algorithm with $\mathbf{x}_{ref} = (-1.78, 0.11, -2.14, -2.26)$ [rad]}
\label{fig:result2}
\end{figure*}
\begin{figure*}[t]
\centering
\subfloat[Joint Position]{%
\includegraphics[height=4.5cm,valign=c]{figures/rl_position_t.png}%
\label{fig:resultta}%
} \hfil
\subfloat[Torque Output]{%
\includegraphics[height=4.5cm,valign=c]{figures/rl_action_t.png}%
\label{fig:resulttb}}%
\subfloat[Joint Errors]{%
\includegraphics[height=4.5cm,valign=c]{figures/rl_error_t.png}%
\label{fig:resulttc}}%
\caption{Test 3 of the proposed RL algorithm under Torque constraints with $\mathbf{x}_{ref} = (2.,-1., -1.75, 1.5)$ [rad]}
\label{fig:resultt}
\end{figure*}
A simulated version of the Reach Alpha 5 manipulator is used. The Reach Alpha 5 is a 5DOF underwater manipulator, capable of lifting 2kg, and is able to operate in depths up to 300m. The manipulator is shown in Fig. \ref{fig:reach5}.
For our proposed formulation, the state ($\mathbf{x}_t$) of the RL agent is determined by the joint positions ($\mathbf{q} \in \mathbb{R}^n$) in [rad] and joint velocities ($\mathbf{\dot{q}} \in \mathbb{R}^n$) in [rad/s], together with the desired joint positions ($\mathbf{q}_{req} \in \mathbb{R}^n $) in [rad], such that the state is determined as $\mathbf{x}_t = [\mathbf{q}_t, \mathbf{\dot{q}}_t, \mathbf{q}_{req}]$, with $n$ being the number of DOFs of the manipulator. The goal of the agent is to achieve a commanded joint position, where the request comes to the agent from a higher layer in the control hierarchy.
A fully connected feed-forward network is used for both the actor and the critic, with two hidden layers of 400 and 300 units each. Leaky ReLUs are used as activation functions for the hidden layers, with tanh used for the output neurons of the actor. The learning rates used are $0.0001$ and $0.001$ for the actor and critic respectively, with Adam used as the optimizer. A decay rate of $0.96$ is applied to each learning rate after 100 thousand training steps.
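A minimal PyTorch sketch of this architecture is given below; the state and action dimensions (here 12 and 4) and the layer at which the action enters the critic are assumptions for illustration, as the text does not specify them.
\begin{verbatim}
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Maps the state [q, dq, q_req] to torques in [-1, 1] via tanh."""
    def __init__(self, state_dim=12, action_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 400), nn.LeakyReLU(),
            nn.Linear(400, 300), nn.LeakyReLU(),
            nn.Linear(300, action_dim), nn.Tanh())
    def forward(self, x):
        return self.net(x)

class Critic(nn.Module):
    """Maps a (state, action) pair to a scalar Q value."""
    def __init__(self, state_dim=12, action_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 400), nn.LeakyReLU(),
            nn.Linear(400, 300), nn.LeakyReLU(),
            nn.Linear(300, 1))
    def forward(self, x, u):
        return self.net(torch.cat([x, u], dim=-1))

actor, critic = Actor(), Critic()
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
\end{verbatim}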
In order to achieve the required position and torque constraints, the reward function was designed as follows: if the position is within the allowed bounds, the reward is a Gaussian function that penalizes the agent when the joint position is not close to the request and gives a positive reward when the position matches or is close to the request. On the other hand, when the agent goes beyond the allowed bounds it is penalized with a large negative number ($-10$). Formally, the reward is defined as follows:
\begin{align} \label{eq:reward}
r_t =
\begin{cases}
-1 + e^{ - \frac{1}{2}(\frac{\mathbf{x}- \mathbf{x}_{ref}}{\sigma})^2}, & \text{if } \mathbf{x}_{min} < \mathbf{x}_t < \mathbf{x}_{max} \\
-10, & \text{otherwise}
\end{cases}
\end{align}
\noindent with $\sigma$ being a parameter that shapes the Gaussian function. For the experiments presented here we utilized $\sigma = 0.018$.
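The reward of Eq. (\ref{eq:reward}) can be written compactly as follows; summing the squared normalized joint errors inside the exponential is one plausible reading of the vector case, used here for illustration only.
\begin{verbatim}
import numpy as np

def reward(x, x_ref, x_min, x_max, sigma=0.018):
    """Gaussian shaping inside the bounds, -10 outside."""
    x, x_ref = np.asarray(x), np.asarray(x_ref)
    if np.all(x > x_min) and np.all(x < x_max):
        return -1.0 + np.exp(-0.5 * np.sum(((x - x_ref) / sigma) ** 2))
    return -10.0
\end{verbatim}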
For the training of the agent, a different random goal was selected within the allowed workspace for each epoch of training. The agent was trained for a total of 2000 epochs, with each epoch lasting 20 seconds of real-time operation. Initially, random noise is added to the actions of the agent to allow for exploration, such that $\mathbf{u}_t = \pi(\mathbf{x}_t) + \epsilon N $, with $N$ being Ornstein–Uhlenbeck noise and $\epsilon$ decaying linearly over time from $1$ to $0.1$.
The minibatch size used for training is 64, with $\tau = 0.001$ and $\gamma = 0.99$.
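For reference, the exploration noise process $N$ can be sketched as below; the $\theta$ and $\sigma$ values are common DDPG defaults and are assumptions here, as the text does not report them.
\begin{verbatim}
import numpy as np

class OUNoise:
    """Ornstein-Uhlenbeck process: temporally correlated exploration."""
    def __init__(self, dim, theta=0.15, sigma=0.2, dt=1e-2):
        self.theta, self.sigma, self.dt = theta, sigma, dt
        self.state = np.zeros(dim)
    def sample(self):
        self.state += (-self.theta * self.state * self.dt
                       + self.sigma * np.sqrt(self.dt)
                       * np.random.standard_normal(self.state.shape))
        return self.state

# Action with decaying exploration: u_t = pi(x_t) + eps * noise.sample(),
# with eps decayed linearly from 1 to 0.1 over training.
\end{verbatim}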
\section{Results}
\label{sec:results}
For the presented results, the agent was trained as previously stated and the policy is executed without the addition of exploratory noise, effectively making $\epsilon = 0$. Furthermore, the nonlinear model has been degraded, with some parameters changed randomly, to test the adaptability of the agent, and random noise is introduced into the velocity and position readings. While the Reach 5 Alpha has 5 DOF, the last degree of freedom corresponds to the gripper joint, which we are not interested in controlling; as such, all results are shown for the first 4 DOF.
A simulation using the trained RL agent was run on the Reach Alpha manipulator under normal operating conditions with a reference joint position of ${\mathbf{x}_{ref} = (2.64, 0.26, -1.47, 0.82)}$ [rad]. Fig. \ref{fig:result1a} shows the joint positions while being controlled by the RL agent, Fig. \ref{fig:result1b} shows the control actions, and Fig. \ref{fig:result1c} shows the position errors. In Fig. \ref{fig:result1a} it can be seen how the agent reaches the desired position in less than two seconds, without any overshoot, even when the requested position requires a long rotation of over two radians in Joint 1. The lack of overshoot demonstrates that the agent is capable of behaving without breaking any of the position constraints imposed during training. The agent initially utilizes the maximum available torque, as can be seen in Fig. \ref{fig:result1b}, and then applies small corrections to keep the joints in position. Fig. \ref{fig:result1c} shows how the errors are rapidly reduced, with no steady-state error present.
Another example shows the behaviour of the arm when a completely different reference point is selected, ${\mathbf{x}_{ref} = (-1.78, 0.11, -2.14, -2.26)}$ [rad]. Fig. \ref{fig:result2a} shows the obtained positions when using the RL controller. As can be seen, the requested position is again reached in under two seconds, without any overshoot. Again the agent initially utilizes high levels of torque, with lower levels after the requested joint positions have been reached, as depicted in Fig. \ref{fig:result2b}. Furthermore, no steady-state error is present, as illustrated in Fig. \ref{fig:result2c}.
The example presented here aims to test the behavior of the agent under torque constraints. In this example, the torque output of Joint 1 is reduced by 75\%, with a requested position of $\mathbf{x}_{ref} = (2.,-1., -1.75, 1.5)$ [rad]. In Fig. \ref{fig:resultta} the achieved positions are shown, where it can be seen that the agent is capable of rapidly reaching the desired positions for Joints 2, 3 and 4, while Joint 1 takes around 5 seconds due to the new torque restrictions, with no overshoot present. This can also be seen in Fig. \ref{fig:resulttc}, where the errors are shown: Joint 1 takes longer to reach the request, but neither steady-state error nor overshoot is present. Additionally, Fig. \ref{fig:resulttb} shows the torque output, where the reduced torque of Joint 1 can be clearly seen.
\begin{figure*}[t]
\centering
\subfloat[RL: Joint Position]{%
\includegraphics[height=4.5cm,valign=c]{figures/rl_position_compare.png}%
\label{fig:result_compare_rl_pos}%
} \hfil
\subfloat[RL: Torque Output]{%
\includegraphics[height=4.5cm,valign=c]{figures/rl_action_compare.png}%
\label{fig:result_compare_rl_act}}%
\subfloat[RL: Joint Errors]{%
\includegraphics[height=4.5cm,valign=c]{figures/rl_error_compare.png}%
\label{fig:result_compare_rl_err}}%
\caption{Comparative results using RL for $\mathbf{x}_{ref} = (2.13, -0.74, -1.03, 2.51)$ [rad]}
\label{fig:result_compare_rl}
\end{figure*}
\begin{figure*}[t]
\centering
\subfloat[MPC: Joint Position]{%
\includegraphics[height=4.5cm,valign=c]{figures/mpc_position_compare.png}%
\label{fig:result_compare_mpc_pos}%
} \hfil
\subfloat[MPC: Torque Output]{%
\includegraphics[height=4.5cm,valign=c]{figures/mpc_action_compare.png}%
\label{fig:result_compare_mpc_act}}%
\subfloat[MPC: Joint Errors]{%
\includegraphics[height=4.5cm,valign=c]{figures/mpc_error_compare.png}%
\label{fig:result_compare_mpc_err}%
} \hfil
\caption{Comparative results using MPC for $\mathbf{x}_{ref} = (2.13, -0.74, -1.03, 2.51)$ [rad]}
\label{fig:result_compare_mpc}
\end{figure*}
\subsection{Comparative results}
In this section we introduce a comparison between the proposed RL agent and an MPC controller. The cost function of the MPC controller is $ J = \sum_{k = 0}^{N} || \mathbf{x}_{ref} - \mathbf{x}_{t+k} ||^2_{{\textbf{Q}}(t)} + || \Delta \mathbf{u}_{t+k}||^2_{{\textbf{R}}(t)}$. The gains $\textbf{Q}$ and $\textbf{R}$ of the cost function were tuned accordingly. As before, the simulated Reach 5 Alpha arm is used, where a desired position ($\mathbf{x}_{ref}$) should be attained.
An experiment is presented in which a desired reference position of $\mathbf{x}_{ref} = (2.13, -0.74, -1.03, 2.51)$ [rad] is selected. Fig. \ref{fig:result_compare_rl} shows the obtained joint positions when using the proposed RL agent, while Fig. \ref{fig:result_compare_mpc} shows the results when utilizing the baseline MPC controller. While the MPC controller is able to reach the required position for Joint 1 in less than 2.5 seconds (Fig. \ref{fig:result_compare_mpc_pos}), the rest of the joints take longer. The RL agent, on the other hand, is faster and presents no overshoot (Fig. \ref{fig:result_compare_rl_pos}). In addition, the RL agent presents no steady-state error, as can be seen in Fig. \ref{fig:result_compare_rl_err}, while the MPC shows some error in the steady state, as Fig. \ref{fig:result_compare_mpc_err} shows. While both controllers initially apply high levels of torque, the MPC, shown in Fig. \ref{fig:result_compare_mpc_act}, seems to require less torque once the steady state is reached, compared to the RL agent in Fig. \ref{fig:result_compare_rl_act}.
A series of experiments was performed in which random positions were selected and a number of metrics were obtained in order to compare the performance of the two algorithms. These metrics include the average energy~consumed~(E) in Joules, the Root Mean Square Error (RMSE), the Mean Integral Error (MIE), the Mean Steady-State Error (MSSE), the Overshoot (OS) in percent, and the Settling Time (ST) in seconds. A set of 20 experiments was conducted; the obtained results can be seen in Table \ref{tb:result}.
\begin{table}[!ht]
\setlength{\tabcolsep}{6pt}
\renewcommand{\arraystretch}{1.5}
\centering
\caption{Comparative RL vs MPC}
\begin{tabular}{c|cccccc}
\hline
Algorithm & E [J] & RMSE & MIE & MSSE & OS[$\%$] & ST [s] \\ \hline
RL & 19.86 & 0.23 & 58.27 & 0.0033 & 1.43 & 6.26 \\
MPC & 13.8 & 0.27 & 97.64 & 0.033 & 18.21 & 7.96 \\ \hline
\end{tabular}
\label{tb:result}
\end{table}
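For reproducibility, the step-response metrics can be computed per joint along the following lines; the settling band and the tail window used for the steady-state error are assumptions, as the exact definitions are not stated above.
\begin{verbatim}
import numpy as np

def step_metrics(t, q, q_ref, band=0.02):
    """RMSE, steady-state error, overshoot [%], settling time [s]."""
    err = q - q_ref
    step = q_ref - q[0]                        # commanded step (nonzero)
    rmse = np.sqrt(np.mean(err ** 2))
    msse = np.abs(err[-len(q) // 10:]).mean()  # mean error over the tail
    os_pct = 100 * max(0.0, np.max(np.sign(step) * (q - q_ref))
                       / abs(step))
    inside = np.abs(err) <= band * abs(step)
    out = np.where(~inside)[0]
    st = t[min(out[-1] + 1, len(t) - 1)] if out.size else t[0]
    return rmse, msse, os_pct, st
\end{verbatim}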
The presented metrics show a much more favorable performance for the RL agent. Both the MIE and MSSE are significantly lower, while the RMSE also presents lower values for the RL implementation. The OS is practically non-existent for the RL agent compared to the MPC, while the ST is over a second shorter for the RL, making it the faster solution. The only disadvantage is seen with regard to the energy consumption (E), which is lower for the MPC controller. However, the RL controller does not take energy consumption into account, as this was not the main focus of the proposal. Overall, the presented results show the benefits of the RL controller.
\section{Conclusions}
\label{sec:conclusions}
In this article we presented a novel strategy for the low-level control of an underwater manipulator under position and torque constraints, based on a reinforcement learning formulation. The actions selected by the actor are directly the torque commands sent to the manipulator, while the state is determined by the current positions and velocities together with the desired joint positions. By including the goal in the state, we are able to generalize over different control requests, a fundamental requirement for a control system.
The data-driven approach provided by RL avoids the use of complex models and is able to adapt to changes in the operating conditions. For instance, sudden changes that limit the normal operation of the system, such as obstacles in the working space, failure of a motor, and others, can cause reduced joint movement, leading to a limited range of joint positions and/or a limited torque range. Such constraints can be difficult to handle using classical controllers due to the lack of accurate model information and poor tuning of the controller. Reinforcement learning controllers, on the other hand, are able to obtain highly non-linear policies that can operate within the required boundaries and adapt to new requirements.
As future work, the authors suggest the implementation of the algorithm on the Reach 5 Alpha arm as well as on other manipulators to test the adaptability of the proposal. Furthermore, investigating the possibility of reducing the higher energy consumption, when compared with the MPC, could be of interest for autonomous operations.
\bibliographystyle{unsrt}
\section{Introduction}
It is widely known that, for high-luminosity pp operations at the LHC, it is neither possible nor desirable to record every single collision, both due to bandwidth and data storage constraints. Both the ATLAS~\cite{ATLASPaper} and CMS~\cite{CMSPaper} experiments employ \emph{trigger systems} that do a quasi-realtime (online) analysis of the collected data stream to select the most interesting events for permanent recording. For Run~2, both experiments employ a two-tiered system: a Level-1 trigger, implemented in hardware with partial detector readout, that reduces the data rate to $\mathcal{O}$(100~kHz); and a High-Level Trigger, implemented in software with full detector readout, that further reduces the data rate to $\mathcal{O}$(1~kHz). The bandwidth allocation is generally driven by physics priorities in both experiments, with most resources allocated to inclusive triggers that can be used for many different studies in the experiment. The complexity of the online selection is comparable to that of the offline analysis: in ATLAS, 2000 active trigger chains were running in 2016 pp data taking, whilst CMS had $\mathcal{O}$(300) L1 seeds and $\mathcal{O}$(500) HLT paths for the same period.
\section{ATLAS and CMS Trigger Improvements for Run~2}
For Run~2, both experiments have upgraded their trigger systems to cope with the increased centre-of-mass energy and luminosity conditions. ATLAS has deployed an improved topological trigger (L1Topo) that allows finer selection on quantities from L1Calo and L1Muon as well as on composite quantities like \MET{} and \HT{}. Both the L1Calo and the L1Muon systems have been improved as well; L1Calo is now equipped with digital autocorrelation Finite Impulse Response (FIR) filters, the capability to perform dynamic, bunch-by-bunch pedestal correction, and a new set of energy-dependent L1 EM isolation criteria for the cluster processor. The L1Muon system, on the other hand, has been improved to require additional coincidence with the TGC inner chambers to reduce trigger rates in the endcap (see Fig.~\ref{fig:L1Improvements}), whilst new RPC chambers enhanced the trigger coverage by 4\% in the barrel region. ATLAS has also merged the L2 and Event Filter farms to allow for more flexible optimisations, moving away from the three-tiered system used during Run~1.
CMS has also improved their systems: the Level-1 Calorimeter Trigger now performs event-by-event pileup subtraction (see Fig.~\ref{fig:L1Improvements}) and is equipped with advanced algorithms for dynamic clustering; for electron/photon selection this allows for bremsstrahlung recovery, whilst for tau selection it allows merging the treatment of multiprong decays and isolation. On the Level-1 Muon side, CMS now combines the subdetector (DT, RPC, CSC) information at an earlier stage, optimising the selection separately for the three regions of barrel, overlap and endcap. On the High-Level Trigger front, CMS migrated their software framework to a fully multithreaded model in 2016, and completely reoptimised their track reconstruction for the Phase-1 pixel upgrade that happened in 2017.
\begin{figure}[htbp]
\centering
\includegraphics[height=2in]{FI-coin.pdf}
\includegraphics[height=2in]{xy_caloMetBE_vs_l1Met.pdf}
\caption{ATLAS and CMS L1 trigger improvements. Left: rate reduction in ATLAS endcap muon triggers thanks to additional coincidence with TGC inner chambers~\cite{ATLASTrigger:2016}. Right: correlation between \MET{} calculated at L1 and in the offline analysis, demonstrating the effectiveness of the event-by-event pileup subtraction~\cite{CMSL1Calo:2016}.}
\label{fig:L1Improvements}
\end{figure}
\clearpage
\section{Electron and Photon Triggers}
Triggering on events containing electrons and photons in the LHC collisions is complicated by the large backgrounds from multijet events; a hadronic jet can be easily misidentified as a single e/$\gamma$, especially if enriched in electromagnetic component. To mitigate that phenomenon, the experiments deploy identification and isolation algorithms already at the online selection, both at the L1 trigger and at the HLT. The prototype trigger algorithm is the \emph{single electron trigger}, which generally tries to reconstruct and identify the energy deposit in the electromagnetic calorimeter and match it with a charged particle track; photon triggers are similarly built, but without the tracking steps.
In order to cope with the harsher conditions, ATLAS moved from a cut-based identification procedure in Run~1 to a more sophisticated, likelihood-based identification in Run~2. For isolation, at L1 they rely on the energy in the EM calorimeter deposited in a ring around the electron cluster, whilst at HLT they calculate isolation based on tracks located within a variable-size cone around the reconstructed e/$\gamma$. Three trigger algorithms were primarily used by ATLAS for electrons in 2016: a low-\PT{} trigger (26~GeV) with tight identification and isolation criteria; a medium-\PT{} trigger (60~GeV) with medium identification; and a high-\PT{} trigger (140~GeV) with loose identification. \emph{Photon triggers} follow generally the same strategy, with low-\PT{} single legs of diphoton triggers (22, 25 and 35~GeV) and a high-\PT{} (140~GeV) single photon trigger. Example performance plots for ATLAS e/$\gamma$ triggers can be seen in Fig.~\ref{fig:ATLASElectronPhotonTurnOn}.
\begin{figure}[htb]
\centering
\includegraphics[height=1.9in]{Eff_Et_singleOR_full2016.png}
\includegraphics[height=1.9in]{Eff_Et_photon_22t_25l_35l_140l_full2016.png}
\caption{Left: ATLAS turn-on curve for the inclusive OR of the three electron triggers:
\texttt{HLT\_e26\_\allowbreak{}lhtight\_\allowbreak{}nod0\_ivarloose},
\texttt{HLT\_e60\_lhmedium\_nod0},
\texttt{HLT\_e140\_lhloose\_nod0}~\cite{ATLASOverallTRG:2016}.
Right: ATLAS turn-on curves for single photon triggers and legs of diphoton triggers:
\texttt{HLT\_g22\_tight},
\texttt{HLT\_g25\_tight},
\texttt{HLT\_g35\_tight},
\texttt{HLT\_g140\_tight}~\cite{ATLASOverallTRG:2016}.}
\label{fig:ATLASElectronPhotonTurnOn}
\end{figure}
CMS deploys a similar strategy for e/$\gamma$ triggers. In 2016 three main single-electron trigger paths were used. Two of those paths, with low \PT{} thresholds (27 and 25~GeV), had tight identification and isolation criteria based both on calorimeter and silicon tracker information, with the former path having full tracker coverage and the latter trading restricted coverage in pseudorapidity ($|\eta| < 2.1$) for a lower threshold in \PT{}. The third path is primarily aimed at searches for new physics and eschews isolation completely, but has to compensate with a very high transverse momentum threshold (105~GeV). Triggers for double electrons, single and double photons generally follow the same lines; CMS also deploys both low-\PT{}, double photon paths with selections on the invariant mass and high-\PT{}, single photon paths. Example performance plots for CMS electron triggers can be seen in Fig.~\ref{fig:CMSElectronTurnOn}.
\begin{figure}[htb]
\centering
\includegraphics[height=1.9in]{TriggerEfficiency_vs_PT_ElectronTriggers_Barrel_2016.pdf}
\includegraphics[height=1.9in]{TriggerEfficiency_vs_PT_ElectronTriggers_Endcap_2016.pdf}
\caption{CMS turn-on curves for the single electron trigger paths
\texttt{HLT\_Ele27\_WPTight},
\texttt{HLT\_Ele25\_eta2p1\_WPTight}
and for the single legs of the double electron trigger paths
\texttt{HLT\_Ele23\_CaloIdL\_TrackIdL\_IsoVL},
\texttt{HLT\_Ele12\_CaloIdL\_TrackIdL\_IsoVL}, for the electromagnetic calorimeter barrel (left) and endcap (right)~\cite{CMSHLTEGamma:2016}.}
\label{fig:CMSElectronTurnOn}
\end{figure}
\section{Muon Triggers}
Both CMS and ATLAS deploy trigger paths that select events containing muons, specialized for different kinds of physics: prompt muons for electroweak, top and Higgs studies, low energy muons for B physics, amongst others. The CMS HLT system considers two types of reconstructed muons: for \emph{standard muons} the system reconstructs hits in the muon system and propagates them back to the silicon tracker. For \emph{tracker muons}, instead, the HLT runs a regional iterative tracking algorithm, matched to muon system chambers. The ATLAS HLT also has two reconstruction strategies: ``muon system + inner detector'' combined as the standard reconstruction and muon system \emph{stand-alone muons} for special uses. In addition to local area reconstruction seeded from L1 location, ATLAS also deploys multi-muon triggers that, upon firing of the single L1 muon trigger, search the full detector for additional muons -- the \emph{full-scan algorithm}.
For the benchmark single muon trigger, CMS adopts a two-pronged strategy. For triggering muons with low \PT{}, both for standard model physics and most new physics searches, the experiment deploys paths with isolation both for the standard and tracker muon reconstruction. CMS also triggers on high \PT{} muons with no isolation for special cases like boosted Z bosons and lepton+jets searches. For Run~2 triggers, CMS changed their isolation strategy to have independent selections on track, electromagnetic and hadronic isolations instead of a combined one, leading to an increase in efficiency.
ATLAS adopts a very similar strategy, having in 2016 a trigger selection \texttt{mu26\_imedium OR mu50} seeded by L1 muons with $\PT > 20\GeV$. They deploy muon isolation based on inner detector tracks, and for 2016 introduced \emph{variable-cone isolation}, with the radius depending on the muon \PT{}, whilst in 2015 and Run~1 fixed cone sizes were used; this improvement led to a more robust performance against pileup. Example performance plots for muon triggers for both experiments can be seen in Fig.~\ref{fig:MuonTurnOn}.
\begin{figure}[htb]
\centering
\includegraphics[height=1.9in]{Total_IsoTkMu24_pt.pdf}
\includegraphics[height=1.9in]{HLT_mu26_ivarmedium_OR_HLT_mu50_Medium_IsoFixedCutTightTrackOnly_endcap_NRecVtx_eff.png}
\caption{Left: CMS turn-on curve for the inclusive OR of the \texttt{HLT\_IsoMu24} and \texttt{HLT\_IsoTkMu24} paths that implement the two kinds of reconstruction strategy (standard and tracker muons) adopted by CMS; both paths are seeded by a L1 muon with $\PT{} > 22 \GeV$~\cite{CMSHLTRun2:2016}. Right: ATLAS trigger efficiency as function of the number of reconstructed vertices for the L1 seed \texttt{L1\_MU20} and for the \texttt{HLT\_mu26\_ivarmedium} OR \texttt{HLT\_mu50} algorithms in the $1.05 < |\eta^\mu| < 2.4$ region, demonstrating the robustness against pileup effects~\cite{ATLASOverallTRG:2016}.}
\label{fig:MuonTurnOn}
\end{figure}
\clearpage
\section{Hadronic Triggers}
Both ATLAS and CMS have dedicated trigger paths to select collisions with high energy hadronic jets. The prototype trigger algorithm is the \emph{single jet trigger}. Both experiments reconstruct jets with the anti-\kt{} algorithm with different radii for jets from standard QCD production ($R$ = 0.4) and for hadronic decays of boosted massive objects that are reconstructed as a single jet ($R$ = 1.0 for ATLAS, 0.8 for CMS).
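For reference, the distance measures that drive anti-\kt{} clustering can be sketched as follows; this minimal Python fragment computes the measures only, while the full algorithm iteratively merges the pair with the smallest $d_{ij}$, or promotes a particle to a jet when its beam distance $d_{iB}$ is smallest.
\begin{verbatim}
import numpy as np

def antikt_distances(pt, y, phi, R):
    """d_ij = min(pt_i^-2, pt_j^-2) * dR_ij^2 / R^2 ;  d_iB = pt_i^-2."""
    inv2 = pt ** -2.0
    dphi = np.abs(phi[:, None] - phi[None, :])
    dphi = np.minimum(dphi, 2 * np.pi - dphi)
    dr2 = (y[:, None] - y[None, :]) ** 2 + dphi ** 2
    dij = np.minimum(inv2[:, None], inv2[None, :]) * dr2 / R ** 2
    np.fill_diagonal(dij, np.inf)
    return dij, inv2   # pairwise measures and beam distances
\end{verbatim}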
The ATLAS strategy is to construct jets from calorimetric topo-clusters and use jet area subtraction for pileup suppression. In 2016 the experiment relied on a simulation-based energy calibration procedure, whilst for 2017 they also deployed a set of data-driven dijet $\eta$ intercalibration corrections and a procedure for global sequential corrections that, based on the jet longitudinal shape and its associated tracks characteristics, enhance the resolution whilst keeping the average jet energy scale unchanged.
On the other hand, the CMS strategy relies primarily on the particle-flow (PF) algorithm; in 2016 a large effort was made to align PF between online and offline reconstruction whilst still keeping the former within the timing budget. After a preselection based on calorimetric jets, jets are built from PF candidates. A set of sequential corrections is applied to the jets: a pileup correction, based on the offset event energy density ($\rho$), and a relative correction to make the jet response uniform over $\eta$ and \PT{}. Example performance plots for jet triggers for both experiments can be seen in Fig.~\ref{fig:JetTurnOn}.
\begin{figure}[htb]
\centering
\includegraphics[height=1.8in]{2016-11-29-gsc.png}
\includegraphics[height=1.8in]{6_JetTurnOn_SingleMuon_Run2016G.pdf}
\setlength{\abovecaptionskip}{7pt}
\caption{Left: ATLAS turn-on curve for the \texttt{HLT\_j380} and \texttt{HLT\_j400} paths, demonstrating the effect of the updated calibrations~\cite{ATLASOverallTRG:2016}.
Right: CMS turn-on curve for different single jet paths, ranging from \PT{} thresholds of 40~GeV to 500~GeV~\cite{CMSHLTRun2:2016}.}
\label{fig:JetTurnOn}
\end{figure}
Both experiments also deploy missing \ET{} (\MET{}) trigger paths, particularly targeting searches for BSM physics signals like dark matter. ATLAS benefits greatly from its newly unified L2/EF HLT structure, being able to run offline-quality full-detector reconstruction
directly after the L1 trigger decision. They deploy various \MET{} reconstruction methods: whilst in 2015 a topo-cluster based approach was used, in 2016 the default algorithm was based on reconstruction of the \MET{} from jets (\MHT{}). In order to help reduce the trigger rate at high pileup, advanced algorithms that either fit the pileup effect or combine jet-based and cell-based information are being investigated for 2017. CMS again employs a strategy of preselection on calorimetric \MET{} followed by particle-flow \MET{} reconstruction; they also use combined selections on \MET{} and \MHT{} to keep the trigger rate under control whilst retaining high efficiency for events with real momentum imbalance. Finally, both experiments also have trigger paths that select events with a large amount of hadronic activity by applying thresholds on the scalar sum of all jets above a given threshold (\HT{}). Example performance plots for missing energy triggers for both experiments can be seen in Fig.~\ref{fig:METTurnOn}.
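The jet-based quantities used here can be illustrated with the short sketch below; the 30~GeV jet threshold is a placeholder, not the thresholds used by the experiments.
\begin{verbatim}
import numpy as np

def ht_and_mht(jet_pt, jet_phi, pt_min=30.0):
    """HT: scalar sum of selected jet pT; MHT: magnitude of the
    negative vector sum of the same jets (jet-based MET proxy)."""
    sel = jet_pt > pt_min
    ht = jet_pt[sel].sum()
    px = -(jet_pt[sel] * np.cos(jet_phi[sel])).sum()
    py = -(jet_pt[sel] * np.sin(jet_phi[sel])).sum()
    return ht, np.hypot(px, py)
\end{verbatim}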
\begin{figure}[htb]
\centering
\includegraphics[height=1.65in]{METTurnOn_Zmumu.pdf}
\includegraphics[height=1.65in]{Efficiency_HLT_PFMET170_2016G_PtS.pdf}
\setlength{\abovecaptionskip}{7pt}
\caption{Left: ATLAS turn-on curves for different \MET{} algorithms: the \texttt{mht} algorithm reconstructs the \MET{} from jets, whilst the \texttt{pufit} and \texttt{cell} employ additional techniques to minimize the effects of pileup~\cite{ATLASOverallTRG:2016}. Right: CMS turn-on for the \texttt{HLT\_PFMET170} path, seeded by a full set of L1 seeds with thresholds up to 120~GeV~\cite{CMSHLTRun2:2016}. Both offline and online \MET{} are reconstructed with the Particle Flow algorithm.}
\label{fig:METTurnOn}
\end{figure}
\section{Other Trigger Paths}
From the basic objects, a multitude of different trigger paths may be constructed targeting specific event topologies. Jet triggers can be enhanced by requesting jets to b-tagged or boosted-tagged; a completely different procedure may be used to select events where an hadronic tau is to be reconstructed instead. \HT{} triggers can be enhanced by optionally selecting a minimum jet multiplicity and/or special tags in the jets. Lepton + jet paths can be used to explore regions of the phase space where requiring the presence of only one object would be prohibitive due to the high thresholds that would be needed.
\vspace*{-4pt}
\section{Conclusions and Outlook}
The LHC Run~2 brought harsher conditions upon the ATLAS and CMS experiments, with 13~TeV centre-of-mass energy and 25~ns bunch spacing for pp collisions. During 2016, the instantaneous luminosity went up to 1.4$\times$10\textsuperscript{34}~cm\textsuperscript{-2}~s\textsuperscript{-1} -- and will increase further in 2017. The trigger systems of both experiments were improved to cope with these conditions whilst maintaining physics performance, both by deploying more powerful hardware as well as using better data reconstruction and selection algorithms. More than 30~fb\textsuperscript{-1} of pp collision data were taken during Run~2 up until now, and improvements are still ongoing. Meanwhile, both experiments are still working on the upgrades for the LHC Run~3 and the High Luminosity LHC.
\bigskip \bigskip \begin{center} \begin{large}
\textbf{Acknowledgments}
\end{large} \end{center}
This material is based upon work supported by the S\~ao Paulo Research Foundation (FAPESP) under grants No.~2013/01907-0 and 2016/15897-4.
\section*{Main Text}
\subsection*{Introduction}
Coherent control of quantum systems is a fundamental element of quantum
technologies, which could revolutionize the fields of information processing, simulation, and sensing. A powerful and universal method
to achieve this control is the quantum adiabatic technique, whose intrinsic robustness against control
errors is ensured by the quantum adiabatic evolution~\cite{Childs2001}.
Besides important applications in quantum state engineering~\cite{RMP2017, Vitanov2001}, quantum simulation~\cite{Aspuru2005, kim2010, biamonte2011adiabatic}, and quantum computation~\cite{farhi2000quantum, jones2000geometric, Farhi2001, Barends2016,Xu2017}, the quantum adiabatic evolution itself also provides interesting
properties such as Abelian~\cite{berry1984quantal} or non-Abelian geometric phases~\cite{wilczek1984appearance},
which can be used for the realization of quantum gates.
However, the conventional quantum adiabatic theorem~\cite{Born1928, MessiahBook},
which dates back to the idea of extremely slow and reversible change in classical mechanics~\cite{Laidler1994, Born1928},
imposes a speed limit on quantum adiabatic methods: for a quantum process to remain adiabatic, the changes to the system Hamiltonian at all times must be much smaller than the energy gap of the Hamiltonian.
On the other hand, in order to avoid perturbations from the environment, high rates of change are desirable. This tension can impose severe limitations on the practical use of adiabatic methods.
Despite the long history and broad applicability, it was
discovered recently that key aspects of quantum adiabatic evolution remain not fully understood~\cite{marzlin2004inconsistency, tong2005quantitative}
and the condition in the conventional adiabatic theorem is not necessary for quantum adiabatic evolution~\cite{Du2008, wang2016necessary}.
In this work, using an NV center~\cite{doherty2013nitrogen} in diamond, we experimentally demonstrate adiabatic evolutions with vanishing energy gaps and energy level crossings, which are nevertheless allowed by a recently proven quantum adiabatic condition~\cite{wang2016necessary} based on dynamic phases instead of energy gaps.
In addition, we reveal that employing discrete jumps along the evolution path allows quantum adiabatic processes at unlimited rates, which challenges the view that adiabatic processes must be slow. By jumping along the path, one can even avoid path points where the eigenstates of the Hamiltonian are not feasible in experiments. Furthermore, we demonstrate theoretically and experimentally the elimination of all non-adiabatic effects on the system evolution within a finite evolution time by driving the system along the geodesic that connects the initial and final states, as well as combating system decoherence by incorporating pulse sequences into the adiabatic driving.
\subsection*{Experimental study of the necessary and sufficient quantum adiabatic condition}
To describe the theory for experiments, consider a quantum system driven
by a Hamiltonian $H(\lambda)$ for adiabatic evolution. In terms of its instantaneous orthonormal eigenstates $|\psi_{n}(\lambda)\rangle$
($n=1,2,\ldots$) and eigenenergies $E_{n}(\lambda)$, the Hamiltonian is written as $H(\lambda)=\sum_{n}E_{n}(\lambda)|\psi_{n}(\lambda)\rangle\langle\psi_{n}(\lambda)|$. For a given continuous finite evolution path, $|\psi_{n}(\lambda)\rangle$ changes gradually with the configuration parameter $\lambda$. In our experiments, $\lambda$ corresponds to an angle in some unit and is tuned in time such that $\lambda=\lambda(t)\in[0,1]$. The system dynamics driven by the Hamiltonian
is fully determined by the corresponding evolution propagator $U(\lambda)$.
It is shown that one can decompose the propagator $U(\lambda)=U_{\rm{adia}}(\lambda) U_{\rm{dia}}(\lambda)$ as the product of a quantum adiabatic evolution propagator $U_{\rm{adia}}(\lambda)$ that describes the ideal quantum evolution in the adiabatic limit and a diabatic propagator $U_{\rm{dia}}(\lambda)$ that includes \emph{all} the diabatic errors~\cite{wang2016necessary}. In the adiabatic limit, $U_{\rm{dia}}(\lambda)=I$ becomes an identity matrix and the adiabatic evolution
$U=U_{\rm{adia}}(\lambda)$ fully describes the geometric phases~\cite{berry1984quantal,wilczek1984appearance} and dynamic phases accompanying the adiabatic evolution (that is, the deviation from adiabaticity $U-U_{\rm{adia}}$ vanishes). This decomposition guarantees that both $U_{\rm{adia}}(\lambda)$ and $U_{\rm{dia}}(\lambda)$ are gauge invariant, i.e., invariant with respect to any chosen state basis.
According to the result of~\cite{wang2016necessary}, the error part
satisfies the first-order differential equation ($\hbar=1$),
\begin{equation}
\frac{d}{d\lambda}U_{\rm{dia}}(\lambda)=iW(\lambda)U_{\rm{dia}}(\lambda),\label{eq:Udia}
\end{equation}
with the boundary condition $U_{\rm{dia}}(0)=I$. The generator $W(\lambda)$ describes \emph{all} the non-adiabatic transitions.
In the basis of $|\psi_{n}(0)\rangle$, the diagonal matrix elements of $W(\lambda)$
vanish, i.e., $\langle\psi_{n}(0)|W(\lambda)|\psi_{n}(0)\rangle=0$.
The off-diagonal matrix elements
\begin{equation}
\langle\psi_{n}(0)|W(\lambda)|\psi_{m}(0)\rangle=e^{i\phi_{n,m}(\lambda)}G_{n,m}(\lambda)\label{eq:W}
\end{equation}
are responsible for non-adiabaticity. Here $\phi_{n,m}(\lambda)\equiv\phi_{n}(\lambda)-\phi_{m}(\lambda)$ is the
difference of the accumulated dynamic phases $\phi_{n}(\lambda)$ on
$|\psi_{n}(\lambda)\rangle$, and
the geometric part $G_{n,m}(\lambda)=e^{i\left[\gamma_{m}(\lambda)-\gamma_{n}(\lambda)\right]}g_{n,m}(\lambda)$ consists of the geometric functions $g_{n,m}(\lambda)=i\langle\psi_{n}(\lambda)|\frac{d}{d\lambda}|\psi_{m}(\lambda)\rangle$ and the geometric phases $\gamma_{n}(\lambda)=\int_{0}^{\lambda}g_{n,n}(\lambda^{\prime})d\lambda^{\prime}$.
Equation~\ref{eq:W} shows that the differences
of dynamic phases $\phi_{n,m}$ are more fundamental than the energy gaps in suppressing
the non-adiabatic effects, because the energies $E_{n}$ do not explicitly appear in these equations.
Indeed, according to~\cite{wang2016necessary}, when the dynamic phase factors at different path points add destructively
\begin{equation}
\epsilon_{n,m}(\lambda)=\left|\int_{0}^{\lambda}e^{i\phi_{n,m}(\lambda^{\prime})}d\lambda^{\prime}\right|<\epsilon, \label{eq:avg}
\end{equation}
for $n\neq m$ and any $\lambda\in[0,1]$ of a finite path with bounded $G_{n,m}(\lambda)$ and $\frac{d}{d\lambda}G_{n,m}(\lambda)$, the deviation from adiabaticity
can be made arbitrarily small by reducing $\epsilon$ with a scaling factor determined by
the magnitudes of $G_{n,m}(\lambda)$ and $\frac{d}{d\lambda}G_{n,m}(\lambda)$.
That is, the operator norm $||U_{\rm{dia}}(\lambda)-I||<\sqrt{\epsilon}(G_{\rm{tot}}^2+G_{\rm{tot}}^\prime)\lambda^2+(\sqrt{\epsilon}+\epsilon) G_{\rm{tot}}$, where
$G_{\rm{tot}}=\sum_{n\neq m}{\rm{max}}|G_{n,m}(\lambda^\prime)|$ and $G_{\rm{tot}}^\prime=\sum_{n\neq m}{\rm{max}}|\frac{d}{d\lambda^\prime}G_{n,m}(\lambda^\prime)|$ for $0<\lambda^\prime\leq\lambda$~\cite{wang2016necessary}.
In the limit $\epsilon\rightarrow 0$
the system evolution is adiabatic along the entire finite path with $U_{\rm{dia}}(\lambda)\rightarrow I$.
For a zero gap throughout the evolution path, the evolution is not adiabatic because $\epsilon_{n,m}(\lambda)=\lambda$ is not negligible due to the constructive interference of the dynamic phase factors at different path points. For a large constant gap, the destructive interference gives a negligible $\epsilon_{n,m}$ and hence an adiabatic evolution.
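The role of the interference in Eq.~\ref{eq:avg} can be checked numerically; the sketch below (arbitrary units, with an illustrative gap and total time) evaluates $\epsilon_{1,2}(1)$ for a constant gap and for a zero gap, reproducing the destructive and constructive cases just discussed.
\begin{verbatim}
import numpy as np

def epsilon(gap, T, lam=1.0, n=20000):
    """eps_{1,2}(lam) = |int_0^lam exp(i*phi_12) dl'| with
    phi_12(l) = T * int_0^l gap(u) du  (hbar = 1, lambda = t/T)."""
    l = np.linspace(0.0, lam, n)
    dl = l[1] - l[0]
    phi = T * np.cumsum(gap(l)) * dl  # accumulated dynamic phase difference
    return abs(np.sum(np.exp(1j * phi)) * dl)

Omega0, T = 2 * np.pi, 50.0
print(epsilon(lambda l: Omega0 * np.ones_like(l), T))  # constant gap: small
print(epsilon(lambda l: np.zeros_like(l), T))          # zero gap: eps = lam = 1
\end{verbatim}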
To experimentally verify the adiabatic condition Eq.~\ref{eq:avg} with an NV center,
we construct the Hamiltonian for adiabatic evolution in the standard way~\cite{RMP2017,Vitanov2001}.
That is, we apply a microwave field to drive the NV electron spin states
$|m_{\rm{s}}=0\rangle\equiv|-{\rm{z}}\rangle$ and $|m_{\rm{s}}=+1\rangle\equiv |{\rm{z}}\rangle$ (see Fig.~\ref{fig1:FigConstPi}A and Materials and Methods for experimental details).
The Hamiltonian $H(\lambda)$ under an on-resonant microwave field reads
\begin{equation}
H_{\rm{XY}}(\lambda) = \frac{\Omega(\lambda)}{2}\Big [ |\psi_{1}(\lambda)\rangle\langle\psi_{1}(\lambda)|-|\psi_{2}(\lambda)\rangle\langle\psi_{2}(\lambda)| \Big ], \label{eq:Hxy}
\end{equation}
where the energy gap $\Omega(\lambda)$ is tunable,
and the instantaneous eigenstates of the system Hamiltonian are $|\psi_{1}(\lambda)\rangle=|+_{\lambda}\rangle$ and $|\psi_{2}(\lambda)\rangle=|-_{\lambda}\rangle$. Here
\begin{equation}
|\pm_{\lambda}\rangle\equiv\frac{1}{\sqrt{2}}(|{\rm{z}}\rangle\pm e^{i\theta_{\rm{g}}\lambda}|-{\rm{z}}\rangle) \label{eq:xyPath}
\end{equation}
are tunable by varying the microwave phases $\theta_{\rm g}\lambda$. We define the initial eigenstates $|\pm\rm{x}\rangle\equiv|\pm_{0}\rangle$ and the superposition states $|\pm{\rm{y}}\rangle\equiv\frac{1}{\sqrt{2}}({|{\rm{z}}\rangle} \pm i {|-{\rm{z}}\rangle})$ for convenience.
In the traditional approach, where the Hamiltonian varies slowly with a non-vanishing
gap, the strength of the relative dynamic phase $\phi_{1,2}=\phi_{1}(\lambda)-\phi_{2}(\lambda)$
rapidly increases with the change of the path parameter $\lambda$, giving
the fast oscillating factor $e^{i\phi_{1,2}}$
with a zero mean (see Fig.~\ref{fig1:FigConstPi}C for the case of a constant gap $\Omega(\lambda)=\Omega_{0}$).
Therefore the right-hand side of
Eq.~\ref{eq:Udia} is negligible in solving the differential equation, leading to
the solution $U_{\rm{dia}}(\lambda)\approx I$. As a consequence of the adiabatic evolution $U\approx U_{\rm{adia}}(\lambda)$,
the state initialized in an initial eigenstate of the Hamiltonian follows the evolution of the instantaneous eigenstate (see Fig.~\ref{fig1:FigConstPi}D).
However, a quantum evolution with a non-vanishing gap and a long evolution time is not necessarily adiabatic. In Fig.~\ref{fig1:FigConstPi} (E and F)
we show a counterexample that increasing the energy gap in Fig.~\ref{fig1:FigConstPi}C to $\Omega(\lambda)=\Omega_{0}[2+\cos(\Omega_{0} \lambda T)]\geq \Omega_{0}$
will not realize adiabatic evolution because in this case the $\epsilon_{1,2}(\lambda)$ in Eq.~\ref{eq:avg} and the $G_{1,2}(\lambda)$ are not negligible.
For example, $\epsilon_{1,2}(\lambda)=J_{2}(1)\lambda \approx 0.115\lambda$ ($J_{n}$ being the Bessel function of the first kind)
whenever the difference of dynamic phases is a multiple of $2\pi$. This counterexample is different from the previously proposed counterexamples~\cite{marzlin2004inconsistency,tong2005quantitative,
wang2016necessary,Ortigoso2012} where the Hamiltonian contains resonant terms which increase $|\frac{d}{d\lambda}G_{n,m}(\lambda)|$ and hence modify the evolution path when increasing the total time. Our
counterexample also demonstrates that the widely used adiabatic condition~\cite{MessiahBook} $|\langle \psi_{n}| \frac{d}{dt}|\psi_{m}\rangle|/|E_n-E_m|\ll 1$,
which is based on the energy gap and diverges at $E_n-E_m=0$, does not guarantee quantum adiabatic evolution. On the contrary, the condition Eq.~\ref{eq:avg} based on dynamic phases, i.e., integrated energy differences, does not diverge for any energy gaps.
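This behavior can be reproduced numerically; the sketch below (an illustrative choice $\Omega_{0}T=40\pi$) confirms that $\epsilon_{1,2}(\lambda)$ equals $J_{2}(1)\lambda$ at the points where the dynamic phase difference is a multiple of $2\pi$.
\begin{verbatim}
import numpy as np
from scipy.special import jv

Omega0 = 2 * np.pi
T = 40 * np.pi / Omega0        # so Omega0*T*lam = 2*pi*k at lam = k/20
lam = np.linspace(0.0, 1.0, 200001)
dl = lam[1] - lam[0]
gap = Omega0 * (2 + np.cos(Omega0 * T * lam))
phi = T * np.cumsum(gap) * dl                  # dynamic phase difference
eps = np.abs(np.cumsum(np.exp(1j * phi)) * dl)

k = np.searchsorted(lam, 0.5)    # Omega0*T*lam = 20*pi here
print(eps[k], jv(2, 1.0) * 0.5)  # both ~ 0.0575 = J_2(1) * lam
\end{verbatim}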
We note that fast amplitude fluctuations of the control fields (and hence of the energy gaps) can exist in adiabatic methods [e.g., see \cite{Jing2014}] because of their strong robustness against control errors. Indeed, by adding errors to the energy gap shown in Fig.~\ref{fig1:FigConstPi}E,
the adiabaticity of the evolution is significantly enhanced (see Fig.~\ref{fig2:FigSDia}), showing that situations in which a fluctuating energy gap gives
non-adiabatic evolution are relatively rare.
We demonstrate that adiabatic evolution can be achieved even when the energy spectrum exhibits vanishing gaps and crossings as long as Eq.~\ref{eq:avg} is satisfied for a sufficiently small $\epsilon$. As an example,
we consider the energy gap of the form $\Omega(\lambda)=\Omega_{\pi}(\lambda)\equiv\Omega_{0}^{\prime}\left[1+a\cos(2\Omega_{0}^{\prime} T \lambda)\right]$, which has zeros and crossings for $|a|>1$ (see Fig.~\ref{fig1:FigConstPi}G for the case of $a\approx 2.34$, where $\Omega_{0}^{\prime}=\sqrt{2/(2+a^2)}\Omega_0$
is used to have the same average microwave power in both Fig.~\ref{fig1:FigConstPi}C and Fig.~\ref{fig1:FigConstPi}G).
Despite the vanishing gaps and crossings, the corresponding factor $e^{i\phi_{1,2}}$ parameterized by the parameter $\lambda=t/T$ is fast oscillating (see Fig.~\ref{fig1:FigConstPi}G) with a zero mean and realizes quantum adiabatic evolution for a sufficiently large total time $T$ (see Fig.~\ref{fig1:FigConstPi}H). In
fig.~S1, we show how the adiabaticity can also be preserved when gradually introducing energy level crossings.
\subsection*{Unit-fidelity quantum adiabatic evolution within a finite time}
Without the restriction to non-zero energy gaps, it is possible to
completely eliminate non-adiabatic effects and to drive an arbitrary initial
state $|\Psi_{\rm{i}}\rangle$ to a target state $|\Psi_{\rm{t}}\rangle$ of a general quantum system
by the quantum adiabatic evolution of a finite time duration. We demonstrate this
by driving the system along the geodesic
for maximal speed [see, e.g., \cite{anandan1990geometry,chruscinski2004geometric} for more discussion on the geodesic in quantum mechanics]. The system eigenstate
$|\psi_{1}(\lambda)\rangle=\cos(\frac{1}{2}\theta_{\rm{g}}\lambda)|\psi_{1}(0)\rangle+\sin(\frac{1}{2}\theta_{\rm{g}}\lambda)|\psi_{2}(0)\rangle$
connects $|\psi_{1}(0)\rangle$ and $|\psi_{1}(1)\rangle$ along the geodesic by varying
$\lambda=0$ to $\lambda=1$, with its orthonormal eigenstate
$|\psi_{2}(\lambda)\rangle=-\sin(\frac{1}{2}\theta_{\rm{g}}\lambda)|\psi_{1}(0)\rangle+\cos(\frac{1}{2}\theta_{\rm{g}}\lambda)|\psi_{2}(0)\rangle$
varied accordingly (see Materials and Methods). The method works for any quantum system (e.g., a set of interacting qubits) because a geodesic can always be found~\cite{anandan1990geometry,chruscinski2004geometric}. An example of the geodesic path for a single qubit is given by Eq.~\ref{eq:xyPath}, which intuitively can be illustrated by the shortest path on the Bloch sphere (see Fig.~\ref{fig1:FigConstPi}B). We find that along the geodesic the only nonzero
elements $g_{2,1}(\lambda)$ and $g_{1,2}(\lambda)$ are constant.
We adopt the sequence theoretically proposed in \cite{wang2016necessary}
that changes the dynamic phases at $N$ equally spaced path
points $\lambda=\lambda_{j}$ ($j=1,2,\ldots,N$). By staying
at each of the points
\begin{equation}
\lambda_{j}=(2N)^{-1}(2j-1),
\end{equation}
for the time required to implement a $\pi$ phase shift on the dynamic phases,
we have $U_{\rm{dia}}(1)=I$ because the operators $W(\lambda)$ at different $\lambda$ commute and $$\epsilon_{1,2}(1)=\left|\int_{0}^{1}e^{i\phi_{1,2}(\lambda)}d\lambda\right|=0.$$
That is, by jumping on the discrete points $\lambda_{j}$, the system evolution at $\lambda=1$ is exactly the perfect adiabatic evolution $U_{\rm{adia}}$ even though the evolution time is finite,
and an initial state $|\Psi_{\rm{i}}\rangle$ will end up with the adiabatic target state
$|\Psi_{\rm{t}}\rangle=U_{\rm{adia}}|\Psi_{\rm{i}}\rangle$.
To realize the jumping protocol, we apply rectangular $\pi$ pulses at the points $\lambda_{j}$ without time delay between the pulses, because between the points $\lambda_{j}$ the Hamiltonian has a zero energy gap and its driving can be neglected (see Fig.~\ref{fig3:Figc2jump}C). The simulation results in Fig.~\ref{fig3:Figc2jump} show how the transition from the standard continuous protocol to the jumping one gradually increases the fidelity of adiabatic evolution.
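The following sketch illustrates this transition for a single qubit on the geodesic half circle ($\theta_{\rm{g}}=\pi$) of Eq.~\ref{eq:Hxy} and Eq.~\ref{eq:xyPath}; the Rabi frequency, sweep time, and Trotter step are illustrative assumptions rather than the experimental parameters.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def H_xy(lam, Omega, theta_g=np.pi):
    # H_XY with eigenstates |+-_lam> on the geodesic
    return 0.5 * Omega * (np.cos(theta_g * lam) * sx
                          + np.sin(theta_g * lam) * sy)

def U_jump(N, Omega=2 * np.pi):
    # pi pulse (duration pi/Omega) at each lam_j = (2j-1)/(2N);
    # zero gap (no evolution) between the points
    U = np.eye(2, dtype=complex)
    for j in range(1, N + 1):
        U = expm(-1j * H_xy((2*j - 1) / (2*N), Omega)
                 * (np.pi / Omega)) @ U
    return U

def U_cont(Omega=2 * np.pi, T=1.0, steps=4000):
    # continuous sweep at constant rate d(lam)/dt = 1/T, Trotterized
    U = np.eye(2, dtype=complex)
    for k in range(steps):
        U = expm(-1j * H_xy((k + 0.5) / steps, Omega) * (T / steps)) @ U
    return U

psi_i = np.array([1, 1], dtype=complex) / np.sqrt(2)   # |x> = |+_0>
psi_t = np.array([1, -1], dtype=complex) / np.sqrt(2)  # |+_1>, theta_g = pi
for N in (1, 2, 4):
    print("jump N=%d:" % N, abs(np.vdot(psi_t, U_jump(N) @ psi_i))**2)
print("continuous:", abs(np.vdot(psi_t, U_cont() @ psi_i))**2)
\end{verbatim}
The jumping fidelities are exactly unity for every $N$, while the short continuous sweep stays below unity, in line with the measurements described next.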
We experimentally compare the jumping protocol with the continuous one along the geodesic given by Eq.~\ref{eq:xyPath}, by measuring the fidelity of the evolved state to the target state $|\Psi_{\rm{t}}\rangle$ that follows the ideal adiabatic evolution $U_{\rm{adia}}$. The continuous protocol has a constant gap and a constant sweeping rate as in Fig.~\ref{fig1:FigConstPi}C. As shown in Fig.~\ref{fig4:FigPulse} (A and B) for the case of a geodesic half circle ($\theta_{\rm g} =\pi$), the jumping protocol reaches unit fidelity within the measurement accuracy, while the standard continuous driving has much lower fidelity at short evolution times. The advantage of the jumping protocol is more prominent when
we traverse the half-circle path back and forth; see Fig.~\ref{fig4:FigPulse} (C to F) for the results of a total path length of $6\theta_{\rm{g}}$.
We observe in Fig.~\ref{fig4:FigPulse} that the constant-gap
protocol provides unit state-transfer fidelity only when the initial state is an eigenstate of the initial Hamiltonian, $|\Psi_{\rm{i}}\rangle=|\rm{x}\rangle$,
and the relative dynamic phase accumulated in a single half circle is $\phi=\sqrt{(2k\pi)^{2}-(\theta_{\rm{g}})^{2}}$ ($k=1,2,\ldots$) (see Materials and Methods).
However, the phase shifts on the system eigenstates accompanying adiabatic evolution cannot be observed
when the initial state is prepared in one of the initial eigenstates. Therefore, in Fig.~\ref{fig4:FigPulse} we
also compare the fidelity for the initial state $|\Psi_{\rm{i}}\rangle=|\rm{y}\rangle$, which is a superposition of the initial eigenstates $|\pm\rm{x}\rangle$. The results confirm that the jumping protocol achieves exactly the adiabatic evolution $U_{\rm{adia}}$ within the experimental uncertainties.
\subsection*{Robustness of quantum adiabatic evolution via jumping}
To demonstrate the intrinsic robustness guaranteed by adiabatic evolutions,
in Fig.~\ref{fig5:FigRobust} we consider large random driving-amplitude
errors in the jumping protocol. We add random Gaussian-distributed errors with a standard
deviation of 50\% to the control amplitude. To simulate white noise,
we change the amplitude every 10 ns in an uncorrelated
manner. Despite the large amplitude errors,
which can even cause energy level crossings during the evolution (see Fig.~\ref{fig5:FigRobust}A for a random time trace), a change of fidelity is hardly observable in Fig.~\ref{fig5:FigRobust}B.
Additional simulations in fig.~S2
also demonstrate the robustness to amplitude fluctuations with different kinds of noise correlations, i.e., Gaussian white noise, noise modeled by an Ornstein-Uhlenbeck process, and static random noise.
The robustness of the jumping protocol can be further enhanced by using a larger number $N$ of points along the path (see fig.~S3).
While it is different from dynamical decoupling (DD)~\cite{yang2011preserving,wang2016necessary}, the jumping protocol can suppress the effect of environmental noise through a similar mechanism. Therefore the fidelity remains high even when the evolution time is much longer than the coherence time, $T_{2}^{*}=1.7$~$\mu$s, of the NV electron spin (see fig.~S4).
This observation is useful for designing adiabatic protocols that provide strong robustness against both control errors and general environmental perturbations.
\subsection*{Avoiding unwanted path points in adiabatic evolution}
Because it does not go through all the path points, the jumping protocol has
the advantage of avoiding path points (i.e., Hamiltonians with certain eigenstates) that cannot be realized in experiments.
As a proof-of-principle experiment, we consider the Landau-Zener (LZ)
Hamiltonian~\cite{Shevchenko2010}
\begin{equation}
H(\lambda)=H_{\rm{LZ}}(\lambda)\equiv B_{\rm{z}}(\lambda)\frac{\sigma_{\rm{z}}}{2}+\Delta\frac{\sigma_{\rm{x}}}{2}, \label{eq:H_LZ}
\end{equation}
with $\sigma_{\alpha}$ ($\alpha=$ x, y, z) being the Pauli matrices.
Because $\Delta$ is non-zero in the LZ Hamiltonian, tuning the system eigenstates to the eigenstates $|\pm{\rm{z}}\rangle$ of $\sigma_{\rm{z}}$
requires $B_{\rm{z}} \rightarrow \pm \infty$.
Therefore for a perfect state transfer from $|\Psi_{\rm{i}}\rangle=|-{\rm{z}}\rangle$
to $|\Psi_{\rm{t}}\rangle=|{\rm{z}}\rangle$ by using the standard continuous protocol,
it is required to adiabatically tune $B_{\rm{z}}$ from $-\infty$
to $+\infty$ (see insets of Fig.~\ref{fig6:LZ}A).
The experimental implementation of $B_{\rm{z}}=\pm\infty$, however, requires an infinitely large control field, which is a severe limitation.
In our experiment, a large $B_{\rm{z}}$ field can be simulated by going to the rotating frame of the microwave control field with a large frequency detuning.
The experimental realization of $B_{\rm{z}} \rightarrow \pm \infty$ can be challenging in other quantum platforms.
For example, for superconducting qubits, $\Delta/(2\pi)$ could be as large as 0.1 GHz, but
the tuning range of $B_{\rm{z}}/(2\pi)$ is usually limited to a couple of GHz, or is even of the same order of magnitude as $\Delta/(2\pi)$~\cite{sun2015observation}. For two-level quantum systems comprising Bose-Einstein condensates in optical lattices, the maximum ratio of $B_{\rm{z}}/\Delta$ is determined by the band structure~\cite{bason2012high}. For singlet-triplet qubits in semiconductor quantum dots, the exchange interaction for the control of $B_{\rm{z}}$ is restricted to positive values~\cite{foletti2009universal}.
On the contrary, with the jumping
approach one can avoid unphysical points such as $B_{\rm{z}}=\pm\infty$,
since an infinitely slow and continuous process is not required, and achieve high-fidelity state transfer as shown in Fig.~\ref{fig6:LZ}.
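A minimal sketch of this idea (with an illustrative $\Delta$; not the experimental pulse code) applies the jumping protocol to the LZ path, parametrized as $B_{\rm{z}}=-\Delta\cot(\pi\lambda)$ as in the Experimental Sequences section, and verifies the perfect transfer $|-{\rm{z}}\rangle\rightarrow|{\rm{z}}\rangle$ while visiting only points with finite $B_{\rm{z}}$.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Delta = 2 * np.pi   # illustrative coupling (arbitrary units)

def U_jump_LZ(N):
    # dwell at each lam_j = (2j-1)/(2N) for a pi dynamic-phase shift;
    # all visited B_z values are finite
    U = np.eye(2, dtype=complex)
    for j in range(1, N + 1):
        lam = (2*j - 1) / (2*N)
        Bz = -Delta / np.tan(np.pi * lam)
        H = 0.5 * (Bz * sz + Delta * sx)
        U = expm(-1j * H * (np.pi / np.sqrt(Bz**2 + Delta**2))) @ U
    return U

psi_i = np.array([0, 1], dtype=complex)   # |-z>
psi_t = np.array([1, 0], dtype=complex)   # |z>
for N in (1, 3, 5):
    print(N, abs(np.vdot(psi_t, U_jump_LZ(N) @ psi_i))**2)  # all 1.0
\end{verbatim}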
As a remark, we find that our jumping protocol with $N=1$ (i.e., a Rabi pulse)
specializes to the optimized composite pulse protocol~\cite{bason2012high}
but has the advantage that no additional strong $\pi/2$ pulses at
the beginning and the end of the evolution are required.
Moreover, by applying the jumping protocol with $N=1$ to the adiabatic passage proposed in \cite{Cirac1994},
we obtain the protocol that has been used to experimentally generate Fock states of a trapped atom~\cite{Meekhof1996}.
When, instead of a single target point, high-fidelity adiabatic evolution along the path is also desired,
we can use the jumping protocol with a larger $N$.
\subsection*{Conclusion and outlook}
In summary, our experiments demonstrated that energy level crossings and vanishing gaps allow and can even accelerate quantum adiabatic evolutions, challenging the traditional view that adiabatic control must be slow and that unit-fidelity adiabatic processes require an infinite amount of evolution time. By experimentally verifying a recently derived quantum adiabatic condition, we have shown that the quantum dynamic phases are more fundamental than energy gaps in quantum adiabatic processes. Thanks to rapid changes of these phases, non-adiabatic transitions can be efficiently suppressed and fast varying Hamiltonians can still realize quantum adiabatic evolutions. Our results break the limit imposed by the conventional adiabatic methods, which originate from the traditional concept of extremely slow change in classical mechanics~\cite{Laidler1994,Born1928}, allowing fast quantum adiabatic protocols with unit fidelity within finite evolution times. In addition, the freedom of using vanishing gaps provides the ability to avoid unphysical points in an adiabatic path and allows one to incorporate pulse techniques~\cite{yang2011preserving} into a quantum adiabatic evolution to suppress environmental noise for long-time robust adiabatic control.
While it is possible to mimic the infinitely slow quantum adiabatic evolution by using additional counterdiabatic control, i.e., shortcuts to adiabaticity~\cite{Demirplak2003,Berry2009,torrontegui2013shortcuts,Deffner2014,Zhou2017,bason2012high},
the implementation of the counterdiabatic control can be exceedingly intricate because it may
need interactions absent in the system Hamiltonian~\cite{Deffner2014,Zhou2017}. Furthermore, the counterdiabatic control unavoidably changes the eigenstates of the initial Hamiltonian and introduces additional control errors~\cite{Deffner2014,Zhou2017}. However, because our protocol uses the intrinsic adiabatic path that follows the eigenstates of the Hamiltonian, no additional control is required. As a consequence, our methods avoid the use of difficult or unavailable control resources and share the intrinsic robustness of adiabatic methods. With the removal of the prerequisites in the conventional adiabatic conditions, namely non-zero gaps and slow control, our results provide new directions and promising strategies for fast, robust control on quantum systems.
\section*{Materials and Methods}
\subsection*{Adiabatic evolution along the geodesics of a general quantum system}
For two arbitrary states (e.g., entangled states or product states) of a general quantum system, $|\Psi_{\rm{i}}\rangle$ and $|\Psi_{\rm{t}}\rangle$,
one can write $\langle\Psi_{\rm{i}}|\Psi_{\rm{t}}\rangle=\cos\left(\frac{1}{2}\theta_{\rm{g}}\right)e^{i\phi_{\rm{i,t}}}$
with $\phi_{\rm{i,t}}$ and $\theta_{\rm{g}}$ being real. Here
$\theta_{\rm{g}}$ is the path length connecting $|\Psi_{\rm{i}}\rangle$
and $|\Psi_{\rm{t}}\rangle$ by the geodesic and we set $\phi_{\rm{i,t}}=0$
by a proper gauge transformation~\cite{anandan1990geometry}. The geodesic
\cite{anandan1990geometry,chruscinski2004geometric} that connects $|\Psi_{\rm{i}}\rangle$ and $|\Psi_{\rm{t}}\rangle$ by varying $\lambda=0$ to $\lambda=1$ can be written
as $|\psi_{1}(\lambda)\rangle=c_{\rm{i}}(\lambda)|\Psi_{\rm{i}}\rangle+c_{\rm{t}}(\lambda)|\Psi_{\rm{t}}\rangle$,
where the coefficients $c_{\rm{i}}(\lambda)=\cos(\frac{1}{2}\theta_{\rm{g}}\lambda)-\sin(\frac{1}{2}\theta_{\rm{g}}\lambda)\cot(\frac{1}{2}\theta_{\rm{g}})$
and $c_{\rm{t}}(\lambda)=\sin(\frac{1}{2}\theta_{\rm{g}}\lambda)/\sin(\frac{1}{2}\theta_{\rm{g}})$
for $\sin(\frac{1}{2}\theta_{\rm{g}})\neq0$. To describe $|\psi_{1}(\lambda)\rangle$
in terms of the system eigenstates, we choose an orthonormal state
$|\psi_{2}(0)\rangle\propto\left(I-|\Psi_{\rm{i}}\rangle\langle\Psi_{\rm{i}}|\right)|\Psi_{\rm{t}}\rangle$
if $\sin(\frac{1}{2}\theta_{\rm{g}})\neq0$. When $|\Psi_{\rm{t}}\rangle$
is equivalent to $|\Psi_{\rm{i}}\rangle$ up to a phase factor (i.e., $\sin(\frac{1}{2}\theta_{\rm{g}})=0$), $|\psi_{2}(0)\rangle$ can be an arbitrary orthonormal state. Then the geodesic can be written
as $|\psi_{1}(\lambda)\rangle=\cos(\frac{1}{2}\theta_{\rm{g}}\lambda)|\psi_{1}(0)\rangle+\sin(\frac{1}{2}\theta_{\rm{g}}\lambda)|\psi_{2}(0)\rangle$
and its orthonormal state $|\psi_{2}(\lambda)\rangle=-\sin(\frac{1}{2}\theta_{\rm{g}}\lambda)|\psi_{1}(0)\rangle+\cos(\frac{1}{2}\theta_{\rm{g}}\lambda)|\psi_{2}(0)\rangle$.
Along the geodesic we have $g_{2,1}(\lambda)=-g_{1,2}(\lambda)=i\frac{1}{2}\theta_{\rm{g}}$
being a constant and $g_{n,m}=0$ for other combinations of $n$ and $m$.
Along the geodesic, if one changes the dynamic phases with
a $\pi$ phase shift only at each of the $N$ equally spaced
path points $\lambda_{j}=(2N)^{-1}(2j-1)$ with $j=1,2,\ldots,N$, the operators $W(\lambda)$ at different
$\lambda$ commute and we have $\int_{0}^{1}e^{i\phi_{1,2}}d\lambda=0$. As a consequence,
$U_{\rm{dia}}(1)=\exp\left[i\int_{0}^{1}W(\lambda)d\lambda\right]=I$
and the quantum evolution $U=U_{\rm{adia}}$ does not have any non-adiabatic
effects.
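Explicitly, $\phi_{1,2}(\lambda)$ is then a step function that jumps by $\pi$ at each $\lambda_{j}$, so $e^{i\phi_{1,2}}$ alternates between $+1$ and $-1$ on segments of lengths $\frac{1}{2N},\frac{1}{N},\ldots,\frac{1}{N},\frac{1}{2N}$, and
$$\int_{0}^{1}e^{i\phi_{1,2}}d\lambda=\frac{1}{2N}+\sum_{j=1}^{N-1}\frac{(-1)^{j}}{N}+\frac{(-1)^{N}}{2N}=0$$
for every $N$.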
\subsection*{Hamiltonian of the NV center under microwave control}
Under a magnetic field $b_{\rm{z}}$ along the NV
symmetry axis, the Hamiltonian of the NV center electron spin without microwave control reads
$H_{\rm{NV}}=D S_{\rm{z}}^{2}-\gamma_{\rm{e}}b_{\rm{z}}S_{\rm{z}}$,
where $S_{\rm{z}}$ is the electron spin operator, $D\approx2\pi\times2.87$ GHz is the ground-state
zero-field splitting, and $\gamma_{\rm{e}}=-2\pi\times 2.8$ MHz G$^{-1}$ is the electron spin gyromagnetic
ratio~\cite{doherty2013nitrogen}.
Following the standard methods to achieve a controllable Hamiltonian for quantum adiabatic evolution~\cite{RMP2017,Vitanov2001},
we apply a microwave field $\sqrt{2}\Omega(\lambda)\cos[\omega_{\rm{mw}} t + \vartheta(\lambda)]$ to the NV $m_{\rm{s}}=0$ and $m_{\rm{s}}=1$ levels to form a qubit with the qubit states $|\rm{z}\rangle\equiv |m_{\rm{s}}= 1\rangle$ and $|-\rm{z}\rangle\equiv |m_{\rm{s}}=0\rangle$. The microwave frequency $\omega_{\rm{mw}}$ may also be tuned by the parameter $\lambda$ to realize a controllable frequency detuning $\delta(\lambda)$ with respect to the transition frequency of the $m_{\rm{s}}=0$ and $m_{\rm{s}}=1$ levels. In the standard rotating frame of the microwave control field, we have
the general qubit Hamiltonian under the microwave control~\cite{doherty2013nitrogen}
\begin{equation}
H(\lambda) = \delta(\lambda) \frac{\sigma_{\rm{z}}}{2} + \Omega(\lambda)\left[\cos\vartheta(\lambda)\frac{\sigma_{\rm{x}}}{2} +\sin\vartheta(\lambda)\frac{\sigma_{\rm{y}}}{2}\right], \label{eq:HLambda}
\end{equation}
where the microwave phase $\vartheta(\lambda)$, microwave detuning $\delta(\lambda)$, and microwave Rabi frequency $\Omega(\lambda)$ are all tunable and can be time-dependent in experiment.
The usual Pauli operators satisfy $\sigma_{\rm{z}}{|\pm \rm{z}\rangle}=\pm {|\pm \rm{z}\rangle}$ and $\left[\cos(\theta_{\rm{g}}\lambda)\sigma_{\rm{x}} +\sin(\theta_{\rm{g}}\lambda)\sigma_{\rm{y}}\right]|\pm_{\lambda}\rangle=\pm |\pm_{\lambda}\rangle$, where the states $|\pm_{\lambda}\rangle$ are given by Eq.~\ref{eq:xyPath}.
By setting the microwave detuning $\delta(\lambda)=0$, we achieve the Hamiltonian in Eq.~\ref{eq:Hxy}, which in terms of the Pauli operators reads
$$H_{\rm{XY}}(\lambda) = \Omega(\lambda)\left[\cos(\theta_{\rm{g}}\lambda)\frac{\sigma_{\rm{x}}}{2} +\sin(\theta_{\rm{g}}\lambda)\frac{\sigma_{\rm{y}}}{2}\right].$$
By varying the parameter $\lambda$, the system eigenstates follow the geodesics along the equator of the Bloch sphere where the north and south poles are defined by the states $|\pm\rm{z}\rangle$. Here the energy gap $\Omega(\lambda)$ is directly controlled by the amplitude of the microwave field.
On the other hand, by using a constant Rabi frequency $\Omega(\lambda)=\Delta$ and a tunable frequency detuning $\delta(\lambda)=B_{\rm{z}}(\lambda)$, we obtain the Landau-Zener Hamiltonian $H_{\rm{LZ}}(\lambda)$ given by Eq.~\ref{eq:H_LZ}.
\subsection*{Adiabatic evolution by continuous driving with a constant gap}
Consider a conventional adiabatic driving in which a driving field of constant amplitude rotates around the z axis, with the Hamiltonian
$H(\lambda)=\frac{1}{2}\Omega e^{-i\frac{1}{2}\sigma_{z}\theta_{\rm{g}}\lambda}\sigma_{\theta}e^{i\frac{1}{2}\sigma_{z}\theta_{\rm{g}}\lambda}$, which
is parameterized by $\lambda=t/T$ along a circle of latitude with $\sigma_{\theta}=\sigma_{z}\cos\theta+\sigma_{x}\sin\theta$
in a total time $T$. The difference of the accumulated dynamic phases
at $\lambda=1$ on the two eigenstates is $\phi=\Omega T$. One can show
that the system evolution at $\lambda=1$ reads
\begin{equation}
U=e^{-i\frac{1}{2}\theta_{\rm{g}}\sigma_{z}}\exp\left[-i\frac{1}{2}\left(\phi\sigma_{\theta}-\theta_{\rm{g}}\sigma_{z}\right)\right].\label{eq:Uconstant}
\end{equation}
The ideal adiabatic evolution is obtained by using Eq.~\ref{eq:Uconstant}
in the adiabatic limit $T\rightarrow\infty$ (i.e., $\phi\rightarrow\infty$),
$$U_{\rm{adia}} =\lim_{T\rightarrow\infty}U=e^{-i\frac{1}{2}\theta_{\rm{g}}\sigma_{z}}e^{i\frac{1}{2}\theta_{\rm{g}}\cos\theta\sigma_{\theta}}e^{-i\frac{1}{2}\phi\sigma_{\theta}}.$$
Without the part containing the dynamic phases, $U_{\rm{adia}}$ describes
geometric evolution, and for a cyclic evolution (i.e., $\theta_{\rm{g}}=2\pi$)
the geometric evolution is described by the Berry phases $\pm\pi(\cos\theta-1)$.
By comparing $U_{\rm{adia}}$ and $U$ or by using the results of \cite{wang2016necessary},
the non-adiabatic correction is given by
\begin{equation}
U_{\rm{dia}}=\exp\left[i\frac{1}{2}\left(\phi-\theta_{\rm{g}}\cos\theta\right)\sigma_{\theta}\right]U^{\prime},\label{eq:UDia_Const}
\end{equation}
with
$$U^{\prime}=\exp\left[-i\frac{1}{2}\left(\phi\sigma_{\theta}-\theta_{\rm{g}}\sigma_{z}\right)\right].
$$
In the adiabatic limit $T\rightarrow\infty$ (i.e., $\phi\rightarrow\infty$), $U_{\rm{dia}}=I$
is the identity operator. We note that, when the phase factor of the state is irrelevant,
one can perform perfect state transfer by this driving if the initial state is prepared in an initial
eigenstate of the driving Hamiltonian $H(\lambda)$ (i.e., an eigenstate
of $\sigma_{\theta}$).
From Eq.~\ref{eq:UDia_Const}, $U_{\rm{dia}}$ is diagonal
in the basis of $\sigma_{\theta}$ when $U^{\prime}\propto I$.
As a consequence, when $U^{\prime}\propto I$ and $|\Psi_{\rm{i}}\rangle$ is prepared as an eigenstate
of $\sigma_{\theta}$ [and hence $H(\lambda=0)$], the evolved state $U|\Psi_{\rm{i}}\rangle$
matches the target state $U_{\rm{adia}}|\Psi_{\rm{i}}\rangle$
up to a phase factor. For the case of the evolution along the geodesic (e.g., $\theta=\pi/2$) and $\sqrt{\phi^{2}+\theta_{\rm{g}}^{2}}=2k\pi$
($k=1,2,\ldots$), we have $U^{\prime}\propto I$ and therefore the population transfer is perfect for the initial eigenstates
of $\sigma_{x}$.
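This condition can be verified directly from Eq.~\ref{eq:UDia_Const}: for $\theta=\pi/2$ one has $\phi\sigma_{\theta}-\theta_{\rm{g}}\sigma_{z}=L\,\hat{n}\cdot\vec{\sigma}$ with $L=\sqrt{\phi^{2}+\theta_{\rm{g}}^{2}}$ and $\hat{n}$ a unit vector, so that
$$U^{\prime}=\cos\Big(\frac{L}{2}\Big)I-i\sin\Big(\frac{L}{2}\Big)\hat{n}\cdot\vec{\sigma},$$
which is proportional to the identity exactly when $L=2k\pi$.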
\subsection*{Numerical simulations}
In the simulations, we modelled dephasing noise and random fluctuations by adding them to
the Hamiltonian Eq.~\ref{eq:HLambda} via $\delta(\lambda)\rightarrow \delta(\lambda)+ \delta_0$ and $\Omega(\lambda)\rightarrow \Omega(\lambda)(1 + \delta_1)$.
Here $\delta_0$ is the dephasing noise from static and time-dependent magnetic
field fluctuations corresponding to $T_{2}^{*}=1.7$ $\mu$s, and $\delta_1$ represents random static
changes in the driving amplitude. $\delta_0$ follows a Gaussian distribution with mean $\mu=0$ and standard deviation $\sigma=2\pi\times 130$ kHz. The probability density of $\delta_1$ has the Lorentzian form $f(\delta_1,\gamma)=1/\{\pi\gamma[1+(\delta_1/\gamma)^2]\}$ with $\gamma=0.0067$. All the parameters in the distribution functions are extracted from fits to the free induction decay (FID) and the decay of the Rabi oscillations.
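For reference, realizations of these two noise channels could be drawn as in the sketch below (the seed and sample size are arbitrary).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def sample_noise(n):
    """delta0 ~ Gaussian(0, 2*pi*130 kHz), in angular MHz;
    delta1 ~ Lorentzian (Cauchy) with gamma = 0.0067."""
    delta0 = rng.normal(0.0, 2 * np.pi * 0.130, n)
    delta1 = 0.0067 * rng.standard_cauchy(n)
    return delta0, delta1

d0, d1 = sample_noise(100000)
print(d0.std() / (2 * np.pi))    # ~0.13 MHz
print(np.median(np.abs(d1)))     # ~0.0067 (= gamma)
\end{verbatim}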
\subsection*{Experimental Setup}
The experiments were performed with a home-built optically detected magnetic resonance (ODMR) platform, which consists of a confocal microscope and a microwave (MW) synthesizer
(fig.~S5). A solid-state green laser with 532 nm wavelength is used for initializing and reading out the NV spin state. The light beam was focused on the NV center through an oil immersion objective (numerical aperture 1.4). The fluorescence emitted from the NV center was collected by a single photon counting module (APD). Here we used an NV center embedded in a room-temperature bulk diamond grown by chemical vapor deposition with [100] faces. It has a natural abundance of $^{13}$C and a
nitrogen impurity concentration of less than 5 ppb. To lift the degeneracy of the $|m_{\rm{s}}=\pm1\rangle$ states, a static magnetic field of 510 G was provided by a permanent magnet. The magnetic field was aligned by adjusting the three-dimensional positioning stage on which the magnet was mounted while simultaneously monitoring the counts of the NV center. The direction of the magnetic field is well aligned when the counts show no difference with and without the magnet. Manipulation of the NV center is performed by MW pulses applied through a home-made coplanar waveguide (CPW). The MW pulses were generated by the I/Q modulation of an Agilent arbitrary waveform generator (AWG) 81180A and a vector signal generator (VSG) E8267D and then amplified by a Mini Circuits ZHL-30W-252+. An atomic clock was used to synchronize the timing of the two. The AWG supplies the I and Q data with a frequency of 400 MHz, and the VSG generates the 3898 MHz carrier. The output frequency is 4298 MHz, which matches the transition frequency between the NV $m_{\rm{s}}=0$ and $m_{\rm{s}}=+1$ states.
\subsection*{Experimental Sequences}
As the magnetic field is 510 G, we first applied the green laser for 3~$\mu$s to initialize the NV center electronic spin to the level of $m_{\rm{s}}=0$ and to polarize the adjacent $^{14}$N nuclear spin simultaneously~\cite{Epstein2005}.
The preparation of the NV electron spin in an equal superposition state of $m_{\rm{s}}=0$ and $m_{\rm{s}}=1$ was realized by applying a MW $\pi_{\rm{x}}/2$ ($\pi_{\rm{y}}/2$) pulse, i.e., by the rotation around the x (y) axis with an angle of ${\pi}/{2}$.
Then the NV electron spin was driven along the desired path. To experimentally
characterize the evolution path, we sampled the path at several points
and measured the spin state through tomography. ${\pi_{\rm{x}}}/{2}$ or ${\pi_{\rm{y}}}/{2}$ pulses were applied to read out the off-diagonal terms.
Finally, the spin state was read out by applying
the laser pulse again and measuring the spin-dependent fluorescence.
Typically the whole sequence was repeated $10^{5}$ times to get a better signal to noise ratio (SNR). The schematic diagram of the pulse sequence is shown in fig.~S6.
In driving the NV electron spin along the path given by Eq.~\ref{eq:xyPath}, we used an on-resonant MW field and swept the MW phase $\theta_{\rm{g}}\lambda$ with the path parameter $\lambda$. In driving the NV electron spin along the path of the LZ Hamiltonian (see Eq.~\ref{eq:H_LZ}), the MW phase was a constant, $\Delta$ was set by the Rabi frequency, and $B_{\rm{z}}$ was the MW frequency detuning which varied as $B_{\rm{z}}=-\Delta\cot(\theta_{\rm{g}}\lambda)$.
For continuous driving, the path parameter $\lambda$ varies with a constant rate $d\lambda/dt=f_{\rm{rot}}$.
In the jumping protocol $\lambda$ jumps from point to point: $\lambda=\lambda_{j}=(2N)^{-1}(2j-1)$ for $j=1,2,\ldots,N$. In this work the jumping protocol had a constant driving Rabi frequency $\Omega_{0}$ and $\lambda= \lambda_j$ if $(j-1)T/N\leq t<j T/N$ for a path with $N$ pulses applied in a total time $T$.
In the experiments with back-and-forth motion along the geodesic, we reversed
the order of the parameter $\lambda$ in the backward path. That is, in the jumping protocol we repeated the sequence of parameters $(\lambda_{1}, \lambda_{2},\ldots, \lambda_{N-1},\lambda_{N}, \lambda_{N}, \lambda_{N-1}, \ldots, \lambda_{2}, \lambda_{1})$, while for the standard
protocol of continuous driving we used the rate $d\lambda/dt=f_{\rm{rot}}$ for a forward path
and the rate $d\lambda/dt=-f_{\rm{rot}}$ for a backward path, and repeated the process.
We removed the irrelevant dynamic phases when the initial state was not prepared in an initial eigenstate, in order to reveal the geometric evolution. At the beginning of the state readout, we compensated the dynamic phases by applying an additional driving with a microwave $\pi$ phase shift (i.e., $\Omega\rightarrow-\Omega$) at the point of the target state for a time equal to the adiabatic evolution time. This additional driving did not change the geometric phases or the state transfer because it was applied at the final path point.
\bibliographystyle{Science}
\section{Introduction\label{intro}}
Transverse momentum dependent parton distributions (TMDs) \cite{Anselmino:1994gn, Barone:2001sp} are important for understanding the single spin asymmetries (SSA) that have been observed experimentally
for a long time \cite{Adams:1991rw,Adams:1991cs}. Together with the generalized parton distributions (GPDs), they give a three-dimensional picture of the nucleons and
represent non-trivial and non-perturbative correlations between the intrinsic transverse momentum of the partons (quarks and gluons) and the spin of the nucleon.
Of particular interest are the Sivers function \cite{Sivers:1989cc} and the Boer-Mulders function \cite{Boer:1997nt}. At the parton level, the Sivers function represents the
coupling of the intrinsic transverse momentum of the partons to the transverse spin of the target; it quantifies the
distribution of unpolarised quarks inside a transversely polarized target. The Sivers effect is particularly interesting as it is sensitive to a phase interference in
the amplitudes \cite{Brodsky:2002rv} related to the gauge invariance of the underlying QCD interaction \cite{Collins:2002kn,Boer:2003cm,Belitsky:2002sm, Bomhof:2004aw}. Namely, the Sivers
function is non-zero only if the gluonic initial and
final state interactions are taken into account. Such interactions are process dependent.
The general structure of the process dependence may be quite complicated; however, the Sivers function for semi-inclusive deep inelastic scattering (SIDIS)
is expected to be opposite in sign to the Sivers function in the Drell-Yan (DY) process
\cite{Belitsky:2002sm,Boer:2003cm}. The Sivers effect produces an azimuthal asymmetry in SIDIS, which has been measured experimentally by HERMES,
COMPASS and JLab \cite{Alekseev:2010rw, Alekseev:2008aa,Airapetian:2009ae,Qian:2011py}. The Boer-Mulders effect in the Drell-Yan process has been investigated at Fermilab and preliminary results are available \cite{Sbrizzai:2016gro}. There is also recent data from W production at RHIC \cite{Adamczyk:2015gyk}. Another TMD that has gathered considerable interest recently is the Boer-Mulders function
\cite{Boer:1997nt}, which gives the distribution of transversely polarized quarks in an unpolarised nucleon and thus measures the spin-orbit correlations of quarks.
The Boer-Mulders effect produces a measurable $\cos 2\phi$ azimuthal asymmetry in SIDIS. Like the Sivers function, the Boer-Mulders function is process dependent,
due to the initial and final state interactions.
There have been a lot of phenomenological studies on the Sivers as well as
Boer-Mulders function (see, for example,
\cite{DAlesio:2004eso,Efremov:2004tp,Anselmino:2005ea,Collins:2005rq,Anselmino:2008sga,Anselmino:2013rya,
Anselmino:2016uie,Martin:2017yms,Barone:2009hw,Zhang:2008nu}).
A lattice calculation is presented in \cite{Musch:2011er}. Extraction of the TMD pdfs from
experimental data usually relies on the following assumptions
\cite{Anselmino:2016uie}: (i) factorization of the $x$-dependent part of the TMD from the $k_\perp^2$-dependent part, (ii) a Gaussian form for the $k_\perp$-dependent part, and (iii) in the extraction of the Boer-Mulders function, proportionality to the Sivers function. The TMD functions are parametrized in terms of several parameters, including the average transverse momentum $\langle k_\perp^2 \rangle$ of the partons. This introduces an uncertainty, as the experimental values of
$\langle k_\perp^2 \rangle$ have not yet converged:
$\langle k_\perp^2 \rangle \approx 0.25~ \mathrm{GeV}^2$ from old EMC data and FNAL SIDIS data, whereas the value derived from HERMES data is $0.18~ \mathrm{GeV}^2$, and this is the value that has been used in the extraction of the Boer-Mulders function. Analyses using more recent data suggest quite different values: $\langle k_\perp^2 \rangle \approx 0.57 ~ \mathrm{GeV}^2$ (HERMES) and
$\langle k_\perp^2 \rangle \approx 0.61 ~ \mathrm{GeV}^2$ (COMPASS). A recent extraction
\cite{Martin:2017yms} of the Sivers function, however, does not use
any of these values of $\langle k_\perp^2 \rangle$ as a parameter, but it still uses the factorization between the $x$ dependence and the $k_\perp$ dependence.
The current state of the art can be summarized by saying that the present data are insufficient to confirm the change of sign of the Sivers function between the SIDIS and DY processes, although there is a hint of such a sign change from $W^-$ production data at RHIC \cite{Anselmino:2016uie}.
The Sivers and Boer-Mulders TMDs have also been investigated in various
phenomenological models \cite{Gamberg:2007wm,Zhang:2008nu, Burkardt:2007xm,
Pasquini:2010af,Yuan:2003wk,Bacchetta:2003rz,Courtoy:2008vi}. In fact the first
model calculation of the Sivers asymmetry in \cite{Brodsky:2002rv} showed the
importance of the phase difference of the overlapping amplitudes to get a
non-zero asymmetry. Model studies are also interesting to understand various
relations between the TMDs and GPDs. An intuitive explanation of the Sivers
effect was developed in \cite{Burkardt:2003yg} in a model-dependent way. The
average transverse momentum of an unpolarised quark in a transversely polarized
nucleon generated due to the Sivers effect
is related to the distortion in impact parameter space through a lensing
function, which is the effect of final state interaction. This relation is
found to hold in spectator-type models to the lowest non-trivial order, although it is
expected to break down when higher-order effects are taken into account. This
relation is also not expected to hold in models where the so-called lensing function
does not factor out from the GPD in impact parameter space. This relation shows
the connection of the Sivers function with the orbital angular momentum (OAM) of
the quarks, although in a model-dependent way. A similar model-dependent relation
is derived in \cite{Meissner:2007rx} for the Boer-Mulders function, which
is related to the first derivative of the chiral-odd GPD combination $\mathcal{E}_T+2
\tilde H_T$ in impact parameter space through the lensing function. The Sivers
function and the Boer-Mulders function are time-reversal (T) odd functions whereas
the GPDs above are T-even, so no model-independent relation can be derived
connecting them. Of course, GPDs and TMDs can be connected through different
limits of the generalized transverse momentum dependent pdfs (GTMDs). The
motivation of the present work is to calculate the Sivers and Boer-Mulders
functions using a recently developed quark-diquark model light-front wave
function of the proton based on light-front holography, to calculate the
corresponding asymmetries for comparison with the data, and to investigate to what extent the
model-dependent relations hold.
The light-front quark-diquark model \cite{Maji:2016yqo} is briefly
discussed in the next section. The model has been used to investigate Wigner
distributions, GPDs and T-even
TMDs \cite{Chakrabarti:2017teq,Maji:2017bcz,Maji:2017ill}. The model is also
found to predict the single-spin asymmetries described by T-even TMDs (Collins
asymmetries) quite accurately at different experimental scales \cite{Maji:2017zbx}. In this work, the model is extended to
incorporate the final state interactions (FSI) into the light-front
wave functions. The FSI generates a phase in the wave function which is
responsible for the non-zero T-odd TMDs, i.e., the Sivers and Boer-Mulders functions, and
hence for the spin asymmetries associated with them. The spin asymmetries evaluated
in this model are compared with the experimental data using the QCD evolution
prescribed by Aybat and Rogers \cite{Aybat:2011zv}.
\section{light-front quark-diquark model for nucleon\label{model}}
In the light-front quark-diquark model, the proton state is written as a linear
combination of quark-diquark states with scalar and axial-vector diquarks,
considering the spin-flavor $SU(4)$
structure \cite{Jakob:1997wg,Bacchetta:2008af, Maji:2016yqo}, as
\begin{eqnarray}
|P; \pm\rangle = C_S|u~ S^0\rangle^\pm + C_V|u~ A^0\rangle^\pm + C_{VV}|d~ A^1\rangle^\pm. \label{PS_state}
\end{eqnarray}
where $C_S$, $C_V$ and $C_{VV}$ are the coefficients of the isoscalar-scalar diquark singlet state $|u~ S^0\rangle$, the isoscalar-axial-vector diquark state $|u~ A^0\rangle$, and the isovector-axial-vector diquark state $|d~ A^1\rangle$, respectively. $S$ and $A$ represent the scalar and axial-vector diquarks, with the isospin given in the superscript. Under isospin symmetry, the neutron state is obtained from the above formula with $u\leftrightarrow d$.
The light-cone coordinates are defined as $x^\pm=x^0 \pm x^3$. We choose a frame in which the incoming proton has no transverse momentum, i.e., $P \equiv \big(P^+,\frac{M^2}{P^+},\textbf{0}_\perp\big)$, while the struck quark and the diquark carry equal and opposite transverse momenta: $p\equiv (xP^+, \frac{p^2+|\bfp|^2}{xP^+},\bfp)$ and $P_X\equiv ((1-x)P^+,P^-_X,-\bfp)$. Here $x=p^+/P^+$ is the longitudinal momentum fraction carried by the struck quark. The detailed kinematics of $\gamma^* P \to q(qq)$ are shown for the tree-level and final-state-interaction diagrams in Fig.~\ref{fig_FSI}.
The two-particle Fock-state expansion for $J^z =\pm1/2$ for the spin-0 diquark state is given by
\begin{eqnarray}
|u~ S\rangle^\pm & =& \int \frac{dx~ d^2\bfp}{2(2\pi)^3\sqrt{x(1-x)}} \bigg[ \psi^{\pm(u)}_{+}(x,\bfp)|+\frac{1}{2}~s; xP^+,\bfp\rangle \nonumber \\
&+& \psi^{\pm(u)}_{-}(x,\bfp)|-\frac{1}{2}~s; xP^+,\bfp\rangle\bigg],\label{fock_PS}
\end{eqnarray}
where $|\lambda_q~\lambda_S; xP^+,\bfp\rangle$ is the two-particle state with a struck quark of helicity $\lambda_q$ and a scalar diquark of helicity $\lambda_S=s$ (the spin-0 singlet diquark helicity is denoted by $s$ to distinguish it from the triplet diquark). The state with a spin-1 diquark is given as \cite{Ellis:2008in}
\begin{eqnarray}
|\nu~ A \rangle^\pm & =& \int \frac{dx~ d^2\bfp}{2(2\pi)^3\sqrt{x(1-x)}} \bigg[ \psi^{\pm(\nu)}_{++}(x,\bfp)|+\frac{1}{2}~+1; xP^+,\bfp\rangle \nonumber\\
&+& \psi^{\pm(\nu)}_{-+}(x,\bfp)|-\frac{1}{2}~+1; xP^+,\bfp\rangle +\psi^{\pm(\nu)}_{+0}(x,\bfp)|+\frac{1}{2}~0; xP^+,\bfp\rangle \nonumber \\
&+& \psi^{\pm(\nu)}_{-0}(x,\bfp)|-\frac{1}{2}~0; xP^+,\bfp\rangle + \psi^{\pm(\nu)}_{+-}(x,\bfp)|+\frac{1}{2}~-1; xP^+,\bfp\rangle \nonumber\\
&+& \psi^{\pm(\nu)}_{--}(x,\bfp)|-\frac{1}{2}~-1; xP^+,\bfp\rangle \bigg].\label{fock_PS}
\end{eqnarray}
where $|\lambda_q~\lambda_D; xP^+,\bfp\rangle$ represents a two-particle state with a quark of helicity $\lambda_q=\pm\frac{1}{2}$ and an axial-vector diquark of helicity $\lambda_D=\pm 1,0$ (triplet).
\begin{figure}
\includegraphics[width=15cm,clip]{FSI.pdf}
\caption{\label{fig_FSI} Left: tree level diagram. Right: FSI diagram for
$\gamma^* P \to q(qq)$ }
\end{figure}
\section{Final State interaction and T-odd TMDs}
The final state interaction\cite{Brodsky:2002cx} produces a non-trivial phase in
the amplitude and gives non-vanishing T-odd TMDs along with the T-even TMDs.
There are two T-odd TMDs at leading twist: $f^{\perp \nu}_{1T}(x,\bfp^2)$ (the Sivers function) and $h^{\perp \nu}_1(x,\bfp^2)$ (the Boer-Mulders function). The contribution of the FSI is incorporated in the wave functions\cite{Hwang:2010dd}, which are modified by spin-dependent complex phases as follows:\\
(i) for the scalar diquark
\begin{eqnarray}
\psi^{+(u)}_+(x,\bfp)&=& N_S~\bigg[1+i \frac{e_1 e_2}{8 \pi}(\bfp^2 + B)g_1 \bigg] \varphi^{(u)}_{1}(x,\bfp),\nonumber \\
\psi^{+(u)}_-(x,\bfp)&=& N_S\bigg(- \frac{p^1+ip^2}{xM} \bigg) \bigg[1+i \frac{e_1 e_2}{8 \pi}(\bfp^2 + B)g_2\bigg] \varphi^{(u)}_{2}(x,\bfp),\label{LFWF_S}\\
\psi^{-(u)}_+(x,\bfp)&=& N_S \bigg(\frac{p^1-ip^2}{xM}\bigg) \bigg[1+i \frac{e_1 e_2}{8 \pi}(\bfp^2 + B)g_2\bigg] \varphi^{(u)}_{2}(x,\bfp),\nonumber \\
\psi^{-(u)}_-(x,\bfp)&=& N_S~ \bigg[1+i \frac{e_1 e_2}{8 \pi}(\bfp^2 + B)g_1\bigg]\varphi^{(u)}_{1}(x,\bfp),\nonumber
\end{eqnarray}
(ii) for the axial-vector diquark (for $J=+1/2$)
\begin{eqnarray}
\psi^{+(\nu)}_{+~+}(x,\bfp)&=& N^{(\nu)}_1 \sqrt{\frac{2}{3}} \bigg(\frac{p^1-ip^2}{xM}\bigg)\bigg[1+i \frac{e_1 e_2}{8 \pi}(\bfp^2 + B)g_2 \bigg] \varphi^{(\nu)}_{2}(x,\bfp),\nonumber \\
\psi^{+(\nu)}_{-~+}(x,\bfp)&=& N^{(\nu)}_1 \sqrt{\frac{2}{3}} \bigg[1+i \frac{e_1 e_2}{8 \pi}(\bfp^2 + B)g_1 \bigg] \varphi^{(\nu)}_{1}(x,\bfp),\nonumber \\
\psi^{+(\nu)}_{+~0}(x,\bfp)&=& - N^{(\nu)}_0 \sqrt{\frac{1}{3}} \bigg[1+i \frac{e_1 e_2}{8 \pi}(\bfp^2 + B)g_1 \bigg] \varphi^{(\nu)}_{1}(x,\bfp),\label{LFWF_Vp}\\
\psi^{+(\nu)}_{-~0}(x,\bfp)&=& N^{(\nu)}_0 \sqrt{\frac{1}{3}} \bigg(\frac{p^1+ip^2}{xM} \bigg) \bigg[1+i \frac{e_1 e_2}{8 \pi}(\bfp^2 + B)g_2 \bigg] \varphi^{(\nu)}_{2}(x,\bfp),\nonumber \\
\psi^{+(\nu)}_{+~-}(x,\bfp)&=& 0,\nonumber \\
\psi^{+(\nu)}_{-~-}(x,\bfp)&=& 0, \nonumber
\end{eqnarray}
and for $J=-1/2$
\begin{eqnarray}
\psi^{-(\nu)}_{+~+}(x,\bfp)&=& 0,\nonumber \\
\psi^{-(\nu)}_{-~+}(x,\bfp)&=& 0,\nonumber \\
\psi^{-(\nu)}_{+~0}(x,\bfp)&=& N^{(\nu)}_0 \sqrt{\frac{1}{3}} \bigg( \frac{p^1-ip^2}{xM} \bigg) \bigg[1+i \frac{e_1 e_2}{8 \pi}(\bfp^2 + B)g_2 \bigg] \varphi^{(\nu)}_{2}(x,\bfp),\label{LFWF_Vm}\\
\psi^{-(\nu)}_{-~0}(x,\bfp)&=& N^{(\nu)}_0\sqrt{\frac{1}{3}} \bigg[1+i \frac{e_1 e_2}{8 \pi}(\bfp^2 + B)g_1 \bigg] \varphi^{(\nu)}_{1}(x,\bfp),\nonumber \\
\psi^{-(\nu)}_{+~-}(x,\bfp)&=& - N^{(\nu)}_1 \sqrt{\frac{2}{3}} \bigg[1+i \frac{e_1 e_2}{8 \pi}(\bfp^2 + B)g_1 \bigg] \varphi^{(\nu)}_{1}(x,\bfp),\nonumber \\
\psi^{-(\nu)}_{-~-}(x,\bfp)&=& N^{(\nu)}_1 \sqrt{\frac{2}{3}} \bigg(\frac{p^1+ip^2}{xM}\bigg) \bigg[1+i \frac{e_1 e_2}{8 \pi}(\bfp^2 + B)g_2 \bigg] \varphi^{(\nu)}_{2}(x,\bfp),\nonumber
\end{eqnarray}
where
\begin{eqnarray}
g_1 &=& \int^1_0 d\alpha \frac{-1}{\alpha(1-\alpha)\bfp^2 + \alpha m_g^2 + (1-\alpha)B},\\
g_2 &=& \int^1_0 d\alpha \frac{-\alpha}{\alpha(1-\alpha)\bfp^2 + \alpha m_g^2 + (1-\alpha)B}\\
{\rm and},\nonumber\\
B &=& x(1-x)(-M^2+\frac{m^2_q}{x}+\frac{m^2_D}{1-x}).
\end{eqnarray}
$M$, $m_q$, $m_D$ and $m_g$ are the masses of the proton, the struck quark, the diquark, and the gluon, respectively. We take $m_g=0$ at the end of the calculations. $N_S$, $N^{(\nu)}_0$ and $N^{(\nu)}_1$ are normalization constants.
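Although $g_1$ and $g_2$ are separately infrared divergent for $m_g=0$, the combination $g_1-g_2$ is finite and has the closed form $-\frac{1}{\bfp^2}\ln\big(1+\frac{\bfp^2}{B}\big)$, which is the logarithm appearing in $f^\nu(x,\bfp^2)$ below. The sketch (with illustrative values of $\bfp^2$ and $B$) checks this numerically.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def g_diff(p2, B):
    """g1 - g2 = -int_0^1 (1-a) da / [a(1-a) p2 + (1-a) B], m_g = 0."""
    val, _ = quad(lambda a: -(1 - a) / (a*(1 - a)*p2 + (1 - a)*B), 0, 1)
    return val

p2, B = 0.3, 0.2   # GeV^2, illustrative
print(g_diff(p2, B), -np.log(1 + p2 / B) / p2)   # agree, ~ -3.05
\end{verbatim}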
The LFWFs $\varphi^{(\nu)}_i(x,\bfp)$ are a modified form of the soft-wall AdS/QCD prediction\cite{Gutsche:2013zia}:
\begin{eqnarray}
\varphi_i^{(\nu)}(x,\bfp)=\frac{4\pi}{\kappa}\sqrt{\frac{\log(1/x)}{1-x}}x^{a_i^\nu}(1-x)^{b_i^\nu}\exp\bigg[-\delta^\nu\frac{\bfp^2}{2\kappa^2}\frac{\log(1/x)}{(1-x)^2}\bigg],
\label{LFWF_phi}
\end{eqnarray}
which introduces the parameters $a^{\nu}_i$, $b^\nu_i$ and $\delta^\nu$. The wave
functions $\varphi_i^\nu ~(i=1,2)$ reduce to the AdS/QCD
prediction\cite{Brodsky:2007hb} for the parameters $a_i^\nu=b_i^\nu=0$ and
$\delta^\nu=1.0$.
We use the AdS/QCD scale parameter $\kappa =0.4~GeV$ as determined in \cite{Chakrabarti:2013gra} and the quarks are assumed to be massless.
The unintegrated quark-quark correlator for polarized SIDIS is defined as
\begin{eqnarray}
\Phi^{\nu [\Gamma]}(x,\textbf{p}_{\perp};S)&=&\frac{1}{2}\int \frac{dz^- d^2z_T}{2(2\pi)^3} e^{ip.z} \langle P; S|\overline{\psi}^\nu (0)\Gamma \mathcal{W}_{[0,z]} \psi^\nu (z) |P;S\rangle, \label{TMD_cor}
\end{eqnarray}
with flavour $\nu$. Summations over the color indices of the quarks are implied. $\mathcal{W}_{[0,z]}$ is the Wilson line, whose effect is incorporated in the
LFWFs in terms of the FSI. Here, $p$ is the momentum of the struck quark inside the nucleon of momentum $P$ and spin $S$, and $x~(x=p^+/P^+)$ is the longitudinal momentum fraction carried by the struck quark.
We choose the light-cone gauge $A^+=0$.
The nucleon with helicity $\lambda_N$ has spin components $S^+ = \lambda_N \frac{P^+}{M},~ S^- = \lambda_N \frac{P^-}{M},$ and $ S_T $.
At leading twist, the T-odd TMDs are defined as
\begin{eqnarray}
\Phi^{\nu [\gamma^+]}(x,\textbf{p}_{\perp};S)&=& ... - \frac{\epsilon^{ij}_Tp^i_\perp S^j_T}{M}f^{\perp \nu} _{1T}(x,\textbf{p}_{\perp}^2),\label{Phi_1}\\
\Phi^{\nu [i \sigma^{j +}\gamma^5]}(x,\textbf{p}_{\perp};S)& = &
... + \frac{\epsilon_T^{ij}p^i_{\perp}}{M}h^{\perp \nu}
_1(x,\textbf{p}_{\perp}^2),\label{Phi_3}
\end{eqnarray}
where the ellipses indicate the terms involving T-even TMDs.
Using Eq.~(\ref{PS_state}) in Eq.~(\ref{TMD_cor}), the correlators for
a transversely polarized proton can be written in terms of overlap representations as
\begin{eqnarray}
\Phi^{\nu [\gamma^+]}(x,\textbf{p}_{\perp};\uparrow)&=& \frac{1}{2} \bigg[ C^2_S \frac{1}{16 \pi^3} \sum_{\lambda_q}\sum_{\lambda_N} \sum_{\lambda^\prime_N} \psi^{\lambda_N \dagger}_{\lambda_q}(x,\bfp)\psi^{\lambda^\prime_N}_{\lambda_q}(x,\bfp)\bigg]^\nu \nonumber\\
&&\hspace{1.5cm} + \frac{1}{2} \bigg[ C^2_A \frac{1}{16 \pi^3} \sum_{\lambda_q} \sum_{\lambda_D}\sum_{\lambda_N} \sum_{\lambda^\prime_N} \psi^{\lambda_N \dagger}_{\lambda_q \lambda_D}(x,\bfp)\psi^{\lambda^\prime_N}_{\lambda_q \lambda_D}(x,\bfp)\bigg]^\nu, \label{OLR_Siv}\\
\Phi^{\nu [i \sigma^{1 +}\gamma^5]}(x,\textbf{p}_{\perp};\uparrow)&=& \frac{1}{2} \bigg[ C^2_S \frac{1}{16 \pi^3} \sum_{\lambda_q}\sum_{\lambda^\prime_q} \sum_{\lambda_N} \psi^{\lambda_N \dagger}_{\lambda_q}(x,\bfp)\psi^{\lambda_N}_{\lambda^\prime_q}(x,\bfp)\bigg]^\nu \nonumber\\
&&\hspace{1.5cm} + \frac{1}{2} \bigg[ C^2_A \frac{1}{16 \pi^3} \sum_{\lambda_q}\sum_{\lambda^\prime_q} \sum_{\lambda_D}\sum_{\lambda_N} \psi^{\lambda_N \dagger}_{\lambda_q \lambda_D}(x,\bfp)\psi^{\lambda_N}_{\lambda^\prime \lambda_D}(x,\bfp)\bigg]^\nu. \label{OLR_BM}
\end{eqnarray}
where $\lambda_q,\lambda_D=\pm$ denote the helicities of the quark and the diquark, respectively. In Eq.~(\ref{OLR_BM}) the quark polarization is taken along the $x$-axis, i.e., $j=1$. The first term on the right-hand side is for the scalar diquark and the second term corresponds to the vector diquark.
Note that the first terms on the right-hand side of Eqs.~(\ref{OLR_Siv},\ref{OLR_BM}) vanish for the $d$ quark, since $N^d_S=0$ in the scalar wave functions. $C^2_A$ in the second term stands for the coefficients $C^2_V$ and $C^2_{VV}$ for the $u$ quark and the $d$ quark, respectively.
Comparing Eqs.~(\ref{Phi_1},\ref{Phi_3}) with Eqs.~(\ref{OLR_Siv},\ref{OLR_BM}), the Sivers function $f_{1T}^{\perp\nu}(x,\bfp^2)$ and the Boer-Mulders function can be written in the LFQDM as
\begin{eqnarray}
f_{1T}^{\perp \nu}(x,\bfp^2)&=& \bigg(C^2_S N^{\nu 2}_S -C^2_A \frac{1}{3}N^{\nu 2}_0 \bigg) f^\nu(x,\bfp^2), \label{siv_TMD}\\
h_{1}^{\perp \nu}(x,\bfp^2)&=& \bigg(C^2_S N^{\nu 2}_S + C^2_A \big(\frac{1}{3}N^{\nu 2}_0 + \frac{2}{3}N^{\nu 2}_1\big)\bigg) f^\nu(x,\bfp^2). \label{BM_TMD}
\end{eqnarray}
where
\begin{eqnarray}
f^\nu(x,\bfp^2) &=& - C_F \alpha_s \bigg[\bfp^2 + x(1-x)(-M^2+\frac{m_D^2}{1-x}+\frac{m_q^2}{x})\bigg] \frac{1}{\bfp^2} \nonumber \\
&& \hspace{1cm} \times \ln\bigg[1+\frac{\bfp^2}{ x(1-x)(-M^2+\frac{m_D^2}{1-x}+\frac{m_q^2}{x})}\bigg] \nonumber\\
&& \hspace{2cm} \times \frac{\ln(1/x)}{\pi \kappa^2}
x^{a^{\nu}_1 +a^{\nu}_2-1}(1-x)^{b^{\nu}_1+b^{\nu}_2-1} \exp\bigg[- \delta^\nu \frac{\bfp^2\ln(1/x)}{\kappa^2(1-x)^2}\bigg],
\end{eqnarray}
with struck quark mass $m_q$ and diquark mass $m_D$. The final state
interaction replaces the gluon-exchange strength as $\frac{e_1e_2}{4\pi} \rightarrow -
C_F\alpha_s$, where $e_1$ and $e_2$ are the color charges of the struck quark and the diquark, respectively. The values of the parameters $a^\nu_i, b^\nu_i~(i=1,2)$ and $\delta^\nu$ are given in \cite{Maji:2017bcz}.
\begin{figure}[htbp]
\begin{minipage}[c]{0.98\textwidth}
\includegraphics[width=7.5cm,clip]{xmSiv_u_0p8_Bacc.pdf}
\includegraphics[width=7.5cm,clip]{xmSiv_d_0p8_Bacc.pdf}
\end{minipage}
\caption{\label{fig_siv_mu0} $xf^{\perp(1)}_{1T}(x)$ shown for $u$ and
$d$ quarks
at the initial scale $\mu_0=0.8 ~GeV$. Red continuous lines represent the model result in the LFQDM and blue dashed lines represent the result in the spectator model \cite{Bacchetta:2008af}.}
\end{figure}
\begin{figure}[htbp]
\begin{minipage}[c]{0.98\textwidth}
\includegraphics[width=7.5cm,clip]{xDSiv_Ans_u.pdf}
\includegraphics[width=7.5cm,clip]{xDSiv_Ans_d.pdf}
\end{minipage}
\caption{\label{fig_siv_1GeV} $x\Delta^N f^{(1)}(x)$ shown for $u$ and $d$ quarks
at the scale $\mu=1 ~GeV$. Our model result is shown by red continuous lines. Blue dashed lines represent the phenomenological extraction \cite{Anselmino:2012aa} from the best fit of the Sivers asymmetries measured by the HERMES \cite{Airapetian:2009ae} and COMPASS \cite{Anselmino:2011gs, Alekseev:2008aa} collaborations.}
\end{figure}
The moment of the Sivers function, defined as
\begin{eqnarray}
f^{\perp(1)}_{1T}(x) = \int d^2 p_\perp \frac{p^2_\perp}{2 M^2} f^{\perp}_{1T}(x,\bfp^2),
\end{eqnarray}
is shown in Fig.\ref{fig_siv_mu0} at the initial scale and compared with the spectator model \cite{Bacchetta:2008af}. Our model result for the $u$ quark does not have a positive peak like the spectator model. The scale evolution of the distributions is not included in the spectator model.
According to the Burkardt sum rule \cite{Burkardt:2004ur}, the net transverse Sivers
momentum summed over all the constituents is zero. In the
quark-diquark model, the constituents are quarks ($q$) and diquarks ($D$) only,
and the statement can be written as
\begin{eqnarray}
\sum_{i=q,D}\langle k_\perp^i\rangle=0.
\end{eqnarray}
The sum rule in terms of Sivers function can be written
as\cite{Efremov:2004tp}
\begin{eqnarray}
\sum_{i=q,D}\int dxf^{\perp(1)}_{1T}(x) =0.
\end{eqnarray}
In a scalar diquark model, it was shown\cite{Goeke:2006ef} that the Sivers functions
for the quark and diquark are related by
\begin{eqnarray} f_{1T}^{\perp D}(x,\bfp^2)=-f_{1T}^{\perp q}(1-x,\bfp^2). \end{eqnarray}
The same relation also holds in our model when averaged over the vector diquark
polarizations and the Burkardt sum rule is satisfied.
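Indeed, with this relation the diquark contribution cancels that of the quark after the change of variables $x\rightarrow 1-x$ in the moment integral:
\begin{eqnarray}
\sum_{i=q,D}\int^1_0 dx f^{\perp(1)}_{1T}(x) = \int^1_0 dx \Big[ f^{\perp(1) q}_{1T}(x) - f^{\perp(1) q}_{1T}(1-x)\Big] = 0.
\end{eqnarray}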
In Fig.\ref{fig_siv_1GeV}, $x\Delta^N f^{(1)}(x)$ is presented at the scale $\mu=1~GeV$ and compared with the phenomenological fit from the HERMES and COMPASS data. The moment of the Sivers function $\Delta^N f^{(1)}(x)$ is defined as
\begin{eqnarray}
\Delta^N f^{(1)}(x)= \int d^2p_\perp (\frac{p_\perp}{4 M}) \Delta^N f_{\nu/P^\uparrow}(x,\bfp),
\end{eqnarray}
where
\begin{eqnarray}
\Delta^N f_{\nu/P^\uparrow}(x,\bfp)=(-\frac{2 p_\perp}{ M}) f^\perp_{1T}(x,\bfp^2).
\end{eqnarray}
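Combining the two definitions, this moment reduces, up to a sign, to the $p^2_\perp$-weighted moment defined earlier:
\begin{eqnarray}
\Delta^N f^{(1)}(x)= -\int d^2p_\perp \frac{p^2_\perp}{2 M^2} f^{\perp}_{1T}(x,\bfp^2) = - f^{\perp(1)}_{1T}(x).
\end{eqnarray}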
\begin{figure}[htbp]
\begin{minipage}[c]{0.98\textwidth}
\includegraphics[width=7.5cm,clip]{xmBM_u_0p8_Bacc.pdf}
\includegraphics[width=7.5cm,clip]{xmBM_d_0p8_Bacc.pdf}
\end{minipage}
\caption{\label{fig_BM_mu0} $xh^{\perp(1)}_{1}(x)$ shown for $u$ and $d$ quarks
at the initial scale $\mu_0=0.8 ~GeV$. Red continuous lines represent the model result in the LFQDM and blue dashed lines represent the result in the spectator model \cite{Bacchetta:2008af}.}
\end{figure}
In Fig.\ref{fig_BM_mu0}, we show our model result for the moment of the Boer-Mulders function, defined as
\begin{eqnarray}
h^{\perp(1)}_{1}(x) = \int d^2 p_\perp \frac{p^2_\perp}{2 M^2} h^{\perp}_{1}(x,\bfp^2),
\end{eqnarray}
at the initial scale and compare with the spectator model.
In this model, we observe
\begin{eqnarray}
|h^\perp_1(x,\bfp^2)|> |f^\perp_{1T}(x,\bfp^2)|.
\end{eqnarray}
From Eq.(\ref{siv_TMD}) and Eq.(\ref{BM_TMD}), we can easily see that
Boer-Mulders function is proportional to the Sivers function. In fact,
Boer-Mulders function is parametrized\cite{Barone:2009hw} as
\begin{eqnarray}
h^{\perp\nu}_1(x,\bfp^2) \simeq \lambda^\nu f^{\perp\nu}_{1T}(x,\bfp^2).\label{Lq}
\end{eqnarray}
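In the LFQDM this proportionality is exact at the model scale: the common function $f^\nu(x,\bfp^2)$ cancels in the ratio of Eq.(\ref{BM_TMD}) to Eq.(\ref{siv_TMD}), and the constant is fixed by the normalization coefficients,
\begin{eqnarray}
\lambda^\nu = \frac{C^2_S N^{\nu 2}_S + C^2_A \big(\frac{1}{3}N^{\nu 2}_0 + \frac{2}{3}N^{\nu 2}_1\big)}{C^2_S N^{\nu 2}_S - C^2_A \frac{1}{3}N^{\nu 2}_0}.
\end{eqnarray}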
Table \ref{tab_lam} shows our model result for $\lambda^\nu$, compared with the fit \cite{Barone:2009hw} to the HERMES and COMPASS data for the $\cos 2\phi$ asymmetry in SIDIS. The results indicate that the Boer-Mulders functions are negative for both $u$ and $d$ quarks.
\begin{table}[ht]
\centering
\begin{tabular}{|c|c|c|}
\hline
~~ ~~&~~ $\lambda^u$ ~~&~~ $\lambda^d$ ~~\\ \hline
~~ LFQDM ~~&~~ $ 2.29 $ ~~&~~ $-1.08$ \\
~~ Phenomenological fit ~~&~~ $2.1 \pm0.1$ ~~&~~ $-1.11\pm0.02$\\ \hline
\end{tabular}
\caption{$\lambda^\nu$ of Eq.(\ref{Lq}) for $u$ and $d$ quarks in our model and from the fit \cite{Barone:2009hw} to HERMES and COMPASS data.}
\label{tab_lam}
\end{table}
\section{Sivers asymmetry and Boer-Mulders asymmetry}
The Sivers asymmetry correlates the transverse momentum of the parton with the transverse polarization of the nucleon. In SIDIS processes, the Sivers asymmetry can be extracted by incorporating the weight factor $\sin(\phi_h-\phi_S)$ as
\begin{eqnarray}
A^{\sin(\phi_h-\phi_S)}_{UT}&=&\frac{\int d\phi_h d\phi_S [d\sigma^{\ell P^\uparrow \to \ell' h X}-d\sigma^{\ell P^\downarrow \to \ell' h X}]\sin(\phi_h-\phi_S)}{\int d\phi_h d\phi_S [d\sigma^{\ell P^\uparrow \to \ell' h X}+d\sigma^{\ell P^\downarrow \to \ell' h X}]}, \label{Asy_def}
\end{eqnarray}
where the superscripts $\uparrow,\downarrow$ on $P$ represent the up and down transverse spin of the target proton. According to the QCD factorization scheme, the semi-inclusive deep inelastic scattering (SIDIS) cross-section for the one-photon exchange process
$\ell N \to \ell' h X$ is written as
\begin{eqnarray}
d\sigma^{\ell N \to \ell' h X}=\sum_\nu \hat{f}_{\nu/P}(x,\bfp;Q^2)\otimes d\hat{\sigma}^{\ell q \to \ell q} \otimes \hat{D}_{h/\nu}(z,{\bf k}_{\perp};Q^2).
\end{eqnarray}
where the second factor represents the
hard scattering part, which is calculable in pQCD. The soft part is factorized
into TMDs, denoted by $\hat{f}_{\nu/P}(x,\bfp;Q^2)$, and fragmentation functions
(FFs), denoted by $\hat{D}_{h/\nu}(z,{\bf k}_{\perp};Q^2)$. This scheme holds in the region of small
${\bf P}_{h\perp}$ and large $Q$,
$P_{h\perp}^2 \simeq \Lambda^2_{QCD} \ll Q^2 $. The quark-gluon corrections and higher order pQCD corrections become important in the large ${\bf P}_{h\perp}$ regime \cite{Bacchetta:2008af, Ji:2006br, Anselmino:2006rv}.
The TMD factorization is presented for the
SIDIS and the DY processes in \cite{Ji:2004wu,Ji:2004xq,
Collins,GarciaEchevarria:2011rb,Echevarria:2012js} and later used in
\cite{Aybat:2011zv,Aybat:2011ge,Aybat:2011ta,Anselmino:2012aa}.
The kinematic variables are defined in the $\gamma^*-N$ center of mass frame as
\begin{eqnarray}
x=\frac{Q^2}{2(P.q)}=x_B, \hspace{1.5cm}
z=\frac{P.P_h}{P.q}=z_h, \hspace{1.5cm}
y=\frac{P.q}{P.\ell}=\frac{Q^2}{s x} .
\end{eqnarray}
Here $x_B=\frac{Q^2}{2P.q}$ is the Bjorken variable, with $Q^2=-q^2$. The fractional energy
transferred by the photon in the lab system is $y$, and the
energy fraction carried by the produced hadron is $z={\bf P}_{h}^-/k^-$. The transverse momentum of the fragmenting quark is denoted as ${\bf k}_{\perp}$. The momentum of the virtual photon is $q \equiv (x_B P^+,\frac{Q^2}{x_B P^+}, \textbf{0}_\perp)$ and that of the incoming proton is $P \equiv (P^+, \frac{M^2}{P^+}, \textbf{0}_\perp)$.
The struck quark has non-zero transverse momentum $\bfp$ with the momentum $p \equiv (xP^+, \frac{p^2+|\bfp|^2}{xP^+}, \bfp)$, and the diquark carries $p_D \equiv ((1-x)P^+, \frac{p^2+|\bfp|^2}{(1-x)P^+}, -\bfp)$. In this frame, the produced hadron has a finite transverse momentum ${\bf P}_{h\perp}$.
At $\mathcal{O}(\bfp/Q)$, the relation between $\bfp, {\bf k}_{\perp}$ and ${\bf P}_{h\perp}$ is given as
${\bf k}_{\perp}={\bf P}_{h\perp}-z\bfp$.
The transverse momentum of the produced hadron makes an azimuthal angle $\phi_h$ with respect to the
lepton plane, and the transverse spin ($S_P$) of the proton has an azimuthal angle $\phi_S$.
Then the SIDIS cross-section difference \cite{Anselmino:2011ch} in the numerator can be written as
\begin{eqnarray}
\frac{d\sigma^{\ell P^\uparrow \to \ell' h X}-d\sigma^{\ell P^\downarrow \to \ell' h X}}{dx_B dy dz d^2{\bf P}_{h\perp} d\phi_S}&=& \frac{2\alpha^2}
{s x y^2}2\bigg[\frac{1+(1-y)^2}{2}\sin(\phi_h-\phi_S)F^{\sin(\phi_h-\phi_S)}_{UT}\nonumber\\
+&&\!\!\!\!\!\!\!\! (1-y)\bigg(\sin(\phi_h+\phi_S)F^{\sin(\phi_h+\phi_S)}_{UT}
+\sin(3\phi_h-\phi_S)F^{\sin(3\phi_h-\phi_S)}_{UT}\bigg)\nonumber\\
+&&\!\!\!\!\!\!\! (2-y)\sqrt{(1-y)}\bigg(\sin\phi_S
F^{\sin\phi_S}_{UT}+\sin(2\phi_h-\phi_S)F^{\sin(2\phi_h-\phi_S)}_{UT}\bigg)\bigg
].\label{N_UT}
\end{eqnarray}
The weighted structure functions, $F^{\mathcal{W}(\phi_h,\phi_S)}_{S_\ell
S}$, are defined as
\begin{eqnarray}
F^{\mathcal{W}(\phi_h,\phi_S)}_{S_\ell S}
= \sum_\nu e^2_\nu \int d^2\bfp d^2{\bf k}_{\perp} \delta^{(2)}({\bf P}_{h\perp}-z\bfp-{\bf k}_{\perp})
\mathcal{W}(\bfp,{\bf P}_{h\perp}) \hat{f}^\nu(x,\bfp)\hat{D}^\nu(z,{\bf k}_{\perp}),\label{conv}
\end{eqnarray}
where $\hat{f}^\nu(x,\bfp)$ and $\hat{D}^\nu(z,{\bf k}_{\perp})$ represent leading twist
TMDs and FFs respectively.
Integrating the numerator over $\phi_h$ and $\phi_S$ with a particular weight factor $\mathcal{W}(\phi_h,\phi_S)$, one can project out the corresponding structure function $F^{\mathcal{W}(\phi_h,\phi_S)}_{S_\ell S}$ and hence obtain the particular asymmetry. For example, the $\phi_h$ and $\phi_S$ integrations with the weight factors $\sin(\phi_h-\phi_S)$ and $\cos(2\phi_h)$ yield the Sivers asymmetry and the Boer-Mulders asymmetry, respectively.
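For instance, the weight $\sin(\phi_h-\phi_S)$ isolates the Sivers structure function in (\ref{N_UT}) because of the orthogonality of the angular modulations,
\begin{eqnarray}
\int^{2\pi}_0 d\phi_h \int^{2\pi}_0 d\phi_S \sin^2(\phi_h-\phi_S) = 2\pi^2, \hspace{1cm}
\int^{2\pi}_0 d\phi_h \int^{2\pi}_0 d\phi_S \sin(\phi_h-\phi_S)\sin(\phi_h+\phi_S) = 0,
\end{eqnarray}
which is the origin of the factors $2\pi^2$ in Eq.(\ref{Siv_Asy}) below.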
Similarly, the denominator can be written as
\begin{eqnarray}
\frac{d\sigma^{\ell P^\uparrow \to \ell' h X} + d\sigma^{\ell P^\downarrow \to \ell' h X}}{dx_B dy dz d^2{\bf P}_{h\perp} d\phi_S}&=& \frac{2\alpha^2}{s x y^2}2 \bigg[\frac{1+(1-y)^2}{2}F_{UU}+(2-y)\sqrt{1-y}\cos\phi_h F^{\cos\phi_h}_{UU} \nonumber\\
&+&(1-y)\cos 2\phi_h F^{\cos2\phi_h}_{UU}\bigg]. \label{D_UT}
\end{eqnarray}
Thus Sivers asymmetry can be written in terms of
structure functions \cite{Anselmino:2011ch} as
\begin{eqnarray}
A^{\sin(\phi_h-\phi_S)}_{UT}(x,z,{\bf P}_{h\perp},y)&=&
\frac{2\pi^2\alpha^2\frac{1+(1-y)^2}{s x y^2}
F^{\sin(\phi_h-\phi_S)}_{UT}(x,z,{\bf P}_{h\perp}) }
{2\pi^2\alpha^2\frac{1+(1-y)^2}{s x y^2}F_{UU}(x,z,{\bf P}_{h\perp})}\nonumber\\
&=& \frac{2\pi^2\alpha^2\frac{1+(1-y)^2}{s x y^2}
\sum_\nu e^2_\nu \int d^2p_\perp \{\frac{-\hat{\bf{P}}_{h\perp}.\bfp}{M}\}
f^{\perp\nu}_{1T}(x,\bfp^2) D^{h/\nu}_1(z,{\bf P}_{h}-z\bfp) }
{2\pi^2\alpha^2\frac{1+(1-y)^2}{s x y^2}\sum_\nu e^2_\nu \int d^2p_\perp f^\nu_{1}(x,\bfp^2) D^{h/\nu}_1(z,{\bf P}_{h}-z\bfp) }.\label{Siv_Asy}
\end{eqnarray}
In this model, the explicit form of the Sivers function is given in
Eq.(\ref{siv_TMD}) and the unpolarized TMD is given in \cite{Maji:2017bcz}. The
model results for the Sivers asymmetries are shown in Fig.\ref{fig_SivA_H} in the
$\pi^+$ and $\pi^-$ channels and compared with the HERMES data
\cite{Airapetian:2009ae} in the kinematical region
\begin{eqnarray}
0.023 < x < 0.4, \hspace{.6cm}
0.2 < z < 0.7, \hspace{.6cm}
0.31< y <0.95,\hspace{.6cm}
{\rm and~~~} P_{h\perp}> 0.05~GeV.\end{eqnarray}
To compare with the data, $f^{\perp \nu}_{1T}(x,\bfp^2)$ is taken at the initial
scale and $f^\nu_1(x,\bfp^2)$ is evolved to $\mu^2=2.5~GeV^2$ following the
QCD evolution \cite{Aybat:2011zv}. For another approach to the QCD evolution of
TMDs, see \cite{Echevarria:2014xaa,Echevarria:2012pw}.
Though our results qualitatively agree with the HERMES data, our
prediction in the $\pi^+$ channel is a bit smaller than the data, whereas for the
$\pi^-$ channel the model predictions are in good agreement with the data.
This may be due to the fact that our model prediction for the Sivers function
for the $u$ quark is smaller than the phenomenological fit (see Fig.\ref{fig_siv_1GeV}).
\begin{figure}[htbp]
\begin{minipage}[c]{0.98\textwidth}
\includegraphics[scale=.75]{SivAsy_N0p8_H.pdf}
\end{minipage}
\caption{\label{fig_SivA_H} Model results for the Sivers asymmetries, $A^{{\rm
sin}(\phi_h-\phi_S)}_{UT}$, shown by the continuous (red) lines for the
$\pi^+$ (upper row) and $\pi^-$ (lower row) channels and compared with the HERMES
data \cite{Airapetian:2009ae}. $f^{\perp \nu}_{1T}(x,\bfp^2)$ is taken at the
initial scale and $f^\nu_1(x,\bfp^2)$ is evolved to $\mu^2=2.5~GeV^2$
following the QCD evolution \cite{Aybat:2011zv}. The fragmentation function
$D^{h/\nu}_1(z,{\bf k}_{\perp})$ is taken as a phenomenological input \cite{Kretzer:2001pz}
at $\mu^2=2.5~GeV^2$.}
\end{figure}
\begin{figure}[htbp]
\begin{minipage}[c]{0.98\textwidth}
\includegraphics[scale=.75]{BMAsy_N0p8_H.pdf}
\end{minipage}
\caption{\label{fig_BMA_H} Model results for the Boer-Mulders asymmetries, $A^{\cos 2\phi_h}_{UU}$. The continuous (red) lines represent the model prediction and the data are measured by the HERMES collaboration \cite{Barone:2009hw,Giordano:2009hi}. $h^{\perp \nu}_{1}(x,\bfp^2)$ is taken at the initial scale and $f^\nu_1(x,\bfp^2)$ is evolved to $\mu^2=2.5~GeV^2$ following the QCD evolution \cite{Aybat:2011zv}. The fragmentation function $H^{\perp\nu}_1(z,{\bf k}_{\perp})$ is taken as a phenomenological input \cite{Anselmino:2013vqa, Anselmino:2007fs} at
$\mu^2=2.5~GeV^2$.}
\end{figure}
\begin{figure}[htbp]
\begin{minipage}[c]{0.98\textwidth}
\includegraphics[scale=.75]{CahnAsy_N0p8_H.pdf}
\end{minipage}
\caption{\label{fig_CahnA_H} Model results for $A^{\cos \phi_h}_{UU}$,
shown by the continuous (red) lines and compared with the HERMES
data \cite{Airapetian:2012yg}. $h^{\perp \nu}_{1}(x,\bfp^2)$ is taken at the initial
scale and $f^\nu_1(x,\bfp^2)$ is evolved to $\mu^2=2.5~GeV^2$ following the QCD
evolution \cite{Aybat:2011zv}. The fragmentation function
$H^{\perp\nu}_1(z,{\bf k}_{\perp})$ is taken as a phenomenological input \cite{Anselmino:2013vqa,
Anselmino:2007fs} at $\mu^2=2.5~GeV^2$.}
\end{figure}
The Boer-Mulders asymmetry can be projected out by replacing the weight factor in Eq.(\ref{Asy_def}) by $\cos(2\phi_h)$; in terms of structure functions \cite{Anselmino:2011ch} it reads
\begin{eqnarray}
A^{\cos (2\phi_h)}_{UU}&=& \frac{4\pi^2\alpha^2\frac{(1-y)}{s x y^2} F^{\cos
2\phi_h}_{UU}(x,z,{\bf P}_{h\perp}) }
{2\pi^2\alpha^2\frac{1+(1-y)^2}{s x y^2}F_{UU}(x,z,{\bf P}_{h\perp})} \nonumber\\
&=& \frac{4\pi^2\alpha^2\frac{(1-y)}{s x y^2} \sum_\nu e^2_\nu \int d^2p_\perp \{\frac{({\bf P}_{h\perp}.\bfp)-2 z (\hat{\bf{P}}_{h\perp}.\bfp)^2 +z p^2_\perp }{z M_h M}\} h^{\perp\nu}_{1}(x,\bfp^2) H^{\perp\nu}_1(z,|{\bf P}_{h}-z\bfp|) }
{2\pi^2\alpha^2\frac{1+(1-y)^2}{s x y^2}\sum_\nu e^2_\nu \int d^2p_\perp f^\nu_{1}(x,\bfp^2) D^{h/\nu}_1(z,|{\bf P}_{h}-z\bfp|) }. \nonumber\\
\end{eqnarray}
The Boer-Mulders function in this model is given in Eq.(\ref{BM_TMD}). We use the
unpolarised fragmentation function and the Collins function
$H^{\perp \nu}_1(z,{\bf k}_{\perp})$ as phenomenological inputs \cite{Anselmino:2013vqa,
Kretzer:2001pz}:
\begin{eqnarray}
D^{h/\nu}_1(z,{\bf k}_{\perp})&=&D^{h/\nu}_1(z)\frac{e^{-{\bf k}_{\perp}^2/\langle{k^2}_{\perp}\rangle}}{\pi \langle{k^2}_{\perp}\rangle},\label{FF_D1}\\
H^{\perp\nu}_1(z,{\bf k}_{\perp})&=&(\frac{z M_h}{2 k_\perp}) 2\mathcal{N}^C_\nu(z) D^{h/\nu}_1(z)h(k_\perp)\frac{e^{-{\bf k}_{\perp}^2/\langle{k^2}_{\perp}\rangle}}{\pi \langle{k^2}_{\perp}\rangle}, \label{FF_H1}
\end{eqnarray}
with
\begin{eqnarray}
\mathcal{N}^C_\nu(z)&=&N^C_\nu z^{\rho_1} (1-z)^{\rho_2}\frac{(\rho_1+\rho_2)^{(\rho_1+\rho_2)}}{\rho^{\rho_1}_1\rho^{\rho_2}_2},\\
h(k_\perp)&=&\sqrt{2 e} \frac{k_\perp}{M_h}e^{-{\bf k}_{\perp}^2/M^2_h}.
\end{eqnarray}
where $z=P_h^-/k^-$ is the energy fraction of the fragmenting quark of
momentum $\textbf{k}$ carried by the produced hadron. The values of the parameters are listed in
\cite{Anselmino:2013vqa} and $D^{h/\nu}_1(z)$ is taken from the
phenomenological extraction \cite{Kretzer:2001pz}.
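Note that the prefactor $\sqrt{2e}$ in $h(k_\perp)$ normalizes its maximum to unity: the function $k_\perp e^{-{\bf k}_{\perp}^2/M^2_h}$ peaks at $k_\perp=M_h/\sqrt{2}$, where
\begin{eqnarray}
h\big(M_h/\sqrt{2}\big)=\sqrt{2e}\ \frac{1}{\sqrt{2}}\ e^{-1/2}=1,
\end{eqnarray}
so the size of the Collins function in Eq.(\ref{FF_H1}) is controlled by $\mathcal{N}^C_\nu(z)$.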
Our model prediction for the Boer-Mulders asymmetry is shown in Fig.\ref{fig_BMA_H}. We compare our model result with the HERMES data \cite{Barone:2009hw,Giordano:2009hi} in the kinematical region
\begin{eqnarray}
0.023 < x < 1.0, \hspace{.6cm}
0.2 < z < 1.0, \hspace{.6cm}
0.3 < y <0.85,\hspace{.6cm}
{\rm and~~~} P_{h\perp}> 0.05~GeV.\end{eqnarray}
The Boer-Mulders asymmetries agree with the HERMES data within error
bars, except for the plot with respect to $z$ in the $\pi^-$ channel (Fig.\ref{fig_BMA_H}). The $A^{\cos(2\phi_h)}_{UU}$ asymmetry receives a
twist-4 contribution due to the Cahn
effect \cite{Barone:2009hw}, which is not included here.
Similarly, the $\cos\phi_h$-weighted asymmetry also receives contributions
from the Cahn effect and the Boer-Mulders function and is
defined \cite{Anselmino:2011ch} by
\begin{eqnarray}
A^{\cos \phi_h}_{UU}&=& \frac{4\pi^2\alpha^2\frac{(2-y)\sqrt{(1-y)}}{s x y^2}
F^{\cos \phi_h}_{UU}(x,z,{\bf P}_{h\perp}) }
{2\pi^2\alpha^2\frac{1+(1-y)^2}{s x y^2}F_{UU}(x,z,{\bf P}_{h\perp})} \nonumber\\
&=& \frac{4\pi^2\alpha^2\frac{(2-y)\sqrt{(1-y)}}{s x y^2} \big(-\frac{2}{Q}\big)\sum_\nu e^2_\nu \int d^2p_\perp [(\hat{\bf{P}}_{h\perp}.\bfp)f^\nu_1 D^{h/\nu}_1 +\frac{ p^2_\perp(P_{h\perp} - z \hat{\bf{P}}_{h\perp}.\bfp) }{z M_h M} h^{\perp\nu}_{1} H^{\perp\nu}_1]}{2\pi^2\alpha^2\frac{1+(1-y)^2}{s x y^2}\sum_\nu e^2_\nu \int d^2p_\perp f^\nu_{1}(x,\bfp^2) D^{h/\nu}_1(z,|{\bf P}_{h}-z\bfp|) }.
\end{eqnarray}
The model result for the asymmetry $A_{UU}^{\cos(\phi_h)}$ is shown in
Fig.\ref{fig_CahnA_H} for the $\pi^+$ and $\pi^-$ channels and compared with the
HERMES data \cite{Airapetian:2012yg}.
Recently, the Boer-Mulders function and the
Cahn effect have been extracted from the experimental data on the
$\cos 2\phi_h$ and $\cos \phi_h$ weighted asymmetries \cite{Christova:2017zxa}.
\section{Spin densities}
The spin density of unpolarised quarks with flavor $\nu$ in a transversely
polarized proton is defined \cite{Bacchetta:2004jz} as
\begin{eqnarray}
f_{\nu/P^\uparrow}(x,\bfp)= f^\nu_1(x,\bfp^2) - \frac{\bf{S}.(\hat{\bf{P}} \times \bfp )}{M} f^{\perp \nu}_{1T}(x,\bfp^2). \label{SpinD_Siv}
\end{eqnarray}
\begin{figure}[htbp]
\begin{minipage}[c]{0.98\textwidth}
\includegraphics[width=7.5cm,clip]{SpinD_u_Siv.pdf}
\includegraphics[width=7.5cm,clip]{SpinD_d_Siv.pdf}
\end{minipage}
\caption{\label{fig_SpinD_Siv} Spin density $f_{\nu/P^\uparrow}(x,\bfp)$ (Eq.\ref{SpinD_Siv}) are shown in transverse momentum plane for $u$ and $d$ quarks with $x=0.2$. The proton spin $\bf{S}$ is along the $y$-axis and the momentum of the proton $\bf{P}$ is along the $z$-direction.}
\end{figure}
Here, for the SIDIS process, we take $\hat{\bf{P}}$ along the direction of the
momentum transfer, the $\hat{z}$-axis, and the spin of the proton $\bf{S}$
along $\hat{\bf{y}}$. The spin densities $f_{\nu/P^\uparrow}(x,\bfp)$ in the
transverse momentum plane are shown in Fig.\ref{fig_SpinD_Siv} for both $u$ and
$d$ quarks at the longitudinal momentum fraction $x=0.2$. The distributions
are not symmetric, but distorted towards the left for the $u$ quark and towards the right
for the $d$ quark. This left-right distortion in the distribution was first
observed by D. Sivers \cite{Sivers:1989cc} and can be explained by the non-vanishing
Sivers function $f^{\perp \nu}_{1T}(x,\bfp^2)$. This is known as the Sivers effect,
whereby the quarks in a transversely polarized target have a transverse momentum
asymmetry in the direction perpendicular to the nucleon spin $\bf{S}$. The left
distortion is due to the negative distribution of the Sivers function for the $u$ quark
and the right distortion is due to the positive distribution of the Sivers function
for the $d$ quark. Similar distortions are observed in other model
calculations \cite{Bacchetta:2008af} as well as in lattice QCD
\cite{Gockeler:2006zu}.
Similarly, the spin density for transversely polarized quarks with flavor $\nu$
in an unpolarized proton is defined \cite{Bacchetta:2004jz} as
\begin{eqnarray}
f_{\nu^\uparrow/P}(x,\bfp)= \frac{1}{2} [f^\nu_1(x,\bfp^2) - \frac{\bf{s}.(\hat{\bf{P}} \times \bfp )}{M} h^{\perp \nu}_{1}(x,\bfp^2)]. \label{SpinD_BM}
\end{eqnarray}
where $\bf{s}$ is the spin of the quark inside the proton. The spin density $f_{\nu^\uparrow/P}(x,\bfp)$ is shown in Fig.\ref{fig_SpinD_BM} for quark spin $\bf{s}$ along $\hat{\bf{y}}$ with $x=0.2$. Since the Boer-Mulders functions are negative for both $u$ and $d$ quarks, we observe only a left-shift for both flavors, unlike for the Sivers effect, as seen in Fig.\ref{fig_SpinD_BM}.
\begin{figure}[htbp]
\begin{minipage}[c]{0.98\textwidth}
\includegraphics[width=7.5cm,clip]{SpinD_u_BM.pdf}
\includegraphics[width=7.5cm,clip]{SpinD_d_BM.pdf}
\end{minipage}
\caption{\label{fig_SpinD_BM} Spin density $f_{\nu^\uparrow/P}(x,\bfp)$ (Eq.\ref{SpinD_BM}) are shown in transverse momentum plane for $u$ and $d$ quarks with $x=0.2$. The quark spin $\bf{s}$ is along the $y$-axis and the momentum of the proton $\bf{P}$ is along the $z$-direction.}
\end{figure}
The Sivers function is related to the anomalous magnetic moment and the orbital angular momentum of partons.
The Pauli form factor, defined by the correlator with the helicity-flip vector current, is written in terms of overlap representations as
\begin{eqnarray}
-(q^1-iq^2) \frac{F^\nu_2(Q^2)}{2M} &=& \int^1_0 \frac{dx d^2 p_\perp}{16 \pi^3} \bigg[ C^2_S \sum_{\lambda_q}\sum_{\lambda_N \neq \lambda^\prime_N} \psi^{\lambda_N \dagger}_{\lambda_q}(x,\bfp)\psi^{\lambda^\prime_N}_{\lambda_q}(x,\bfp) \nonumber\\
&&\hspace{1.5cm} + C^2_A \sum_{\lambda_q} \sum_{\lambda_D}\sum_{\lambda_N \neq \lambda^\prime_N} \psi^{\lambda_N \dagger}_{\lambda_q \lambda_D}(x,\bfp)\psi^{\lambda^\prime_N}_{\lambda_q \lambda_D}(x,\bfp)\bigg] \label{F2_def}\\
&=& \int^1_0 dx \bigg(C^2_S N^{\nu 2}_S -C^2_A \frac{1}{3}N^{\nu 2}_0 \bigg) 2 T^\nu_3(x) (1-x)^3 e^{-Q^2 \frac{\ln(1/x)}{4 \kappa^2}}
\end{eqnarray}
The anomalous magnetic moment $\kappa^\nu$ can be obtained from the Pauli form factor at $Q^2=0$, $\kappa^\nu=F^\nu_2(0)$. Thus
\begin{eqnarray}
\kappa^\nu= \int^1_0 dx \kappa^\nu(x) = \int^1_0 dx \bigg(C^2_S N^{\nu 2}_S -C^2_A \frac{1}{3}N^{\nu 2}_0 \bigg) 2 T^\nu_3(x) (1-x)^3.
\end{eqnarray}
A simple relation between the Sivers function integrated over $\bfp$ and the anomalous magnetic moment is found as
\begin{eqnarray}
f^{\perp\nu}_{1T}(x) = - C_F \alpha_s \mathcal{G}^\nu(x) \kappa^\nu(x) \label{Siv_k}
\end{eqnarray}
In this model, the relation cannot be derived analytically; however, a numerical calculation gives the lensing function as
\begin{eqnarray}
\mathcal{G}^\nu(x) \simeq \frac{1}{4 (1-x)}\bigg|_{\nu=u,d}
\end{eqnarray}
A similar type of lensing function is found in \cite{Lu:2006kt}. In Ref.\cite{Bacchetta:2011gx}, $\mathcal{G}^\nu(x) \propto 1/(1-x)^\eta $, where $\eta$ is typically around 0.4 but can vary between 0.03 and 2.
The total longitudinal angular momentum of parton $\nu$ is defined in terms of the moments of the GPDs as
\begin{eqnarray}
J^\nu = \frac{1}{2} \int^1_0 dx x [H^\nu(x,0,0) + E^\nu(x,0,0)].
\end{eqnarray}
In the forward limit, the moments of the $H$ and $E$ GPDs satisfy
\begin{eqnarray}
\int^1_0 dx H^\nu(x,0,0)&=& n^\nu = \int^1_0 dx \int d^2\bfp\, f^\nu_1(x,\bfp^2),\\
\int^1_0 dx E^\nu(x,0,0)&=& \kappa^\nu.
\end{eqnarray}
where $n^u=2$ and $n^d=1$ for the proton. From isospin symmetry, the flavor anomalous magnetic moments are $\kappa^u=1.673$ and $\kappa^d=-2.033$. The GPDs in this model are discussed in \cite{Maji:2017ill}. We define $\kappa^\nu=\int dx \kappa^\nu(x)$ with $\kappa^\nu(x)= E^\nu(x,0,0)$.
Therefore, Eq.(\ref{Siv_k}) is modified as
\begin{eqnarray}
f^{\perp\nu}_{1T}(x) \simeq - C_F \alpha_s \frac{1}{4 (1-x)} E^\nu(x,0,0)\label{Siv_E}
\end{eqnarray}
Thus the longitudinal angular momentum can be calculated from the moment of
the Sivers function and the unpolarised TMD as
\begin{eqnarray}
J^\nu = \frac{1}{2} \int^1_0 dx x [f_1^\nu(x) - \frac{4(1-x)}{C_F \alpha_s} f^{\perp \nu}_{1T}(x)].
\end{eqnarray}
In this model, we obtain
\begin{eqnarray}
J^u = 0.9559 ~~ {\rm and} ~~ J^d = -0.5791.
\end{eqnarray}
The total contribution to the nucleon spin from the $u$ and $d$ quarks is $0.3768$
at the initial scale $\mu_0=0.8~ GeV$. The cloudy bag model \cite{Aidala:2012mv}
and lattice
calculations predict a total angular momentum contribution of about $0.24$ at a scale
of $\mu^2=4~ GeV^2$.
\section{Conclusions}
We have presented the results for $T$-odd TMDs namely, the Sivers and
Boer-Mulders functions in a light-front quark-diquark model of the proton and
the spin asymmetries in SIDIS associated with these functions. It is well known
that the final state interaction is responsible for producing the required complex
phase in the amplitude which gives rise to the Sivers asymmetries. Though the
proton wave function in principle cannot describe the FSI (as the participating
quark comes out of the proton state), following the proposal of Hwang \cite{Hwang:2010dd}, we have
modelled the light-front wave functions to incorporate the effects of the FSI. This is done
by extending the wave functions in the quark-diquark model to have complex phases
consistent with the SIDIS amplitudes. The complex phases in the light-front
wave functions produce the Sivers and Boer-Mulders functions. Both the Sivers and
Boer-Mulders functions and their moments are evaluated in this model and
compared with other models and phenomenological fits. The Sivers asymmetry
$A_{UT}^{\sin(\phi_h-\phi_S)}$ in the $\pi^+$ channel is found to be a bit smaller
than the experimental data; better agreement is observed for the Boer-Mulders
asymmetry $A_{UU}^{\cos(2\phi_h)}$ in both the $\pi^+$ and $\pi^-$ channels.
The Sivers and Boer-Mulders functions help us to understand the spin structure of
the proton at the parton level. Due to the Sivers effect, the spin density of
an unpolarised quark in a transversely polarized proton is found to be
asymmetric in the direction perpendicular to the nucleon spin. The distortions
due to the Sivers effect in our model for both $u$ and $d$ quarks are consistent
with the results found in other models and lattice QCD. Since the
Sivers function is negative for the $u$ quark and positive for the $d$ quark, the
distortion for the $u$ quark is in the opposite direction to that of the $d$ quark. Similarly,
the Boer-Mulders function produces a distortion in the spin density of a
transversely polarized quark in an unpolarized proton. Since the
Boer-Mulders function has the same sign for both $u$ and $d$ quarks, the
distortions in the spin densities are also in the same direction. The Sivers
function integrated over the transverse momentum is related to the anomalous
magnetic moment through the lensing function. Our model predicts that the
lensing function should go as $(1-x)^{-1}$.
\section{Introduction}
In this paper we are concerned with the incompressible Navier-Stokes
equations in the rotational framework
\begin{equation}
\left\{ \begin{aligned} & \frac{\partial u}{\partial t}-\Delta u+\Omega
e_{3}\times u+ \left(u\cdot \nabla\right) u + \nabla p = 0 \text{ in }\
\mathbb{R}^{3}\times (0,\infty)\\ & \nabla\cdot u = 0 \text{ in }\
\mathbb{R}^{3}\times (0,\infty)\\ & u(x,0) = u_{0}(x) \text{ in }\
\mathbb{R}^{3} \end{aligned}\right. , \label{NSC}
\end{equation}
where $u=u(x,t)=\left( u_{1}(x,t),u_{2}(x,t),u_{3}(x,t)\right) $ and
$p=p(x,t)$ stand for the velocity field and the pressure of the fluid,
respectively. The initial data $u_{0}=\left(
u_{0,1}(x),u_{0,2}(x),u_{0,3}(x)\right)$ satisfies the divergence-free
condition $\nabla \cdot u_{0}=0$. The letter $\Omega \in \mathbb{R}$
represents the Coriolis parameter while its modulus $|\Omega |$ is the speed
of rotation around the vertical vector $e_{3}=(0,0,1)$. For more details
about the physical model, we refer the reader to the book \cite{Chemin-Gallagher2006}. Here, we will use the same notation for spaces of
scalar and vector functions, e.g., we write $u_{0}\in H^{s}$ instead of
$u_{0}\in (H^{s})^{3}$.
Invoking Duhamel's principle, the system (\ref{NSC}) can be converted to the
integral equation (see e.g. \cite{HieberShibata2010})
\begin{equation}
u(t)=T_{\Omega }(t)u_{0}-\mathfrak{B}(u,u)(t), \label{integralform}
\end{equation}
where the bilinear operator $\mathfrak{B}$ is defined by
\begin{equation}
\mathfrak{B}(u,v)(t)=\int_{0}^{t}T_{\Omega }(t-\tau )\mathbb{P}\nabla \cdot
(u\otimes v)(\tau )\ d\tau . \label{bili-aux-1}
\end{equation}
In (\ref{bili-aux-1}), $\mathbb{P}=(\delta _{i,j}+R_{i}R_{j})_{1\leq i,j\leq
3}$ is the Leray-Helmholtz projector, $\{R_{i}\}_{1\leq i\leq 3}$ are the
Riesz transforms, and $T_{\Omega }(\cdot )$ stands for the semigroup
corresponding to the linear part of (\ref{NSC}) (Stokes-Coriolis semigroup).
More explicitly, we have that
\begin{equation*}
T_{\Omega }(t)f=\left[ \cos \left( \Omega \frac{\xi _{3}}{|\xi |}t\right)
e^{-|\xi |^{2}t}I\widehat{f}(\xi )+\sin \left( \Omega \frac{\xi _{3}}{|\xi |}t\right) e^{-|\xi |^{2}t}\mathcal{R}(\xi )\widehat{f}(\xi )\right]^{\vee}
\end{equation*}
for divergence-free vector fields $f$, where $I$ is the identity matrix in
$\mathbb{R}^{3}$ and $\mathcal{R}(\xi )$ is the skew-symmetric matrix symbol
\begin{equation*}
\mathcal{R}(\xi )=\frac{1}{|\xi |}\left(
\begin{array}{ccc}
0 & \xi _{3} & -\xi _{2} \\
-\xi _{3} & 0 & \xi _{1} \\
\xi _{2} & -\xi _{1} & 0
\end{array}
\right) \ \text{ for }\ \xi \in \mathbb{R}^{3}\setminus \{0\}.
\end{equation*}
Vector-fields $u$ satisfying the formulation (\ref{integralform}) are called
mild solutions for (\ref{NSC}).
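Note that the bilinear operator (\ref{bili-aux-1}) encodes the convective term of (\ref{NSC}): for divergence-free fields one has, componentwise,
\begin{equation*}
\left[ \nabla \cdot (u\otimes v)\right] _{j}=\sum_{i=1}^{3}\partial _{i}(u_{i}v_{j})=(u\cdot \nabla )v_{j}+v_{j}(\nabla \cdot u)=(u\cdot \nabla )v_{j},
\end{equation*}
so that (\ref{integralform}) is the standard Duhamel formulation.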
In the last decades, the global well-posedness of models in fluid mechanics
has been studied by several authors of the mathematical community,
particularly in physical models of rotating fluids such as the system (\ref{NSC}). In what follows, we give a brief review of some of these results. We
start with the works of Babin, Mahalov and Nicolaenko \cite{BabinMahalovNicolaenko1997,BabinMahalovNicolaenko1999,BabinMahalovNicolaenko2001}, who showed the global existence and regularity of solutions for (\ref{NSC}) with periodic initial velocity provided that the speed of rotation
$|\Omega |$ is sufficiently large. In \cite{Chemin-Gallagher2002,Chemin-Gallagher2006}, Chemin \textit{et al}.
obtained a unique global strong Leray-type solution for large $|\Omega |$
and initial data $u_{0}(x)\in L^{2}(\mathbb{R}^{2})^{3}+H^{\frac{1}{2}}(\mathbb{R}^{3})^{3}$ (notice that the first part of $u_{0}(x)$ depends only on
$(x_{1},x_{2})$, where $x=(x_{1},x_{2},x_{3})$). For almost periodic initial
data and using the $l^{1}$-norm of amplitudes with sum closed frequency set,
Yoneda \cite{Yoneda2011} proved the existence of solutions for large times
and sufficiently large $|\Omega |$. Considering the mild (semigroup)
formulation, the global well-posedness in homogeneous Sobolev spaces $\dot{H}^{s}(\mathbb{R}^{3})$ with $1/2\leq s<3/4$ was obtained by Iwabuchi and
Takada \cite{Takada2013}. They considered sufficiently large $|\Omega |$
(depending on the size of $\Vert u_{0}\Vert _{\dot{H}^{s}}$) when $1/2<s<3/4$. In the critical case $s=1/2$, they used a class of precompact subsets in
$\dot{H}^{1/2}(\mathbb{R}^{3})$ in order to get similar results. Local
versions ($T$ large but finite) of the results in \cite{Takada2013} can be
found in \cite{IwabuchiTakada2015} for $1/2<s<5/4.$
Another type of results for (\ref{NSC}) is the uniform global solvability
(or well-posedness) in which the smallness condition on $u_{0}$ is
independent of $|\Omega |$. Giga\textit{\ et al}. \cite{Giga2008} obtained
the uniform global solvability for small data $u_{0}$ in $FM_{0}^{-1}
\mathbb{R}^{3})=\mbox{div}(FM_{0}(\mathbb{R}^{3}))^{3}$, where $FM_{0}
\mathbb{R}^{3})$ denotes the space of the finite Radon measures with no
point mass at the origin. The space $FM_{0}^{-1}(\mathbb{R}^{3})$ is an
example of critical space for the 3D Navier-Stokes equations (NS) ((NSC)
with $\Omega =0$), i.e., its norm is invariant by the scaling
u_{0}^{\lambda }(x)\rightarrow \lambda u_{0}(\lambda x)$, for all $\lambda
>0 $. The uniform global well-posedness for small $u_{0}$ in the Sobolev
space $H^{\frac{1}{2}}(\mathbb{R}^{3})$ was proved by Hieber and Shibata
\cite{HieberShibata2010} and for small initial data in the critical
Fourier-Besov space $\dot{FB}_{p,\infty }^{2-\frac{3}{p}}(\mathbb{R}^{3})$
with $1<p\leq \infty $ and in $\dot{FB}_{1,1}^{-1}(\mathbb{R}^{3})\cap \dot
FB}_{1,1}^{0}(\mathbb{R}^{3})$ was proved by Konieczny and Yoneda \cit
{Konieczny2011}. Iwabuchi and Takada \cite{IwabuchiTakada2014} obtained the
uniform global well-posedness with small initial velocity in the
Fourier-Besov $\dot{FB}_{1,2}^{-1}(\mathbb{R}^{3})$ as well as the
ill-posedness in $\dot{FB}_{1,q}^{-1}(\mathbb{R}^{3})$ for $2<q\leq \infty$.
These results were extended to the framework of critical
Fourier-Besov-Morrey spaces by Almeida, Ferreira and Lima \cit
{Almeida-Fer-Lima}.
Concerning the asymptotic behavior for (\ref{NSC}), we quote the work of
Iwabuchi, Mahalov and Takada \cite{IwabuchiMahalovTakada2016}, where they
treated the high-rotating cases and proved the asymptotic stability of large
time periodic solutions for large initial perturbations. We also mention
\cite{Chemin-Gallagher2006} where the reader can find convergence results of
solutions towards a two-dimensional model as $|\Omega |\rightarrow \infty $
(see also references therein).
It is worthy to highlight that global existence of strong, mild or smooth
solutions for the Navier-Stokes equations ($\Omega =0$), without assume
smallness conditions on $u_{0}$, are outstanding open problems. Thus, global
solvability results for (\ref{NSC}) with arbitrary data in suitable spaces
show an interesting \textquotedblleft smoothing effect\textquotedblright\
due to the Coriolis parameter $\Omega $.
In this paper, we show the global well-posedness of (\ref{NSC}) for large
$|\Omega |$ and arbitrary initial data $u_{0}$ belonging to homogeneous Besov
spaces $\dot{B}_{2,q}^{s}(\mathbb{R}^{3})$, where $1\leq q\leq \infty $ and
$1/2\leq s<3/4$. In fact, for the cases $s\in (1/2,3/4)$ with $q=\infty $ and
$s=1/2$ with $q\in \lbrack 2,\infty ],$ we introduce the suitable
initial-data classes $\mathcal{I}$ and $\mathcal{F}_{0}$ (see (\ref{aux-space-77}) and (\ref{aux-space-772})), respectively, whose definitions
depend on the Stokes-Coriolis semigroup and Besov spaces. Also, we analyze
the asymptotic behavior of solutions as $\left\vert \Omega \right\vert
\rightarrow \infty $. For the case $1/2<s<3/4$, we use some space-time
estimates of Strichartz type for the Stokes-Coriolis semigroup, and also the
condition of $|\Omega |$ being large with respect to the $\dot{B}_{2,q}^{s}$-norm ($\mathcal{I}$-norm for $q=\infty $) of the initial data $u_{0}$ (a
power-type dependence). For the critical case $s=1/2$, $|\Omega |$ depends
on initial data belonging to precompact sets $D\subset \mathcal{F}_{0}$. In
view of the strict continuous inclusions $\dot{H}^{1/2}\subset \mathcal{F}_{0}$ and
\begin{equation*}
\dot{B}_{2,1}^{s}\subset \dot{B}_{2,q_{1}}^{s}\subset \dot{H}^{s}=\dot{B}_{2,2}^{s}\subset \dot{B}_{2,q_{2}}^{s}\subset \dot{B}_{2,\infty }^{s},
\end{equation*}
for $1\leq q_{1}\leq 2\leq q_{2}<\infty ,$ our results provide a new initial
data class for the global well-posedness of (\ref{NSC}) and, in particular,
a class larger than that of \cite{Takada2013}.
Throughout this paper, we denote by $C>0$ constants that may differ even on
the same line. Also, the notation $C=C(a_{1},\ldots ,a_{k})$ indicates that
$C$ depends on the quantities $a_{1},\ldots ,a_{k}.$
The outline of this paper is as follows. Section 2 is devoted to review some
basic facts about homogeneous Besov spaces and certain mixed space-time
functional settings. Estimates in Besov norms for the semigroup $T_{\Omega }$
and the Duhamel integral term in (\ref{integralform}) are the subject of
Section 3. In Section 4, we state and prove our global well-posedness and
asymptotic behavior results for (\ref{NSC}).
\section{Function spaces}
This section is devoted to some preliminaries about homogeneous Besov spaces
and some mixed space-time functional settings.
We start with the definition of the homogeneous Besov spaces. For this, let
$\mathcal{S}(\mathbb{R}^{3})$ and $\mathcal{S}^{\prime }(\mathbb{R}^{3})$
stand for the Schwartz class and the space of tempered distributions,
respectively. Let $\widehat{f}$ denote the Fourier transform of $f\in
\mathcal{S}^{\prime }(\mathbb{R}^{3})$.
Consider a nonnegative radial function $\phi _{0}\in \mathcal{S}(\mathbb{R}^{3})$ satisfying
\begin{equation*}
0\leq \widehat{\phi }_{0}(\xi )\leq 1\text{ for all }\xi \in \mathbb{R}^{3},\ \mbox{supp}\ \widehat{\phi }_{0}\subset \{\xi \in \mathbb{R}^{3}:\frac{1}{2}\leq |\xi |\leq 2\}\text{ and }\sum_{j\in \mathbb{Z}}\widehat{\phi }_{j}(\xi )=1\ \ \mbox{for all}\ \ \xi \in \mathbb{R}^{3}\backslash \{0\},
\end{equation*}
where $\phi _{j}(x)=2^{3j}\phi _{0}(2^{j}x).$ For $f\in \mathcal{S}^{\prime
}(\mathbb{R}^{3}),$ the Littlewood-Paley operator $\{\Delta _{j}\}_{j\in
\mathbb{Z}}$ is defined by $\Delta _{j}f=\phi _{j}\ast f$.
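On the Fourier side, $\Delta _{j}$ acts as a smooth frequency localization around $|\xi |\sim 2^{j}$:
\begin{equation*}
\widehat{\Delta _{j}f}(\xi )=\widehat{\phi }_{j}(\xi )\widehat{f}(\xi )=\widehat{\phi }_{0}(2^{-j}\xi )\widehat{f}(\xi ),\ \ \mbox{supp}\ \widehat{\Delta _{j}f}\subset \{\xi :2^{j-1}\leq |\xi |\leq 2^{j+1}\}.
\end{equation*}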
Let $s\in \mathbb{R}$ and $1\leq p,q\leq \infty $ and let $\mathcal{P}$
denote the set of polynomials with $3$ variables. The homogeneous Besov
space, denoted by ${\dot{B}}_{p,q}^{s}(\mathbb{R}^{3})$, is defined as the
set of all $f\in \mathcal{S}^{\prime }(\mathbb{R}^{3})/\mathcal{P}$ such
that the following norm is finite
\begin{equation*}
\Vert f\Vert _{{\dot{B}}_{p,q}^{s}}=\Vert \{2^{sj}\Vert \Delta _{j}f\Vert
_{L^{p}}\}_{j\in \mathbb{Z}}\Vert _{l^{q}(\mathbb{Z})}.
\end{equation*}
The pair $(\dot{B}_{p,q}^{s},\Vert \cdot \Vert _{{\dot{B}}_{p,q}^{s}})$ is a
Banach space. We will denote abusively distributions in $\mathcal{S}^{\prime
}(\mathbb{R}^{3})$ and their equivalence classes in $\mathcal{S}^{\prime }(\mathbb{R}^{3})/\mathcal{P}$ in the same way. The space $\mathcal{S}_{0}(\mathbb{R}^{3})$ of functions in $\mathcal{S}(\mathbb{R}^{3})$ whose Fourier
transforms are supported away from $0$ is dense in $\dot{B}_{p,q}^{s}(\mathbb{R}^{3})$ for $1\leq p,q<\infty $. For more details, see \cite{BahouriCheminDanchin2011}.
Using a duality argument, the norm $\Vert u\Vert _{\dot{B}_{p,q}^{s}}$ can
be estimated as follows
\begin{equation}
\Vert u\Vert _{\dot{B}_{p,q}^{s}}\leq C\sup_{\phi \in Q_{p^{\prime
},q^{\prime }}^{-s}}\left\vert \langle u,\phi \rangle \right\vert
\label{duality-1}
\end{equation}
where $Q_{p^{\prime },q^{\prime }}^{-s}(\mathbb{R}^{3})$ denotes the set of
all functions $\phi \in \mathcal{S}(\mathbb{R}^{3})\cap \dot{B}_{p^{\prime
},q^{\prime }}^{-s}$ such that $\left\Vert \phi \right\Vert _{\dot{B}_{p^{\prime },q^{\prime }}^{-s}}\leq 1$ and $\langle \cdot ,\cdot \rangle $
is defined by
\begin{equation*}
\langle u,\phi \rangle :=\sum_{|j-j^{\prime }|\leq 1}\int_{\mathbb{R}^{3}}\Delta _{j}u(x)\Delta _{j^{\prime }}\phi (x)\ dx
\end{equation*}
for $u\in \dot{B}_{p,q}^{s}(\mathbb{R}^{3})$ and $\phi \in Q_{p^{\prime
},q^{\prime }}^{-s}(\mathbb{R}^{3})$.
The next lemma contains a Leibniz type rule in the framework of Besov spaces.
\begin{lemma}[see \protect\cite{Chae2004}]
\label{product} Let $s>0,$ $1\leq q\leq \infty $ and $1\leq
p,p_{1},p_{2},r_{1},r_{2}\leq \infty $ be such that $\frac{1}{p}=\frac{1}{p_{1}}+\frac{1}{p_{2}}=\frac{1}{r_{1}}+\frac{1}{r_{2}}$. Then, there exists
a universal constant $C>0$ such that
\begin{equation*}
\Vert fg\Vert _{\dot{B}_{p,q}^{s}}\leq C\left( \Vert f\Vert
_{L^{p_{1}}}\Vert g\Vert _{\dot{B}_{p_{2},q}^{s}}+\Vert g\Vert
_{L^{r_{1}}}\Vert f\Vert _{\dot{B}_{r_{2},q}^{s}}\right) .
\end{equation*}
\end{lemma}
Considering in particular $p=r$ and $p_{i}=r_{i}$ in Lemma \ref{product}, we
have that
\begin{equation*}
\Vert fg\Vert _{\dot{B}_{r,q}^{s}}\leq C\left( \Vert f\Vert
_{L^{r_{1}}}\Vert g\Vert _{\dot{B}_{r_{2},q}^{s}}+\Vert g\Vert
_{L^{r_{1}}}\Vert f\Vert _{\dot{B}_{r_{2},q}^{s}}\right) .
\end{equation*}
If $\frac{1}{r}=\frac{2}{r_{2}}-\frac{s}{3}$, then $\frac{1}{r_{1}}=\frac{1}{r_{2}}-\frac{s}{3}$ and we can use the embedding $\dot{B}_{r_{2},q}^{s}(\mathbb{R}^{3})\hookrightarrow L^{r_{1}}(\mathbb{R}^{3})$ to obtain
\begin{equation}
\Vert fg\Vert _{\dot{B}_{r,q}^{s}}\leq C\Vert f\Vert _{\dot{B}_{r_{2},q}^{s}}\Vert g\Vert _{\dot{B}_{r_{2},q}^{s}}. \label{remark1}
\end{equation}
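As an illustration of (\ref{remark1}) in the critical setting used below, the choice $s=1/2$ and $r_{2}=3$ gives $\frac{1}{r}=\frac{2}{3}-\frac{1}{6}=\frac{1}{2}$, and hence
\begin{equation*}
\Vert fg\Vert _{\dot{B}_{2,q}^{1/2}}\leq C\Vert f\Vert _{\dot{B}_{3,q}^{1/2}}\Vert g\Vert _{\dot{B}_{3,q}^{1/2}},
\end{equation*}
which pairs naturally with the $L^{4}(0,\infty ;\dot{B}_{3,q}^{1/2})$-norms appearing in Section 3.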
The reader is referred to \cite{BergLofstrom1976} for more details on $\dot{B}_{p,q}^{s}$-spaces and their properties.\bigskip
We finish this section by recalling some mixed space-time functional spaces.
Let $\theta \geq 1$; we denote by $L^{\theta }(0,\infty ;\dot{B}_{p,q}^{s}(\mathbb{R}^{3}))$ the set of all distributions $f$ such that
\begin{equation*}
\Vert f\Vert _{L^{\theta }(0,\infty ;\dot{B}_{p,q}^{s})}=\left\Vert \Vert
f(t)\Vert _{\dot{B}_{p,q}^{s}}\right\Vert _{L_{t}^{\theta }(0,\infty
)}<\infty .
\end{equation*}
Also, we denote by $\tilde{L}^{\theta }(0,\infty ;\dot{B}_{p,q}^{s}(\mathbb{R}^{3}))$ the set of all distributions $f$ such that
\begin{equation*}
\Vert f\Vert _{\tilde{L}^{\theta }(0,\infty ;\dot{B}_{p,q}^{s})}=\left\Vert
\{2^{js}\Vert \Delta _{j}f\Vert _{L^{\theta }(0,\infty ;L^{p})}\}_{j\in
\mathbb{Z}}\right\Vert _{l^{q}(\mathbb{Z})}<\infty .
\end{equation*}
As a consequence of the Minkowski inequality, we have the following embeddings
\begin{equation}
\begin{split}
& L^{\theta }(0,\infty ;\dot{B}_{p,q}^{s})\hookrightarrow \tilde{L}^{\theta
}(0,\infty ;\dot{B}_{p,q}^{s}),\text{ if }\theta \leq q, \\
& \tilde{L}^{\theta }(0,\infty ;\dot{B}_{p,q}^{s})\hookrightarrow L^{\theta
}(0,\infty ;\dot{B}_{p,q}^{s}),\text{ if }\theta \geq q.
\end{split}
\label{embedding}
\end{equation}
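These embeddings follow from Minkowski's integral inequality applied to $F_{j}(t)=2^{js}\Vert \Delta _{j}f(t)\Vert _{L^{p}}$: for $\theta \leq q$,
\begin{equation*}
\left\Vert \{\Vert F_{j}\Vert _{L^{\theta }(0,\infty )}\}_{j\in \mathbb{Z}}\right\Vert _{l^{q}(\mathbb{Z})}\leq \left\Vert \left\Vert \{F_{j}(t)\}_{j\in \mathbb{Z}}\right\Vert _{l^{q}(\mathbb{Z})}\right\Vert _{L_{t}^{\theta }(0,\infty )},
\end{equation*}
and the inequality reverses when $\theta \geq q$.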
\section{Estimates}
Firstly, we recall some estimates for the heat semigroup $e^{t\Delta }$ in
Besov spaces \cite{KozonoOgawaTanuichi2003} and the dispersive estimates for
$T_{\Omega }(t)$ obtained in \cite{IwabuchiTakada2013D}.
\begin{lemma}[see \protect\cite{KozonoOgawaTanuichi2003}]
\label{heatbesov} Let $-\infty <s_{0}\leq s_{1}<\infty $, $1\leq p,q\leq
\infty $ and $f\in \dot{B}_{p,q}^{s_{0}}(\mathbb{R}^{3})$. Then, there
exists a positive constant $C=C(s_{0},s_{1})$ such that
\begin{equation*}
\Vert e^{t\Delta }f\Vert _{\dot{B}_{p,q}^{s_{1}}}\leq Ct^{-\frac{3}{2}(s_{1}-s_{0})}\Vert f\Vert _{\dot{B}_{p,q}^{s_{0}}}\text{, for all }t>0.
\end{equation*}
\end{lemma}
Before stating the dispersive estimates of \cite{IwabuchiTakada2013D}, we
need to define the operators
\begin{equation}
\mathcal{G}_{\pm }(\tau )[f]=\left[ e^{\pm i\tau \frac{\xi _{3}}{|\xi |}}\widehat{f}\right]^{\vee} ,\text{ for }\tau \in \mathbb{R}\text{,}
\label{definitionoperatorG}
\end{equation}
and the matrix $\mathcal{R}$ of singular integral operators
\begin{equation}
\mathcal{R}=\left(
\begin{array}{ccc}
0 & R_{3} & -R_{2} \\
-R_{3} & 0 & R_{1} \\
R_{2} & -R_{1} & 0
\end{array}
\right) . \label{matrix-1}
\end{equation}
Using (\ref{definitionoperatorG}) and (\ref{matrix-1}), $T_{\Omega }(t)$ can
be expressed as
\begin{equation}
T_{\Omega }(t)f=\frac{1}{2}\mathcal{G}_{+}(\Omega t)[e^{t\Delta }(I+\mathcal{R})f]+\frac{1}{2}\mathcal{G}_{-}(\Omega t)[e^{t\Delta }(I-\mathcal{R})f]
\label{semigroup-formula-1}
\end{equation}
for $t\geq 0$ and $\Omega \in \mathbb{R}$.
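With the usual convention $\widehat{R_{j}f}(\xi )=-i\frac{\xi _{j}}{|\xi |}\widehat{f}(\xi )$ for the Riesz transforms, the operator matrix $\mathcal{R}$ has symbol $-i\mathcal{R}(\xi )$, and the identity above can be checked on the Fourier side:
\begin{equation*}
\frac{1}{2}e^{i\Omega t\frac{\xi _{3}}{|\xi |}}\left( I-i\mathcal{R}(\xi )\right) +\frac{1}{2}e^{-i\Omega t\frac{\xi _{3}}{|\xi |}}\left( I+i\mathcal{R}(\xi )\right) =\cos \left( \Omega \frac{\xi _{3}}{|\xi |}t\right) I+\sin \left( \Omega \frac{\xi _{3}}{|\xi |}t\right) \mathcal{R}(\xi ).
\end{equation*}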
Notice that the operators $\mathcal{G}_{\pm }(t\Omega )$ correspond to the
oscillating parts of $T_{\Omega }(t)$.
\begin{lemma}[see \protect\cite{IwabuchiTakada2013D}]
\label{operatorG} Let $s,t\in \mathbb{R}$, $2\leq p\leq \infty $ and $f\in
\dot{B}_{p^{\prime },q}^{s+3\left( 1-\frac{2}{p}\right) }(\mathbb{R}^{3})$
with $\frac{1}{p}+\frac{1}{p^{\prime }}=1$. Then, there exists a constant
$C=C(p)>0$ such that
\begin{equation*}
\Vert \mathcal{G}_{\pm }(t)[f]\Vert _{\dot{B}_{p,q}^{s}}\leq C\left( \frac{\log {(e+|t|)}}{1+|t|}\right) ^{\frac{1}{2}\left( 1-\frac{2}{p}\right) }\Vert f\Vert _{\dot{B}_{p^{\prime },q}^{s+3\left( 1-\frac{2}{p}\right) }}.
\end{equation*}
\bigskip
\end{lemma}
In what follows, we establish our estimates in Besov spaces for $T_{\Omega
}(t)$ and the Duhamel term $\int_{0}^{t}T_{\Omega }(t-\tau )\mathbb{P}\nabla
f(\tau )\ d\tau $. We start with three lemmas for $T_{\Omega }(t).$
\begin{lemma}
\label{linearestimative1} Assume that $s,\Omega \in \mathbb{R}$, $t>0,$
$1<r\leq p^{\prime }\leq 2\leq p<\infty $ and $1\leq q\leq \infty $, and let
$k$ be a multi-index. Then, there exists a constant $C>0$ (independent of
$\Omega $ and $t$) such that
\begin{equation*}
\Vert \nabla _{x}^{k}T_{\Omega }(t)f\Vert _{\dot{B}_{p,q}^{s}}\leq C\left(
\frac{\log {(e+|t\Omega |)}}{1+|t\Omega |}\right) ^{\frac{1}{2}\left( 1-\frac{2}{p}\right) }t^{-\frac{\left\vert k\right\vert }{2}-\frac{3}{2}\left(
\frac{1}{r}-\frac{1}{p}\right) }\Vert f\Vert _{\dot{B}_{r,q}^{s}},
\end{equation*}
for all $f\in \dot{B}_{r,q}^{s}(\mathbb{R}^{3})$.
\end{lemma}
\textbf{Proof. }Using the representation (\ref{semigroup-formula-1}), Lemma
\ref{operatorG}, the embedding $\dot{B}_{r,q}^{s+3\left( \frac{1}{r}-\frac{1}{p}\right) }(\mathbb{R}^{3})\hookrightarrow \dot{B}_{p^{\prime },q}^{s+3\left( 1-\frac{2}{p}\right) }(\mathbb{R}^{3})$ and Lemma \ref{heatbesov}, we obtain
\begin{equation*}
\begin{split}
\Vert \nabla _{x}^{k}T_{\Omega }(t)f\Vert _{\dot{B}_{p,q}^{s}}& \leq C\Vert \mathcal{G}_{\pm }(t\Omega )[\nabla _{x}^{k}e^{t\Delta }f]\Vert _{\dot{B}_{p,q}^{s}} \\
& \leq C\left( \frac{\log {(e+|t\Omega |)}}{1+|t\Omega |}\right) ^{\frac{1}{2}\left( 1-\frac{2}{p}\right) }\Vert \nabla _{x}^{k}e^{t\Delta }f\Vert _{\dot{B}_{p^{\prime },q}^{s+3\left( 1-\frac{2}{p}\right) }} \\
& \leq C\left( \frac{\log {(e+|t\Omega |)}}{1+|t\Omega |}\right) ^{\frac{1}{2}\left( 1-\frac{2}{p}\right) }\Vert \nabla _{x}^{k}e^{t\Delta }f\Vert _{\dot{B}_{r,q}^{s+3\left( \frac{1}{r}-\frac{1}{p}\right) }} \\
& \leq C\left( \frac{\log {(e+|t\Omega |)}}{1+|t\Omega |}\right) ^{\frac{1}{2}\left( 1-\frac{2}{p}\right) }t^{-\frac{\left\vert k\right\vert }{2}-\frac{3}{2}\left( \frac{1}{r}-\frac{1}{p}\right) }\Vert f\Vert _{\dot{B}_{r,q}^{s}}.
\end{split}
\end{equation*}
\fin
\begin{lemma}
Let $1\leq q<\infty $. Consider $s,p,\theta \in \mathbb{R}$ satisfying
\begin{equation*}
0\leq s<\frac{3}{p},\ \ 2<p<6\ \ \text{ and }\ \ \frac{3}{4}-\frac{3}{2p}\leq \frac{1}{\theta }<\min \left\{ \frac{1}{2},1-\frac{2}{p},\frac{1}{q}\right\} .
\end{equation*}
Then, there exists $C>0$ (independent of $t\geq 0$ and $\Omega \in \mathbb{R}
$) such that
\begin{equation}
\Vert T_{\Omega }(t)f\Vert _{L^{\theta }(0,\infty ;\dot{B}_{p,q}^{s})}\leq
C|\Omega |^{-\frac{1}{\theta }+\frac{3}{4}-\frac{3}{2p}}\Vert f\Vert _{\dot{B}_{2,q}^{s}}, \label{linearestimative2}
\end{equation}
for all $f\in \dot{B}_{2,q}^{s}(\mathbb{R}^{3})$.
\end{lemma}
\textbf{Proof. } By duality and estimate (\ref{duality-1}), notice that (\ref{linearestimative2}) holds true provided that
\begin{equation}
\begin{split}
I& :=\left\vert \int_{0}^{\infty }\sum_{|j-k|\leq 1}\int_{\mathbb{R}^{3}}\Delta _{j}\mathcal{G}_{\pm }(\Omega t)[e^{t\Delta }f](x)\overline{\Delta _{k}\phi (x,t)}\ dxdt\right\vert \\
& \leq C|\Omega |^{-\frac{1}{\theta }+\frac{3}{4}-\frac{3}{2p}}\Vert f\Vert _{\dot{B}_{2,q}^{s}}\Vert \phi \Vert _{L^{\theta ^{\prime }}(0,\infty ;\dot{B}_{p^{\prime },q^{\prime }}^{-s})},
\end{split}
\label{est-aux-2}
\end{equation}
for all $\phi \in C_{0}^{\infty }(\mathbb{R}^{3}\times (0,\infty ))$ with
$0\notin \mbox{supp}(\widehat{\phi }(\xi ,t))$ for each $t>0,$ where
$1/p+1/p^{\prime }=1$, $1/\theta +1/\theta ^{\prime }=1$ and $1/q+1/q^{\prime }=1.$
For (\ref{est-aux-2}), we use the Parseval formula, the H\"{o}lder inequality, the
inclusion $\dot{B}_{p,2}^{0}(\mathbb{R}^{3})\hookrightarrow L^{p}(\mathbb{R}^{3})$ and Lemma \ref{linearestimative1} in order to estimate
\begin{equation}
\begin{split}
I& \leq \sum_{|j-k|\leq 1}\left\vert \int_{0}^{\infty }\int_{\mathbb{R}^{3}}\Delta _{j}\mathcal{G}_{\pm }(\Omega t)[e^{t\Delta }f](x)\overline{\Delta _{k}\phi (x,t)}\ dxdt\right\vert \\
& =\sum_{|j-k|\leq 1}\left\vert \int_{0}^{\infty }\int_{\mathbb{R}^{3}}\Delta _{j}f(x)\overline{\Delta _{k}\mathcal{G}_{\mp }(\Omega t)[e^{t\Delta }\phi (t)](x)}\ dxdt\right\vert \\
& =\sum_{|j-k|\leq 1}\left\vert \int_{\mathbb{R}^{3}}\Delta
_{j}f(x)\int_{0}^{\infty }\overline{\Delta _{k}\mathcal{G}_{\mp }(\Omega
t)[e^{t\Delta }\phi (t)](x)}\ dtdx\right\vert \\
& \leq C\sum_{|j-k|\leq 1}\Vert \Delta _{j}f\Vert _{L^{2}}\left\Vert
\int_{0}^{\infty }\Delta _{k}\mathcal{G}_{\mp }(\Omega t)[e^{t\Delta }\phi
(t)]\ dt\right\Vert _{L^{2}} \\
& \leq C2^{|s|}\sum_{|j-k|\leq 1}2^{js}\Vert \Delta _{j}f\Vert
_{L^{2}}2^{-ks}\left\Vert \int_{0}^{\infty }\Delta _{k}\mathcal{G}_{\mp
}(\Omega t)[e^{t\Delta }\phi (t)]\ dt\right\Vert _{L^{2}} \\
& \leq C2^{|s|}\Vert f\Vert _{\dot{B}_{2,q}^{s}}\left( \sum_{k\in \mathbb{Z
}2^{-ksq^{\prime }}\left\Vert \int_{0}^{\infty }\Delta _{k}\mathcal{G}_{\mp
}(\Omega t)[e^{t\Delta }\phi (t)]\ dt\right\Vert _{L^{2}}^{q^{\prime
}}\right) ^{\frac{1}{q^{\prime }}}.
\end{split}
\label{12}
\end{equation}
Now, we are going to prove that
\begin{equation*}
I_{k}^{2}\leq C|\Omega |^{-\frac{2}{\theta }+\frac{3}{2}-\frac{3}{p}}\Vert
\Delta _{k}\phi \Vert _{L^{\theta ^{\prime }}(0,\infty ;L^{p^{\prime }})}^{2},
\end{equation*}
where
\begin{equation*}
I_{k}:=\left\Vert \int_{0}^{\infty }\Delta _{k}\mathcal{G}_{\mp }(\Omega
t)[e^{t\Delta }\phi (t)]\ dt\right\Vert _{L^{2}}.
\end{equation*}
In fact, using the Parseval formula, H\"{o}lder inequality, the embedding
$\dot{B}_{p,2}^{0}(\mathbb{R}^{3})\hookrightarrow L^{p}(\mathbb{R}^{3})$ and
Lemma \ref{linearestimative1}, we have
\begin{equation*}
\begin{split}
I_{k}^{2}& =\left\langle \int_{0}^{\infty }\Delta _{k}\mathcal{G}_{\mp }(\Omega t)[e^{t\Delta }\phi (t)]\ dt,\int_{0}^{\infty }\Delta _{k}\mathcal{G}_{\mp }(\Omega \tau )[e^{\tau \Delta }\phi (\tau )]\ d\tau \right\rangle _{L^{2}} \\
& =\int_{0}^{\infty }\int_{0}^{\infty }\int_{\mathbb{R}^{3}}\Delta _{k}\mathcal{G}_{\mp }(\Omega t)[e^{t\Delta }\phi (t)](x)\overline{\Delta _{k}\mathcal{G}_{\mp }(\Omega \tau )[e^{\tau \Delta }\phi (\tau )](x)}\ dxd\tau
dt \\
& \leq \int_{0}^{\infty }\int_{0}^{\infty }\Vert \Delta _{k}\phi (t)\Vert
_{L^{p^{\prime }}}\Vert \Delta _{k}\mathcal{G}_{\pm }(\Omega (t-\tau
))[e^{(t+\tau )\Delta }\phi (\tau )]\Vert _{L^{p}}\ d\tau dt \\
& \leq C\int_{0}^{\infty }\int_{0}^{\infty }\Vert \Delta _{k}\phi (t)\Vert
_{L^{p^{\prime }}}\Vert \Delta _{k}\mathcal{G}_{\pm }(\Omega (t-\tau
))[e^{(t+\tau )\Delta }\phi (\tau )]\Vert _{\dot{B}_{p,2}^{0}}\ d\tau dt \\
& \leq C\int_{0}^{\infty }\int_{0}^{\infty }\Vert \Delta _{k}\phi (t)\Vert
_{L^{p^{\prime }}}\left( \frac{\log {(e+|\Omega ||t-\tau |)}}{1+|\Omega
||t-\tau |}\right) ^{\frac{1}{2}\left( 1-\frac{2}{p}\right) }\Vert
e^{(t+\tau )\Delta }\Delta _{k}\phi (\tau )\Vert _{\dot{B}_{p^{\prime
},2}^{3\left( 1-\frac{2}{p}\right) }}\ d\tau dt.
\end{split}
\end{equation*}
By Lemma \ref{heatbesov} and the embedding $L^{p^{\prime }}(\mathbb{R}^{3})\hookrightarrow \dot{B}_{p^{\prime },2}^{0}(\mathbb{R}^{3})$ for
$p^{\prime }<2$, it follows that
\begin{equation*}
\begin{split}
\Vert e^{(t+\tau )\Delta }\Delta _{k}\phi (\tau )\Vert _{\dot{B}_{p^{\prime
},2}^{3\left( 1-\frac{2}{p}\right) }}& \leq C(t+\tau )^{-\frac{3}{2}\left( 1-\frac{2}{p}\right) }\Vert \Delta _{k}\phi (\tau )\Vert _{\dot{B}_{p^{\prime
},2}^{0}} \\
& \leq C|t-\tau |^{-\frac{3}{2}\left( 1-\frac{2}{p}\right) }\Vert \Delta
_{k}\phi (\tau )\Vert _{L^{p^{\prime }}}.
\end{split}
\end{equation*}
Thus,
\begin{equation}
\begin{split}
I_{k}^{2}& \leq C\int_{0}^{\infty }\int_{0}^{\infty }\Vert \Delta _{k}\phi
(t)\Vert _{L^{p^{\prime }}}\left( \frac{\log {(e+|\Omega ||t-\tau |)}}{1+|\Omega ||t-\tau |}\right) ^{\frac{1}{2}\left( 1-\frac{2}{p}\right)
}|t-\tau |^{-\frac{3}{2}\left( 1-\frac{2}{p}\right) }\Vert \Delta _{k}\phi
(\tau )\Vert _{L^{p^{\prime }}}\ d\tau dt \\
& \leq C\Vert \Delta _{k}\phi \Vert _{L^{\theta ^{\prime }}(0,\infty
;L^{p^{\prime }})}\left\Vert \int_{0}^{\infty }h(\cdot -\tau )\Vert \Delta
_{k}\phi (\tau )\Vert _{L^{p^{\prime }}}d\tau \right\Vert _{L^{\theta
}(0,\infty )},
\end{split}
\label{11}
\end{equation}
where
\begin{equation*}
h(t)=\left( \frac{\log {(e+|\Omega ||t|)}}{1+|\Omega ||t|}\right) ^{\frac{1}{2}\left( 1-\frac{2}{p}\right) }|t|^{-\frac{3}{2}\left( 1-\frac{2}{p}\right) }.
\end{equation*}
We consider the cases $\frac{1}{\theta }>\frac{3}{4}-\frac{3}{2p}$ and
$\frac{1}{\theta }=\frac{3}{4}-\frac{3}{2p}$. In the first case, notice that
\begin{equation*}
\Vert h\Vert _{L^{\frac{\theta }{2}}}=C|\Omega |^{-\frac{2}{\theta }+\frac{3}{2}-\frac{3}{p}}.
\end{equation*}
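Indeed, the substitution $t=s/|\Omega |$ yields
\begin{equation*}
\Vert h\Vert _{L^{\frac{\theta }{2}}}^{\frac{\theta }{2}}=|\Omega |^{\frac{3\theta }{4}\left( 1-\frac{2}{p}\right) -1}\int_{0}^{\infty }\left( \frac{\log (e+s)}{1+s}\right) ^{\frac{\theta }{4}\left( 1-\frac{2}{p}\right) }s^{-\frac{3\theta }{4}\left( 1-\frac{2}{p}\right) }\ ds,
\end{equation*}
where the last integral is finite because $\frac{1}{\theta }>\frac{3}{4}-\frac{3}{2p}$ (integrability at the origin) and $\frac{1}{\theta }<1-\frac{2}{p}$ (integrability at infinity).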
Therefore, using the Young inequality in (\ref{11}) and the above equality, we obtain
\begin{equation*}
I_{k}^{2}\leq C|\Omega |^{-\frac{2}{\theta }+\frac{3}{2}-\frac{3}{p}}\Vert
\Delta _{k}\phi \Vert _{L^{\theta ^{\prime }}(0,\infty ;L^{p^{\prime
}})}^{2}.
\end{equation*}
Now, multiplying by $2^{-ks}$, applying the $l^{q^{\prime }}(\mathbb{Z})$-norm and using (\ref{embedding}), we arrive at
\begin{equation}
\begin{split}
\left( \sum_{k\in \mathbb{Z}}2^{-ksq^{\prime }}I_{k}^{q^{\prime }}\right) ^{\frac{1}{q^{\prime }}}& \leq C|\Omega |^{-\frac{1}{\theta }+\frac{3}{4}-
\frac{3}{2p}}\left( \sum_{k\in \mathbb{Z}}2^{-ksq^{\prime }}\Vert \Delta
_{k}\phi \Vert _{L^{\theta ^{\prime }}(0,\infty ;L^{p^{\prime
}})}^{q^{\prime }}\right) ^{\frac{1}{q^{\prime }}} \\
& \leq C|\Omega |^{-\frac{1}{\theta }+\frac{3}{4}-\frac{3}{2p}}\Vert \phi
\Vert _{L^{\theta ^{\prime }}(0,\infty ;\dot{B}_{p^{\prime },q^{\prime
}}^{-s})}.
\end{split}
\label{13}
\end{equation}
It follows from (\ref{12}) and (\ref{13}) that
\begin{equation}
I\leq C|\Omega |^{-\frac{1}{\theta }+\frac{3}{4}-\frac{3}{2p}}\Vert f\Vert _{\dot{B}_{2,q}^{s}}\Vert \phi \Vert _{L^{\theta ^{\prime }}(0,\infty ;\dot{B}_{p^{\prime },q^{\prime }}^{-s})}, \label{15a}
\end{equation}
with $C>0$ independent of $\phi $ and $f$.
In the second case $\frac{1}{\theta }=\frac{3}{4}-\frac{3}{2p}$, we use the
fact $h(t)\leq |t|^{-\frac{3}{2}\left( 1-\frac{2}{p}\right) }$ and
the Hardy-Littlewood-Sobolev inequality in (\ref{11}) to obtain
\begin{equation}
I_{k}^{2}\leq C\Vert \Delta _{k}\phi \Vert _{L^{\theta ^{\prime }}(0,\infty
;L^{p^{\prime }})}^{2}. \label{14}
\end{equation}
Thus, using (\ref{14}) and proceeding as in (\ref{13}), we obtain a constant
$C>0$ (independent of $\phi $ and $f$) such that
\begin{equation}
I\leq C\Vert f\Vert _{\dot{B}_{2,q}^{s}}\Vert \phi \Vert _{L^{\theta
^{\prime }}(0,\infty ;\dot{B}_{p^{\prime },q^{\prime }}^{-s})}. \label{15b}
\end{equation}
Estimates (\ref{15a}) and (\ref{15b}) give the desired result. \fin
\begin{lemma}
\label{critical_linear_semigroup} Assume that $1\leq q<4 $ and $f\in \dot{B}_{2,q}^{\frac{1}{2}}(\mathbb{R}^{3})$. Then
\begin{equation}
\lim_{|\Omega |\rightarrow \infty }\Vert T_{\Omega }(\cdot )f\Vert
_{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})}=0. \label{3.12}
\end{equation}
\end{lemma}
\textbf{Proof. } Since $\overline{\mathcal{S}_{0}(\mathbb{R}^{3})}^{\left\Vert \cdot \right\Vert _{\dot{B}_{2,q}^{1/2}}}=\dot{B}_{2,q}^{\frac{1}{2}}$ for $q\neq \infty $ (see Section 2), there exists $(w_{k})_{k\in \mathbb{N}}$ in $\mathcal{S}_{0}(\mathbb{R}^{3})$ such that
$w_{k}\rightarrow f$ in $\dot{B}_{2,q}^{\frac{1}{2}}(\mathbb{R}^{3})$ as
$k\rightarrow \infty $. Next, using Lemma \ref{linearestimative2}, we obtain
\begin{equation}
\begin{split}
\limsup_{\left\vert \Omega \right\vert \rightarrow \infty }\Vert T_{\Omega
}(\cdot )f\Vert _{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})}& \leq
\limsup_{\left\vert \Omega \right\vert \rightarrow \infty }\Vert T_{\Omega
}(\cdot )(f-w_{k})\Vert _{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})}+\limsup_{\left\vert \Omega \right\vert \rightarrow \infty }\Vert
T_{\Omega }(\cdot )w_{k}\Vert _{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})}
\\
& \leq C\Vert w_{k}-f\Vert _{\dot{B}_{2,q}^{\frac{1}{2}}}+\limsup_{\left\vert \Omega \right\vert \rightarrow \infty }\Vert T_{\Omega }(\cdot
)w_{k}\Vert _{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})}.
\end{split}
\label{3.13}
\end{equation}
Choosing $p\in (\frac{8}{3},3),$ we have the conditions
\begin{equation*}
\frac{3}{4}-\frac{3}{2p}<\frac{1}{4}<\min \left\{ 1-\frac{2}{p},\frac{1}{q}\right\} \ \ \text{and}\ \ \frac{1}{2}-\frac{3}{2p}<0.
\end{equation*}
Then, we can use $\dot{B}_{p,q}^{-\frac{1}{2}+\frac{3}{p}}(\mathbb{R}^{3})\hookrightarrow \dot{B}_{3,q}^{\frac{1}{2}}(\mathbb{R}^{3})$ and Lemma
\ref{linearestimative2} to estimate
\begin{equation}
\begin{split}
\limsup_{\left\vert \Omega \right\vert \rightarrow \infty }\Vert T_{\Omega }(\cdot )w_{k}\Vert _{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})}& \leq C\limsup_{\left\vert \Omega \right\vert \rightarrow \infty }\Vert T_{\Omega }(\cdot )w_{k}\Vert _{L^{4}(0,\infty ;\dot{B}_{p,q}^{-\frac{1}{2}+\frac{3}{p}})} \\
& \leq C|\Omega |^{\frac{1}{2}-\frac{3}{2p}}\Vert w_{k}\Vert _{\dot{B}_{2,q}^{-\frac{1}{2}+\frac{3}{p}}}\rightarrow 0,\ \ \text{as}\ \ |\Omega |\rightarrow \infty .
\end{split}
\label{3.15}
\end{equation}
By (\ref{3.13}), (\ref{3.15}) and $\left\Vert w_{k}-f\right\Vert _{\dot{B}_{2,q}^{\frac{1}{2}}}\rightarrow 0$, we obtain (\ref{3.12}). \fin\bigskip
The next two lemmas are concerned with the Duhamel term $\int_{0}^{t}T_{\Omega }(t-\tau )\mathbb{P}\nabla f(\tau )\ d\tau .$
\begin{lemma}
\label{nonlinear1} Let $s\in \mathbb{R}$ and $\Omega \in \mathbb{R}\backslash \{0\}$ and let $p,r,q,\theta $ be real numbers satisfying
\begin{gather*}
2<p<3,\ \ \ \frac{6}{5}<r<2,\ \ \ 1\leq q\leq \infty ,\ \ \ 1-\frac{1}{p}\leq \frac{1}{r}<\frac{1}{3}+\frac{1}{p}, \\
\max \left\{ 0,\frac{1}{2}-\frac{3}{2}\left( \frac{1}{r}-\frac{1}{p}\right) -\frac{1}{2}\left( 1-\frac{2}{p}\right) \right\} <\frac{1}{\theta }\leq \frac{1}{2}-\frac{3}{2}\left( \frac{1}{r}-\frac{1}{p}\right) .
\end{gather*}
Then, there exists a universal constant $C>0$ such that
\begin{equation}
\left\Vert \int_{0}^{t}T_{\Omega }(t-\tau )\mathbb{P}\nabla f(\tau )\ d\tau \right\Vert _{L^{\theta }(0,\infty ;\dot{B}_{p,q}^{s})}\leq C|\Omega |^{-\frac{1}{2}+\frac{3}{2}\left( \frac{1}{r}-\frac{1}{p}\right) +\frac{1}{\theta }}\Vert f\Vert _{L^{\frac{\theta }{2}}(0,\infty ;\dot{B}_{r,q}^{s})}.
\label{aux-lemma-nonlinear-1}
\end{equation}
\end{lemma}
\textbf{Proof. } Using Lemma \ref{linearestimative1} it follows that
\begin{equation}
\begin{split}
\Bigl\Vert \int_{0}^{t}T_{\Omega }(t-\tau )& \mathbb{P}\nabla f(\tau )\ d\tau \Bigr\Vert _{L^{\theta }(0,\infty ;\dot{B}_{p,q}^{s})}\leq C\left\Vert \int_{0}^{t}\left\Vert T_{\Omega }(t-\tau )\mathbb{P}\nabla f(\tau )\right\Vert _{\dot{B}_{p,q}^{s}}\ d\tau \right\Vert _{L^{\theta }(0,\infty )} \\
& \leq C\left\Vert \int_{0}^{t}(t-\tau )^{-\frac{1}{2}-\frac{3}{2}\left( \frac{1}{r}-\frac{1}{p}\right) }\left( \frac{\log {(e+|\Omega ||t-\tau |)}}{1+|\Omega ||t-\tau |}\right) ^{\frac{1}{2}\left( 1-\frac{2}{p}\right) }\left\Vert f(\tau )\right\Vert _{\dot{B}_{r,q}^{s}}\ d\tau \right\Vert _{L^{\theta }(0,\infty )}.
\end{split}
\label{16a}
\end{equation}
We are going to prove (\ref{aux-lemma-nonlinear-1}) in two cases. First we consider the case $\frac{1}{\theta }=\frac{1}{2}-\frac{3}{2}\left( \frac{1}{r}-\frac{1}{p}\right) $. Here, we note that
\begin{equation*}
\left( \frac{\log {(e+|\Omega ||t-\tau |)}}{1+|\Omega ||t-\tau |}\right) ^{\frac{1}{2}\left( 1-\frac{2}{p}\right) }\leq 1
\end{equation*}
and employ the Hardy-Littlewood-Sobolev inequality to estimate
\begin{equation}
\left\Vert \int_{0}^{t}(t-\tau )^{-\frac{1}{2}-\frac{3}{2}\left( \frac{1}{r}-\frac{1}{p}\right) }\left( \frac{\log {(e+|\Omega ||t-\tau |)}}{1+|\Omega ||t-\tau |}\right) ^{\frac{1}{2}\left( 1-\frac{2}{p}\right) }\left\Vert f(\tau )\right\Vert _{\dot{B}_{r,q}^{s}}\ d\tau \right\Vert _{L^{\theta }(0,\infty )}\leq C\Vert f\Vert _{L^{\frac{\theta }{2}}(0,\infty ;\dot{B}_{r,q}^{s})}. \label{16}
\end{equation}
Consider now the case $\frac{1}{\theta }<\frac{1}{2}-\frac{3}{2}\left( \frac{1}{r}-\frac{1}{p}\right) .$ Selecting $\ell $ such that $\frac{1}{\theta }=\frac{1}{\ell }+\frac{2}{\theta }-1$, a direct computation gives
\begin{equation}
\left\Vert (t-\tau )^{-\frac{1}{2}-\frac{3}{2}\left( \frac{1}{r}-\frac{1}{p}\right) }\left( \frac{\log {(e+|\Omega ||t-\tau |)}}{1+|\Omega ||t-\tau |}\right) ^{\frac{1}{2}\left( 1-\frac{2}{p}\right) }\right\Vert _{L^{\ell }(0,\infty )}=C|\Omega |^{\frac{1}{\theta }-\frac{1}{2}+\frac{3}{2}\left( \frac{1}{r}-\frac{1}{p}\right) }. \label{aux-est-17}
\end{equation}
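Indeed, writing $a=\frac{1}{2}+\frac{3}{2}\left( \frac{1}{r}-\frac{1}{p}\right) $ and $b=\frac{1}{2}\left( 1-\frac{2}{p}\right) $ as shorthand, the substitution $\sigma =|\Omega |t$ yields
\begin{equation*}
\int_{0}^{\infty }t^{-a\ell }\left( \frac{\log (e+|\Omega |t)}{1+|\Omega |t}\right) ^{b\ell }dt=|\Omega |^{a\ell -1}\int_{0}^{\infty }\sigma ^{-a\ell }\left( \frac{\log (e+\sigma )}{1+\sigma }\right) ^{b\ell }d\sigma ,
\end{equation*}
where the $\sigma $-integral is finite: it converges at the origin because $a\ell <1$ (this is $\frac{1}{\theta }<\frac{1}{2}-\frac{3}{2}(\frac{1}{r}-\frac{1}{p})$ combined with $\frac{1}{\ell }=1-\frac{1}{\theta }$) and at infinity because $(a+b)\ell >1$ (the strict lower bound on $\frac{1}{\theta }$ in the hypotheses). Taking the $\ell $-th root gives the power $a-\frac{1}{\ell }=\frac{1}{\theta }-\frac{1}{2}+\frac{3}{2}(\frac{1}{r}-\frac{1}{p})$, as claimed.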
By Young's inequality and (\ref{aux-est-17}), we have that
\begin{equation}
\begin{split}
\Biggl\|& \int_{0}^{t}(t-\tau )^{-\frac{1}{2}-\frac{3}{2}\left( \frac{1}{r}-\frac{1}{p}\right) }\left( \frac{\log {(e+|\Omega ||t-\tau |)}}{1+|\Omega ||t-\tau |}\right) ^{\frac{1}{2}\left( 1-\frac{2}{p}\right) }\left\Vert f(\tau )\right\Vert _{\dot{B}_{r,q}^{s}}\ d\tau \Biggr\|_{L^{\theta }(0,\infty )} \\
& \leq \left\Vert t^{-\frac{1}{2}-\frac{3}{2}\left( \frac{1}{r}-\frac{1}{p}\right) }\left( \frac{\log {(e+|\Omega |t)}}{1+|\Omega |t}\right) ^{\frac{1}{2}\left( 1-\frac{2}{p}\right) }\right\Vert _{L^{\ell }(0,\infty )}\Vert f\Vert _{L^{\frac{\theta }{2}}(0,\infty ;\dot{B}_{r,q}^{s})} \\
& =C|\Omega |^{\frac{1}{\theta }-\frac{1}{2}+\frac{3}{2}\left( \frac{1}{r}-\frac{1}{p}\right) }\Vert f\Vert _{L^{\frac{\theta }{2}}(0,\infty ;\dot{B}_{r,q}^{s})}.
\end{split}
\label{17}
\end{equation}
The proof is completed by substituting (\ref{16}) and (\ref{17}) into (\ref{16a}). \fin
\begin{lemma}
\label{nonlinearestimativecritical} Let $s,\Omega \in \mathbb{R}$ and $2\leq
q\leq \infty $. Then, there exists a universal constant $C>0$ such that
\begin{equation}
\left\Vert \int_{0}^{t}T_{\Omega }(t-\tau )\nabla f(\tau )\ d\tau \right\Vert _{L^{\infty }(0,\infty ;\dot{B}_{2,q}^{s})\cap L^{4}(0,\infty ;\dot{B}_{3,q}^{s})}\leq C\Vert f\Vert _{L^{2}(0,\infty ;\dot{B}_{2,q}^{s})}.
\label{aux-lemma-10}
\end{equation}
\end{lemma}
\textbf{Proof. } We denote $X=X_{1}\cap X_{2}$ where $X_{1}=L^{\infty }(0,\infty ;\dot{B}_{2,q}^{s})$ and $X_{2}=L^{4}(0,\infty ;\dot{B}_{3,q}^{s})$. We start with estimates for the $X_{1}$-norm. We have that
\begin{equation*}
\begin{split}
\left\Vert \Delta _{j}\int_{0}^{t}T_{\Omega }(t-\tau )\nabla f(\tau )\ d\tau
\right\Vert _{L^{2}}& =\left\Vert \int_{0}^{t}T_{\Omega }(t-\tau )\nabla
\Delta _{j}f(\tau )\ d\tau \right\Vert _{L^{2}} \\
& \leq C\left\Vert \int_{0}^{t}e^{-(t-\tau )|\xi |^{2}}|\xi ||\widehat{\phi
_{j}(\xi )\widehat{f}(\tau )|\ d\tau \right\Vert _{L^{2}} \\
& \leq C\left\Vert \Vert e^{-(t-\tau )|\xi |^{2}}\Vert _{L_{\tau
}^{2}(0,t)}|\xi |\Vert \widehat{\phi }_{j}(\xi )\widehat{f}(\tau )\Vert
_{L_{\tau }^{2}(0,t)}\right\Vert _{L^{2}} \\
& \leq C\Vert \Delta _{j}f\Vert _{L^{2}(0,\infty ;L^{2})}.
\end{split}
\end{equation*}
Multiplying by $2^{sj}$, applying the $l^{q}(\mathbb{Z})$-norm and using inequality (\ref{embedding}), we arrive at
\begin{equation*}
\begin{split}
\left\Vert \int_{0}^{t}T_{\Omega }(t-\tau )\nabla f(\tau )\ d\tau \right\Vert _{\dot{B}_{2,q}^{s}}& \leq C\left( \sum_{j\in \mathbb{Z}}2^{sjq}\Vert \Delta _{j}f\Vert _{L^{2}(0,\infty ;L^{2})}^{q}\right) ^{\frac{1}{q}} \\
& \leq C\Vert f\Vert _{L^{2}(0,\infty ;\dot{B}_{2,q}^{s})}
\end{split}
\end{equation*}
and then
\begin{equation}
\left\Vert \int_{0}^{t}T_{\Omega }(t-\tau )\nabla f(\tau )\ d\tau
\right\Vert _{X_{1}}\leq C\Vert f\Vert _{L^{2}(0,\infty ;\dot{B}_{2,q}^{s})}.
\label{3.21}
\end{equation}
In order to estimate the $X_{2}$-norm, we use Lemma \ref{linearestimative1} and the Hardy-Littlewood-Sobolev inequality to obtain
\begin{equation}
\begin{split}
\left\Vert \int_{0}^{t}T_{\Omega }(t-\tau )\nabla f(\tau )\ d\tau
\right\Vert _{X_{2}}& \leq \left\Vert \int_{0}^{t}\Vert T_{\Omega }(t-\tau
)\nabla f(\tau )\Vert _{\dot{B}_{3,q}^{s}}\ d\tau \right\Vert
_{L^{4}(0,\infty )} \\
& \leq C\left\Vert \int_{0}^{t}(t-\tau )^{-\frac{1}{2}-\frac{3}{2}\left(
\frac{1}{2}-\frac{1}{3}\right) }\Vert f(\tau )\Vert _{\dot{B}_{2,q}^{s}}\
d\tau \right\Vert _{L^{4}(0,\infty )} \\
& \leq C\Vert f\Vert _{L^{2}(0,\infty ;\dot{B}_{2,q}^{s})}.
\end{split}
\label{3.22}
\end{equation}
Putting together (\ref{3.21}) and (\ref{3.22}), we arrive at (\ref{aux-lemma-10}). \fin
\section{Global existence}
In this section we state and prove our results about existence and uniqueness of global solutions to (\ref{NSC}). Basically, we have two cases: $1/2<s<3/4$ and $s=1/2$. We start with the former.
\begin{theorem}
\label{theorem1}
\begin{enumerate}
\item[$(i)$] For $1\leq q<\infty $, consider $s,p$ and $\theta $ satisfying
\begin{gather*}
\frac{1}{2}<s<\frac{3}{4},\ \ \ \frac{1}{3}+\frac{s}{9}<\frac{1}{p}<\frac{2}{3}-\frac{s}{3}, \\
\frac{s}{2}-\frac{1}{2p}<\frac{1}{\theta }<\frac{5}{8}-\frac{3}{2p}+\frac{s}{4},\ \ \ \frac{3}{4}-\frac{3}{2p}\leq \frac{1}{\theta }<\min \left\{ 1-\frac{2}{p},\frac{1}{q}\right\} .
\end{gather*}
Let $\Omega \in \mathbb{R}\setminus \{0\}$ and $u_{0}\in \dot{B}_{2,q}^{s}(\mathbb{R}^{3})$ with $\nabla \cdot u_{0}=0.$ There is a constant $C=C(s,p,\theta )>0$ such that if $\Vert u_{0}\Vert _{\dot{B}_{2,q}^{s}}\leq C|\Omega |^{\frac{s}{2}-\frac{1}{4}}$, then there exists a unique global solution $u\in C([0,\infty );\dot{B}_{2,q}^{s}(\mathbb{R}^{3}))$ to (\ref{NSC}).
\item[$(ii)$] For $q=\infty $, consider $s,p$ and $\theta $ satisfying
\begin{gather*}
\frac{1}{2}<s<\frac{3}{4},\ \ \ \frac{1}{3}+\frac{s}{9}<\frac{1}{p}<\frac{2}{3}-\frac{s}{3}, \\
\frac{s}{2}-\frac{1}{2p}<\frac{1}{\theta }<\frac{5}{8}-\frac{3}{2p}+\frac{s}{4}.
\end{gather*}
Let $\Omega _{0}>0$ and $u_{0}\in \mathcal{I}$ with $\nabla \cdot u_{0}=0$,
where
\begin{equation}
\mathcal{I}:=\left\{ f\in \mathcal{S}^{\prime }(\mathbb{R}^{3})\colon \Vert f\Vert _{\mathcal{I}}:=\sup_{|\Omega |\geq \Omega _{0}}|\Omega |^{\frac{1}{\theta }-\frac{3}{4}+\frac{3}{2p}}\Vert T_{\Omega }(t)f\Vert _{L^{\theta }(0,\infty ;\dot{B}_{p,\infty }^{s})}<\infty \right\} . \label{aux-space-77}
\end{equation}
There is a constant $C=C(s,p,\theta )>0$ such that if $\Vert u_{0}\Vert _{\mathcal{I}}\leq C|\Omega |^{\frac{s}{2}-\frac{1}{4}}$ for $|\Omega |\geq \Omega _{0}$, then the system (\ref{NSC}) has a unique global solution $u\in L^{\theta }(0,\infty ;\dot{B}_{p,\infty }^{s}(\mathbb{R}^{3}))$. Moreover, if in addition $u_{0}\in \dot{B}_{2,\infty }^{s}(\mathbb{R}^{3})$ then $u\in C_{\omega }([0,\infty );\dot{B}_{2,\infty }^{s}(\mathbb{R}^{3}))$, where $C_{\omega }$ stands for the class of time weakly continuous functions.
\end{enumerate}
\end{theorem}
\begin{remark}
Notice that the space $\mathcal{I}$ depends on the parameters $\Omega
_{0},\theta ,p$ and $s$, but for simplicity we have omitted them in the
notation.
\end{remark}
\bigskip
\textbf{Proof of Theorem \ref{theorem1}. }
Part $(i)$: By Lemma \ref{linearestimative1}, it follows that
\begin{equation}
\Vert T_{\Omega }(t)u_{0}\Vert _{L^{\theta }(0,\infty ;\dot{B}_{p,q}^{s})}\leq C_{0}|\Omega |^{-\frac{1}{\theta }+\frac{3}{4}-\frac{3}{2p}}\Vert u_{0}\Vert _{\dot{B}_{2,q}^{s}}. \label{aux-linear-11}
\end{equation}
Now, we define the operator $\Gamma $ and the set $Z$ by
\begin{equation}
\Gamma (u)(t)=T_{\Omega }(t)u_{0}-\mathfrak{B}(u,u)(t) \label{operatorB}
\end{equation}
and
\begin{equation*}
Z=\left\{ u\in L^{\theta }(0,\infty ;\dot{B}_{p,q}^{s}(\mathbb{R}^{3})):\Vert u\Vert _{L^{\theta }(0,\infty ;\dot{B}_{p,q}^{s})}\leq 2C_{0}|\Omega |^{-\frac{1}{\theta }+\frac{3}{4}-\frac{3}{2p}}\Vert u_{0}\Vert _{\dot{B}_{2,q}^{s}},\ \nabla \cdot u=0\right\} .
\end{equation*}
Taking $\frac{1}{r}=\frac{2}{p}-\frac{s}{3},$ we can employ Lemma \ref{nonlinear1} and (\ref{remark1}) to estimate $\Gamma (\cdot )$ as follows:
\begin{equation}
\begin{split}
\Vert \Gamma (u)-\Gamma (v)& \Vert _{L^{\theta }(0,\infty ;\dot{B}_{p,q}^{s})}=\left\Vert \int_{0}^{t}T_{\Omega }(t-\tau )\mathbb{P}\nabla \cdot (u\otimes (u-v)(\tau )+(u-v)\otimes v(\tau ))\ d\tau \right\Vert _{L^{\theta }(0,\infty ;\dot{B}_{p,q}^{s})} \\
& \leq C|\Omega |^{\frac{1}{\theta }-\frac{1}{2}+\frac{3}{2}\left( \frac{1}{r}-\frac{1}{p}\right) }\Vert u\otimes (u-v)+(u-v)\otimes v\Vert _{L^{\frac{\theta }{2}}(0,\infty ;\dot{B}_{r,q}^{s})} \\
& \leq C|\Omega |^{\frac{1}{\theta }-\frac{1}{2}+\frac{3}{2}\left( \frac{1}{r}-\frac{1}{p}\right) }\left( \Vert u\Vert _{L^{\theta }(0,\infty ;\dot{B}_{p,q}^{s})}+\Vert v\Vert _{L^{\theta }(0,\infty ;\dot{B}_{p,q}^{s})}\right) \Vert u-v\Vert _{L^{\theta }(0,\infty ;\dot{B}_{p,q}^{s})} \\
& \leq C|\Omega |^{\frac{1}{\theta }-\frac{1}{2}+\frac{3}{2}\left( \frac{1}{r}-\frac{1}{p}\right) }4C_{0}|\Omega |^{-\frac{1}{\theta }+\frac{3}{4}-\frac{3}{2p}}\Vert u_{0}\Vert _{\dot{B}_{2,q}^{s}}\Vert u-v\Vert _{L^{\theta }(0,\infty ;\dot{B}_{p,q}^{s})} \\
& =C_{2}|\Omega |^{\frac{1}{\theta }-\frac{1}{2}+\frac{3}{2}\left( \frac{1}{r}-\frac{1}{p}\right) -\frac{1}{\theta }+\frac{3}{4}-\frac{3}{2p}}\Vert u_{0}\Vert _{\dot{B}_{2,q}^{s}}\Vert u-v\Vert _{L^{\theta }(0,\infty ;\dot{B}_{p,q}^{s})} \\
& =C_{2}|\Omega |^{\frac{1}{4}-\frac{s}{2}}\Vert u_{0}\Vert _{\dot{B}_{2,q}^{s}}\Vert u-v\Vert _{L^{\theta }(0,\infty ;\dot{B}_{p,q}^{s})},
\end{split}
\label{contraction}
\end{equation}
for all $u,v\in Z$, where $C_{2}=C_{2}(s,p,\theta ).$ Moreover, using (\ref{aux-linear-11}) and (\ref{contraction}) with $v=0$, we obtain
\begin{equation}
\begin{split}
\Vert \Gamma (u)\Vert _{L^{\theta }(0,\infty ;\dot{B}_{p,q}^{s})}& \leq \Vert T_{\Omega }(t)u_{0}\Vert _{L^{\theta }(0,\infty ;\dot{B}_{p,q}^{s})}+\Vert \Gamma (u)-\Gamma (0)\Vert _{L^{\theta }(0,\infty ;\dot{B}_{p,q}^{s})} \\
& \leq C_{0}|\Omega |^{-\frac{1}{\theta }+\frac{3}{4}-\frac{3}{2p}}\Vert u_{0}\Vert _{\dot{B}_{2,q}^{s}}+C_{2}|\Omega |^{\frac{1}{4}-\frac{s}{2}}\Vert u_{0}\Vert _{\dot{B}_{2,q}^{s}}\Vert u\Vert _{L^{\theta }(0,\infty ;\dot{B}_{p,q}^{s})} \\
& \leq C_{0}|\Omega |^{-\frac{1}{\theta }+\frac{3}{4}-\frac{3}{2p}}\Vert u_{0}\Vert _{\dot{B}_{2,q}^{s}}+C_{2}|\Omega |^{\frac{1}{4}-\frac{s}{2}}\Vert u_{0}\Vert _{\dot{B}_{2,q}^{s}}2C_{0}|\Omega |^{-\frac{1}{\theta }+\frac{3}{4}-\frac{3}{2p}}\Vert u_{0}\Vert _{\dot{B}_{2,q}^{s}} \\
& =C_{0}\Vert u_{0}\Vert _{\dot{B}_{2,q}^{s}}|\Omega |^{-\frac{1}{\theta }+\frac{3}{4}-\frac{3}{2p}}\left( 1+2C_{2}|\Omega |^{\frac{1}{4}-\frac{s}{2}}\Vert u_{0}\Vert _{\dot{B}_{2,q}^{s}}\right)
\end{split}
\label{differenceofoperator}
\end{equation}
for all $u\in Z$. Thus, for $\Omega $ and $u_{0}$ satisfying
\begin{equation*}
C_{2}|\Omega |^{\frac{1}{4}-\frac{s}{2}}\Vert u_{0}\Vert _{\dot{B}_{2,q}^{s}}\leq \frac{1}{2},
\end{equation*}
we get
\begin{equation*}
\Vert \Gamma (u)\Vert _{L^{\theta }(0,\infty ;\dot{B}_{p,q}^{s})}\leq
2C_{0}|\Omega |^{-\frac{1}{\theta }+\frac{3}{4}-\frac{3}{2p}}\Vert
u_{0}\Vert _{\dot{B}_{2,q}^{s}}\text{ and }\Vert \Gamma (u)-\Gamma (v)\Vert
_{L^{\theta }(0,\infty ;\dot{B}_{p,q}^{s})}\leq \frac{1}{2}\Vert u-v\Vert
_{L^{\theta }(0,\infty ;\dot{B}_{p,q}^{s})}.
\end{equation*}
Then, the Banach fixed point theorem implies that there exists a unique mild
solution $u\in Z$ to (\ref{NSC}), i.e.,
\begin{equation*}
u(t)=T_{\Omega }(t)u_{0}-\mathfrak{B}(u,u)(t).
\end{equation*}
It remains to prove that $u\in C([0,\infty );\dot{B}_{2,q}^{s}(\mathbb{R}^{3}))$. Basically, we need to estimate the $\dot{B}_{2,q}^{s}$-norm of the
linear and nonlinear parts in (\ref{operatorB}). For the linear one, we use
Lemma \ref{linearestimative1} to get
\begin{equation}
\Vert T_{\Omega }(t)u_{0}\Vert _{\dot{B}_{2,q}^{s}}\leq C_{0}\Vert
u_{0}\Vert _{\dot{B}_{2,q}^{s}}. \label{4.9}
\end{equation}
For the nonlinear part, taking $\frac{1}{r}=\frac{2}{p}-\frac{s}{3},$ we use Lemma \ref{linearestimative1}, (\ref{remark1}) and H\"{o}lder's inequality to obtain
\begin{equation}
\begin{split}
\left\Vert \int_{0}^{t}T_{\Omega }(t-\tau )\mathbb{P}\nabla \cdot (u\otimes u)(\tau )\ d\tau \right\Vert _{\dot{B}_{2,q}^{s}}& \leq C\int_{0}^{t}\left\Vert T_{\Omega }(t-\tau )\mathbb{P}\nabla \cdot (u\otimes u)(\tau )\right\Vert _{\dot{B}_{2,q}^{s}}\ d\tau \\
& \leq C\int_{0}^{t}(t-\tau )^{-\frac{1}{2}-\frac{3}{2}\left( \frac{1}{r}-\frac{1}{2}\right) }\left\Vert (u\otimes u)(\tau )\right\Vert _{\dot{B}_{r,q}^{s}}\ d\tau \\
& \leq C\int_{0}^{t}(t-\tau )^{-\frac{1}{2}-\frac{3}{2}\left( \frac{1}{r}-\frac{1}{2}\right) }\left\Vert u(\tau )\right\Vert _{\dot{B}_{p,q}^{s}}^{2}\ d\tau \\
& \leq C\left\Vert (t-\cdot )^{-\frac{1}{2}-\frac{3}{2r}+\frac{3}{4}}\right\Vert _{L^{\frac{\theta }{\theta -2}}(0<\tau <t)}\left\Vert \Vert u(\tau )\Vert _{\dot{B}_{p,q}^{s}}^{2}\right\Vert _{L^{\frac{\theta }{2}}(0,\infty )} \\
& \leq Ct^{\frac{\theta -2}{\theta }\left( 1+\frac{\theta }{\theta -2}\left( -\frac{1}{2}-\frac{3}{2r}+\frac{3}{4}\right) \right) }\Vert u\Vert _{L^{\theta }(0,\infty ;\dot{B}_{p,q}^{s})}^{2},
\end{split}
\label{4.10}
\end{equation}
where we need $\frac{1}{\theta }<\frac{5}{8}-\frac{3}{2p}+\frac{s}{4}$ in order to ensure integrability at $\tau =t$. From (\ref{4.9}) and (\ref{4.10}), it follows that $u(t)\in \dot{B}_{2,q}^{s}(\mathbb{R}^{3})$ for $t>0$, and then we have that $u\in C([0,\infty );\dot{B}_{2,q}^{s}(\mathbb{R}^{3}))$, as desired.
Part $(ii)$: In view of (\ref{aux-space-77}), we have that
\begin{equation}
\Vert T_{\Omega }(t)u_{0}\Vert _{L^{\theta }(0,\infty ;\dot{B}_{p,\infty
}^{s})}\leq |\Omega |^{-\frac{1}{\theta }+\frac{3}{4}-\frac{3}{2p}}\Vert
u_{0}\Vert _{\mathcal{I}},\text{ for all }\left\vert \Omega \right\vert \geq
\Omega _{0}\text{.} \label{aux-linear-11_q=infty}
\end{equation}
Now, for $\left\vert \Omega \right\vert \geq \Omega _{0}$ consider
\begin{equation}
\Gamma (u)(t)=T_{\Omega }(t)u_{0}-\mathfrak{B}(u,u)(t)
\label{operatorB_q=infty}
\end{equation}
and
\begin{equation*}
Z=\left\{ u\in L^{\theta }(0,\infty ;\dot{B}_{p,\infty }^{s}(\mathbb{R}^{3})):\Vert u\Vert _{L^{\theta }(0,\infty ;\dot{B}_{p,\infty }^{s})}\leq 2|\Omega |^{-\frac{1}{\theta }+\frac{3}{4}-\frac{3}{2p}}\Vert u_{0}\Vert _{\mathcal{I}},\ \nabla \cdot u=0\right\} .
\end{equation*}
Taking $\frac{1}{r}=\frac{2}{p}-\frac{s}{3},$ and proceeding similarly to
Part $(i)$, we obtain a constant $\tilde{C_{2}}=\tilde{C_{2}}(s,p,\theta )$
such that
\begin{equation}
\begin{split}
\Vert \Gamma (u)-\Gamma (v)\Vert _{L^{\theta }(0,\infty ;\dot{B}_{p,\infty }^{s})}& \leq \tilde{C_{2}}|\Omega |^{\frac{1}{4}-\frac{s}{2}}\Vert u_{0}\Vert _{\mathcal{I}}\Vert u-v\Vert _{L^{\theta }(0,\infty ;\dot{B}_{p,\infty }^{s})} \\
\Vert \Gamma (u)\Vert _{L^{\theta }(0,\infty ;\dot{B}_{p,\infty }^{s})}& \leq \Vert u_{0}\Vert _{\mathcal{I}}|\Omega |^{-\frac{1}{\theta }+\frac{3}{4}-\frac{3}{2p}}\left( 1+2\tilde{C_{2}}|\Omega |^{\frac{1}{4}-\frac{s}{2}}\Vert u_{0}\Vert _{\mathcal{I}}\right) ,
\end{split}
\label{contraction-2222}
\end{equation}
for all $u,v\in Z$. Thus, for $\Omega $ and $u_{0}$ satisfying
\begin{equation*}
\left\vert \Omega \right\vert \geq \Omega _{0}\text{ and }\tilde{C_{2}}|\Omega |^{\frac{1}{4}-\frac{s}{2}}\Vert u_{0}\Vert _{\mathcal{I}}\leq \frac{1}{2},
\end{equation*}
we get
\begin{equation*}
\Vert \Gamma (u)\Vert _{L^{\theta }(0,\infty ;\dot{B}_{p,\infty }^{s})}\leq 2|\Omega |^{-\frac{1}{\theta }+\frac{3}{4}-\frac{3}{2p}}\Vert u_{0}\Vert _{\mathcal{I}}\text{ and }\Vert \Gamma (u)-\Gamma (v)\Vert _{L^{\theta }(0,\infty ;\dot{B}_{p,\infty }^{s})}\leq \frac{1}{2}\Vert u-v\Vert _{L^{\theta }(0,\infty ;\dot{B}_{p,\infty }^{s})}.
\end{equation*}
Again, we can apply the Banach fixed point theorem in order to obtain a unique mild solution $u\in Z$ to (\ref{NSC}). Assume now that $u_{0}\in \dot{B}_{2,\infty }^{s}(\mathbb{R}^{3})$. Since (\ref{4.9}) and (\ref{4.10}) hold true for $q=\infty $, it follows that $u\in C_{\omega }([0,\infty );\dot{B}_{2,\infty }^{s}(\mathbb{R}^{3}))$. \fin
\bigskip
Before proceeding, for $\Omega _{0}>0$ and $1\leq q\leq \infty $ we define
the space
\begin{equation}
\mathcal{F}:=\left\{ f\in \mathcal{S}^{\prime }(\mathbb{R}^{3})\colon \Vert
f\Vert _{\mathcal{F}}:=\sup_{|\Omega |\geq \Omega _{0}}\Vert T_{\Omega
}(t)f\Vert _{L^{4}(0,\infty ;\dot{B}_{3,q}^{1/2})}<\infty \right\} ,
\label{aux-space-771}
\end{equation}
where, for simplicity, we have omitted the dependence on $\Omega _{0}$ and $q$ in the notation $\mathcal{F}$. We also define
\begin{equation}
\mathcal{F}_{0}:=\left\{ f\in \mathcal{F}\colon \limsup_{|\Omega |\rightarrow \infty }\Vert T_{\Omega }(t)f\Vert _{L^{4}(0,\infty ;\dot{B}_{3,q}^{1/2})}=0\right\} . \label{aux-space-772}
\end{equation}
Both spaces $\mathcal{F}$ and $\mathcal{F}_{0}$ are endowed with the norm $\Vert \cdot \Vert _{\mathcal{F}}.$ The next theorem deals with the critical
case $s=1/2$.
\begin{theorem}
\label{theorem3}Let $2\leq q\leq \infty $ and $u_{0}\in D$ with $\nabla \cdot u_{0}=0$, where $D$ is a precompact set in $\mathcal{F}_{0}$. Then, there exist $\widetilde{\Omega }=\widetilde{\Omega }(D)>0$ and a unique global solution $u$ to (\ref{NSC}) in $L^{4}(0,\infty ;\dot{B}_{3,q}^{1/2}(\mathbb{R}^{3}))$ provided that $|\Omega |\geq \widetilde{\Omega }$. Moreover, if in addition $u_{0}\in \dot{B}_{2,q}^{1/2}(\mathbb{R}^{3})$ with $q\neq \infty $, then $u\in C([0,\infty );\dot{B}_{2,q}^{1/2}(\mathbb{R}^{3}))$. In the case $q=\infty ,$ we obtain $u\in C_{\omega }([0,\infty );\dot{B}_{2,\infty }^{1/2}(\mathbb{R}^{3})).$
\end{theorem}
\textbf{Proof. } Let $\delta $ be a positive number that will be chosen later. Given that $D$ is a precompact set in $\mathcal{F}_{0}$, there exist $L=L(\delta ,D)\in \mathbb{N}$ and $\{g_{k}\}\subset \mathcal{F}_{0}$ such that
\begin{equation*}
D\subset \bigcup_{k=1}^{L}B(g_{k},\delta ),
\end{equation*}
where $B(g_{k},\delta )$ denotes the ball in $\mathcal{F}_{0}$ with center $g_{k}$ and radius $\delta $. On the other hand, using the definition (\ref{aux-space-772}), there exists $\tilde{\Omega}=\tilde{\Omega}(\delta ,D)\geq \Omega _{0}>0$ such that
\begin{equation*}
\sup_{k=1,2,\ldots ,L}\Vert T_{\Omega }(t)g_{k}\Vert _{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})}\leq \delta
\end{equation*}
provided that $|\Omega |\geq \tilde{\Omega}$. Now, given $g\in D$ there
exists $k\in \{1,2,\ldots ,L\}$ such that $g\in B(g_{k},\delta )$.
Therefore, for $|\Omega |\geq \tilde{\Omega}$ we can estimate
\begin{equation*}
\begin{split}
\Vert T_{\Omega }(t)g\Vert _{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})}& \leq \Vert T_{\Omega }(t)(g_{k}-g)\Vert _{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})}+\Vert T_{\Omega }(t)g_{k}\Vert _{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})} \\
& \leq C\Vert g_{k}-g\Vert _{\mathcal{F}}+\delta \\
& \leq (C+1)\delta .
\end{split}
\end{equation*}
Thus, there exists $C_{1}>0$ such that
\begin{equation}
\sup_{g\in D}\Vert T_{\Omega }(t)g\Vert _{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})}\leq C_{1}\delta ,\text{ for all }|\Omega |\geq \tilde{\Omega}.
\label{aux-3.5}
\end{equation}
Now, we consider the complete metric space $Z$ defined by
\begin{equation}
Z=\left\{ u\in \ L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}}):\Vert u\Vert
_{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})}\leq 2C_{1}\delta ,\nabla
\cdot u=0\right\} ,\ \label{space-1003}
\end{equation}
endowed with the metric $d(u,v)=\Vert u-v\Vert _{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})}.$ Also, we consider the operator $\Gamma $ defined in the proof of Theorem \ref{theorem1}. For $u,v\in Z$, using Lemma \ref{nonlinearestimativecritical}, (\ref{remark1}) and H\"{o}lder's inequality, we
can estimate
\begin{equation}
\begin{split}
\Vert \Gamma (u)-\Gamma (v)\Vert _{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})}& =\left\Vert \int_{0}^{t}T_{\Omega }(t-\tau )\mathbb{P}\nabla \cdot (u\otimes (u-v)+(u-v)\otimes v)(\tau )\ d\tau \right\Vert _{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})} \\
& \leq C\Vert u\otimes (u-v)+(u-v)\otimes v\Vert _{L^{2}(0,\infty ;\dot{B}_{2,q}^{\frac{1}{2}})} \\
& \leq C\left( \left\Vert \Vert u\Vert _{\dot{B}_{3,q}^{\frac{1}{2}}}\Vert u-v\Vert _{\dot{B}_{3,q}^{\frac{1}{2}}}\right\Vert _{L^{2}(0,\infty )}+\left\Vert \Vert v\Vert _{\dot{B}_{3,q}^{\frac{1}{2}}}\Vert u-v\Vert _{\dot{B}_{3,q}^{\frac{1}{2}}}\right\Vert _{L^{2}(0,\infty )}\right) \\
& \leq C_{2}\left( \Vert u\Vert _{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})}+\Vert v\Vert _{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})}\right) \Vert u-v\Vert _{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})}.
\end{split}
\label{3.6}
\end{equation}
Taking $v=0$ in (\ref{3.6}), for $u\in Z$ it follows that
\begin{equation}
\begin{split}
\Vert \Gamma (u)\Vert _{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})}& \leq \Vert \Gamma (0)\Vert _{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})}+\Vert \Gamma (u)-\Gamma (0)\Vert _{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})} \\
& \leq \Vert T_{\Omega }(t)u_{0}\Vert _{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})}+C_{2}\Vert u\Vert _{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})}^{2}.
\end{split}
\label{3.7}
\end{equation}
Choosing $0<\delta <\frac{1}{8C_{1}C_{2}}$, estimates (\ref{aux-3.5}), (\ref{3.6}) and (\ref{3.7}) yield
\begin{equation*}
\begin{split}
& \Vert \Gamma (u)\Vert _{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})}\leq 2C_{1}\delta ,\text{ for all }u\in Z, \\
& \Vert \Gamma (u)-\Gamma (v)\Vert _{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})}\leq \frac{1}{2}\Vert u-v\Vert _{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})},\text{ for all }u,v\in Z,
\end{split}
\end{equation*}
provided that $|\Omega |\geq \tilde{\Omega}$. Therefore, we can apply the Banach fixed point theorem to obtain a unique global solution $u\in L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}}).$
Moreover, using Lemma \ref{linearestimative1}, Lemma \ref{nonlinearestimativecritical} and $u\in L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})$, we have that
\begin{equation}
\Vert u(t)\Vert _{\dot{B}_{2,q}^{\frac{1}{2}}}=\Vert \Gamma (u)(t)\Vert _{\dot{B}_{2,q}^{\frac{1}{2}}}\leq C\Vert u_{0}\Vert _{\dot{B}_{2,q}^{\frac{1}{2}}}+C\Vert u\Vert _{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})}^{2}<\infty , \label{aux-3.8}
\end{equation}
for a.e. $t>0$. Since $\Vert u\Vert _{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})}\leq 2C_{1}\delta $ and $\delta <\frac{1}{8C_{1}C_{2}}$, it follows that
\begin{equation}
\Vert u(t)\Vert _{\dot{B}_{2,q}^{\frac{1}{2}}}\leq C(\Vert u_{0}\Vert _{\dot{B}_{2,q}^{\frac{1}{2}}}+1)<\infty ,\text{ for all }\left\vert \Omega \right\vert \geq \tilde{\Omega}, \label{aux-3.9}
\end{equation}
and so $u(t)\in \dot{B}_{2,q}^{\frac{1}{2}}(\mathbb{R}^{3})$ for a.e. $t>0$. Using this and the above estimates, standard arguments yield $u\in C([0,\infty );\dot{B}_{2,q}^{\frac{1}{2}}(\mathbb{R}^{3}))$ for $q\neq \infty $ and $u\in C_{\omega }([0,\infty );\dot{B}_{2,q}^{\frac{1}{2}}(\mathbb{R}^{3}))$ for $q=\infty $.\fin
\begin{theorem}
\label{theorem2}Let $2\leq q\leq \infty $ and $u_{0}\in \mathcal{F}_{0}$ with $\nabla \cdot u_{0}=0$. Then, there exist $\tilde{\Omega}=\tilde{\Omega}(u_{0})$ and a unique global solution $u\in L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}}(\mathbb{R}^{3}))$ to (\ref{NSC}) provided that $|\Omega |\geq \tilde{\Omega}$.
\end{theorem}
\textbf{Proof. } It is sufficient to apply Theorem \ref{theorem3} to the set
$D=\{u_{0}\}$. \fin
\begin{cor}
\label{cor1} Let $2\leq q<4$ and $u_{0}\in D$ with $\nabla \cdot u_{0}=0$, where $D$ is a precompact set in $\dot{B}_{2,q}^{\frac{1}{2}}(\mathbb{R}^{3})$. Then, there exist $\tilde{\Omega}(D)>0$ and a unique global solution $u$ to (\ref{NSC}) in the class $C([0,\infty );\dot{B}_{2,q}^{\frac{1}{2}}(\mathbb{R}^{3}))\cap L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}}(\mathbb{R}^{3}))$ provided that $|\Omega |\geq \tilde{\Omega}(D)$.
\end{cor}
\textbf{Proof. } In view of Lemma \ref{critical_linear_semigroup}, we have
that $\dot{B}_{2,q}^{\frac{1}{2}}\hookrightarrow \mathcal{F}_{0}$ for $1\leq
q<4$. Now the result follows by applying Theorem \ref{theorem3}.\fin
\section{Asymptotic behavior as $|\Omega |\rightarrow \infty $}
In this section we study the asymptotic behavior of the mild solutions as $|\Omega |\rightarrow \infty $. For convenience, we denote
\begin{equation*}
\alpha _{0}=-\frac{1}{\theta }+\frac{1}{2}-\frac{3}{2p}+\frac{s}{2}\ \ \text{
and }\ \ \beta _{0}=\frac{1}{\theta }-\frac{3}{4}+\frac{3}{2p}.
\end{equation*}
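With $\frac{1}{r}=\frac{2}{p}-\frac{s}{3}$ as above, these are precisely (up to sign) the powers of $|\Omega |$ appearing in Lemma \ref{linearestimative1} and Lemma \ref{nonlinear1}, respectively:
\begin{equation*}
-\beta _{0}=-\frac{1}{\theta }+\frac{3}{4}-\frac{3}{2p}\ \ \text{and}\ \ -\alpha _{0}=\frac{1}{\theta }-\frac{1}{2}+\frac{3}{2}\left( \frac{1}{r}-\frac{1}{p}\right) .
\end{equation*}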
First, we consider the case $1/2<s<3/4$.
\begin{theorem}
\label{properties_of_u_and_v_Omega}
\begin{enumerate}
\item[$(i)$] Let $0\leq \epsilon <\frac{1}{12}$ and $1\leq q< \infty$, and suppose that $s,p$ and $\theta $ satisfy
\begin{gather*}
\frac{1}{2}+3\epsilon <s<\frac{3}{4},\ \ \ \ \frac{1}{3}+\frac{s}{9}<\frac{1}{p}<\frac{2}{3}-\frac{s}{3}, \\
\frac{s}{2}-\frac{1}{2p}<\frac{1}{\theta }<\frac{5}{8}-\frac{3}{2p}+\frac{s}{4}-\frac{\epsilon }{4},\ \ \ \ \frac{3}{4}-\frac{3}{2p}\leq \frac{1}{\theta }<\min\left\{1-\frac{2}{p},\frac{1}{q}\right\}.
\end{gather*}
Let $u$ and $v$ be solutions of (\ref{NSC}) with initial data $u_{0}$ and $v_{0}$ in $\dot{B}_{2,q}^{s}(\mathbb{R}^{3})$, respectively. Then, for $\alpha <2\beta _{0}$,
\begin{equation}
\lim_{|\Omega |\rightarrow \infty }|\Omega |^{\alpha }\Vert u(t)-v(t)\Vert _{\dot{B}_{2,q}^{s+\epsilon }}=0\ \ \ \text{if and only if}\ \ \ \lim_{|\Omega |\rightarrow \infty }|\Omega |^{\alpha }\Vert T_{\Omega }(t)(u_{0}-v_{0})\Vert _{\dot{B}_{2,q}^{s+\epsilon }}=0,\text{ for each fixed }t>0. \label{asymp-200}
\end{equation}
\item[$(ii)$] Let $0\leq \epsilon <\frac{1}{6}$ and $1\leq q< \infty$.
Assume that $s$, $p$ and $\theta $ satisfy
\begin{gather*}
\frac{1}{2}+\frac{3\epsilon }{2}<s<\frac{3}{4},\ \ \ \ \frac{1}{3}+\frac{s}{9}<\frac{1}{p}<\frac{2}{3}-\frac{s}{3}, \\
\frac{s}{2}-\frac{1}{2p}<\frac{1}{\theta }<\frac{5}{8}-\frac{3}{2p}+\frac{s}{4},\ \ \ \ \frac{3}{4}-\frac{3}{2p}\leq \frac{1}{\theta }<\min\left\{1-\frac{2}{p},\frac{1}{q}\right\}.
\end{gather*}
Let $\alpha <\alpha _{0}+2\beta _{0}-\frac{\epsilon }{2}$ and assume that $u$ and $v$ are solutions of (\ref{NSC}) with initial data $u_{0}$ and $v_{0}$ in $\dot{B}_{2,q}^{s}(\mathbb{R}^{3})$, respectively. Then,
\begin{equation}
\lim_{|\Omega |\rightarrow \infty }|\Omega |^{\alpha }\Vert u-v\Vert _{L^{\theta }(0,\infty ;\dot{B}_{p,q}^{s+\epsilon })}=0\ \ \ \text{if and only if}\ \ \ \lim_{|\Omega |\rightarrow \infty }|\Omega |^{\alpha }\Vert T_{\Omega }(t)(u_{0}-v_{0})\Vert _{L^{\theta }(0,\infty ;\dot{B}_{p,q}^{s+\epsilon })}=0. \label{asymp-201}
\end{equation}
\end{enumerate}
\end{theorem}
\textbf{Proof. } First we write
\begin{equation}
u-v=T_{\Omega }(t)(u_{0}-v_{0})+\mathfrak{B}(u,u)(t)-\mathfrak{B}(v,v)(t).
\label{aux-1}
\end{equation}
Considering $\frac{1}{r}=\frac{2}{p}-\frac{s}{3}$, we estimate the $\dot{B}_{2,q}^{s+\epsilon }$-norm of the nonlinear term in (\ref{aux-1}) as follows:
\begin{equation*}
\begin{split}
\Vert \mathfrak{B}(u,u)(t)-\mathfrak{B}(v,v)(t)\Vert _{\dot{B}_{2,q}^{s+\epsilon }}& \leq C\int_{0}^{t}(t-\tau )^{-\frac{1}{2}-\frac{3}{2}\left( \frac{1}{r}-\frac{1}{2}\right) }\Vert e^{\frac{1}{2}(t-\tau )\Delta }(u\otimes (u-v)+(u-v)\otimes v)(\tau )\Vert _{\dot{B}_{r,q}^{s+\epsilon }}\ d\tau \\
& \leq C\int_{0}^{t}(t-\tau )^{-\frac{1}{2}-\frac{3}{2}\left( \frac{1}{r}-\frac{1}{2}\right) -\frac{\epsilon }{2}}(\Vert u(\tau )\Vert _{\dot{B}_{p,q}^{s}}+\Vert v(\tau )\Vert _{\dot{B}_{p,q}^{s}})\Vert (u-v)(\tau )\Vert _{\dot{B}_{p,q}^{s}}\ d\tau \\
& \leq Ct^{\frac{1}{2}-\frac{2}{\theta }-\frac{3}{2}\left( \frac{1}{r}-\frac{1}{2}\right) -\frac{\epsilon }{2}}(\Vert u\Vert _{L^{\theta }(0,\infty ;\dot{B}_{p,q}^{s})}+\Vert v\Vert _{L^{\theta }(0,\infty ;\dot{B}_{p,q}^{s})})\Vert u-v\Vert _{L^{\theta }(0,\infty ;\dot{B}_{p,q}^{s})},
\end{split}
\end{equation*}
where we have the integrability at $\tau =t$ due to the condition
\begin{equation*}
\frac{1}{\theta }<\frac{5}{8}-\frac{3}{2p}+\frac{s}{4}-\frac{\epsilon }{4}\Longrightarrow \frac{1}{2}-\frac{2}{\theta }-\frac{3}{2}\left( \frac{1}{r}-\frac{1}{2}\right) -\frac{\epsilon }{2}>0.
\end{equation*}
Thus,
\begin{equation*}
|\Omega |^{\alpha }\Vert \mathfrak{B}(u,u)(t)-\mathfrak{B}(v,v)(t)\Vert _{\dot{B}_{2,q}^{s+\epsilon }}\leq Ct^{\frac{1}{2}-\frac{2}{\theta }-\frac{3}{2}\left( \frac{1}{r}-\frac{1}{2}\right) -\frac{\epsilon }{2}}|\Omega |^{\alpha -2\beta _{0}},\text{ for all }|\Omega |\geq \Omega _{0}.
\end{equation*}
Since $\alpha <2\beta _{0}$, it follows that
\begin{equation}
\lim_{|\Omega |\rightarrow \infty }|\Omega |^{\alpha }\Vert \mathfrak{B}(u,u)(t)-\mathfrak{B}(v,v)(t)\Vert _{\dot{B}_{2,q}^{s+\epsilon }}=0,\text{ for each }t>0. \label{aux-101}
\end{equation}
In view of (\ref{aux-1}) and (\ref{aux-101}), we obtain the desired property.
For item $(ii)$, we proceed similarly as in the proof of Lemma \ref{nonlinear1} by taking $f=u\otimes (u-v)+(u-v)\otimes v$ in the nonlinear term of (\ref{aux-1}). Since $\frac{1}{\theta }<\frac{1}{2}-\frac{3}{2}\left( \frac{1}{r}-\frac{1}{p}\right) -\frac{\epsilon }{2},$ we can estimate
\begin{equation*}
\Vert \mathfrak{B}(u,u)-\mathfrak{B}(v,v)\Vert _{L^{\theta }(0,\infty ;\dot{B}_{p,q}^{s+\epsilon })}\leq C|\Omega |^{-\alpha _{0}+\frac{\epsilon }{2}}(\Vert u\Vert _{L^{\theta }(0,\infty ;\dot{B}_{p,q}^{s})}+\Vert v\Vert _{L^{\theta }(0,\infty ;\dot{B}_{p,q}^{s})})\Vert u-v\Vert _{L^{\theta }(0,\infty ;\dot{B}_{p,q}^{s})}
\end{equation*}
and then
\begin{equation}
|\Omega |^{\alpha }\Vert \mathfrak{B}(u,u)-\mathfrak{B}(v,v)\Vert
_{L^{\theta }(0,\infty ;\dot{B}_{p,q}^{s+\epsilon })}\leq C|\Omega |^{\alpha
-\alpha _{0}+\frac{\epsilon }{2}-2\beta _{0}},\text{ for all }|\Omega |\geq
\Omega _{0}. \label{aux-201}
\end{equation}
Finally, we obtain (\ref{asymp-201}) by letting $|\Omega |\rightarrow \infty
$ and using (\ref{aux-201}) and (\ref{aux-1}).\fin
\begin{remark}
Let $1\leq q<\infty $, and consider $s,\gamma _{2},p$ and $\theta $ such
that
\begin{gather}
\frac{1}{2}<s<\frac{3}{4},\ \ \ 0<\gamma _{2}<\frac{1}{2}\left( 1-\frac{1}{e}\right) ,\ \ \ \frac{1}{2\gamma _{2}}\left( \frac{1}{8}-\frac{s}{4}+\gamma _{2}\right) \leq \frac{1}{p}<\frac{2}{3}-\frac{s}{3},
\label{new_assumptions1} \\
\frac{s}{2}-\frac{1}{2p}<\frac{1}{\theta }<\frac{5}{8}-\frac{3}{2p}+\frac{s}{4},\ \ \ \frac{3}{4}-\frac{3}{2p}\leq \frac{1}{\theta }<1-\frac{2}{p}.
\label{new_assumptions2}
\end{gather}
Since $\frac{1}{\theta }<\frac{1}{2}-\frac{3}{2}\left( \frac{1}{r}-\frac{1}{p}\right) -\gamma _{2}\left( 1-\frac{2}{p}\right) $ and
\begin{equation*}
\left( \frac{\log {(e+|\Omega |t)}}{1+|\Omega |t}\right) ^{\frac{1}{2}\left( 1-\frac{2}{p}\right) }=\frac{\log {(e+|\Omega |t)}^{\frac{1}{2}\left( 1-\frac{2}{p}\right) }}{(1+|\Omega |t)^{\gamma _{1}\left( 1-\frac{2}{p}\right) }}\frac{1}{(1+|\Omega |t)^{\gamma _{2}\left( 1-\frac{2}{p}\right) }}\leq (|\Omega |t)^{-\gamma _{2}\left( 1-\frac{2}{p}\right) },
\end{equation*}
where $\gamma _{1},\gamma _{2}>0$, $\gamma _{1}+\gamma _{2}=\frac{1}{2}$ and
$\gamma _{2}<\frac{1}{2}\left( 1-\frac{1}{e}\right) $, we can estimate
(similarly to Lemma \ref{nonlinear1})
\begin{equation*}
\left\Vert \mathfrak{B}(u,u)-\mathfrak{B}(v,v)\right\Vert _{L^{\theta }(0,\infty ;\dot{B}_{p,q}^{s+\epsilon })}\leq C|\Omega |^{-\alpha _{0}-\gamma _{2}\left( 1-\frac{2}{p}\right) }(\Vert u\Vert _{L^{\theta }(0,\infty ;\dot{B}_{p,q}^{s})}+\Vert v\Vert _{L^{\theta }(0,\infty ;\dot{B}_{p,q}^{s})})\Vert u-v\Vert _{L^{\theta }(0,\infty ;\dot{B}_{p,q}^{s})}
\end{equation*}
which implies
\begin{equation*}
|\Omega |^{\alpha }\left\Vert \mathfrak{B}(u,u)-\mathfrak{B}(v,v)\right\Vert
_{L^{\theta }(0,\infty ;\dot{B}_{p,q}^{s+\epsilon })}\leq C|\Omega |^{\alpha
-\alpha _{0}-\gamma _{2}\left( 1-\frac{2}{p}\right) -2\beta _{0}}.
\end{equation*}
Thus, for $\alpha <\alpha _{0}+2\beta _{0}+\gamma _{2}\left( 1-\frac{2}{p}\right) $, we obtain the property (\ref{asymp-201}).
\end{remark}
In what follows, we address the asymptotic behavior of solutions in the
critical case ($s=1/2$).
\begin{theorem}
\label{propertiescritical_u_and_v} Let $2\leq q\leq \infty $ and let $u$ and
$v$ be mild solutions of (\ref{NSC}) with initial data $u_{0}$ and $v_{0}$
in $\mathcal{F}_{0}$, respectively. Then, for all $\alpha \geq 0$
\begin{equation}
\lim_{|\Omega |\rightarrow \infty }|\Omega |^{\alpha }\Vert u-v\Vert _{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})}=0\ \ \ \text{if and only if}\ \ \ \lim_{|\Omega |\rightarrow \infty }|\Omega |^{\alpha }\Vert T_{\Omega }(t)(u_{0}-v_{0})\Vert _{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})}=0.
\label{asym-crit-2}
\end{equation}
Moreover, for each $t>0$, we have that
\begin{equation}
\lim_{|\Omega |\rightarrow \infty }|\Omega |^{\alpha }\Vert u(t)-v(t)\Vert _{\dot{B}_{2,q}^{\frac{1}{2}}}=0 \label{asym-crit-1}
\end{equation}
provided that
\begin{equation}
\lim_{|\Omega |\rightarrow \infty }|\Omega |^{\alpha }\left( \Vert T_{\Omega }(t)(u_{0}-v_{0})\Vert _{\dot{B}_{2,q}^{\frac{1}{2}}}+\Vert T_{\Omega }(t)(u_{0}-v_{0})\Vert _{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})}\right) =0. \label{asymp-hip-1}
\end{equation}
\end{theorem}
\textbf{Proof. } By the proof of Theorem \ref{theorem3}, we know that $u\in
L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})$ with
\begin{equation*}
\Vert u\Vert _{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})}\leq
2C_{1}\delta ,\text{ for all }|\Omega |\geq \tilde{\Omega},
\end{equation*}
and similarly for $v$. Thus,
\begin{equation}
\sup_{|\Omega |\geq \tilde{\Omega}}\Vert u\Vert _{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})}\leq 2C_{1}\delta \text{ and }\sup_{|\Omega |\geq \tilde{\Omega}}\Vert v\Vert _{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})}\leq 2C_{1}\delta . \label{aux-assint-10}
\end{equation}
Next, we estimate
\begin{equation*}
\Vert u-v\Vert _{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})}\leq \Vert T_{\Omega }(t)(u_{0}-v_{0})\Vert _{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})}+C_{2}(\Vert u\Vert _{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})}+\Vert v\Vert _{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})})\Vert u-v\Vert _{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})}
\end{equation*}
which yields
\begin{equation}
(1-4C_{1}C_{2}\delta )|\Omega |^{\alpha }\Vert u-v\Vert _{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})}\leq |\Omega |^{\alpha }\Vert T_{\Omega }(t)(u_{0}-v_{0})\Vert _{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})},
\label{aux-assint-11}
\end{equation}
where $C_{1},C_{2}$ and $\delta $ are as in the proof of Theorem \ref{theorem3}. Since $1-4C_{1}C_{2}\delta >0$ and the term on the right-hand side converges to zero (by hypothesis), the \textquotedblleft \textit{if}\textquotedblright\ part of (\ref{asym-crit-2}) follows. For the reverse, we write (\ref{aux-1}) as
\begin{equation}
T_{\Omega }(t)(u_{0}-v_{0})=u-v-[\mathfrak{B}(u,u)(t)-\mathfrak{B}(v,v)(t)]
\label{aux-1-2}
\end{equation}
and proceed similarly.
Next, we turn to (\ref{asym-crit-1}). Applying the $\dot{B}_{2,q}^{\frac{1}{2}}$-norm and using Lemma \ref{nonlinearestimativecritical}, we obtain
\begin{equation}
\Vert u(t)-v(t)\Vert _{\dot{B}_{2,q}^{\frac{1}{2}}}\leq \Vert T_{\Omega
}(t)(u_{0}-v_{0})\Vert _{\dot{B}_{2,q}^{\frac{1}{2}}}+C(\Vert u\Vert
_{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})}+\Vert v\Vert
_{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})})\Vert u-v\Vert
_{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})}, \label{aux-assint-30}
\end{equation}
for each $t>0$. Multiplying (\ref{aux-assint-30}) by $\left\vert \Omega \right\vert ^{\alpha }$, letting $\left\vert \Omega \right\vert \rightarrow \infty $, and using (\ref{aux-assint-10}), (\ref{asymp-hip-1}) and (\ref{asym-crit-2}), we get (\ref{asym-crit-1}). \fin
\begin{remark}
Notice that we can take $v_{0}=0$ and $v=0$ in Theorems \ref{properties_of_u_and_v_Omega} and \ref{propertiescritical_u_and_v} and obtain asymptotic behavior properties for $u=u_{\Omega }$ as $\left\vert \Omega \right\vert \rightarrow \infty $. In particular, in Theorem \ref{propertiescritical_u_and_v}, we have that
\begin{equation}
\lim_{|\Omega |\rightarrow \infty }|\Omega |^{\alpha }\Vert u_{\Omega }\Vert _{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})}=0\text{ provided that }\lim_{|\Omega |\rightarrow \infty }|\Omega |^{\alpha }\Vert T_{\Omega }(t)u_{0}\Vert _{L^{4}(0,\infty ;\dot{B}_{3,q}^{\frac{1}{2}})}=0.
\label{aux-asy-50}
\end{equation}
In the case $\alpha =0$, notice that the latter limit holds true for $u_{0}\in \dot{B}_{2,q}^{1/2}(\mathbb{R}^{3})$ with $1\leq q<4$ (see Lemma \ref{critical_linear_semigroup}) and for all $u_{0}\in \mathcal{F}_{0}$.
\end{remark}
\section{Introduction}
The GW approximation~\cite{aulbur_quasiparticle_1999, aryasetiawan_gw_1998,hybertsen_first-principles_1985,godby_accurate_1986}, introduced by Hedin in 1965~\cite{hedin}, remains the most widely used method for quasiparticle (QP) calculations of semiconductors and insulators. Over the years it has been extensively applied to inorganic solids~\cite{shishkin_implementation_2006, kotani_all-electron_2002, marini_yambo:_2009} and more recently also to molecules~\cite{rostgaard_fully_2010, caruso_self-consistent_2013, bruneval_$gw$_2009, blase_first-principles_2011} and atomically thin two-dimensional (2D) materials~\cite{falco, filip, louie_2d}.
The GW self-energy can be obtained by iterating Hedin’s equations once starting from $\Sigma=0$ (i.e. the Hartree approximation). This produces the trivial vertex function $\Gamma(1,2,3)=\delta(1,2)\delta(1,3)$, which corresponds to invoking the time-dependent Hartree approximation for the dynamical screening (i.e. the random phase approximation (RPA)). For this approach to be consistent, the Green's function which should be used for the calculation of the self-energy is the Hartree $G$. This is known to be a poor approximation, and instead practical GW calculations follow a “best $G$, best $W$” philosophy~\cite{hybertsen_first-principles_1985}. Most often one uses a non-interacting $G_0$ from density functional theory (DFT) and evaluates $W$ within the RPA from the polarisability $\chi_0=G_0 G_0$. This approximation is referred to as G$_0$W$_0$ and has been shown to yield reasonably good, although somewhat underestimated, band gaps~\cite{shishkin_implementation_2006, Galli}. Carrying out self-consistency in the Green's function only, GW$_0$, has been found to improve the band gaps~\cite{vasp_2007}. Iterating to full self-consistency in both the Green's function and screened interaction, GW, systematically overestimates the band gaps and worsens the agreement with experiments~\cite{vasp_2007}.
The most obvious way to go beyond the GW approximation is to perform another iteration of Hedin’s equations starting from $\Sigma = iGW$. Neglecting derivatives of $W$, this produces the kernel $\delta \Sigma(1,2)/\delta G(3,4) =iW(1,2,3,4)$, which is known from the Bethe-Salpeter Equation. The four-point nature of this kernel makes it difficult to invert the vertex equation, $\Gamma = \delta + K G G \Gamma$, without loss of accuracy. Instead one can perform a single iteration of the vertex equation to obtain $\Gamma=\delta+ WGG$, which leads to a self-energy consisting of a second-order screened exchange term in addition to the usual $iGW$ term. Gruneis \emph{et al.} have shown, using a static approximation for $W$ in the vertex, that this GW$\Gamma^1$ approximation, performed in a fully self-consistent manner, leads to significant improvements for band gaps and ionization potentials of solids~\cite{gruneis}. From a theoretical point of view this is a highly satisfactory result. The drawback is the higher complexity of the formalism and the concomitant loss of physical transparency as well as the significant computational overhead as compared to the GW method.
Time-dependent density-functional theory (TDDFT) in principle offers a framework for including exchange-correlation (xc)-effects in the dynamical response via a two-point vertex function rather than the computationally challenging three-point vertex function that arises naturally in the diagrammatic many-body formalism. While it appears attractive to use TDDFT derived vertex functions for many-body calculations, progress along these lines has been hindered by the poor quality of the local xc-kernels derived from standard local xc-potentials. However, recent work has shown that a simple renormalization of the adiabatic LDA xc-kernel can overcome these problems and yield a dramatic improvement over the RPA for total energy calculations based on the adiabatic connection fluctuation dissipation theorem (ACDFT)~\cite{thomas, thomas2, chris}.
Here we show that the renormalized adiabatic LDA (rALDA) kernel, when introduced in Hedin’s equations, produces a simple two-point vertex function that leads to systematically improved QP energies for a range of semiconductors and insulators. The most striking effect of the vertex is that it raises the absolute QP energies from G$_0$W$_0$ by around 0.5 eV while the gaps are almost unaffected. These effects can be traced to an improved description of the short range correlation hole and thus the (absolute) correlation energy of electrons in the ground state.
\section{Self-energy and xc-kernel}
As originally observed by Hybertsen and Louie~\cite{hybertsen_first-principles_1985}, it is possible to start the iterative solution of Hedin’s equation not with $\Sigma=0$ (which leads to the GW approximation), but rather with a local xc-potential: $\Sigma^0(1,2)=\delta(1,2)v_{xc}(1)$. As shown by Del Sole \emph{et al.}~\cite{delsole} this leads to a self-energy of the form
\begin{equation}\label{eq.gw}
\Sigma(1,2) = i G(1,2) \widetilde W(1,2),
\end{equation}
where
\begin{equation}\label{eq.tildeW}
\widetilde W = v [1-\chi^0(v+f_{xc})]^{-1}
\end{equation}
and $f_{xc}(1,2)=\delta v_{xc}(1)/\delta n(2)$ is the adiabatic xc-kernel. Crucially, $\widetilde W(1,2)$ is the screened \emph{effective} potential at 2 generated by a charge at 1. It consists of the bare potential plus the induced Hartree and xc-potential. It is thus the potential felt by a (Kohn-Sham) electron in the system. For comparison the potential felt by a classical test charge is the bare potential screened only by the induced Hartree potential:
\begin{equation}
\widehat{W} = v + v[1-\chi^0(v+f_{xc})]^{-1}\chi^0 v
\end{equation}
Using $\widehat{W}$ in Eq. \eqref{eq.gw} corresponds to including the vertex in the polarisability $P$ (or irreducible response function) but neglecting it in the self-energy. We shall refer to the use of $\widetilde W$ or $\widehat{W}$ in Eq. \eqref{eq.gw} as G$_0$W$_0\Gamma$ and G$_0$W$_0$P, respectively. The subscripts indicate that the self-energies are evaluated non-self-consistently starting from DFT. Note that in contrast to the GW approximation, which strictly should be based on the Hartree $G$, the use of a DFT starting point is perfectly justified within the G$_0$W$_0\Gamma$ theory.
The adiabatic LDA kernel is given by
$$ f_{xc}^\text{ALDA}[n](\mathbf{r},\mathbf{r}') = \delta(\mathbf{r}-\mathbf{r}') f_{xc}^\text{ALDA}[n]$$
where
$$f_{xc}^\text{ALDA}[n] = \frac{d^2}{dn^2} \bigg( ne_{xc}^\text{HEG} \bigg) \bigg\vert_{n=n(\mathbf{r})}.$$
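For the exchange-only part employed in this work, this kernel can be written in closed form. Starting from the HEG exchange energy per electron, $e_{x}^\text{HEG}(n)=-\frac{3}{4}\left(\frac{3}{\pi}\right)^{1/3}n^{1/3}$, one finds
$$f_{x}^\text{ALDA}[n] = \frac{d^2}{dn^2}\left(-\frac{3}{4}\Big(\frac{3}{\pi}\Big)^{1/3}n^{4/3}\right)=-\frac{1}{3}\Big(\frac{3}{\pi}\Big)^{1/3}n^{-2/3},$$
which makes the low-density divergence discussed below explicit.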
While $f_{xc}^\text{ALDA}$ equals the exact xc-kernel of the homogeneous electron gas (HEG) in the $q\to 0$ and $\omega \to 0$ limits, it violates a number of other exact conditions. In particular, it does not incorporate the correct asymptotic $\propto q^{-2}$ behaviour for large $q$. This deficiency leads to a divergent on-top correlation hole~\cite{Furche}. Moreover, the ALDA kernel diverges at small densities, where $f_{x}^\text{ALDA} \sim n^{-2/3}$. We have observed that when applying the ALDA kernel to systems other than silicon (which was the system addressed by Del Sole \emph{et al.}~\cite{delsole} and again by R. Shaltaf \emph{et al.}~\cite{shaltaf}), these divergences make it impossible to obtain converged results in practice. This is exemplified in Fig. \ref{fig:conv}\subref{subfig:conv2} and emphasizes the critical importance of renormalizing the local kernel.
The rALDA kernel is defined for the HEG by setting $f^{\text{rALDA}}_{xc}[n](q)=f^{\text{ALDA}}_{xc}[n]$ for $q<2k_F[n]$ and $-v(q)$ otherwise (this ensures continuity at $q=2k_F$). This results in a non-local kernel with the (almost) exact asymptotic $q\to \infty$ behaviour and without the divergences of the ALDA kernel~\cite{chris}. In this work we have followed the wave vector symmetrization scheme (see Eq. (38) in \onlinecite{chris}) to generalize the rALDA to inhomogeneous densities. Furthermore, we include only the dominant exchange part of the kernel. We mention that a small inconsistency of our QP scheme is that we iterate Hedin’s equations from $\Sigma^0(1,2)=\delta(1,2)v_{xc}^{\text{LDA}}(1)$ while $f_{xc}^{\text{rALDA}}$ does not exactly equal $\delta v_{xc}^{\text{LDA}}/\delta n$ due to the truncation at $q=2k_F$.
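The continuity claim can be checked explicitly for the exchange-only HEG kernel: inserting $n=k_F^3/(3\pi^2)$ into the closed form above gives $f_{x}^\text{ALDA}[n]=-\pi/k_F^2$, which coincides with $-v(q)\vert_{q=2k_F}=-4\pi/(2k_F)^2$, so the two branches of the rALDA kernel indeed match at $q=2k_F$.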
The rALDA kernel provides an essentially exact description of the correlation hole of the HEG across a wide range of densities and has been shown to improve the RPA description of bond energies in molecules and solids~\cite{thomas, thomas2, chris}. However, more important for the present work is that the rALDA kernel provides a dramatic improvement of absolute correlation energies compared to RPA. For example, the RPA correlation energy of the HEG is 0.3-0.5 eV/electron too negative while the rALDA error is below 0.03 eV/electron. Very similar trends are seen for small atoms and molecules~\cite{thomas2} as well as for bulk silicon~\cite{chris}.
\section{Computational details}
We have calculated the QP band gaps, ionization potentials (IP) and electron affinities (EA) for a range of semiconductors and insulators using five different approximations to the self-energy: (i) conventional G$_0$W$_0$ (ii) eigenvalue self-consistent GW$_0$ (iii) full eigenvalue self-consistent GW (iv) G$_0$W$_0\Gamma$ and (v) G$_0$W$_0$P. The non-self-consistent calculations employed an LDA starting point and the exchange-only rALDA kernel was used to obtain $\widetilde W$ from Eq. \eqref{eq.tildeW}.
The QP calculations for the bulk and 2D crystals in their experimental geometries were performed using the GPAW code~\cite{gpaw_enko}. See Table \ref{tab:bulkstructs} and \ref{tab:2dstructs} for lattice constants and thickness of the 2D materials.
Response functions and screened interactions were evaluated along the real frequency axis using a non-linear frequency grid. The number of unoccupied bands included in $\chi_0$ was set equal to the number of plane wave basis functions. A $8\times8\times8$ ($18 \times 18$) \textbf{k}-point grid was used for all bulk (2D) materials. For the 2D materials we employed a recently developed method for treating the critical $\mathbf q \to \mathbf 0$ limit of the screened interaction while avoiding spurious interactions with neighbouring supercells~\cite{filip}. $15\,\mathrm{\AA}$ of vacuum was used in the out-of-plane direction. The reported band positions and gaps were extrapolated to the infinite plane wave basis limit and the results are estimated to be converged to within 0.02 eV.
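As an illustration of this workflow, a minimal GPAW script could look as follows. This is a sketch only, not the production setup: the \texttt{xc} and \texttt{fxc\_mode} keywords of the \texttt{G0W0} calculator are assumptions based on the GPAW documentation for vertex-corrected GW and should be verified against the installed version.
\begin{verbatim}
# Hedged sketch of the workflow described above. The G0W0 keywords
# `xc` and `fxc_mode` are assumptions (check your GPAW version);
# the remaining calls are standard GPAW/ASE API.
from ase.build import bulk
from gpaw import GPAW, PW, FermiDirac
from gpaw.response.g0w0 import G0W0

# Step 1: LDA ground state at the experimental lattice constant.
atoms = bulk('Si', 'diamond', a=5.431)
atoms.calc = GPAW(mode=PW(400), xc='LDA', kpts=(8, 8, 8),
                  occupations=FermiDirac(0.01), txt='si_lda.txt')
atoms.get_potential_energy()
# Diagonalize all empty bands: needed for the response function chi_0.
atoms.calc.diagonalize_full_hamiltonian()
atoms.calc.write('si_lda.gpw', mode='all')

# Step 2: QP energies; xc='rALDA' would put the kernel into W-tilde
# (G0W0Gamma), while fxc_mode would select where the vertex acts.
gw = G0W0('si_lda.gpw', bands=(3, 5),  # VBM/CBM bands of bulk Si
          ecut=100.0,                  # extrapolated to infinity above
          xc='rALDA', fxc_mode='GWG',  # assumed keyword names
          filename='si_gwgamma')
results = gw.calculate()
print(results['qp'])                   # QP energies in eV
\end{verbatim}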
Band edge positions with respect to vacuum were determined by aligning the Hartree potential at the nuclei in the bulk calculations to that inside a slab with surface orientation and reconstruction as reported in available experimental studies. The considered surfaces are (111) $2\times1$ for Si in the diamond structure, (100) for MgO and LiF in the rocksalt structure and (110) for the rest of the compounds in the zinc-blende structure. The slabs are represented by 10 layers (rocksalt), 14 layers (zinc-blende) and 24 layers (diamond). The surfaces were relaxed with the PBE xc-functional~\cite{pbe}, rescaled to the experimental lattice constant, and recalculated with LDA.
\begin{table}[h]
\centering
\begin{tabularx}{\columnwidth}{L{1.5cm}L{2cm}C{3cm}r}
\hline\hline \noalign{\smallskip}
& Structure & Latt. const. (\AA) & \multicolumn{1}{c}{\textbf{k}-points} \\
\hline\noalign{\smallskip}
MgO & rocksalt & 4.212 & $8\times 8\times 8$ \\
SiC & zincblende & 4.350 & $8\times 8\times 8$ \\
LiF & rocksalt & 4.024 & $8\times 8\times 8$ \\
CdS & zincblende & 5.818 & $8\times 8\times 8$ \\
Si & diamond & 5.431 & $8\times 8\times 8$ \\
C & diamond & 3.567 & $8\times 8\times 8$ \\
BN & zincblende & 3.615 & $8\times 8\times 8$ \\
AlP & zincblende & 5.451 & $8\times 8\times 8$ \\
\hline\hline
\end{tabularx}
\caption{The bulk crystal structures considered in this study. The lattice constants and \textbf{k}-point grids applied in the quasiparticle calculations are shown. }\label{tab:bulkstructs}
\end{table}
\begin{table}[h]
\centering
\begin{tabularx}{\columnwidth}{>{\hsize=.5\hsize}X>{\hsize=.3\hsize}Xccc}
\hline\hline \noalign{\smallskip}
& & Latt. const. (\AA) & Thickness (\AA) & \textbf{k}-points \\
\hline\noalign{\smallskip}
MoS$_2$ & 2H & 3.160 & 3.170 & $18\times 18\times 1$ \\
MoSe$_2$ & 2H & 3.289 & 3.340 & $18\times 18\times 1$ \\
WS$_2$ & 2H & 3.153 & 3.360 & $18\times 18\times 1$ \\
\hline\hline
\end{tabularx}
\caption{The 2D crystal structures considered in this study. Lattice constant, layer thickness and \textbf{k}-point grids are shown. }\label{tab:2dstructs}
\end{table}
\section{Results}
\begin{table}[h]
\centering
\begin{tabularx}{\columnwidth}{lcccccc}
\hline\hline \noalign{\smallskip}
& LDA & G$_0$W$_0$ & GW$_0$ & \multicolumn{1}{c}{G$_0$W$_0$P} & \multicolumn{1}{c}{G$_0$W$_0\Gamma$} & Exp.\\
\hline\noalign{\smallskip}
MgO & 4.68 & 7.70 & 8.16 & 7.10 & 7.96 & 7.98 \\
CdS & 0.86 & 1.76 & 2.27 & 1.84 & 1.87 & 2.48 \\
LiF & 8.83 & 14.00 & 14.75 & 13.25 & 14.21 & 14.66 \\
SiC & 1.31 & 2.54 & 2.72 & 2.38 & 2.57 & 2.51 \\
Si & 0.52 & 1.23 & 1.34 & 1.16 & 1.29 & 1.22 \\
C & 4.10 & 5.74 & 5.97 & 5.62 & 5.69 & 5.88 \\
BN & 4.36 & 6.54 & 6.81 & 6.27 & 6.60 & 6.6 \\
AlP & 1.44 & 2.48 & 2.67 & 2.34 & 2.51 & 2.47 \\
\hline\noalign{\smallskip}
ML-MoS$_2$ & 1.71 & 2.47 & 2.61 & 2.28 & 2.47 & 2.50 \\
ML-MoSe$_2$ & 1.43 & 2.08 & 2.23 & 1.99 & 2.07 & 2.31 \\
ML-WS$_2$ & 1.33 & 2.75 & 3.07 & 2.56 & 2.81 & 2.72\\
\hline\noalign{\smallskip}
MAE & \phantom{-}1.89 & \phantom{-}0.20 & 0.17 & \phantom{-}0.41 & \phantom{-}0.16 & - \\
MSE & -1.89 & -0.19 & 0.12 & -0.41 & -0.12 & - \\
\hline \hline\noalign{\smallskip}
\end{tabularx}
\caption{\label{tab:res} Band gaps obtained using different self-energy approximations (see text). Experimental values are from \cite{shishkin} and the references therein and corrected for zero-point motion (MgO: 0.15 eV, CdS: 0.06 eV, LiF: 0.46 eV, SiC: 0.11 eV, Si: 0.05 eV, C: 0.40 eV, BN: 0.26 eV, AlP: 0.02 eV) as found in \cite{fxc_bootstrap,zpr40} and the references therein. The experimental values for the 2D materials have not been corrected for zero-point motion since only a value for MoS$_2$ was found in the literature (75 meV)\cite{mos2zpr}. Calculated values for the 2D materials include spin-orbit coupling.}
\end{table}
\begin{figure*}
\includegraphics[width=1.9\columnwidth]{IP_EA_barplot_29_09.pdf}
\caption{\label{fig:IP_EA} IP and EA of a range of 3D and 2D semiconductors calculated with EXX (green), G$_0$W$_0$ (blue), G$_0$W$_0\Gamma$ (red), G$_0$W$_0$P (orange), GW$_0$ (magenta) and compared with experimental values where available (black)~\cite{IP_EA_exp}.}
\end{figure*}
Table \ref{tab:res} shows the band gaps obtained with the different methods and their deviations from experimental reference values for both bulk and 2D materials. The 2D materials are included because they are scientifically interesting but also because they offer a unique opportunity for obtaining accurate energy levels due to their well-defined surface structures. On the contrary, energy levels in bulk solids are greatly affected by variations and uncertainties in the surface orientation/termination which makes a comparison between theoretical and experimental results problematic.
In agreement with previous findings G$_0$W$_0$ underestimates the experimental band gaps slightly while GW$_0$ generally overestimates the gaps. The overestimation becomes even larger in GW (see Appendix) which is therefore not considered further in this work.
The experimental VBM are from reference \cite{taisuke_acsnano}. The band gap of MoSe$_2$ is from \cite{MoSe2gapnature} where a gap of 2.18 $\pm$ 0.04 eV is reported for MoSe$_2$ on top of bilayer graphene. The effect of the substrate is calculated in the same reference to be a lowering of the band gap of 0.13 eV, giving a band gap of 2.31 eV for free-standing MoSe$_2$. \\
The band gap of 2.5 eV for free-standing MoS$_2$ is from \cite{MoS2gapnature}. In \cite{2dgapsnanoletter} a band gap of 2.18 $\pm$ 0.05 eV for MoS$_2$ on top of quartz is reported. Comparing the two numbers, quartz is expected to lower the gap by 0.32 eV. In \cite{2dgapsnanoletter} the band gap of WS$_2$ on top of quartz is reported to be 2.40 $\pm$ 0.06 eV. Assuming the same substrate effect, the band gap of free-standing WS$_2$ is 2.72 eV.\\
The numbers in Table \ref{tab:res} are including spin-orbit corrections. These are a splitting of the VB by 0.15, 0.19, 0.45 eV and of the CB by 0.00, 0.02, 0.02 eV for MoS$_2$, MoSe$_2$ and WS$_2$ respectively \cite{filip_compdata}.
From Table \ref{tab:res} we conclude that G$_0$W$_0\Gamma$ shows the best agreement with experiments, closely followed by GW$_0$, but the mean signed error of the two methods come with opposite sign. Including the vertex only in the polarizability (G$_0$W$_0$P) leads to a closing of the gap (as previously reported in \cite{fxc_bootstrap, shishkin}) resulting in significantly underestimated gaps.
In Fig. \ref{fig:IP_EA} we show the absolute positions of the valence band maximum (VBM) and conduction band minimum (CBM) relative to vacuum. The most striking effect of including the vertex is a systematic upshift of the band edges by around 0.5 eV. Remarkably, this upshift leads to a better overall agreement with experiments (dashed black lines). The upshift of band energies is not observed when the vertex is included only in the polarisability, i.e. when employing a test charge-test charge screened interaction (G$_0$W$_0$P). Moreover, no systematic upshift of the band edges is observed for the self-consistent GW flavours which also employ test charge-test charge screening. We conclude that the upshift of band energies originates from the presence of the vertex in the self-energy, i.e. the use of a test charge-electron screened interaction.
\section{Discussion}
In the following we analyse our results from a total energy perspective, focusing on the G$_0$W$_0$ and G$_0$W$_0\Gamma$ methods via a generalization of Koopmans' theorem. Subsequently, the effect of short- and long-range screening is discussed. It is then exemplified how the vertex affects the results depending on whether it is included in the polarizability, the self-energy, or both. Finally, the improved numerical convergence upon inclusion of the kernel is demonstrated.
\subsection{Generalized Koopmans' theorem}
\begin{figure*}[t!]
\centering
\includegraphics[clip, trim=0.5cm 11.cm 0.5cm 1.5cm, width=1.7\columnwidth]{diagram_hole6_14_07.pdf}
\caption{\label{fig:diagram}(a) Schematic illustration of the different contributions to the highest occupied and lowest unoccupied QP levels of a semiconductor. (b) The energy cost of removing a valence electron consists of the Hartree-Fock energy ($\varepsilon_N^{\text{HF}}$), the correlation energy of an electron in the ground state ($\varepsilon_c$), and a stabilising screening contribution ($\Delta_c^\pm$). The latter two are predominantly of short-range and long-range nature, respectively.}
\end{figure*}
From Koopmans’ theorem it follows that the highest occupied and lowest unoccupied QP energies can be expressed as
\begin{eqnarray}\label{eq.qp1exact}
\varepsilon^{\text{QP}}_{N} &=& \varepsilon^{\text{HF}}_{N} + E_{c}[N]-E_c[N-1] \\ \label{eq.qp2exact}
\varepsilon^{\text{QP}}_{N+1} &=& \varepsilon^{\text{HF}}_{N+1} + E_{c}[N+1]-E_c[N]
\end{eqnarray}
where $\varepsilon^{\text{HF}}$ are the Hartree-Fock single particle energies (evaluated on Kohn-Sham orbitals) and $E_c[N]$ is the correlation energy of the $N$-particle ground state. The latter can be calculated from the ACDFT, which can be cast in the form
\begin{equation}\label{eq.acdft}
E_c = -\int_0^1 d \lambda\int_0^{\infty} \frac{d \omega}{2\pi} \text{Tr}\{ \chi^0(i\omega) (\widetilde W^\lambda(i\omega)-v)\}
\end{equation}
Here $\widetilde W^{\lambda}$ equals the screened test charge-electron interaction of Eq. (\ref{eq.tildeW}). Setting $f_{xc}=0$ we have $\widetilde W = W$ and $E_c$ becomes the RPA correlation energy.
Assuming no orbital relaxations (which is justified for an extended periodic crystal), Niquet \emph{et al.}~\cite{niquet} have shown that the ionization potential (IP) and electron affinity (EA) calculated as total energy differences with the ACDFT-RPA equal the highest occupied and lowest unoccupied QP energies from G$_0$W$_0$, respectively (when setting the renormalization factor $Z$ to unity). In the same way it can be shown, at least for an exchange only kernel, that the IP and EA obtained from the ACDFT with $\widetilde W$ from Eq. (\ref{eq.tildeW}), equal the respective QP band edges obtained from G$_0$W$_0\Gamma$ when $\Gamma$ is the vertex corresponding to $f_x$ (see the Appendix for a proof). These results represent a generalization of Koopmans’ theorem of Hartree-Fock theory.
In general, HF is known to significantly overestimate the band gap of solids (see Fig. \ref{fig:IP_EA}). Comparing with Eqs. (\ref{eq.qp1exact}-\ref{eq.qp2exact}), this means that the correlation energy in the $N \pm 1$ states must be larger (more negative) than the correlation energy in the neutral $N$-particle ground state. It might seem surprising that $E_c[N-1]<E_c[N]$, since naively one would expect the correlation energy to be a monotonically decreasing function of $N$. However, the addition of an electron/hole to the system changes its character from insulating to metallic, and this entails an increase in the correlation energy. To make this idea more explicit we can split the change in correlation energy into two terms: the correlation energy per electron in the neutral ground state ($\varepsilon_c\equiv E_c[N]/N<0$) and a remainder representing the extra correlation energy due to the insulator-metal transition ($\Delta_c^{+/-} \equiv E_c[N\pm 1]-(E_c[N]\pm \varepsilon_c)$). With these definitions we can write
\begin{eqnarray}\label{eq.qp1}
\varepsilon^{\text{QP}}_{N} &=& \varepsilon^{\text{HF}}_{N} + \varepsilon_c-\Delta_c^- \\ \label{eq.qp2}
\varepsilon^{\text{QP}}_{N+1} &=& \varepsilon^{\text{HF}}_{N+1} + \varepsilon_c+\Delta_c^+
\end{eqnarray}
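Indeed, Eqs. (\ref{eq.qp1})-(\ref{eq.qp2}) follow by inserting the definitions of $\varepsilon_c$ and $\Delta_c^{\pm}$ into Eqs. (\ref{eq.qp1exact})-(\ref{eq.qp2exact}); for the hole, for example,
\begin{equation*}
E_c[N]-E_c[N-1]=E_c[N]-\big(\Delta_c^-+E_c[N]-\varepsilon_c\big)=\varepsilon_c-\Delta_c^-,
\end{equation*}
and similarly $E_c[N+1]-E_c[N]=\varepsilon_c+\Delta_c^+$ for the added electron.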
The relations are illustrated in Fig. \ref{fig:diagram}(a). Clearly, the effect of $\varepsilon_c$ is to downshift the band edges from their HF positions while the $\Delta_c^{\pm}$ closes the gap. In the quasiparticle picture, $\Delta_c^{\pm}$ represent the screening of the added electron/hole, see Fig. \ref{fig:diagram}(b), and we shall therefore refer to them as screening terms. By its stabilization of the final states (the $N\pm 1$ states) the effect of the screening terms is similar to that of orbital relaxations in finite systems, yet the underlying physics is completely different: orbital relaxations are vanishingly small in periodic crystals and occur even in non-correlated theories like HF. In contrast $\Delta_c^{\pm}$ describes a pure correlation effect and does not vanish in infinite, periodic systems.
We find it useful to analyse the QP energies in terms of the band gap and the band gap center. These are related to $\varepsilon_c$ and $\Delta_c^{\pm}$ by
\begin{eqnarray}\label{eq.gap1}
E^{\text{QP}}_{\text{gap}} &=& E^{\text{HF}}_{\text{gap}} + (\Delta_c^- + \Delta_c^+)\\ \label{eq.gap2}
E^{\text{QP}}_{\text{cen}} &=& E^{\text{HF}}_{\text{cen}} + \varepsilon_c+ (\Delta_c^+-\Delta_c^-)/2
\end{eqnarray}
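For completeness, these relations follow directly from Eqs. (\ref{eq.qp1})-(\ref{eq.qp2}):
\begin{align*}
E^{\text{QP}}_{\text{gap}} &= \varepsilon^{\text{QP}}_{N+1}-\varepsilon^{\text{QP}}_{N}=E^{\text{HF}}_{\text{gap}}+\Delta_c^++\Delta_c^-,\\
E^{\text{QP}}_{\text{cen}} &= \tfrac{1}{2}\big(\varepsilon^{\text{QP}}_{N+1}+\varepsilon^{\text{QP}}_{N}\big)=E^{\text{HF}}_{\text{cen}}+\varepsilon_c+\tfrac{1}{2}\big(\Delta_c^+-\Delta_c^-\big).
\end{align*}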
The correlation contribution to the gap is determined only by the screening terms $\Delta_c^{\pm}$. From the close agreement between the G$_0$W$_0$ and G$_0$W$_0\Gamma$ band gaps (red columns in Fig. \ref{fig:epsilon_c}) we conclude that the vertex has little effect on the (sum of the) screening terms. In contrast, the band gap center also depends on the ground state correlation energy, $\varepsilon_c$. We have calculated $\varepsilon_c$ for all the investigated materials using the RPA and rALDA total energy methods (see the Appendix for the exact values). In Fig. \ref{fig:epsilon_c} we compare the difference between the RPA and rALDA calculated $\varepsilon_c$ (black line) with the difference between the G$_0$W$_0$ and G$_0$W$_0\Gamma$ calculated band gap centers (blue columns). The rather close agreement shows that the main difference in band gap center can be ascribed to $\varepsilon_c$. It is now clear that the upshift of QP energies obtained with G$_0$W$_0\Gamma$ originates from the smaller (less negative) correlation energy of electrons in the neutral ground state predicted by rALDA compared to RPA. The well-documented superiority of the rALDA over the RPA for the description of ground state correlation energies, in combination with the improved agreement with experimental band energies (Fig. \ref{fig:IP_EA}), constitutes strong evidence that our QP-rALDA scheme represents a genuine improvement over the GW approximation.
\begin{figure}
\includegraphics[width=1.\columnwidth]{epsilon_c_exp_12_10.pdf}
\caption{\label{fig:epsilon_c} The difference in band gap, band gap center, and $\varepsilon_c$ upon inclusion of the rALDA kernel.}
\end{figure}
\subsection{Short- versus long-range screening}
We have seen that the dominant effect of the vertex correction is to shift the band gap center while the band gap itself is less affected. Physically, the main effect of the rALDA kernel is to modify the effective Coulomb interaction at short distances. More precisely, given a density variation, $\delta n$, the corresponding induced electron potential, $\delta v_{\text{Hxc}} = (v+f_{xc})\delta n$, is generally weaker than the bare Hartree potential $\delta v_H = v \delta n$, because $v(q)+f_{xc}(q)<v(q)$. However, by definition of the rALDA kernel, the reduction is stronger for larger $q$, which translates to shorter distances in real space. From these observations we can conclude that the QP band gap is mainly determined by long-range interactions while the band gap center is sensitive to the short-range interactions. This agrees well with the quasiparticle picture illustrated in Fig. \ref{fig:diagram}: Namely, adding a particle/hole without accounting for the screening represents a local (short-range) perturbation while the screening of the added charge is a long-range process. While the rALDA kernel mainly reduces the short-range interactions it also reduces the long-range components slightly. This leads to a slightly weaker long-range screening (smaller $\Delta_c$) and slightly larger band gaps as seen in Table \ref{tab:res}.
Returning to Fig. \ref{fig:IP_EA}, we note that for the 2D materials Hartree-Fock predicts a lower IP than the GW methods, in clear contrast to the situation for bulk solids. This anomalous behaviour is a result of the relatively more important effect of short- compared to long-range correlations in reduced dimensions. Indeed, the dielectric function of a 2D semiconductor approaches unity in the long wavelength limit, which reduces the screening terms $\Delta_c^{\pm}$. At the same time we find that the 2D materials exhibit the largest magnitudes of $\varepsilon_c$ of all the materials (see Table V in the Appendix), showing that the absolute correlation energy per electron is larger for the 2D materials than for the bulk materials.
\subsection{Vertex in the polarizability and/or self-energy}
\begin{figure}
\includegraphics[width=1.\columnwidth]{BN_4methods_extrap_27_09.png}
\caption{\label{fig:BN_4methods}Absolute position of the VBM and CBM relative to vacuum for BN calculated with the four different methods. The band gap center is shown with a dotted line.}
\end{figure}
As mentioned previously, the vertex enters Hedin’s equations at two places, namely the polarisability and the self-energy. For a consistent description the vertex should be included in both places (the G$_0$W$_0\Gamma_0$ approach). However, it is instructive to study the effect of including the vertex separately in the polarisability and the self-energy. To this end we note that the self-energy (excluding the exchange part) can be written as
\begin{equation}
\Sigma = iG v \chi (v+f_{xc})
\end{equation}
where $\chi = P+Pv\chi$ and $P=\chi^0+\chi^0 f_{xc} P$. Including $f_{xc}$ in $P$ affects the description of screening, while including it in $\Sigma_c$ affects the form of the potential created by the density induced by a test charge (as explained above, it mainly reduces the short-range part of the potential). Based on this we can obtain four different GW-like self-energies; the only one not mentioned up to now includes the vertex in the self-energy but not in the polarisability, which we term G$_0$W$_0\Sigma$.
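Explicitly, in terms of where the kernel enters, the four flavours are:
\begin{itemize}
\item G$_0$W$_0$: $f_{xc}$ in neither $P$ nor $\Sigma$;
\item G$_0$W$_0$P: $f_{xc}$ in $P$ only;
\item G$_0$W$_0\Sigma$: $f_{xc}$ in $\Sigma$ only;
\item G$_0$W$_0\Gamma$: $f_{xc}$ in both $P$ and $\Sigma$.
\end{itemize}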
Fig. \ref{fig:BN_4methods} shows the band gap size and center obtained for BN with these four self-energies. It is clear that the band gap center depends mainly on the $f_{xc}$ in the self-energy, i.e. a correct description of the band gap center requires the inclusion of xc-effects in the induced potential. The size of the band gap increases in the order G$_0$W$_0$P, G$_0$W$_0$, G$_0$W$_0\Gamma$, G$_0$W$_0\Sigma$.
As previously argued, the size of the gap depends mainly on the long-range screening (and less on the short-range form of the final induced potential). The $f_{xc}$ reduces the long-range interactions somewhat; thus the total induced potential will generally be smaller when xc-effects are included in the final potential. This explains the larger band gaps found for G$_0$W$_0\Gamma$ and G$_0$W$_0\Sigma$. The remaining ordering comes from noting that $\chi^{\text{rALDA}}>\chi^{\text{RPA}}$ because the higher-order diagrams, which reduce the effect of $\chi^0$, are larger in RPA.
\begin{figure*}[ht!]
\centering
\begin{subfigure}[b]{1.0\columnwidth}
\includegraphics[clip, trim=0.cm 0.cm 1.5cm 0.cm, width=\columnwidth]{BN_conv_ecut_bands_RPA_rALDA_blue_02_10.png}
\caption{\label{subfig:conv1}}
\end{subfigure}
\begin{subfigure}[b]{1.0\columnwidth}
\includegraphics[clip, trim=0.cm .5cm 0.cm 0.cm,width=1.04\columnwidth]{BN_conv_26_09_3.png}
\caption{\label{subfig:conv2}}
\end{subfigure}
\caption{\label{fig:conv} (a) Convergence of the band gap in BN with respect to plane wave cutoff and the number of bands included using the RPA (bottom) and rALDA (top) kernel. (b) Plane wave convergence of the band gap of bulk BN using the ALDA and rALDA kernel in the G$_0$W$_0\Gamma_0$ method as well as the RPA kernel (G$_0$W$_0$ method).}
\end{figure*}
\subsection{Improved convergence}
Finally, we mention that the reduction of large $q$ components by the rALDA kernel not only improves the description of short-range correlations but also leads to faster convergence with respect to plane waves and the number of unoccupied states compared to RPA/GW calculations, as shown in Fig. \ref{fig:conv}\subref{subfig:conv1} for the case of bulk BN. On the y-axis we show the difference in the band gap compared to that from a calculation at a cutoff of 50 eV with the corresponding number of bands. The improved convergence also manifests itself in the QP corrections to the individual bands.
In Fig. \ref{fig:conv}\subref{subfig:conv2} the band gap of bulk BN is shown versus the plane wave cutoff, applying the G$_0$W$_0\Gamma_0$ method with the ALDA and rALDA kernels as well as the standard G$_0$W$_0$ method. The need for renormalizing the kernel in order to obtain converged results is very apparent.
\section{Conclusion}
In conclusion, we have demonstrated that a more accurate description of short-range correlations in QP calculations can be obtained with a simple TDDFT-inspired vertex function. Inclusion of the vertex improves the agreement with experimental data for the absolute band energies of bulk and two-dimensional semiconductors. Moreover, it justifies the use of DFT as a starting point for non-self-consistent QP calculations and is thus formally more rigorous than the G$_0$W$_0$@DFT approach. Importantly, these advantages come without increasing the numerical complexity or computational cost compared to G$_0$W$_0$ calculations.
\section{Acknowledgements}
The Center for Nanostructured Graphene (CNG) is sponsored by the Danish National Research Foundation, Project DNRF103.
\section{Appendix}
\section{Introduction}
In this paper, we only consider simple graphs without loops and multiple edges. For terminology and notation not
defined but used, we refer the reader to \cite{BondyM2008}. Let $G$ be a connected graph with vertex set
$V(G)=\{v_{1},v_{2},\ldots,v_{n}\}$ and edge set $E(G)$. Write by $m=e(G)=|E(G)|$ the number of edges of graph $G$.
For each $v_{i}\in V(G)$, denote by $N_{G}(v_{i})$ the set of vertices adjacent to $v_{i}$ in $G$ and $d_{i}=d_{G}(v_{i})=|N_{G}(v_{i})|$ the degree of $v_{i}$. Moreover,
$N[v]=N_{G}[v]=N_{G}(v)\cup \{v\}$. Denote by $\delta=\delta(G)$ the minimum degree of $G$ and $\Delta=\Delta(G)$ the maximum degree of $G$. For
convenience, we use ($0^{x_{0}}, 1^{x_{1}},\ldots, k^{x_{k}},\ldots,\Delta^{x_{\Delta}}$) to denote the degree sequence of $G$, where $x_{k}$ is the number of vertices of degree $k$ in $G$.
We use
$G+H$ and $G\vee H$ to denote the disjoint union and the join of $G$ and $H$ respectively. The union of $k$ disjoint
copies of the same graph $G$ is denoted by $kG$.
Let $G$ be a graph. The adjacency matrix and degree diagonal matrix of $G$ are denoted by $A(G)$ and $D(G)$, respectively. The largest eigenvalue of $A(G)$, denoted by $\rho(G)$, is called the spectral radius of $G$. The matrix $Q(G)=D(G)+A(G)$ is the signless Laplacian matrix of $G$. The largest eigenvalue of $Q(G)$, denoted by $q(G)$, is called the signless Laplacian spectral radius of $G$.
A Hamilton path is a path containing all vertices of $G$, and a Hamilton cycle is a cycle containing all vertices of $G$. A graph is said to be traceable if it contains a Hamilton path, and Hamiltonian if it contains a Hamilton cycle. A graph is said to be Hamilton-connected if every two vertices of $G$ are connected by a Hamilton path.
The problems of determining whether a given graph is Hamiltonian, traceable, or Hamilton-connected are NP-complete.
Many sufficient or necessary conditions have recently been given for a graph to be Hamiltonian, traceable or Hamilton-connected. Fiedler and Nikiforov \cite{Fiedler2010} first gave sufficient conditions in terms of the spectral radius of a graph or its complement for the existence of Hamilton cycles. This work motivated further research; one may refer to \cite{Benediktovich2015,Liu2015,Lu2012,NingGe2015,fy,YuYeCai2014,ZhouBo2010,ZhouQN22017}.
Recently, by imposing the minimum degree of a graph as a new parameter, Li and Ning \cite{LiNing2016LMA,LiNing2017LAA} extended some of the results in \cite{Fiedler2010,Liu2015,NingGe2015}. Their results were subsequently improved, in some sense, by Nikiforov \cite{Nikiforov2016}, Chen et al. \cite{ChenHou2017}, Ge et al. \cite{GeNing} and Li et al. \cite{LiLiuPeng}.
The following sufficient condition involving the number of edges is due to Ore \cite{Ore1963}.
\noindent\begin{theorem}\label{th:5c2}(\cite{Ore1963}) Let $G$ be a graph on $n$ vertices and $m$ edges. If
$$m\geq \dbinom{n-1}{2}+3,$$
then $G$ is Hamilton-connected.
\end{theorem}
Observe that $\delta\geq 3$ is a trivial necessary condition for $G$ to be Hamilton-connected. Zhou and Wang \cite{ZhouQN12017} refined the above edge-number condition accordingly.
\noindent\begin{theorem}\label{th:5c3}(\cite{ZhouQN12017})
Let $G$ be a connected graph on $n\geq 6$ vertices and $m$ edges with minimum degree $\delta\geq 3$. If $$m\geq
\dbinom{n-2}{2}+6,$$ then $G$ is Hamilton-connected unless $G\in \mathbb{NP}_{1}=\{K_{3}\vee (K_{n-5}+2K_{1}),
K_{6}\vee 6K_{1}, K_{4}\vee (K_{2}+3K_{1}), 5K_{1}\vee K_{5}, K_{4}\vee (K_{1,4}+K_{1}), K_{4}\vee (K_{1,3}+K_{2}),
K_{3}\vee K_{2,5}, K_{4}\vee 4K_{1}, K_{3}\vee (K_{1}+K_{1,3}), K_{3}\vee (K_{1,2}+K_{2}), K_{2}\vee K_{2,4}\}.$
\end{theorem}
For $n\geq 5$ and $1\leq k\leq n/2$, we define:
$$S_{n}^{k}=K_{k}\vee (K_{n-(2k-1)}+(k-1)K_{1})\quad\text{and}\quad T_{n}^{k}=K_{2}\vee (K_{n-(k+1)}+K_{k-1}).$$
Moreover, for $t\geq 1$, let $\mathcal{S}_{n}^{k}(t)$ $(resp., \mathcal{T}_{n}^{k}(t))$ denote the set of all possible graphs obtained from $S_{n}^{k}$ ($resp., T_{n}^{k}$) by deleting exactly $t$ edges such that $\delta\geq 3$.
Obviously, $\mathcal{S}_{n}^{k}(0)=\{S_{n}^{k}\}$, $\mathcal{T}_{n}^{k}(0)=\{T_{n}^{k}\}$.
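For later reference, we note the edge counts of these graphs for $k=3$. Using that the join $G\vee H$ contributes $|V(G)|\,|V(H)|$ edges, a direct count gives
\begin{align*}
e(S_{n}^{3}) &= 3+\dbinom{n-5}{2}+3(n-3)=\dbinom{n-3}{2}+n+3,\\
e(T_{n}^{3}) &= 2+\dbinom{n-4}{2}+2(n-2)=\dbinom{n-3}{2}+n+2=e(S_{n}^{3})-1.
\end{align*}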
In this paper, we first make a further improvement of Theorem \ref{th:5c3}.
\noindent\begin{theorem}\label{th:5main1} Let $G$ be a connected graph on $n\geq 11$ vertices and $m$ edges with minimum degree $\delta\geq 3$. If
$$m\geq \dbinom{n-3}{2}+13,$$
then $G$ is Hamilton-connected unless $G\in (\bigcup_{i=0}^{n-10}\mathcal{S}_{n}^{3}(i))~\bigcup~
(\bigcup_{i=0}^{n-11}\mathcal{T}_{n}^{3}(i))$, or for $n=11$, $G=S_{11}^{5}$, or for $n=12$,
$G\in \bigcup_{i=0}^{2}\mathcal{S}_{12}^{6}(i)$, or for $n=13$, $G=S_{13}^{6}$, or for $n=14$,
$G\in \bigcup_{i=0}^{2}\mathcal{S}_{14}^{7}(i)~\bigcup~\mathcal{S}_{14}^{3}(4)$,
or for $n=16$,
$G\in \bigcup_{i=0}^{1}\mathcal{S}_{16}^{8}(i)~\bigcup~\{K_{7}\vee (K_{2}+K_{1,6})\}$.
\end{theorem}
By Theorem \ref{th:5main1}, we can get the following two corollaries immediately.
\noindent\begin{corollary}\label{co:5main2} Let $G$ be a connected graph on $n\geq 14$ vertices and $m$ edges with minimum degree $\delta\geq 3$. If
$$m\geq \dbinom{n-2}{2}+4,$$
then $G$ is Hamilton-connected unless $G\in (\bigcup_{i=0}^{2}\mathcal{S}_{n}^{3}(i))~\bigcup~
(\bigcup_{i=0}^{1}\mathcal{T}_{n}^{3}(i))$, or for $n=14$, $G=S_{14}^{7}$.
\end{corollary}
\noindent\begin{corollary}\label{co:5main3} Let $G$ be a connected graph on $n\geq 13$ vertices and $m$ edges with minimum degree $\delta\geq 3$. If
$$m\geq \dbinom{n-2}{2}+3,$$
then $G$ is Hamilton-connected unless $G\in (\bigcup_{i=0}^{3}\mathcal{S}_{n}^{3}(i))~\bigcup~
(\bigcup_{i=0}^{2}\mathcal{T}_{n}^{3}(i))$,
or for $n=13$, $G=S_{13}^{6}$,
or for $n=14$, $G\in
\bigcup_{i=0}^{1}\mathcal{S}_{14}^{7}(i)$.
\end{corollary}
In \cite{fy}, Yu and Fan established sufficient conditions for a graph to be Hamilton-connected in terms of the spectral radius and the signless Laplacian spectral radius. Write $G=K_{n-1}+e+e'$ for the graph obtained from $K_{n-1}$ by adding a vertex joined to two vertices of $K_{n-1}$ by the edges $e$ and $e'$, respectively.
\noindent\begin{theorem}\label{th:5cfy}(\cite{fy}) Let $G$ be a graph on $n$ vertices.
\begin{enumerate}[(i)]
\item If $\rho(G)> -\frac{1}{2}+\sqrt{n^{2}-3n+\frac{17}{4}}$, then $G$ is Hamilton-connected unless $G=K_{n-1}+e+e'$.
\item If $\rho(\overline{G})< \sqrt{\frac{(n-2)^{2}}{n}}$, then $G$ is Hamilton-connected.
\item If $q(G)> 2n-4+\frac{2}{n-1}$, then $G$ is Hamilton-connected unless $G=K_{n-1}+e+e'$.
\end{enumerate}
\end{theorem}
Recently, Zhou and Wang \cite{ZhouQN12017} gave some sufficient conditions on the spectral radius and the signless Laplacian spectral radius for a graph to be Hamilton-connected, which extended the results of Yu and Fan \cite{fy} in some sense.
\noindent\begin{theorem}\label{th:5czw}(\cite{ZhouQN12017}) Let $G$ be a connected graph on $n\geq 6$ vertices with minimum degree $\delta\geq 3$.
\begin{enumerate}[(i)]
\item If $\rho(G)\geq \sqrt{n^{2}-6n+19}$, then $G$ is Hamilton-connected.
\item If $q(G)\geq 2n-6+\frac{14}{n-1}$, then $G$ is Hamilton-connected unless $G=K_{4}\vee 4K_{1}$.
\end{enumerate}
\end{theorem}
In this paper, we continue to study new sufficient spectral conditions for a graph to be Hamilton-connected. We will
use Corollaries \ref{co:5main2} and \ref{co:5main3} to give the spectral sufficient conditions for a graph to be
Hamilton-connected.
\noindent\begin{theorem}\label{th:5main4} Let $G$ be a connected graph on $n\geq 14$ vertices with minimum degree $\delta\geq 3$. If
$$\rho(G)> n-3,$$
then $G$ is Hamilton-connected unless $G\in \{S_{n}^{3},T_{n}^{3}\}$.
\end{theorem}
It is easy to see that a single Kelmans transformation applied to $T_{n}^{3}$ yields a proper subgraph of $S_{n}^{3}$. Hence $\rho(S_{n}^{3})> \rho(T_{n}^{3})>\rho(K_{n-2})=n-3$. By Theorem \ref{th:5main4}, we have the following corollary.
we have the following corollary.
\noindent\begin{corollary}\label{co:5main5} Let $G$ be a connected graph on $n\geq 14$ vertices with minimum degree $\delta\geq 3$. If
$$\rho(G)\geq \rho(S_{n}^{3}),$$
then $G$ is Hamilton-connected unless $G=S_{n}^{3}$.
\end{corollary}
\noindent\begin{theorem}\label{th:5main6} Let $G$ be a connected graph on $n\geq 13$ vertices with minimum degree $\delta\geq 3$. If
$$q(G)> 2n-6+\frac{6}{n-1},$$
then $G$ is Hamilton-connected unless $G=S_{n}^{3}$.
\end{theorem}
Obviously, we have the following corollary.
\noindent\begin{corollary}\label{co:5main7} Let $G$ be a connected graph on $n\geq 13$ vertices with minimum degree $\delta\geq 3$. If
$$q(G)\geq q(S_{n}^{3}),$$
then $G$ is Hamilton-connected unless $G=S_{n}^{3}$.
\end{corollary}
We can see that our results improve the previous work. Furthermore, let $\mathcal{G}_{n}$ be the class of non-Hamilton-connected graphs of order $n$. In Corollaries \ref{co:5main5} and \ref{co:5main7}, we determine the maximum spectral radius and the maximum signless Laplacian spectral radius in $\mathcal{G}_{n}$, and we determine the corresponding extremal graphs.
The rest of this paper is organized as follows. In Section~2, we present some useful techniques and lemmas. In
Section~3, we present the proofs of Theorems \ref{th:5main1},~\ref{th:5main4} and \ref{th:5main6}.
\section{Preliminaries}
In this section, we list some useful techniques and lemmas that will be used in later sections.
Firstly, let us recall the Kelmans transformation \cite{Kelmans1981}. Given a graph $G$ and two specified vertices $u$ and $v$, construct a new graph $G^{*}$ by replacing all edges $vx$ by $ux$ for $x\in N(v)\setminus N[u]$. Obviously, the new graph $G^{*}$ has the same number of vertices and edges as $G$, and all vertices different from $u$ and $v$ retain their degrees. The vertices $u$ and $v$ are adjacent in $G^{*}$ if and only if they are adjacent in $G$.
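For instance, let $G$ be the path $P_{4}$ with vertices $v,a,b,u$ in this order. Since $N(v)\setminus N[u]=\{a\}$, the transformation replaces the edge $va$ by $ua$, so that $G^{*}=K_{3}+K_{1}$. Here $\rho(G)=\frac{1+\sqrt{5}}{2}\approx 1.618<2=\rho(G^{*})$, in accordance with Lemma \ref{le:5c4} below.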
\noindent\begin{lemma}\label{le:5c4}(\cite{Csikvari2009}) Let $G$ be a graph and $G^{*}$ be
a graph obtained from $G$ by some Kelmans's transformation. Then $\rho(G)\leq \rho(G^{*})$.
\end{lemma}
\noindent\begin{lemma}\label{le:5c5}(\cite{LiNing2016LMA}) Let $G$ be a graph and $G^{*}$ be
a graph obtained from $G$ by a Kelmans transformation. Then $q(G)\leq q(G^{*})$.
\end{lemma}
Suppose $M$ is a symmetric real matrix whose rows and columns are indexed by $X=\{1,\ldots,n\}$. Let
$\pi=\{X_{1},\ldots,X_{m}\}$ be a partition of $X$. Let $M$ be partitioned according to $\{X_{1},\ldots,X_{m}\}$,
i.e.,
\begin{displaymath}
M=\left(
\begin{array}{ccc}
M_{11}& \ldots & M_{1m} \\
\vdots& & \vdots \\
M_{m1}& \ldots & M_{mm} \\
\end{array}
\right),
\end{displaymath}
where $M_{ij}$ denotes the block of $M$ formed by rows in $X_{i}$ and the columns in $X_{j}$. Let $b_{ij}$ denote
the average row sum of $M_{ij}$, i.e., $b_{ij}=\frac{\textbf{1}^{T}M_{ij}\textbf{1}}{|X_{i}|}$, where $\textbf{1}$
is a column vector with all the elements 1. Then the matrix $M/\pi=(b_{ij})_{m\times m}$ is called the quotient
matrix of $M$. If the row sum of each block $M_{ij}$ is a constant, then the partition is called equitable.
\noindent\begin{lemma}\label{le:5c6}(\cite{GodsilRo2001}) Let $G$ be a graph. If $\pi$ is an equitable partition of
$V(G)$ corresponding to $A(G)$ $(Q(G))$, then $\rho(A(G)/\pi)=\rho(A(G))$ $(q(Q(G)/\pi)=q(Q(G)))$.
\end{lemma}
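As a simple illustration of Lemma \ref{le:5c6}, take $G=K_{1,3}$ and let $\pi$ consist of the centre and the set of leaves. This partition is equitable with quotient matrix
\begin{displaymath}
A(G)/\pi=\left(
\begin{array}{cc}
0 & 3 \\
1 & 0 \\
\end{array}
\right),
\end{displaymath}
whose largest eigenvalue is $\sqrt{3}$, in agreement with $\rho(K_{1,3})=\sqrt{3}$.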
\noindent\begin{lemma}\label{le:5c7}(\cite{Brouwer2011,GodsilRo2001}) Let $G$ be a connected graph. If $H$ is a subgraph (proper subgraph) of $G$, then $\rho(H)\leq \rho(G)$ $(\rho(H)< \rho(G))$ and $q(H)\leq q(G)$ $(q(H)< q(G))$.
\end{lemma}
Hong et al. \cite{HongShu2001} proved the following spectral inequality for connected graphs. Nikiforov
\cite{Nikiforov2002} proved it for general graphs independently, and the case of equality was characterized in
\cite{ZhouCho2005}.
\noindent\begin{lemma}\label{le:5c8}(\cite{Nikiforov2002}) Let $G$ be a graph on $n$ vertices and $m$ edges with minimum degree $\delta$. Then $\rho(G)\leq
\frac{\delta-1}{2}+\sqrt{2m-n\delta+\frac{(\delta+1)^{2}}{4}}$.
\end{lemma}
The following result is also useful for us.
\noindent\begin{lemma}\label{le:5c9}(\cite{HongShu2001, Nikiforov2002}) For nonnegative integers $p$ and $q$ with
$2q\leq p(p-1)$ and $0\leq x\leq p-1$, the function $f(x)=\frac{x-1}{2}+\sqrt{2q-px+\frac{(1+x)^{2}}{4}}$
is decreasing with respect to $x$.
\end{lemma}
\noindent\begin{lemma}\label{le:5c10}(\cite{LFGU,fy}) Let $G$ be a connected graph on $n$ vertices and $m$ edges.
Then $q(G)\leq \frac{2m}{n-1}+n-2$.
\end{lemma}
\noindent\begin{lemma}\label{le:5c1}(\cite{Berge,rao}) Let $G$ be a
graph on $n\geq 3$ vertices with degree sequence $(d_{1},d_{2},\ldots,d_{n})$, where $d_{1}\leq d_{2}\leq \cdots
\leq d_{n}$. If there is no integer $2\leq k\leq \displaystyle\frac{n}{2}$ such that $d_{k-1}\leq k$ and $d_{n-k}\leq
n-k$, then $G$ is Hamilton-connected.
\end{lemma}
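For example, the graph $S_{n}^{3}$ has degree sequence $(3,3,\underbrace{n-3,\ldots,n-3}_{n-5~times},n-1,n-1,n-1)$, and for $k=3$ we have $d_{k-1}=3\leq k$ and $d_{n-k}=d_{n-3}=n-3\leq n-k$; thus Lemma \ref{le:5c1} does not apply to $S_{n}^{3}$, consistent with $S_{n}^{3}$ appearing among the exceptional graphs above.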
\section{Proofs}
\noindent {\bf \emph{The proof of Theorem~\ref{th:5main1}}.} In this proof, we call a sequence $\vec{d}$ a permissible graphic sequence if there is a simple graph with degree sequence $\vec{d}$ satisfying the condition of Lemma \ref{le:5c1}. Suppose by contradiction that $G\notin (\bigcup_{i=0}^{n-10}\mathcal{S}_{n}^{3}(i))~\bigcup~
(\bigcup_{i=0}^{n-11}\mathcal{T}_{n}^{3}(i))$, and for $n=11$, $G\neq S_{11}^{5}$, and for $n=12$,
$G\notin \bigcup_{i=0}^{2}\mathcal{S}_{12}^{6}(i)$, and for $n=13$, $G\neq S_{13}^{6}$, and for $n=14$,
$G\notin \bigcup_{i=0}^{2}\mathcal{S}_{14}^{7}(i)~\bigcup~\mathcal{S}_{14}^{3}(4)$, and for $n=16$,
$G\notin \bigcup_{i=0}^{1}\mathcal{S}_{16}^{8}(i)~\bigcup~\{K_{7}\vee (K_{2}+K_{1,6})\}$, but $G$ is
non-Hamilton-connected. Suppose that $G$ has the degree sequence $(d_{1},d_{2},\ldots,d_{n})$, where $d_{1}\leq
d_{2}\leq \cdots \leq d_{n}$. By Lemma \ref{le:5c1}, there exists an integer $2\leq k\leq n/2$ such that $d_{k-1}\leq k$ and $d_{n-k}\leq n-k$. For convenience, we call this the NHC-condition. Thus, we have
\begin{align}\label{eq:1}
m&=\nonumber\frac{1}{2}\sum_{i=1}^{n}d_{i}\\
&=\nonumber\frac{1}{2}(\sum_{i=1}^{k-1}d_{i}+\sum_{i=k}^{n-k}d_{i}+\sum_{i=n-k+1}^{n}d_{i})\\
&\nonumber\leq \frac{1}{2}(k(k-1)+(n-k)(n-2k+1)+(n-1)k)\\
&=\dbinom{n-3}{2}+13+\frac{f(k)}{2},
\end{align}
where $f(x):=3x^{2}-(2n+3)x+8n-38$. Since $e(G)\geq \dbinom{n-3}{2}+13$, combining with \eqref{eq:1} we have
$f(k)\geq 0$. Moreover, noting that $3\leq \delta\leq d_{k-1}\leq k\leq n/2$, a direct
computation yields:
\begin{itemize}
\item for $n\geq 11$, we have $f(3)=2n-20>0$, $f(4)=-2<0$.
\end{itemize}
Then we calculate $f(k)$ for $k\geq 5$. We have:
\begin{itemize}
\item if $n=11$, then $k\leq 5$ and, $f(5)=0$;
\item if $n=12$, then $k\leq 6$ and, $f(5)=-2<0$, $f(6)=4>0$;
\item if $n=13$, then $k\leq 6$ and, $f(5)=-4<0$, $f(6)=0$;
\item if $n=14$, then $k\leq 7$ and, $f(5)=-6<0$, $f(6)=-4<0$, $f(7)=4>0$;
\item if $n=15$, then $k\leq 7$ and, $f(5)=-8<0$, $f(6)=-8<0$, $f(7)=-2<0$;
\item if $n=16$, then $k\leq 8$ and, $f(5)=-10<0$, $f(6)=-12<0$, $f(7)=-8<0$, $f(8)=2>0$;
\item if $n=17$, then $k\leq 8$ and, $f(5)=-12<0$, $f(6)=-16<0$, $f(7)=-14<0$, $f(8)=-6<0$;
\item if $n\geq 18$, then $f(k)<0$ for all $5\leq k\leq n/2$. To see this, consider the two roots of $f(x)=0$,
$$r_{1}=\frac{2n+3-\sqrt{(2n+3)^{2}-12(8n-38)}}{6}, \quad r_{2}=\frac{2n+3+\sqrt{(2n+3)^{2}-12(8n-38)}}{6}.$$
A simple calculation (carried out after this list) shows that $r_{1}<5$ and $r_{2}>n/2$ hold for $n\geq 18$, and the desired result follows.
\end{itemize}
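Indeed, since $f$ is an upward-opening parabola, the conditions $r_{1}<5$ and $r_{2}>n/2$ together are equivalent to $f(5)<0$ and $f(n/2)<0$. A direct computation gives
\begin{align*}
f(5)&=22-2n<0 \quad (n\geq 12),\\
f(n/2)&=-\frac{n^{2}}{4}+\frac{13n}{2}-38<0 \quad (n\geq 18),
\end{align*}
where the second inequality holds since $-\frac{n^{2}}{4}+\frac{13n}{2}-38$ is decreasing for $n\geq 13$ and equals $-2$ at $n=18$.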
From the above computing results, we discuss the following cases.
\noindent \textbf{Case~1.} $k=3$ and $n\geq 11$.
In this case, we shall show that if $G$ is not Hamilton-connected, then $G\in (\bigcup_{i=0}^{n-10}\mathcal{S}_{n}^{3}(i))~\bigcup~(\bigcup_{i=0}^{n-11}\mathcal{T}_{n}^{3}(i))$, which contradicts our assumption.
By NHC-condition, we have
\begin{equation}\label{eq:2}
d_{1}=d_{2}=3, d_{3}\leq \cdots \leq d_{n-3}\leq n-3, d_{n-2}\leq d_{n-1}\leq d_{n}\leq n-1.
\end{equation}
Furthermore, since $f(3)=2n-20$ for $n\geq 11$, by \eqref{eq:1} we have
$$\dbinom{n-3}{2}+13\leq e(G)\leq \dbinom{n-3}{2}+13+(n-10).$$
If $e(G)=\dbinom{n-3}{2}+13+(n-10)$, then $\sum_{i=1}^{n}d_{i}=2e(G)=n^{2}-5n+18$. Hence it is direct that
the inequalities in \eqref{eq:2} must be equalities, and then the degree sequence of $G$ is
$(3,3,\underbrace{n-3,\ldots,n-3}_{n-5~times},n-1,n-1,n-1)$, which implies that
$G=S_{n}^{3}=\mathcal{S}_{n}^{3}(0)$, a contradiction.
If $e(G)=\dbinom{n-3}{2}+13+(n-10)-t$, where $1\leq t\leq n-10$, then we have
\begin{equation}\label{eq:3}
e(G)=e(S_{n}^{3})-t \quad\text{and}\quad e(G)=e(T_{n}^{3})-(t-1).
\end{equation}
Moreover, if $G$ had at least three vertices of degree 3, then, since these vertices are incident with at most 9 edges and the remaining $n-3$ vertices induce at most $\dbinom{n-3}{2}$ edges, we would have $e(G)\leq \dbinom{n-3}{2}+9$, contradicting $e(G)\geq \dbinom{n-3}{2}+13$. Hence $G$ has exactly two 3-degree vertices. Without loss of generality, we may suppose that $d_{G}(v_{1})=d_{G}(v_{2})=3$. Then we discuss the following two subcases.
\noindent \textbf{Subcase~1.1.} $v_{1}$ is not adjacent to $v_{2}$.
If $N_{G}(v_{1})=N_{G}(v_{2})$, i.e., $v_{1}$ and $v_{2}$ have the same neighbour, then combining the definition
of $S_{n}^{k}$ and \eqref{eq:3}, $G$ is a subgraph of $S_{n}^{3}$ obtained by deleting $t$ $(1\leq t\leq n-10)$
edges from its clique $K_{n-2}$. That is to say, $G\in \bigcup_{t=1}^{n-10}\mathcal{S}_{n}^{3}(t)$, which is a contradiction to our assumption.
Now we assume $N_{G}(v_{1})\neq N_{G}(v_{2})$. Let $H_{1}=G[V(G)\backslash \{v_{1}\}]$. Then, $|V(H_{1})|=n-1$,
$\delta(H_{1})\geq 3$ and $e(H_{1})=e(G)-3\geq \dbinom{n-3}{2}+10=\dbinom{(n-1)-2}{2}+10> \dbinom{(n-1)-2}{2}+6$.
Hence by Theorem \ref{th:5c3}, we have $H_{1}$ is Hamilton-connected. If every two vertices in $V(H_{1})$ can be connected by a
Hamilton path in $G$, then $G$ is also Hamilton-connected, a contradiction. Then there must exist two vertices
$w$ and $w'$ such that they are connected by a path passing through all vertices in $V(G)$ but not
$v_{1}$. Let $P$ be this path in a given direction (from $w$ to $w'$). Suppose the vertices
in $P$ are $w=y_{1},
y_{2}, \ldots, y_{n-1}=w'$ in sequence. Let $y_{i}$, $y_{j}$ and $y_{l}$ $(1\leq i <j <l\leq n-1)$ be three
vertices adjacent to $v_{1}$. Then it is obvious that $\{v_{1}, y_{i+1}, y_{j+1}\}$ is an independent set since $G$
is not Hamilton-connected. We claim that
\begin{equation}\label{eq:4}
d_{G}(y_{i+1})+d_{G}(y_{j+1})\leq n.
\end{equation}
To see this, consider the set $K=\{y_{r}|y_{r-1}\in N_{G}(y_{i+1})\cap V(H_{1}), r-1\leq i~or~r-1\geq j+2\}\bigcup
\{y_{s}|y_{s+1}\in N_{G}(y_{i+1})\cap V(H_{1}), i+1< s+1\leq j\}$. Note that $|K|\geq d_{G}(y_{i+1})-2$. This
follows
since the vertex $y_{i+1}$ is possibly adjacent to $y_{n-1}=w'$ and $y_{i+1}$ is both the successor of $y_{i}$
and the predecessor of $y_{i+2}$. Since $\{v_{1}, y_{i+1}, y_{j+1}\}$ is an independent set, we obtain
$d_{H_{1}}(y_{i+1})=d_{G}(y_{i+1})$, $d_{H_{1}}(y_{j+1})=d_{G}(y_{j+1})$, and $|K\cup N_{G}(y_{j+1})|\leq
|V(H_{1})|-|\{y_{j+1}\}|=n-2$. Thus, if $d_{G}(y_{i+1})+d_{G}(y_{j+1})\geq n+1$, then
\begin{align*}
|K\cap N_{G}(y_{j+1})|&=|K|+|N_{G}(y_{j+1})|-|K\cup N_{G}(y_{j+1})|\\
&\geq d_{G}(y_{i+1})-2+d_{G}(y_{j+1})-(n-2)\\
&\geq n+1-2-(n-2)=1,
\end{align*}
implying that $K$ and $N_{G}(y_{j+1})$ have a common vertex, say $y_{t}$. Obviously, $t\neq i+1, j+1$. If $t=i$,
then $y_{i+1}y_{j+1}\in E(G)$, a contradiction. If $t\leq i-1$, then $y_{i+1}y_{t-1}\in E(G)$. Then
$y_{1}\overrightarrow{P}y_{t-1}y_{i+1}\overrightarrow{P}y_{j}v_{1}y_{i}\overleftarrow{P}y_{t}y_{j+1}
\overrightarrow{P}y_{n-1}$
is a Hamilton path in $G$ connecting $y_{1}$ and $y_{n-1}$, a contradiction. If $t\geq j+2$, then $y_{i+1}y_{t-1}\in
E(G)$. Then $y_{1}\overrightarrow{P}y_{i}v_{1}y_{j}\overleftarrow{P}y_{i+1}y_{t-1}\overleftarrow{P}
y_{j+1}y_{t}\overrightarrow{P}y_{n-1}$ is a Hamilton path in $G$ connecting $y_{1}$ and $y_{n-1}$, a contradiction.
If $i+1< t\leq j$, then $y_{i+1}y_{t+1}\in E(G)$. Then $y_{1}\overrightarrow{P}y_{i}v_{1}y_{j}\overleftarrow{P}
y_{t+1}y_{i+1}\overrightarrow{P}y_{t}y_{j+1}\overrightarrow{P}y_{n-1}$ is a Hamilton path in $G$ connecting $y_{1}$
and $y_{n-1}$, a contradiction. Hence \eqref{eq:4} holds. Next, according to the distribution of the neighbors of $v_{1}$,
we discuss the following subcases.
\noindent \textbf{Subcase~1.1.1.} Assume $y_{i+1}\neq y_{j-1}$ and $y_{j+1}\neq y_{l-1}$. Then, by a similar method as above, we have
$$d_{G}(y_{j-1})+d_{G}(y_{l-1})\leq n.$$
Consequently, by considering the number of edges in $G$, we can get the following contradiction:
\begin{align*}
e(G)&\leq 3+(d_{G}(y_{i+1})+d_{G}(y_{j+1}))+(d_{G}(y_{j-1})+d_{G}(y_{l-1}))\\
&~~~+e(G[V(G)\setminus \{v_{1},y_{i+1},y_{j+1},y_{j-1},y_{l-1}\}])\\
&\leq 3+n+n+\dbinom{n-5}{2}=\dbinom{n-3}{2}+12<\dbinom{n-3}{2}+13\\
&\leq e(G),
\end{align*}
which is a contradiction; the first inequality holds since edges inside the vertex set $\{y_{i+1},y_{j+1},y_{j-1},y_{l-1}\}$ may be counted more than once.
\noindent \textbf{Subcase~1.1.2.} Assume $y_{i+1}= y_{j-1}$ and $y_{j+1}\neq y_{l-1}$; then $y_{j}=y_{i+2}$ and $y_{j+1}=y_{i+3}$.
If $y_{l}\neq y_{n-1}$, then by using the similar method as that of Subcase~1.1.1, we can obtain
\begin{align*}
e(G)&\leq 3+(d_{G}(y_{i+1})+d_{G}(y_{l-1}))+(d_{G}(y_{i+3})+d_{G}(y_{l+1}))\\
&~~~+e(G[V(G)\setminus \{v_{1},y_{i+1},y_{l-1},y_{i+3},y_{l+1}\}])\\
&\leq 3+n+n+\dbinom{n-5}{2}=\dbinom{n-3}{2}+12<\dbinom{n-3}{2}+13\\
&\leq e(G),
\end{align*}
which is a contradiction; the first inequality holds since edges inside the vertex set $\{y_{i+1},y_{l-1},y_{i+3},y_{l+1}\}$ may be counted more than once.
Then we suppose $y_{l}=y_{n-1}$. If $y_{i}\neq y_{1}$, then by using the similar method as that of Subcase~1.1.1, we
can obtain
\begin{align*}
e(G)&\leq 3+(d_{G}(y_{i-1})+d_{G}(y_{l-1}))+(d_{G}(y_{i+1})+d_{G}(y_{i+3}))\\
&~~~+e(G[V(G)\setminus \{v_{1},y_{i-1},y_{l-1},y_{i+1},y_{i+3}\}])\\
&\leq 3+n+n+\dbinom{n-5}{2}=\dbinom{n-3}{2}+12<\dbinom{n-3}{2}+13\\
&\leq e(G),
\end{align*}
which is a contradiction; the first inequality holds since edges inside the vertex set $\{y_{i-1},y_{l-1},y_{i+1},y_{i+3}\}$ may be counted more than once.
If $y_{i}=y_{1}$, then $v_{1}$ is adjacent to $y_{1}$, $y_{3}$ and $y_{n-1}$. Let $W_{1}=V(G)\setminus \{v_{1},
y_{1}, y_{3}\}$. Then we show $\delta(G[W_{1}])\geq 3$.
If $d_{G[W_{1}]}(y_{2})=1$, then $y_{2}=v_{2}$. Since $N_{G}(v_{1})\neq N_{G}(v_{2})$ and $\{v_{1},y_{2},y_{4}\}$ is
an independent set, $W_{1}\setminus\{y_{4}, y_{n-1}\}$ has one vertex $w_{1}$ such that $y_{2}w_{1}\in E(G)$. Let
$W_{2}=W_{1}\setminus \{y_{2}\}$. Then
\begin{align*}
e(G[W_{2}])&\geq \dbinom{n-3}{2}+13-3-d_{G}(y_{2})-d_{G[W_{2}\cup \{y_{1},y_{3}\}]}(y_{1})-d_{G[W_{2}\cup \{y_{3}\}]}(y_{3})\\
&\geq \dbinom{n-3}{2}+13-3-3-(n-3)-(n-4)=\dbinom{(n-4)-1}{2}+5,
\end{align*}
which, together with the fact that $|W_{2}|=n-4$ and Theorem \ref{th:5c2}, implies that $G[W_{2}]$ is
Hamilton-connected. Then there is a Hamilton path $w_{1}Py_{n-1}$ which connects $w_{1}$ and $y_{n-1}$ in
$G[W_{2}]$. Then $y_{1}v_{1}y_{3}y_{2}w_{1}Py_{n-1}$ is a Hamilton path connecting $y_{1}$ and $y_{n-1}$ in
$G$, a contradiction.
If $d_{G[W_{1}]}(y_{2})=2$, then $d_{G}(y_{2})=4$. The vertex $w_{1}$ discussed above still exists. Then
\begin{align*}
e(G[W_{2}])&\geq \dbinom{n-3}{2}+13-3-d_{G}(y_{2})-d_{G[W_{2}\cup \{y_{1},y_{3}\}]}(y_{1})-d_{G[W_{2}\cup \{y_{3}\}]}(y_{3})\\
&\geq \dbinom{n-3}{2}+13-3-4-(n-3)-(n-4)=\dbinom{(n-4)-1}{2}+4.
\end{align*}
Hence we can also get a contradiction by a similar method as above.
If $d_{G[W_{1}]}(y_{4})=1$ or $2$, then $d_{G}(y_{4})=3$ or $4$. Let $W_{3}=W_{1}\setminus \{y_{4}\}$. By a similar discussion as above, we can show that $G[W_{3}]$ is Hamilton-connected and again reach a contradiction.
If there exists a vertex $y_{k}\in W_{1}\setminus \{y_{2}, y_{4}\}$ with $d_{G[W_{1}]}(y_{k})\leq 2$, then $d_{G}(y_{k})\leq 4$. Since $d_{G}(y_{2})+d_{G}(y_{4})\leq n$,
\begin{align*}
e(G)&\leq d_{G}(v_{1})+d_{G}(y_{k})+(d_{G}(y_{2})+d_{G}(y_{4}))+e(G[V(G)\setminus\{v_{1},y_{2},y_{4},y_{k}\}])\\
&\leq 3+4+n+\dbinom{n-4}{2}=\dbinom{n-3}{2}+11<\dbinom{n-3}{2}+13,
\end{align*}
which is a contradiction.
Hence $\delta(G[W_{1}])\geq 3$. Note that $|W_{1}|=n-3$, and
\begin{align*}
e(G[W_{1}])&\geq \dbinom{n-3}{2}+13-3-d_{G[W_{1}\cup \{y_{1},y_{3}\}]}(y_{1})-d_{G[W_{1}\cup \{y_{3}\}]}(y_{3})\\
&\geq \dbinom{n-3}{2}+13-3-(n-2)-(n-3)=\dbinom{(n-3)-2}{2}+6.
\end{align*}
Then by Theorem \ref{th:5c3}, we have $G[W_{1}]$ is either Hamilton-connected, or $G[W_{1}]=K_{3}\vee (K_{(n-3)-5}+2K_{1})$.
If $G[W_{1}]$ is Hamilton-connected, then there is a Hamilton path connecting $y_{2}$ and $y_{n-1}$ in $G[W_{1}]$,
say $y_{2}Py_{n-1}$. Then $y_{1}v_{1}y_{3}y_{2}Py_{n-1}$ is a Hamilton path connecting $y_{1}$ and $y_{n-1}$ in $G$,
a contradiction. If $G[W_{1}]=K_{3}\vee (K_{(n-3)-5}+2K_{1})$, then $e(G[W_{1}])=\dbinom{(n-3)-2}{2}+6$,
$d_{G[W_{1}\cup \{y_{1},y_{3}\}]}(y_{1})=n-2$ and $d_{G[W_{1}\cup \{y_{3}\}]}(y_{3})=n-3$. Therefore, we have
$d_{G}(y_{1})=d_{G}(y_{3})=n-1$ and $G$ has only one 3-degree vertex $v_{1}$, which contradicts the fact that $G$
has exactly two 3-degree vertices.
Furthermore, the case of $y_{i+1}\neq y_{j-1}$ and $y_{j+1}= y_{l-1}$ can be proved in a similar method, thus we
omit it.
\noindent \textbf{Subcase~1.1.3.} Assume $y_{i+1}= y_{j-1}$ and $y_{j+1}= y_{l-1}$; then $y_{j}=y_{i+2}$ and $y_{l}=y_{i+4}$.
If $v_{1}$ is adjacent to neither $y_{1}$ nor $y_{n-1}$, then there must exist $y_{i-1}$ and $y_{i+5}$ since $n\geq
11$. Then by using a similar method as that of Subcase~1.1.1, we can obtain
\begin{align*}
e(G)&\leq 3+(d_{G}(y_{i-1})+d_{G}(y_{i+3}))+(d_{G}(y_{i+1})+d_{G}(y_{i+5}))\\
&~~~+e(G[V(G)\setminus \{v_{1},y_{i-1},y_{i+3},y_{i+1},y_{i+5}\}])\\
&\leq 3+n+n+\dbinom{n-5}{2}=\dbinom{n-3}{2}+12<\dbinom{n-3}{2}+13\\
&\leq e(G),
\end{align*}
which is a contradiction; the first inequality holds since edges inside the vertex set $\{y_{i-1},y_{i+3},y_{i+1},y_{i+5}\}$ may be counted more than once.
If $v_{1}$ is adjacent to $y_{1}$, then $v_{1}$ is also adjacent to $y_{3}$ and $y_{5}$. One may easily get a
contradiction by a similar discussion as that of Subcase~1.1.2.
Similarly, if $v_{1}$ is adjacent to $y_{n-1}$, then $v_{1}$ is also adjacent to $y_{n-3}$ and $y_{n-5}$. One can also
get a contradiction by a similar discussion as that of Subcase~1.1.2.
\noindent \textbf{Subcase~1.2.} $v_{1}$ is adjacent to $v_{2}$.
Consider the graph $H_{2}:= G[V(G)\setminus \{v_{1},v_{2}\}]$. It is not difficult to see $|V(H_{2})|=n-2$
and $e(H_{2})=e(G)-5\geq \dbinom{(n-2)-1}{2}+8$. Then by Theorem \ref{th:5c2}, we get that $H_{2}$ is
Hamilton-connected. Since $G$ is not Hamilton-connected, there must exist two vertices $w$ and $w'$ of $H_{2}$ that are connected by a path passing through all vertices of $V(H_{2})$, but by no Hamilton path of $G$. We denote this path by
$y_{1}P'y_{n-2}$, where $y_{1}=w$, $y_{n-2}=w'$, and give this path a direction (from $w$ to
$w'$). If $u$ is on this path, we use $u^{+}$ and $u^{-}$ to denote the successor and predecessor of $u$,
respectively.
Since $d_{G}(v_{1})=d_{G}(v_{2})=3$ and $v_{1}v_{2}\in E(G)$, there must be two vertices of $H_{2}$, say $z_{1}$ and $z_{2}$ (in this order on the path), which are adjacent to $v_{1}$. Also, there must be two vertices $z_{3}$ and $z_{4}$ (in this order on the path) of $H_{2}$ which are adjacent to $v_{2}$.
We now claim that $z_{1}=z_{3}$ and $z_{2}=z_{4}$, which, together with \eqref{eq:3}, would yield that $G$
is a subgraph of $T_{n}^{3}$ obtained by deleting $t-1$ edges from its clique $K_{n-2}$, that is, $G\in
\mathcal{T}_{n}^{3}(t-1)$, where $1\leq t\leq n-10$.
Suppose to the contrary that $z_{1}\neq z_{3}$ or $z_{2}\neq z_{4}$. We can easily see that $z_{i}\neq z_{1}^{-},z_{1}^{+},z_{2}^{-},z_{2}^{+}$ ($i=3,4$), $z_{j}\neq z_{3}^{-},z_{3}^{+},z_{4}^{-},z_{4}^{+}$ ($j=1,2$).
Moreover, if $z_{2}=z_{1}^{+}$, then $z_{4}\neq z_{3}^{+}$, and vice versa.
If $z_{1}\neq z_{3}$ and $z_{2}\neq z_{4}$, then by the same discussion on \eqref{eq:4}, we have
$d_{G}(z_{1}^{+})+d_{G}(z_{3}^{+})\leq n-1$. Then,
\begin{align*}
e(G)&\leq 5+(d_{G}(z_{1}^{+})+d_{G}(z_{3}^{+}))+e(G[V(G)\setminus \{v_{1},v_{2},z_{1}^{+},z_{3}^{+}\}])\\
&\leq 5+n-1+\dbinom{n-4}{2}=\dbinom{n-3}{2}+8<\dbinom{n-3}{2}+13\\
&\leq e(G),
\end{align*}
which is a contradiction.
If $z_{1}= z_{3}$ and $z_{2}\neq z_{4}$, then by the same discussion on \eqref{eq:4}, we have
$d_{G}(z_{2}^{-})+d_{G}(z_{4}^{-})\leq n-1$. Then,
\begin{align*}
e(G)&\leq 5+(d_{G}(z_{2}^{-})+d_{G}(z_{4}^{-}))+e(G[V(G)\setminus \{v_{1},v_{2},z_{2}^{-},z_{4}^{-}\}])\\
&\leq 5+n-1+\dbinom{n-4}{2}=\dbinom{n-3}{2}+8<\dbinom{n-3}{2}+13\\
&\leq e(G),
\end{align*}
which is a contradiction.
The case of $z_{1}\neq z_{3}$ and $z_{2}= z_{4}$ can be discussed in a similar way.
Summing up the above discussion, we have $G\in (\bigcup_{i=0}^{n-10}\mathcal{S}_{n}^{3}(i))~\bigcup~
(\bigcup_{i=0}^{n-11}\mathcal{T}_{n}^{3}(i))$, as desired.
\noindent \textbf{Case~2.} $n=11$ and $k=5$.
In this case, by NHC-condition, we have
\begin{equation}\label{eq:5}
d_{1}\leq d_{2}\leq d_{3}\leq d_{4}\leq5,~d_{5}\leq d_{6}\leq6,~d_{7}\leq\cdots\leq d_{11}\leq10.
\end{equation}
Moreover, since $f(5)=0$ when $n=11$, we obtain $e(G)=\dbinom{n-3}{2}+13=41$, and then $\sum_{i=1}^{11}d_{i}=82$.
Combining with \eqref{eq:5}, we have the degree sequence of $G$ is $(5^{4},6^{2},10^{5})$, which implies
$G=K_{5}\vee (K_{2}+4K_{1})=S_{11}^{5}$, a contradiction.
\noindent \textbf{Case~3.} $n=12$ and $k=6$.
Again, by NHC-condition, we have
\begin{equation}\label{eq:6}
d_{1}\leq d_{2}\leq d_{3}\leq d_{4}\leq d_{5}\leq 6, d_{6}\leq6,~d_{7}\leq\cdots\leq d_{12}\leq11.
\end{equation}
Note that $f(6)=4$ when $n=12$, and by \eqref{eq:1}, we have $49\leq e(G)\leq 51$. If $e(G)=51$, then
$\sum_{i=1}^{12}d_{i}=102$, which, together with \eqref{eq:6}, yields that the degree sequence of $G$ is
$(6^{6},11^{6})$. From this one can check directly that $G=K_{6}\vee 6K_{1}=S_{12}^{6}= \mathcal{S}_{12}^{6}(0)$.
Now assume that
\begin{equation}\label{eq:7}
e(G)=51-t=e(S_{12}^{6})-t,~where~t\in \{1,2\}.
\end{equation}
Since $\sum_{i=1}^{12}d_{i}=102-2t\geq 98$, $G$ has at least one 6-degree vertex and has no 3-degree vertex. Let
$d_{G}(x_{0})=6$ and $H_{3}:=G[V(G)\setminus \{x_{0}\}]$. It is easy to see that $|V(H_{3})|=11$,
$\delta(H_{3})\geq 3$ and $e(H_{3})=e(G)-6\geq 49-6=43> \dbinom{11-2}{2}+6$, by Theorem \ref{th:5c3}, we have that $H_{3}$
is Hamilton-connected. Let $w P w'$ be a Hamilton path in $H_{3}$ from $w$ to $w'$. Since
$G$ is not Hamilton-connected, there is no Hamilton path connecting $w$ and $w'$ in $G$.
Suppose that $x_{1},x_{2},x_{3},x_{4},x_{5},x_{6}$ are the distinct vertices of $H_{3}$ which are adjacent to
$x_{0}$. Without loss of generality, we assume that $x_{6}= w'$. Then
$\{x_{0},x_{1}^{+},x_{2}^{+},x_{3}^{+},x_{4}^{+},x_{5}^{+}\}$ is an independent set, which, together with
\eqref{eq:7}, would yield that $G$ is a subgraph of $S_{12}^{6}$ obtained by deleting any $t$ edges, that is,
$G\in \mathcal{S}_{12}^{6}(t)$, where $t\in \{1,2\}$.
Summing up the above discussion, we eventually obtain $G\in \bigcup_{i=0}^{2}\mathcal{S}_{12}^{6}(i)$, a
contradiction.
\noindent \textbf{Case~4.} $n=13$ and $k=6$.
This case is completely analogous to Case~2. We can obtain $e(G)=58$, $\sum_{i=1}^{13}d_{i}=116$, and the degree
sequence of $G$ is $(6^{5},7^{2},12^{6})$, which implies that $G=K_{6}\vee (K_{2}+5K_{1})=S_{13}^{6}$, a
contradiction.
\noindent \textbf{Case~5.} $n=14$ and $k=7$.
As in Case~3, we have $d_{1}\leq \cdots\leq d_{6}\leq 7$, $d_{7}\leq 7$, $d_{8}\leq\cdots \leq d_{14}\leq 13$ and
$68\leq e(G)\leq 70$. Since $\sum_{i=1}^{14}d_{i}\geq 136$, $G$ has at least one 7-degree vertex and has no 3-degree
vertex. Let $d_{G}(x_{0})=7$ and $H_{4}=G[V(G)\setminus \{x_{0}\}]$. Obviously, $|V(H_{4})|=13$ and
$\delta(H_{4})\geq 3$.
If $e(G)=70$, then the degree sequence of $G$ must be $(7^{7}, 13^{7})$, which implies that $G=K_{7}\vee
7K_{1}=S_{14}^{7}= \mathcal{S}_{14}^{7}(0)$.
If $e(G)=70-1=69$, then $e(H_{4})=e(G)-7=62> \dbinom{13-2}{2}+6=61$, by Theorem \ref{th:5c3}, $H_{4}$ is
Hamilton-connected. Hence, by a similar argument as that in Case~3, we would get $G\in \mathcal{S}_{14}^{7}(1)$.
If $e(G)=70-2=68$, then $e(H_{4})=e(G)-7=61= \dbinom{13-2}{2}+6$, by Theorem \ref{th:5c3}, $H_{4}$ is either Hamilton-connected or
$H_{4}=K_{3}\vee (K_{8}+2K_{1})$. If $H_{4}$ is Hamilton-connected, by a similar argument as that in Case~3, we would get
$G\in \mathcal{S}_{14}^{7}(2)$, a contradiction. Hence, $H_{4}=K_{3}\vee (K_{8}+2K_{1})=S_{13}^{3}$. In this case, if $x_{0}$ is
not adjacent to the two 3-degree vertices of $H_{4}$, then it is evident that $G$ is a subgraph of $S_{14}^{3}$ with
$e(S_{14}^{3})-4$ edges, that is, $G\in \mathcal{S}_{14}^{3}(4)$; otherwise, one may check easily that $G$ is
Hamilton-connected, contradicting our assumption.
Summing up the above discussion, we eventually get $G\in \bigcup_{i=0}^{2}\mathcal{S}_{14}^{7}(i)\bigcup
\mathcal{S}_{14}^{3}(4)$.
\noindent \textbf{Case~6.} $n=16$ and $k=8$.
Similarly, we have $d_{1}\leq \cdots\leq d_{7}\leq 8$, $d_{8}\leq 8$, $d_{9}\leq \cdots\leq d_{16}\leq 15$,
$91\leq e(G)\leq 92$ and hence $182\leq \sum_{i=1}^{16}d_{i}\leq 184$. From the inequality
$\sum_{i=9}^{16}d_{i}=2m- \sum_{i=1}^{8}d_{i}\geq 182-64=118$, we get that $d_{11}=\cdots=d_{16}=15$ and
$d_{9}+d_{10}\geq 28$. Note that $\sum d_{i}$ is even and the total degree is between 182 and 184. If
$d_{9}=d_{10}=15$, then the permissible graphic sequence is $(8^{8},15^{8})$, which implies that
$G=K_{8}\vee 8K_{1}=S_{16}^{8}\in \mathcal{S}_{16}^{8}(0)$. If $d_{9}=14$ and $d_{10}=15$, then the permissible
graphic sequence is $(7^{1},8^{7},14^{1},15^{7})$, which implies $G=K_{7}\vee (K_{1}+K_{1,7})$. If $d_{9}=13$ and
$d_{10}=15$, then the permissible graphic sequence is $(8^{8},13^{1},15^{7})$, which implies that $G=K_{7}\vee
(K_{2}+K_{1,6})$. If $d_{9}=d_{10}=14$, then the permissible graphic sequence is $(8^{8},14^{2},15^{6})$. If $v_{9}$
is adjacent to $v_{10}$, then $G$ can be obtained as follows. Let $X=K_{8}$, $Y=8K_{1}$, $x_{1},x_{2}\in V(X)$ and $y_{1},y_{2}\in V(Y)$; then $G$ is obtained from $X\vee Y$ by deleting $x_{1}y_{1}$ and $x_{2}y_{2}$ and adding the new edge $y_{1}y_{2}$. Note that in this case, $G$ is Hamilton-connected. If $v_{9}$ is not adjacent to $v_{10}$,
then $G=K_{6}\vee K_{2,8}$.
Since $\mathcal{S}_{16}^{8}(1)=\{K_{7}\vee (K_{1}+K_{1,7}), K_{6}\vee K_{2,8}\}$, summing up the above discussion,
we get $G\in \bigcup_{i=0}^{1}\mathcal{S}_{16}^{8}(i)\bigcup \{K_{7}\vee (K_{2}+K_{1,6})\}$.
The proof is complete.
\hfill$\blacksquare$
\noindent {\bf \emph{The proof of Theorem~\ref{th:5main4}}.} Suppose that $G$ is not Hamilton-connected. By Lemmas \ref{le:5c8} and \ref{le:5c9}, we have
$$\rho(G)\leq 1+\sqrt{2m-3n+4},$$
which, together with the condition of Theorem \ref{th:5main4}, yields
$$n-3< \rho(G)\leq 1+\sqrt{2m-3n+4}.$$
We obtain $2m> n^{2}-5n+12$. Since $n^{2}-5n+12$ is even, this forces $2m\geq n^{2}-5n+14$, i.e., $m\geq \dbinom{n-2}{2}+4$. By Corollary
\ref{co:5main2}, we have $G\in (\bigcup_{i=0}^{2}\mathcal{S}_{n}^{3}(i))~\bigcup~
(\bigcup_{i=0}^{1}\mathcal{T}_{n}^{3}(i))$
or for $n=14$, $G=S_{14}^{7}$.
Note that $K_{n-2}$ is a proper subgraph of $S_{n}^{3}$ and $T_{n}^{3}$, by Lemma \ref{le:5c7}, we have
$\rho(S_{n}^{3})> \rho(K_{n-2})=n-3$ and $\rho(T_{n}^{3})> \rho(K_{n-2})=n-3$. So $S_{n}^{3}$ and $T_{n}^{3}$
enter the list of exceptions of the theorem.
For $G\in \mathcal{S}_{n}^{3}(1)$, that is, $G$ is obtained from the graph $S_{n}^{3}$ by removing one edge, $G$ can have only one of the following degree sequences:
\begin{enumerate}[(1)]
\item $H_{1}$ has degree sequence $(3,3,\underbrace{n-3,\ldots,n-3}_{n-5~times},n-2,n-2,n-1)$, i.e., $H_{1}=K_{1,2}\vee (K_{n-5}+2K_{1})$;
\item $H_{2}$ has degree sequence $(3,3,n-4,\underbrace{n-3,\ldots,n-3}_{n-6~times},n-2,n-1,n-1)$;
\item $H_{3}$ has degree sequence $(3,3,n-4,n-4,\underbrace{n-3,\ldots,n-3}_{n-7~times},n-1,n-1,n-1)$, i.e., $H_{3}=K_{3}\vee ((2K_{1}\vee K_{n-7})+2K_{1})$.
\end{enumerate}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.6]{H1-H3.eps}
\caption{Graphs, obtained from the graph $S_{n}^{3}$ by removing one edge.}
\end{figure}
The graphs which correspond to these degree sequences are depicted in Figure~1. Let $V_{1}$, $V_{2}$ and $V_{3}$ be the sets of vertices of $S_{n}^{3}$ of degree 3, $n-1$ and $n-3$, respectively. Then $H_{1}$ is the graph obtained from $S_{n}^{3}$ by deleting an edge $uz$ with $u, z\in V_{2}$; $H_{2}$ is the graph obtained from $S_{n}^{3}$ by deleting an edge $uz$ with $u\in V_{2}$ and $z\in V_{3}$; and $H_{3}$ is the graph obtained from $S_{n}^{3}$ by deleting an edge $vz$ with $v,z\in V_{3}$.
Then we show $\rho(H_{i})< n-3$, $i=1,2,3$. Firstly, we claim that $\rho(H_{1})\leq \rho(H_{2})\leq \rho(H_{3})$.
Indeed, for graph $H_{i}$, let $u,z\in V(H_{i})$ be two vertices defined above and $v\in V_{3}\setminus\{u,z\}$,
where $i=1,2$. We have $H_{2}=H_{1}-vz+uz$ and $H_{3}=H_{2}-vz+uz$. Thus by Lemma \ref{le:5c4}, we can obtain the
conclusion. Hence it is sufficient to show only that $\rho(H_{3})< n-3$.
Let us consider the following partition $\pi$ of $V(H_{3})$: $X_{1}=V(K_{n-7})$, $X_{2}=\{z,v\}$, $X_{3}=\{u, u_{1},u_{2}\}$, $X_{4}=\{y_{1},y_{2}\}$. It can easily be checked that this partition is equitable, with quotient matrix
\begin{displaymath}
A(H_{3}/\pi)=\left(
\begin{array}{cccc}
n-8& 2 & 3& 0 \\
n-7& 0 & 3& 0\\
n-7& 2 & 2& 2 \\
0& 0 & 3& 0 \\
\end{array}
\right).
\end{displaymath}
The characteristic polynomial $\det(xI_{4}-A(H_{3}/\pi))$ of $A(H_{3}/\pi)$ is equal to:
\begin{equation}\label{eq:8}
f_{1}(x)=x^{4}-(n-6)x^{3}-(3n-7)x^{2}+4(n-10)x+12n-84.
\end{equation}
Therefore, by Lemma \ref{le:5c6}, the spectral radius of $H_{3}$ is the largest root of polynomial \eqref{eq:8}.
Next, we will show that there is no root of the polynomial $f_{1}(x)$ in the interval $[n-3,+\infty)$. In fact,
when $n\geq 14$, it is obvious that the following inequalities are true:
\begin{align*}
f_{1}(n-3)&=2n^{2}-28n+18> 0;\\
f_{1}'(n-3)&=4x^{3}-3(n-6)x^{2}-2(3n-7)x+4(n-10)\mid_{x=n-3}\\
&=n(n-3)^{2}-28> 0;\\
f_{1}''(n-3)&=12x^{2}-6(n-6)x-2(3n-7)\mid_{x=n-3}=6n^{2}-24n+14>0;\\
f_{1}'''(n-3)&=24x-6(n-6)\mid_{x=n-3}=18(n-2)>0;\\
f_{1}^{(4)}(n-3)&=24>0.
\end{align*}
Therefore, by the Fourier-Budan theorem \cite{Prasolov2001}, all roots of $f_{1}(x)$ lie to the left of the number
$n-3$. In particular, $\rho(H_{3})< n-3$. Hence, non-Hamilton-connected graphs $G\in \mathcal{S}_{n}^{3}(1)$,
satisfy $\rho(G)< n-3$, a contradiction.
For $G\in \mathcal{S}_{n}^{3}(2)$, by Lemma \ref{le:5c7}, we also have $\rho(G)< n-3$, a contradiction.
For $G\in \mathcal{T}_{n}^{3}(1)$, that is, $G$ is obtained from the graph $T_{n}^{3}$ by removing one edge, which
can have only one of the following degree sequences:
\begin{enumerate}[(1)]
\item $T_{1}$ has degree sequence $(3,3,\underbrace{n-3,\ldots,n-3}_{n-4~times},n-2,n-2)$, i.e., $T_{1}=2K_{1}\vee (K_{n-4}+K_{2})$;
\item $T_{2}$ has degree sequence $(3,3,n-4,\underbrace{n-3,\ldots,n-3}_{n-5~times},n-2,n-1)$;
\item $T_{3}$ has degree sequence $(3,3,n-4,n-4,\underbrace{n-3,\ldots,n-3}_{n-6~times},n-1,n-1)$, i.e., $T_{3}=K_{2}\vee ((2K_{1}\vee K_{n-6})+K_{2})$.
\end{enumerate}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.6]{T1-T3.eps}
\caption{Graphs, obtained from the graph $T_{n}^{3}$ by removing one edge.}
\end{figure}
The graphs which correspond to these degree sequences are depicted in Figure~2. Let $V_{1}$, $V_{2}$ and $V_{3}$ be the sets of vertices of $T_{n}^{3}$ of degree 3, $n-1$ and $n-3$, respectively. Then $T_{1}$ is the graph obtained from $T_{n}^{3}$ by deleting an edge $uz$ with $u, z\in V_{2}$; $T_{2}$ is the graph obtained from $T_{n}^{3}$ by deleting an edge $uz$ with $u\in V_{2}$ and $z\in V_{3}$; and $T_{3}$ is the graph obtained from $T_{n}^{3}$ by deleting an edge $vz$ with $v,z\in V_{3}$.
Then we show $\rho(T_{i})< n-3$, $i=1,2,3$. Firstly, we claim that $\rho(T_{1})\leq \rho(T_{2})\leq \rho(T_{3})$.
Indeed, for graph $T_{i}$, let $u,z\in V(T_{i})$ be two vertices defined above and $v\in V_{3}\setminus\{u,z\}$,
where $i=1,2$. We have $T_{2}=T_{1}-vz+uz$ and $T_{3}=T_{2}-vz+uz$. Thus by Lemma \ref{le:5c4}, we can obtain the
conclusion. Hence it is sufficient to show only that $\rho(T_{3})< n-3$.
Let us consider the following partition $\pi$ of $V(T_{3})$: $X_{1}=V(K_{n-6})$, $X_{2}=\{z,v\}$, $X_{3}=\{u, u_{1}\}$, $X_{4}=\{y_{1},y_{2}\}$. It can easily be checked that this partition is equitable, with quotient matrix
\begin{displaymath}
A(T_{3}/\pi)=\left(
\begin{array}{cccc}
n-7& 2 & 2& 0 \\
n-6& 0 & 2& 0\\
n-6& 2 & 1& 2 \\
0& 0 & 2& 1 \\
\end{array}
\right).
\end{displaymath}
The characteristic polynomial $\det(xI_{4}-A(T_{3}/\pi))$ of $A(T_{3}/\pi)$ is equal to:
\begin{equation}\label{eq:9}
f_{2}(x)=x^{4}-(n-5)x^{3}-(2n-3)x^{2}+(5n-33)x+10n-56.
\end{equation}
Therefore, by Lemma \ref{le:5c6}, the spectral radius of $T_{3}$ is the largest root of polynomial \eqref{eq:9}.
Next, we will show that there is no root of the polynomial $f_{2}(x)$ in the interval $[n-3,+\infty)$. In fact,
when $n\geq 14$, it is obvious that the following inequalities are true:
\begin{align*}
f_{2}(n-3)&=2n^{2}-20n+16> 0;\\
f_{2}'(n-3)&=4x^{3}-3(n-5)x^{2}-2(2n-3)x+5n-33\mid_{x=n-3}\\
&=(n-3)^{2}(n-2)+n^{2}-7n-6> 0;\\
f_{2}''(n-3)&=12x^{2}-6(n-5)x-2(2n-3)\mid_{x=n-3}=2(n-4)(3n-3)+2n>0;\\
f_{2}'''(n-3)&=24x-6(n-5)\mid_{x=n-3}=6(3n-7)>0;\\
f_{2}^{(4)}(n-3)&=24>0.
\end{align*}
Therefore, by the Fourier-Budan theorem \cite{Prasolov2001}, all roots of $f_{2}(x)$ lie to the left of the number
$n-3$. In particular, $\rho(T_{3})< n-3$. Hence, non-Hamilton-connected graphs $G\in \mathcal{T}_{n}^{3}(1)$,
satisfy $\rho(G)< n-3$, a contradiction.
For $n=14$ and $G=S_{14}^{7}$, by direct calculation, we have $\rho(S_{14}^{7})=10.6158< 14-3=11$, a contradiction.
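Alternatively, this value can be checked via Lemma \ref{le:5c6}: for $S_{14}^{7}=K_{7}\vee 7K_{1}$, the partition of the vertex set into the clique and the independent set is equitable, with quotient matrix
\begin{displaymath}
\left(
\begin{array}{cc}
6 & 7 \\
7 & 0 \\
\end{array}
\right),
\end{displaymath}
so that $\rho(S_{14}^{7})=3+\sqrt{58}\approx 10.6158$.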
The proof is complete.
\hfill$\blacksquare$
\noindent {\bf \emph{The proof of Theorem~\ref{th:5main6}}.} Combining Lemma \ref{le:5c10} with the condition of Theorem \ref{th:5main6}, we have
$$2n-6+\frac{6}{n-1}< q(G)\leq \frac{2m}{n-1}+n-2,$$
so $2m>n^{2}-5n+10$; since $n^{2}-5n+10$ is even, this is equivalent to $2m\geq n^{2}-5n+12$, i.e., $m\geq \dbinom{n-2}{2}+3$. Now, suppose that
$G$ is not Hamilton-connected; then by Corollary \ref{co:5main3}, $G\in
(\bigcup_{i=0}^{3}\mathcal{S}_{n}^{3}(i))~\bigcup~
(\bigcup_{i=0}^{2}\mathcal{T}_{n}^{3}(i))$,
or for $n=13$, $G=S_{13}^{6}$, or for $n=14$, $G\in
\bigcup_{i=0}^{1}\mathcal{S}_{14}^{7}(i)$.
For $G=S_{n}^{3}$, it has been shown that $q(S_{n}^{3})$ is the largest zero of the function
$g_{1}(x)=x^{3}-(3n-5)x^{2}+(2n^{2}-n-24)x-6(n-3)(n-4)$ in \cite{ZhouQN12017}. Note that
$g_{1}(2n-6+\frac{6}{n-1})=-\frac{18(3n^{3}-18n^{2}+47n-44)}{(n-1)^{3}}< 0$ holds for $n\geq 13$; we then obtain
$$q(S_{n}^{3})>2n-6+\frac{6}{n-1},$$
so $S_{n}^{3}$ enter the list of exceptions of the theorem.
For $G\in \mathcal{S}_{n}^{3}(1)$, by the discussion in the proof of Theorem \ref{th:5main4} and Lemma \ref{le:5c5},
we have $q(H_{1})\leq q(H_{2})\leq q(H_{3})$. So it is sufficient to show only that $q(H_{3})<2n-6+\frac{6}{n-1}$.
Recall that the $(q,X)$-eigenequation in $G$ is
\begin{equation}\label{eq:10}
[q-d_{G}(v)]X_{v}=\sum_{u\in N_{G}(v)}X_{u},
\end{equation}
for each $v\in V(G)$, where $X$ is an eigenvector of $Q(G)$ corresponding to the eigenvalue $q$ and $X_{u}$ is the
entry of $X$ corresponding to the vertex $u$. For $G=H_{3}$, let $X=(x_{1},x_{2},\ldots,x_{n})^{T}$ be the
eigenvector corresponding to $q(G)$. Then all vertices of degree $n-1$ have the same value in
$X$, say $X_{1}$; all vertices of degree $n-3$ have the same value in $X$, say $X_{2}$; and all vertices of
degree $n-4$ have the same value in $X$, say $X_{3}$.
Denote by $X_{4}$ the value in $X$ of the vertices of degree 3. Setting
$\tilde{X}=(X_{1},X_{2},X_{3},X_{4})^{T}$, by \eqref{eq:10} we have
\begin{align*}
(q(G)-(n-1))X_{1}&=2X_{1}+(n-7)X_{2}+2X_{3}+2X_{4};\\
(q(G)-(n-3))X_{2}&=3X_{1}+(n-8)X_{2}+2X_{3};\\
(q(G)-(n-4))X_{3}&=3X_{1}+(n-7)X_{2};\\
(q(G)-3)X_{4}&=3X_{1}.
\end{align*}
Transforming the above equations into a matrix equation $(A-q(G)I)\tilde{X}=0$, we get
\begin{displaymath}
A=\left(
\begin{array}{cccc}
n+1& n-7& 2& 2\\
3& 2n-11& 2& 0\\
3& n-7& n-4& 0\\
3& 0& 0& 3\\
\end{array}
\right).
\end{displaymath}
Thus, $q(G)$ is the largest root of the following equation:
$$q^{4}-(4n-11)q^{3}+(5n^{2}-24n+10)q^{2}-(2n^{3}-7n^{2}-56n+220)q+6(n^{3}-13n^{2}+56n-80)=0.$$
Let $g_{2}(x)=x^{4}-(4n-11)x^{3}+(5n^{2}-24n+10)x^{2}-(2n^{3}-7n^{2}-56n+220)x+6(n^{3}-13n^{2}+56n-80)$, note that
$g_{2}(2n-6+\frac{6}{n-1})=\frac{2(4n^{6}-77n^{5}+445n^{4}-1471n^{3}+2939n^{2}-3856n+2664)}{(n-1)^{4}}> 0$ for
$n\geq 13$, which implies $q(H_{3})< 2n-6+\frac{6}{n-1}$. Hence $q(H_{1})\leq q(H_{2})\leq q(H_{3})<
2n-6+\frac{6}{n-1}$, a contradiction.
For $G\in \mathcal{S}_{n}^{3}(2)$, which is obtained from $S_{n}^{3}$ by deleting two edges, by Lemma \ref{le:5c7},
we also have $q(G)< 2n-6+\frac{6}{n-1}$, a contradiction.
For $G\in \mathcal{S}_{n}^{3}(3)$, which is obtained from $S_{n}^{3}$ by deleting three edges, by Lemma
\ref{le:5c7}, we also have $q(G)< 2n-6+\frac{6}{n-1}$, a contradiction.
For $G=T_{n}^{3}$, by a similar method as above, we get that $q(G)$ is the largest zero of the function
$g_{3}(x)=x^{3}-(3n-4)x^{2}+2(n^{2}+n-14)x-8(n^{2}-6n+8)$. Note that
$g_{3}(2n-6+\frac{6}{n-1})=\frac{4(n^{4}-19n^{3}+102n^{2}-250n+220)}{(n-1)^{3}}> 0$ for $n\geq 13$, which implies
that $q(T_{n}^{3})< 2n-6+\frac{6}{n-1}$.
For $G\in \bigcup_{i=1}^{2}\mathcal{T}_{n}^{3}(i)$, which is obtained from $T_{n}^{3}$ by deleting $i\in \{1,2\}$
edges, by Lemma \ref{le:5c7} we also have $q(G)< 2n-6+\frac{6}{n-1}$, a contradiction.
For $n=13$, $G=S_{13}^{6}$, by direct calculation, we have $q(S_{13}^{6})=20.1157< 2n-6+\frac{6}{n-1}$, a
contradiction.
For $n=14$, $G=S_{14}^{7}$, by direct calculation we have $q(S_{14}^{7})=22.2195< 2n-6+\frac{6}{n-1}$,
a contradiction. Further, for $G\in \mathcal{S}_{14}^{7}(1)$, which is obtained from $S_{14}^{7}$ by
deleting one edge, by Lemma \ref{le:5c7} we also have $q(G)< 2n-6+\frac{6}{n-1}$, a contradiction.
The proof is complete.
\hfill$\blacksquare$
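Again as an illustration only, the sign claims for $g_{1}$ and $g_{2}$ above, and the resulting comparisons of their largest zeros with the bound $2n-6+\frac{6}{n-1}$, can be checked numerically; a minimal Python sketch:
\begin{verbatim}
import numpy as np

def largest_root(coeffs):
    return max(np.roots(coeffs).real)

for n in range(13, 40):
    thr = 2 * n - 6 + 6.0 / (n - 1)
    g1 = [1.0, -(3 * n - 5), 2 * n**2 - n - 24,
          -6.0 * (n - 3) * (n - 4)]
    g2 = [1.0, -(4 * n - 11), 5 * n**2 - 24 * n + 10,
          -(2 * n**3 - 7 * n**2 - 56 * n + 220),
          6.0 * (n**3 - 13 * n**2 + 56 * n - 80)]
    assert largest_root(g1) > thr  # q(S_n^3) exceeds the bound
    assert largest_root(g2) < thr  # q(H_3) stays below the bound
\end{verbatim}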
\section{Introduction}
When designing vehicle components it is important to know the distributions of loads expected to act on them. The lifetime of a component in a vehicle, such as a control arm or a ball joint, is determined by its strength and the loads acting on it.
While the effect of a given force acting on a component is well understood, the loads, and hence the forces,
are random. This is because the distribution of the loads depends on the driving environment, the driver's behavior, the usage of the vehicle, and other factors. For a more detailed description of loads acting on vehicles see \cite{Johannesson2}.
Although it is not financially feasible to design a vehicle for a specific customer, it is important to
tailor the design for groups of customers, depending on, for instance, geographical region and usage. Obviously, components designed too weakly for a specific environment lead to increased costs due to call-backs and badwill for the company, while over-designed components give increased material costs and unnecessarily heavy vehicles.
Traditionally, one has used a specially equipped test vehicle to study the distributions of customer loads. This gives very precise measurements, but with the disadvantage of a statistically small sample size for the studied group. In addition, it is a very expensive way of acquiring data. However, all modern vehicles are equipped with computers measuring many signals, known as Controller Area Network (CAN) bus data, where the signals include, for instance, speed and lateral acceleration. The goal of this article is to develop a statistical algorithm that uses these signals to extract information about the driving events for the specific vehicle. These data can then be collected from several vehicles to generate a load distribution for groups of customers.
The desired algorithm needs several key properties to be practically useful:
First, it obviously needs to be able to extract the driving events from the CAN data.
Second, since the data will be extracted over long periods of time the computational cost of estimation of the driving events needs to be low. It is also desirable that the method does not require the storage of all the data.
Finally, the algorithm should allow for changing frequency of driving events over time, since the frequency of driving events changes depending on the driving environment such as highway driving or city driving.
To address the first property, our algorithm uses a hidden Markov model (HMM) to extract the driving events from the CAN data. More specifically, each state in the HMM represents a driving state, and we define a driving event as a sequence of consecutive driving states. The CAN data for a given driving state are assumed to follow a generalized Laplace (GAL) distribution. Laplace distributions are well-known models for responses measured on driving vehicles, see \cite{Rychlik13}, \cite{Kvanstrom} and \cite{Bogsjo}. The idea of using HMMs to identify driving events has previously been used by, for example, Maghsood and Johannesson \cite{Maghsood2, Maghsood1}, Mitrovi\'c \cite{Mitrovic1,Mitrovic2} and Berndt and Dietmayer \cite{Berndt}.
For the HMM we divide the parameters into two sets: the transition matrix, which is vehicle-type independent and depends rather on the driving environment, the driver's behavior, etc.; and the parameters of the GAL distribution, which are vehicle-type specific and can thus be found in laboratory tests or on proving grounds. Thus the second property, in the case of an HMM, is equivalent to efficiently estimating the transition matrix of driving states. In previous articles, the EM algorithm has been used successfully to estimate the transition matrix \cite{Maghsood3}; however, an iteration of the algorithm has computational complexity $\mathcal{O}(n)$ (where $n$ is the number of observations) and is thus not practically feasible. Here, we instead propose using the online EM algorithm from Capp\'e \cite{Cappe1} to estimate the matrix. This gives the desired computational efficiency, since one iteration of the algorithm has a computational cost of $\mathcal{O}(1)$.
The final property is addressed by using a fixed forgetting factor in the online EM algorithm. Capp\'e \cite{Cappe1} proposes a diminishing forgetting factor to ensure that the EM algorithm converges to a stationary point. However, this is not the goal here: we do not want the algorithm to converge to a stationary point, but rather to be adaptive. The usage of forgetting factors is a well-studied area in automatic control, time series analysis and vehicle engineering \cite{arvastson2000asymptotic, ljung1983theory,vahidi2005recursive}.
Further, the algorithm also calculates, online, the expected damage for a given component. This could be useful for the specific vehicle on which the algorithm is applied, by using the expected damage to tailor service times to the specific vehicle and its components.
The paper is organized as follows: In the second section, the HMM and the proposed online algorithm are presented. In the third section, the method for estimating the fatigue damage is proposed. In the fourth section, the algorithm is applied to simulated data to verify its performance, and it is also evaluated on real data, namely CAN data from a Volvo truck. The final section contains the conclusions of the paper.
\section{Hidden Markov models}
\label{sec:HMM}
Hidden Markov models are statistical models often used in signal processing, for instance in speech recognition and in modeling financial time series; see Capp\'e \cite{Cappe} and Frühwirth-Schnatter \cite{F-Schnatter}. An HMM is a bivariate Markov process $\left\{Z_t,Y_t\right\}^{\infty}_{t=0}$ where the underlying process $Z_t$ is an unobservable Markov chain that is observed only through $Y_t$. The observation sequence $Y_t$ given $Z_t$ is a sequence of independent random variables, and the conditional distribution of $Y_t$ depends only on $Z_t$.
In this article, all HMMs are such that $Z_t$ takes values on a discrete space $\{1,2,\ldots,m\}$, and the HMM is determined by two sets of parameters. The first set contains the transition probabilities of the Markov chain $Z_t$:
\begin{equation}\label{Tra}
q(i,j)=P(Z_{t+1}=j|Z_{t}=i),\ i,j=1,2,...,m.
\end{equation}
The second set is the parameter vector, $\mv{\theta}$, of the conditional distribution of $Y_t$ given $Z_t$:
\begin{equation}\label{Emi}
g_{\mv{\theta}}(i,y_t)=f_{Y_t}(y_t|Z_{t}=i;\mv{\theta}), \ i=1,2,...,m, \ y_t\in \mathbb{R}.
\end{equation}
\noindent
Here, we denote the set of parameters by $\Theta = (\mv{Q},\mv{\theta})$ where $\mv{Q}=(q(i,j))$ for $ i,j=1,2,...,m$.
In an HMM, the state in which the hidden process starts is modeled by the initial state probabilities $\mv{\pi}=(\pi_{i})$, where
\begin{align*}
\pi_{i}=P(Z_{0}=i), \ i=1,2,...,m
\end{align*}
with $\sum^{m}_{i=1}\pi_{i}=1$.
\subsection{Parameter estimation}
For the parameter estimation in this article we use the EM (expectation maximization) algorithm, described below. The principal aim is to estimate the transition matrix $\mv{Q}$ based on an observation sequence. For this, we use an online EM algorithm derived in \cite{Cappe1}. To introduce the algorithm we first describe the EM algorithm and then the modification needed for its online usage.
In our study, the parameter $\mv{\theta}$ is not estimated recursively, but rather found through maximum likelihood estimation
on a training set. This is because the conditional distribution of $Y_t$ given $Z_t$ in our case study represents the vehicle-specific data, which can be estimated under well-defined conditions on the proving ground.
\subsection{The EM algorithm}
Here, we present the EM algorithm following Capp\'e \cite{Cappe1}. The EM algorithm is a common method for estimating the parameters in HMMs. It is an optimization algorithm that finds the parameters maximizing the likelihood. The algorithm is both robust (it does not diverge easily) and often easy to implement.
The EM algorithm is an iterative procedure. If the distribution of the complete data $(Z_t,Y_t)$ given $Z_{t-1}$, $p(z_t,y_t|z_{t-1})$, belongs to an exponential family, then the $n^{th}$ iteration consists of the following two steps:
\begin{itemize}
\item The E-step, where the conditional expectation of the complete-data sufficient statistics, $s(Z_{t-1},Z_t,Y_t)$, given the observation sequence $Y_{0},Y_{1},...,Y_{t}$ and $\Theta^{(n)}$, is computed,
\begin{align}\label{Suff}
\mv{S}^{(n+1)}_t=\frac{1}{t}E\left[\sum^{t}_{l=1}s(Z_{l-1},Z_l,Y_l)\bigg|Y_0,...,Y_t;\Theta^{(n)}\right],
\end{align}
\item The M-step, where the new parameter value $\Theta^{(n+1)}$ is calculated using $\mv{S}^{(n+1)}_t$, which can be formulated as $\Theta^{(n+1)} = f(\mv{S}^{(n+1)}_t)$.
\end{itemize}
The sequence $\Theta^{(n)}$ converges to a stationary point of the likelihood function, for more details see \cite{Cappe1}.
For our specific model, where the parameter of interest is $\mv{Q}$, the sufficient statistic in the E-step is:
\begin{align}\label{Suff1}
S^{(n+1)}_t(i,j)=\frac{1}{t}E\left[\sum^{t}_{l=1} I (Z_{l-1}=i,Z_{l}=j)\bigg|Y_0,...,Y_t;\Theta^{(n)}\right].
\end{align}
Thus $S_{t}(i,j)$ is, up to the normalization $1/t$, the expected number of transitions from state $i$ to state $j$ given $Y_0,...,Y_t$ and $\Theta$. For $\mv{Q}=(q(i,j))$, the M-step is given by:
\begin{align}\label{ReEstiTra}
q^{(n+1)}(i,j)= \frac{S^{(n+1)}_{t}(i,j)}{\sum^{m}_{j=1}S^{(n+1)}_{t}(i,j) }.
\end{align}
\subsubsection{Recursive formulation of the E-step}
Zeitouni and Dembo \cite{Zeitouni} noted that the conditional expectation of the complete-data sufficient statistics $\mv{S}_t$ can be computed recursively. To see this, define
\begin{align}\label{Recursion1}
&\phi_{t}(k)=P(Z_t=k|Y_0,...,Y_t;\Theta),\\
&\rho_{t}(i,j,k)=\frac{1}{t}E[\sum^{t}_{l=1} I (Z_{l-1}=i,Z_{l}=j)|Y_0,...,Y_t, Z_t=k ;\Theta],
\end{align}
then $S_{t}(i,j)$ can be written as $S_{t}(i,j)=\sum^{m}_{k=1} \phi_{t}(k)\rho_{t}(i,j,k)$.\\
\noindent
Note that $\mv{\phi}_t$, with $(\mv{\phi}_t)_k=\phi_{t}(k)$, is an $m$-dimensional (row) vector. For a vector $\mv{a}$, let $\mv{D}(\mv{a})$ be the diagonal matrix with $\mv{D}(\mv{a})_{kk} = a_k$, and write $g_{\mv{\theta}}(y_t)=\left(g_{\mv{\theta}}(1,y_t), g_{\mv{\theta}}(2,y_t), ..., g_{\mv{\theta}}(m,y_t) \right)$ and $\mv{1}=\left(1,1,...,1 \right)$. The recursive implementation of the EM algorithm, using the observation sequence $Y_{0},Y_{1},...,Y_{T}$, is initialized with
$$
\mv{\phi}_{0}=\frac{\mv{\pi}\mv{D}(g_{\mv{\theta}}(y_0))}{(\mv{\pi}\mv{D}(g_{\mv{\theta}}(y_0)))\mv{1'}}, \ \text{and} \
\rho_0(i,j,k)=0,
$$
for all $1 \le i,j,k \le m$. Then, for the $n^{th}$ iteration and $t\geq0$, the components are updated as follows:
\begin{eqnarray}
\mv{\phi}_{t+1}&=&\frac{\mv{1}(\mv{D}(\mv{\phi}_{t})\mv{Q}^{(n)}\mv{D}(g_{\mv{\theta}}(y_{t+1})))}{\mv{1}(\mv{D}(\mv{\phi}_{t})\mv{Q}^{(n)}\mv{D}(g_{\mv{\theta}}(y_{t+1})))\mv{1'}},\label{Recursion2} \\
\rho_{t+1}(i,j,k)&=&\gamma_{t+1}I(j-k)r_{t+1}(i|j)+(1-\gamma_{t+1})\sum^{m}_{k'=1}\rho_{t}(i,j,k')r_{t+1}(k'|k), \label{Recursion3}
\end{eqnarray}
where $\mv{r}_{t+1}=\mv{D}(\mv{\phi}_{t}./\mv{1}(\mv{D}(\mv{\phi}_{t})\mv{Q}^{(n)}))\mv{Q}^{(n)}$ and $./$ represents the element-wise division of two matrices. The forgetting factor, $\gamma_{t}$, equals $1/t$.\\
Note that in the $n^{th}$ iteration of the EM algorithm, all elements of $\phi_1, \phi_2, ..., \phi_t$ and $\rho_1, \rho_2, ..., \rho_t $ depend on $\mv{Q}^{(n)}$. Thus, to update $\mv{Q}$ in the $(n+1)^{th}$ iteration, all elements of the two quantities need to be recalculated. Therefore one needs to store the entire observation sequence to use the EM algorithm.
\subsection{Online estimation of HMM parameters}
\label{sec:Q}
As we will see, the online EM algorithm remedies the issue of requiring the entire observation sequence to estimate the parameters. Here we use the notation $\hat{\mv{Q}}_t$ rather than $\mv{Q}^{(t)}$, because for the online EM one cannot compute more than one iteration at each time point $t$.
The terms $\hat{\mv{\phi}}_{0}$ and $\hat \rho_0(i,j,k)$ are initialized in the same way as in the regular EM algorithm.
For $t=0, 1,\ldots$ the components are updated as follows (the E-step):
\begin{eqnarray}
\hat {\mv{\phi}}_{t+1}&=&\frac{\mv{1}(\mv{D}(\hat{\mv{\phi}}_{t})\hat{\mv{Q}}_t\mv{D}(g_{\mv{\theta}}(y_{t+1})))}{\mv{1}(\mv{D}(\hat{\mv{\phi}}_{t})\hat{\mv{Q}}_t\mv{D}(g_{\mv{\theta}}(y_{t+1})))\mv{1'}},\label{Recursion4} \\
\label{eq:rhohat}
\hat \rho_{t+1}(i,j,k)&=&\gamma_{t+1}I(j-k)\hat r_{t+1}(i|j)+(1-\gamma_{t+1})\sum^{m}_{k'=1}\hat \rho_{t}(i,j,k')\hat r_{t+1}(k'|k), \label{Recursion5}
\end{eqnarray}
where $\hat{\mv{r}}_{t+1}=\mv{D}(\hat{\mv{\phi}}_{t}./\mv{1}(\mv{D}(\hat{\mv{\phi}}_{t})\hat{\mv{Q}}_t))\hat{\mv{Q}}_t$. In the M-step, the transition matrix $\hat{\mv{Q}}_{t+1}=(\hat{q}_{t+1}(i,j))$ is updated by:
\begin{align}\label{onlineEstiTra}
\hat{q}_{t+1} (i,j)=\frac{\hat S_{t+1}(i,j)}{\sum^{m}_{j=1}\hat S_{t+1}(i,j) },
\end{align}
where $\hat S_{t+1}(i,j)=\sum^{m}_{k=1} \hat \phi_{t+1}(k)\hat \rho_{t+1}(i,j,k)$.\\
As can be seen, Eqs.~$(\ref{Recursion4})$ and~$(\ref{Recursion5})$ are modifications of Eqs.~$(\ref{Recursion2})$ and~$(\ref{Recursion3})$ in which $\hat{\mv{\phi}}_{1}, \hat{\mv{\phi}}_{2}, ..., \hat{\mv{\phi}}_{t}$ and $\hat \rho_{1}, \hat \rho_{2}, ..., \hat \rho_{t}$ do not depend on the parameter $\mv{Q}$, but rather on $\hat{\mv{Q}}_{t}$, and thus do not need to be recalculated.
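For concreteness, a minimal Python sketch of one update of the online algorithm, Eqs.~(\ref{Recursion4})--(\ref{onlineEstiTra}), is given below; the function and variable names are ours, and the kernel $\hat r_{t+1}(k'|k)$ is implemented as the retrospective probability $P(Z_t=k'\,|\,Z_{t+1}=k,\,y_{0},...,y_{t})$.
\begin{verbatim}
import numpy as np

def online_em_step(phi, rho, Q, y, g, gamma):
    # phi   : (m,) filter P(Z_t = k | y_0..y_t)
    # rho   : (m, m, m) auxiliary statistic rho_t(i, j, k)
    # Q     : (m, m) current transition matrix estimate
    # y     : new observation y_{t+1}
    # g     : callable returning the (m,) emission densities g(k, y)
    # gamma : fixed forgetting factor in (0, 1)
    m = len(phi)
    # E-step: filter update, phi'(k) ~ sum_k' phi(k') Q(k',k) g(k,y)
    joint = (phi[:, None] * Q) * g(y)[None, :]
    phi_new = joint.sum(axis=0)
    phi_new /= phi_new.sum()
    # retrospective kernel r(k'|k) = phi(k') Q(k',k) / (phi Q)(k)
    pred = phi @ Q
    r = (phi[:, None] * Q) / pred[None, :]
    rho_new = np.empty_like(rho)
    for i in range(m):
        for j in range(m):
            for k in range(m):
                rho_new[i, j, k] = (gamma * (j == k) * r[i, j]
                                    + (1 - gamma) * rho[i, j, :] @ r[:, k])
    # M-step: row-normalize the expected transition counts
    S = np.einsum('k,ijk->ij', phi_new, rho_new)
    return phi_new, rho_new, S / S.sum(axis=1, keepdims=True)
\end{verbatim}
Setting $\gamma$ to $1/(t+1)$ at step $t$ essentially recovers one pass of the recursive E-step of the previous section.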
In the proposed online EM algorithm by Capp\'e \cite{Cappe1}, a decreasing sequence of forgetting factors $\{\gamma_{t}\}^{\infty}_{t=1}$ is chosen such that $\sum^{\infty}_{t=1}\gamma_{t}=\infty$ and $\sum^{\infty}_{t=1}\gamma^{2}_{t}<\infty$. The choice of $\gamma_{t}$ strongly affects the convergence of the parameters. To converge to a stationary point one can choose $\gamma_{t}=1/t^{\alpha}$ with $0.5<\alpha<1$, which is the common choice suggested in \cite{Cappe1}. By setting $\gamma_{t}$ to a fixed value, the algorithm will never converge to any fixed point but will instead behave like a stochastic process. As we will see later, this can be useful when the data come from a non-stationary process, where the parameters are not fixed over time.
\subsubsection{Setting forgetting factor}
When using a fixed value for $\gamma_t \, (=\gamma)$ it is crucial that this value is well chosen. A smaller $\gamma$ gives a more stable parameter trajectory, at the price of a slower adaptation. In the present form, it can be hard to see what a reasonable value of $\gamma$ is. To make this clearer, we introduce two explanatory parameters ($K$, $R$), which represent the weight, $R$, that is put on the $K$ latest observations when estimating $\mv{Q}$. So, for instance, if $K = 100$ and $R=0.9$, then the weight given to the hundred latest observations is such that they represent $90\%$ of the information from the data used to estimate the parameters.
To link the parameters $K$ and $R$ to $\gamma$, note that (\ref{eq:rhohat}) is approximately a geometric series with ratio $\gamma$, thus approximately it holds that\begin{align}\label{gam}
\gamma \sum_{i=0}^K (1-\gamma)^i = R.
\end{align}
This gives an explicit $\gamma$ for each $(R,K)$.
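Since the left-hand side of Eq.~(\ref{gam}) is the partial geometric sum $1-(1-\gamma)^{K+1}$, the solution is available in closed form, $\gamma = 1-(1-R)^{1/(K+1)}$; a two-line sketch:
\begin{verbatim}
def gamma_from_RK(R, K):
    # gamma * sum_{i=0..K} (1-gamma)^i = 1 - (1-gamma)^(K+1) = R
    return 1.0 - (1.0 - R) ** (1.0 / (K + 1))

for K in (200, 1000, 2400):
    print(K, gamma_from_RK(0.9, K))  # ~0.011, ~0.0023, ~0.00096
\end{verbatim}
These values approximately reproduce the fixed forgetting factors used in the examples below.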
A further issue is that, in general, one observation does not contain equal information about all the entries in $\mv{Q}$; some states (events) might occur rarely, and thus most observations contain no information about the corresponding column of the transition matrix. To address this, one can set a separate $\gamma$ for each column. One way is to set $\gamma_{t,i} = \gamma \cdot (\mv{\pi}_t)_i$ where $\mv{\pi}_t$ is the averaged stationary distribution vector defined below.
\subsection{Online estimation of the number of events}
In previous work, see Maghsood, Rychlik and Wallin \cite{Maghsood3}, the Viterbi algorithm was used to calculate the number of driving events. However, the Viterbi algorithm requires access to the entire data sequence and thus cannot be used for online estimation when the data are not stored. Instead we compute the expected number of events as follows:
Suppose that at each time $t$ the Markov chain $\{Z_t\}$ has transition matrix $\mv{Q}_t$. By solving the equation $\mv{\pi}_t(\mv{Q}_t-I)=\mv{0}$, one gets the stationary distribution of $\mv{Q}_t$. If the data came from a stationary distribution, $\mv{\pi}_t$ would be the stationary distribution of $\{Z_t\}$. If the data are not stationary, one can estimate the stationary distribution by taking the average, over time, of $\mv{\pi}_t$. By the same reasoning we estimate the expected number of occurrences of the $i^{th}$ event up to time $T$ as
\begin{align}\label{IntEvents}
\eta_i(T)=E[\sum^{T}_{t=1} \xi_i(t)]=\sum^{T}_{t=1} \sum_{j\neq i}\pi_{t,j} q_t(j,i),
\end{align}
where $\xi_i(t)=\sum_{j\neq i} I (Z_{t}=j,Z_{t+1}=i)$.
The above formula works if we substitute $\mv{Q}_t$ with the online estimate $\hat{\mv{Q}}_t$ for each $t$. One can then compute and update the number of events with each new observation.
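A sketch of this computation in Python (our notation; the stationary vector is obtained by solving $\mv{\pi}_t(\mv{Q}_t-I)=\mv{0}$ together with the normalization constraint):
\begin{verbatim}
import numpy as np

def stationary(Q):
    # solve pi (Q - I) = 0 with sum(pi) = 1 as a least-squares system
    m = len(Q)
    A = np.vstack([Q.T - np.eye(m), np.ones(m)])
    b = np.append(np.zeros(m), 1.0)
    return np.linalg.lstsq(A, b, rcond=None)[0]

def expected_events(Q_hats, i):
    # accumulate sum_{j != i} pi_{t,j} q_t(j, i) over the online
    # estimates Q_hat_t, t = 1, ..., T
    eta = 0.0
    for Q in Q_hats:
        pi = stationary(Q)
        eta += sum(pi[j] * Q[j, i] for j in range(len(Q)) if j != i)
    return eta
\end{verbatim}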
\subsection{HMMs with Laplace distribution}
As mentioned in the introduction, we take the conditional distribution of $Y_t$ given $Z_t$, denoted by $g_{\theta}(i,y_t)$, to be a generalized asymmetric Laplace (GAL) distribution, see \cite{kotz}. The GAL distribution is a flexible distribution with four parameters: the location vector $\mv{\delta}$, the shift vector $\mv{\mu}$, the shape parameter $\nu>0$, and the scaling matrix $\mv{\Sigma}$; it is denoted by $GAL(\mv{\delta},\mv{\mu}, \nu, \mv{\Sigma})$. The probability density function (pdf) of a $GAL(\mv{\delta},\mv{\mu}, \nu, \mv{\Sigma})$ distribution is
\begin{eqnarray}
g(\mv{y}) &=& \frac{1}{\Gamma(1/\nu) \sqrt{2\pi}} \left( \frac{\sqrt{(\mv{y}-\mv{\delta})^T\mv{\Sigma}^{-1}(\mv{y}-\mv{\delta}) }}{c_2}\right)^{\frac{1/\nu-d/2}{2}} e^{(\mv{y}-\mv{\delta})\Sigma^{-1}\mv{\mu}}\nonumber\\
&\,&\qquad\qquad\qquad\qquad\qquad K_{1/\nu-d/2} \left( c_2\sqrt{(\mv{y}-\mv{\delta})^T\mv{\Sigma}^{-1}(\mv{y}-\mv{\delta}) }\right),\nonumber
\end{eqnarray}
where $d$ is the dimension of $\mv{Y}$, $c_2 = \sqrt{2+\mv{\mu}^T\mv{\Sigma}^{-1}\mv{\mu}}$, and $K_{1/\nu - d/2}(.)$ is the modified Bessel function of the second kind. The normal mean-variance mixture representation gives an intuitive feel for the distribution: a random variable $\mv{Y}$ with a GAL distribution satisfies the equality in distribution
$$
\mv{Y} \overset{d}{=} \mv{\delta} + \Gamma \mv{\mu} + \sqrt{\Gamma} \mv{\Sigma}^{1/2} \mv{Z},
$$
where $\Gamma$ is a Gamma distributed random variable with shape $1/\nu$ and scale one, and $\mv{Z}$ is a vector of $d$ independent standard normal random variables. For more details see \cite{Barndorff}.
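The mixture representation also gives a direct way to simulate GAL variables; a minimal sketch (the function name and random seed are ours):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_gal(delta, mu, nu, Sigma, size):
    # Y = delta + G mu + sqrt(G) Sigma^{1/2} Z, G ~ Gamma(1/nu, 1)
    d = len(np.atleast_1d(delta))
    G = rng.gamma(shape=1.0 / nu, scale=1.0, size=size)
    Z = rng.standard_normal((size, d))
    L = np.linalg.cholesky(np.atleast_2d(Sigma))
    return delta + G[:, None] * mu + np.sqrt(G)[:, None] * (Z @ L.T)

# e.g. five draws with the right-turn parameters used later in the
# simulation study
y = sample_gal(np.array([-1.0]), np.array([-0.5]), 10.0,
               np.array([[0.2]]), 5)
\end{verbatim}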
\section{Estimation of fatigue damage}
Fatigue is a random process of material deterioration caused by variable stresses. For a vehicle, stresses depend on environmental loads, like road roughness, vehicle usage or driver's behavior.
Often, rainflow cycles are calculated in order to describe the environmental loads \cite{kotz}, and the fatigue damage is then approximated by a function of the rainflow cycles.
Typically, the approximations are made in order to reduce the length of the load signals, storing only the events relevant for fatigue. The reduced signal is then used to find the fatigue life of components in a laboratory (or to estimate the fatigue life mathematically). The reduction is mainly done in order to speed up the testing, which is very expensive (or to simplify the calculations).
In this section, we present a method to approximate the environmental load using driving events. The method is similar to a well-known method in fatigue analysis, the rainflow filter method \cite{Johannesson2}. We show that one can explicitly calculate, online, the expected damage intensity (which describes the expected lifetime of a component).
We start with a short introduction to rainflow cycles and expected damage, then show the approximation method that uses the driving event to derive the expected damage.
\subsection{Rainflow counting distribution and the expected damage} \label{sec:damageindex}
The rainflow cycle count algorithm is one of the most commonly used methods to compute fatigue damage. The method was first proposed by Matsuishi and Endo \cite{Matsuishi}. Here, we use the definition given by Rychlik \cite{Rychlik1} which is more suitable for statistical analysis of damage index. The rainflow cycles are defined as follows.
Assume that a load $L_T$, the process up to time $T$, has $N$ local maxima. Let $M_i$ denote the height of the $i^{th}$ local maximum. Denote by $m^{+}_{i}$ ($m^{-}_{i}$) the minimum value in the forward (backward) direction from the location of $M_i$ until $L_T$ crosses $M_i$ again. The rainflow minimum, $m^{rfc}_{i}$, is the larger of $m^{+}_{i}$ and $m^{-}_{i}$. The pair $(m^{rfc}_{i},M_{i})$ is the $i^{th}$ rainflow pair, with rainflow range $h_{i}(L_T) =M_{i}-m^{rfc}_{i}$. Figure~\ref{fig:rfc} illustrates the definition of the rainflow cycles.\\[0.5cm]
\begin{figure}[H]
\centering
\includegraphics[width=8cm]{RFC.eps}
\caption{The rainflow cycle.}
\label{fig:rfc}
\end{figure}
By using the rainflow cycles found in $L_T$, the fatigue damage can be defined by means of the Palmgren-Miner (PM) rule~\cite{Palmgren}, \cite{Miner},
\begin{equation}\label{Damind}
D_\beta(L_T) = \alpha \sum_{i=1}^N h_{i}(L_T)^\beta,
\end{equation}
where $\alpha,\beta$ are material dependent constants.
The parameter $\alpha^{-1}$ is equal to the predicted number of cycles with range one leading to fatigue failure (throughout the article it is assumed that $\alpha$ equals one). Various choices of the damage exponent $\beta$ can be considered, like $\beta=3$ which is the standard value for the crack growth process or $\beta=5$ which is often used when a fatigue process is dominated by the crack initiation phase.
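To make Eq.~(\ref{Damind}) concrete, a simplified rainflow extraction is sketched below in Python: it uses the standard three-point stack rule, counts the residual ranges as half cycles, and treats the boundaries more crudely than the definition above, so it is an illustration rather than a reference implementation (here $\alpha=1$).
\begin{verbatim}
def turning_points(x):
    tp = [x[0]]
    for v in x[1:]:
        if len(tp) >= 2 and (tp[-1] - tp[-2]) * (v - tp[-1]) >= 0:
            tp[-1] = v              # still monotone: extend the run
        else:
            tp.append(v)
    return tp

def rainflow_damage(x, beta=3.0):
    stack, damage = [], 0.0
    for p in turning_points(x):
        stack.append(p)
        while (len(stack) >= 3 and
               abs(stack[-1] - stack[-2]) >= abs(stack[-2] - stack[-3])):
            damage += abs(stack[-2] - stack[-3]) ** beta  # full cycle
            del stack[-3:-1]        # remove the two points of the cycle
    for a, b in zip(stack, stack[1:]):
        damage += 0.5 * abs(b - a) ** beta  # residual half cycles
    return damage

print(rainflow_damage([0, 2, -1, 3, -2, 1, 0]))
\end{verbatim}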
A representation of the damage that is more convenient from a computational viewpoint is:
\begin{align} \label{eq:D41}
D_\beta(L_T)=\beta(\beta-1)\int^{+\infty}_{-\infty}\int^{v}_{-\infty}\,(v-u)^{\beta-2}N^{osc}(u,v)\,du\,dv,
\end{align}
where $N^{osc}(u,v)$ is the number of upcrossings of the interval $[u,v]$ by the load; see \cite{Rychlik2} for details.
Since $L_T$ is a random process, one uses the expected damage as a tool to describe damage. The damage intensity of a process is
\begin{equation}\label{dint}
d_\beta=\lim_{T\rightarrow\infty}\frac{1}{T} E[D_\beta(L_T)].
\end{equation}
Finally, using Eq.~(\ref{eq:D41}), we get that
\begin{align}
d_\beta=\beta(\beta-1)\int^{+\infty}_{-\infty}\int^v_{-\infty} (v-u)^{\beta-2}\mu^{osc}(u,v)\,du\,dv,
\label{eq:D7}
\end{align}
where
\begin{equation}\label{int_cross}
\mu^{osc}(u,v)=\lim_{T\rightarrow\infty}\frac{E\left[N^{osc}(u,v)\right]}{T}
\end{equation}
is called the intensity of interval up-crossings.
\subsection{Reduced load and expected damage given driving events}
In general the lateral loads are not available and will vary between vehicles. The reduced load that we propose below is constructed using estimated frequencies of driving events from the HMM, and the distributions of extreme loads associated with driving events, which can be measured on testing grounds or in laboratories.
We now describe how to construct a reduced load from the driving events left turn, $LT$, and right turn, $RT$ (the method could of course be generalized to other driving events); these events are known to cause the majority of the damage for steering components.
Let $\{Z_t\}_{t=0}^T$ be the hidden process in an HMM, with three possible driving states at time $t$: right turn, left turn, or straight forward. Here, we define $Z^{*}_i$ as the driving event representing the $i^{th}$ turn, occurring in the time interval $[t_{i,start},t_{i,stop}]$; it is equal to one if the turn is a left turn and two if it is a right turn. The relation between the two sequences $\{Z^{*}_i\}_{i=0}^N$ and $\{Z_t\}_{t=0}^T$ is that the event $\{Z^{*}_i=1\}(\mbox{ or }\{Z^{*}_i=2\})$ is equivalent to $Z_{t_{i,start}},..., Z_{t_{i,stop}}$ all being equal to the same driving state, left turn (or right turn).
Now, to create the reduced load from the sequence of driving events, assume that $M_i$ and $m_i$ are the $i^{th}$ maximum and minimum load during a turn, that is
\begin{align}
M_i=\displaystyle \max_{t \in I_i}L_t,\qquad m_i=\displaystyle\min_{t \in I_i}L_t,
\label{eq:Mm}
\end{align}
where $I_i= [t_{i,start},t_{i,stop}]$ represents the start and stop points of the $i^{th}$ turn. The reduced load $\left\{X_i\right\}^{N}_{i=0}$ is defined as follows
\begin{align}
X_i =
\begin{cases}
0, &\mbox{if $i$ is odd},\\
M_{i/2}, &\mbox{if } Z^{*}_i=1 \mbox{ and $i$ is even},\\
m_{i/2}, &\mbox{if } Z^{*}_i=2 \mbox{ and $i$ is even}.
\end{cases}
\label{eq:RV}
\end{align}
Here the zeros are inserted since between each left and right turn event there must be a straight-forward event.
Figure~\ref{fig:MinMax} illustrates a lateral load and the corresponding reduced load.
\begin{figure}[H]
\centering
\includegraphics[width=11cm]{Reduced_load.eps}
\caption{The reduced load, represented by dots; the observed load is shown as the irregular solid line.}
\label{fig:MinMax}
\end{figure}
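A sketch of this construction (the interleaving convention and names are ours):
\begin{verbatim}
import numpy as np

def reduced_load(load, turns):
    # turns: list of (t_start, t_stop, event) with event = 1 for a
    # left turn and 2 for a right turn (the sequence Z*_i)
    X = []
    for t0, t1, event in turns:
        seg = load[t0:t1 + 1]
        X.append(seg.max() if event == 1 else seg.min())
        X.append(0.0)   # straight-forward stretch after each turn
    return np.array(X[:-1])  # drop the trailing zero
\end{verbatim}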
\noindent
To compute the damage intensity $d_\beta$ per driving event, one needs the intensity of interval up-crossings $\mu^{osc}(u,v)$ of the reduced load $\left\{X_i\right\}^{N}_{i=0}$. Assuming that both $\{M_i\}_{i=0}^N$ and $\{m_i\}_{i=0}^N$ are sequences of i.i.d.\ random variables, and that the transition matrix $\mv{P}$ of $Z^*$ is known (it can be derived from the transition matrix $\mv{Q}$ of the HMM, see \ref{app:A}), one gets the closed-form solution
\begin{align}
\mu^{osc}(u,v)=\frac{1}{2}
\begin{cases}
\pi'_{2}P(m_1<u), &\mbox{$u<v<0$}, \\
\pi'_2\,P(m_1<u)\,p_{2}(u,v), &\mbox{$u\leq 0 \leq v$}, \\
\pi'_{1}P(M_1>v), &\mbox{$0<u<v$}.
\end{cases}
\label{eq:mu1}
\end{align}
Here $\pi'=(\pi'_1,\pi'_2)$ is the stationary distribution of $\mv{P}$, and $p_{2}(u,v)$ can be derived from the system
\begin{align}
p_{j}(u,v)=&\, p(j,1)P(M_1>v) + P(M_1\le v)\,p(j,1)\,p_1(u,v)\nonumber\\
& + P(m_1\ge u)\,p(j,2)\,p_2(u,v), \, j=1,2.
\label{eq:p2}
\end{align}
For more details see \cite{Maghsood3}.
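A numerical sketch of Eqs.~(\ref{eq:D7}), (\ref{eq:mu1}) and~(\ref{eq:p2}): the $2\times2$ linear system is solved for $(p_1,p_2)$, $\mu^{osc}$ is assembled piecewise, and the double integral is approximated on a grid. The transition matrix of $Z^*$ and the Rayleigh tails below are placeholder inputs, not values fitted in this paper.
\begin{verbatim}
import numpy as np

sM, sm = 2.2, 2.3                                # Rayleigh scales
P_M_gt = lambda v: np.exp(-0.5 * (v / sM) ** 2)  # P(M_1 > v), v >= 0
P_m_lt = lambda u: np.exp(-0.5 * (u / sm) ** 2)  # P(m_1 < u), u <= 0
P2 = np.array([[0.3, 0.7],                       # placeholder matrix P
               [0.7, 0.3]])

def stationary(P):
    w, V = np.linalg.eig(P.T)
    pi = np.real(V[:, np.argmax(np.real(w))])
    return pi / pi.sum()

def p_vec(u, v):
    # solve the linear system for (p_1(u,v), p_2(u,v))
    b, c = 1.0 - P_M_gt(v), 1.0 - P_m_lt(u)  # P(M1<=v), P(m1>=u)
    A = np.eye(2) - np.column_stack((b * P2[:, 0], c * P2[:, 1]))
    return np.linalg.solve(A, P2[:, 0] * P_M_gt(v))

def mu_osc(u, v, pi):
    if v < 0:
        return 0.5 * pi[1] * P_m_lt(u)
    if u > 0:
        return 0.5 * pi[0] * P_M_gt(v)
    return 0.5 * pi[1] * P_m_lt(u) * p_vec(u, v)[1]

def damage_intensity(beta=3.0, lim=12.0, n=200):
    pi = stationary(P2)
    grid = np.linspace(-lim, lim, n)
    h = grid[1] - grid[0]
    total = sum((v - u) ** (beta - 2) * mu_osc(u, v, pi) * h * h
                for u in grid for v in grid if v > u)
    return beta * (beta - 1) * total
\end{verbatim}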
\section{Examples}
We evaluate the proposed algorithm on simulated and measured data sets. We consider the steering events occurring when the vehicle is driving at a speed higher than 10 km/h, e.g.\ when driving in curves. We estimate the number of left and right turns for a customer. We further investigate the damage caused by steering events and compute the expected damage using the online estimate of the transition matrix.
In our simulation study, a training set containing all steering events is used to estimate the parameters of the model. We also use the simulation study to show the effects of different values of the forgetting factor $\gamma$.
Finally, we use measured data from a dedicated field measurement on a Volvo truck. The measured signals come from the CAN (Controller Area Network) bus, a systematic data acquisition system that contains customer usage data.
\subsection{Simulation study}
We want to imitate a real journey through different road environments, such as city streets and highways. This is done by first generating a sequence of steering states using a Markov chain. We consider three states: right turn (RT), left turn (LT) and straight forward (SF). We set these events as three hidden states and construct the HMM based on them as follows. We assume that the probabilities of going from a right turn to a left turn, and vice versa, are small, and that most often a right or left turn is followed by straight-forward driving. It has also been assumed that the average duration of straight-forward driving is shorter on a city road than on a highway. Two different transition matrices, $\mv Q_{city}$ and $\mv Q_{highway}$, have been considered for city and highway driving, respectively:
\begin{math}
\mv Q_{city}=
\bordermatrix{&\textrm{RT}&\textrm{SF}&\textrm{LT} \cr
\textrm{RT} & 0.85 & 0.1 & 0.05\cr
\textrm{SF} & 0.025 & 0.95 & 0.025\cr
\textrm{LT} & 0.05 & 0.1 & 0.85\cr
},
\end{math}
\begin{math}
\mv Q_{highway}=
\bordermatrix{&\textrm{RT}&\textrm{SF}&\textrm{LT} \cr
\textrm{RT} & 0.90 & 0.08 & 0.02\cr
\textrm{SF} & 0.005 & 0.99 & 0.005\cr
\textrm{LT} & 0.02 & 0.08& 0.90\cr
}.
\end{math}\\
Second, we use the Laplace distribution to simulate the lateral acceleration signal, $Y_t$. The Laplace parameters $(\delta,\mu, \nu, \Sigma)$ for each state are set as follows (a minimal simulation sketch is given after the list):
\begin{itemize}
\item $\delta_{RT}=-\delta_{LT}=-1,\ \delta_{SF}=0$,
\item $\mu_{RT}=-\mu_{LT}=-0.5,\ \mu_{SF}=0$,
\item $\nu_{RT}=\nu_{LT}=10,\ \nu_{SF}=0.5$,
\item $\Sigma_{RT}=\Sigma_{LT}=0.2,\ \Sigma_{SF}=1$.
\end{itemize}
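A minimal sketch of this simulation step, combining the Markov chain with the scalar GAL mixture representation (state order RT, SF, LT; the seed and horizon are arbitrary):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
Q_city = np.array([[0.85, 0.10, 0.05],
                   [0.025, 0.95, 0.025],
                   [0.05, 0.10, 0.85]])
delta = np.array([-1.0, 0.0, 1.0])
mu    = np.array([-0.5, 0.0, 0.5])
nu    = np.array([10.0, 0.5, 10.0])
sig   = np.array([ 0.2, 1.0, 0.2])

def simulate(Q, T, z0=1):
    z, states, y = z0, [], []
    for _ in range(T):
        z = rng.choice(3, p=Q[z])
        G = rng.gamma(1.0 / nu[z], 1.0)
        y.append(delta[z] + G * mu[z]
                 + np.sqrt(G * sig[z]) * rng.standard_normal())
        states.append(z)
    return np.array(states), np.array(y)

states, accel = simulate(Q_city, 2000)
\end{verbatim}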
The fitted distributions for lateral acceleration values within each state are shown in Figure~\ref{fig:dislap}.
\begin{figure}[H]
\hspace{-1.2cm}
\begin{tabular}{ccc }
(a) & (b)& (c) \\
\includegraphics[width=4.2cm]{MarginalDensityRT.eps} &\includegraphics[width=4.2cm]{MarginalDensitySF.eps}&\includegraphics[width=4.2cm]{MarginalDensityLT.eps}
\end{tabular}
\caption{(a), (b) and (c) represent the Laplace distributions fitted on lateral acceleration values for right turns, straight forward and left turns respectively.}
\label{fig:dislap}
\end{figure}
We compare four different choices of $\gamma_t$ for the estimation of the transition matrix. First, we set $\gamma_t=1/t^{\alpha}$ with $\alpha=0.9$; this forgetting factor satisfies the convergence conditions given by Capp\'e \cite{Cappe1}. Second, we use three fixed values of $\gamma$: $0.01$, $0.002$ and $0.001$, corresponding to $R=0.9$ and $K=200, 1000$ and $2400$ (durations of about $2$ min, $10$ min, and $20$ min) in Eq.~(\ref{gam}). Figure~\ref{fig:EstTra1} shows the estimated diagonal elements of the transition matrices for one simulated signal. The simulated signal represents a journey on a city road, a highway, then again a city road and a highway, over $10^5$ seconds, where the sampling period is $1/2$ second. The straight thick black lines show the diagonal elements of the true transition matrices $\mv Q_{city}$ and $\mv Q_{highway}$.
\vspace{-3mm}
\begin{figure}[H]
\centering
\includegraphics[width=11.5cm]{Sim_city_highway_city_highway_update_gam3.eps}
\caption{Diagonal elements of online estimated transition matrix, simulated signal from City road+Highway+City road+Highway, with four different values of $\gamma$. Straight thick black lines show the diagonal elements of true transition matrices $\mv Q_{city}$ and $\mv Q_{highway}$. }
\label{fig:EstTra1}
\end{figure}
In Figure~\ref{fig:EstTra1}, one can see that the online algorithm with variable $\gamma$ cannot follow the changes of the parameters well and that the adaptation diminishes over time, as is to be expected. The fixed forgetting factors, however, seem to adapt well to the changing environment.
\subsubsection*{Expected number of events}
Here, we compute the expected number of turns. We simulate one hundred independent signals in order to investigate the accuracy of the online algorithm with different forgetting factors $\gamma$. As before, we choose four different forgetting factors, where the fixed values correspond to the weight $R=0.9$ given to the $K=200, 1000$ and $2400$ latest observations in Eq.~$(\ref{gam})$.
We perform 100 simulations and estimate the intensities of occurrences of turns by Eq.~$(\ref{IntEvents})$:
\begin{align}
&\eta_{LT}=\sum_{t=1}^{T}(\pi_{t,2} \hat{q}_t(2,3)+\pi_{t,1} \hat{q}_t(1,3)),\label{countEvents_L}\\
&\eta_{RT}=\sum_{t=1}^{T}(\pi_{t,2} \hat{q}_t(2,1)+\pi_{t,3} \hat{q}_t(3,1)).\label{countEvents_R}
\end{align}
In order to validate the results, we compute an error, defined as the difference between the estimated and observed number of turns in each simulation. The expected number of turns from the model (using $\mv Q_{city}$ and $\mv Q_{highway}$) is $\eta_{LT}=\eta_{RT}=2840$. The average numbers of observed left and right turns are $n_{LT}=2834$ and $n_{RT}=2836$, respectively. The average and the standard deviation of the errors over the 100 simulations are computed. The results are presented in Table~\ref{tab:DetectionOnline1}. According to the average error, the forgetting factor $\gamma_{t}=0.002$ performs best. However there is, surprisingly, only a small difference between all the fixed forgetting factors.
\begin{table}[H]
\caption{The expected number of turns estimated by online algorithm and Eqs.~$(\ref{countEvents_L}), (\ref{countEvents_R})$. The errors are the average of the differences between the estimated and observed number of turns.}
\label{tab:DetectionOnline1}
\footnotesize
\begin{center}
\begin{tabular}{|l | c c | c c | c c | c c |}
\hline
\multicolumn{9}{|c |}{Online algorithm}\\ \cline{1-9}
$\gamma_{t}$& \multicolumn{2}{c |}{ $1/t^{0.9}$}& \multicolumn{2}{c |}{ $0.01$} & \multicolumn{2}{c |}{ $0.002$}& \multicolumn{2}{c | }{ $0.001$} \\
\hline
Turns & $\eta_{LT}$ & $\eta_{RT}$ & $\eta_{LT}$ & $\eta_{RT}$ & $\eta_{LT}$ & $\eta_{RT}$ & $ \eta_{LT}$ & $\eta_{RT}$ \\
\hline
Mean\ Est. & 3236 & 3241 & 2928 & 2932 & 2882 & 2886 & 2920 & 2924\\
\hline
Mean\ Error & 402.48 & 405.30 & 94.40 & 96.46 & 48.45 & 49.93 & 86.84 & 88.68 \\
\hline
Std\ Error & 28.45 & 33.78 & 15.41 & 15.79 & 16.61 &17.77 & 20.65 & 21.43 \\
\hline
\end{tabular}
\end{center}
\end{table}
In our previous work, an HMM combined with the Viterbi algorithm~\cite{Viterbi} was used to identify the driving events. The Viterbi algorithm gives the reconstructed sequence of hidden states that maximizes its conditional probability given the observation sequence. In that approach, all data have to be used to estimate the driving events, which is thus not suitable for on-board usage in a vehicle. However, in order to compare the previously proposed approach with the online estimation and to evaluate the frequencies of driving events, we also compute the number of turns with the Viterbi algorithm for each simulation. The counted numbers of turns from the Viterbi algorithm are on average $\eta_{LT}=2923$ and $\eta_{RT}=2925$. One can see that the Viterbi algorithm overestimates the number of turns.
\subsubsection*{Damage investigation}
In this section we compute the damage intensity per kilometer based on the online estimate of the transition matrix. We use one of the simulated lateral acceleration signals in order to calculate the damage. The speed of the vehicle is taken to be 50 kilometers per hour and the mileage is 1000 km (for a sampling period of $1/2$ second). We split the signal into 1000 equally sized frames. For each frame, the expected number of turns is computed as $\Delta \eta_k=\eta_k-\eta_{k-1}$, where $\eta_k$ is the estimated number of turns occurring up to the $k^{th}$ frame. The expected damage based on turns for each frame is calculated by:
\begin{align*}
\Delta d_k=\Delta \eta_kd_k,
\end{align*}
where $d_k$ is the expected damage per turn, calculated by means of Eqs.~(\ref{eq:D7}) and~(\ref{eq:mu1}). The empirical distributions of $M_i$ and $m_i$ are used to calculate the intensity of interval crossings $\mu^{osc}(u,v)$. We use the online estimate of the transition matrix $\mv Q$ with $\gamma=0.002$ to estimate the transition matrix $\mv P$ by using Eqs.~(\ref{eq:p11}) and~(\ref{eq:p22}), see \ref{app:A}.
The result for the damage exponent $\beta=3$ is shown in Figure~\ref{fig:simdam2}. The straight thick red line shows $\Delta d_k(\mv Q_{true})$, which is the damage intensity computed using the model transition matrices $\mv Q_{city}$ and $\mv Q_{highway}$ for city and highway respectively. We can observe the change in damage between highway and city road. As might be expected, the damage intensities (per km) estimated for the city are higher than for the highway, since the number of turns occurring on a city road is larger than on a highway.
\begin{figure}[H]
\centering
\includegraphics[width=12cm]{sim_city_high_city_high_update_gam4_deltad_k_emprical_dist.eps}
\caption{Damage intensity per km according to the online estimation of transition matrix with $\gamma=0.002$. The upper plot shows the results for damage exponent $\beta=3$. The straight thick red line shows $\Delta d_k(\mv Q_{true})$ which is the damage intensity computed using model transition matrices $\mv Q_{city}$ and $\mv Q_{highway}$ for city and highway, respectively.}
\label{fig:simdam2}
\end{figure}
Further, the expected damage from the model (theoretical damage) is compared with the total damage and the damage calculated from the reduced load. The expected damage for the whole signal, based on the online estimate of the transition matrix, equals $\sum ^{1000}_{k=1}\Delta d_k$. The total damage is calculated from the lateral acceleration signal using the rainflow method. The damage evaluated for the load (lateral acceleration), for the reduced load, and the expected damage are compared in Table~\ref{tab:Dam1}. The numerical integration in~(\ref{eq:D7}), as well as the rainflow cycle counting, has been done using the WAFO (Wave Analysis for Fatigue and Oceanography) toolbox, see \cite{Brodtkorb2000.C1,WAFO2011.M1}.
\begin{table}[H]
\caption{Comparison of damage computed for the simulated load, the corresponding reduced load and the expected damage.}
\label{tab:Dam1}
\begin{center}
\begin{tabular}{|l| c| c|c| }
\hline
\multirow{ 2}{*}{Damage} & \multirow{ 2}{*}{Total} & \multirow{ 2}{*}{Reduced load} & Expected\\
& & & Online\ with $\gamma=0.002$\\
\hline
$\beta=3$ & $1.88\cdot 10^6$ & $1.68\cdot 10^6$ & $1.68\cdot 10^6$ \\
\hline
$\beta=5$ & $1.77\cdot 10^8$ & $1.72\cdot 10^8$ & $1.67\cdot 10^8$ \\
\hline
\end{tabular}
\end{center}
\end{table}
Figure~\ref{fig:simdam2} and Table~\ref{tab:Dam1} demonstrate the high accuracy of the proposed approach for estimating the expected damage for the studied load. Of course, this load is a realistic mathematical model of a real load, not a measurement. In the next section we apply our method to estimate the steering events and compute the damage for a load measured on a Volvo truck.
\subsection{On-board logging data from Volvo}
To evaluate the method on a real data set, we study field measurements coming from a Volvo Truck. We use the measured lateral acceleration signal from the CAN (Controller Area Network) bus data.
We fit the Laplace distribution to the lateral acceleration within each steering state. To estimate the Laplace distribution parameters, we need a training set which contains the full history of the curves. We detect the events manually by looking at video recordings from the truck cabin to see what happened during the driving. The manual detections are not completely correct because of visual errors and the low quality of the videos used for the manual detection.
The online algorithm is used to count the number of left and right turns. Figure~\ref{fig:meas1} shows the estimation results using the online algorithm with $\gamma_t=0.0008\, (R=0.8, K = 2000)$ for the measured signal. It is interesting to note that there is a sudden change in the driving environment after around 5000 sec.
\begin{figure}[H]
\centering
\includegraphics[width=11cm]{Meas_CivaD1_update_gam00008.eps}
\caption{Diagonal elements from the online estimation of the transition matrix with $\gamma_t=0.0008$ for the measured data.}
\label{fig:meas1}
\end{figure}
The expected numbers of left and right turns computed by the online algorithm are $\eta_{LT}=228$ and $\eta_{RT}=241$, respectively.
\subsubsection*{Damage investigation}
Here, we compute the damage intensity based on the model. In order to do so, we split the data into frames containing $250$ seconds (approximately 4--5 km) of measurement, and we compute the distance based on the average speed in each frame. Figure~\ref{fig:measdam2} shows the expected damage based on turns, computed by $\Delta d_k=\Delta \eta_kd_k$ where $\Delta \eta_k=\eta_k-\eta_{k-1}$ and $\eta_k$ is the estimated number of turns occurring up to the $k^{th}$ frame. Here, the results are based on the damage exponent $\beta=3$.
\begin{figure}[H]
\centering
\includegraphics[width=12cm]{real_data_update_gam3_deltad_k74.eps}
\caption{Damage intensity with damage exponent $\beta=3$ regarding mileage. The online estimation of transition matrix with $\gamma=0.0008$ has been used to estimate the expected damage.}
\label{fig:measdam2}
\end{figure}
The total expected damage using the online estimate of the transition matrix can be computed as $\sum_{k}\Delta d_k$. The damage evaluated for the load (lateral acceleration), for the reduced load, and the expected damage are compared in Table~\ref{tab:Dam2}.
The Rayleigh distributions which have been fitted to positive and negative values of the reduced load are
\begin{equation*}
P(M_1>v)=e^{-\frac{1}{2}\left(\frac{v}{2.2}\right)^2},\, v\ge 0, \qquad P(m_1<u)=e^{-\frac{1}{2}\left(\frac{u}{2.3}\right)^2},\, u\le 0.
\end{equation*}
\begin{table}[H]
\caption{Comparison of damage values computed from the measured load, the corresponding reduced load and the expected damage.}
\label{tab:Dam2}
\begin{center}
\begin{tabular}{|l| c| c|c|}
\hline
\multirow{ 2}{*}{Damage} & \multirow{ 2}{*}{Total} & \multirow{ 2}{*}{Reduced load} & Expected\\
& & & Online\ with $\gamma=0.0008$ \\
\hline
$\beta=3$ & $8.1\cdot 10^3$ & $7.4\cdot 10^3$ & $7.7\cdot 10^3$ \\
\hline
$\beta=5$ & $1.5\cdot 10^5$ & $1.5\cdot 10^5$ & $1.9\cdot 10^5$ \\
\hline
\end{tabular}
\end{center}
\end{table}
We also compare the damage accumulation process from the model, $\sum_{k}\Delta d_k$, with the empirically accumulated damage in the signal. The expected damage based on the fitted model will be called the theoretical damage. Figure~\ref{fig:measdam3} shows the theoretical and observed accumulated damage processes. It can be seen that the accumulated damage from the model is close to the observed damage and that there are two damage rates in both the theoretical and the observed damage processes.
\begin{figure}[H]
\centering
\includegraphics[width=12cm]{real_data_accumulated_dam_total_expected_k74.eps}
\caption{The theoretical and observed accumulated damage processes for damage exponent $\beta=3$. The online estimation of transition matrix with $\gamma=0.0008$ has been used to estimate the expected damage.}
\label{fig:measdam3}
\end{figure}
The results shown in Figure~\ref{fig:measdam3} and Table~\ref{tab:Dam2} demonstrate the accuracy of the proposed methodology for this measured load.
\section{Conclusion}
In this article, we have derived a method to estimate the number of driving events for a vehicle using the CAN data through the use of an HMM. The method uses an online EM algorithm to estimate the parameters of the HMM. The online version has three major advantages over the regular EM algorithm, making it possible to implement the method on-board a vehicle: the computational complexity of each iteration of the algorithm is $\mathcal{O}(1)$, making it a computationally tractable method; the parameters are estimated without the need to store any data; and the formulation of the online algorithm allows for an adaptive parameter estimation method, using a fixed forgetting factor, so that the parameters can adapt to a changing driving environment.
The proposed estimation algorithm was validated using simulated and measured data sets. The results show that the online algorithm works well and can adapt to a changing environment when the driving conditions are not constant over time.
\section*{Acknowledgment}
We are thankful to Prof.\ Igor Rychlik and Dr.\ P\"ar Johannesson for their useful ideas and helpful suggestions in this study. We would like to thank Volvo Trucks for supplying the data in this study and to the members in our research group at Volvo for their valuable advice. Finally, we gratefully acknowledge the financial support from VINNOVA. The second author has been supported by the Knut and Alice Wallenberg foundation.
\nocite{*}
\bibliographystyle{plain}
\section{Introduction}\label{sec:intro}
The top quark is the heaviest known elementary particle and as such has a privileged interaction with the Higgs boson.
Its mass, \mtop, is hence an important input to global fits of electroweak parameters together with measurements of the \PW~boson and Higgs~boson masses, and serves as an important cross-check of the consistency of the standard model (SM).
Moreover, by comparing precision electroweak measurements and theoretical predictions, a precisely measured \mtop{} can place strong constraints on contributions from physics beyond the SM.\@
The top quark is the only colored particle that decays before forming a color-neutral state through hadronization and thus presents a unique opportunity to directly probe the properties of color charges.
Direct determinations of the mass of the top quark have been carried out with ever-increasing precision since it was discovered at the Tevatron by the CDF and D0 experiments~\cite{Abe:1995hr,Abachi:1995iq}.
More recently, the most precise measurements reconstruct top quarks in hadronic decays and calibrate the energy of hadronic jets in-situ, using constraints from the reconstructed \PW{}~boson mass~\cite{Chatrchyan:2012cz,Chatrchyan:2013xza,Aad:2015nba}.
Other analyses exploit the purity of leptonic top quark decays and constrain the neutrino momenta analytically~\cite{Chatrchyan:2012ea,Aad:2015nba}.
All four experiments where the top quark mass is being studied (ATLAS, CDF, CMS, and D0) have combined their results in a world average~\cite{worldcomb}.
A recent combination of measurements at 7 and 8\TeV\ by the CMS experiment yields the best determination of the top quark mass to date, with a result of $172.44\pm0.48\GeV$, \ie\ reaching a precision of $0.28\%$~\cite{Khachatryan:2015hba}.
The most precise top quark mass measurements are systematically limited by experimental uncertainties related to the calibration of reconstructed jet energies and their resolution, with other important uncertainties concerning the modeling of the fragmentation and hadronization of bottom quarks.
To improve further the precision of the value of the top quark mass and our understanding of the modeling of top quark decays, the development and application of alternative and complementary methods is essential.
Complementarity to ``standard'' methods can be gained by using observables with reduced sensitivity to certain sources of systematic uncertainties, such as the $\PQb$~hadron decay length~\cite{Hill:2005zy,Abulencia:2006rz,Aaltonen:2009hd} or kinematic properties of leptons~\cite{Frixione:2014ala}, or by extracting the mass from endpoints of kinematic distributions~\cite{Chatrchyan:2013boa} or from the production cross section~\cite{Khachatryan:2016mqs}.
This paper describes a measurement performed with the CMS experiment at the CERN LHC that minimizes the sensitivity to experimental systematic uncertainties such as jet energy scale.
This is achieved by constructing a mass-dependent observable that uses only the individually-measured momenta of charged decay products (tracks) of the top quark.
The mass of the top quark is estimated by measuring the invariant mass of a charged lepton from the \PW~boson decay and the tracks used in the reconstruction of a secondary vertex (SV) resulting from the long lifetime of $\PQb$~hadrons.
The dependence of the observable on the top quark mass is calibrated using simulated Monte Carlo (MC) events.
This approach is similar to a proposed measurement using the invariant mass of leptons and reconstructed \PJGy{} mesons~\cite{Kharchilava:1999yj}, but requires a lower integrated luminosity to become sensitive.
The paper is organized as follows: Section~\ref{sec:experiment} describes the experiment, the collected and simulated data, and the event reconstruction and selection; Section~\ref{sec:modeling} describes control region studies of \cPqb\ quark fragmentation and secondary vertex reconstruction; Section~\ref{sec:topmass} describes the measurement of the top quark mass and the assigned systematic uncertainties; and Section~\ref{sec:conclusions} concludes and gives an outlook of prospects in the ongoing LHC run.\@
\section{Experimental setup}\label{sec:experiment}
\subsection{The CMS detector}
The central feature of the CMS apparatus is a superconducting solenoid of 6~m internal diameter, providing a magnetic field of 3.8~T.
Within the solenoid volume are a silicon pixel and strip tracker, a lead tungstate crystal electromagnetic calorimeter (ECAL), and a brass and scintillator hadron calorimeter (HCAL), each composed of a barrel and two endcap sections.
The tracker has a track-finding efficiency of more than 99\% for muons with transverse momentum $\pt > 1\GeV$ and pseudorapidity $|\eta| < 2.5$.
The ECAL is a fine-grained hermetic calorimeter with quasi-projective geometry, and is segmented in the barrel region of $|\eta| < 1.48$ and in two endcaps that extend up to $|\eta| < 3.0$.
The HCAL barrel and endcaps similarly cover the region $|\eta| < 3.0$.
In addition to the barrel and endcap detectors, CMS has extensive forward calorimetry.
Muons are measured in gas-ionization detectors which are embedded in the flux-return yoke outside of the solenoid.
A more detailed description of the CMS detector, together with a definition of the coordinate system used and the relevant kinematic variables, can be found in Ref.~\cite{Chatrchyan:2008zzk}.
\subsection{Data and simulation}
This analysis makes use of a large sample of top quark pair, \ttbar, event candidates with either one or two isolated charged leptons (electrons or muons) in the final state.
In the semileptonic (only one lepton) case,
at least four reconstructed hadronic jets are required,
whereas in the dilepton case at least two jets are required.
Events are selected from the data sample acquired in proton-proton ($ \Pp \Pp $) collisions at a center-of-mass energy of $\sqrt{s}=8\TeV$ by the CMS experiment throughout 2012, corresponding to an integrated luminosity of 19.7\fbinv.
{\sloppy
At that energy the predicted \ttbar\ cross section in $ \Pp \Pp $ collisions,
computed at the next-to-next-to-leading-order (NNLO) quantum chromodynamics (QCD) and including corrections and next-to-next-to-leading-logarithmic resummation accuracy~\cite{Czakon:2013goa}, is
\mbox{$245.8^{+8.7}_{-10.6}~{\rm pb}$}
for a top quark mass of $173\GeV$,
where the uncertainty covers missing higher orders in the calculation as well as variations of the parton distribution functions (PDFs).
Signal \ttbar\ events are simulated with the leading-order (LO) \MADGRAPH\ (v5.1.3.30) generator~\cite{Alwall:2014hca} matched to LO \PYTHIA\ (v6.426)~\cite{Sjostrand:2006za} for parton showering and fragmentation.
The $\tau$ lepton decays are simulated with the \TAUOLA\ package (v27.121.5)~\cite{Was:2000st}.
The LO CTEQ6L1 PDF set~\cite{Pumplin:2002vw} and the {\rm Z2*}\xspace\ underlying event tune~\cite{Chatrchyan:2011id} are used in the generation.
The {\rm Z2*}\xspace\ tune is derived from the Z1 tune~\cite{Field:2010bc}, which uses the CTEQ5L PDF set, whereas {\rm Z2*}\xspace\ adopts CTEQ6L.
Matrix elements describing up to three partons in addition to the \ttbar\ pair are included in the generator used to produce the simulated signal samples, and the MLM prescription~\cite{Mangano:2006rw} is used for matching of matrix-element jets to parton showers.
Following the generator chain, the response of the CMS detector
is simulated using \GEANTfour\ (v.9.4p03) for both signal and background samples~\cite{Agostinelli:2002hh}.
}
The most relevant background for the semileptonic channel is the production of a \PW~boson in association with hadronic jets.
This background is modeled with \MADGRAPH\ and normalized to a total cross section of 36.3\unit{nb}, computed with \textsc{fewz} (v3.1)~\cite{Gavin:2012sy} at NNLO.\@
Multijet QCD processes are also relevant and studied in simulations using \PYTHIA.\@
Single top quark processes are modeled with \POWHEG\ (v1.0, r1380)~\cite{Nason:2004rx,Frixione:2007vw,Alioli:2010xd,Alioli:2009je,Re:2010bp} with the CTEQ6M PDF set and normalized to the cross sections of 22.2, 86.1, and 5.6\unit{pb} for the $\cPqt\PW$, $t$, and $s$ production channels, respectively~\cite{Kidonakis:2012eq}.
Charged-lepton production from Drell--Yan (DY) processes is modeled with \MADGRAPH\ for dilepton invariant masses above 10\GeV\ and is normalized to a cross section of 4.4\unit{nb},
computed with \textsc{fewz}~\cite{Gavin:2010az,Li:2012wna}.
The production of $\PW \PW$, $\PW \PZ$, and $\PZ \PZ$ pairs is modeled with \PYTHIA\ and normalized to cross sections of 54.8, 33.2, and 17.7\unit{pb}, respectively, computed at next-to-leading order (NLO) accuracy using \MCFM\ (v6.6)~\cite{Campbell:2010ff}.
All simulated samples include the effects of pileup, \ie\ multiple $ \Pp \Pp $ collisions in the same and neighboring beam crossings (within 50\unit{ns}) as the generated hard interaction.
The distribution of the number of pileup events in simulation matches that in the data and has an average of about 21 interactions per bunch crossing.
\subsection{Event reconstruction and selection}\label{sec:eventsel}
The event selection is designed to identify the \ttbar\ final state in the semileptonic and dileptonic channels.
Single-lepton triggers are used to collect the data samples for the semileptonic channels, with a minimum \pt\ of 27\GeV for electrons and 24\GeV for muons.
In the dilepton channel double-lepton triggers are required with a minimum \pt\ of 17 and 8\GeV for the leading and sub-leading leptons, respectively.
In both cases isolation and identification criteria are required at the trigger level.
More information can be found in Refs.~\cite{Khachatryan:2016mqs,Khachatryan:2016yzq}.
The events are reconstructed using a particle-flow (PF) algorithm that optimally combines the information from all subdetectors
to reconstruct and identify all individual particles in the event~\cite{CMS-PAS-PFT-09-001,CMS-PAS-PFT-10-001}.
In addition, improved electron and muon reconstruction, identification, and calibration algorithms are employed, as described in Refs.~\cite{Khachatryan:2015hwa,Chatrchyan:2013sba}.
Electron candidates are required to have $\pt>30\GeV$ and to be in the fiducial region of the detector, \ie\ $\vert\eta\vert\leq2.4$.
Muon candidates are selected with $\pt>26\GeV$ and $\vert\eta\vert\leq2.1$.
In the dilepton channel these requirements are relaxed to $\pt>20\GeV$ and $\vert\eta\vert\leq 2.4$ for all lepton candidates.
The track associated with each lepton candidate is required to have an impact parameter compatible with prompt production.
A particle-based relative isolation is computed for each lepton and is corrected on an event-by-event basis for contributions from pileup events~\cite{Khachatryan:2016mqs}.
The scalar sum of the transverse momenta of all reconstructed particle candidates---except for the leptons themselves---within a cone of size $\Delta R=\sqrt{\smash[b]{(\Delta \eta)^{2}+(\Delta \phi)^{2}}}<0.3$ ($<0.4$ for muons) built around the lepton direction must be less than 10\% of the electron \pt\ and less than 12\% of the muon \pt.
In the dilepton channels, the electron isolation threshold is relaxed to less than $15\%$.
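For illustration, a minimal Python sketch of such a particle-based relative isolation (the event structures are hypothetical stand-ins, not the CMS software):
\begin{verbatim}
# Minimal sketch (hypothetical structures): relative isolation of a
# lepton from the PF candidates in a cone around its direction.
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance Delta R = sqrt(d_eta^2 + d_phi^2)."""
    dphi = math.remainder(phi1 - phi2, 2.0 * math.pi)  # wrap to [-pi, pi]
    return math.hypot(eta1 - eta2, dphi)

def rel_isolation(lepton, pf_candidates, cone=0.3):
    """Scalar pt sum of PF candidates (excluding the lepton itself)
    within the cone, divided by the lepton pt."""
    sum_pt = sum(c["pt"] for c in pf_candidates
                 if c is not lepton
                 and delta_r(lepton["eta"], lepton["phi"],
                             c["eta"], c["phi"]) < cone)
    return sum_pt / lepton["pt"]

# electrons pass if rel_isolation(..., cone=0.3) < 0.10,
# muons pass if rel_isolation(..., cone=0.4) < 0.12
\end{verbatim}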
Events in the semileptonic channel are required to have exactly one selected lepton, with a veto on additional leptons.
In the dilepton channel, at least two selected leptons are required.
Jets are reconstructed using the anti-\kt algorithm with a distance parameter of $0.5$ and taking PF candidates as input to the clustering~\cite{Cacciari:2008gp}.
The jet momentum is defined as the vectorial sum of all particle momenta associated with the jet and is determined from the simulation to be within 5--10\% of the generated jet momentum at particle level over the whole \pt\ range and detector acceptance.
An offset correction is applied to take into account the extra energy clustered into the jets due to pileup, following the procedure described in Refs.~\cite{Cacciari:2008gn,Cacciari:2007fd}.
Jet energy scale corrections are derived from the simulation and are cross-checked with in situ measurements of the energy balance in dijet and photon+jet events.
The selected jets are required to have a corrected \pt\ greater than $30\GeV$ and $\vert\eta\vert\leq 2.5$.
Jets within $\Delta R = 0.4$ of any selected lepton are rejected, but the event is retained if it passes the other selection criteria.
The magnitude of the vectorial sum of the transverse momenta of all PF candidates reconstructed in the event is used as an estimator of the energy imbalance in the transverse plane, \MET.\@
For each jet, the charged PF candidates used in the clustering are given as input to an adaptive vertex fitter algorithm to reconstruct secondary vertices~\cite{Fruhwirth:2007hz}.
Secondary vertex candidates that share $65\%$ or more of their tracks with the primary vertex (defined as the vertex with highest $\sum{\pt^2}$ of its associated tracks) or that have a flight direction outside a $\Delta R=0.5$ cone around the jet momentum are rejected.
Furthermore, if the radial distance from the primary vertex is greater than 2.5~cm, candidates with an invariant mass consistent with that of a \PKz, or higher than 6.5\GeV, are rejected (assuming each decay particle to have the rest mass of a charged \Pgp).
Events without any jet containing a valid secondary vertex candidate are discarded from the analysis.
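For illustration, a minimal Python sketch of the vertex vetoes described above (hypothetical data structures; the \PKz\ mass window is an assumed value, as the text does not specify one):
\begin{verbatim}
# Minimal sketch (hypothetical structures, not CMS software) of the
# secondary-vertex vetoes described in the text.
K0_MASS = 0.498    # GeV
K0_WINDOW = 0.03   # GeV; assumed veto window, not specified in the text

def keep_sv(sv, jet_dr, primary_tracks):
    """sv: dict with 'tracks' (set of track ids), 'mass' (GeV, with all
    tracks assigned the charged-pion mass), 'radial_distance_cm';
    jet_dr: Delta R between SV flight direction and jet momentum."""
    shared = len(sv["tracks"] & primary_tracks) / len(sv["tracks"])
    if shared >= 0.65:            # too many tracks shared with the PV
        return False
    if jet_dr >= 0.5:             # flight direction outside the jet cone
        return False
    if sv["radial_distance_cm"] > 2.5:
        if abs(sv["mass"] - K0_MASS) < K0_WINDOW or sv["mass"] > 6.5:
            return False          # K0-compatible or too heavy
    return True
\end{verbatim}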
Secondary vertices are used together with track-based lifetime information in a likelihood ratio algorithm to provide a discriminant for jets originating from the hadronization of a \cPqb~quark (``\cPqb\ jets'')~\cite{Chatrchyan:2012jua}.
The chosen threshold on the discriminant output selects genuine \cPqb~jets with an efficiency of about 60\% and charm-initiated jets with an efficiency of about 15\%, while the probability to misidentify a light-flavor jet as a \cPqb\ jet is about 1.5\%.
Jets passing this selection are referred to as \cPqb-tagged.
Events in the three dilepton channels (\ensuremath{\Pe\Pgm}\xspace, \ensuremath{\Pe\Pe}\xspace, and \ensuremath{\Pgm\Pgm}\xspace) are selected with at least two jets, of which at least one is required to have a reconstructed secondary vertex.
The dilepton invariant mass is required to be greater than $20\GeV$ to remove low-mass QCD resonances.
To suppress contributions from DY production in the \ensuremath{\Pe\Pe}\xspace\ and \ensuremath{\Pgm\Pgm}\xspace\ channels, the dilepton mass is further required to differ by at least $15\GeV$ from the \PZ~boson mass ($91\GeV$), and $\MET>40\GeV$ is required.
In the two semileptonic channels, events are selected with at least four jets, of which at least one has a reconstructed secondary vertex and one more has either another secondary vertex or is \cPqb-tagged.
Table~\ref{tab:eventyields} shows the number of selected data events in the five channels and the purity of events containing top quarks as expected from simulation.
Figure~\ref{fig:lxy} shows the distribution of the transverse decay length, \lxy, between the secondary vertex reconstructed from charged-particle tracks inside the jets selected for this analysis and the primary vertex of each event.
Good agreement is observed between data and expectations based on $\mtop = 172.5\GeV$.
The background expectations are obtained from the simulation, except for the multijet background which is determined from a control region in the data, as described in Section~\ref{ssec:signalmodel}.
\begin{table}[htp]
\centering
\topcaption{
Number of observed events and expected purity of top quark production (\ttbar\ and single top quarks) for the five channels investigated in this analysis.
}
\label{tab:eventyields}
\begin{scotch}{lccccc}
& \ensuremath{\Pe\Pgm}\xspace\ & \ensuremath{\Pe\Pe}\xspace\ & \ensuremath{\Pgm\Pgm}\xspace\ & \ensuremath{\Pe}\xspace\ & \ensuremath{\Pgm}\xspace\ \\
\hline
Observed events & 31\,639 & 9\,558 & 10\,674 & 103\,586 & 117\,198 \\
Expected purity & 98.6\% & 95.8\% & 95.4\% & 93.7\% & 92.8\% \\
\end{scotch}
\end{table}
\begin{figure}[htp]
\centering
\includegraphics[width=0.45\textwidth]{Figure_001-a}
\includegraphics[width=0.45\textwidth]{Figure_001-b}
\caption{
Distributions of the transverse decay length of secondary vertices with respect to the primary vertex in dilepton (\cmsLeft) and semileptonic channels (\cmsRight).
The expectations from simulation and estimates from the data for the multijet background are compared to the reconstructed data.
The last bin contains the overflow events.
}
\label{fig:lxy}
\end{figure}
\section{Analysis of \texorpdfstring{\cPqb\ quark}{b quark} fragmentation in data}\label{sec:modeling}
The crucial objects used in this measurement are the charged leptons from a \PW{}~boson decay and the charged decay products of a $\PQb$~hadron, forming a reconstructed secondary vertex.
While the reconstruction of leptons is well-controlled in the experiment, the modeling of hadronization of the colored decay products of the top quark is subject to theoretical uncertainties.
These uncertainties affect the kinematic properties of the produced tracks, as well as their flavor composition and multiplicity.
The parton-to-hadron momentum transfer in the hadronization of \cPqb{} quarks---referred to in the following as \cPqb\ quark fragmentation---has been measured before in $\Pep\Pem$ collisions by the ALEPH, DELPHI, OPAL, and SLD Collaborations~\cite{Heister:2001jg,Abbiendi:2002vt,DELPHI:2011aa,Abe:1999ki,Abe:2002iq}, and in \Pp{}\Pap{} collisions by the CDF Collaboration~\cite{Affolder:1999iq}.
However, no measurement at the LHC has been published so far.
In this section, two complementary studies are presented that attempt to constrain the uncertainties from the modeling of \cPqb\ quark fragmentation, which are expected to be the main contributors to the final uncertainty in this top quark mass measurement.
These studies constitute a first step towards measuring the \cPqb\ quark fragmentation using \ttbar{} events, but, as will become clear, the 2012 LHC data do not provide the necessary statistical precision, and significant constraints on the \cPqb\ quark fragmentation will be possible only with future data.
In this study we compare the \PYTHIA\ {\rm Z2*}\xspace\ tune, used by the CMS experiment at 8\TeV~\cite{Chatrchyan:2011id}, with an updated version that includes the $\Pep\Pem$ data to improve the description of the fragmentation.
Without the inclusion of these data, the default {\rm Z2*}\xspace\ \cPqb\ quark fragmentation function is found to be too soft.
The $r_{\cPqb}$ parameter in \PYTHIA\ (\verb|PARJ(47)|) can be optimized to fit the $\Pep\Pem$ data using the \textsc{Professor} tool~\cite{Buckley:2009bj},
resulting in a value of $0.591^{+0.216}_{-0.274}$.
In contrast, the default central value used in {\rm Z2*}\xspace\ is 1.0~\cite{Seidel:2015vla}.
In this analysis, the improved tune using the
$r_{\cPqb}$ central value of 0.591 (and variations within the uncertainty band) is denoted as \ztwostar\,LEP $r_{\PQb}$\xspace\ (\ztwostar\,LEP $r_{\PQb}$\xspace~$\!^{\pm}$) and is used to calibrate the measurement and evaluate the systematic uncertainty associated with the calibration.
For completeness, we also include other alternatives of the {\rm Z2*}\xspace\ tune using the Peterson and Lund parameterizations~\cite{Sjostrand:2006za}.
All the considered \PYTHIA\ tunes use the so-called Lund string fragmentation model~\cite{Andersson:1983ia}.
The impact on the measurement of \mtop\ when using the alternative cluster model~\cite{Webber:1983if,Winter:2003tt} is discussed in Section~\ref{sssec:theosysts}.
\subsection{Secondary vertex properties \texorpdfstring{in $\PZ$+jets and \ttbar\ events}{in Z+jets and t-tbar events}}
Events with a leptonically-decaying \PZ\ boson recoiling against hadronic jets provide an independent and low-background sample to study the properties of secondary vertices.
Candidate \PZ\ events are selected by requiring two opposite-sign leptons with an invariant mass compatible with the \PZ{}~boson mass within 15\GeV.
To minimize effects from mismodeling of kinematic properties of the \PZ{}~boson, events are reweighted such that the predicted $\pt{}(\PZ)$ distribution reflects the one observed in the data.
Furthermore, events are required to have a leading jet with $\pt>30\GeV$ that is spatially separated from the \PZ~boson candidate by $\Delta R>2.1$.
The flavor composition of jets with reconstructed secondary vertices in such events changes with the number of tracks associated with the vertex.
From simulation, we expect vertices with two tracks to predominantly correspond to jets from light and \cPqc~quarks, with the fraction of jets from \cPqb{} quarks increasing to above 90\% for vertices with five or more tracks.
Several observables of secondary vertex kinematic properties are investigated for their sensitivity to modeling of \cPqb\ quark fragmentation.
Of those, the highest sensitivity is achieved when studying the ratio of the secondary vertex (SV) transverse momentum---\ie\ the transverse component of the vectorial sum of all charged particle momenta used in the reconstruction of the vertex---to the total transverse momentum of the jet carried by charged particles,
\begin{equation*}
\mathcal{F}_\mathrm {ch} = \frac{\pt(\mathrm{SV})}{\vert\sum_{\mathrm{ch}} \vec{\pt}\vert}.
\end{equation*}
Effects arising from mismodeling of the overall kinematic properties of the event are canceled, to first approximation, by studying the ratio of the two momenta, in which the secondary vertex serves as a proxy for the $\PQb$~hadron and the charged particles represent the full momentum of the initial \cPqb{} quark.
Note that this observable is not sensitive to variations in the jet energy scale, as it makes use only of the charged constituents of the selected jets.
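For illustration, a minimal Python sketch of this observable (hypothetical event records, not the analysis code):
\begin{verbatim}
# Minimal sketch: the charged momentum fraction
# F_ch = pt(SV) / |vector sum of charged-constituent pt|.
import math

def f_ch(sv_tracks, charged_constituents):
    """Both arguments are lists of (pt, phi) pairs; the SV pt is the
    transverse component of the vector sum of its tracks."""
    def vec_sum_pt(parts):
        px = sum(pt * math.cos(phi) for pt, phi in parts)
        py = sum(pt * math.sin(phi) for pt, phi in parts)
        return math.hypot(px, py)
    return vec_sum_pt(sv_tracks) / vec_sum_pt(charged_constituents)
\end{verbatim}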
The observed and predicted distributions for $\mathcal{F}_\mathrm{ch}$ in $\PZ$+jets events are shown in Fig.~\ref{fig:svchfrac} (top), separately for vertices with three, four, and five tracks.
For each plot the average of the distribution in the data is compared to the prediction of the simulation using different \cPqb\ fragmentation tunes.
The data appear to favor softer fragmentation shapes such as the {\rm Z2*}\xspace\ and Peterson tunes.
However, in this selection a significant fraction of the selected jets stems from the hadronization of light and charm quarks, which are not affected by the event reweighting procedure used to compare the different tunes.
Likewise, the \ztwostar\,LEP $r_{\PQb}$\xspace\ tune only affects the simulated fragmentation of \cPqb\ quarks and was obtained using LEP data enriched in jets from \cPqb\ quark hadronization, and hence is not expected to correctly describe charm and light quark fragmentation.
\begin{figure*}[htp]
\centering
\includegraphics[width=0.32\textwidth]{Figure_002-a.pdf}
\includegraphics[width=0.32\textwidth]{Figure_002-b.pdf}
\includegraphics[width=0.32\textwidth]{Figure_002-c.pdf}\\
\includegraphics[width=0.32\textwidth]{Figure_002-d.pdf}
\includegraphics[width=0.32\textwidth]{Figure_002-e.pdf}
\includegraphics[width=0.32\textwidth]{Figure_002-f.pdf}
\caption{
Distributions of the ratio of the transverse momentum of secondary vertices to the charged component of the jet with three, four, and five tracks (from left to right) in $\PZ$+jets dilepton (top) and \ttbar\ dilepton events (bottom), compared to the expected shape using the \ztwostar\,LEP $r_{\PQb}$\xspace\ fragmentation tune. In each plot, the top panels compare the average of the distribution measured in data and its statistical uncertainty (shaded area) with that expected from different choices of the \cPqb\ quark fragmentation function in \PYTHIA.\@ For \ztwostar\,LEP $r_{\PQb}$\xspace, the error bar represents the $\pm$ variations of \ztwostar\,LEP $r_{\PQb}$\xspace.
}
\label{fig:svchfrac}
\end{figure*}
In the sample of \ttbar{} events, selected as described in Section~\ref{sec:eventsel}, and used later for the top quark mass extraction, the selected jets are expected to contain a significantly larger fraction of \cPqb{} quarks.
From simulation, we expect a negligible dependence of $\mathcal{F}_\mathrm {ch}$ on the kinematic properties and mass of the top quarks, making this distribution appropriate for comparing different fragmentation models.
The equivalent distributions of secondary vertex properties in \ttbar{} events are shown in Fig.~\ref{fig:svchfrac} (bottom).
The observed distributions in this signal selection are generally well described by the central (\ztwostar\,LEP $r_{\PQb}$\xspace) tune, but the comparison of the mean values of $\mathcal{F}_\mathrm {ch}$---as shown in the top panels of the plots---reveals differences between the various fragmentation shapes.
Unlike in the $\PZ$+jets data, the {\rm Z2*}\xspace\ tune shows the largest deviation with respect to the \ttbar\ data among the studied variations, whereas the \ztwostar\,LEP $r_{\PQb}$\xspace\ fragmentation shape is in better agreement.
Furthermore, the hard and soft variations of \ztwostar\,LEP $r_{\PQb}$\xspace, corresponding to one standard deviation variations of the $r_{\cPqb}$ parameter, provide a bracketing that encloses or approaches the data.
The \ztwostar\,LEP $r_{\PQb}$\xspace\ tune is therefore used as the nominal \cPqb\ quark fragmentation shape in the following analysis, with the shape variations used to estimate systematic uncertainties in the top quark mass measurement.
\subsection{Inclusive charm mesons \texorpdfstring{in \ttbar{} events}{in t-tbar events}}
Kinematic properties of inclusively reconstructed charmed mesons inside \cPqb\ jets from top quark decays are expected to be sensitive to the modeling of \cPqb\ quark fragmentation.
We limit the study to meson decays with large branching fractions and high expected signal-to-background ratios: $\PJGy{}\rightarrow\Pgmp{}\Pgmm{}$, $\PDz{}\rightarrow\PKm{}\Pgpp{}$ in semileptonic $\PB$ decays, and inclusive $\ensuremath{{\PD^{\ast}{(2010)}^{+}}}{}\rightarrow\PDz{}\Pgpp{}$, with $\PDz{}\rightarrow\PKm{}\Pgpp{}$.
Top quark pair signal events are selected as described above, but with the requirement of at least one \cPqb-tagged jet replacing that of the presence of a reconstructed secondary vertex.
In the dilepton channels the \cPqb\ tagging algorithm output threshold is relaxed, as the expected background is lower.
All five leptonic decay channels of the \ttbar{} state are considered, as discussed above.
To gather as much data as possible, both \cPqb{} jets in each event are considered, selected by their tagging discriminant value and their transverse momentum.
All charged PF candidates used in the jet clustering are used to reconstruct mesons, with particle identification restricted to distinguishing electrons and muons from charged hadrons.
Candidates for \PJGy{} mesons are reconstructed by requiring two opposite-sign muon candidates among the charged jet constituents, and fitting their invariant mass in the range of 2.5--3.4\GeV, as shown in Fig.~\ref{fig:charmfits}.
The distribution is modeled with the sum of two Gaussian functions for the \PJGy{} signal and a falling exponential for the combinatorial backgrounds.
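For illustration, a minimal Python sketch of such a fit model (the parameter values below are arbitrary placeholders; in the analysis they are determined by the fit):
\begin{verbatim}
# Minimal sketch: J/psi fit model, a sum of two Gaussians for the signal
# plus a falling exponential for the combinatorial background.
import math

def gauss(x, mu, sigma):
    return (math.exp(-0.5 * ((x - mu) / sigma) ** 2)
            / (sigma * math.sqrt(2 * math.pi)))

def jpsi_model(m, n_sig, f1, mu, s1, s2, n_bkg, slope):
    """Events per GeV at dimuon mass m; normalization of the exponential
    over the 2.5--3.4 GeV fit range is omitted for brevity."""
    sig = f1 * gauss(m, mu, s1) + (1 - f1) * gauss(m, mu, s2)
    bkg = slope * math.exp(-slope * m)
    return n_sig * sig + n_bkg * bkg

# e.g. evaluate near the J/psi mass of about 3.097 GeV:
print(jpsi_model(3.10, n_sig=500, f1=0.7, mu=3.097, s1=0.03, s2=0.08,
                 n_bkg=2000, slope=1.5))
\end{verbatim}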
Neutral charm mesons, \PDz{}, are produced in the majority of $\PQb$~hadron decays, and are reconstructed via their decay to a \PKm{} and \Pgpp{}.
To reduce combinatorial backgrounds they are selected together with a soft lepton from a semileptonic $\PQb$~hadron decay, whose charge determines the respective flavor of the two hadron tracks.
All opposite-sign permutations of the three leading charged constituents of the jet are considered for \PK{} and \Pgp{} candidates and no additional vertex reconstruction is attempted.
The \PK{}\Pgp{} invariant mass is then fitted between 1.7 and 2.0\GeV, using a Crystal Ball~\cite{SLAC-R-236} shape for the signal and an exponential for the combinatorial backgrounds, as shown in Fig.~\ref{fig:charmfits}.
A large fraction of \PDz{} mesons is produced in the decays of intermediate excited charmed hadron states, such as the \ensuremath{{\PD^{\ast}{(2010)}^{+}}}{}, which can be reconstructed by considering the difference in invariant mass between the three-track (\PK{}\Pgp{}\Pgp{}) and the two-track (\PK{}\Pgp{}) systems, where a soft pion is emitted in the $\ensuremath{{\PD^{\ast}{(2010)}^{+}}}\rightarrow\PDz{}\Pgpp{}$ decay.
The \PDz{} mesons are reconstructed among the three leading tracks as described in the previous paragraph, and selected in a mass window of 50\MeV{} around the nominal \PDz{} mass.
A third track of the same charge as the \Pgp{} candidate from the \PDz{} decay is then added, and the mass difference is fitted in a range of 140--170\MeV{}, as shown in Fig.~\ref{fig:charmfits}.
The shape of the mass difference showing the \ensuremath{{\PD^{\ast}{(2010)}^{+}}}{} resonance is modeled using a sum of two Gaussian functions for the signal and a threshold function for the combinatorial backgrounds.
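For illustration, a minimal Python sketch of the mass-difference construction (hypothetical four-vector inputs; the \PDz{} window is the 50\MeV{} value quoted above):
\begin{verbatim}
# Minimal sketch: Delta m = m(K pi pi) - m(K pi) for the D* candidate.
import math

PI_MASS, K_MASS = 0.1396, 0.4937  # GeV
D0_MASS = 1.865                   # GeV, nominal D0 mass

def inv_mass(tracks):
    """tracks: list of (px, py, pz, mass) in GeV."""
    e = sum(math.sqrt(px*px + py*py + pz*pz + m*m)
            for px, py, pz, m in tracks)
    px = sum(t[0] for t in tracks)
    py = sum(t[1] for t in tracks)
    pz = sum(t[2] for t in tracks)
    return math.sqrt(max(e*e - px*px - py*py - pz*pz, 0.0))

def delta_m(kaon, pion, soft_pion):
    m_kpi = inv_mass([kaon, pion])
    if abs(m_kpi - D0_MASS) > 0.050:  # 50 MeV D0 window from the text
        return None
    return inv_mass([kaon, pion, soft_pion]) - m_kpi
\end{verbatim}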
The positions of the fitted invariant mass peaks---reconstructed purely in the silicon tracker---agree with the expected meson rest masses within about $0.05\%$ for the \PDz{} and \ensuremath{{\PD^{\ast}{(2010)}^{+}}}{}, indicating that the pion and kaon momentum scales are very well described.
The observed \PJGy{} meson mass, reconstructed using muons, agrees with the expectation~\cite{Agashe:2014kda} within about $0.3\%$, well within the muon momentum scale uncertainty.
\begin{figure*}[htp]
\centering
\includegraphics[width=0.32\textwidth]{Figure_003-a.pdf}
\includegraphics[width=0.32\textwidth]{Figure_003-b.pdf}
\includegraphics[width=0.32\textwidth]{Figure_003-c.pdf}
\caption{
Fits to the invariant mass peaks of the three considered charmed mesons in \ttbar{} events in the data, as described in the text: \PJGy{} (left), \PDz{} (middle), and \ensuremath{{\PD^{\ast}{(2010)}^{+}}}{} (right).
}
\label{fig:charmfits}
\end{figure*}
The fitted signal and background distributions are then used to extract the kinematic properties of the reconstructed mesons using the $_{s}{\mathcal Plot}$ technique~\cite{2005NIMPA.555..356P}, where a discriminating observable (in this case the invariant mass of the candidates) is used to separate the signal and background contributions to the distribution of an observable of interest.
The same method is applied to simulated events with different generator tunes and a range of different \cPqb\ quark fragmentation functions, and the results are compared
with data.
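For illustration, a minimal Python sketch of the sWeights computation for a two-component fit (not the actual implementation; it follows the covariance-based formula of Ref.~\cite{2005NIMPA.555..356P}):
\begin{verbatim}
# Minimal sketch: sWeights for a signal+background fit. f_sig and f_bkg
# are the normalized pdfs of the discriminating variable (here the
# candidate mass), n_sig and n_bkg the fitted yields.
import numpy as np

def sweights(masses, f_sig, f_bkg, n_sig, n_bkg):
    fs, fb = f_sig(masses), f_bkg(masses)
    denom = n_sig * fs + n_bkg * fb
    # inverse covariance matrix of the yields
    M = np.array([[np.sum(fi * fj / denom**2) for fj in (fs, fb)]
                  for fi in (fs, fb)])
    V = np.linalg.inv(M)
    # per-event signal weights; histogramming any other observable with
    # these weights unfolds its signal component
    return (V[0, 0] * fs + V[0, 1] * fb) / denom
\end{verbatim}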
Among several investigated kinematic properties of the charm meson candidates, the fraction of transverse momentum relative to the charged component of the jet momentum shows the highest sensitivity to variations in the \cPqb\ quark fragmentation shape.
The results are displayed in Fig.~\ref{fig:charmchfrac}.
\begin{figure*}[htp]
\centering
\includegraphics[width=0.32\textwidth]{Figure_004-a.pdf}
\includegraphics[width=0.32\textwidth]{Figure_004-b.pdf}
\includegraphics[width=0.32\textwidth]{Figure_004-c.pdf}
\caption{
Distribution of the relative transverse momentum of \PJGy{} (left), \PDz{} (middle), and \ensuremath{{\PD^{\ast}{(2010)}^{+}}}{} (right) meson candidates with respect to the charged components of the jet in \ttbar{} events for the data and the nominal \ztwostar\,LEP $r_{\PQb}$\xspace\ fragmentation function.
The top panels show the average of the distributions observed in the data and its statistical uncertainty (shaded area), as well as expectations obtained with different \cPqb\ quark fragmentation functions and with an alternative generator setup using \HERWIG\ 6 with the AUET2 tune.
}
\label{fig:charmchfrac}
\end{figure*}
The reconstructed mesons are observed to carry about 50--60\% of the overall charged jet momentum.
These results are in good agreement with the predictions obtained from simulated \ttbar{} events for the central fragmentation function choice and corresponding variations.
The conclusions from the study of secondary vertex properties in the previous section are confirmed by the charm meson properties, with the \ztwostar\,LEP $r_{\PQb}$\xspace\ fragmentation showing better agreement with the data than the nominal {\rm Z2*}\xspace\ shape, albeit with a large statistical uncertainty.
The numbers of meson candidates observed in the data are reproduced within about 10\% when \PYTHIA\ with the {\rm Z2*}\xspace\ tune is used in the parton shower and hadronization, whereas \HERWIG~6~\cite{Corcella:2000bw} with the AUET2 tune~\cite{ATL-PHYS-PUB-2011-008} underestimates both the \ensuremath{{\PD^{\ast}{(2010)}^{+}}}{} and \PJGy{} yields by more than 50\%, and overestimates \PDz{} production by about 30\%.
\section{Top quark mass measurement}\label{sec:topmass}
Observables that are dependent on the top quark mass are constructed using the kinematic properties of the decay products of the top quark.
The choice of observable is a compromise between sensitivity to the mass on the one hand and susceptibility to systematic uncertainties on the other hand.
The most precise measurements to date have approached this trade-off by fully reconstructing the top quark from three jets in hadronic decays, heavily relying on precise calibrations of the reconstructed jet energies.
In the analysis presented here, a different approach is used that sacrifices some sensitivity to minimize the reliance on detector calibrations.
This exposes the result to uncertainties in the modeling of top quark decays and \cPqb{} hadronization, but has reduced experimental uncertainties.
The analysis will therefore immediately benefit from a future improvement of our understanding of these effects.
\subsection{Observable and measurement strategy}\label{ssec:strategy}
The observable exploited in this analysis is built from the measured properties of the charged lepton from the \PW~boson decay and the charged constituents of a hadronic jet compatible with originating from a common secondary vertex.
The invariant mass of the secondary vertex-lepton system, \ensuremath{m_{\mathrm{svl}}}\xspace, then serves as a proxy for the top quark mass.
In building the invariant mass, the vertex constituents are assumed to be charged pions.
The \ensuremath{m_{\mathrm{svl}}}\xspace\ variable shows a strong dependence on the mass of the top quark despite not accounting for the neutrino from the \PW~boson decay or from semileptonic $\PQb$~hadron decays, nor for neutral products of the \cPqb~quark hadronization.
Using only charged particles and well-modeled leptons reduces the main experimental uncertainties to acceptance effects.
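For illustration, a minimal Python sketch of the observable (hypothetical four-vector inputs; all vertex tracks are assigned the charged-pion mass, as stated above):
\begin{verbatim}
# Minimal sketch: invariant mass of the lepton-SV system.
import math

PI_MASS = 0.1396  # GeV, charged-pion mass

def m_svl(lepton_p4, sv_tracks):
    """lepton_p4: (E, px, py, pz); sv_tracks: list of (px, py, pz)."""
    e, px, py, pz = lepton_p4
    for tx, ty, tz in sv_tracks:
        e += math.sqrt(tx*tx + ty*ty + tz*tz + PI_MASS**2)
        px += tx; py += ty; pz += tz
    return math.sqrt(max(e*e - px*px - py*py - pz*pz, 0.0))
\end{verbatim}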
For each selected event, all possible combinations of leptons and secondary vertices---up to two in semileptonic events and up to four in dileptonic events---are taken into account in the measurement.
Hence, by construction, the same number of correct and wrong combinations (\ie\ pairing the lepton with the vertex associated with the other top quark decay) enter the analysis.
In simulation, in about 11\% of cases the selected vertex cannot be attributed to the decay products of either \cPqb\ quark and is most likely spurious, arising either from a light quark from a hadronic \PW~boson decay, or from a gluon or light quark from initial-state radiation.
Figure~\ref{fig:msvldatamc} shows the observed \ensuremath{m_{\mathrm{svl}}}\xspace\ distribution for a combination of all five channels, compared to simulated distributions at three different generated top quark mass values.
\begin{figure}[htp]
\centering
\includegraphics[width=0.45\textwidth]{Figure_005.pdf}
\caption{
Lepton-SV invariant mass distribution for a combination of all five channels, for a simulation of three different top quark mass values (166.5, 172.5, and 178.5\GeV), and the observed data distribution.
Note that all possible lepton-vertex combinations for each event enter the distribution.
}
\label{fig:msvldatamc}
\end{figure}
The shape of the \ensuremath{m_{\mathrm{svl}}}\xspace{} observable depends considerably on the number of tracks associated with the secondary vertex, shifting to higher values as more tracks are included.
The analysis is therefore carried out in three exclusive track multiplicity categories of exactly three, four, or five tracks.
Vertices with only two tracks show an increased level of backgrounds and reduced sensitivity to \mtop{} and are therefore excluded from the analysis.
Furthermore, when evaluating systematic uncertainties, the results from the individual categories are assigned weights corresponding to the observed event yields in each, to absorb any mismodeling of the vertex multiplicity distribution in simulated events.
Hence the analysis is carried out in fifteen mutually exclusive categories---three track multiplicities and five lepton flavor channels---and combined to yield the final result.
\subsection{Signal and background modeling}\label{ssec:signalmodel}
The observed \ensuremath{m_{\mathrm{svl}}}\xspace{} distributions in each category are fitted with a combination of six individual components:
\begin{itemize}
\item[--] ``correct'' pairings for the \ttbar{} signal where leptons and vertices are matched to the same top quark decay;
\item[--] ``wrong'' pairings for the \ttbar{} signal where leptons and vertices are matched to the opposite top quark decay products;
\item[--] ``unmatched'' pairings for the \ttbar{} signal where leptons are paired with vertices that cannot be matched to a \cPqb{} quark hadronization, \ie{} either from a hadronic \PW~boson decay or from initial- or final-state radiation;
\item[--] ``correct'' pairings for the single top quark signal;
\item[--] ``unmatched'' pairings for the single top quark signal, where there can be no ``wrong'' pairs in the sense of the above;
\item[--] leptons and vertices from background processes.
\end{itemize}
Among those, the ``correct'' pairings both for \ttbar{} and single top quarks, and the ``wrong'' pairings in the \ttbar{} signal carry information about the top quark mass and are parametrized as a function of \mtop.
The relative fractions of correct, wrong, and unmatched pairings for both \ttbar{} and single top quarks and their dependence on \mtop{} are determined from simulated events.
Furthermore, the relative contributions of \ttbar{} and single top quark events are calculated using the top quark mass-dependent theoretical predictions of the production cross sections at NNLO for \ttbar{}, and for the single top quark $t$ and $ \PQt \PW $ channels.
The overall combined signal strength of \ttbar{} and single top quark signal is left floating in the final fit, together with \mtop{}.
The background contribution is a combination of different processes, depending on the channel, with dominant contributions from DY+jets in the dilepton channels, and $\PW$+jets and QCD multijet processes in the semileptonic channels.
The overall background yields are fixed to the predictions from simulation, with the exception of QCD multijets, the normalization of which is determined from a fit to the \MET{} distribution in the data, and DY+jets, which is normalized in a data control sample selecting dilepton pairs compatible with a \cPZ~boson decay.
The total (statistical plus systematic) uncertainty in the normalization of the QCD multijets and DY+jets backgrounds is about 30\%.
For each channel and track multiplicity category, the full signal model is given by:
\begin{equation*}
\begin{split}
\ifthenelse{\boolean{cms@external}}
{
N & \big[\ensuremath{m_{\mathrm{svl}}}\xspace|\mtop,\mu,\theta_{\rm bkg}\big]\:=\: \\
&\mu N_{\rm top}^{\rm exp} \Big[\alpha^{\rm cor} f_{\ttbar}^{\rm cor}(\ensuremath{m_{\mathrm{svl}}}\xspace|\mtop) + \alpha^{\rm wro} f_{\ttbar}^{\rm wro}(\ensuremath{m_{\mathrm{svl}}}\xspace|\mtop)\\
& \qquad \qquad +(1-\alpha^{\rm cor}-\alpha^{\rm wro})f_{\ttbar}^{\rm unm}(\ensuremath{m_{\mathrm{svl}}}\xspace) \\
& + \kappa_{\cPqt{}}\big[ \alpha^{\rm cor}_{\cPqt{}} f_{\cPqt{}}^{\rm cor}(\ensuremath{m_{\mathrm{svl}}}\xspace|\mtop) + (1-\alpha^{\rm cor}_{\cPqt{}}) f_{\cPqt{}}^{\rm noncor}(\ensuremath{m_{\mathrm{svl}}}\xspace)\big] \Big] \\
+ & \: N_{\rm bkg}^{\rm exp}(1+\theta_{\rm bkg}) f_{\rm bkg}(\ensuremath{m_{\mathrm{svl}}}\xspace), \\
}{
N\big[\ensuremath{m_{\mathrm{svl}}}\xspace|\mtop,\mu,\theta_{\rm bkg}\big]\:=\:
&\mu N_{\rm top}^{\rm exp} \Big[\alpha^{\rm cor} f_{\ttbar}^{\rm cor}(\ensuremath{m_{\mathrm{svl}}}\xspace|\mtop)+\alpha^{\rm wro} f_{\ttbar}^{\rm wro}(\ensuremath{m_{\mathrm{svl}}}\xspace|\mtop)\\
&\qquad\qquad\;+(1-\alpha^{\rm cor}-\alpha^{\rm wro})f_{\ttbar}^{\rm unm}(\ensuremath{m_{\mathrm{svl}}}\xspace) \\
&\qquad\qquad\;+\kappa_{\cPqt{}}\big[ \alpha^{\rm cor}_{\cPqt{}} f_{\cPqt{}}^{\rm cor}(\ensuremath{m_{\mathrm{svl}}}\xspace|\mtop) + (1-\alpha^{\rm cor}_{\cPqt{}}) f_{\cPqt{}}^{\rm noncor}(\ensuremath{m_{\mathrm{svl}}}\xspace)\big] \Big] \\
&+N_{\rm bkg}^{\rm exp}(1+\theta_{\rm bkg}) f_{\rm bkg}(\ensuremath{m_{\mathrm{svl}}}\xspace), \\
}
\end{split}
\end{equation*}
where $N_{\rm top}^{\rm exp}$ and $N_{\rm bkg}^{\rm exp}$ are the number of top quark events (\ttbar{} and single top quarks) and background events expected from simulation;
the $f_{k}^{i}$
are the six \ensuremath{m_{\mathrm{svl}}}\xspace{} templates of which three are parametrized in \mtop{};
$\alpha^{\rm cor}$, $\alpha^{\rm wro}$, and $\alpha^{\rm cor}_{\cPqt}$, are the fractions of correct and wrong lepton-vertex pairings for \ttbar{} and single top quark production, determined from simulated events as a function of \mtop{};
$\kappa_{\cPqt{}}$ is the relative fraction of single top quark events, fixed as a function of \mtop{} from the theoretical prediction;
$\theta_{\rm bkg}$ is a Gaussian penalty for a correction of the background yield;
and finally $\mu$ is the overall signal strength of top quark events, determined in the fit.
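For illustration, a minimal Python sketch of this per-category yield model (the six templates and the \mtop-dependent fractions are assumed to be given as inputs):
\begin{verbatim}
# Minimal sketch: expected yield at a given m_svl value, as in the
# expression above. 'pars' holds N_top, N_bkg, the alpha fractions, and
# kappa_t (their mtop dependence is left implicit); 'templates' holds
# the six normalized shapes f_k^i.
def n_expected(m_svl, mtop, mu, theta_bkg, pars, templates):
    t = templates
    ttbar = (pars["alpha_cor"] * t["tt_cor"](m_svl, mtop)
             + pars["alpha_wro"] * t["tt_wro"](m_svl, mtop)
             + (1 - pars["alpha_cor"] - pars["alpha_wro"])
               * t["tt_unm"](m_svl))
    single_t = pars["kappa_t"] * (
        pars["alpha_cor_t"] * t["t_cor"](m_svl, mtop)
        + (1 - pars["alpha_cor_t"]) * t["t_noncor"](m_svl))
    return (mu * pars["N_top"] * (ttbar + single_t)
            + pars["N_bkg"] * (1 + theta_bkg) * t["bkg"](m_svl))
\end{verbatim}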
The parameters of each of the $f_{k}^{i}$ templates and their possible \mtop\ dependence are determined in a fit to the \ensuremath{m_{\mathrm{svl}}}\xspace{} distributions of simulated events in the corresponding category and pairing classification.
The combined background template is built from fits to dedicated samples of simulated events of the corresponding processes, weighted by the expected event yields.
The shape for QCD multijet processes is determined from a control sample of nonisolated leptons in the data and normalized using a fit to the \MET{} distribution.
For correct and wrong pairings in \ttbar{} and for correct pairings in single top quark events, the fit is performed for generated top quark mass points in the range 163.5--181.5\GeV, from which a linear dependence of the parameters on \mtop{} is extracted.
The \ensuremath{m_{\mathrm{svl}}}\xspace{} distributions for unmatched pairings and background events do not depend on \mtop.
Each distribution is fitted with the sum of an asymmetric Gaussian ($\mathcal{G}_{\rm asym}$) and a Gamma distribution ($\Gamma$); four of the six shape parameters are found to provide sensitivity to the top quark mass:
\begin{equation*}
\ifthenelse{\boolean{cms@external}}
{
\begin{split}
f_{k}^{i}(\ensuremath{m_{\mathrm{svl}}}\xspace|\mtop) \;=\; & \lambda \, \mathcal{G}_{\rm asym} \big(\ensuremath{m_{\mathrm{svl}}}\xspace|\mu(\mtop),\sigma_{\rm L}(\mtop),\sigma_{\rm R}(\mtop)\big) \\
& \:+\: (1-\lambda) \, \Gamma\big(\ensuremath{m_{\mathrm{svl}}}\xspace|\gamma,\beta,\nu(\mtop)\big).
\end{split}
}
{
f_{k}^{i}(\ensuremath{m_{\mathrm{svl}}}\xspace|\mtop) \;=\; \lambda \, \mathcal{G}_{\rm asym}\big(\ensuremath{m_{\mathrm{svl}}}\xspace|\mu(\mtop),\sigma_{\rm L}(\mtop),\sigma_{\rm R}(\mtop)\big) \:+\: (1-\lambda)\, \Gamma\big(\ensuremath{m_{\mathrm{svl}}}\xspace|\gamma,\beta,\nu(\mtop)\big).
}
\end{equation*}
The shape parameters are the mean of the Gaussian peak ($\mu$), the left and right width parameters of the Gaussian ($\sigma_{\rm L}$ and $\sigma_{\rm R}$), the shape parameter of the Gamma distribution ($\gamma$), its scale ($\beta$), and its shift ($\nu$).
Of these, all but $\gamma$ and $\beta$ show some usable sensitivity to the top quark mass.
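For illustration, a minimal Python sketch of the template shape (the linear \mtop{} dependence of the sensitive parameters is left implicit):
\begin{verbatim}
# Minimal sketch: asymmetric Gaussian plus shifted Gamma template.
import math

def asym_gauss(x, mu, s_left, s_right):
    """Gaussian with different widths left/right of the peak,
    normalized to unit integral."""
    s = s_left if x < mu else s_right
    norm = 2.0 / (math.sqrt(2 * math.pi) * (s_left + s_right))
    return norm * math.exp(-0.5 * ((x - mu) / s) ** 2)

def gamma_pdf(x, gamma_k, beta, nu):
    """Gamma distribution with shape gamma_k, scale beta, shift nu."""
    if x <= nu:
        return 0.0
    z = (x - nu) / beta
    return z ** (gamma_k - 1) * math.exp(-z) / (beta * math.gamma(gamma_k))

def template(x, lam, mu, s_left, s_right, gamma_k, beta, nu):
    # mu, s_left, s_right, and nu carry the linear mtop dependence
    return (lam * asym_gauss(x, mu, s_left, s_right)
            + (1 - lam) * gamma_pdf(x, gamma_k, beta, nu))
\end{verbatim}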
The results of the fits to the observed \ensuremath{m_{\mathrm{svl}}}\xspace{} distributions in all fifteen categories are shown in Figs.~\ref{fig:msvlfitsdil} and~\ref{fig:msvlfitslj} for the dilepton and semileptonic channels, respectively.
\begin{figure*}[htp]
\centering
\includegraphics[width=0.32\textwidth]{Figure_006-a}
\includegraphics[width=0.32\textwidth]{Figure_006-b}
\includegraphics[width=0.32\textwidth]{Figure_006-c}\\
\includegraphics[width=0.32\textwidth]{Figure_006-d}
\includegraphics[width=0.32\textwidth]{Figure_006-e}
\includegraphics[width=0.32\textwidth]{Figure_006-f}\\
\includegraphics[width=0.32\textwidth]{Figure_006-g}
\includegraphics[width=0.32\textwidth]{Figure_006-h}
\includegraphics[width=0.32\textwidth]{Figure_006-i}
\caption{
Template fits to the observed \ensuremath{m_{\mathrm{svl}}}\xspace{} distributions for the three dilepton channels (\ensuremath{\Pe\Pgm}\xspace, \ensuremath{\Pe\Pe}\xspace, \ensuremath{\Pgm\Pgm}\xspace\ from top to bottom row), and for exactly three, four, and five tracks assigned to the secondary vertex (from left to right column).
The top panels show the bin-by-bin difference between the observed data and the fit result, divided by the statistical uncertainty (pull).
The inset shows the scan of the negative log-likelihood as a function of the calibrated top quark mass, accounting only for the statistical uncertainty, when performed exclusively in each event category.
}
\label{fig:msvlfitsdil}
\end{figure*}
\begin{figure*}[htp]
\centering
\includegraphics[width=0.32\textwidth]{Figure_007-a}
\includegraphics[width=0.32\textwidth]{Figure_007-b}
\includegraphics[width=0.32\textwidth]{Figure_007-c}
\includegraphics[width=0.32\textwidth]{Figure_007-d}
\includegraphics[width=0.32\textwidth]{Figure_007-e}
\includegraphics[width=0.32\textwidth]{Figure_007-f}
\caption{
Template fits to the observed \ensuremath{m_{\mathrm{svl}}}\xspace{} distributions for the semileptonic channels (\ensuremath{\Pe}\xspace\ and \ensuremath{\Pgm}\xspace\ from top to bottom row), and for exactly three, four, and five tracks assigned to the secondary vertex (from left to right column).
The top panels show the bin-by-bin difference between the observed data and the fit result, divided by the statistical uncertainty (pull).
The inset shows the scan of the negative log-likelihood as a function of the calibrated top quark mass, accounting only for the statistical uncertainty, when performed exclusively in each event category.
}
\label{fig:msvlfitslj}
\end{figure*}
The final results for the top quark mass are then extracted by performing a binned maximum-likelihood estimation where the observed data are compared to the expectations using Poisson statistics.
The combined likelihood is then written as:
\begin{equation*}
\ifthenelse{\boolean{cms@external}}
{
\begin{split}
\mathcal{L}&(\mtop,\mu,\vec{\theta}_{\rm bkg})\;=\; \prod_{c=1}^{5}\,\prod_{n=3}^{5}\,\prod_{i=1}^{N_{\rm bins}} \\ & \mathcal{P}\big[N_{\rm obs}(\ensuremath{m_{\mathrm{svl}}}\xspace^{i}),N_{\rm exp}(\mtop,\mu,\theta^{c,n}_{\rm bkg},\ensuremath{m_{\mathrm{svl}}}\xspace^{i})\big] \, \mathcal{G}(0,\theta^{c,n}_{\rm bkg},0.3),
\end{split}
}
{
\mathcal{L}(\mtop,\mu,\vec{\theta}_{\rm bkg})\;=\; \prod_{c=1}^{5}\,\prod_{n=3}^{5}\,\prod_{i=1}^{N_{\rm bins}} \mathcal{P}\big[N_{\rm obs}(\ensuremath{m_{\mathrm{svl}}}\xspace^{i}),N_{\rm exp}(\mtop,\mu,\theta^{c,n}_{\rm bkg},\ensuremath{m_{\mathrm{svl}}}\xspace^{i})\big] \, \mathcal{G}(0,\theta^{c,n}_{\rm bkg},0.3),
}
\end{equation*}
where the products of the Poisson-distributed yields ($\mathcal{P}$) over every channel ($c$), track multiplicity category ($n$), and \ensuremath{m_{\mathrm{svl}}}\xspace{} bin ($i$) are multiplied by a penalty Gaussian function for the correction of the expected background yields ($\mathcal{G}$), with a fixed width of 30\%, corresponding to the uncertainty in the background normalization.
Finally, the combined likelihood is maximized to obtain the final \mtop{} result.
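For illustration, a minimal Python sketch of the corresponding negative log-likelihood (constant terms dropped; the per-category model is assumed to be given):
\begin{verbatim}
# Minimal sketch: -log L as a sum of Poisson terms plus Gaussian
# penalties of width 0.3 for the background nuisances.
import math

def nll(mtop, mu, thetas, observed, expected):
    """thetas[(c, n)]: background nuisance per channel/track category;
    observed[(c, n)]: list of bin counts; expected(c, n, i, ...):
    model prediction for bin i."""
    val = 0.0
    for (c, n), theta in thetas.items():
        for i, n_obs in enumerate(observed[(c, n)]):
            n_exp = expected(c, n, i, mtop, mu, theta)
            val += n_exp - n_obs * math.log(n_exp)  # -log Poisson
        val += 0.5 * (theta / 0.3) ** 2  # Gaussian penalty, width 30%
    return val
\end{verbatim}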
The analysis has been developed using simulated events, without performing the final fit on the data until the full measurement procedure had been validated.
The method is calibrated separately in each channel and track multiplicity bin before the combination, by running pseudo-experiments for each generated top quark mass point and deriving a linear calibration function from the extracted mass values.
Pseudo-data are generated from the combined expected shape of the top quark signals and the mixture of backgrounds, with the number of generated events taken from a Poisson distribution around the expected number of events in each category.
The width of the pull distributions, \ie\ the observed bias of each fit divided by its uncertainty, indicates proper coverage of the statistical uncertainty.
The post-calibration mass difference is below 100\MeV{} for the entire range of generated \mtop{} values, well within the statistical uncertainty of the overall measurement of 200\MeV.
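For illustration, a minimal Python sketch of such a linear calibration (the pseudo-experiment machinery is assumed to be given):
\begin{verbatim}
# Minimal sketch: extract a linear calibration from pseudo-experiments
# at a set of generated mass points.
import numpy as np

def calibrate(gen_points, run_pseudo_experiments, n_toys=1000):
    """run_pseudo_experiments(mtop_gen, n_toys) -> array of fitted
    masses; returns a function correcting a measured mass value."""
    means = [np.mean(run_pseudo_experiments(m, n_toys))
             for m in gen_points]
    slope, offset = np.polyfit(gen_points, means, 1)
    # invert fitted = slope * true + offset
    return lambda m_fit: (m_fit - offset) / slope
\end{verbatim}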
\subsection{Systematic uncertainties}\label{ssec:systematics}
The size of the systematic uncertainties is evaluated from their impact on the \ensuremath{m_{\mathrm{svl}}}\xspace{} shape and its propagation to the extracted \mtop{} value in the combined fit.
Modified pseudo-data are generated for each variation of the signal shape at the central mass point of 172.5\GeV, and the difference between the mass extracted from the modified data and the nominal fit is quoted as the systematic uncertainty.
The individual sources of systematic uncertainties and the determination of the shape variation are described in the following.
The final systematic uncertainties are summarized in Table~\ref{tab:systematics}.
\begin{table}[htp]
\begin{center}
\topcaption{Summary of the systematic uncertainties in the final measurement. In cases where there are two variations of one source of uncertainty, the first and second numbers correspond, respectively, to the down and up variations. The total uncertainties are taken as the separate quadratic sum of all positive and negative shifts. For the contributions marked with a (*), the shift of the single variation including its sign is given, but the uncertainty is counted symmetrically in both up and down directions for the total uncertainty calculation.\label{tab:systematics}}
\begin{scotch}{ll}
Source & $\Delta$\mtop{}~[$\GeV$] \\
\hline
\\[-1.5ex]
\multicolumn{2}{l}{Theoretical uncertainties}\\
\hline
$\ensuremath{\mu_{\mathrm{R}}}\xspace/\ensuremath{\mu_{\mathrm{F}}}\xspace$ scales \ttbar{} & $ +0.22 \ -\!0.20$ \\
$\ensuremath{\mu_{\mathrm{R}}}\xspace/\ensuremath{\mu_{\mathrm{F}}}\xspace$ scales \cPqt\ ($t$ channel) & $ -0.04 \ -\!0.02$ \\
$\ensuremath{\mu_{\mathrm{R}}}\xspace/\ensuremath{\mu_{\mathrm{F}}}\xspace$ scales $ \cPqt\PW $ & $ +0.21 \ +\!0.17$ \\
Parton shower matching scale & $ -0.04 \ +\!0.06$ \\
Single top quark fraction & $ -0.07 \ +\!0.07$ \\
Single top quark diagram interference~(*) & $ +0.24$ \\
Parton distribution functions & $ +0.06 \ -\!0.04$ \\
Top quark \pt{} & $ +0.82$ \\
Top quark decay width~(*) & $ -0.05$ \\
\cPqb\ quark fragmentation & $ +1.00 \ -\!0.54$ \\
Semileptonic \PB{} decays & $ -0.16 \ +\!0.06$ \\
\cPqb\ hadron composition~(*) & $ -0.09$ \\
Underlying event & $ +0.07 \ +\!0.19$ \\
Color reconnection~(*) & $ +0.08$ \\
Matrix element generator~(*) & $ -0.42$ \\
$\sigma(\ttbar+{\rm heavy~flavor})$ & $ +0.46 \ -\!0.36$ \\
\hline
Total theoretical uncertainty & $ +1.52 \ -\!0.86$ \\
\hline
\\[-1.5ex]
\multicolumn{2}{l}{Experimental uncertainties} \\
\hline
Jet energy scale & $ +0.19 \ -\!0.17$ \\
Jet energy resolution & $ -0.05 \ +\!0.05$ \\
Unclustered energy & $ +0.07 \ -\!0.00$ \\
Lepton energy scale & $ -0.26 \ +\!0.22$ \\
Lepton selection efficiency & $ +0.01 \ +\!0.01$ \\
\cPqb\ tagging & $ -0.02 \ -\!0.00$ \\
Pileup & $ -0.05 \ +\!0.07$ \\
Secondary vertex track multiplicity~(*) & $ -0.06$ \\
Secondary vertex mass modeling~(*) & $ -0.29$ \\
Background normalization & $ {<}0.03$ \\
\hline
Total experimental uncertainty & $ +0.43 \ -\!0.44$ \\
\hline
\\[-1.5ex]
Total systematic uncertainty & $+1.58 \ -\!0.97$ \\
Statistical uncertainty & ${\pm}0.20$ \\
\end{scotch}
\end{center}
\end{table}
\subsubsection{Modeling and theoretical uncertainties}\label{sssec:theosysts}
\begin{itemize}
\item {\bf Choice of renormalization and factorization scales:}
\sloppy{
The factorization and renormalization scales used in the signal simulation are set to a common value, $Q$, defined by \mbox{$Q^2=\mtop^2 + \sum{(\pt^\text{parton})}^2$}, where the sum runs over all extra partons in the event.
Two alternative data sets with a variation $\ensuremath{\mu_{\mathrm{R}}}\xspace=\ensuremath{\mu_{\mathrm{F}}}\xspace=2Q$ or $Q/2$ are used to estimate the systematic effect from the choice of scales.
These variations are observed to provide a conservative envelope of the additional jet multiplicity observed in data~\cite{Khachatryan:2015mva}.
The scale choice for single top quark $t$ and $ \cPqt\PW $~channels has a smaller effect on the measurement because the production happens through an electroweak interaction and because single top quark events only make up about 5\% of the total yield.
Dedicated single top quark data samples with \ensuremath{\mu_{\mathrm{F}}}\xspace\ and \ensuremath{\mu_{\mathrm{R}}}\xspace\ varied by a factor $2$ or $1/2$ are generated and used to estimate the effect.
}
\item {\bf Matrix element to parton shower matching scale:}
The choice of the threshold in the event generation at which additional radiation is produced by the \PYTHIA{} showering instead of matrix element calculations in \MADGRAPH{} is expected to have a small impact on the shape of \ensuremath{m_{\mathrm{svl}}}\xspace, affecting mostly the ``unmatched'' lepton-SV pairings, which constitute only about 5\% of the total.
Variations of this threshold are furthermore observed to have small impact on the kinematic properties of extra jets~\cite{Khachatryan:2015mva}.
The effect is estimated using dedicated samples with the nominal threshold (20\GeV) varied up and down by a factor of 2.
\item {\bf Single top quark fraction:}
The overall signal shapes in each category are constructed from \ttbar{} events and events from single top quark production, with their relative fractions fixed to the expectation from theory.
Because of a relative difference in their respective shapes, a deviation in this fraction can have an impact on the final mass measurement.
The effect is estimated by repeating the fits with the relative fraction of single top quark events in the signal shape varied by $\pm20\%$.
The size of the variation reflects the experimental uncertainty in the overall cross section of single top quark production~\cite{Chatrchyan:2014tua,Khachatryan:2014iya}.
\item {\bf Single top quark interference:}
Interference between \ttbar\ pair production and single top quark production in the $ \cPqt\PW $ channel at next-to-leading order in QCD is resolved in the $ \cPqt\PW $ signal generation by removing all doubly-resonant diagrams in the calculation~\cite{Frixione:2008yi,Belyaev:1998dn,White:2009yt}.
A different scheme for the resolution of the diagram interference can be defined where a gauge-invariant subtraction term modifies the $ \cPqt\PW $ cross section to cancel the contributions from \ttbar.
Samples using the second scheme are generated and compared, and the difference is quoted as a systematic uncertainty~\cite{Tait:1999cf,Frixione:2008yi}.
\item {\bf Parton distribution functions:}
Uncertainties from the modeling of parton momentum distributions inside the incoming protons are evaluated using the diagonalized uncertainty sources of the CT10 PDF set~\cite{Pumplin:2002vw}.
Each source is used to derive event-by-event weights, which are then applied to obtain a variation of the signal \ensuremath{m_{\mathrm{svl}}}\xspace{} shape.
The maximal difference with respect to the nominal signal sample is quoted as the systematic uncertainty.
\item {\bf Top quark \pt{} modeling:}
Measurements of the differential \ttbar{} production cross section reveal an observed top quark \pt{} spectrum that is softer than what is predicted from simulation~\cite{Khachatryan:2015oqa}.
The difference between the unfolded data and the simulation based on \MADGRAPH{} is parametrized and can be used to calculate event-by-event weights correcting the spectrum (a minimal sketch of such a weighting is given after this list).
This reweighting is not applied when calibrating the measurement, as it introduces a dependence on the true top quark mass.
The impact of the full difference between the predicted spectrum used in the calibration (at \mtop=172.5\GeV) and the data-corrected spectrum is estimated by comparing the result from reweighted pseudo-data to the nominal value.
The difference is then added as a one-sided systematic uncertainty in the extracted mass value.
The effect of the reweighting on the simulated \ensuremath{m_{\mathrm{svl}}}\xspace{} shape for correct and wrong lepton-vertex pairings is shown in Fig.~\ref{fig:systvariations}.
\item {\bf Top quark decay width:}
The decay width of the top quark has been experimentally determined with a precision of about 10\%~\cite{Khachatryan:2014nda}.
A dedicated sample with an increased width is used to estimate the impact on the mass measurement, and the difference is quoted as an uncertainty.
\item {\bf \cPqb{} quark fragmentation:}
A variation in the momentum transfer from \cPqb{} quark to $\PQb$~hadron has a direct impact in the \ensuremath{m_{\mathrm{svl}}}\xspace\ distribution, and correspondingly, the uncertainty from the used \cPqb\ quark fragmentation function on the extracted top quark mass is expected to be significant.
As shown in Section~\ref{sec:modeling}, the average momentum transfer in the nominal \PYTHIA{} {\rm Z2*}\xspace{} tune is found to be significantly softer than that seen in \ttbar\ events in the data, whereas the \ztwostar\,LEP $r_{\PQb}$\xspace\ variation that follows a fragmentation function measured at LEP is in better agreement.
Its soft and hard variations provide one standard deviation variations of the shape parameters, and are used to estimate the systematic uncertainty.
Variations of the \ensuremath{m_{\mathrm{svl}}}\xspace{} shape for the central \ztwostar\,LEP $r_{\PQb}$\xspace\ fragmentation function, its soft and hard variations, as well as the nominal {\rm Z2*}\xspace\ fragmentation are shown in Fig.~\ref{fig:systvariations}.
The impact of the choice of \cPqb\ quark fragmentation function on the extracted top quark mass is shown in Fig.~\ref{fig:mtopvsfrag}.
To first order the measured \mtop{} value depends only on the average momentum transfer, as indicated by the linear dependence on $\langle\pt(\PB)/\pt(\cPqb)\rangle$.
The extracted mass changes by about 0.61\GeV{} for each percent change in the average momentum transfer.
\begin{figure}[htp]
\centering
\includegraphics[width=0.45\textwidth]{Figure_008-a.pdf}
\includegraphics[width=0.45\textwidth]{Figure_008-b.pdf}
\caption{
Variation of the simulated \ensuremath{m_{\mathrm{svl}}}\xspace{} shape with systematic effects: reweighting of the simulated top quark \pt{} shape to the observed distribution, separately for correct and wrong lepton-vertex pairings (\cmsLeft); and different \cPqb\ quark fragmentation function shapes (\cmsRight).
The bottom panels in the two plots show the ratios between the top quark \pt{} reweighted and nominal shapes for the correct and wrong pairings (\cmsLeft), and between the various fragmentation models and the central \ztwostar\,LEP $r_{\PQb}$\xspace\ tune (\cmsRight).
}
\label{fig:systvariations}
\end{figure}
\begin{figure}[htp]
\centering
\includegraphics[width=0.45\textwidth]{Figure_009.pdf}
\caption{
Impact of the average \cPqb\ quark fragmentation, $\langle\pt(\PB)/\pt(\cPqb)\rangle$, on the extracted \mtop\ value, for various different fragmentation shapes.
The horizontal band represents the contribution of the \cPqb{} quark fragmentation model to the systematic uncertainty in the measurement of the top quark mass, which is estimated from the \ztwostar\,LEP $r_{\PQb}$\xspace~$\!^{\pm}$ variations.
A linear fit to the effects on the different variations (the line in the figure) suggests a relative change in the measured top quark mass of 0.61\GeV\ for each percent change in average momentum transfer.
}\label{fig:mtopvsfrag}
\end{figure}
\item {\bf Semileptonic \PB~meson branching fractions:}
The effect of the uncertainties in semileptonic $\PQb$~hadron branching fractions is estimated by varying the fraction of \cPqb{} jets containing neutrinos down by 0.45\% and up by 0.77\%, covering the uncertainties in the experimentally measured semileptonic branching fractions of \PBz{} and \PBpm{} mesons~\cite{Agashe:2014kda}.
\item {\bf \cPqb\ hadron composition:}
The \PYTHIA{} {\rm Z2*}\xspace{} tune produces an average composition of about 40\%~\PBz{}, 40\%~\PBpm{}, 12\%~\PBs{}, and 8\% heavier $\PQb$~hadron states in the hadronization of \cPqb{} quarks.
An improved version of this tune that takes into account hadron multiplicity measurements~\cite{Agashe:2014kda} is used to estimate the uncertainty due to the composition of $\PQb$~hadrons in the \cPqb{} jets.
\item {\bf Hadronization model cross-check:}
To test for additional uncertainties arising from the usage of the Lund string hadronization model in \PYTHIA~\cite{Andersson:1983ia} in the default simulation, additional cross-checks are performed with alternative hadronization models as used in \HERWIG{}.
However, an inclusive comparison of the two parton shower and hadronization frameworks entangles several distinct effects in a nontransparent manner and includes uncertainties that are already evaluated in dedicated studies in a more robust way.
The inclusive \PYTHIA{}-\HERWIG{} difference is therefore not included as a systematic uncertainty.
Evaluating whether there are indeed additional sources of uncertainty arising when comparing different hadronization models requires a comparison without changing the parton shower model, the hard-scattering simulation, or the \cPqb\ quark fragmentation functions.
Such a check is possible in the \SHERPA{} 2.1.0 framework~\cite{Gleisberg:2008ta}, which permits a \pt-ordered parton shower model to be used, interfaced with a cluster hadronization model as used in \HERWIG{} or with the Lund string model of \PYTHIA{}.
The change in hadronization model entails a difference in hadron flavor multiplicities, with the cluster model tending to yield more heavy $\PB_{\rm c}$ mesons and $\Lambda_{\cPqb}$ baryons.
Restricting the study to the dominant production of \PBz{} and \PBpm{} mesons reveals a different \cPqb\ quark fragmentation function shape between the two models.
As the uncertainty from this effect is already covered by a more extreme variation in the dedicated \cPqb\ quark fragmentation uncertainty, the distributions are reweighted to a common parton-to-hadron momentum transfer distribution to remove any difference in fragmentation shapes.
The resulting lepton + \cPqb{} jet invariant mass distributions for cluster and Lund string fragmentation are found to be in very good agreement and do not warrant any additional uncertainty in the top quark mass measurement.
\item {\bf Underlying event and color reconnection:}
Effects from the modeling of the proton collision remnants and multiparton interactions (the underlying event) and from the color connection of the \cPqb{} quark fragmentation products to the rest of the event (color reconnection) are estimated using dedicated samples with variations of the Perugia 11 (P11) underlying event tunes~\cite{Skands:2010ak}.
Two variations, one with altered multiparton interactions and one based on the Tevatron data are used to evaluate the effect of the underlying event modeling.
A separate sample, in which color reconnection effects are not simulated, is used to gauge the impact from the modeling of this effect.
In both cases, the full difference of the results obtained on the modified samples and the case of using pseudo-data from the central P11 tune are quoted as the systematic uncertainty.
\item {\bf Matrix element generator:}
The default Born-level matrix element generator, \MADGRAPH{}, is substituted by a \POWHEG{} simulation based on the heavy-quark pair production (hvq) model~\cite{Frixione:2007nu} at NLO accuracy for \ttbar{} production and at leading order for the top quark decays.
In both cases, the matrix element generators are interfaced with \PYTHIA{} for parton showering.
The difference, propagated to the mass measurement, is reported as a systematic uncertainty.
Furthermore, the effect of including NLO corrections in the modeling of the top quark decay is studied using the parton-level \MCFM\ program~\cite{Campbell:2010ff, Campbell:2012uf}.
Since no fragmentation or parton shower evolution is included in the simulation and therefore the actual impact on the mass measurement is uncertain, the result is only reported here but not included as a systematic uncertainty.
By reweighting the mass of the lepton-\cPqb-jet system generated by \MADGRAPH\ to the differential cross sections predicted by \MCFM, with and without applying NLO corrections to the top quark decay, a +1.29\GeV\ shift in the calibrated mass in the $\ensuremath{\Pe\Pgm}\xspace$ channel is observed.
\item {\bf Modeling of the associated production of \ttbar\ with heavy flavors:}
While the simulation is observed to describe the shape of the different distributions for \ttbar\kern -3pt+heavy flavors well (most notably \ttbar\kern -3pt+\bbbar), these predictions tend to underestimate the total cross section~\cite{Aad:2015yja,Khachatryan:2015mva}.
To evaluate the impact on the measurement, the nominal simulation is compared to the one obtained after reweighting the contribution from extra \cPqb\ jets in the simulation by the data-to-theory scale factor measured in Ref.~\cite{Khachatryan:2015mva}.
A symmetric variation of the expected extra heavy-flavor content is used to estimate this uncertainty.
\end{itemize}
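As referenced in the item on top quark \pt{} modeling above, a minimal Python sketch of an event-by-event \pt{} weighting is given below; the exponential form and parameter values are illustrative placeholders, not the fitted parametrization used in the analysis.
\begin{verbatim}
# Minimal sketch (illustrative parametrization): event-by-event weights
# correcting the simulated top quark pt spectrum towards the data.
import math

def top_pt_weight(pt_top, pt_antitop, a=0.16, b=-0.0014):
    """Data/MC ratio modeled as exp(a + b*pt), applied to both top
    quarks; a and b are placeholders for the parametrized difference."""
    return math.sqrt(math.exp(a + b * pt_top)
                     * math.exp(a + b * pt_antitop))
\end{verbatim}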
\subsubsection{Experimental uncertainties}\label{sssec:expsysts}
\begin{itemize}
\item {\bf Jet energy scale and jet energy resolution:}
By design, the reconstructed jet energy does not affect the \ensuremath{m_{\mathrm{svl}}}\xspace{} observable.
However jet momenta are used in the event selection and therefore variations of the jet energy have an effect on the event yields that enter the bins of the \ensuremath{m_{\mathrm{svl}}}\xspace{} distributions.
The effects are estimated by rescaling the reconstructed jet energies depending on \pt{} and $\eta$ before performing the event selection.
The effect of jet energy resolution on the measured distributions is estimated by inflating or deflating the resolution within the measured uncertainties and propagating the effects to the final distributions.
The varied \ensuremath{m_{\mathrm{svl}}}\xspace{} distributions are used to generate pseudo-data, and the full differences with respect to the nominal sample are quoted as the systematic uncertainties (this recipe is common to several of the items below and is illustrated in the sketch after this list).
\item {\bf Unclustered energy:}
The missing transverse energy is used only in the event selection for the \ensuremath{\Pe\Pe}\xspace{} and \ensuremath{\Pgm\Pgm}\xspace{} channels to suppress events containing neutrinoless \cPZ~boson decays.
Since the DY yield is normalized from a dedicated data control region, the effect from the \MET{} resolution is expected to be small.
It is estimated by varying the amount of energy that is not clustered into jets in the \MET{} calculation by ${\pm}10\%$ and studying its impact on the observed \ensuremath{m_{\mathrm{svl}}}\xspace{} distributions.
\item {\bf Lepton momentum scale:}
The reconstructed lepton momenta directly affect the \ensuremath{m_{\mathrm{svl}}}\xspace{} spectrum.
The uncertainty in the measured energy scale for electrons depends on \pt{} and $\eta$ and varies between about 0.6\% in the central barrel region and about 1.5\% in the forward region~\cite{Khachatryan:2015hwa}.
The muon momentum scale is known within an uncertainty of about 0.2\%~\cite{Chatrchyan:2013sba}.
Varying the scales up and down within their measured uncertainties---as a function of \pt{} and $\eta$ for electrons---produces a shift in the \ensuremath{m_{\mathrm{svl}}}\xspace{} distribution that is propagated to the final mass measurement and quoted as a systematic uncertainty.
\item {\bf Lepton selection efficiency:}
As with the jet energy scale, the requirements applied when selecting lepton candidates for the analysis affect the event yields in the \ensuremath{m_{\mathrm{svl}}}\xspace{} distributions and can cause a slight change in the extracted top quark mass.
The measured electron and muon selection efficiencies are varied within their uncertainties and the difference is quoted as a systematic uncertainty.
\item {\bf \cPqb{} tagging efficiency and mistag rate:}
The \ttbar{} event selection relies on the use of a \cPqb{} tagging algorithm to select jets originating from the hadronization of a \cPqb{} quark.
The impact on \ensuremath{m_{\mathrm{svl}}}\xspace{} from the uncertainties in the signal and background efficiencies of this algorithm is estimated by varying the efficiencies within their measured uncertainties and propagating the effect to the final result.
\item {\bf Pileup:}
The effect of additional concurrent \Pp\Pp{} interactions on the measurement is estimated by varying the cross section for inelastic \Pp\Pp{} collisions used in the pileup generation by ${\pm}5\%$, and propagating the difference to the extracted \mtop{} result.
\item {\bf Secondary-vertex track multiplicity:}
The distribution of the number of tracks assigned to secondary vertices is not well described by simulation, as has been observed in several processes involving \cPqb{} quarks.
Generally, the data show about 5--10\% fewer tracks than the simulation.
As the analysis is carried out in exclusive bins of track multiplicity to minimize the impact of this issue, it only enters as a second-order effect when combining the results from different bins, as the individual bins would be assigned slightly different weights in simulation.
This is corrected for by reweighting each bin content by the yield observed in the data, and the impact of this reweighting on the final result is quoted as a remaining systematic uncertainty.
\item {\bf Secondary-vertex mass modeling:}
A discrepancy between the observed secondary vertex mass (\ie\ the invariant mass of the tracks used to reconstruct the vertex) and the one predicted in the simulation is observed.
The effect is propagated in the \ensuremath{m_{\mathrm{svl}}}\xspace{} shape by weighting the simulated events to reflect the observed distributions in each bin of track multiplicity, and the resulting shift in the extracted top quark mass is quoted as a systematic uncertainty.
\item {\bf Background normalization:}
Processes not involving top quarks constitute about 5\% of the overall selected events and their combined yield is allowed to float within about 30\% in the fit.
The normalization of the main background processes is furthermore determined in dedicated control samples in the data.
To estimate the uncertainty in the result stemming from the uncertainty in the background normalization, the expected yields of backgrounds are varied within their uncertainties, and the resulting change in the \ensuremath{m_{\mathrm{svl}}}\xspace{} shape is propagated to the final result.
These variations are observed to have a negligible impact on the measurement, as they are absorbed by upward and downward variations of the background yields in the fit.
\end{itemize}
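Many of the uncertainties listed above follow the same recipe: vary one modeling or detector parameter within its uncertainty, rebuild the \ensuremath{m_{\mathrm{svl}}}\xspace{} distributions, use them as pseudo-data, and quote the resulting shift in the calibrated mass. The sketch below illustrates only this recipe; the helper \texttt{extract\_mtop} is a hypothetical placeholder and not part of the actual analysis code.
\begin{verbatim}
# Illustrative sketch of the systematic-shift recipe (hypothetical
# helpers; not the actual CMS analysis code).

def systematic_shift(nominal_sample, varied_samples, extract_mtop):
    """Return the (up, down) shift of the extracted mass when the
    nominal pseudo-data are replaced by systematically varied ones."""
    m_nominal = extract_mtop(nominal_sample)
    shifts = [extract_mtop(s) - m_nominal for s in varied_samples]
    return max(shifts + [0.0]), min(shifts + [0.0])

# Example with a dummy mass extractor (arbitrary numbers):
up, down = systematic_shift(0, [3, -2], lambda s: 173 + s)
assert (up, down) == (3, -2)
\end{verbatim}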
\subsection{Results}\label{ssec:results}
The top quark mass is measured from the invariant mass distribution of leptons and reconstructed secondary vertices from $\PQb$~hadron decays using only charged particles.
After calibrating the measurement with simulated events, a value of
\begin{equation*}
\mtop=173.68 \pm 0.20\,{\rm (stat)}\,^{+1.58}_{-0.97}\,{\rm (syst)}\GeV
\end{equation*}
is obtained from the data, with a combined uncertainty of $^{+1.59}_{-0.99}\GeV$.
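Assuming that the statistical and systematic components are combined in quadrature, separately for the upward and downward variations (an assumption consistent with the quoted numbers, not a statement from the analysis), the combination can be reproduced as follows.
\begin{verbatim}
from math import hypot

stat = 0.20                    # statistical uncertainty (GeV)
syst_up, syst_dn = 1.58, 0.97  # asymmetric systematic uncertainty (GeV)

# Quadrature sum, applied separately to the up and down variations:
print(f"+{hypot(stat, syst_up):.2f} / -{hypot(stat, syst_dn):.2f} GeV")
# -> +1.59 / -0.99 GeV, matching the quoted combined uncertainty
\end{verbatim}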
The overall systematic uncertainty is dominated by the uncertainty in the \cPqb\ quark fragmentation and the modeling of kinematic properties of top quarks with minimal sensitivity to experimental uncertainties.
Figure~\ref{fig:results} shows the combined result as well as the values obtained separately for the five lepton channels and the three track multiplicity bins.
The observed trend as a function of the track multiplicity is compatible with the results obtained regarding the modeling of the relative momentum of secondary vertices inside jets, as discussed in Section~\ref{sec:modeling}.
\begin{figure*}[htp]
\centering
\includegraphics[width=0.90\textwidth]{Figure_010.pdf}
\caption{
Results of the \mtop{} measurement in the individual channels and their combination.
Smaller and thicker error bars show the statistical uncertainty, whereas the thinner bars show the combined statistical and systematic uncertainty.
The right panel shows the extracted mass when performing the analysis in separate track multiplicity bins, combining the lepton channels.
}
\label{fig:results}
\end{figure*}
\section{Summary and prospects}\label{sec:conclusions}
A novel measurement of the top quark mass has been presented, using an observable that relies entirely on the reconstruction of charged particles.
It shows minimal sensitivity to experimental sources of uncertainty.
The final result yields a value of $\mtop=173.68^{+1.59}_{-0.99}\GeV$, corresponding to a relative precision below one percent.
The overall uncertainty is dominated by the \cPqb\ quark fragmentation modeling uncertainty of $^{+1.00}_{-0.54}\GeV$ and by the uncertainty in the modeling of the top quark \pt{}, contributing $+0.82\GeV$.
Experimental uncertainties related to the understanding of jet energy scales only affect the event acceptance and are therefore virtually irrelevant to the final result.
Studies of the \cPqb\ quark fragmentation with reconstructed secondary vertices and inclusively reconstructed charm quark mesons are used to select the central \cPqb\ quark fragmentation shape and to validate the systematic uncertainty.
With the significantly larger data sets becoming available for analysis from the current 13\TeV\ run of the LHC, this method could be extended to constrain the \cPqb\ quark fragmentation, using the properties of the secondary vertices or charmed mesons, while measuring the top quark mass.
This is expected to lead to a significant reduction of the overall uncertainty.
Furthermore, theoretical uncertainties related to kinematic properties of top quarks and scale choices in QCD calculations are expected to decrease with the next generation of Monte Carlo event generators.
Finally, this result is complementary to standard measurements relying on kinematic properties of jets.
The precision of such analyses is typically limited by the uncertainty from the modeling of hadronization effects, which affects the understanding of the jet energy scale, while they are comparatively insensitive to the choice of \cPqb\ quark fragmentation model and to the modeling of top quark kinematic properties.
Therefore, a combination of this result with standard measurements would benefit from the largely independent sources of systematic uncertainty.
\section*{Acknowledgments}
\hyphenation{Bundes-ministerium Forschungs-gemeinschaft Forschungs-zentren} We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centers and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses. Finally, we acknowledge the enduring support for the construction and operation of the LHC and the CMS detector provided by the following funding agencies: the Austrian Federal Ministry of Science, Research and Economy and the Austrian Science Fund; the Belgian Fonds de la Recherche Scientifique, and Fonds voor Wetenschappelijk Onderzoek; the Brazilian Funding Agencies (CNPq, CAPES, FAPERJ, and FAPESP); the Bulgarian Ministry of Education and Science; CERN; the Chinese Academy of Sciences, Ministry of Science and Technology, and National Natural Science Foundation of China; the Colombian Funding Agency (COLCIENCIAS); the Croatian Ministry of Science, Education and Sport, and the Croatian Science Foundation; the Research Promotion Foundation, Cyprus; the Ministry of Education and Research, Estonian Research Council via IUT23-4 and IUT23-6 and European Regional Development Fund, Estonia; the Academy of Finland, Finnish Ministry of Education and Culture, and Helsinki Institute of Physics; the Institut National de Physique Nucl\'eaire et de Physique des Particules~/~CNRS, and Commissariat \`a l'\'Energie Atomique et aux \'Energies Alternatives~/~CEA, France; the Bundesministerium f\"ur Bildung und Forschung, Deutsche Forschungsgemeinschaft, and Helmholtz-Gemeinschaft Deutscher Forschungszentren, Germany; the General Secretariat for Research and Technology, Greece; the National Scientific Research Foundation, and National Innovation Office, Hungary; the Department of Atomic Energy and the Department of Science and Technology, India; the Institute for Studies in Theoretical Physics and Mathematics, Iran; the Science Foundation, Ireland; the Istituto Nazionale di Fisica Nucleare, Italy; the Ministry of Science, ICT and Future Planning, and National Research Foundation (NRF), Republic of Korea; the Lithuanian Academy of Sciences; the Ministry of Education, and University of Malaya (Malaysia); the Mexican Funding Agencies (CINVESTAV, CONACYT, SEP, and UASLP-FAI); the Ministry of Business, Innovation and Employment, New Zealand; the Pakistan Atomic Energy Commission; the Ministry of Science and Higher Education and the National Science Center, Poland; the Funda\c{c}\~ao para a Ci\^encia e a Tecnologia, Portugal; JINR, Dubna; the Ministry of Education and Science of the Russian Federation, the Federal Agency of Atomic Energy of the Russian Federation, Russian Academy of Sciences, and the Russian Foundation for Basic Research; the Ministry of Education, Science and Technological Development of Serbia; the Secretar\'{\i}a de Estado de Investigaci\'on, Desarrollo e Innovaci\'on and Programa Consolider-Ingenio 2010, Spain; the Swiss Funding Agencies (ETH Board, ETH Zurich, PSI, SNF, UniZH, Canton Zurich, and SER); the Ministry of Science and Technology, Taipei; the Thailand Center of Excellence in Physics, the Institute for the Promotion of Teaching Science and Technology of Thailand, Special Task Force for Activating Research and the National Science and Technology Development Agency of 
Thailand; the Scientific and Technical Research Council of Turkey, and Turkish Atomic Energy Authority; the National Academy of Sciences of Ukraine, and State Fund for Fundamental Researches, Ukraine; the Science and Technology Facilities Council, UK; the US Department of Energy, and the US National Science Foundation.
Individuals have received support from the Marie-Curie program and the European Research Council and EPLANET (European Union); the Leventis Foundation; the A. P. Sloan Foundation; the Alexander von Humboldt Foundation; the Belgian Federal Science Policy Office; the Fonds pour la Formation \`a la Recherche dans l'Industrie et dans l'Agriculture (FRIA-Belgium); the Agentschap voor Innovatie door Wetenschap en Technologie (IWT-Belgium); the Ministry of Education, Youth and Sports (MEYS) of the Czech Republic; the Council of Science and Industrial Research, India; the HOMING PLUS program of the Foundation for Polish Science, cofinanced from European Union, Regional Development Fund; the OPUS program of the National Science Center (Poland); the Compagnia di San Paolo (Torino); MIUR project 20108T4XTM (Italy); the Thalis and Aristeia programs cofinanced by EU-ESF and the Greek NSRF; the National Priorities Research Program by Qatar National Research Fund; the Rachadapisek Sompot Fund for Postdoctoral Fellowship, Chulalongkorn University (Thailand); and the Welch Foundation, contract C-1845.
\clearpage
\section{Introduction}
Problems concerning planar graphs have always been among the most extensively studied topics in graph theory.
In this paper,
we study a generalization of proper coloring introduced by Ore and Plummer in 1969~\cite{bib-ore69+}:
a {\em cyclic coloring} of a plane graph is a vertex-coloring such that any two vertices
incident with the same face receive distinct colors.
The Cyclic Coloring Conjecture of Borodin~\cite{bib-borodin84} asserts that every plane graph with maximum face size $\Delta^\star$ has a cyclic coloring with at most $\lfloor 3\Delta^\star/2\rfloor$ colors.
There are many results on the Cyclic Coloring Conjecture and related problems.
We would like to particularly mention the Facial Coloring Conjecture of~\cite{bib-kral05+},
which implies the Cyclic Coloring Conjecture for odd values of $\Delta^\star$.
This conjecture,
which was addressed e.g.~in~\cite{bib-havet10+,bib-havet+,bib-kral05+,bib-kral07+},
asserts that every plane graph has an $\ell$-facial coloring with at most $3\ell+1$ colors,
i.e.~a vertex coloring such that any vertices joined by a facial walk of size at most $\ell$ receive different colors.
We refer the reader to an excellent survey~\cite{bib-borodin13} by Borodin
for further results related to the Cyclic Coloring Conjecture.
Despite a significant amount of interest (see e.g.~\cite{bib-borodin92,bib-borodin99+,bib-havet+,bib-sanders01+,bib-zlamalova}),
the Cyclic Coloring Conjecture has been proven only for three values of $\Delta^\star$: the case $\Delta^\star=3$,
which is equivalent to the Four Color Theorem proven in~\cite{bib-appel77,bib-appel77b} (a simplified proof was given in~\cite{bib-robertson99+}),
the case $\Delta^\star=4$ known as Borodin's Six Color Theorem~\cite{bib-borodin84,bib-borodin95},
and the recently proven case $\Delta^\star=6$~\cite{bib-hebdige+}.
Amini, Esperet and van den Heuvel~\cite{bib-amini08+},
building on the work in~\cite{bib-havet07+,bib-havet08+},
proved an asymptotic version of the Cyclic Coloring Conjecture: for every $\varepsilon>0$,
there exists $\Delta_0$ such that every plane graph with maximum face size $\Delta^\star\ge\Delta_0$ has a cyclic coloring with at most $\left(\frac{3}{2}+\varepsilon\right)\Delta^\star$ colors.
The graphs which witness that the bound in the Cyclic Coloring Conjecture is the best possible contain vertices of degree two;
in particular, they are not $3$-connected.
In 1987, Plummer and Toft~\cite{bib-plummer87+} conjectured the following:
\begin{conjecture}[Plummer and Toft~\cite{bib-plummer87+}]
\label{conj}
Every $3$-connected plane graph with maximum face size $\Delta^\star$ has a cyclic coloring with at most $\Delta^\star+2$ colors.
\end{conjecture}
This conjecture is the main subject of this paper.
We would like to remark that Conjecture~\ref{conj} fails if the assumption on $3$-connectivity is replaced with the weaker assumption that the minimum degree is at least $3$~\cite{bib-plummer87+}.
It should also be noted that the upper bound stated in Conjecture~\ref{conj} is not tight for large $\Delta^\star$: Borodin and Woodall~\cite{bib-borodin02+} showed that every $3$-connected plane graph with maximum face size $\Delta^\star\ge 122$ has a cyclic coloring with at most $\Delta^\star+1$ colors,
and the required bound on the face size was lowered to $\Delta^\star\ge 60$ by Enomoto,
Hor{\v n}{\'a}k and Jendrol'~\cite{bib-enomoto01+}.
Conjecture~\ref{conj} has been proven for all but finitely many values of $\Delta^\star$.
The cases $\Delta^\star=3$ and $\Delta^\star=4$ follow from Four Color Theorem and Borodin's Six Color Theorem,
respectively.
The conjecture was proven for $\Delta^\star\ge 61$ in~\cite{bib-borodin02+},
for $\Delta^\star\ge 40$ in~\cite{bib-hornak00+},
for $\Delta^\star\ge 24$ in~\cite{bib-hornak99+} and finally for $\Delta^\star\ge 18$ in~\cite{bib-hornak10+}.
Our main result is a proof of the cases $\Delta^\star = 16$ and $\Delta^\star = 17$ of Conjecture~\ref{conj}:
\begin{theorem}
\label{main-thm}
Every 3-connected plane graph with maximum face size $\Delta^\star\in \{16,17\}$ has a cyclic coloring that uses at most $\Delta^\star+2$ colors.
\end{theorem}
We employ the discharging method to prove Theorem~\ref{main-thm};
we refer to the survey~\cite{bib-cranston13+} for a detailed exposition of the method.
We start by identifying, in Section~\ref{sec-redu}, a set of configurations that cannot be contained in a minimal counterexample,
i.e., a counterexample with the smallest number of vertices.
Such configurations are referred to as \emph{reducible configurations}.
We then consider a minimal counterexample $G$ and assign \emph{initial charges} to the vertices and faces of $G$
with the property that the sum of the initial charges is negative.
We then redistribute the charge using \emph{discharging rules},
which are described in Section~\ref{sec-rules}.
The redistribution preserves the overall sum of the charges.
Finally, we show in Section~\ref{sec-analysis} that if $G$ contains none of the reducible configurations, then
every vertex and face has non-negative charge after applying the rules, which is a contradiction.
Unfortunately, the arguments related to checking the reducibility of some of the configurations and
the analysis of the final charge turned out to be complex and
we had to resort to computer assistance.
We have made our programs verifying the correctness of our proof
available on-line at~\url{http://www.ucw.cz/~kral/cyclic-16/};
we have also uploaded their source codes to arXiv as ancillary files.
\section{Notation}
In this section, we briefly review the notation used in our proof.
Throughout this paper, all of the graphs that will be considered are plane graphs unless explicitly stated.
A \emph{$k$-vertex} is a vertex of degree $k$.
We also define a \emph{$(\geq k)$-vertex} to be a vertex with degree at least $k$, and
a \emph{$(\leq k)$-vertex} to be a vertex with degree at most $k$.
The \emph{size} of a face $f$ of a plane graph, denoted by $|f|$,
is the number of vertices that are incident with it.
Analogously to the definition of a $k$-vertex, a \emph{$k$-face} is a face of size $k$.
Similarly, a \emph{$(\geq k)$-face} and a \emph{$(\leq k)$-face} are faces that
have size at least $k$ and at most $k$, respectively.
The \emph{boundary walk} of a face in a plane graph is the sequence of vertices that bounds the face. Two vertices are
said to be \emph{cyclically adjacent} if they are incident with a common face.
The \emph{cyclic degree} of a vertex is the number of vertices which are cyclically adjacent to it.
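As a concrete illustration (ours, not part of the verification programs discussed later), cyclic adjacency and the cyclic degree can be computed directly from the boundary walks of a plane graph; representing each face by the tuple of vertices on its boundary walk is an assumption of this sketch.
\begin{verbatim}
def cyclic_degrees(faces):
    """Cyclic degrees of a plane graph given as a list of faces,
    each face being the tuple of vertices on its boundary walk."""
    neighbours = {}
    for face in faces:
        for v in face:
            neighbours.setdefault(v, set()).update(face)
    return {v: len(nbrs - {v}) for v, nbrs in neighbours.items()}

# Example: in the tetrahedron every pair of vertices shares a face,
# so every vertex has cyclic degree 3.
tetra = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
assert cyclic_degrees(tetra) == {0: 3, 1: 3, 2: 3, 3: 3}
\end{verbatim}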
Most of the configurations are depicted in Figures \ref{fig-triangles+column}--\ref{fig-gen} using the notation that we now describe.
A circled vertex in a configuration depicts its exact degree,
i.e., the vertex must be incident with as many edges as depicted in the figure.
The vertices depicted by bold circles are required to have the cyclic degree equal to the number given in the figure next to the vertex
in addition to having the degree as depicted.
In addition, we sometimes restrict the face sizes by writing the constraint on the face size in the middle of the face;
for example in the first configuration depicted in Figure~\ref{fig-gen},
the bottom middle face is required to have size at least $9$ and
the top left face is required to have size $\Delta^\star+6-\ell$,
where $\ell$ is the size of the bottom middle face.
When describing the discharging rules,
we use the following notation (see Figure~\ref{fig-triangles+column} for an illustration).
Let $v_1v_2$ be a part of the boundary walk of a face $f$.
With respect to a face $f$,
a triangle $T=v_1v_2v_3$ is an \emph{A-triangle} if $\deg(v_1)=\deg(v_2)=\deg(v_3)=3$,
a \emph{B-triangle} if $\deg(v_1)=\deg(v_2)=3$, $\deg(v_3)=4$, and
the neighbors $x_1$ and $x_2$ of $v_3$ distinct from $v_1$ and $v_2$ are adjacent, and
a \emph{C-triangle} if $T$ is neither an A-triangle nor a B-triangle.
If $v_1v_2$ is incident with a $4$-face $Q=v_1v_2v_3v_4$, $\deg(v_1)=\deg(v_2)=\deg(v_3)=\deg(v_4)=3$ and
$v_3v_4$ is incident with another $4$-face, we say that $Q$ is a \emph{column} (with respect to $f$).
If $u_1vu_2$ is a part of the boundary walk of a face $f$ and neither $u_1v$ nor $u_2v$ is contained in a $(\le\!4)$-face,
we say that $v$ is \emph{isolated} (with respect to $f$).
If $\deg(v)=4$ and $v$ is contained in a $(\le\!4)$-face $f'$ that does not share an edge with $f$,
then we say that $f'$ is the \emph{sink} of $v$;
otherwise, $v$ is the sink of itself.
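The triangle classification can be restated compactly as follows; this is a direct transcription of the definitions above, but encoding the relevant part of the configuration by three degrees and one adjacency flag is our choice.
\begin{verbatim}
def triangle_type(d1, d2, d3, x1x2_adjacent=False):
    """Classify a triangle T = v1 v2 v3, viewed from a face through
    the edge v1 v2, as an A-, B- or C-triangle.  d1, d2, d3 are the
    degrees of v1, v2, v3; x1x2_adjacent says whether the two
    neighbours of v3 outside T are adjacent (relevant for B only)."""
    if d1 == d2 == d3 == 3:
        return "A"
    if d1 == d2 == 3 and d3 == 4 and x1x2_adjacent:
        return "B"
    return "C"

assert triangle_type(3, 3, 3) == "A"
assert triangle_type(3, 3, 4, x1x2_adjacent=True) == "B"
assert triangle_type(3, 3, 5) == "C"
\end{verbatim}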
\begin{figure}
\begin{center}
\epsfbox{cyclic-16.38} \epsfbox{cyclic-16.39} \epsfbox{cyclic-16.40}
\end{center}
\caption{An A-triangle, a B-triangle and a column.}
\label{fig-triangles+column}
\end{figure}
\section{Construction of discharging rules}
A large part of our proof is computer-assisted.
However, the proof itself was also constructed in a computer-assisted way.
Once the types of the discharging rules are fixed, e.g.,
we have decided to transfer some amount of charge from a face of some given size $\ell$ to an incident $A$-triangle,
the conditions that the final charge of every vertex and face is non-negative become linear constraints.
More precisely,
there is a single variable for each rule type, and
a single linear constraint for each possible neighborhood structure of a vertex or a face that
is not excluded by reducible configurations.
Hence,
the amounts of charge transferred by the individual rules can be determined by solving a linear program (or
it can be determined that no such amounts exist for the given set of rule types and the existing reducible configurations).
We remark that this approach is not new and has been used by various researchers earlier.
Let us give more details about how we have proceeded in the case of our proof.
First, it is not clear what types of discharging rules should be considered.
We started with rule types close to those in Subsections~\ref{sec-large}, \ref{sec-med} and \ref{sec-heavy} and
later added further rule types in an ad hoc way.
We then repeated the following steps.
We ran a linear program solver to determine a minimal infeasible subset of the constraints.
Each such constraint corresponds to a particular neighborhood structure (configuration).
To proceed further, it was necessary to either find a new reducible configuration that would exclude one of these configurations or
add a new rule type that would move charge inside the configuration.
After this, we reran the linear program solver to determine a new minimal infeasible subset of the constraints.
When the solver produced a solution, we found a possible set of discharging rules, i.e., a proof.
Since each rule type adds a new variable to the linear program,
it is necessary to be careful with adding new rule types to keep the linear program of manageable size.
For example, it would have been ideal to have rules of the same types as those in Subsection~\ref{sec-addit} for all face sizes,
but this would have resulted in a linear program too large to be solved in a reasonable amount of time.
As a compromise, we started with the rougher rules from Subsection~\ref{sec-med} and
combined them with the finer rules from Subsection~\ref{sec-addit}.
Another concern might be that most linear program solvers (we have used the Gurobi solver) work in floating-point arithmetic;
however, the solution output by the solver can be rounded to rational values and checked with exact arithmetic computations.
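To make the setup concrete, the following toy example shows the shape of the feasibility problem. We used Gurobi in the actual proof; the sketch below substitutes \texttt{scipy.optimize.linprog}, and its coefficients are invented for illustration only.
\begin{verbatim}
from scipy.optimize import linprog

# Net coefficient of each rule amount x = (x0, x1) in the final charge
# of each (toy) configuration; positive means the configuration
# receives that amount, negative means it sends it.
A = [[ 1.0,  0.0],   # deficit -1, receives x0 once
     [-2.0,  1.0],   # surplus 2, sends x0 twice, receives x1
     [ 0.0, -1.0]]   # surplus 1, sends x1 once
initial = [-1.0, 2.0, 1.0]

# Feasibility of: initial[j] + (A x)[j] >= 0, i.e. (-A) x <= initial,
# with x >= 0 (linprog's default bounds); the objective is irrelevant.
res = linprog(c=[0.0, 0.0],
              A_ub=[[-a for a in row] for row in A],
              b_ub=initial)
print(res.status, res.x)   # status 0 means suitable amounts exist
\end{verbatim}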
\section{Reducible configurations}\label{sec-redu}
In this section, we describe reducible configurations that are used in our proof.
The reducible configurations in Figures \ref{triangle0}--\ref{fig-gen} are named using the following convention:
the name of the configuration refers to the size of a face that the configuration primarily concerns and
the subscript is used to distinguish different types of configurations related to faces of the same size.
In addition to the configurations presented in the figures,
there are two additional reducible configurations:
DEG is the configuration consisting of a single vertex with cyclic degree at most $\Delta^*+1$, and
TFEDGE is the configuration consisting of a $3$-face and a $(\le\!4)$-face sharing an edge.
These two configurations are reducible by \cite[Lemma 3.1(e)]{bib-hornak99+} and \cite[Lemma 3.6]{bib-hornak10+}, respectively.
We will also need the following proposition
to justify the reducibility of some of our configurations.
\begin{proposition}[Halin~\cite{bib-halin69}]
\label{prop-3ver}
If $G$ is a $3$-connected graph with at least five vertices,
then every vertex of degree three is incident with an edge $e$ such that
the contraction of $e$ yields a $3$-connected graph.
\end{proposition}
\subsection{Configurations TRIANGLE}
In this subsection, we introduce reducible configurations TRIANGLE$_0$, TRIANGLE$_1$ and TRIANGLE$_2$, and
argue that they are reducible.
The configurations can be found in Figures~\ref{triangle0}, \ref{triangle1} and \ref{triangle2}.
When analyzing the configurations, we use the following notation.
Let $A$ be the face that is incident with the vertices $a$ and $c$ but is not the $3$-face,
and denote by $A_{{v_1},\ldots,{v_n}}$ the set of colors that appear on all of the vertices that
are incident with $A$ except for the vertices $v_1,\ldots,v_n$.
Similarly, we define the faces $B$ and $C$ to be the faces incident with the edges $ab$ and $bc$, respectively.
\begin{lemma}\label{triangle-Lemma}
The three configurations denoted by TRIANGLE$_0$, which are depicted in Figure~\ref{triangle0}, and
the configuration denoted by TRIANGLE$_1$, which is depicted in Figure~\ref{triangle1}, and
the two configurations denoted by TRIANGLE$_2$, which are depicted in Figure~\ref{triangle2},
are reducible.
\end{lemma}
\begin{figure}
\begin{center}
\epsfbox{cyclic-16.47}
\hskip 4mm
\epsfbox{cyclic-16.48}
\hskip 4mm
\epsfbox{cyclic-16.49}
\end{center}
\caption{The configurations TRIANGLE$_0$.}
\label{triangle0}
\end{figure}
\begin{figure}
\begin{center}
\epsfbox{cyclic-16.50}
\end{center}
\caption{The configuration TRIANGLE$_1$.}
\label{triangle1}
\end{figure}
\begin{figure}
\begin{center}
\epsfbox{cyclic-16.51}
\hskip 4mm
\epsfbox{cyclic-16.52}
\end{center}
\caption{The configurations TRIANGLE$_2$.}
\label{triangle2}
\end{figure}
\begin{proof}
Assume that a minimal counterexample $G$ contains one of the configurations in Figures~\ref{triangle0},~\ref{triangle1} or~\ref{triangle2} and let $\ell=|C|$.
Note that $\ell\ge 5$ (otherwise, $G$ would contain the reducible configuration TFEDGE).
First contract the edge $cf$.
If the resulting graph were not $3$-connected,
then there would exist a vertex $x$ such that $c$, $f$ and $x$ form a vertex cut,
which implies that $f$ and $x$ form a vertex cut of size two in $G$, which is impossible.
Hence, the resulting graph is $3$-connected and
the minimality of $G$ implies that
the resulting graph has a cyclic coloring with $\Delta^\star+2$ colors.
This yields a coloring of all vertices of $G$ except for $c$.
If $A_c\cap C_{cf}$ is non-empty, then $c$ is cyclically adjacent to vertices of at most $\Delta^\star+1$ colors and
we can complete the coloring. Hence, assume that $A_c\cap C_{cf}=\emptyset$.
Without loss of generality,
we can assume that $a$ was colored with $1$, $b$ with $2$, $e$ with $3$, $f$ with $\ell$ and $d$ with $\Delta^\star+2$.
We can also assume that $C_{bcef}$ contains the colors from $4$ to $\ell-1$ and $A_{acdf}$ contains the colors from $\ell+1$ to $\Delta^\star+1$.
We first analyze the three configurations depicted in Figure~\ref{triangle0}.
If we can recolor $a$ with a color from $3,\dots,\ell-1$, then we can color $c$ with $1$, so $\{3,\dots, \ell-1\} \subseteq B_{abd}$.
If we can recolor $b$ with a color from $\ell+1,\dots,\Delta^\star+1$, then $c$ can be colored with $2$, hence $\{\ell+1,\dots,\Delta^\star+1\} \subseteq B_{abd}$.
Therefore, $B_{abd}$ contains all the colors from $3$ to $\Delta^\star+1$,
which is impossible since $|B|\le\Delta^\star$.
We next analyze the configuration depicted in Figure~\ref{triangle1}.
If we can recolor $b$ with a color from $\ell+1,\dots,\Delta^\star+2$, then
we can color $c$ with $2$, hence $\{\ell+1,\dots,\Delta^\star+2\} \subseteq B_{abe}$.
Likewise, if we can recolor $a$ with a color from $4,\dots,\ell-1$, then
we can color $c$ with $1$.
In particular, the vertex $a$ is cyclically adjacent to vertices with the colors from $4$ to $\ell-1$ and
to two vertices of each of the colors from $\{\ell+1,\dots,\Delta^\star+2\}$ (once on the face $A$ and once on the face $B$).
In addition to these $\ell-4+2(\Delta^\star-\ell+2)=2\Delta^\star-\ell$ vertices, $a$ is also cyclically adjacent to $b$, $c$, $e$ and $f$.
Hence, its cyclic degree must be at least $2\Delta^\star-\ell+4$,
which violates the description of the configuration.
It remains to analyze the configurations depicted in Figure~\ref{triangle2}.
Let $D$ be the set containing all colors in $B_{ab}$ and
the color assigned to the vertex of the $4$-face containing $b$ that is not adjacent to $b$.
Since the cyclic degree of $b$ is at most $\Delta^\star+3$, it holds that $|D|\le \Delta^\star+3-\ell$.
If we can recolor $b$ with a color from $A_{acf}$, then $c$ can be colored with $2$.
Hence, $A_{acf}\subseteq D$.
If $a$ can be recolored with $3$ or $4$, we can color $c$ with $1$.
Since this is impossible and $4\not\in A_c$, it holds that $\{3,4\}\subseteq D$.
We conclude that $D$ contains at least $|A_{acf}|+2=\Delta^\star+4-\ell$ colors,
which exceeds its size.
\end{proof}
\subsection{Computer assisted cases}
The remaining reducible configurations used in the proof are depicted in Figures~\ref{fig-four}--\ref{fig-gen}.
The configuration FOUR$_0$, which is depicted in Figure~\ref{fig-four},
is reducible by~\cite[Lemma 3.1(c)]{bib-hornak99+} and~\cite[Lemma 3.1(d)]{bib-hornak99+}.
The reducibility of the remaining configurations was verified with the assistance of a computer.
We have independently prepared two programs,
which are available at \url{http://www.ucw.cz/~kral/cyclic-16/} as
\texttt{test-reducibility1.c} and \texttt{test-reducibility2.cc}.
The input files needed to check the reducibility of the configurations are also available on-line.
We next describe the structure of the input files and the way the configurations are reduced.
\begin{figure}
\begin{center}
\epsfbox{cyclic-16.14} \epsfbox{cyclic-16.15} \epsfbox{cyclic-16.16}
\end{center}
\caption{The configurations FOUR$_0$, FOUR$_1$ and FOUR$_2$.}
\label{fig-four}
\end{figure}
\begin{figure}
\begin{center}
\epsfbox{cyclic-16.42}
\end{center}
\caption{The configuration FIVE.}
\end{figure}
\begin{figure}
\begin{center}
\epsfbox{cyclic-16.1} \hskip 4mm \epsfbox{cyclic-16.2} \hskip 4mm \epsfbox{cyclic-16.3}
\vskip 2mm
\epsfbox{cyclic-16.4} \hskip 4mm \epsfbox{cyclic-16.5} \hskip 4mm \epsfbox{cyclic-16.6}
\end{center}
\caption{The configurations SIX$_0$.}
\end{figure}
\begin{figure}
\begin{center}
\epsfbox{cyclic-16.8} \hskip 4mm \epsfbox{cyclic-16.43} \hskip 4mm \epsfbox{cyclic-16.44}
\end{center}
\caption{The configurations SIX$_1$, SIX$_2$ and SIX$_3$.}
\end{figure}
\begin{figure}
\begin{center}
\epsfbox{cyclic-16.7} \hskip 4mm \epsfbox{cyclic-16.9} \hskip 4mm \epsfbox{cyclic-16.10}
\end{center}
\caption{The configurations SEVEN$_0$.}
\end{figure}
\begin{figure}
\begin{center}
\epsfbox{cyclic-16.11} \hskip 4mm \epsfbox{cyclic-16.12} \hskip 4mm \epsfbox{cyclic-16.45}
\end{center}
\caption{The configurations SEVEN$_1$, SEVEN$_2$ and SEVEN$_3$.}
\end{figure}
\begin{figure}
\begin{center}
\epsfbox{cyclic-16.13} \hskip 4mm \epsfbox{cyclic-16.17} \hskip 4mm \epsfbox{cyclic-16.18} \hskip 4mm \epsfbox{cyclic-16.19}
\end{center}
\caption{The configurations EIGHT$_0$.}
\end{figure}
\begin{figure}
\begin{center}
\epsfbox{cyclic-16.20} \hskip 4mm \epsfbox{cyclic-16.21} \hskip 4mm \epsfbox{cyclic-16.22}
\end{center}
\caption{The configurations EIGHT$_1$, EIGHT$_2$ and EIGHT$_3$.}
\end{figure}
\begin{figure}
\begin{center}
\epsfbox{cyclic-16.23} \hskip 4mm \epsfbox{cyclic-16.24}
\end{center}
\caption{The configurations EIGHT$_4$.}
\end{figure}
\begin{figure}
\begin{center}
\epsfbox{cyclic-16.25} \hskip 4mm \epsfbox{cyclic-16.26}
\end{center}
\caption{The configurations EIGHT$_5$ and EIGHT$_6$.}
\end{figure}
\begin{figure}
\begin{center}
\epsfbox{cyclic-16.27}
\end{center}
\caption{The configuration TEN.}
\end{figure}
\begin{figure}
\begin{center}
\epsfbox{cyclic-16.28} \hskip 4mm \epsfbox{cyclic-16.29} \hskip 4mm \epsfbox{cyclic-16.30}
\end{center}
\caption{The configurations GEN$_0$, GEN$_1$ and GEN$_2$.}
\label{fig-gen}
\end{figure}
We have verified that all configurations depicted in Figures~\ref{fig-four}--\ref{fig-gen} are reducible in the following manner.
If a possible minimal counterexample $G$ contains the configuration in question,
we replace it with a configuration with a smaller number of vertices to obtain a graph $G'$.
Each input file consists of two blocks: the first block describes the new configuration and
the second block the original configuration, i.e., the configuration that we are verifying to be reducible.
The two blocks have a similar structure.
The first line of each block contains two integers $m$ and $n$.
The integer $m$ is the number of faces forming the configuration and
$n$ is the number of vertices with no neighbors outside the configuration (these are the circled vertices in the figures).
Let us call such vertices \emph{internal}.
Each of the following $m$ lines describes one of the faces of the configuration.
There are two kinds of faces: \emph{bounded} faces with a specific size and
\emph{unbounded} faces with size between $\Delta^\star-c_1$ and $\Delta^\star-c_2$ (inclusively) for some $c_1$ and $c_2$.
A line describing a bounded face starts with $0$ and it is followed by the list of vertices incident with the face.
The internal vertices incident with the face are represented by numbers between $1$ and $n$ and
the remaining vertices incident with the face are represented by lowercase letters.
A description of an unbounded face in the first block starts with a range $a_1$--$a_2$; it is possible that $a_1=a_2$.
The rest of the line contains all internal vertices of the face and possibly some others represented by lowercase letters.
In addition to these vertices, the face is incident with $k$ vertices
where $k$ satisfies that $\Delta^\star+2-a_2\le k\le \Delta^\star+2-a_1$ (note that $a_i\not=c_i$ in general).
In the second block,
the line describing an unbounded face starts with a positive integer giving the index of the corresponding face
in the first block (the indices start from one).
For example, the input file to verify the reducibility of the configuration EIGHT$_0$ that is depicted in Figure~\ref{fig-explred}
is the following.
\begin{verbatim}
5 9
5-7 a 8,9
8-9 - 5,6,7,9
7-9 - 2,3,4,5,6
5-7 b 1,2
0 ab 1,3,4,7,8
5 10
1 a 8,9
2 - 5,6,7,9,10
3 - 2,3,4,5,6
4 b 1,2
0 ab 1,3,4,7,8,10
\end{verbatim}
\begin{figure}
\begin{center}
\epsfbox{cyclic-16.53} \hskip 4mm \epsfbox{cyclic-16.56}
\end{center}
\caption{An example of how one of the configurations EIGHT$_0$ is reduced; the new configuration is on the right.}
\label{fig-explred}
\end{figure}
\noindent Note that we have not specified the three $3$-faces formed by internal vertices,
e.g., the one formed by the vertices $1$, $2$ and $3$,
since the constraints that they impose on the coloring are implied by the presence of the other faces.
In addition, we have also not specified the existence of the other $3$-face containing the vertex $2$.
If the configuration can be checked to be reducible without this additional assumption,
it is also reducible with this additional assumption (we give more details below).
The program assumes the existence of a cyclic coloring of $G'$ using at most $\Delta^\star+2$ colors and
checks using this assumption that $G$ also has a cyclic coloring using at most $\Delta^\star+2$ colors.
When doing so, we assume that all the faces described in the input are pairwise different.
For example, the face incident with $8$ and $9$ is different from the face incident with $1$ and $2$
in Figure~\ref{fig-explred}.
In all the configurations that we analyze, all the faces share vertices with a single face of the configuration and
hence this assumption is valid because the graph $G$ is $3$-connected.
Another fact that needs to be verified is that $G'$ is $3$-connected;
for most of our reductions, this is implied by Proposition~\ref{prop-3ver}
since the contracted edge is incident with a $3$-vertex contained in a $3$-face formed by three $3$-vertices (in the example,
the edge contracted joins the vertices $7$ and $10$).
In the remaining few cases, this follows by an easy analysis of the configurations.
In addition, the ranges of the numbers of non-internal vertices on unbounded faces
are determined using the absence of the configurations DEG and TRIANGLE$_0$.
For example, since the vertex $3$ has cyclic degree at least $\Delta^\star+3$,
the unbounded face incident with it must have size between $\Delta^\star-2$ and $\Delta^\star$.
Consequently, the number of non-internal vertices incident with this face is between $\Delta^\star-7$ and $\Delta^\star-5$,
which corresponds to the range $7$--$9$ given in the input file.
We now describe how the program checks the existence of a cyclic coloring of $G$.
The program enumerates all possible colorings of non-internal vertices and
checks whether the coloring extends in $G'$, and if so,
it also checks that it extends in $G$.
Note that some of the colorings of non-internal vertices considered by the program are not feasible.
For example, we have neglected in the considered configuration one of the $3$-faces containing the vertex $2$ and
the constraints that it imposes.
Since testing the extendibility of a larger set of colorings does not harm the validity of our arguments,
this does not affect the correctness of our arguments as long as all the constraints on the coloring of internal vertices are represented.
In fact, this relaxation is useful in the considered case
since the very same input file can be used to justify the reducibility of all the configurations EIGHT$_0$.
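In pseudocode, the core of the test can be summarized as follows; this is a simplified abstraction of \texttt{test-reducibility1.c} and \texttt{test-reducibility2.cc}, which additionally handle the input parsing and the face-size ranges described above.
\begin{verbatim}
def extends(coloring, internal, colors, cyc_adj):
    """Backtracking test: can `coloring` of the non-internal vertices
    be completed on the `internal` vertices so that cyclically
    adjacent vertices receive distinct colors?"""
    if not internal:
        return True
    v, rest = internal[0], internal[1:]
    for c in colors:
        if all(coloring.get(u) != c for u in cyc_adj[v]):
            coloring[v] = c
            ok = extends(coloring, rest, colors, cyc_adj)
            del coloring[v]
            if ok:
                return True
    return False

# For every enumerated coloring chi of the non-internal vertices:
# if extends(chi, internal_of_G_prime, ...) holds, the program checks
# that extends(chi, internal_of_G, ...) holds as well.
\end{verbatim}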
\section{Discharging rules}
\label{sec-rules}
In this section, we describe the discharging phase of our proof.
Each vertex $v$ of a minimal counterexample is assigned charge $\deg(v)-4$ and each face $f$ is assigned $|f|-4$.
Using Euler's formula, the overall sum of the initial charge is $-8$.
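For completeness, here is the computation behind this value: writing $n$, $m$ and $s$ for the numbers of vertices, edges and faces of $G$, we have $\sum_v \deg(v)=\sum_f |f|=2m$, and Euler's formula $n-m+s=2$ yields
$$\sum_v \left(\deg(v)-4\right)+\sum_f \left(|f|-4\right)=(2m-4n)+(2m-4s)=4(m-n-s)=-8\;\mbox{.}$$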
The charge then gets redistributed among the vertices and faces as follows.
First,
each $3$-vertex $v$ that is contained in exactly one $(\le\!4)$-face gets $1$ from this face, and
each $3$-vertex $v$ that is contained in two $4$-faces $f_1$ and $f_2$ gets $\frac{1}{2}$ from each of $f_1$ and $f_2$;
note that a $3$-vertex cannot be contained in a $3$-face and a $(\le\!4)$-face.
Other rules are more complex and are described in the rest of the section.
We start with simpler rules to redistribute the charge, which we call basic rules, and
we then tune the discharging process by introducing more complex rules.
\subsection{Basic rules for faces of size at least 12}\label{sec-large}
Each face $f_0$ of size $\ell\ge 12$ redistributes its charge as follows.
\begin{itemize}
\item Each A-triangle, B-triangle, and column incident with $f_0$ receives \texttt{weak}$_{\ell}$ from $f_0$.
\item Each C-triangle and non-column $4$-face that shares an edge $v_1v_2$ with $f_0$
receives $\text{\texttt{small}}_{\ell, a(v_1)}+\text{\texttt{small}}_{\ell, a(v_2)}$ from $f_0$,
where $a(v_i)$ is $0$ if the third face incident with $v_i$ is a $(\le\!4)$-face, and $a(v_i)$ is $1$, otherwise.
\item The sink of each isolated vertex incident with $f_0$ receives \texttt{iso}$_{\ell}$ from $f_0$.
\end{itemize}
\noindent The amounts that are sent are defined in the following table.
\begin{center}
\begin{tabular}{|c|cccc|}
\hline
$\ell$ & \texttt{weak}$_{\ell}$ & \texttt{small}$_{\ell,0}$ & \texttt{small}$_{\ell,1}$ & \texttt{iso}$_{\ell}$\\
\hline
$12$ & $\frac{4}{3}$ & $\frac{2}{3}$ & $\frac{1}{3}$ & $\frac{23827}{36960}$ \\
$13$ & $\frac{14023}{10080}$ & $\frac{14023}{20160}$ & $\frac{6137}{20160}$ & $\frac{1097}{1680}$\\
$(\ge\!14)$ & $2\bigl(1-\frac{4}{\ell}\bigr)$ & $1-\frac{4}{\ell}$ & $\frac{1}{2}\bigl(1-\frac{4}{\ell}\bigr)$ & $1-\frac{4}{\ell}$ \\
\hline
\end{tabular}
\end{center}
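For reference, these amounts can be encoded with exact rational arithmetic; the following is a transcription of the table (the function name is ours).
\begin{verbatim}
from fractions import Fraction as F

def amounts(l):
    """Charge sent by an l-face, l >= 12, as in the table above."""
    if l == 12:
        return {"weak": F(4, 3), "small0": F(2, 3),
                "small1": F(1, 3), "iso": F(23827, 36960)}
    if l == 13:
        return {"weak": F(14023, 10080), "small0": F(14023, 20160),
                "small1": F(6137, 20160), "iso": F(1097, 1680)}
    w = 1 - F(4, l)   # for l >= 14 the row is (2w, w, w/2, w)
    return {"weak": 2 * w, "small0": w, "small1": w / 2, "iso": w}
\end{verbatim}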
\subsection{Basic rules from faces of size between 5 and 11}\label{sec-med}
Fix an $\ell$-face $f_0$ with $5\le \ell\le 11$.
Let $uv_1v_2$ be a part of the boundary walk of a face $f_0$, and
$f$ the other face containing the edge $v_1v_2$.
Suppose that $f$ is either a C-triangle or a $4$-face, and
let $f'$ be the face incident with $uv_1$ distinct from $f_0$.
We define $t_f(v_1)$ as follows (when the face $f$ is clear from the context, we will omit the subscript).
\[ t_f(v_1)=
\begin{cases}
0 & \mbox{if $\deg(v_1)\ge 5$ and $|{f'}|\geq 5$,} \\
1 & \mbox{if $|{f'}| \leq 4$, and} \\
2 & \mbox{if $\deg(v_1)\le 4$ and $|{f'}| \geq 5$.}
\end{cases}
\]
\noindent We next define $t(f)$ to be the value given by the following table.
\begin{center}
\begin{tabular}{|c|ccc|}
\hline
\diagbox{$t(v_1)$}{$t(v_2)$}&0&1&2\\
\hline
0&0&1&2\\
1&1&3&4\\
2&2&4&5\\
\hline
\end{tabular}
\end{center}
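Since the table is symmetric in $t(v_1)$ and $t(v_2)$, the combined value $t(f)$ amounts to a lookup on the sorted pair; the following transcription (ours) makes this explicit.
\begin{verbatim}
# t(f) from t(v1) and t(v2), transcribed from the symmetric table.
T_TABLE = {(0, 0): 0, (0, 1): 1, (0, 2): 2,
           (1, 1): 3, (1, 2): 4, (2, 2): 5}

def t_of_face(t1, t2):
    return T_TABLE[tuple(sorted((t1, t2)))]

assert t_of_face(2, 1) == 4
\end{verbatim}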
The face $f_0$ sends $A_\ell$ to each incident A-triangle,
sends $B_\ell$ to each incident B-triangle, and sends $G_\ell$ to each incident column.
The amounts of charge that are sent are determined in the following table (note that $\ell\ge 6$
since a minimal counterexample does not contain TRIANGLE$_0$ or FOUR$_0$).
\begin{center}
\begin{tabular}{|c|ccc|}
\hline
$\ell$ & $A_\ell$ & $B_\ell$ & $G_\ell$ \\
\hline
6 & $\frac{1}{1}$ & $\frac{3}{4}$ & $\frac{3}{4}$ \\
7 & $\frac{1}{1}$ & $\frac{14}{15}$ & $\frac{9}{10}$ \\
8 & $\frac{1}{1}$ & $\frac{14}{15}$ & $\frac{82}{105}$ \\
9 & $\frac{17383}{15120}$ & $\frac{17383}{15120}$ & $\frac{2743}{2520}$ \\
10 & $\frac{8983}{7560}$ & $\frac{8983}{7560}$ & $\frac{16217}{15120}$ \\
11 & $\frac{4}{3}$ & $\frac{4}{3}$ & $\frac{4}{3}$ \\
\hline
\end{tabular}
\end{center}
If $f_0$ shares an edge $v_1v_2$ with a C-triangle $f$,
then $f_0$ sends $C_{\ell,t(f)}$ to $f$ if $\ell\le 7$, and $C_{\ell,t(v_1)}+C_{\ell,t(v_2)}$ to $f$ if $\ell>7$.
Similarly, if $f_0$ shares an edge $v_1v_2$ with a non-column $4$-face $f$,
then $f_0$ sends $D_{\ell,t(f)}$ to $f$ if $\ell\le 7$, and $D_{\ell,t(v_1)}+D_{\ell,t(v_2)}$ to $f$ if $\ell>7$.
The amounts of charge sent are given in the following table.
\begin{center}
\begin{tabular}{|c|cccccc|cccccc|}
\hline
$\ell$ & $C_{\ell,0}$ & $C_{\ell,1}$ & $C_{\ell,2}$ & $C_{\ell,3}$ & $C_{\ell,4}$ & $C_{\ell,5}$ & $D_{\ell,0}$ & $D_{\ell,1}$ & $D_{\ell,2}$ & $D_{\ell,3}$ & $D_{\ell,4}$ & $D_{\ell,5}$ \\
\hline
5 & $-\frac{11507}{36960}$ & $-\frac{7}{40}$ & $\frac{349}{840}$ & $-\frac{1}{7}$ & $\frac{13}{30}$ & $\frac{53}{120}$ & $\frac{1}{4}$ & $\frac{0}{1}$ & $\frac{349}{840}$ & $\frac{1}{15}$ & $\frac{1}{8}$ & $\frac{4}{7}$ \\
6 & $-\frac{10}{33}$ & $\frac{1}{336}$ & $\frac{1}{2}$ & $\frac{0}{1}$ & $\frac{1}{2}$ & $\frac{97}{160}$ & $\frac{1}{4}$ & $\frac{0}{1}$ & $\frac{3}{8}$ & $-\frac{13}{60}$ & $\frac{1}{8}$ & $\frac{67}{120}$ \\
7 & $\frac{2}{55}$ & $\frac{1}{336}$ & $\frac{211}{336}$ & $\frac{0}{1}$ & $\frac{1}{2}$ & $\frac{13}{15}$ & $\frac{1}{4}$ & $\frac{0}{1}$ & $\frac{3}{8}$ & $\frac{3}{7}$ & $\frac{3281}{20160}$ & $\frac{13}{15}$ \\
8 & $\frac{583}{1680}$ & $\frac{193}{840}$ & $\frac{7}{15}$ & & & & $\frac{41}{105}$ & $\frac{1}{4}$ & $\frac{1}{3}$ & & & \\
9 & $\frac{7223}{30240}$ & $\frac{1517}{7560}$ & $\frac{17383}{30240}$ & & & & $\frac{1009}{2160}$ & $\frac{5}{18}$ & $\frac{5017}{10080}$ & & & \\
10 & $\frac{83}{378}$ & $\frac{47851}{166320}$ & $\frac{8983}{15120}$ & & & & $\frac{20743}{40320}$ & $\frac{0}{1}$ & $\frac{4615}{8064}$ & & & \\
11 & $\frac{17}{33}$ & $\frac{7}{22}$ & $\frac{3}{5}$ & & & & $\frac{13}{22}$ & $\frac{26}{165}$ & $\frac{13}{22}$ & & & \\
\hline
\end{tabular}
\end{center}
Finally, suppose that $u_1vu_2$ is a part of the boundary of $f_0$ and $v$ is isolated.
For $i=1,2$, let $f_i$ be the face incident with $u_iv$ distinct from $f_0$.
If $|f_1|\ge r(\ell)$ and $|f_2|\ge r(\ell)$, then $f_0$ sends $E_{\ell,1}$ to the sink of $v$.
Otherwise, $f_0$ sends $E_{\ell,0}$ to the sink of $v$.
The values of $r(\ell)$, $E_{\ell,0}$ and $E_{\ell,1}$ are given in the following table.
\begin{center}
\begin{tabular}{|c|c|cc|}
\hline
$\ell$ & $r(\ell)$ & $E_{\ell,0}$ & $E_{\ell,1}$ \\
\hline
5 & 15 & $\frac{1}{7}$ & $-\frac{61}{240}$ \\
6 & 14 & $\frac{49}{240}$ & $-\frac{1}{15}$ \\
7 & 13 & $\frac{79}{240}$ & $-\frac{1}{15}$ \\
8 & 13 & $\frac{41}{105}$ & $\frac{9}{28}$ \\
9 & 12 & $\frac{7}{15}$ & $\frac{1}{3}$ \\
10 & 12 & $\frac{7}{15}$ & $\frac{1}{3}$ \\
11 & 11 & $\frac{7}{15}$ & $\frac{1}{3}$ \\
\hline
\end{tabular}
\end{center}
\subsection{Basic rules for vertices of degree five and more}\label{sec-heavy}
In this subsection, we present basic rules for $(\ge\!5)$-vertices.
First, every $(\ge\!5)$-vertex sends $\frac{1}{4}$ to each incident $4$-face.
Every $5$-vertex incident with exactly one triangle $f$ and $m$ $4$-faces sends $\texttt{5\_to\_tri\_1}_m$ to $f$,
where $\texttt{5\_to\_tri\_1}_0=\frac{767}{1680}$, $\texttt{5\_to\_tri\_1}_1=\frac{737}{1680}$ and
$\texttt{5\_to\_tri\_1}_2=\frac{37}{120}$.
Finally, if a $5$-vertex $v$ is incident with faces $f_1,\ldots,f_5$ (in this order) and $|f_2|=|f_4|=3$,
then $v$ sends $\texttt{5\_to\_tri\_2\_light}=\frac{83}{140}$ to $f_2$ if $|f_1|\le 7$ and $|f_3|\le 7$, and
$\texttt{5\_to\_tri\_2\_heavy}=\frac{57}{140}$, otherwise.
This rule also applies to $f_4$ in the symmetric way.
Every $6$-vertex incident with either exactly one triangle or two triangles sharing an edge
sends $\texttt{6\_to\_tri\_le2\_adj}=\frac{63}{80}$ to each incident triangle.
If a $6$-vertex is incident with two triangles that do not share an edge,
then it sends $\texttt{6\_to\_tri\_2\_opp}=\frac{767}{1680}$ to each of the two triangles.
If a $6$-vertex $v$ is incident with faces $f_1,\ldots,f_6$ (in this order) and $|f_2|=|f_4|=|f_6|=3$,
then $v$ sends $\texttt{6\_to\_tri\_3\_light}=\frac{113}{120}$ to $f_2$ if $\min(|f_1|,|f_3|)=5$ and $\max(|f_1|,|f_3|)\le 7$,
$v$ sends $\texttt{6\_to\_tri\_3\_all6}=\frac{8}{15}$ to $f_2$ if $|f_1|=|f_3|=6$, and
$v$ sends $\texttt{6\_to\_tri\_3\_heavy}=\frac{881}{1680}$ to $f_2$, otherwise.
The rule symmetrically applies to $f_4$ and $f_6$.
Finally, let $v$ be a $(\ge\!7)$-vertex and
let $f_1,\ldots,f_5$ be consecutive faces incident with $v$ such that $f_3$ is a $3$-face.
The vertex $v$ sends $\texttt{6\_to\_tri\_3\_light}=\frac{113}{120}$ to $f_3$ if $|f_1|=|f_5|=3$,
$\texttt{6\_to\_tri\_le2\_adj}=\frac{63}{80}$ if $\min\{|f_1|,|f_5|\}\le 4$ and $\max\{|f_1|,|f_5|\}\not=3$, and
$\texttt{6\_to\_tri\_2\_opp}=\frac{767}{1680}$, otherwise.
\subsection{Additional charge sent to 3-faces and 4-faces}\label{sec-addit}
Let $f_0$ be a $(\ge\!6)$-face and $u_1v_1v_2u_2$ be a part of its boundary walk.
Let $f_i$ be the other face incident with $u_iv_i$, $i=1,2$, and $f$ the other face incident with $v_1v_2$.
By symmetry, we can assume that $|f_1|\le |f_2|$.
If $f_0$ is a $6$-face, both $v_1$ and $v_2$ are $3$-vertices, $|f_i|\le \Delta^\star-1$ for $i=1,2$, and
$f$ is a non-column $4$-face, then $f_0$ sends $\texttt{light\_D\_extra} = \frac{1}{30}$ to $f$.
If $f_0$ is a $7$-face, both $v_1$ and $v_2$ are $3$-vertices, $|f_i|\le \Delta^\star-1$ for $i=1,2$, and
$f$ is a C-triangle, then $f_0$ sends $\texttt{light\_C\_extra}=\frac{1}{30}$ to $f$.
If $f_0$ is a $7$-face, $f$ is an A-triangle, and
$|f_1|=\Delta^\star-1$ (note that $|f_1|\ge\Delta^\star-1$ because of the absence of the configuration TRIANGLE$_0$
in a minimal counterexample),
then $f_0$ sends $\texttt{short\_to\_lightA}_{7,\Delta^\star-1,\Delta^\star-1}=\frac{1}{15}$ to $f$ if $|f_2|=\Delta^\star-1$, and
$f_0$ sends $\texttt{short\_to\_lightA}_{7,\Delta^\star-1,\Delta^\star}=\frac{1}{30}$ to $f$ if $|f_2|=\Delta^\star$.
If $f_0$ is an $8$-face, $f$ is an A-triangle, and $|f_1|\in\{\Delta^\star-2,\Delta^\star-1\}$,
then $f_0$ sends \texttt{short\_to\_lightA}$_{8,|f_1|,|f_2|}$ to $f$,
where the amounts are given by the following table.
\begin{center}
\begin{tabular}{|c|c|}
\hline
\texttt{short\_to\_lightA}$_{8,\Delta^\star-2,\Delta^\star-2}$ &$\frac{1}{7}$\\
\texttt{short\_to\_lightA}$_{8,\Delta^\star-2,\Delta^\star-1}$ &$\frac{1}{7}$\\
\texttt{short\_to\_lightA}$_{8,\Delta^\star-2,\Delta^\star}$ &$\frac{3}{28}$\\
\texttt{short\_to\_lightA}$_{8,\Delta^\star-1,\Delta^\star-1}$ &$\frac{1}{15}$\\
\texttt{short\_to\_lightA}$_{8,\Delta^\star-1,\Delta^\star}$ &$\frac{1}{30}$\\
\hline
\end{tabular}
\end{center}
If $f_0$ is a $9$-face, $f$ is an A-triangle, and $|f_1|=\Delta^\star-3$,
then $f_0$ sends $\texttt{face\_to\_lightA}_{9,2}=\frac{3257}{30240}$ to $f$ if $|f_2|=|f_1|$, and
$\texttt{face\_to\_lightA}_{9,1}=\frac{185}{6048}$, otherwise.
Finally, if $f_0$ is a $10$-face, $f$ is an A-triangle, and $|f_1|=\Delta^\star-4$,
then $f_0$ sends $\texttt{face\_to\_lightA}_{10,2}=\frac{583}{5040}$ to $f$ if $|f_2|=|f_1|$, and
$\texttt{face\_to\_lightA}_{10,1}=\frac{583}{10080}$, otherwise.
\subsection{Two-phase rules}\label{sec-th}
The rules described in this subsection have two phases:
first, some charge is sent to an edge $e$ of $G$, and then $e$ sends the received charge to one of the faces.
This description will be more convenient for the analysis of the sent charge in our proof.
Let $e=uv$ be an edge such that both faces containing $e$ are $(\ge\!12)$-faces and $u$ is not a $3$-vertex contained in a $3$-face.
If $u$ is a $(\le\!4)$-vertex that is contained in exactly one $(\le\!4)$-face $f$, then $f$ sends $\texttt{through\_heavy}=\frac{17}{80}$ to $e$.
If $u$ is a $4$-vertex contained in two $4$-faces or a $5$-vertex contained in two $3$-faces,
then each of the two faces sends $\frac{1}{2}\texttt{through\_heavy}=\frac{17}{160}$ to $e$.
Otherwise, $u$ sends $\texttt{through\_heavy}=\frac{17}{80}$ to $e$.
If $v$ is a $3$-vertex contained in a $3$-face $vv'v''$ and the other face $f'$ containing the edge $v'v''$ is a $(\le\!11)$-face,
then $e$ sends $\texttt{through\_heavy}=\frac{17}{80}$ to the $3$-face $vv'v''$.
Let $e=uv$ be an edge such that one of the faces containing $e$ has size between $5$ and $10$ (inclusively).
Let $f$ be one of the faces containing $e$ and $u'$ the neighbor of $u$ incident with $f$ that is different from $v$.
If the edge $uu'$ is contained in a $3$-face $f'$, and
either $u$ is a $(\ge\!6)$-vertex or $u$ is a $5$-vertex contained in only one $3$-face,
then $u$ sends $\texttt{through\_heavy}=\frac{17}{80}$ to $e$, which then sends $\texttt{through\_heavy}=\frac{17}{80}$ to $f'$.
Note that if $u$ is a $6$-vertex, then this rule may apply twice, once for each face containing the edge $uv$.
Finally, let $e=uv$ be an edge contained in a face of size between $5$ and $10$ (inclusively) and in a $(\ge\!12)$-face $f$.
Let $u'$ be the neighbor of $u$ incident with $f$ that is different from $v$.
If the edge $uu'$ is contained in a $3$-face $f'$ and $u$ is a $5$-vertex contained in two $3$-faces,
then $f'$ sends $\texttt{through\_heavy}=\frac{17}{80}$ to $e$,
which then sends $\texttt{through\_heavy}=\frac{17}{80}$ to the $3$-face containing $u$ that is different from $f'$.
\subsection{Additional special rules}\label{sec-spec}
Let a $4$-face $f=v_1v_2v_3v_4$ and a $5$-face $f'$ share the edge $v_1v_2$.
If $v_3$ is a $(\ge\!4)$-vertex, $v_4$ is a $(\ge\!4)$-vertex or $v_3v_4$ is contained in a $(\ge\!6)$-face,
then $f$ sends $\texttt{four\_to\_five}=\frac{109}{840}$ to $f'$.
If $f$ and $f'=v_1v_2v_3v_4$ are two $4$-faces sharing the edge $v_1v_2$,
both $v_1$ and $v_2$ are $4$-vertices, and the other faces containing $v_1v_4$ and $v_2v_3$ are also $4$-faces,
then $f$ sends $\texttt{four1}=\frac{1}{2}$ to $f'$.
If $f=v_1v_2v_3v_4$ is a $4$-face, $v_1$ is a $4$-vertex contained in a $3$-face $f'$, and
the other faces containing $v_1v_4$ and $v_2v_3$ are $(\ge\!\Delta^\star-1)$-faces,
then $f$ sends $\texttt{four2}=\frac{1}{2}$ to $f'$.
If $v_1v_2v_3$ is a part of the boundary walk of an $\ell$-face $f$, $\ell\in\{5,11\}$,
$v_1$ is a $3$-vertex, and
both $v_1v_2$ and $v_2v_3$ are incident with C-triangles,
then the $3$-face containing $v_1v_2$ sends $\texttt{$\star$\_CC\_to\_5\_extra}=\frac{37}{240}$ to $f$ if $f$ is a $5$-face, and
$\texttt{$\star$\_CC\_to\_11\_extra}=\frac{14}{165}$, otherwise (when $f$ is an $11$-face).
If $v_1v_2v_3$ is a part of the boundary walk of a 10-face $f$,
$v_2v_3$ is contained in an A-triangle $f'$, and
the other face containing $v_1v_2$ is a $(\ge\!13)$-face,
then $f$ sends $\texttt{10\_to\_13\_A\_extra}=\frac{89}{6048}$ to $f'$.
Finally, if $v_1v_2v_3$ is a part of the boundary walk of an $11$-face $f$,
both $v_1v_2$ and $v_2v_3$ are contained in faces of size $5$ or $6$, and
$v_2$ is a $4$-vertex contained in a $3$-face $f'$,
then $f$ sends $\texttt{11\_to\_opp\_66tri\_extra}=\frac{28}{165}$ to $f'$.
\section{Analysis of final charges}
\label{sec-analysis}
In this section,
we argue that if a graph $G$ that contains none of the reducible configurations (identified in Section~\ref{sec-redu})
is assigned charge as described at the beginning of Section~\ref{sec-rules}, and
this charge is then redistributed using the rules described in the rest of Section~\ref{sec-rules},
then the final charge of each vertex, edge and face of $G$ is non-negative.
Since the charge is preserved by the rules and the initial amount of charge was negative,
this contradicts the existence of a counterexample to Theorem~\ref{main-thm}.
The final charge of edges is easy to analyze.
The edges are only involved in the rules described in Subsection~\ref{sec-th} and
each edge sends out as much as it has received.
The analysis of the final amount of charge of vertices and faces is more involved.
We performed the analysis with computer assistance.
The program is available at \url{http://www.ucw.cz/~kral/cyclic-16/} as
the file \texttt{test-discharging.lhs}.
We used Literate Haskell to prepare the program:
compiling the file with \LaTeX{} produces a detailed description of how the program works, and
compiling it with GHC produces an executable file that performs the analysis.
The former file is also available on the webpage.
In the rest of the section, the rules are referred to by the names of the constants describing the amounts of charge transferred.
For example, the \texttt{iso} rules are the rules described in the third point in Section~\ref{sec-large}.
\subsection{Final charge of vertices}
We now give details how the amount of final charge of vertices is analyzed.
Since $G$ is $3$-connected, its minimum degree is at least three.
If a $3$-vertex $v$ is contained in a $(\le\!4)$-face,
then it gets $1$ unit of charge from the incident $(\le\!4)$-face(s) and is not affected by any other rules.
If a $3$-vertex $v$ is not contained in a $(\le\!4)$-face,
then it receives charge described by \texttt{iso} and \texttt{E} rules from Sections~\ref{sec-large} and \ref{sec-med}, and
it can send out charge by the \texttt{through\_heavy} rules from Section~\ref{sec-th}.
In particular, the amounts received and sent only depend on the sizes of the faces containing $v$.
Hence, the program just enumerates all possibilities and checks that the final charge of $v$ is non-negative.
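For illustration, the following Python skeleton mimics this enumeration; it is
a hypothetical sketch, not the authors' Literate Haskell program. The two rule
functions are placeholders that must be filled with the constants of
Section~\ref{sec-rules} (here they also ignore the dependence on neighboring
face sizes), and the initial charge $\deg-4$ of a vertex is our assumption
about the normalization.
\begin{verbatim}
# Hypothetical sketch of the vertex check; NOT the authors' program.
from itertools import product

SIZES = list(range(5, 14)) + [14]       # 14 encodes "size >= 14"

def iso_E_received(size):               # placeholder for the iso/E rules
    return 0.0                          # fill in the actual constants

def through_heavy_sent(size):           # placeholder for through_heavy rules
    return 0.0                          # fill in the actual constants

def final_charge(face_sizes):           # 3-vertex: initial charge 3 - 4 = -1
    return (-1 + sum(iso_E_received(s) for s in face_sizes)
               - sum(through_heavy_sent(s) for s in face_sizes))

bad = [f for f in product(SIZES, repeat=3) if final_charge(f) < 0]
print(len(bad), "size triples violate non-negativity (placeholder rules)")
\end{verbatim}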
We proceed similarly for $4$-vertices, $5$-vertices, $6$-vertices and $7$-vertices.
Note that a $4$-vertex contained in a $(\le\!4)$-face is unaffected by any rules (its sink is the incident $(\le\!4)$-face,
so it does not receive any charge by the \texttt{iso} and \texttt{E} rules),
so such vertices need not be analyzed.
Consider a $d$-vertex $u$, $d\ge 8$, and let $f_1,\ldots,f_d$ be the faces incident with $u$ (in this cyclic order).
For $1\le i\le d$, define $c_i$ to be the following charge.
If $|f_i|=3$, then $c_i$ is
the amount of charge sent from $u$ to $f_i$ by the rules from Section~\ref{sec-heavy} plus
the amount of charge sent from $u$ to an edge $uv$ by the \texttt{through\_heavy} rules described in the last two paragraphs of Section~\ref{sec-th}.
If $|f_i|=4$, $c_i$ is the amount of charge $u$ sends to $f_i$ by the rules from Section~\ref{sec-heavy}.
Otherwise, $c_i$ is half the amount of charge sent by the \texttt{through\_heavy} rules described in the second paragraph of Section~\ref{sec-th} minus
the amount of charge received from $f_i$ by the \texttt{iso} and \texttt{E} rules.
Observe that $c_i$ depends only on the sizes of the faces $f_{i-2},\ldots,f_{i+2}$ (with indices modulo $d$) and
let $q_i=\frac{1}{2}c_{i-1}+c_i+\frac{1}{2}c_{i+1}$ (again with indices modulo $d$).
The program enumerates all possible sizes of the faces $f_{i-3},\ldots,f_{i+3}$ and checks that $q_i\le 1$.
This yields that
$$\sum_{i=1}^d c_i=\frac{1}{2}\sum_{i=1}^d q_i\le \frac{d}{2}\;\mbox{.}$$
Hence, the total amount of charge sent out by $u$ is at most $d/2\le d-4$ and
its final charge is non-negative.
\subsection{Final charge of faces}
The final amounts of charge of faces are analyzed in a different way depending on the face sizes.
Let us start by considering a $3$-face $f=v_1v_2v_3$, and
let $f_i$ be the other face containing the edge $v_iv_{i+1}$ (indices modulo three).
The \emph{shape} of a face $f$ consists of the information on the sizes of the faces $f_1$, $f_2$ and $f_3$ and
the information whether the faces cyclically adjacent to $f_i$ at $v_i$ and $v_{i+1}$ are $3$-faces, $4$-faces or $(\ge 5)$-faces.
The shape fully determines the amount of charge sent by $f$ to incident $3$-vertices and
the amount of charge received from the incident faces by the basic rules from Subsections~\ref{sec-large} and~\ref{sec-med}, and
the amount of charge received by the rules from Subsections~\ref{sec-addit} and~\ref{sec-spec} except for the rule \texttt{11\_to\_opp\_66tri\_extra}.
Let $c_0$ be the total amount of this charge.
The charge not accounted for in $c_0$ is sent by the \texttt{E} rules through $4$-vertices,
by the rules from Subsections~\ref{sec-heavy} and~\ref{sec-th} and the rule \texttt{11\_to\_opp\_66tri\_extra}.
Each of these rules can be associated with one of the vertices $v_i$, $i=1,2,3$, and
its amount only depends on the sizes of the faces containing $v_i$ (in addition to the shape of $f$).
Hence, we can determine the worst case charge $c_i$ for each vertex $v_i$ independently of the other two vertices of $f$.
We then verify that $c_0+c_1+c_2+c_3-1$ is non-negative for each possible shape of a $3$-face.
The analysis of the final charge of $4$-faces is similar to that of $3$-faces.
We now focus on faces with sizes between $5$ and $13$ (inclusive).
The \emph{inventory} of a face is the number of adjacent A-triangles, B-triangles, columns and $(\le\!4)$-faces
distinguished by the number of their vertices that are incident with another $(\le\!4)$-face.
The inventory is enough to determine the final charge of $(\ge\!12)$-faces.
The program enumerates all possible inventories of $(\ge\!12)$-faces and
checks that the final charge of all $(\ge\!12)$-faces is non-negative.
The program also enumerates all possible inventories of $\ell$-faces, $5\le\ell\le 11$, and
discards those that give a non-negative lower bound on the final charge of the considered face $f$.
For each of the non-discarded inventories,
the program enumerates all cyclic orders determining which edges of the $\ell$-face are contained in the elements of the inventory.
Some of the enumerated configurations can be excluded by the reducible configurations and get discarded (this actually finishes off
the analysis of $6$-faces and $11$-faces).
In addition, lower bounds on the sizes of the other faces adjacent to $f$ are obtained,
e.g., the configuration GEN$_2$ is used to establish that an incident face must have size at least $\Delta^\star+7-\ell$.
In the remaining cases,
the program enumerates all possible sizes of faces that affect the charge sent or received by $f$,
i.e., faces next to A-triangles, faces incident with $3$-vertices of C-triangles at $(\le\!7)$-faces, and
faces incident with the $3$-vertices of non-column $4$-faces at $5$-faces, and
it checks that the final charge of $f$ is non-negative.
It remains to analyze the final amount of charge of $(\ge\!14)$-faces.
We account for the charge sent out by an $\ell$-face $f$, $\ell\ge 14$, by assigning it to the vertices of $f$.
If $v_1v_2$ is an edge of $f$ contained in an A-triangle, B-triangle or a column,
then $\texttt{weak}_{\ell}/2=1-\frac{4}{\ell}$ is assigned to each of $v_1$ and $v_2$.
If $v_1$ is an isolated vertex, it is assigned \texttt{iso}$_{\ell}=1-\frac{4}{\ell}$.
If $v_1v_2v_3$ is a path on the boundary of $f$ and $v_1v_2$ is contained in a C-triangle or a non-column $4$-face,
then $\texttt{small}_{\ell, a(v_1)}$ is assigned to $v_1$ and $\texttt{small}_{\ell, a(v_2)}$ is assigned to $v_2$.
If the edge $v_2v_3$ is also in a triangle or a $4$-face (which cannot be an A-triangle, B-triangle or a column),
then $a(v_2)=1$ and we assign $\texttt{small}_{\ell,1}$ to $v_2$ in addition,
i.e., $v_2$ is assigned $2\texttt{small}_{\ell, 1}=1-\frac{4}{\ell}$ in total.
Otherwise, $a(v_2)=0$ and $v_2$ is assigned $\texttt{small}_{\ell, 0}=1-\frac{4}{\ell}$.
We conclude that the charge sent out by $f$ is at most $\ell\left(1-\frac{4}{\ell}\right)=\ell-4$,
i.e., the final charge of $f$ is non-negative.
\section*{Acknowledgements}
The authors would like to thank Jan van den Heuvel for pointing out a flaw in one of their arguments.
\section{Introduction}
There is great interest in building unbiased catalogues of quasars
for a range of important astrophysical questions. These include
understanding the quasar phenomenon itself and the growth and
occurrence of supermassive black holes through cosmic time,
the use of quasars as probes of intervening material and their
role in re-ionisation of both hydrogen and helium, and for constraining the UV background
levels throughout the universe
\citep{HP1994,Weymann}. Most current quasar surveys, however, rely on specific intrinsic properties of quasars such as strong ultraviolet emission \citep{Schneider10}, distinct near/mid-infrared colours \citep{Maddox12,Secrest15}, X-ray output \citep{Brusa10} or prominent radio emission \citep{Ivezic02}.
In this {\it Letter} we apply an astrometric approach to identifying quasars as apparently
stationary sources on the sky, based purely on astrometric measurements from
the \textit{Gaia} mission \citep{Heintz15}, and present the first pilot study of
such an approach.
Our goal here is to quantify the efficiency and completeness of this
selection technique.
Identifying quasars based only on their zero proper motions
has the potential to open a novel route of selecting quasars in an unbiased
way, and might even lead to the discovery of new types of quasars or other
types of extragalactic point sources.
\section{Astrometric selection of quasars}
The \textit{Gaia} data release 2 \citep[DR2;][]{GaiaDR2} catalogue consists of
more than $1.3\times 10^9$ sources down to $G \approx 21$ mag, for which the
five-parameter astrometric solution (positions, parallaxes and proper motions)
has been determined \citep{Lindegren18}. The \textit{Gaia} $G$ filter is very broad, covering
the spectral range from 400 to 1000\,nm, and hence quasars over a wide range
of redshifts should be included in the catalogue.
We extract all sources within a radius of one degree from the North Galactic
Pole (NGP) centred on $(\alpha,\delta) =
12^{\mathrm{h}}\,51^{\mathrm{m}}\,26^{\mathrm{s}}.0~+27^{\circ}\,07'\,42''.0$
from the \textit{Gaia} DR2 catalogue (see Fig.~\ref{fig:radec}). We then limit
our search to sources with $18 < G < 20$ mag, for which the associated
uncertainty is up to 1.2 mas\,yr$^{-1}$ in the respective proper motion
components. This is motivated by our pre-study \citep{Heintz15}, in which we (based on pre-launch simulations of the \textit{Gaia} data)
found that the expected contamination of apparently stationary stars is lower
than $\approx 20\%$ at the Galactic poles, but increases significantly when
observing closer to the Galactic plane or at magnitudes brighter than $G < 18$
mag. We then select all point
sources with total proper motions, $\mu =\sqrt{\mu_{\mathrm{RA}}^2 +
\mu_{\mathrm{Dec}}^2}$, consistent with zero at the $2\sigma$ confidence level
(i.e. S/N$_{\mu} = \mu / \mu_{\mathrm{err}} < 2$). Finally, we identify the
counterpart to each source in the Sloan Digital Sky Survey (SDSS) and require
that all \textit{Gaia} sources have morphologies consistent with being point
sources (\texttt{class} = 6 in the SDSS) to limit our search to quasars only
(i.e. excluding Seyferts and potentially contaminating extended galaxies). This
results in about 2\% of the sample being removed due to extended morphology.
Matching the \textit{Gaia} sample to the SDSS with a matching radius of less than 1 arcsec also allows us to investigate the properties of our sample in optical
colour-colour space.
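A minimal sketch of this selection, querying the public \textit{Gaia} archive
via \texttt{astroquery}, is given below. It is our reconstruction of the
criteria stated above, not the pipeline actually used; the uncertainty on
$\mu$ is propagated assuming uncorrelated proper-motion components.
\begin{verbatim}
# Hypothetical sketch of the astrometric selection (requires astroquery).
# NGP: (ra, dec) = (192.8583, +27.1283) deg, search radius 1 deg.
from astroquery.gaia import Gaia

query = """
SELECT source_id, ra, dec, phot_g_mean_mag,
       pmra, pmdec, pmra_error, pmdec_error
FROM gaiadr2.gaia_source
WHERE 1 = CONTAINS(POINT('ICRS', ra, dec),
                   CIRCLE('ICRS', 192.8583, 27.1283, 1.0))
  AND phot_g_mean_mag BETWEEN 18 AND 20
"""
tab = Gaia.launch_job_async(query).get_results()

mu = (tab['pmra']**2 + tab['pmdec']**2)**0.5
mu_err = ((tab['pmra']*tab['pmra_error'])**2 +
          (tab['pmdec']*tab['pmdec_error'])**2)**0.5 / mu
stationary = tab[mu / mu_err < 2]   # proper motion consistent with zero
\end{verbatim}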
\begin{figure} [!t]
\centering
\epsfig{file=RADec.pdf,width=\columnwidth}
\caption{Location on the sky of all point-like \textit{Gaia} sources with proper motions and $18 < G < 20$ mag (black dots) within one degree of the NGP. The subset of these with proper motions consistent with zero within $2\sigma$ are shown by the blue circles and those that are already spectroscopically confirmed quasars are shown by the red star symbols.}
\label{fig:radec}
\end{figure}
In total, we find that there are 2,634 spatially unresolved \textit{Gaia} sources with
proper motions and $18 < G < 20$ mag within one degree of the NGP, of which 100
sources ($\approx 4\%$) have proper motions consistent with zero (within
$2\sigma$). These are shown as the blue circles in Fig.~\ref{fig:radec}.
Cross-matching our extracted catalogue with the SDSS data release 14 quasar
sample \citep[SDSS-DR14Q,][]{Paris17} and the NASA/IPAC Extragalactic Database (NED) we find that 34 quasars are already spectroscopically confirmed
within the same magnitude limit and region on the sky, of which 32 ($\approx
95\%$) also have S/N$_{\mu} = \mu / \mu_{\mathrm{err}} < 2$.
For the remaining two the measured proper motions are $2.76\pm 1.09$ and
$1.22\pm 0.58$ mas yr$^{-1}$. We also find two spectroscopically confirmed stars, observed as part of the SDSS-APOGEE survey \citep{Alam15}. An extract of the full sample of \textit{Gaia}
sources with zero proper motions is presented in Table~\ref{tab:1}.
We also examine the additional
requirement that the sources have parallaxes consistent with zero within
$3\sigma$, but only five sources (GQs\,1255+2707, 1248+2658, 1247+2655,
1247+2706, and 1251+2804) failed this criterion, so we chose to include
them for completeness.
\input{tables/bigtable.tex}
\section{Selection efficiency and completeness} \label{sec:res}
We now investigate the location of the \textit{Gaia} sources with zero proper
motions in optical colour-colour space. By doing so, we can examine whether
these candidate quasars have, e.g., ultraviolet excess typical of unobscured,
low-$z$ quasars \citep[e.g.][]{Sandage65,Schmidt83}. About 70\% of the zero
proper motion sources have blue ($u-g < 1$) colours (see Fig.~\ref{fig:sdsscol}).
For quasars at $z \gtrsim
2.2$, the Lyman-$\alpha$ emission line will move out of the $u$-band, such that
the quasars appear redder in $u-g$ colour space. At red $g-r$ colours ($g-r >
1$) the zero proper motion sources have optical colours consistent with M or G
dwarf stars. While some of these are likely to be stellar contaminants,
removing these candidates will also exclude dust-reddened quasars
and broad absorption line (BAL) quasars from the sample, which are
found to have very red optical colours and to be systematically missing in
most existing quasar samples \citep{Fynbo13,Krogager15,Ross2015,Krawczyk2015,Krogager16}.
\begin{figure} [!t]
\centering
\epsfig{file=SDSScol.pdf,width=\columnwidth}
\caption{Optical colour-colour plots of the \textit{WISE}-detected Gaia point sources with proper motions and $18 < G < 20$ mag (black dots) within one degree of the NGP. \textit{Gaia} point sources with zero proper motions are represented by the blue dots and the spectroscopically confirmed quasars are shown by the red star symbols. Typical stellar colours are shown as grey dots.}
\label{fig:sdsscol}
\end{figure}
\begin{figure} [!t]
\centering
\epsfig{file=WISEcol.pdf,width=\columnwidth}
\caption{\textit{WISE} colour-colour plot of \textit{Gaia} point sources with zero proper motions (blue dots) and SDSS DR14 quasars (red star symbols) within one degree of the NGP. Overplotted are contours of the full SDSS-DR14 quasar sample with mid-infrared counterparts in the AllWISE catalogue.}
\label{fig:wisecol}
\end{figure}
To assess the efficiency of our selection we cross-match our sample of
\textit{Gaia} sources with zero proper motions to the all-sky mid-infrared
survey based on the \textit{WISE} satellite \citep[AllWISE;][]{Wright10}.
Mid-infrared selection of quasars is efficient at separating stars and
galaxies from quasars and is not affected by dust extinction while also being
sensitive to high-redshift quasars. Of the 100 \textit{Gaia} point sources with
zero proper motions, we identify 76 of the counterparts in the AllWISE catalogue
within 1 arcsec. This cross-match might introduce a bias
excluding quasars with weak infrared emission. Stellar contaminants will also
have weak infrared emission, however, and we find that of the 24 sources
excluded in this approach, roughly half have a significant ultraviolet excess
whereas the other half have optical colours consistent with the main-sequence
stellar track. In Fig.~\ref{fig:wisecol} we show the zero proper motion
\textit{Gaia} sources in mid-infrared colour-colour space. Overplotted are
contours of the SDSS-DR14Q sample for which \textit{WISE} photometry exists. A
simple colour criterion of $W1-W2 > 0.8$ has been found to be robust in
identifying quasars at most redshifts \citep{Stern12}. In our sample of zero
proper motion sources with \textit{WISE} photometry, 55 (70\%) have $W1-W2 >
0.8$ (of which 29 are already identified quasars). We consider the remaining 26
sources as high-likelihood quasars. All these have also been photometrically identified as quasars by \citet{Richards09}, and we list their estimated photometric redshifts in Table~\ref{tab:1} as well, marked by a ``P''. We note, however, that at $W1-W2 < 0.8$,
two spectroscopically confirmed quasars have also been observed, one being a high-$z$ quasar with optical colours consistent with known quasars in this redshift range and the other being a typical UV-excess quasar. We therefore
consider the sources with zero proper motions and $W1-W2 < 0.8$ as possible contaminants
(excluding the two already known quasars). We then infer a conservative selection efficiency
of $N_{\mathrm{QSO}}/N_{\mathrm{star}} \gtrsim 75\%$. This is a
lower limit due to the population of quasars with blue $W1-W2$
colours that also populates our sample.
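The arithmetic behind this estimate can be summarised as follows; this is our
reading of the numbers quoted above, not a script used for the paper.
\begin{verbatim}
# Back-of-the-envelope tally for the quoted ~75% selection efficiency.
n_wise       = 76   # zero-proper-motion sources with AllWISE counterparts
n_red        = 55   # of these, W1 - W2 > 0.8 (high-likelihood quasars)
n_known_blue = 2    # confirmed quasars that nevertheless have W1 - W2 < 0.8

efficiency = (n_red + n_known_blue) / n_wise
print(f"efficiency >= {efficiency:.0%}")   # -> 75%
\end{verbatim}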
\begin{figure} [!t]
\centering
\epsfig{file=WISE_pm.pdf,width=8cm}
\caption{W1-W2 colour as a function of S/N$_{\mu}$ of the \textit{WISE}-detected Gaia point sources with proper motions and $18 < G < 20$ mag (black dots) within one degree of the NGP. \textit{Gaia} point sources with zero proper motions are represented by blue dots and spectroscopically confirmed quasars are shown with red star symbols.}
\label{fig:wisepm}
\end{figure}
We present our main result in Fig.~\ref{fig:wisepm} where we show the full sample of \textit{Gaia} sources with $18 < G < 20$ mag and within one degree of the NGP for which a counterpart in the AllWISE catalogue could be identified. It is clear from the figure that the majority of point sources selected on the basis of zero proper motions occupy a distinct region in S/N$_\mu$ -- \textit{WISE} colour parameter space. This demonstrates that selecting quasars as stationary sources on the sky is definitely feasible and has a high efficiency of $\gtrsim 75\%$. The completeness is close to 100\% within the defined magnitude limit, since all cosmological objects are selected without any prior assumptions on the spectral energy distributions.
\section{Discussion and conclusions}
We have here demonstrated the possibility to select quasars as
stationary objects in the {\it Gaia} DR2 data set. When observing fields well
away from the Galactic stellar disk (here the NGP) the contamination from stars is very modest (below 25\%) when targeting the most relevant magnitudes (here $18 < G < 20$). Hence, astrometric selection offers both a complete and clean
selection of quasars.
This technique offers the possibility to take major steps ahead on some very
interesting problems relating to the quasar phenomenon. We will mention a
few examples here. First, getting a more
complete picture of dust obscuration in quasar hosts will be possible with
a sample of quasars selected from proper motion. Second, the
redshift dependence of the frequency of BAL quasars
can be determined. Third, using a purely astrometrically selected sample
of quasars we can get an independent gauge of the metallicity distribution
of intervening galaxies, in particular the damped Lyman-$\alpha$ absorbers. Fourth, the identification of quasars via zero proper motion also provides unbiased measures of number densities of various absorbers, such as C\,\textsc{iv}, Mg\,\textsc{ii}, or H\,\textsc{i}.
Such a sample will still be subject to a flux limit, but this is easier to
model than the combined effect of a flux limit and the effect of dust reddening
on the quasar selection efficiency in optical quasar surveys.
We also note that the {\it Gaia} DR2 data have been applied to find new
gravitationally lensed quasars \citep{Krone-Martins2018}.
An interesting case is the confirmed quasar SDSS J125209.59+265018.4
(GQ125209+265018 in Table~\ref{tab:1}). In Fig.~\ref{fig:sdsscol} this is
located as the object on the stellar track at $u-g = 1.5$. In
Fig.~\ref{fig:wisecol} it is one of the two sources with blue WISE colours at
$W1-W2 < 0.8$. This illustrates well the potential
of selection of quasars from astrometry in finding quasars that are otherwise
difficult to photometrically identify.
When the full \textit{Gaia} data is released, the errors on the proper motions will
decrease and it will thus be easier to disentangle objects that are truly
stationary (quasars) and stars with low proper motions. This will also make it possible to search for stationary sources at even fainter magnitudes. Also, since Gaia
astrometry exists for most of the sky, this proper motion criterion could help
reduce the contamination in other quasar surveys. Since Gaia covers the full
sky, the selection can also be carried out for a large sample of sources --
however, with the caveat that the contamination from apparently stationary
stars increases significantly closer to the Galactic plane.
We can also estimate the expected contamination of e.g. the WISE $W1-W2$ colour
selection, where it can be seen from Fig.~\ref{fig:wisepm} that 15\% of the
sources with $W1-W2 > 0.8$ have significant proper motions at more than
5$\sigma$.
\begin{acknowledgements}
KEH and PJ acknowledge support by a Project Grant (162948--051) from The Icelandic Research Fund. The Cosmic Dawn Center is funded by the DNRF. LC is supported by DFF -- 4090-00079.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
Let $K$ be a number field or a function field of transcendence degree one over its field of constants $k$. Let $A$ be an abelian variety over $K$, where we assume $\mathrm{Tr}_{K/k}(A)=0$ in the case of function fields. Then the group of $K$-rational points $A(K)$ is finitely generated by N\'eron's thesis, generalizing the Mordell-Weil Theorem. For a modern exposition of the Lang-N\'eron results, we refer to \cite{Co06}. Giving more arithmetic information about the Mordell-Weil group is a difficult problem in general, as explained for instance in \cite{Hin}. A central notion attached to the group of $K$-rational points is the regulator. Its appearance in the strong form of the Birch and Swinnerton-Dyer conjecture, or in statements \textit{\`a la Brauer-Siegel} given in \cite{HiPa} stresses the importance of it when studying the arithmetic of elliptic curves and of abelian varieties in general. We refer to \cite{Paz14, Paz16} for the proof of a Northcott property of regulators of number fields, and for a conjectural similar statement for abelian varieties. The main result obtained here concerns the regulator of elliptic curves over number fields. We use $h(.)$ for the Weil height of algebraic numbers.
\begin{theorem}\label{blichreg}
Let $E$ be an elliptic curve defined over the number field $K$. Let $d=[K:\mathbb{Q}]$ and $m$ be the Mordell-Weil rank of $E(K)$. Assume $m\geq 1$. Denote $h=\max\{1, h(j_E)\}$. Then there is a $c(d,m)>0$ depending at most on $d$ and on $m$, such that $$\frac{\Reg(E/K)}{(\Card E(K)_{tors})^2}\geq c(d,m)\, h^{\frac{m-4}{3}}\Big(\log (3h)\Big)^{\frac{2m+2}{3}},$$
and one may take $c(d,m)=\frac{c^m}{m^m d^{m+2}(\log(3d))^2}$, where $c>0$ is an absolute constant.
\end{theorem}
It implies immediately the following corollary.
\begin{corollary}\label{NorthReg}
Let $K$ be a number field and $m\geq4$ be an integer. There are at most finitely many $\overline{K}$-isomorphism classes of elliptic curves over $K$ with rank $m$ and bounded regulator.
\end{corollary}
Theorem \ref{blichreg} has other interesting consequences. For instance, combining with classical estimates on the asymptotics of the number of points of bounded height on elliptic curves (see for instance \cite{Lang} Theorem 7.4 page 126 and Theorem 7.5 page 127), one observes the following.
\begin{rem}\label{Count}
Pick an elliptic curve $E$ of rank $m\geq 1$ over the number field $K$ of degree $d$ and let $\hat{h}_E$ be the N\'eron-Tate height on $E$. Then
$$\Card \{P\in{E(K)}\,\vert\, \hat{h}_E(P)\leq T\}\sim c_E\, T^{m/2},$$
where $T\to +\infty$ is a real number, and
$$c_E=\frac{\pi^{m/2}}{\Gamma(\frac{m}{2}+1)} \frac{\Card E(K)_{tors}}{\sqrt{\Reg(E/K)}},$$
$\Gamma$ being the gamma function. One sees here that $c_E$ is essentially the inverse of the square root of the quotient studied in Theorem \ref{blichreg}, which implies the estimate
\begin{equation}
c_E\leq c'(d,m)\, h^{-\frac{m-4}{6}}\Big(\log (3h)\Big)^{-\frac{m+1}{3}}
\end{equation}
where $c'(d,m)$ is a positive real number depending at most on $d$ and $m$.
\end{rem}
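For concreteness, the leading constant $c_E$ of Remark \ref{Count} can be
evaluated numerically as follows; this is a minimal sketch in which the rank,
the regulator and the torsion order must be supplied by the user.
\begin{verbatim}
import math

def leading_constant(m, regulator, n_torsion):
    """c_E in  #{P : h_E(P) <= T} ~ c_E * T^(m/2)."""
    return (math.pi**(m/2) / math.gamma(m/2 + 1)
            * n_torsion / math.sqrt(regulator))

# e.g. a rank-1 curve with regulator 0.05 and trivial torsion:
print(leading_constant(1, 0.05, 1))
\end{verbatim}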
The proof of Theorem \ref{blichreg} relies on a general approach to bound the regulator from below using counting arguments of points of small height and explained in Lemmas \ref{success} and \ref{successbis}. Direct application of counting results of David \cite{Da97} would then give $16$ instead of $4$ in Corollary \ref{NorthReg}. To be able to go down to $4$, we combine the counting results of David \cite{Da97} and of Petsche \cite{Pet06}, the Lemma \ref{success2} and provide a non-trivial optimization of parameters.
We also obtain similar statements in the function field case, with the natural restrictions inherent to this other context (non-isotriviality, bounded inseparability degree, limited families).
The article starts by recalling the relevant definitions in Section \ref{section def}. In Section \ref{general} we give the general approach to obtain lower bounds on the regulator of an abelian variety in any dimension. In Section \ref{ellNF} we focus on elliptic curves defined over a number field and we prove Theorem \ref{blichreg}. In Sections \ref{ellFF0} and \ref{ellFFp} we give the counterparts respectively for elliptic curves over function fields in characteristic zero and characteristic $p>0$. In Section \ref{questions} we propose new questions about the growth of regulators of elliptic curves and abelian varieties.
Throughout the text, we use the logarithm $\log$ normalized by $\log(e)=1$.
\section{Definitions}\label{section def}
\subsection{Number fields} Let $K$ be a number field of degree $d$. Let $M_K$ be a complete set of inequivalent absolute values $\vert.\vert_v$ on $K$, given local degrees $n_v$ and normalized by $\vert p\vert_v=1/p$ for any $v\vert p$, for the underlying rational prime $p$. Hence the product formula holds: $$\forall x\in{K-\{0\}},\quad \sum_{v\in{M_K}}n_v\log \vert x\vert_v =0.$$
The height on $K$ is defined by the formula $$h(x)=\frac{1}{d}\sum_{v\in{M_{K}}}n_v \log\max\{1, \vert x\vert_v\}.$$
Let $E$ be an elliptic curve defined over the number field $K$, with neutral element $O$. We define the N\'eron-Tate height on the group of rational points $E(K)$ with respect to $(O)$ by $$\widehat{h}_E(P)=\frac{1}{2}\lim_{n\to\infty}\frac{1}{n^2}h(x([n]P)).$$
Let $I\subset\mathcal{O}_K$ be a non-zero ideal of the ring of integers of $K$. Then one also defines the height of $I$ by $h(I)=\frac{1}{d}\log N_{K/\mathbb{Q}}(I)$. For any elliptic curve with conductor $F(E/K)$ and minimal discriminant $\Delta(E/K)$, we can now define the Szpiro quotient by $$\sigma_{E/K}=\frac{h(\Delta(E/K))}{h(F(E/K))},$$ with the convention that in the case of good reduction everywhere, we fix $\sigma_{E/K}=1$.
\subsection{Function fields} Let $K=k(\mathcal{C})$ be the function field of a smooth projective and geometrically connected curve $\mathcal{C}$ defined over $k$ and of genus $g$. We denote by $M_K$ a complete set of inequivalent valuations $v(.)$ and given with local degrees $n_v$ (from each point $a\in{\mathcal{C}}$ one has a valuation given for $x\in{K}$ by $a(x)=\mathrm{ord}_a(x)$, and $n_v=[k(a):k]$), which gives a normalization such that for any non-zero element $x\in{K}$, the product formula holds: $$ \sum_{v\in{M_K}}n_v v(x)=0.$$
The height on $K$ is then defined as $h(0)=0$ and for any $x\in{K-\{0\}}$, we pose $$h(x)=\sum_{v\in{M_K}}n_v \max\{0, -v(x)\}.$$
Let $E$ be an elliptic curve defined over the function field $K$. We define the N\'eron-Tate height on the group of rational points $E(K)$ with respect to $(O)$ by $$\widehat{h}_E(P)=\frac{1}{2}\lim_{n\to\infty}\frac{1}{n^2}h(x([n]P)).$$
A divisor $I$ on $K$ is a formal sum $\sum_{v\in{M_K}}a_v \cdot v$ where $a_v\in{\mathbb{Z}}$ is zero for all but finitely many places $v$. The divisor is effective if for all $v$, $a_v\geq0$. In that case we pose $$h(I)=\sum_{v\in{M_K}}n_v a_v =\deg(I).$$
\subsection{Regulators of abelian varieties}
Let $K$ be a number field or a function field of transcendence degree one over its field of constants $k$.
Let $A/K$ be an abelian variety over the field $K$ polarized by an ample and symmetric line bundle $L$. We assume that $A$ has trace zero if $K$ is a function field. Let $m_K$ be the Mordell-Weil rank of $A(K)$. Let $\widehat{h}_{A,L}$ be the N\'eron-Tate height associated with the pair $(A,L)$. Let $<.,.>_L$ be the associated bilinear form, given by $$<P,Q>_L=\frac{1}{2}\Big(\widehat{h}_{A,L}(P+Q)-\widehat{h}_{A,L}(P)-\widehat{h}_{A,L}(Q)\Big)$$ for any $P,Q\in{A(K)}$. This pairing on $A\times A$ depends on the choice of $L$.
\begin{definition}\label{reg abvar}
Let $P_1, ..., P_{m_K}$ be a basis of the lattice $A(K)/A(K)_{\mathrm{tors}}$, where $A(K)$ is the Mordell-Weil group. The regulator of $A/K$ is defined by $$\mathrm{Reg}_{L}(A/K)= \det(<P_i,P_j>_{L\; 1\leq i,j\leq m_K}),$$ where by convention an empty determinant is equal to $1$.
\end{definition}
As for the height, the regulator from Definition \ref{reg abvar} depends on the choice of an ample and symmetric line bundle $L$ on $A$.
One usually defines the regulator in a more intrinsic way, which does not depend on the choice of $L$. Start with the natural pairing on the product of $A$ with its dual abelian variety $\check{A}$ given by the Poincar\'e line bundle $\mathcal{P}$: for any $P\in{A(K)}$ and any $Q\in{\check{A}(K)}$, define $<P,Q>=\widehat{h}_{A\times \check{A},\mathcal{P}}(P,Q)$. Next choose a basis $P_1, ..., P_{m_K}$ of $A(K)$ modulo torsion and a basis $Q_1, ..., Q_{m_K}$ of $\check{A}(K)$ modulo torsion. Then define $$\Reg(A/K)= \vert\det(<P_i, Q_j>_{1\leq i,j\leq m_K})\vert.$$
Let us recall how these two regulators are linked (see for instance \cite{Hin}). Let $\Phi_L:A\to \check{A}$ be the isogeny given by $\Phi_L(P)=t_{P}^{*}L\otimes L^{-1}$. By Proposition 9.3.6 page 291 of \cite{BG}, one has the formula $$<P,Q>_L=\frac{1}{2}<P,\Phi_L(Q)>.$$
Hence if $u$ is the index of the subgroup $\Phi_L(\mathbb{Z}P_1\oplus ... \oplus\mathbb{Z}P_{m_K})$ in $\check{A}(K)$ modulo torsion, one has
\begin{equation}\label{regulators}
\mathrm{Reg}_L(A/K)=u2^{-m_K}\mathrm{Reg}(A/K).
\end{equation}
Let us remark that when $L$ induces a principal polarization, the index $u$ is equal to $1$. Hence in the case of elliptic curves, we have $L=(O)$ and $$\mathrm{Reg}_{L}(E/K)=2^{-m_K}\mathrm{Reg}(E/K).$$
\section{Lower bounds for regulators of abelian varieties}\label{general}
\subsection{Minkowski and regulators}
A natural idea is to apply Minkowski's successive minima inequality to the Mordell-Weil lattice, we give it here as a lemma.
\begin{lemma}\label{success}
Let $K$ be a number field or a function field of transcendence degree one over its field of constants $k$. Let $A$ be an abelian variety over the field $K$, let $L$ be an ample and symmetric line bundle on $A$. We assume that $A$ has trace zero if $K$ is a function field. Let $m$ be the Mordell-Weil rank of $A(K)$. Assume $m\geq 1$. Let $\Lambda=A(K)/A(K)_{\mathrm{tors}}$. Then for any $i\in\{1, ..., m\}$, let us denote the Minkowski $i$th-minimum of $(\Lambda, \sqrt{\widehat{h}_{A,L}})$ by $\lambda_i$. We have
\begin{equation}\label{Minkowski2}
\lambda_1 \cdots \lambda_{m}\leq m^{{m/2}} (\mathrm{Reg}_{L}(A/K))^{1/2}.
\end{equation}
\end{lemma}
\begin{proof}
Minkowski's successive minima inequality reads
\[
\lambda_1 \cdots \lambda_{m}\leq \frac{2^m \mathrm{vol}((\Lambda\otimes\mathbb{R})/\Lambda)}{\mathrm{vol}(B(0,1))} .
\]
As the volume of a Euclidean unit ball in $\Lambda\otimes\mathbb{R}\simeq\mathbb{R}^m$ is $\frac{\pi^{m/2}}{\Gamma(\frac{m}{2}+1)}$, and as $\mathrm{vol}((\Lambda\otimes\mathbb{R})/\Lambda)=(\mathrm{Reg}_{L}(A/K))^{1/2}$ we obtain
\[
\lambda_1 \cdots \lambda_{m}\leq \frac{2^m \Gamma(\frac{m}{2}+1)}{\pi^{m/2}} (\mathrm{Reg}_{L}(A/K))^{1/2}.
\]
\noindent Now for any $m\geq1$, one gets easily $ 2^m \Gamma(\frac{m}{2}+1)\leq (\pi m)^{\frac{ m}{2}}$, it leads to the result.
\end{proof}
\begin{lemma}\label{successbis}
Let $A$ be an abelian variety over a number field $K$, let $L$ be an ample and symmetric line bundle on $A$. Let $m$ be the Mordell-Weil rank of $A(K)$. Let $\Lambda=A(K)/A(K)_{tors}$. Assume $m\geq 1$. Assume one has $$\Card\{P\in{\Lambda} \; \vert \; \widehat{h}_{A,L}(P) \leq H\} \leq C,$$ for $H, C$ two fixed positive real numbers. Then for any $1\leq i \leq m$, the $i$-th successive minimum of $\Lambda$ satisfies
\begin{equation}\label{lambis}
\lambda_{i}^2 \geq \frac{H}{i^2 C^{2/i}}.
\end{equation}
\end{lemma}
\begin{proof}
Let $P_1,\ldots, P_m$ be linearly independent points of $\Lambda$ satisfying $\lambda_{i}^2=\widehat{h}_{A,L}(P_i)$ for any $1\leq i\leq m$. Consider the set of all sums $a_1P_1+\cdots+a_iP_i$ where for any index $ j\in\{1, \ldots, i\}$ the integral coefficient $a_j$ satisfies $0\leq a_j\leq C^{1/i}$. It contains more than $C$ points, hence (by the assumption) at least one of them has height greater than $H$. This point can be written as $P=a_1P_1+\cdots+a_iP_i$, and $\widehat{h}_{A,L}(P)\leq i^2 \max\{\widehat{h}_{A,L}(a_j P_j)\vert 1\leq j\leq i\}$ by the triangular inequality for the norm $\sqrt{\widehat{h}_{A,L}(.)}$, hence $$H\leq \widehat{h}_{A,L}(P) \leq i^2 C^{2/i} \lambda_i^2.$$
\end{proof}
Lemma \ref{success} and Lemma \ref{successbis} combine to give the following.
\begin{corollary}\label{reg2}
Let $A$ be an abelian variety over a number field $K$, let $L$ be an ample and symmetric line bundle on $A$. Let $m$ be the Mordell-Weil rank of $A(K)$. Assume $m\geq 1$. Assume one has $$\Card\{P\in{A(K)/A(K)_{tors}} \; \vert \; \widehat{h}_{A,L}(P) \leq H\} \leq C,$$ for $H, C$ two fixed positive real numbers. Then
\begin{equation}\label{Minkow}
\mathrm{Reg}_{L}(A/K) \geq \frac{H^m}{m^m (m!)^2}\prod_{i=1}^{m}\frac{1}{C^{2/i}}.
\end{equation}
\end{corollary}
\subsection{Regulators and van der Corput}
We show how to improve on Corollary \ref{reg2} by invoking van der Corput.
\begin{lemma}\label{success2}
Let $A$ be an abelian variety over a number field $K$, let $L$ be an ample and symmetric line bundle on $A$. Let $m$ be the Mordell-Weil rank of $A(K)$. Assume $m\geq 1$. Assume one has $$\Card\{P\in{A(K)/A(K)_{tors}} \; \vert \; \widehat{h}_{A,L}(P) \leq H\} \leq C,$$ for $H, C$ two fixed positive real numbers. Then
\begin{equation}\label{Blich}
\mathrm{Reg}_{L}(A/K) \geq \frac{H^m}{m^{m} C^2}.
\end{equation}
\end{lemma}
\begin{proof}
Consider $\mathcal{K}=\{P\in{A(K)\otimes \mathbb{R}} \; \vert \; \widehat{h}_{A,L}(P) \leq H\}$. It is a compact symmetric convex set (hence with finite volume) in $A(K)\otimes \mathbb{R}\simeq \mathbb{R}^m$. We apply van der Corput's result\footnote{In the spirit of Blichfeldt's principle.}, see for instance Theorem 7.1 in \cite{GL}, to the lattice $\Lambda=A(K)/A(K)_{tors}$ to obtain $$\Card(\mathcal{K}\cap \Lambda)\geq \frac{\Vol(\mathcal{K})}{2^m\Covol(\Lambda)}= \frac{H^{m/2}}{2^m (\mathrm{Reg}_L(A/K))^{1/2}}\frac{\pi^{m/2}}{\Gamma(\frac{m}{2}+1)}.$$ This gives directly $\frac{H^{m/2}}{m^{m/2}(\mathrm{Reg}_L(A/K))^{1/2}}\leq C$, hence the claim.
\end{proof}
\begin{rem}\label{success3}
Lemma \ref{success2} implies directly (with the same notation) that if
$$\Card\{P\in{A(K)} \; \vert \; \widehat{h}_{A,L}(P) \leq H\} \leq C,$$ then one has
\begin{equation}\label{Blich2}
\mathrm{Reg}_{L}(A/K) \geq \frac{H^m}{m^{m} C^2} (\Card A(K)_{tors})^2.
\end{equation}
\end{rem}
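To see how much Lemma \ref{success2} gains over Corollary \ref{reg2}, the two
lower bounds \eqref{Minkow} and \eqref{Blich} can be compared numerically; the
values of $H$, $C$ and $m$ below are purely illustrative.
\begin{verbatim}
import math

def bound_minkowski(H, C, m):          # Corollary: inequality (Minkow)
    prod = math.prod(C**(2/i) for i in range(1, m + 1))
    return H**m / (m**m * math.factorial(m)**2 * prod)

def bound_van_der_corput(H, C, m):     # Lemma: inequality (Blich)
    return H**m / (m**m * C**2)

H, C = 1.0, 10.0
for m in (2, 4, 8):
    print(m, bound_minkowski(H, C, m), bound_van_der_corput(H, C, m))
\end{verbatim}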
\section{Regulators of elliptic curves over number fields}\label{ellNF}
To warm up, we will first combine Lemma \ref{success} with various height lower bounds.
\subsection{Szpiro and finiteness}
Let us first extract Corollary 4.2 page 430 from \cite{HiSi3}.
\begin{theorem}(Hindry-Silverman)\label{nf1}
Let $K$ be a number field of degree $d$. Let $E/K$ be an elliptic curve and let $P\in{E(K)}$ be a non-torsion point. Then $$\hat{h}_E(P)\geq (20\sigma_{E/K})^{-8d} 10^{-4\sigma_{E/K}} h(E),$$ where $h(E)=\frac{1}{12}\max\{h(\Delta(E/K)), h(j_E)\}$, with $\Delta(E/K)$ the minimal discriminant of $E/K$ and $j_E$ its $j$-invariant.
\end{theorem}
Note that \cite{Da97} or Theorem 2 page 259 of \cite{Pet06} give an inverse polynomial dependence in the Szpiro quotient and in the degree of the number field.
We get the following corollary.
\begin{corollary}\label{Kclass}
Let $K$ be a number field of degree $d$. There are at most finitely many $K$-isomorphism classes of elliptic curves over $K$ with non-trivial bounded rank, bounded regulator and bounded Szpiro quotient.
\end{corollary}
\begin{proof}
Combine Theorem \ref{nf1} and (\ref{Minkowski2}) to get the inequality
\begin{equation}\label{RegSzp}
\mathrm{Reg}_L(E/K)\geq \frac{1}{m^m} \Big( (20\sigma_{E/K})^{-8d} 10^{-4\sigma_{E/K}} h(E) \Big)^m.
\end{equation}
As soon as the rank $m$ is positive and bounded, and as long as $\sigma_{E/K}$ is bounded, a bounded regulator implies a bounded height. Finiteness of the set of $K$-isomorphism classes follows, see Corollary 2.5 of \cite{CoSi} page 259 for instance. Note that our height is equivalent to Faltings's height $h_F(E)$ used by Silverman in the aforementioned corollary, one easily gets $h_F(E)\ll h(E)$ from the explicit formula for $h_F(E)$ given in the same reference.
\end{proof}
\subsection{Big ranks and finiteness}
Another approach is based on Corollaire 1.6 page 111 of \cite{Da97}, which we recall here.
\begin{proposition} (David)\label{DavidJNT}
Let $K$ be a number field of degree $d$. Let $E$ be an elliptic curve defined over $K$ of rank $m\geq 1$. Let $\Lambda=E(K)/E(K)_{tors}$. Let $i$ be an integer such that $1\leq i\leq m$ and $\lambda_i$ the $i$-th successive minimum. Let $h=\max\{1, h(j_E)\}$. Then $$\lambda_i^2\geq c_{16}(i) h^{(i^2-4i-4)/(4i^2+4i)}d^{-(i+2)/i}\Big(1+\frac{\log(d)}{h}\Big)^{-2/i},$$ and in particular $$\lambda_5^2\geq c_{17} h^{1/120}d^{-7/5}\Big(1+\frac{\log(d)}{h}\Big)^{-2/5}.$$
\end{proposition}
We deduce the following unconditional statement.
\begin{corollary}
Let $K$ be a number field. There are at most finitely many $\overline{K}$-isomorphism classes of elliptic curves over $K$ of fixed rank $m\geq 16$ with bounded regulator.
\end{corollary}
\begin{proof}
By combining Proposition \ref{DavidJNT} with (\ref{Minkowski2}) one obtains
$$\mathrm{Reg}_{L}(E/K)\geq \frac{1}{m^{m}}\prod_{i=1}^m c_{16}(i) h^{(i^2-4i-4)/(4i^2+4i)}d^{-(i+2)/i}\Big(1+\frac{\log(d)}{h}\Big)^{-2/i}.$$
A direct computation shows that $$\sum_{i=1}^{16}\frac{i^2-4i-4}{4i^2+4i}\geq 0.009\quad\quad\quad \mathrm{and}\quad\quad\quad \sum_{i=1}^{15}\frac{i^2-4i-4}{4i^2+4i}\leq -0.16.$$ Hence as soon as $m\geq 16$, if $m$ and $d$ are fixed, a bounded regulator implies a bounded $j$-invariant, which implies the finiteness of the set of $\overline{K}$-isomorphism classes for a fixed base field $K$.
\end{proof}
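The two numerical bounds on the partial sums used in this proof are easy to
verify in exact arithmetic, for instance:
\begin{verbatim}
from fractions import Fraction

def term(i):
    return Fraction(i*i - 4*i - 4, 4*i*i + 4*i)

s15 = sum(term(i) for i in range(1, 16))
s16 = sum(term(i) for i in range(1, 17))
print(float(s15), float(s16))   # about -0.163 <= -0.16 and 0.0094 >= 0.009
\end{verbatim}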
We will now show how to improve on this corollary using counting results from \cite{Da97, Pet06} and our Lemma \ref{success2}. We start with a small technical lemma that will help later on.
\begin{lemma}\label{chebych}
Let $K$ be a number field of degree $d$ and let $I$ be a non-zero integral ideal in $\mathcal{O}_K$. Let $S$ be the number of prime ideals dividing $I$. There is an absolute constant $c_0>0$ such that $$\log N(I)\geq c_0 S \log(\frac{S}{d}+2),$$ where $N(I)$ stands for the norm of the ideal $I$.
\end{lemma}
\begin{proof}
Above each rational prime number $p$, there are at most $d$ prime ideals in $\mathcal{O}_K$. Let us write $S=dq+r$ the Euclidean division of $S$ by $d$, where $0\leq r<d$. Then one has $$\log N(I)\geq d \log p_1+\ldots+d\log p_q+r \log p_{q+1},$$ where $p_i$ denotes the $i$-th rational prime number. An easy bound from below is $\log p_i \geq \log (i+1)$ for any integer $i\geq1$, hence $$\log N(I) \geq d\log (q+1)! +r \log (q+2)\geq c_0 dq\log(q+3)+c_0r\log(q+3)\geq c_0 S \log(\frac{S}{d}+2).$$
\end{proof}
We now give the proof of Theorem \ref{blichreg}.
\begin{proof} (of Theorem \ref{blichreg} and Corollary \ref{NorthReg})
By Theorem 1.2 of \cite{Da97} and Proposition 8 of \cite{Pet06}, one has two independent upper bounds, with $c_1, c_2, c_3, c_4$ positive absolute constants:
\begin{itemize}
\item[(a)] $
\Card\{P\in{E(K)\,\vert\, \hat{h}_E(P) \leq c_1\frac{h_v(E)}{d} }\}\leq c_2 \frac{d h}{h_v(E)}(1+\frac{\log d }{h}),$ where one defines $h_v(E)=\max\{n_v \log\vert j_E\vert_v, \rho_v\}$, with $\rho_v=0$ if $v$ is a finite place of multiplicative reduction and $\rho_v=\sqrt{3}/2$ if $v$ is archimedean.
\\
\item[(b)] $
\Card\Big\{P\in{E(K)\,\vert\, \hat{h}_E(P) \leq c_3\frac{h(\Delta(E/K))}{\sigma^2 }}\Big\}\leq c_4 d\sigma^2 \log(c_4d\sigma^2)$, where $\sigma=\sigma_{E/K}$ is the Szpiro quotient.
\\
\end{itemize}
First case: if one considers an elliptic curve $E/K$ with $h(j_E)<1$, then $h=1$, and applying $(a)$ with $v$ archimedean together with Lemma \ref{success2} (and Remark \ref{success3}) gives:
\begin{equation}
\frac{\Reg(E/K)}{(\Card E(K)_{tors})^2}\geq \frac{\RegL(E/K)}{(\Card E(K)_{tors})^2}\geq \frac{c_1^m h_v(E)^{m+2}}{c_2^2d^{m+2}} \frac{1}{m^{m} (1+\log d)^{2}}\geq \frac{c_1^m \sqrt{3}^{m+2} }{c_2^2 (2d)^{m+2}} \frac{1}{m^{m} (1+\log d)^{2}},
\end{equation}
which is the claimed lower bound in this case.
Thus we may assume from now on that $h(j_E)\geq 1$. Let us denote $K'=K(E[12])$. The elliptic curve $E$ is semi-stable over $K'$ and the degree $d'=[K':\mathbb{Q}]\leq [K:\mathbb{Q}] \Card\mathrm{GL}_2(\mathbb{Z}/12\mathbb{Z}) = 4608 d$. Note that for any place $v$ of $K'$ one has $$\Card\Big\{P\in{E(K)} \; \vert \; \widehat{h}_{E}(P) \leq c_1\frac{h_v(E)}{d'} \Big\} \leq \Card\Big\{P\in{E(K')} \; \vert \; \widehat{h}_{E}(P) \leq c_1\frac{h_v(E)}{d'} \Big\}.$$ Similarly, with $\sigma=\sigma_{E/K'}$ and $\Delta=\Delta(E/K')$, one has $$\Card\Big\{P\in{E(K)} \; \vert \; \widehat{h}_{E}(P) \leq c_3\frac{h(\Delta)}{\sigma^2} \Big\} \leq \Card\Big\{P\in{E(K')} \; \vert \; \widehat{h}_{E}(P) \leq c_3 \frac{h(\Delta)}{\sigma^{2}} \Big\}.$$ Both $h$ and $\widehat{h}_{E}(P)$ are invariant by field extension, so $(a), (b)$ and Lemma \ref{success2} (or Remark \ref{success3}, with $H=c_1h_v(E)/d'$ and $C=c_2d'h(1+(\log d')/h)/h_v(E)$ for the first inequality and $H=c_3 h(\Delta)/\sigma^2$ and $C=c_4 d'\sigma^2 \log(c_4d'\sigma^2)$ for the second inequality) imply that for any elliptic curve $E/K$ with $h(j_E)\geq 1$, and keeping $m$ the rank of $E$ over $K$, for any place $v$ of $K'$ of multiplicative reduction or archimedean,
\begin{equation}
\frac{\Reg(E/K)}{(\Card E(K)_{tors})^2}\geq \max\left\{\frac{c_1^m h_v(E)^{m+2}}{c_2^2 d'^{m+2} h^2 (1+\frac{\log d'}{h})^{2}}, \frac{c_3^m h(\Delta)^m}{\sigma^{2m} (c_4 d'\sigma^2 \log(c_4d'\sigma^2))^2} \right\} \frac{1}{m^{m}}.
\end{equation}
\noindent This implies
\begin{equation}
\frac{\Reg(E/K)}{(\Card E(K)_{tors})^2}\geq \max\left\{\frac{c_1^m}{c_2^2 d'^{m+2}}\frac{ h_v(E)^{m+2}}{ h^2}, \frac{c_3^m}{ (c_4 d')^2} \frac{ h(\Delta)^{m}}{\sigma^{2m+4} (\log(c_4 d' \sigma^2))^2} \right\} \frac{1}{m^{m} (1+\frac{\log d'}{h})^{2}}.
\end{equation}
Now using $d'\leq 4608 d$ and $\log(c_4d'\sigma^2)\ll\log(3h(\Delta))$ with an implied constant depending at most on $d$, and letting $c(d,m)>0$ stand for a quantity depending at most on $d$ and $m$ that may slightly change along the way, we get
\begin{equation}\label{pivot}
\frac{\Reg(E/K)}{(\Card E(K)_{tors})^2}\geq \max\left\{\frac{ h_v(E)^{m+2}}{ h^2}, \frac{ h(\Delta)^{m}}{\sigma^{2m+4} (\log(3h(\Delta)))^2} \right\} c(d,m).
\end{equation}
Second case: assume $h(\Delta)\leq \frac{h}{2}$. As one has $$h=h(j_E)=h(\Delta)+\frac{1}{d'}\sum_{\substack{v\in{M_{K'}}\\v\vert\infty}} n_v\log\max\{\vert j_E\vert_v, 1\},$$ there exists an archimedean place $v$ such that $\log\max\{\vert j_E\vert_v, 1\}\geq \frac{h}{2}$, hence one deduces $h_v(E)\geq \frac{h}{2}$. So (\ref{pivot}) implies
\begin{equation}
\frac{\Reg(E/K)}{(\Card E(K)_{tors})^2}\geq c(d,m) \frac{ (h/2)^{m+2}}{ h^2} \geq c(d,m) h^m,
\end{equation}
which is a better lower bound than the one claimed.
Third case: assume $h(\Delta)\geq \frac{h}{2}$. Let $S'$ stand for the number of places in $K'$ where $E$ has multiplicative reduction. We have in particular $S'\neq0$ and by considering $v$ the finite place of $K'$ with maximal $h_v(E)$, one gets by the semi-stability assumption
\begin{equation}\label{hv}
h_v(E)\geq \frac{h(\Delta) d'}{S'}
\end{equation}
and by Lemma \ref{chebych} there is an absolute $c_0>0$ such that
\begin{equation}
\frac{h(\Delta)d'}{\sigma}=\log N(F(E/K'))\geq c_0 S'\log(\frac{S'}{d'}+2),
\end{equation}
hence
\begin{equation}\label{sig}
\frac{h(\Delta)d'}{c_0 S'\log(\frac{S'}{d'}+2)}\geq \sigma,
\end{equation}
inject (\ref{hv}) and (\ref{sig}) in (\ref{pivot}), and get:
\begin{equation}\label{piv}
\frac{\Reg(E/K)}{(\Card E(K)_{tors})^2}\geq c(d,m) \max\left\{\frac{ h(\Delta)^{m}}{(S')^{m+2} }, \frac{ h(\Delta)^{m}}{\Big(\frac{h(\Delta)}{S'\log(\frac{S'}{d'}+2)}\Big)^{2m+4} \Big(\log(3h(\Delta))\Big)^2} \right\}.
\end{equation}
Subcase (i): assume $S'\leq d' h(\Delta)^{\frac{2}{3}} \big(\log(3h(\Delta))\big)^{-\frac{2m+2}{3m+6}}$. Then the first part of (\ref{piv}) implies
\begin{equation}
\frac{\Reg(E/K)}{(\Card E(K)_{tors})^2}\geq c(d,m) h(\Delta)^{m-\frac{2}{3}(m+2)}\Big(\log(3h(\Delta))\Big)^{\frac{(2m+2)(m+2)}{3m+6}},
\end{equation}
which gives directly
\begin{equation}
\frac{\Reg(E/K)}{(\Card E(K)_{tors})^2}\geq c(d,m) h^{\frac{m-4}{3}}\Big(\log(3h)\Big)^{\frac{2m+2}{3}},
\end{equation}
as claimed.
Subcase (ii): assume $S'> d' h(\Delta)^{\frac{2}{3}} \big(\log(3h(\Delta))\big)^{-\frac{2m+2}{3m+6}}$. Then the second part of (\ref{piv}) implies
\begin{equation}
\frac{\Reg(E/K)}{(\Card E(K)_{tors})^2}\geq c(d,m) h(\Delta)^{m-\frac{1}{3}(2m+4)}\Big(\log(3h(\Delta))\Big)^{\frac{2m+2}{3}},
\end{equation}
which ends the analysis of this last case with
\begin{equation}
\frac{\Reg(E/K)}{(\Card E(K)_{tors})^2}\geq c(d,m) h^{\frac{m-4}{3}}\Big(\log(3h)\Big)^{\frac{2m+2}{3}}.
\end{equation}
A careful study of the different cases gives the claim for $c(d,m)$ and this concludes the whole proof of Theorem \ref{blichreg}.\\
To obtain Corollary \ref{NorthReg}, note that a bounded regulator and a fixed rank $m\geq4$ implies a bounded $j$-invariant, hence the claimed finiteness.
\end{proof}
\begin{rem}
There exist infinitely many elliptic curves over $\mathbb{Q}$ with rank at least $19$; we refer to the work of Elkies \cite{El08}. They are obtained by specialization of an elliptic curve with rank at least $19$ over $\mathbb{Q}(T)$.
\end{rem}
\begin{rem}
It is easier to produce elliptic curves of rank at least $5$ over $\mathbb{Q}(T)$, and we thank Jean-Fran\c{c}ois Mestre for pointing this out. Pick a generic long Weierstrass equation $(W): y^2+a_1xy+a_3y=x^3+a_2x^2+a_4x+a_6.$ We have five parameters $(a_1,a_3,a_2,a_4,a_6)$. Choose the points $(0,0), (1,1), (2,3), (3,-1), (T,2)$. There exists a unique non-constant set of parameters $(a_1,a_3,a_2,a_4,a_6)\in{\mathbb{Q}(T)^5}$ such that $(W)$ passes through these five points. These points are generically independent on the curve defined by $(W)$. See also \cite{Mes} for explicit examples with rank bigger than 11 over $\mathbb{Q}(T)$. Then use specialization to obtain families over $\mathbb{Q}$ with big ranks.
\end{rem}
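Since a long Weierstrass equation is linear in its coefficients, the
interpolation through the five chosen points amounts to solving a linear
system over $\mathbb{Q}(T)$, which can be done symbolically; the following is
a sketch using \texttt{sympy}.
\begin{verbatim}
import sympy as sp

T = sp.symbols('T')
a1, a3, a2, a4, a6 = sp.symbols('a1 a3 a2 a4 a6')
points = [(0, 0), (1, 1), (2, 3), (3, -1), (T, 2)]

eqs = [sp.Eq(y**2 + a1*x*y + a3*y, x**3 + a2*x**2 + a4*x + a6)
       for (x, y) in points]
sol = sp.solve(eqs, [a1, a3, a2, a4, a6], dict=True)[0]
print(sol)   # generically non-constant rational functions of T
\end{verbatim}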
\section{Regulators of elliptic curves over function fields of characteristic zero}\label{ellFF0}
We will combine lemma \ref{success} with the following theorem.
\begin{theorem}\label{ff0}(Hindry-Silverman \cite{HiSi3}, Theorem 6.1 page 436)
Let $K$ be a function field of characteristic zero and genus $g$. Let $E/K$ be an elliptic curve of discriminant $\Delta(E/K)$ and let $P\in{E(K)}$ be a non-torsion point. Then $$\hat{h}_E(P)\geq 10^{-15.5-23g} h(E),$$ where $h(E)=\frac{1}{12}\deg \Delta(E/K)$. The same inequality is valid for elliptic curves over function fields in characteristic $p>0$ if the $j$-map has inseparable degree $1$.
\end{theorem}
We can now deduce the following statement by looking at Moret-Bailly's results on families of abelian varieties with bounded height. We first recall the definition of a \textit{limited family}\footnote{\textit{Confer} D\'efinition 1.1 page 212 of \cite{MB85} for limited families in more general settings.}.
\begin{definition}
Let ${\mathcal A}_1$ be the coarse moduli space of elliptic curves over $K$ (\textit{i.e.} the affine line of $j$-invariants). Let ${\mathcal E}$ be a set of elliptic curves defined over $K$. The set ${\mathcal E}$ is limited if there exists a variety $T$ over the field of constants $k\subset K$, an open set $U\subset T_K$ surjective on $T$ and a $k$-morphism $f:U\to {\mathcal A}_1$ such that for any $E\in{\mathcal E}$, there is an $x\in T(k)$ with $f(x_K)$ being the image of $E$ in ${\mathcal A}_1(K)$.
\end{definition}
\begin{corollary}
Let $K$ be a function field of characteristic zero and genus $g$. The set of elliptic curves of trace zero over $K$, with non-trivial bounded rank and bounded regulator is limited.
\end{corollary}
\begin{proof}
Apply Theorem \ref{ff0} and (\ref{Minkowski2}) to get the inequality $$\mathrm{Reg}_L(E/K)\geq \frac{1}{m^m} \Big(10^{-15.5-23g} h(E)\Big)^m.$$ As soon as one fixes $m>0$, a bounded regulator implies a bounded height. Then apply \cite{MB85} Th\'eor\`eme 4.6 page 236.
\end{proof}
\section{Regulators of elliptic curves over function fields of positive characteristic}\label{ellFFp}
We will combine lemma \ref{success} with the following consequence of \cite{HiSi3}, which improves on Lemma 6.6 page 998 of \cite{Nas16}.
\begin{theorem}(based on Hindry-Silverman \cite{HiSi3}, see also \cite{Nas16})\label{ffp}
Let $P$ be a point of infinite order on $E$ over $k(\mathcal{C})$ of positive characteristic $p$ and genus $g$, and assume that the $j$-map of $E$ has inseparable degree $p^e$. Then one has $$\widehat{h}_E(P)\geq p^{-2e} 10^{-15.5-23g} h(E).$$
\end{theorem}
\begin{proof}
There exists an elliptic curve $E_0$ whose $j$-map has inseparable degree $1$ such that $\phi^{e}:E_0\to E=E_0^{(p^{e})}$ is the $e$-th Frobenius isogeny. Denote the dual isogeny by $\widehat{\phi}^{e}$. We compute $$\widehat{h}_E(P)=p^{-e}\widehat{h}_{E_0}(\widehat{\phi}^{e}(P))\geq p^{-e}10^{-15.5-23g} h(E_0),$$
where this last inequality comes from the separable case (see Theorem \ref{ff0}) and since $h(E)=p^{e}h(E_0)$, we are done.
\end{proof}
In connection with Theorem \ref{ffp}, the interested reader should note that the remark on page 434 of \cite{HiSi3} is inaccurate. Moreover, in Theorem 7 of \cite{GoSz} one should assume $e=0$.
\begin{corollary}
Let $K$ be a function field of characteristic $p>0$ and genus $g$. The set of elliptic curves of trace zero over $K$, with non-trivial bounded rank, bounded inseparability degree and bounded regulator is limited, and finite when $K$ has finite constant field.
\end{corollary}
\begin{proof}
Apply Theorem \ref{ffp} and (\ref{Minkowski2}) to get the inequality $$\mathrm{Reg}_L(E/K)\geq \frac{1}{m^m} \Big( p^{-2e} 10^{-15.5-23g} h(E)\Big)^m.$$ As soon as the rank $m$ is positive and bounded, and as long as $e$ is bounded, a bounded regulator implies a bounded height. Then apply \cite{MB85} Th\'eor\`eme 4.6 page 236.
\end{proof}
\section{Questions}\label{questions}
\subsection{Regulator and rank}
\begin{ques}
Let $E$ be an elliptic curve over a number field $K$. Let $m_K$ be its Mordell-Weil rank. Can one prove
\begin{equation}\label{Q1}
\mathrm{Reg}(E/K)\geq c_0 \,m_K,
\end{equation}
where $c_0>0$ is a constant depending at most on $K$?
\end{ques}
The case of rank zero is trivial. Already in rank one, it would imply a non-trivial lower bound on the height of a generator of $E(K)$.
\\
\subsection{Small points on an elliptic curve}
A weaker version of Lang's conjectural inequality would already provide new insights.
\begin{ques}
Let $E$ be an elliptic curve over a number field $K$. Let $P$ be a $K$-rational point of infinite order. Can one prove
\begin{equation}\label{Q2}
\widehat{h}_E(P)\geq f(h(E)),
\end{equation}
where $f$ is a function tending to infinity when $h(E)$ tends to infinity?
\end{ques}
The interest here comes from the following remark: if one combines (\ref{Q2}) with inequality (\ref{Minkowski2}), this would imply a big improvement on Corollary \ref{NorthReg} by replacing the condition $m\geq4$ by $m\geq1$. Note that the inequality $\widehat{h}_E(P)\geq c_0>0$, where $c_0$ depends at most on $K$, is already an open and interesting problem though.
\subsection{Regulator and injectivity volume}
The Lang-Silverman conjecture in dimension $g\geq 1$, or the ABC conjecture in dimension 1, imply that on a simple principally polarized abelian variety $(A,L)$ of dimension $g$ over a number field $K$, when the rank is non-zero we have $\lambda_1^2\gg h(A)$. Let $\rho_\sigma(A,L)$ be the injectivity diameter of $A_\sigma(\mathbb{C})$ and $V_{\sigma}(A,L)$ be the injectivity volume, as described in \cite{Au16}. We would thus get by the Matrix Lemma $$m\RegL(A/K)^{1/m}\gg \lambda_1^2 \gg h(A)\gg \sum_{\sigma:K\hookrightarrow\mathbb{C}}\frac{1}{\rho_\sigma(A,L)^2}\gg \sum_{\sigma:K\hookrightarrow\mathbb{C}}\frac{1}{V_\sigma(A,L)^{1/g}},$$ where all the implied constants are positive and may depend on $K$ and $g$.
\begin{ques}
Can one prove independently
\begin{equation}\label{Q3}
\RegL(A/K)\gg \left(\displaystyle{\sum_{\sigma:K\hookrightarrow\mathbb{C}}}\frac{1}{V_\sigma(A,L)^{1/g}}\right)^{c_0 m},
\end{equation}
for a universal $c_0>0$? For elliptic curve with $m\geq 5$, the value $c_0=\frac{1}{15}$ is valid from Theorem \ref{blichreg}, with an implied constant depending on $K$ and on $m$.
\end{ques}
\section{Introduction}\label{sec-intro}
In the last decades the mathematical interest in geophysical problems was
steadily growing. While there is already a large body of work in atmospheric
and oceanographic fluid flows, the mathematics for geophysical models for solid
earth is much less developed. The latter concerns in particular the deformation
and motion of lithospheric plates in the upper crust, in particular
earthquakes. The difficulties in these models is the complex behavior of rock
that behaves elastically like a solid in the case of seismic waves on short
time scales but behaves like a viscoplastic fluid when considered over
centuries. However, very slow motion of long periods are crucial for building
up internal stresses that are then released in short rupture events triggering
earthquakes. Only recently, a new class of periodic motions in the Earth crust
was detected by evaluating GPS measurements, namely the so-called
``\emph{episodic tremor and slip}'' (cf.\ \cite{KTOT12ETSS,Bart20LTVE}): Here
all motions are so slow that no seismic waves are emitted, but there exist two
distinct regimes, one involving inelastic motions and one involving slow smooth
slip. These events are observed in so-called subduction zones and have periods
in the range of a few years while the overall shear velocity rate is in the range of
millimeter per year.
In addition to these temporal time scales there are also several spatial scales
involved. For instance, between tectonic plates there form weak
regions called faults that are relatively narrow but may accumulate
relatively large deformations, in particular in rapid shearing events. We
refer to \cite{PKRO16ERNS, HeGevD18IRSD, RHCRKO19SGSM, PHGA19SAFG, NGBPW20D3DR}
for some recent efforts in geodynamical modeling towards a better understanding
of these phenomena. On the mathematical side the work started less than a
decade ago and is still comparably small, see \cite{RoSoVo13MRLF, PiSaKo13VFRS,
Pipping2019, HeKoPo20FHMI, EiHoMi22LHSV, EiHoLa21?WSUE}. Moreover, there is a
dichotomy with respect to bulk interface models, where most of the nonlinear
effects are localized in the interface (e.g. by a so-called rate-and-state
dependent friction law), and pure bulk models where typically only existence
results for solutions are obtained but no qualitative behavior of the solutions
can be deduced.\medskip
With this work we want to initiate a mathematical study where pure bulk models
are considered but still interesting qualitative features can be deduced. In
this first study we will confine ourselves to a simplified ``stratified''
setting where only shear deformations are considered that depend on a
one-dimensional variable $x \in (-H,H)$ representing the transverse direction
to a straight fault or damage zone between two compact rocks representing two
plates that move with respect to each other, see Figure \ref{fig:geometry}.
The continuum model is given in terms of
\\
\textbullet\ the shear velocity $v= v(t,x) \in \mathbb R$,
\\
\textbullet\ the elastic strain $\varepsilon = \varepsilon(t,x)$,
\\
\textbullet\ the plastic strain $p= p(t,x)$,
\\
\textbullet\ the internal damage variable $\alpha=\alpha(t,x)$, and
\\
\textbullet\ the internal aging variable $\theta=\theta(t,x)$.
The model to be studied in its simplest form is the following system of five
partial differential equations posed for $(t,x)\in
(0,\infty)\times (-H,H)$ (see \eqref{evol} for the more general case treated below):
\begin{subequations}
\label{eq:I.evol}
\begin{align}
\label{eq:I.evol.a}
&\varrho \DT v = \big( \mathbb{C}(\alpha) \varepsilon\big)_x , &&\DT\varepsilon + \DT p = v_x,
\\
\label{eq:I.evol.b}
&\partial_{\DT p} R(\DT p,\theta) \ni \mathbb{C}(\alpha) \varepsilon + \eta\DT p_{xx},
\hspace{-1em}
&& \DT\alpha =- \frac12\mathbb{C}'(\alpha) \varepsilon^2 + \beta(1{-}\alpha) + \gamma
\alpha_{xx} ,\quad \mbox{}
\\
\label{eq:I.evol.c}
\mbox{}\qquad&\DT\theta = 1-\theta/\theta_\infty - \lambda |\DT p|\theta +
\kappa \theta_{xx} , \hspace{-2em}
\\
\intertext{with the dot-notation $(\cdot)\!\DT{^{}}$ and the notation $(\cdot)_x$
for the partial derivatives in time and in space, respectively.
We complete it with boundary conditions}
\label{eq:I.evol.d}
&
v(t,\pm H) = \pm v_\infty(t), \ \ p(t,\pm H)=0, &&
\alpha(t,\pm H)=1, \ \
\theta(t,\pm H) = \theta_\infty.
\end{align}
\end{subequations}
Here $\beta,\ \gamma,\ \eta,\ \kappa$, and $\lambda$ are positive constants,
whereas $\alpha \mapsto \mathbb{C}(\alpha)>0$ and
$(\pi,\theta) \mapsto R(\pi,\theta)>0$ are general smooth constitutive
functions. In particular, the state of damage $\alpha$ may decrease the elastic
stiffness $\mathbb{C}(\alpha)$, and even more importantly the yield stress
$\mu(\pi,\theta)$ may depend on the plastic rate $\pi=\DT p$ as well as on the
aging variable $\theta$. Thus, we are able to mimic the commonly used
Dieterich-Ruina {\it rate-and-state friction} law
\cite{Diet07ARSD,Ruin83SISVFL} where now the aging variable can be interpreted
as the ``state'' while the dependence on $\pi=\DT p$ gives the rate dependence.
Here $R(\;\!\cdot\!\;,\theta):\mathbb R\to \mathbb R$ is the plastic dissipation potential
depending on the aging variable $\theta$,
i.e.\ it is convex and satisfies $R(\pi,\theta) \geq 0 = R(0,\theta)$. The
plastic yield stress (or dry friction coefficient) is encoded by assuming
$R(\pi,\theta) = \mu(0,\theta)|\pi| + \mathscr{O}(\pi^2)$. Hence, we obtain a set-valued
convex subdifferential, which we assume to have the form $\partial_\pi R(\pi,\theta) =
\mu(\pi,\theta) \mathop{\mathrm{Sign}}(\pi) +\mathscr{O}(\pi)$, where
``\,Sign'' is the set-valued sign function, see \eqref{eq:def.Sign}. Thus,
the first equation in \eqref{eq:I.evol.b}, involving the nonsmooth
convex function $R(\cdot,\theta)$, is an inclusion
and gives rise to a {\it free boundary}, namely between the purely elastic
regime with $\pi = \DT p\equiv 0$, where $\text{Sign}(\DT p) = [-1,1]$, and the
plastic regime, where $\pi = \DT p \neq 0$ and $\text{Sign}(\DT p)= \{-1\}$ or
$\{+1\}$. \medskip
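To fix ideas, a canonical construction of such a dissipation potential (under
the additional assumption, used only for this illustration, that
$\mu(\cdot,\theta)$ is even and nondecreasing in $|\pi|$) is
\begin{align*}
R(\pi,\theta)=\int_0^{|\pi|}\!\!\mu(s,\theta)\,\mathrm{d} s\,,
\qquad\text{which gives}\qquad
\partial_\pi R(\pi,\theta)=\mu(\pi,\theta)\mathop{\mathrm{Sign}}(\pi)\,.
\end{align*}
In particular $\partial_\pi R(0,\theta)=\mu(0,\theta)\,[-1,1]$, so that plastic
flow can start only when the stress reaches the yield stress $\mu(0,\theta)$.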
Our paper is organized as follows: In Section \ref{se:Setup} we provide the
background from geodynamics introducing the rate-and-state friction models with
a given interface and our distributed-parameter model which is slightly more
general than \eqref{eq:I.evol}. In particular, Section \ref{sec-steady}
discusses the steady-state equation where $\DT v=\DT \alpha=\DT \theta=0$
while the plastic flow rate $\pi=\DT p$ is independent of time.
The full evolutionary model is then introduced in Section \ref{su:EvolMod}.
The analysis of steady states is the content of Section \ref{se:AnaSteady}.
In Theorem \ref{th:ExistSteady} we provide an existence theorem for steady states
under quite natural assumptions and arbitrary shear velocities $v(\pm H)=\pm
v_\infty$. The proof relies on a Schauder fixed-point argument, so we cannot
infer uniqueness, which probably fails in this general setting.
In Proposition \ref{prop2} we show that for steady states
the limit $\eta\to 0^+$ in \eqref{eq:I.evol.b} can be performed in such a way
that accumulation points are still steady states.
In Section \ref{sec-evol} we discuss the full dynamic model, show its
thermodynamic consistency, and derive the natural a priori estimates. For our
main existence result we restrict to the case without damage, i.e.\ $\mathbb{C}$ is
independent of $\alpha$ and $\alpha\equiv 1$ solves \eqref{eq:I.evol.b}. The
result of Theorem \ref{th:EvolExist} is obtained by time
discretization and a staggered incremental scheme mimicking the solution of the
static problem in Theorem \ref{th:ExistSteady}. The analytical aspects are
nontrivial because of the non-variational character of the problem, the non-polynomial
friction law \eqref{DR1+} leading to usage of Orlicz spaces, and
the lack of compactness for the elastoplastic wave equation.
The final Section~\ref{se:NumSimul} is devoted to a numerical exploration of
some simplified models that show the typical behavior expected also for the
full model. The simplified model is obtained from \eqref{eq:I.evol} by
neglecting $\alpha$ as in Section \ref{sec-evol} and by further ignoring
inertia (i.e.\ setting $\varrho=0$ and choosing $\eta=0$), see Section
\ref{su:SimplifMod}:
\begin{equation}
\label{eq:I.SimplMod}
\frac{2H}{\mathbb{C}} \DT \sigma + \int_{-H}^H \varPi(\sigma,\theta) \,\mathrm{d} x = 2
v_\infty(t), \quad
\DT \theta = 1{-}\frac\theta{\theta_\infty} - \lambda \varPi(\sigma,\theta) +
\kappa \theta_{xx} ,
\end{equation}
with $\theta(t,\pm H)=\theta_\infty$, where $\pi=\varPi(\sigma,\theta)=
\partial_\sigma\mathcal R^*(\sigma,\theta)$ is the unique
solution of $\sigma \in \partial_\pi\mathcal R(\pi,\theta)$.
In Section \ref{su:NumSimSteady} we discuss the steady states
$(\theta_\mathrm{stst},\pi_\mathrm{stst})$ where $\pi_\mathrm{stst}=
\varPi(\sigma_\mathrm{stst},\theta_\mathrm{stst})$. We do a parameter study for varying
$\kappa$ and $v_\infty$ and obtain a monotone behavior with respect to
$v_\infty$, namely $\theta_\mathrm{stst}$ is decreasing and $\pi_\mathrm{stst}$ is increasing.
We always observe spatial localization in the sense that
$\pi_\mathrm{stst}$ is supported on $[-h_*(v_\infty,\kappa), h_*(v_\infty,\kappa)]$ with
a free boundary positioned at the points $\pm h_*(v_\infty,\kappa)$ with
$h_*(v_\infty,\kappa) \lneqq H$ and $h_*(v_\infty,\kappa)\approx 0.55
\sqrt\kappa$ for $\kappa, v_\infty \to 0^+$.
The mere existence of steady states does not say anything about their stability
in the dynamic model \eqref{eq:I.SimplMod}. In Section \ref{su:NumSimODE} we
provide a two-dimensional ODE model with a unique steady state that is unstable
for small positive $v_\infty$, in which case general solutions converge to
periodic motions. Similarly, Section \ref{su:NumSimPDE} shows simulations
for system \eqref{eq:I.SimplMod} which exhibit convergence towards
$(\theta_\mathrm{stst},\pi_\mathrm{stst})$ if $v_\infty$ is large but predict convergence
towards time-periodic solutions that also have a clearly defined plastic zone
smaller than $(-H,H)$, see Figures \ref{fig:SM.converge} and \ref{fig:SM.osc}.
A surprising effect is that the width $2h$ of the core of the fault (the active
cataclastic zone) does not tend to $0$ if the plasticity gradient is ignored
by setting $\eta=0$, and not even if the aging gradient is also ignored by setting
$\kappa=0$. In Proposition \ref{prop3} we show that under natural assumptions
on the rate-and-state friction law one obtains a linear dependence
$h=h_*(v_\infty,0)= |v_\infty|/\pi_*$ for shear velocities with
$|v_\infty|< H\pi_*$, where $\pi_*$ is uniquely determined by the friction law
and the aging law.
Another noteworthy effect is that the length scale of the aging qualitatively
influences the character of the response, which varies between the stick-slip and
the sliding regimes. In particular, for very large shear velocities $v_\infty$
(which are not relevant in usual geophysical faults in the lithosphere) the
fault goes into a continuous sliding mode and no earthquakes occur. This is
actually a recognized attribute of this friction model, which in
\cite{Baum96DFDL} has been compared to the observation of our ``everyday life
when one often manages to get rid of door-squeaking by a fast opening''. In
contrast, under very slow shear velocities the friction threshold is not
reached for long time spans after a relaxation. Only when enough shear stress
has built up can the threshold be overcome. But then not only are stresses
released, the aging variable is also reduced, which leads to a much larger
stress release than needed. Hence, another long waiting time is needed until
the next ``earthquake'' starts.
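Although the numerical experiments are postponed to Section~\ref{se:NumSimul},
the following minimal sketch indicates how \eqref{eq:I.SimplMod} can be
integrated by explicit finite differences. All constants are illustrative
placeholders (not calibrated to geophysical data), the friction law is the
regularized rate-and-state law \eqref{DR1+} in its strain-rate form
\eqref{DR1++} below together with its explicit inverse $\varPi$, and a crude
clipping safeguard replaces the maximum principle $0\le\theta\le\theta_\infty$
of the continuous problem; it is not the solver used for the figures below.
\begin{verbatim}
import numpy as np

H, N = 1.0, 201                       # half-width of D=(-H,H), grid points
x = np.linspace(-H, H, N); dx = x[1] - x[0]
mu0, a, b, h, v_ref, d_c = 0.6, 0.2, 0.3, 1.0, 1.0, 1.0  # friction (DR1++)
theta_inf, lam, kappa, C = 1.0, 1.0, 1e-3, 1.0           # aging, elasticity
v_inf = 0.1                           # boundary shear velocity (constant here)

def Pi(sigma, theta):
    # explicit inverse of  sigma in mu(pi,theta)*Sign(pi)  for (DR1++)
    s = np.abs(sigma) - mu0 - b*np.log(v_ref*theta/d_c + 1.0)
    return np.sign(sigma)*(v_ref/h)*np.expm1(np.maximum(s, 0.0)/a)

sigma, theta = 0.0, theta_inf*np.ones(N)
dt = 0.4*dx*dx/kappa                  # explicit stability limit for diffusion
for _ in range(int(200/dt)):
    pi = Pi(sigma, theta)
    integral = dx*(pi.sum() - 0.5*(pi[0] + pi[-1]))   # trapezoidal rule
    sigma += dt*C/(2*H)*(2*v_inf - integral)
    lap = np.zeros(N)
    lap[1:-1] = (theta[2:] - 2*theta[1:-1] + theta[:-2])/dx**2
    theta += dt*(1.0 - theta/theta_inf - lam*pi + kappa*lap)
    theta = np.clip(theta, 0.0, theta_inf)   # crude safeguard
    theta[0] = theta[-1] = theta_inf         # Dirichlet data at x = +-H
\end{verbatim}
Depending on $v_\infty$ and $\kappa$, such runs either settle at a steady state
or develop the stick-slip oscillations described above.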
\section{Setup of the geodynamical model}
\label{se:Setup}
\subsection{Geodynamical background}
\label{su:Geodynamics}
{\it Earth's crust} (together with the lithosphere) is a rather solid rock bulk
surrounding the lower, more viscous parts of the planet. It is subject to
damage, typically along thin, usually flat weak surfaces called {\it faults},
which persist over millions of years. The faults may exhibit slow sliding
(so-called aseismic slip) or fast {\it rupture} (causing {\it tectonic earthquakes}
and emitting seismic waves) followed by long periods of reconstruction (healing)
between individual earthquakes. The former phenomenon needs some extra
creep-type rheology, modeled using a plastic strain variable
or some smoothing of the activated character of the frictional
resistance at very small rates (cf.\ Remark~\ref{rem-aseismic}), and will not be
scrutinized in this article, while the latter phenomenon needs some
friction-type rheology. Thus faults can be modeled as frictional contact
surfaces or as flat narrow strips.
As for the frictional contact, the original Dieterich-Ruina
rate-and-state friction model \cite{Diet07ARSD,Ruin83SISVFL}
prescribes the tangential stress $\sigma_{\rm t}$ on the frictional interface as
\begin{align}\label{DR1}
\sigma_{\rm t}=\sigma_{\rm n}\Big(\!\!\!\!
\lineunder{\mu_0+a\,{\rm ln}\frac{v}{v_{\rm ref}}
+b\,{\rm ln}\frac{v_{\rm ref}\theta}{d_{\rm c}}}
{= $\mu(v,\theta)$ = frictional resistance}\!\!\!\!\Big)
\end{align}
where the normal stress $\sigma_{\rm n}$ is considered to be given (=\,a
so-called Tresca friction model) and $v$ is (the norm of) the tangential
velocity jump along the interface. The (given) parameters $a$ and $b$ are the
direct-effect and the evolution friction parameters, respectively, $d_{\rm c}$
is the characteristic slip memory length, and $v_{\rm ref}$ is a reference
velocity. If $a{-}b>0$, we speak about velocity strengthening while, if
$a{-}b<0$, we speak about {\it velocity weakening} -- the latter case may lead
to instabilities and is used for earthquake modeling. The friction
coefficient $\mu=\mu(v,\theta)$ depends in this model on the velocity magnitude
$v$ and an internal variable $\theta$ being interpreted as an {\it aging}
variable, sometimes also as damage. The evolution of $\theta$ is governed by a
specific flow rule typically of the form of an ordinary differential equation
at each spot of the fault, say:
\begin{align}
\label{DR3}
\DT\theta=f_0(\theta)-f_1(\theta)|v|\,
\end{align}
with some continuous nonnegative functions $f_0$ and $f_1$.
More specifically, $f_0(\theta)=1$ and $f_1(\theta)=\theta/d_{\rm c}$
with $d_{\rm c}>0$ is most common, considered e.g.\ in
\cite{Bizz11TVCP, BeTuGo08CRPB, BieTel01MCPF, DauCar08CMFG, DauCar10FFE,
KaLaAm08SEMS, RDDDC09FDMR, Scho98EFL};
then for the static case $v=0$, the aging variable $\theta$ grows linearly
in time and has indeed the meaning of an ``age'' as a time elapsed from
the time when the fault ruptured in the past. The steady state $\DT\theta=0$
leads to $\theta=d_{\rm c}/|v|$ so that $\mu=\mu_0+(a{-}b)\,{\rm ln}|v/v_{\rm ref}|$.
Alternatively, one can consider the flow rule \eqref{DR3}
with some other $f_0$:
\begin{align}
\label{DR3+}
f_0(\theta)=\max\Big(1-\frac\theta{\theta_\infty\!}\,,0\Big)\ \ \text{ and }\ \
f_1(\theta)=\frac\theta{d_{\rm c}}\,,
\end{align}
cf.\ \cite{PeRiZh95SHSP},
and then $\theta$ stays bounded and asymptotically approaches $\theta_\infty$
in the steady state if $v\to0$, namely
$\theta=d_{\rm c}\theta_\infty/(d_{\rm c}{+}\theta_\infty|v|)$. This suggests
interpreting $\theta$ rather as a certain hardening or ``gradual locking''
of the fault in the ``calm'' steady state $v=0$.
An obvious undesired attribute of \eqref{DR1} is, as already noted in
\cite[p.108]{Diet07ARSD}, that, ``as $v$ or $\theta$ approach zero,
eqn.\ \eqref{DR1} yields unacceptably small (or negative) values of sliding
resistance'' $\mu$. Therefore, \eqref{DR1} obviously violates the
Clausius-Duhem entropy inequality, although it is used in dozens of geophysical
articles relying on the expectation that in specific applications
the solutions do not slide into these physically wrong regimes.
Nevertheless,
a regularization leading to $\mu>0$, and thus to a physically
correct non-negative dissipation, is also used, typically
as in \cite{Diet87NTES}, cf.\ e.g.\ also \cite{PeRiZh95SHSP}:
\begin{align}\label{DR1+}
&\mu=\mu(v,\theta)=\mu_0+a\,{\rm ln}\Big(\frac{|v|}{v_{\rm ref}\!}\,{+}1\Big)
+b\,{\rm ln}\Big(\frac{v_{\rm ref}}{d_{\rm c}}\theta{+}1\Big)\,.
\end{align}
In what follows, we will therefore have in mind rather \eqref{DR1+} than
\eqref{DR1}. For an analysis and numerics of the rate-and-state friction
in the multidimensional visco-elastic context we refer to \cite{PiSaKo13VFRS,
PKRO16ERNS, PatSof17ARSF, Pipping2019}.
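As a quick sanity check of the quoted deficiency of \eqref{DR1} and of the
positivity restored by \eqref{DR1+}, one may evaluate both laws for small $v$;
the constants below are illustrative placeholders only.
\begin{verbatim}
import numpy as np

mu0, a, b, v_ref, d_c, theta = 0.6, 0.01, 0.015, 1.0, 1.0, 1.0
for v in [1.0, 1e-6, 1e-30]:
    mu_DR1  = mu0 + a*np.log(v/v_ref) + b*np.log(v_ref*theta/d_c)
    mu_DR1p = mu0 + a*np.log(v/v_ref + 1.0) + b*np.log(v_ref*theta/d_c + 1.0)
    print(f"v={v:8.1e}:  (DR1) mu={mu_DR1:+.3f}   (DR1+) mu={mu_DR1p:+.3f}")
# (DR1) turns negative for v = 1e-30 here, while (DR1+) keeps
# mu >= mu0 > 0 for all v and theta >= 0.
\end{verbatim}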
Since the velocity occurs in the aging flow rule \eqref{DR3}, this
friction model does not seem consistent with standard
thermodynamics, as pointed out in \cite{Roub14NRSD}, in the sense
that the evolution \eqref{DR3} does not come from any free energy.
On top of that, it has been known from the beginning of this rate-and-state
model that it does not fit some experiments well \cite{Ruin80FLIQ}, and
(rather speculative) modifications, e.g.\ by using several aging variables
(which naturally opens space for fitting more experiments), have been
devised, cf.\ \cite{Ruin83SISVFL}.
A rather formal attempt to overcome the mentioned thermodynamical inconsistency
was made in \cite{PiSaKo13VFRS} by introducing two energy
potentials. Thermodynamically consistent models have been devised either by
using isothermal damage with healing \cite{RoSoVo13MRLF} or by nonisothermal
damage where the temperature variation was interpreted approximately as a sliding
velocity magnitude $v$. The latter option uses the idea that the slip of the
lithospheric fault generates heat which increases the temperature on the fault. In
the geophysical literature, the heat produced during frictional sliding is believed
``to produce significant changes in temperature, thus the change of strength of
faults during seismic slip will be a function of ... also temperature'', cf.\
\cite[p.7260]{Ches94ETFC}. The usage of an (effective) interfacial temperature
is discussed in \cite{DauCar08CMFG, DauCar10FFE}, following ideas from
\cite{Lang08STZR}. In \cite{BaBeCa99PASR, Ches94ETFC, Ches95RMWC, Scho02MEF}
the classical rate-and-state friction law is also made temperature dependent.
Experimentally, even melting of rocks due to frictional heating is sometimes
observed.
A simplified friction model $\mu(v)=\mu_0+a{\rm ln}(b|v|{+}1)$ or
$\mu(v)=\mu_0+(a{-}b){\rm ln}|v/v_{\rm ref}|$
is sometimes also considered under the name rate-dependent friction
\cite{Diet79MRFE,LyBZAg05VDRR,Ruin83SISVFL,TonLav18SSTE}
and was analyzed in \cite{Miel18TECI} with respect to its stability.
In contrast, the above-mentioned variant of temperature-dependent
friction can be called purely state dependent.
The friction model is sometimes ``translated'' into a bulk model
involving a plastic-like strain and the sliding-friction
coefficient $\mu$ then occurs as a threshold (a so-called yield stress)
in the plastic flow rule, cf.\ \cite[Sect.\,6]{Roub14NRSD},
or \cite{DauCar09SSIS,DauCar10FFE,HeGevD18IRSD,LyBZAg05VDRR,TonLav18SSTE}, known
also under the name {\it shear-transformation-zone} (STZ) concept,
referring to a (usually narrow) region in an amorphous solid that
undergoes plastification when the material is under a large mechanical load.
Instead of the velocity dependence \eqref{DR1+}, one should then work with a
dependence on the strain rate, cf.\ \eqref{DR1++} below. These options can be
``translated'' into the bulk model by making the yield stress $\mu$ dependent,
beside the strain rate, also on an aging variable $\theta$,
or on a temperature, or on a damage, or on various combinations of those.
Altogether, one thus gets a wide menagerie of friction-type models.
Here we consider, as is rather standard in geophysical modeling, cf.\ \eqref{DR1+},
an isothermal variant and make $\mu$ dependent on the strain rate and on aging. We
also consider damage (or phase-field), as usual in fracture mechanics,
to illustrate its different position in the model.
The main phenomenon is that the aging evolution does not directly contribute
to the energetics, as it influences only the dissipative ``friction'' $\mu$.
This is similar to a cam-clay model \cite{DaDeSo11QECC,DaDeSo12QECC},
where the dissipative response is controlled through an internal variable
whose rate, however, does not explicitly contribute to the energetics.
On the other hand, damage (or phase-field) influences the elastic response
through the stored energy and is in turn driven by the
resulting driving force. Also,
we adopt the (realistic) assumption that the elastic strain
(as well as its rate) is small, which makes it possible to let
$\mu$ depend on the plastic strain rate rather than
the elastic strain rate and to put it into the standard framework of
rate-dependent plasticity. The plasticity is considered without
any hardening, which otherwise might dominate for the large slips accumulated
on long time scales and would unacceptably corrupt the autonomy of
the model. In principle, damage may also influence the friction
$\mu$ as in \cite{RoSoVo13MRLF,RouVal16RIPP}, but we will not consider this.
\subsection{The one-dimensional steady-state model}\label{sec-steady}
It is generally understood that fracture mechanics, and in
particular fault mechanics, is very complex and difficult to analyze.
Therefore, we focus on a very simplified situation: a flat
fault which is perfectly homogeneous in its tangential direction.
Thus all variables depend only on the position in the normal
direction and the problem reduces to a one-dimensional one,
cf.\ Figure~\ref{fig:geometry}.
\begin{figure}[ht]
\centering
\psfrag{cataclastic zone}{\hspace*{-.3em}\small\bf cataclastic zone}
\psfrag{compact rock}{\footnotesize\hspace*{0em}
\begin{minipage}[c]{5em}\baselineskip=8pt compact\\\hspace*{.7em}rock\end{minipage}}
\psfrag{damage zone}{\footnotesize damage zone $D$ (width = $2H$)}
\psfrag{2h}{\footnotesize $2h$}
\psfrag{velocity}{\footnotesize\hspace*{0em}
\begin{minipage}[c]{5em}\baselineskip=8pt prescribed\\velocity $v_\infty$\end{minipage}}
\includegraphics[width=28em]{trber8.png}
\vspace*{-.1em}
\caption{\small\sl Schematic geometry: a cross-section through a fault.
}\label{fig:geometry}
\end{figure}
We ask about the existence of steady states in situations
where the sides of the fault move with a constant speed in opposite
directions. The model is thus expressed in rates rather than
displacements and plastic strains.
Such steady states are also called {\it aseismic slips} (sliding),
in contrast to seismic slips, which are dynamical phenomena related
to stick-slip motion and earthquakes. For the relation of
aseismic slip (fault growth) and the orientation of faults see
\cite{PHGA19SAFG}. Aseismic slip can also be understood as creep,
within which the Maxwellian viscoelastic rheology is manifested.
The variables of our steady-state model will thus be:
\\
\textbullet\ $\ \ v$ velocity (in m/s),
\\
\textbullet\ $\ \ \pi$ plastic strain rate (in 1/s),
\\
\textbullet\ $\ \ \varepsilon$ elastic strain (dimensionless),
\\
\textbullet\ $\ \ \alpha$ damage (dimensionless, ranging over $[0,1]$), and
\\
\textbullet\ $\ \ \theta$ aging (in seconds), and later also
\\
\textbullet\ $\ \ \sigma$ a stress (or, in the one-dimensional case, rather a force in J/m=N).
\noindent
These first five variables are to satisfy the following system of five equations (inclusions):
\begin{subequations}
\label{eq}\begin{align}\label{eq1}
&(\mathbb{C}(\alpha)\varepsilon)_x=0&&\text{(momentum equilibrium)}
\\ \label{eq2}
&\pi=v_x&&\text{(plastic shear rate)}
\\&\label{eq3}
\mu(\pi,\theta)\text{Sign}(\pi)\ni \mathbb{C}(\alpha)\varepsilon+\eta\pi_{xx}\,,&&\text{(plastic flow rule)}
\\[-.2em]&\label{eq4}
\frac12\mathbb{C}'(\alpha)\varepsilon^2+G_{\rm c}\frac{\alpha{-}1}{\ell^2}
=G_{\rm c}\ell^2\alpha_{xx}\,,&&\text{(damage flow rule)}
\\[-.2em]&\label{eq5}
|\pi|f_1(\theta)-f_0(\theta)=
\kappa\theta_{xx}\,,&&\text{(aging flow rule)}
\end{align}\end{subequations}
where $(\cdot)_x$ denotes the derivative (later also partial derivative) in
$x$. Actually, \eqref{eq3} contains a set-valued term
$\partial_\pi R(\pi,\theta) = \mu(\pi,\theta)\text{Sign}(\pi)$ and is thus an
inclusion rather than an equation. There, we have denoted by ``\,Sign'' the
set-valued sign function, i.e.
\begin{align}
\label{eq:def.Sign}
\text{Sign}(\pi)=\begin{cases}1&\text{for }\pi>0,\\[-.2em]
[-1,1]&\text{for }\pi=0,\\[-.2em]
-1&\text{for }\pi<0.\end{cases}
\end{align}
This system arises as a steady state from an evolution model \eqref{evol} below.
In particular, the equation \eqref{eq2} arises from the additive (Green-Naghdi's)
decomposition of the total strain into the elastic strain and the plastic
strain, cf.\ \eqref{evol2} below. Written in terms of rates and taking into account
that the rate of the elastic strain is zero in the steady state, we arrive at
\eqref{eq2}. In fact, the velocity $v$ here enters the rest of the system only
through the boundary condition \eqref{BC} below, in contrast to the full evolutionary
model later in Section~\ref{sec-evol} where velocity acts through the inertial
force.
The data (or constitutive relations) in the model \eqref{eq} are:
\\
\textbullet\ $\ \ \mu=\mu(\pi,\theta)$ a yield stress (in the one-dimensional model in N=J/m),
\\
\textbullet\ $\ \ \mathbb{C}=\mathbb{C}(\alpha)$ elastic modulus (smooth, nondecreasing, in N=J/m),
\\
\textbullet\ $\ \ f_0$ aging rate (dimensionless),
\\
\textbullet\ $\ \ f_1$ ``contra-aging'' coefficient (in seconds),
\\
\textbullet\ $\ \ G_{\rm c}$ fracture toughness (in a one-dimensional model in N=J/m),
\\
\textbullet\ $\ \ \eta>0$ a length-scale coefficient for $\pi$ (i.e.\ for the cataclastic zone, in W/m),
\\
\textbullet\ $\ \ \ell>0$ a length-scale coefficient for the damage (in meters),
\\
\textbullet\ $\ \ \kappa>0$ a length-scale coefficient for the aging (in m$^2$/s),
\noindent
while $f_0$ and $f_1$ are essentially borrowed from \eqref{DR3+}.
Actually, $v$ in \eqref{DR1+} has the meaning of a difference of
velocities across the contact interface rather than of a velocity itself, which
would not be Galilean invariant. In a variant of the bulk model, $\mu$ should
rather depend on a shear rate and, instead of the coefficient
$1/v_{\rm ref}^{}$, one should consider $h/v_{\rm ref}^{}$ with $h$ a certain
characteristic width of the active slip area, likely to be identified with
the width of the cataclastic core zone, cf.~Figure~\ref{fig:geometry}.
Thus, we consider
\begin{align}\label{DR1++}
&\mu=\mu(\pi,\theta)=\mu_0+a\,{\rm ln}\Big(\frac{h}{v_{\rm ref}}|\pi|{+}1\Big)
+b\,{\rm ln}\Big(\frac{v_{\rm ref}}{d_{\rm c}}\theta{+}1\Big)\,.
\end{align}
In comparison with \eqref{DR3}, the steady-state equation \eqref{eq5}
contains the length-scale term $\kappa\theta_{xx}$. Also the damage equation \eqref{eq4}
contains a length-scale term $\ell^2\alpha_{xx}$ competing with
the driving force $\frac12\mathbb{C}'(\alpha)\varepsilon^2$ coming from the $\alpha$-dependence
in \eqref{eq1}. Note that the gradient term in \eqref{eq3} applies to the plastic rate
and that no gradient term involves the plastic strain directly, similarly as in
\cite{DaRoSt21NHFV,Roub22QHLS}. This eliminates spurious hardening-like
effects caused by large slips accumulated on faults over large time scales, which
would otherwise start dominating and corrupt the autonomous character of the model.
We have to complete the system \eqref{eq} by suitable boundary conditions.
Specifically, we choose the boundary conditions
\begin{align}\label{BC}
v(\pm H)=\pm v_\infty,\ \ \ \ \
\pi(\pm H)=0,\ \ \ \ \
\alpha(\pm H)=1,\ \ \ \ \
\theta(\pm H)=\theta_\infty
\end{align}
with $\theta_\infty$ from \eqref{DR3+}.
Let us mention that we use the mathematical convention that $\alpha=1$ means
undamaged material while $\alpha=0$ means maximally damaged material.
From \eqref{eq1}, we can see that $\mathbb{C}(\alpha)\varepsilon$ is constant on the damage
domain $D=[-H,H]$; we denote this constant by $\sigma$.
From this, we can express
\begin{align}\label{e=..alpha}
\varepsilon(x)=\frac\sigma{\mathbb{C}(\alpha(x))}\qquad\text{ for all }\ x\in D\,.
\end{align}
If $\mathbb{C}(\cdot)$ is increasing, one can conversely express $\alpha$ as a
function of $\varepsilon$, but we will eliminate $\varepsilon$ rather than $\alpha$.
Also the equation \eqref{eq2} can be eliminated because the velocity $v$ occurs
only in the first boundary condition in \eqref{BC}. This condition then
turns into an integral side constraint
$\int_D\pi \,\mathrm{d} x=\int_D v_x \,\mathrm{d} x=v(H)-v(-H)=2v_\infty$.
We can thus reduce \eqref{eq} to the system of three elliptic
ordinary-differential equations
\begin{subequations}\label{eq-three}\begin{align}
&\label{eq3-three}
\mu(\pi,\theta){\rm Sign}(\pi)\ni\sigma+\eta\pi_{xx}\,,
\\&\label{eq4-three}
\frac{\mathbb{C}'(\alpha)}{2\mathbb{C}^2(\alpha)}\sigma^2+G_{\rm c}\frac{\alpha{-}1}{\ell^2}
=G_{\rm c}\ell^2\alpha_{xx}\,,
\\&\label{eq5-three}
|\pi|f_1(\theta)-f_0(\theta)=
\kappa\theta_{xx}\,
\end{align}\end{subequations}
with the integral and the boundary conditions
\begin{subequations}\label{BC-three}\begin{align}\label{BC-three1}
&\pi(\pm H)=0\ \ \ \text{ with }\ \ \int_D\!\!\pi \,\mathrm{d} x=2v_\infty,\ \ \ \ \
\\[-.6em]&\label{BC-three2}
\alpha(\pm H)=1,\ \ \ \ \ \\&\theta(\pm H)=\theta_\infty\,.
\end{align}\end{subequations}
It is noteworthy that \eqref{eq4-three} decouples from (\ref{eq-three}a,c); this
decoupling arises not from necessity but rather from our desire for simplicity and for
consistency with the standard rate-and-state friction as in
Section~\ref{sec-intro}: we assumed that $\mu$, $f_0$, and $f_1$ are
independent of $\alpha$. The system (\ref{eq-three}a,c)--(\ref{BC-three}a,c)
thus represents a nonstandard non-local two-point boundary-value problem for
the functions $(\pi,\theta)$ on $D$ and one scalar variable
$\sigma$. Once this problem is solved, the two-point boundary-value problem
\eqref{eq4-three}--\eqref{BC-three2} can be solved for $\alpha$. Then $\varepsilon$ is
obtained from \eqref{e=..alpha}. Eventually, the velocity $v$ can be calculated
from \eqref{eq2} when also using \eqref{BC-three1}.
\subsection{The evolutionary model}
\label{su:EvolMod}
\defp{p}
We will now investigate an evolutionary version of the steady-state model \eqref{eq},
which in particular explains how \eqref{eq} has arisen.
In addition to the variables needed in Section~\ref{sec-steady},
we will now also use:
\\
\textbullet\ $\ \ p$ plastic strain (dimensionless) and
\\
\textbullet\ $\ \ \varrho$ mass density (in the one-dimensional model in kg/m).\\
An additional ingredient will be
a dissipation potential $\zeta$ for damage, which is convex with
subdifferential $\partial\zeta$ and has physical
dimension J/m.
\noindent
The evolutionary variant of \eqref{eq} then reads as:
\begin{subequations}\label{evol}\begin{align}\label{evol1}
&\varrho\DT v-(
\mathbb{C}(\alpha)\varepsilon)_x=0\,,&&\text{(momentum equilibrium)}
\\\label{evol2}
&\DT\varepsilon+\DT{p}=v_x\,,
&&\text{(additive decomposition)}
\\&\label{evol3}
\partial_\pi R(\DTp,\theta) \ni
\mathbb{C}(\alpha)\varepsilon+\eta\DT{p}_{xx}\,,&&\text{(plastic flow rule)}
\\&\label{evol4}
\partial\zeta(\DT\alpha)
+\frac12\mathbb{C}'(\alpha)\varepsilon^2+G_{\rm c}\frac{\alpha{-}1}{\ell^2}
\ni G_{\rm c}\ell^2\alpha_{xx}\,,&&\text{(damage flow rule)}
\\&\label{evol5}
\DT\theta=f_0(\theta)-|\DT{p}|f_1(\theta)
+\kappa\theta_{xx}\,.&&\text{(aging flow rule)}
\end{align}\end{subequations}
It is to be completed with boundary conditions as in \eqref{BC}, now with a
possibly time-dependent boundary velocity $v_\infty=v_\infty(t)$, i.e.\ here
\begin{align}\label{BC-evol}
v(\pm H)=\pm v_\infty(t),\ \ \ \ \
p(\pm H)=0,\ \ \ \ \
\alpha(\pm H)=1,\ \ \ \ \
\theta(\pm H)=\theta_\infty\,,
\end{align}
with $\theta_\infty$ constant in time. The (Green-Naghdi) additive
decomposition is written in rates, which just gives \eqref{evol2}.
Obviously, the steady-state variant of \eqref{evol}, where
$\DT v=\DT\varepsilon=\DT\alpha=\DT\theta=0$ while the plastic rate $\DT p=\pi$
is constant in time, yields just \eqref{eq}.
The system (\ref{evol}a--d) has a rational physical background, while \eqref{evol5}
expresses some extra phenomenology controlling the nonconservative part in
\eqref{evol3}. For $\varrho=0$, the system (\ref{evol}a--d) represents the
so-called Biot equation
$\partial_{\DT q}\mathcal{R}(q,\theta,\DT q)
+\partial_q\mathcal{E}(q,\theta)=0$ for the state $q=(u,p,\alpha)$ and
$\theta$ given, with the total dissipation potential
$\mathcal{R}(q,\theta,\DT q)=\int_D\zeta_{\rm
tot}^{}(\theta;\DT p,\DT\alpha) \,\mathrm{d} x$ and the stored energy
$\mathcal{E}(q,\theta)=\int_D\varphi(\varepsilon,\alpha) \,\mathrm{d} x$, while for
$\varrho>0$ it arises from the Hamilton variational principle generalized for
dissipative systems with internal variables.
The underlying specific stored energy and the dissipation potential (in terms of
the rates of plastic strain $p$ and damage $\alpha$) behind
this model are
\begin{subequations}
\label{eq:energetics}
\begin{align}
&\varphi(\varepsilon,\alpha)=\frac12\mathbb{C}(\alpha)\varepsilon^2
+G_{\rm c}\Big(\frac{(1{-}\alpha)^2}{2\ell^2}
+\frac{\ell^2}2\alpha_x^2\Big)
\ \ \text{ and }\ \
\\&\zeta_{\rm tot}^{}(\theta;\DT{p},\DT\alpha)
= R(\DT{p},\theta) + \zeta(\DT\alpha)+\frac\eta2{\DT p}_{x}^2\,,
\end{align}
\end{subequations}
where often $\mathbb{C}(\alpha)=(\ell^2/\ell_0^2+\alpha^2)C_0$ with some $\ell_0$.
The constants $\ell$ and $\ell_0$ are in meters while the
fracture toughness $G_{\rm c}$ is in J/m$^2$, cf.\
\cite[Eqn.\,(7.5.35)]{KruRou19MMCM}, or rather in J/m in our 1-dimensional model.
This damage part of the stored energy is known as the Ambrosio-Tortorelli functional \cite{AmbTor92AFDP}.
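Anticipating the thermodynamic consistency discussed in Section~\ref{sec-evol},
let us record the formal energy-dissipation balance behind \eqref{evol} (a
sketch only, ignoring regularity issues): testing \eqref{evol1} by $v$,
\eqref{evol3} by $\DT p$, and \eqref{evol4} by $\DT\alpha$, and using
\eqref{evol2} and the boundary conditions \eqref{BC-evol}, one obtains
\begin{align*}
\frac{\mathrm{d}}{\mathrm{d} t}\int_D\Big(\frac\varrho2 v^2+\varphi(\varepsilon,\alpha)\Big)\,\mathrm{d} x
+\int_D\big(\xi\,\DT p+\eta\,\DT p_x^2+\varsigma\,\DT\alpha\big)\,\mathrm{d} x
=\big[v\,\mathbb{C}(\alpha)\varepsilon\big]_{x=-H}^{x=H}
\end{align*}
for some $\xi\in\partial_{\DT p}R(\DT p,\theta)$ and
$\varsigma\in\partial\zeta(\DT\alpha)$. Since $R(\cdot,\theta)$ and $\zeta$ are
convex with $R(0,\theta)=0=\zeta(0)$, one has $\xi\DT p\ge R(\DT p,\theta)\ge0$
and $\varsigma\DT\alpha\ge\zeta(\DT\alpha)\ge0$, so all dissipation terms are
nonnegative and the kinetic-plus-stored energy can grow only by the power of the
boundary loading $v(\pm H)=\pm v_\infty(t)$. Note that the aging equation
\eqref{evol5} does not enter this balance, reflecting the special position of
$\theta$ discussed in Section~\ref{su:Geodynamics}.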
\section{Analysis of the steady state model}
\label{se:AnaSteady}
Further on, we will use standard notation for function spaces. In
particular, $C(D)$ will be the space of continuous functions on $D$,
and $L^p(D)$ will denote the Lebesgue space of measurable functions
on the domain $D=[-H,H]$ whose $p$-th power is integrable (or which are
bounded when $p=\infty$), and $W^{k,p}(D)$ the Sobolev space of functions in
$L^p(D)$ whose $k$-th distributional derivative belongs to $L^p(D)$.
We abbreviate $H^k(D)=W^{k,2}(D)$. Besides, $H_0^1(D)$ will denote
the subspace of $H^1(D)$ of functions with zero values at $x=\pm H$. In
Section~\ref{sec-evol}, for the time interval $I=[0,T]$ and a Banach space $X$,
we will also use the Bochner space $L^p(I;X)$ of Bochner-measurable functions
$I\to X$ whose norm belongs to $L^p(I)$, and the Sobolev--Bochner space $H^1(I;X)$
of functions which belong, together with their distributional time derivative,
to $L^2(I;X)$.
\subsection{Existence of steady states}
Let us recall the standard definition of a weak solution to the inclusion
\eqref{eq3} as a variational inequality
\begin{align}\label{eq3-weak}
\int_D\big( R(\widetilde\pi,\theta)-\sigma(\widetilde\pi{-}\pi)
+\eta\pi_x(\widetilde\pi{-}\pi)_x \big) \,\mathrm{d} x\ge\int_D
R(\pi,\theta) \,\mathrm{d} x
\end{align}
to be satisfied for any $\widetilde\pi\in H^1(D)$, where $\partial_\pi R(\pi,\theta)
=\mu(\pi,\theta)\mathrm{Sign}(\pi)$.
We will prove existence of solutions even in the stronger sense of a classical
(also called Carath\'eodory or strong) solution, namely such
that $|\pi_{xx}|$ is integrable (in our case actually even bounded) and
\begin{align}\label{eq3-strong}
\forall\,\widetilde\pi\in\mathbb R:\quad
R(\widetilde\pi,\theta)-\sigma(\widetilde\pi{-}\pi)+\eta\pi_{xx}(\widetilde\pi{-}\pi)
\ge R(\pi,\theta)
\end{align}
holds a.e.\ on $D$.
As mentioned in Section~\ref{sec-intro}, the rate-and-state friction model lacks
standard thermodynamical consistency, which is reflected in the steady-state
case by a lack of joint variational structure. Nevertheless, the two
equations \eqref{eq3} and \eqref{eq5} for $\pi$ and $\theta$, respectively, have
an individual variational structure governed by
the functionals
\begin{equation}
\label{eq:calA.calB}
\mathcal A_\pi(\theta) := \int_D|\pi|\varphi_1(\theta)-
\varphi_0(\theta) +\frac\kappa2|\theta_x|^2 \,\mathrm{d} x
\quad \text{and} \quad
\mathcal B_\theta(\pi) := \int_D R(\pi,\theta) +\frac\eta2|\pi_x|^2 \,\mathrm{d} x ,
\end{equation}
where $\varphi_0$ and $\varphi_1$ are primitive functions to $f_0$ and $f_1$,
respectively. Then, the pair $(\theta,\pi)$ is a desired solution if and only
if $\theta $ minimizes $\mathcal A_\pi(\cdot)$ on
$\{\theta \in H^1(D) ;\ \theta(\pm H)=\theta_\infty\}$ and $\pi$ minimizes
$\mathcal B_\theta(\cdot )$ on
$\{\pi \in H^1_0(D);\ \int_D \pi \,\mathrm{d} x =2v_\infty\}$. Since both functionals
$\mathcal A_\pi(\cdot)$ and $\mathcal B_\theta(\cdot )$ are strictly convex,
the solution operators $\theta=S_{\mathcal A}(\pi)= \text{argmin}\,\mathcal
A_\pi(\cdot)$ and $\pi =S_{\mathcal B}(\theta) = \text{argmin}\,\mathcal B_\theta(\cdot )$
are well-defined.
The existence of steady states will be proved by a Schauder fixed-point
theorem applied to $S_{\mathcal A} \circ S_{\mathcal B}$.
\begin{theorem}[Existence of steady states]
\label{th:ExistSteady}
Let the following assumptions hold:
\begin{subequations}\label{ass}\begin{align}\nonumber
&\mu:\mathbb R^2\to\mathbb R \text{ continuous, }\mu(\cdot,\theta)\text{ non-decreasing
on } [0,+\infty)
\\&\hspace{10em}
\text{and non-increasing on } (-\infty,0],\quad \inf\nolimits_\mathbb R\mu(0,\theta)>0,
\label{ass1}\\
&\mathbb{C}:\mathbb R\to\mathbb R \text{ continuously differentiable, }\
\mathbb{C}'([1,\infty))=0,\ \ \inf\nolimits_\mathbb R \mathbb{C}(\alpha)>0 ,
\\\nonumber&f_0,f_1 \text{ continuous, non-negative, }
f'_1(\theta)>0, \ \ f_1(0)=0,
\\&\label{ass2}
\hspace{13.8em} f'_0(\theta)< 0, \ \ f_0(\theta_\infty)=0,
\\&\kappa>0,\ \ \ell>0, \ \ \eta>0 \,.
\end{align}
\end{subequations}
Then:\\
\ITEM{(i)}{For all $v_\infty \in \mathbb R$, problem \eqref{eq}--\eqref{BC} has a solution in the
classical sense (i.e.\ {\rm(\ref{eq}a,b,d,e)} hold everywhere
and \eqref{eq3-strong} holds a.e.\ on $D$) such that
$\varepsilon\in W^{1,\infty}(D)$, $v\in W^{3,\infty}(D)$, and
$\pi,\alpha,\theta\in W^{2,\infty}(D)$.}
\ITEM{(ii)}{Moreover, any solution
satisfies $0\le\theta\le\theta_\infty$ and
$0\le\alpha\le1$ with $\alpha$ convex.}
\ITEM{(iii)}{If $v_\infty\ne0$, then $\sigma v_\infty>0$ with
$\sigma=\mathbb{C}(\alpha)\varepsilon$ denoting the stress, and if also $\mathbb{C}'\le0$ with
$\mathbb{C}'(1)<0$, then $\alpha(x)<1$ except at $x=\pm H$.}
\ITEM{(iv)}{If $\mathbb{C}$, $f_0$, $f_1$, and
$\mu$ are smooth, then $\alpha,\theta\in W^{4,\infty}(D)$.}
\end{theorem}
\begin{proof}
For a given $\widetilde\theta$, equation \eqref{eq3-three} with the nonlocal condition
in \eqref{BC-three} is equivalent to $\pi = S_\mathcal{B}(\widetilde\theta)=
\text{argmin} \mathcal B_{\widetilde\theta}(\cdot)$.
The monotonicity of $\mu(\cdot,\widetilde\theta)$ assumed in \eqref{ass1} ensures the
uniform convexity of the functional $\mathcal B_{\widetilde\theta}(\cdot)$. Therefore the
minimizer $\pi=S_\mathcal{B}(\widetilde\theta)$, which clearly exists by the direct
method in the calculus of variations, is uniquely determined. Moreover, it
depends continuously on $\widetilde\theta$ with respect to the weak topology
on $H^1(D)$. Thanks to \eqref{ass1}, for $v_\infty$ given,
$\mathcal B_{\widetilde\theta}(\cdot)$ is coercive uniformly with respect to $\widetilde\theta$,
and therefore the minimizer $\pi= S_\mathcal{B}(\widetilde\theta)$ can be a priori
bounded in $H^1(D)$ independently of $\widetilde\theta$.
With a Lagrange multiplier $\sigma$ for the scalar-valued constraint
$\int_D \pi\,\mathrm{d} x = 2v_\infty$, the Lagrangian for minimizing $\mathcal
B_{\widetilde\theta}$ reads
\begin{align}
\label{eq:scrL}
\mathscr{L}(\pi,\sigma)=\int_D R(\pi,\widetilde\theta)
+\frac\eta2\pi_x^2+\sigma\Big(\pi-\frac{v_\infty}H\Big) \,\mathrm{d} x
\end{align}
and the optimality conditions $\partial_\pi\mathscr{L}(\pi,\sigma)\ni0$ and
$\partial_\sigma\mathscr{L}(\pi,\sigma)=0$ with ``$\partial$'' denoting the partial
subdifferentials (in the functional sense) give respectively the inclusion
\eqref{eq3-three} with $\widetilde\theta$ instead of $\theta$ and the integral
condition $\int_D\pi \,\mathrm{d} x=2v_\infty$ in \eqref{BC-three}. Also this
multiplier is determined uniquely and depends continuously on $\widetilde\theta$. From
\eqref{eq3-three} written as
$\sigma\in\mu(\pi,\widetilde\theta){\rm Sign}(\pi)-\eta\pi_{xx}$ in $H_0^1(D)^*$,
we can see that also $\sigma\in\mathbb R$ is a priori bounded independently of
$\widetilde\theta$.
For a given $\pi$,
equation \eqref{eq5} is equivalent to $\theta=
S_\mathcal{A}(\pi)=\text{argmin}\mathcal A_\pi(\cdot)$.
As $f_1$ is nondecreasing and $f_0$ is
nonincreasing, the functional $\mathcal A_\pi(\cdot)$ is convex, and it is to
be minimized on the affine manifold
$\{\theta\in H^1(D);\ \theta(\pm H)=\theta_\infty\}$, cf.\ the boundary
conditions \eqref{BC-three}. Therefore this boundary-value problem has a unique weak
solution $\theta\in H^1(D)$,
which depends continuously on $\pi$ and
can be bounded independently of $\widetilde\theta$ when taking into
account the mentioned a priori bound for $\pi$.
Using $f_1(0)=0$, $f_0(\theta_\infty)=0$, and $\theta(\pm H)=\theta_\infty$,
the maximum principle implies $0\le\theta\le\theta_\infty$.
Altogether, we obtain a mapping
$\widetilde\theta \mapsto \theta =S_\mathcal{A}\big( S_\mathcal{B}(\widetilde\theta)\big)$
which is continuous with respect to the weak topology on $H^1(D)$ and
valued in some bounded set (depending possibly on a given $v_\infty$). By the
Schauder fixed-point theorem, this mapping has a fixed point $\theta$. This
thus determines also $\pi=S_\mathcal{B}(\theta)$ and $\sigma$.
Having $\sigma$ determined, we can find a unique weak solution
$\alpha\in H^1(D)$ to the equation \eqref{eq4-three} with the boundary
conditions \eqref{BC-three2} and then, from \eqref{e=..alpha}, we also obtain
$\varepsilon\in H^1(D)$. From $v(x)=-v_\infty+\int_{-H}^x\pi(\widetilde x)\,\mathrm{d}\widetilde x$, we also
obtain $v\in W^{2,2}(D)$.
The quadruple $(\pi,\alpha,\theta,\sigma)$ solves
\eqref{eq-three}--\eqref{BC-three} in the weak sense. By comparison, we can
also see that $\pi_{xx},\alpha_{xx},\theta_{xx}\in L^\infty(D)$, so that
$\pi,\alpha,\theta\in W^{2,\infty}(D)$.
If $v_\infty\ne0$, then necessarily $\sigma\ne0$. If also $\mathbb{C}'\le0$ with
$\mathbb{C}'(1)<0$, the (convex) solution $\alpha$ to \eqref{eq4-three} must be
nontrivial, thus $\alpha<1$ except at the end points $x=\pm H$.
Then, from \eqref{e=..alpha} with $\sigma$ already fixed and $\mathbb{C}(\cdot)$
smooth, we obtain $\varepsilon\in W^{2,\infty}(D)$. Eventually
$v\in W^{3,\infty}(D)$ can be reconstructed from \eqref{eq2} with the
boundary conditions \eqref{BC}; here we used the constraint
$\int_D \pi \,\mathrm{d} x = 2 v_\infty$.
\end{proof}
We discuss further qualitative properties of solution pairs $(\theta,\pi)$
that arise from the specific form of the steady-state equations
\eqref{eq}--\eqref{BC}. As our above result does not imply uniqueness of
solutions, our next result states that there are solutions with symmetry and,
under a weak additional condition, these solutions are also monotone on
$[0,H]$. For the latter we use the technique of rearrangements, which strongly
relies on the fact that we have no explicit $x$-dependence in our material
laws. For a general function $f \in L^1(D)$ we define its even decreasing and
even increasing rearrangements $f_\mathrm{dr}$ and $f_\mathrm{ir}$ via
\[
\{x\in D;\ f_\mathrm{dr}(x)>r\}= (-X(r),X(r)) \quad \text{where }
X(r):=\frac12\mathcal L^1\big( \{x\in D;\ f(x)>r\}\big)
\]
and $f_\mathrm{ir}(x) = f_\mathrm{dr}(H{-}|x|)$, see Figure
\ref{fig:rearrange}.
\begin{figure}
\begin{tikzpicture}
\draw[thick,->] (-1.2,0) -- (1.2,0) node[above]{$x$};
\draw[thick,->] (0,-0.4) -- (0,2.3) node[pos=0.8,left]{$f$};
\draw[] (-1,2)--(-1,-0.1) node[below]{$-1$};
\draw[] (1,2)--(1,-0.1) node[below]{$+1$};
\draw[very thick, color=blue] (-1,1)--(0.5555,0.2222)--(1,2);
\draw[very thick, color=red!80!blue, domain=-1:1,samples=161]
plot (\x, { 0.4-0.5* cos (540*\x) } ) ;
\end{tikzpicture}
\quad
\begin{tikzpicture}
\draw[thick,->] (-1.2,0) -- (1.2,0) node[above]{$x$};
\draw[thick,->] (0,-0.4) -- (0,2.3) node[pos=0.8,left]{$f_\mathrm{dr}$};
\draw[] (-1,2)--(-1,-0.1) node[below]{$-1$};
\draw[] (1,2)--(1,-0.1) node[below]{$+1$};
\draw[very thick, color=blue]
(-1,0.2222)--(-0.25,1)--(0,2)--(0.25,1)--(1,0.2222);
\draw[very thick, color=red!80!blue, domain=-1:1, samples=161]
plot (\x, { 0.4+0.5*cos(180*\x) } );
\end{tikzpicture}
\quad
\begin{tikzpicture}
\draw[thick,->] (-1.2,0) -- (1.2,0) node[above]{$x$};
\draw[thick,->] (0,-0.4) -- (0,2.2) node[pos=0.8,left]{$f_\mathrm{ir}\!$};
\draw[] (-1,2)--(-1,-0.1) node[below]{$-1$};
\draw[] (1,2)--(1,-0.1) node[below]{$+1$};
\draw[very thick, color=blue]
(-1,2)--(-0.75,1)--(0,0.2222)--(0.75,1)--(1,2);
\draw[very thick, color=red!80!blue, domain=-1:1, samples=161]
plot (\x, { 0.4 - 0.5*cos(180*\x) } );
\end{tikzpicture}
\hfill
\begin{minipage}[b]{0.33\textwidth}\caption{\small\sl Two examples of functions $f$ and
their decreasing and increasing
rearrangements $f_\mathrm{dr}$ and $f_\mathrm{ir}$. \label{fig:rearrange}}
\end{minipage}
\end{figure}
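On a uniform grid, these rearrangements are easy to compute, which may help the
reader to experiment with them; the following sketch (using the oscillatory
example from Figure~\ref{fig:rearrange}) simply redistributes the sorted
samples from the center outwards, so the result is even up to the grid
resolution.
\begin{verbatim}
import numpy as np

H, N = 1.0, 401
x = np.linspace(-H, H, N)
f = 0.4 - 0.5*np.cos(3*np.pi*x)     # oscillatory example from the figure

order = np.argsort(np.abs(x), kind="stable")   # grid points, center outwards
f_dr = np.empty_like(f); f_dr[order] = np.sort(f)[::-1]  # largest innermost
f_ir = np.empty_like(f); f_ir[order] = np.sort(f)        # smallest innermost

# equimeasurability: all three functions have the same sorted samples
assert np.allclose(np.sort(f), np.sort(f_dr))
assert np.allclose(np.sort(f), np.sort(f_ir))
\end{verbatim}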
The new condition \eqref{eq:mu.additive} for the following result is
satisfied in our adaptation \eqref{DR1++} of the classical Dieterich-Ruina
friction law \eqref{DR1}.
\begin{proposition}[Symmetric and monotone pairs]
\label{pr:SymMonotone}
Let the assumption \eqref{ass} of Theorem \ref{th:ExistSteady} hold. Then, for
all $v_\infty$ there exists an even solution pair $(\theta,\pi)$, i.e.\
$\theta$ and $\pi$ are even functions on $D=[-H,H]$. If we additionally assume
\begin{equation}
\label{eq:mu.additive}
\mu(\pi,\theta) = \mu(\pi,0)+ B(\theta) \quad \text{with }
B:\mathbb R\to [0,\infty) \text{ nondecreasing},
\end{equation}
then there exists an \emph{even, monotone pair} $(\theta,\pi)$, i.e.\ it is an
even pair such that additionally $[0,H]\ni x \mapsto \theta(x)$ is
nondecreasing and $[0,H]\ni x \mapsto \pi(x)$ is nonincreasing.
\end{proposition}
\begin{proof} Throughout the proof we will restrict to the case $v_\infty>0$
leading to $\sigma>0$ and $\pi\geq 0$. The case $v_\infty=0$ is trivial with
$(\theta,\pi)\equiv (\theta_\infty,0)$, and $v_\infty<0$ follows similarly
with $\sigma<0$ and $\pi\leq 0$.
To obtain the evenness we simply restrict the existence theory
developed in the proof of Theorem \ref{th:ExistSteady} to the closed
subspaces of even functions. By the uniqueness of the minimizers of $\mathcal
A_\pi$ and $\mathcal B_\theta$ it is clear that $S_\mathcal{A}$ and
$S_\mathcal{B}$ map even functions to even functions. Hence, Schauder's
fixed-point theorem produces an even solution.
For showing the existence of monotone pairs we rely on classical results for
rearrangements, see e.g.\ \cite{Kawo85RCLS}, namely the Polya-Szeg\"o
inequality
\begin{equation}
\label{eq:PoSz}
\int_D (f_\mathrm{dr})_x ^2 \,\mathrm{d} x = \int_D (f_\mathrm{ir})_x ^2 \,\mathrm{d} x \leq
\int_D f_x^2 \,\mathrm{d} x
\end{equation}
and the Hardy-Littlewood inequality (cf.\ \cite[Ch.\,10]{HaLiPo34I})
\begin{equation}
\label{eq:HardyLittle}
\int_D f_\mathrm{dr}\,g_\mathrm{ir} \,\mathrm{d} x= \int_D f_\mathrm{ir}\,g_\mathrm{dr} \,\mathrm{d}
x \leq \int_D f\,g \,\mathrm{d} x \leq \int_D f_\mathrm{dr}\,g_\mathrm{dr} \,\mathrm{d} x = \int_D
f_\mathrm{ir}\,g_\mathrm{ir} \,\mathrm{d} x.
\end{equation}
While the upper estimate is classical and works for integration over $D=B_R(0)
\subset \mathbb R^d$ or $D=\mathbb R^d$, the lower estimate is special to $D\subset \mathbb R^1$,
see \cite[Eqn.\,(10.2.1)]{HaLiPo34I}.
To exploit the theory of rearrangements we define the closed convex sets
\begin{align*}
&\boldsymbol\Theta_\mathrm{ir}:=\big\{ \: \theta\in H^1(D); \ \theta(x)\in
[0,\theta_\infty], \ \theta(\pm H) =\theta_\infty,
\ \theta = \theta_\mathrm{ir} \: \big\} \quad \text{and}
\\
&\boldsymbol\Pi_\mathrm{dr}:=\big\{ \: \pi\in H^1(D); \ \pi(x) \geq 0, \ \pi(\pm
H)=0, \ \pi = \pi_\mathrm{dr}, \ \textstyle \int_D \pi\,\mathrm{d} x = 2v_\infty \: \big\}
\end{align*}
and show below the mapping properties $S_\mathcal{A} : \boldsymbol\Pi_\mathrm{dr} \to
\boldsymbol\Theta_\mathrm{ir}$ and $S_\mathcal{B} : \boldsymbol\Theta_\mathrm{ir}
\to \boldsymbol\Pi_\mathrm{dr} $. Thus, Schauder's fixed-point theorem can be
restricted to $S_\mathcal{A} \circ S_\mathcal{B} :
\boldsymbol\Theta_\mathrm{ir} \to \boldsymbol\Theta_\mathrm{ir}$ resulting in a
fixed point $\theta^* \in \boldsymbol\Theta_\mathrm{ir}$. With $\pi^*
=S_\mathcal{B}( \theta^*)$, we obtain the desired even, monotone solution pair
$(\theta^*,\pi^*)$, namely $\theta^*=\theta^*_\mathrm{ir}$ and $\pi^*=\pi^*_\mathrm{dr}$.
To establish $S_\mathcal{A} : \boldsymbol\Pi_\mathrm{dr} \to
\boldsymbol\Theta_\mathrm{ir}$, we start with $\pi \in
\boldsymbol\Pi_\mathrm{dr}$ and show $\mathcal A_\pi(\theta_\mathrm{ir})
\leq \mathcal A_\pi(\theta)$ for all $\theta \in H^1(D)$. As $\theta=
S_\mathcal{A}(\pi)$ is the unique minimizer of $\mathcal A_\pi(\cdot)$, we
obtain $\theta= \theta_\mathrm{ir}$ as desired.
To show $\mathcal A_\pi(\theta_\mathrm{ir}) \leq \mathcal A_\pi(\theta)$, we
exploit $|\pi|= \pi = \pi_\mathrm{dr}$ and the rearrangement estimates
\eqref{eq:PoSz} and \eqref{eq:HardyLittle} to obtain
\begin{align*}
&\int_D \theta_x^2 \,\mathrm{d} x \!\overset{\text{\eqref{eq:PoSz}}}\geq\! \int_D \big(\theta_\mathrm{ir}\big){}_x^2 \,\mathrm{d} x , \qquad
\int_D \varphi_0(\theta) \,\mathrm{d} x = \int_D \varphi_0\big(\theta_\mathrm{ir} \big)
\,\mathrm{d} x ,\\
& \int_D |\pi|\,\varphi_1(\theta) \,\mathrm{d} x
\!\overset{\text{\eqref{eq:HardyLittle}}}\geq\! \int_D
\pi_\mathrm{dr}\, \big( \varphi_1(\theta)\big)_\mathrm{ir} \,\mathrm{d} x = \int_D
|\pi|\,\varphi_1\big(\theta_\mathrm{ir}\big) \,\mathrm{d} x .
\end{align*}
For the last identity we use $\big( \varphi_1(\theta)\big)_\mathrm{ir} =
\varphi_1(\theta_\mathrm{ir})$, which holds because of
$\varphi'_1=f_1 \geq 0$. Summing the three relations gives $\mathcal
A_\pi(\theta_\mathrm{ir}) \leq \mathcal A_\pi(\theta)$.
Similarly, we derive $S_\mathcal{B} : \boldsymbol\Theta_\mathrm{ir} \to
\boldsymbol\Pi_\mathrm{dr}$ from $\mathcal B_{\theta}(\pi_\mathrm{dr}) \leq
\mathcal B_{\theta}(\pi)$ for $\theta \in \boldsymbol\Theta_\mathrm{ir}$. For
this we use assumption \eqref{eq:mu.additive}, which gives $R(\pi,\theta)=
R(\pi,0)+ B(\theta)|\pi|$, and the three relations
\begin{align*}
&\int_D \pi_x^2 \,\mathrm{d} x \!\overset{\text{\eqref{eq:PoSz}}}\geq\! \int_D
\big(\pi_\mathrm{dr}\big){}_x^2 \,\mathrm{d} x , \qquad
\int_D R(\pi,0) \,\mathrm{d} x = \int_D R(\pi_\mathrm{dr},0) \,\mathrm{d} x ,\\
& \int_D |\pi|\,B(\theta) \,\mathrm{d} x
\!\overset{\text{\eqref{eq:HardyLittle}}}\geq\! \int_D
\pi_\mathrm{dr}\, \big( B(\theta)\big)_\mathrm{ir} \,\mathrm{d} x = \int_D
\pi_\mathrm{dr}\, B(\theta) \,\mathrm{d} x ,
\end{align*}
where we used that $B$ is nondecreasing and $\theta=\theta_\mathrm{ir}$, so that
$\big(B(\theta)\big)_\mathrm{ir} = B(\theta_\mathrm{ir})=B(\theta)$.
This finishes the proof of existence of even, monotone pairs.
\end{proof}
\begin{remark}[{\sl Aseismic-slip regime}]\label{rem-aseismic}
Under very low shear velocities $|v_\infty|\ll 1$, real faults may go into
so-called aseismic slip (also called aseismic creep), where one observes pure
sliding as predicted by our steady-state solutions constructed
above. However, for our simplified evolutionary model introduced in Section
\ref{se:NumSimul} (cf.\ \eqref{eq:SimpMod}), numerical simulations predict
instability of the steady state and the development of stick-slip
oscillations, see Section~\ref{su:NumSimPDE}. In real aseismic slip, stresses
remain low and never challenge the plastic yield stress
$\mu(0,\theta_\infty)$ at the core of the faults, a behavior which is
unfortunately not covered by our model. One possible modification for
modeling this effect would be to replace the set-valued Sign$(\cdot)$ in
\eqref{eq3} by some monotone smooth approximation, e.g.\
$\pi \mapsto \tanh(\pi/\delta)$ with $0<\delta\ll 1$.
\end{remark}
\subsection{Asymptotics of the plastic zone for $\eta\to0$
and $\kappa\to 0$}
\label{sec-localization}
The gradient term in \eqref{eq3} and in \eqref{eq3-three} controls in a certain
way the width of the cataclastic zone where the slip is concentrated. One
expects that, when suppressing it by letting $\eta\to0$, the slip zone gets
narrower. It is, however, a rather counter-intuitive effect that the zone
eventually does not degenerate to a completely flat interface, as it would in
so-called perfect plasticity, where the plastic strain rate $\pi$ would be a
measure on $D$. Here, in the limit, $\pi$ only loses its
$W^{2,\infty}$-regularity as stated in Theorem~\ref{th:ExistSteady} for
$\eta>0$ but remains in $L^1(D)$.
The definition of weak solutions remains as in the variational form \eqref{eq3-weak},
or in the strong form \eqref{eq3-strong}, just putting $\eta=0$.
It should be emphasized that the boundary conditions $\pi(\pm H)=0$ are now
omitted. It will turn out that in the limit $\eta=0$ the plastic variable
$\pi$ becomes a pointwise function of $\theta$ and $\sigma$.
By the strict convexity of $\pi\mapsto R(\pi,\theta)$, the set-valued mapping
$\pi\mapsto \partial_\pi R(\pi,\theta)=\mu(\pi,\theta){\rm Sign}(\pi)$ is strictly
monotone (cf.\ \eqref{ass1}). Thus, $\pi$ in
$\sigma\in\mu(\pi,\theta)\mathrm{Sign}(\pi)$
can be uniquely determined as a function of $\sigma$ and $\theta$.
Specifically,
\begin{align}
\label{pi=f(theta,sigma)}
\pi=\big[\mu(\cdot,\theta)\mathrm{Sign}(\cdot)\big]^{-1}(\sigma)=:
\varPi(\sigma,\theta)\,,
\end{align}
and the mapping $\varPi:\mathbb R^2\to\mathbb R$ is continuous.
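For the specific law \eqref{DR1++}, this inverse is even explicit: abbreviating
$s:=|\sigma|-\mu_0-b\,{\rm ln}(\frac{v_{\rm ref}}{d_{\rm c}}\theta+1)$, an
elementary inversion gives
\begin{align*}
\varPi(\sigma,\theta)=\mathrm{sign}(\sigma)\,\frac{v_{\rm ref}}{h}
\Big(\mathrm{e}^{\max\{s,0\}/a}-1\Big)\,,
\end{align*}
so that $\varPi(\sigma,\theta)=0$ exactly when $|\sigma|\le\mu(0,\theta)$,
i.e.\ below the yield stress.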
In this section, let us denote the solution obtained as a
Schauder fixed point in the proof of Theorem~\ref{th:ExistSteady} by
$(\varepsilon_\eta,v_\eta,\pi_\eta,\alpha_\eta,\theta_\eta,\sigma_\eta)$.
\begin{proposition}[Convergence for $\eta\to0$]\label{prop2}
Let assumptions \eqref{ass} hold together with
\begin{subequations}
\label{ass-M}
\begin{align}\label{ass-M1}
& \exists\, \varPhi:\mathbb R\to [0,\infty) \text{ continuous, superlinear }\forall\,
(\pi,\theta): \quad R(\pi,\theta) \geq \varPhi(\pi) \ \ \text{ and}
\\\label{ass-M2}
&\big|\mu(\pi,\theta){-}\mu(\pi,\widetilde\theta)\big|\le o\big(|\theta{-}\widetilde\theta|\big)
\ \text{ with some $o:\mathbb R^+\to\mathbb R^+$ continuous, $o(0)=0$}\,.
\end{align}
\end{subequations}
There is a subsequence such that, for some
$\pi\in L^1(D)$, $v\in W^{1,1}(D)$,
$\alpha\in W^{2,\infty}(D)$, $\varepsilon\in W^{2,\infty}(D)$,
$\theta\in W^{1,\infty}(D)$, and $\sigma\in\mathbb R$, it holds
\begin{subequations}
\label{conv}
\begin{align}
\label{conv1}
\mbox{}\qquad\qquad &\varepsilon_\eta\to\varepsilon&&\text{weakly* in }\
W^{2,\infty}(D),&&\qquad\qquad\mbox{}
\\
&v_\eta\to v&&\text{weakly in }\ W^{1,1}(D),&&
\\
&\pi_\eta\to\pi&&\text{weakly in }\ L^1(D),&&
\\
&\alpha_\eta\to\alpha&&\text{weakly* in }\ W^{2,\infty}(D),&&
\\
&\theta_\eta\to\theta&&\text{strongly in }\ H^1(D),&&
\\
&\sigma_\eta\to\sigma&&\text{in }\ \mathbb R,\ \ \text{ and }
\\
\label{conv.g}
& \pi(x) =\varPi(\sigma,\theta(x))\hspace{-2em}&& \text{ for a.a. } x\in D.&&&&
\end{align}
\end{subequations}
Moreover, $(\varepsilon,v,\pi,\alpha,\theta,\sigma)$
is a classical solution to \eqref{eq}--\eqref{BC} in the sense that {\rm(\ref{eq}a,b,d,e)}
and \eqref{eq3-strong} with $\eta=0$ hold pointwise everywhere on $D$.
More specifically, $\pi\in C(D)$ and $v\in C^1(D)$.
\end{proposition}
\begin{proof}
From the proof of Theorem~\ref{th:ExistSteady}, we can see that the a priori
bounds for
\[
(\varepsilon_\eta,v_\eta,\pi_\eta,\alpha_\eta,\theta_\eta,\sigma_\eta) \in
W^{2,\infty}(D) {\times} W^{1,1}(D){\times} L^1(D){\times}
W^{2,\infty}(D) {\times} W^{2,1}(D) {\times} \mathbb R
\]
are independent of $\eta>0$ and
$\|\pi_\eta\|_{H^1(D)}=\mathscr{O}(1/\sqrt\eta)$. Moreover, from
$\pi_\eta= S_\mathcal{B}(\theta_\eta)$, we can easily see that even
$R(\pi_\eta,\theta_\eta)$ is bounded in $L^1(D)$. Using
\eqref{ass-M1} we can apply the criterion of de la Vall\'ee Poussin
\cite{ValPou15IL} and obtain that $\{\pi_\eta\}_{\eta>0}$ is weakly
compact in $L^1(D)$.
Then the limit passage in the weak solution to \eqref{eq}--\eqref{BC}
for $\eta\to0$ is quite easy. The only nontrivial point is the limit passage in
the variational inequality \eqref{eq3-weak}. We first use $\eta
(\pi_\eta)_x = \mathscr{O}(\sqrt\eta)$ in $L^2(D)$ and obtain, for all
$\widetilde \pi \in H^1(D)$, the relations
\begin{align}
\nonumber
\!\!\int_D\! &
R(\widetilde\pi,\theta)-\sigma(\widetilde\pi{-}\pi) \,\mathrm{d} x
=\lim_{\eta\to0}\int_D R(\widetilde\pi,\theta_\eta)- \sigma_\eta \, (\widetilde\pi{-}\pi_\eta)
+\eta(\pi_\eta)_x\widetilde\pi_x \,\mathrm{d} x
\\ \nonumber
&\geq
\limsup_{\eta\to0}\int_D
R(\widetilde\pi,\theta_\eta)- \sigma_\eta \,(\widetilde\pi{-}\pi_\eta)
+\eta\,(\pi_\eta)_x(\widetilde\pi{-}\pi_\eta)_x \,\mathrm{d} x
\overset{\text{\eqref{eq3-weak}}}\geq\liminf_{\eta\to0}\int_D
R(\pi_\eta,\theta_\eta) \,\mathrm{d} x
\\
&\geq \liminf_{\eta\to0}\int_D R(\pi_\eta,\theta) \,\mathrm{d} x
+\lim_{\eta\to0}\int_D\!R(\pi_\eta,\theta_\eta){-}R(\pi_\eta,\theta) \,\mathrm{d} x
\ge\!\int_D\!
R(\pi,\theta) \,\mathrm{d} x\ + \ 0 \, .
\label{eq3-weak+}
\end{align}
The liminf estimate follows because $R(\cdot,\theta)$ is convex and
continuous such that $\int_D R(\cdot,\theta) \,\mathrm{d} x $ is weakly lower
semicontinuous on $L^1(D)$. The penultimate integral in
\eqref{eq3-weak+} converges to $0$ because
$\theta_\eta\to\theta$ uniformly on $D$ due to the compact embedding
$W^{2,1}(D)\subset C(D)$. Hence,
$\lim_{\eta\to0}\int_D |R(\pi_\eta,\theta_\eta)-R(\pi_\eta,\theta)| \,\mathrm{d} x
\le\lim_{\eta\to0}\int_D|\pi_\eta|\,o(|\theta_\eta{-}\theta|) \,\mathrm{d} x\le
\lim_{\eta\to0}\|\pi_\eta\|_{L^1(D)}^{}\,o(\|\theta_\eta{-}\theta\|_{L^\infty(D)}^{})
=0$, where the function $o$ is from \eqref{ass-M2}.
The variational inequality \eqref{eq3-weak+} does not contain any
$x$-derivatives any more and hence is equivalent to the pointwise inequality
$R(\widetilde\pi, \theta(x)) - \sigma (\widetilde\pi{-}\pi(x)) \geq R(\pi(x),\theta(x))$
for all $\widetilde\pi\in\mathbb R$, a.e.\ in $D$. But this is equivalent to
$\sigma \in \partial_\pi R(\pi(x),\theta(x))$, and hence \eqref{conv.g} holds.
Since the mapping $\varPi:\mathbb R^2\to\mathbb R$ from \eqref{pi=f(theta,sigma)} is
continuous and since $\theta\in H^1(D)\subset C(D)$, we see that
$x\mapsto \pi(x)=\varPi(\sigma,\theta(x))$ is continuous as well, i.e.\
$\pi\in C(D)$.
\end{proof}
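The structure just established, namely $\pi(x)=\varPi(\sigma,\theta(x))$ with
the scalar $\sigma$ fixed by $\int_D\pi \,\mathrm{d} x=2v_\infty$, also suggests a
simple numerical scheme for the steady states with $\eta=0$, mimicking the
alternating maps $S_\mathcal{A}$ and $S_\mathcal{B}$ from the proof of
Theorem~\ref{th:ExistSteady}. The following sketch (placeholder constants; the
friction law \eqref{DR1++} with its explicit inverse, the aging laws
\eqref{DR3+}) alternates a bisection for $\sigma$ with a pseudo-time relaxation
of the $\theta$-equation.
\begin{verbatim}
import numpy as np

H, N = 1.0, 201
x = np.linspace(-H, H, N); dx = x[1] - x[0]
mu0, a, b, h, v_ref, d_c = 0.6, 0.2, 0.3, 1.0, 1.0, 1.0
theta_inf, kappa, v_inf = 1.0, 1e-3, 0.1

def Pi(sigma, theta):                 # explicit inverse for (DR1++)
    s = abs(sigma) - mu0 - b*np.log(v_ref*theta/d_c + 1.0)
    return np.sign(sigma)*(v_ref/h)*np.expm1(np.maximum(s, 0.0)/a)

def solve_sigma(theta):               # enforce  int_D Pi dx = 2 v_inf
    lo, hi = 0.0, 10.0                # the integral is increasing in sigma
    for _ in range(60):
        mid = 0.5*(lo + hi)
        pi = Pi(mid, theta)
        if dx*(pi.sum() - 0.5*(pi[0] + pi[-1])) < 2*v_inf: lo = mid
        else: hi = mid
    return 0.5*(lo + hi)

theta = theta_inf*np.ones(N)          # endpoints stay at theta_inf
dt = 0.4*dx*dx/kappa                  # stable pseudo-time step
for _ in range(5000):
    sigma = solve_sigma(theta)
    pi = Pi(sigma, theta)
    f0 = np.maximum(1.0 - theta/theta_inf, 0.0)       # (DR3+)
    rhs = f0 - pi*theta/d_c
    theta[1:-1] += dt*(rhs[1:-1]
        + kappa*(theta[2:] - 2*theta[1:-1] + theta[:-2])/dx**2)
\end{verbatim}
For small $v_\infty$ and $\kappa$, the relaxed profiles show a localization of
$\pi$ around $x=0$, consistent with the behavior described in
Section~\ref{su:NumSimSteady}.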
We are now ready to study the limit $\kappa\to 0$ as well, which is rather
delicate because we lose all control over spatial derivatives, and all
the modeling length scales induced by $\eta$ and $\kappa$ tend to $0$. In such
a situation the usual compactness arguments fail and fast spatial oscillations,
i.e.\ microstructures, may appear. Indeed, we will see in Remark
\ref{re:ManySols} that there are many complicated solutions without any
length scale. However, it is surprising that one can still show that
natural solutions exist, namely even, monotone pairs $(\theta,\pi)$. The idea
is to use, for $\kappa>0$ and $\eta=0$, the even, monotone pairs
$(\theta^\kappa,\pi^\kappa)$ obtained from Proposition \ref{pr:SymMonotone} and
the subsequent limit $\eta\to 0$ in Proposition \ref{prop2}. The monotonicity
of the pairs $(\theta^\kappa,\pi^\kappa)$ allows us to deduce pointwise
convergence, which is good enough to pass to the limit $\kappa\to 0$ even in
nonlinear functions.
Under the additional assumptions \eqref{eq:wtmu.monotone}, which are satisfied
by our example treated in Section \ref{su:SimplifMod}, we then obtain the
typical behavior. There is a critical value $\pi_*>0$ such that for small
positive $v_\infty$ the cataclastic zone is $(-h,h)$ with $h=v_\infty/\pi_*$,
where $(\theta,\pi)$ assume constant values $(\theta_*,\pi_*)$ independent of
$v_\infty$, whereas for $x$ with $h<|x|<H$ we have
$(\theta,\pi)=(\theta_\infty,0)$, see \eqref{eq:PlateauSol}.
\begin{proposition}[The limit $\kappa\to 0$ for monotone pairs]
\label{prop3}
Let the assumptions \eqref{ass}, \eqref{eq:mu.additive}, and \eqref{ass-M} hold
and let us consider a family $\big( (\theta^\kappa,\pi^\kappa)\big)_{\kappa>0}$ of
even, monotone solutions to \eqref{eq} with $\eta=0$ and $v_\infty>0$.
Then:\\
\ITEM{(i)}{there exists a subsequence (not relabeled) and an even, monotone pair
$(\theta^0,\pi^0) \in L^\infty(D)\times L^\infty(D) $ such that for $\kappa
\to 0$ we have the convergence}
\[
(\theta^\kappa(x),\pi^\kappa(x)) \ \to \ (\theta^0(x),\pi^0(x)) \quad
\text{ for a.a. }x\in D
\]
\ITEM{}{and that $(\theta^0,\pi^0)$ solves the minimization problems}
\begin{equation}
\label{eq:calA.B.0}
\mathcal A_{\pi^0}^0(\theta^0) \leq \mathcal A_{\pi^0}^0(\theta):=\!\int_D
|\pi^0|\, \varphi_1(\theta){-}\varphi_0(\theta) \,\mathrm{d} x
\ \text{ and } \
\mathcal B_{\theta^0}^0(\pi^0) \leq \mathcal B_{\theta^0}^0(\pi):=\!\int_D\!
R(\pi,\theta^0)\,\mathrm{d} x
\end{equation}
\ITEM{}{for all $(\theta,\pi)\in L^1(D){\times} L^1(D)$ with
$\int_D \pi \,\mathrm{d} x = 2v_\infty$.}
\ITEM{(ii)}{Moreover, if we define $\theta=\Theta_f(\pi)$ to be the unique solution of
$f_0(\theta)=|\pi| f_1(\theta)$, set $\widetilde\mu:[0,\infty)\to
(0,\infty); \ \pi\mapsto \mu(\pi,\Theta_f(\pi))$, and
assume that there exists $\pi_\circ>0$ such that}
\begin{equation}
\label{eq:wtmu.monotone}
\widetilde\mu \text{ is strictly decreasing on }[0,\pi_\circ]
\quad \text{and} \quad
\widetilde\mu \text{ is strictly increasing on } [\pi_\circ,\infty),
\end{equation}
\ITEM{}{then there exists a unique $\pi_*>\pi_\circ$ such that
$\int_0^{\pi_*}\widetilde\mu(\pi)\,\mathrm{d}\pi= \pi_* \widetilde\mu(\pi_*)$ and the above solutions
$(\theta^0,\pi^0)$ are uniquely given by}
\begin{equation}
\label{eq:PlateauSol}
(\theta^0,\pi^0)(x)
= \begin{cases} (\Theta_f(\pi_*),\pi_*) & \text{for } |x| <
{v_\infty}/{\pi_*} \leq H, \\
(\theta_\infty,0) & \text{for }
{v_\infty}/{\pi_*} <|x| \leq H, \\
\big(\Theta_f(v_\infty/H), v_\infty/H\big) & \text{for } v_\infty\geq \pi_* H.
\end{cases}
\end{equation}
\ITEM{}{In particular, in this case the whole family $\big(
(\theta^\kappa,\pi^\kappa)\big)_{\kappa>0}$ converges pointwise.}
\end{proposition}
\begin{proof}
By Proposition \ref{pr:SymMonotone} and Proposition \ref{prop2} we know that
for all $\kappa>0$ even, monotone pairs $(\theta^\kappa,\pi^\kappa)$ exist
and satisfy $\theta^\kappa \in H^1(D)$ and $\pi^\kappa \in C(D)$. Moreover,
we have $\theta^\kappa(x)\in [0,\theta_\infty]$ and
$\pi^\kappa (x) = \varPi(\sigma^\kappa, \theta^\kappa(x))$ for all $x\in D$.
\emph{Step 1. Superlinear a priori bound for $\pi^\kappa$:} We
again use the uniform superlinearity of the dissipation potential
$R(\cdot,\theta)$ from \eqref{ass-M1}. As
$\pi^\kappa$ is a minimizer of $\mathcal B_{\theta^\kappa} (\cdot)$ we obtain
the uniform bound $\int_D \varPhi(\pi^\kappa) \,\mathrm{d} x \leq C_*< \infty$.
Thus, we have weak compactness (by de la Vall\'ee Poussin \cite{ValPou15IL})
and along a subsequence (not relabeled) we have $\pi^\kappa \rightharpoonup \pi^0$ and
conclude $\int_D \pi^0 \,\mathrm{d} x = 2 v_\infty$. Moreover, using
$\pi^\kappa = \pi_\mathrm{dr}^\kappa$ this implies the a priori bound
\begin{equation}
\label{eq:Phi.pw.bdd}
0 \leq \pi^\kappa (x) \leq R \quad \text{for } |x| \geq \frac{C_*}{\varPhi(R)} .
\end{equation}
\emph{Step 2. Pointwise convergence:} Exploiting the monotonicity and the a
priori bounds $\theta^\kappa\in [0,\theta_\infty]$ and \eqref{eq:Phi.pw.bdd},
we can apply the classical Helly selection principle to obtain pointwise
convergence (everywhere in $D$). Along a subsequence (not relabeled) we have
\[
\sigma^\kappa\to \sigma^0, \qquad (\theta^\kappa(x),\pi^\kappa(x)) \to
(\theta^0(x),\pi^0(x)) \text{ for all }x \in D.
\]
Here the monotonicities are kept, i.e.\ $\theta^0=\theta_\mathrm{ir}$ and
$\pi^0= \pi^0_\mathrm{dr}$, but the continuity of the limits might be
lost. Moreover, $\pi^0(0)=\infty$ might be possible.
\emph{Step 3. Limit passage in the equations:} Since $\varPi$ is continuous,
the pointwise convergence yields the limit relation
\begin{equation}
\label{eq:varPi.0}
\pi^0(x)= \varPi(\sigma^0,\theta^0(x)) \quad \text{for all } x \in D.
\end{equation}
For the equation determining $\theta$ we can use the a priori estimate $ \kappa
\| \theta^\kappa_x\|_{L^2}^2 \leq C_*$ and pass to the limit in the weak form of
$(\kappa \,\theta^\kappa_x)_x+f_0(\theta^\kappa)=\pi^\kappa f_1(\theta^\kappa)$, i.e.\ in
the integral identity
\[
\int_D \kappa \,\theta^\kappa_x \,\widetilde\theta_x - f_0(\theta^\kappa)\, \widetilde\theta +
\pi^\kappa f_1(\theta^\kappa)\, \widetilde\theta \,\mathrm{d} x = 0 \quad \text{for all }
\widetilde\theta \in H^1_0(D).
\]
This provides the pointwise relation
\begin{equation}
\label{eq:Thetaf.0}
f_0(\theta^0(x)) = \pi^0(x)\, f_1(\theta^0(x)) \quad
\text{ for a.a.\ }x\in D.
\end{equation}
From \eqref{eq:varPi.0} and \eqref{eq:Thetaf.0} we immediately see that
\eqref{eq:calA.B.0} holds.
We next observe that $\theta=\Theta_f(\pi)$ is well-defined by the implicit
function theorem using \eqref{ass2}. Thus, the solutions satisfy
$\theta^0(x)=\Theta_f(\pi^0(x))$ for a.a.\ $x \in D$. Hence,
recalling
$\widetilde\mu(\pi)= \mu(\pi,\Theta_f(\pi))$, the minimization
problem \eqref{eq:calA.B.0} is equivalent to $\sigma \in
\widetilde\mu(\pi)\mathrm{Sign}(\pi)$ and $\int_D \pi\,\mathrm{d} x = 2v_\infty$.
Defining the function $\mathsf R(\pi)=\int_0^\pi \widetilde\mu(s) \,\mathrm{d} s $, this is
equivalent to the following problem:
\[
\text{minimize }\ \pi \mapsto \int_D \mathsf R(\pi(x)) \,\mathrm{d} x \quad \text{ subject to
} \ \pi\geq 0 \text{ and } \int_D \pi \,\mathrm{d} x = 2v_\infty >0.
\]
However, this minimization problem is well understood via the convex hull
$\mathsf R^{**}$, see \cite[Ch.\,2]{Brai02GCB}. By our assumption
\eqref{eq:wtmu.monotone} we know that $\mathsf R^{**}$ has the form
\begin{align}
\label{eq:R**}
&\mathsf R^{**}(\pi)= \begin{cases} \mathsf R(\pi_*) \pi/\pi_* &
\text{for } \pi \in [0,\pi_*], \\
\mathsf R(\pi) & \text{for } \pi \geq \pi_* \end{cases}
\end{align}
and satisfies $\mathsf R^{**}(\pi) \lneqq \mathsf R(\pi)$ for $\pi\in (0,\pi_*)$ and $
\mathsf R''(\pi)>0$ for $\pi \geq\pi_*$, see Figure \ref{fig:R**}. \color{black}
\begin{figure}
\begin{tikzpicture}
\draw[->, thick] (0,0)--(4,0) node[below]{$\pi$};
\draw[->, thick] (0,0)--(0,3);
\draw (1.5,0.1)--(1.5,-0.15) node[below]{$\pi_\circ$};
\draw (2.2,0.1)--(2.2,-0.15) node[below]{$\pi_*$};
\draw[color =blue, very thick, domain=0.0:3.8] plot (\x, {0.4+ 0.4*(\x-1.5)^2})
node[pos=0.8, left]{$\widetilde{\mu}(\pi)$};
\draw[color=blue!50!red, very thick, dashed] (0,0.62)--(2.2,0.62);
\end{tikzpicture}
\quad
\begin{tikzpicture}
\draw[->, thick] (0,0)--(4,0) node[below]{$\pi$};
\draw[->, thick] (0,0)--(0,3);
\draw (1.5,0.1)--(1.5,-0.15) node[below]{$\pi_\circ$};
\draw (2.2,0.1)--(2.2,-0.15) node[below]{$\pi_*$};
\draw[color =blue, very thick, domain=0.0:3.5]
plot (\x, {0.4*\x+ 0.1333*(1.5^3+(\x-1.5)^3)}) node[left]{$\mathsf R(\pi)$};
\draw[color=blue!50!red, very thick, dashed] (0,0)--(2.2,1.36) node[pos=0.8, below right]{$\mathsf R^{**}(\pi)$} ;
\end{tikzpicture}
\quad
\begin{minipage}[b]{0.29\textwidth}\caption{\small\sl The functions $\widetilde\mu$, $\mathsf R$, and
$\mathsf R^{**}$.}
\label{fig:R**}
\end{minipage}
\end{figure}
As our $\mathsf R$ is superlinear, a minimizer always exists. Moreover,
recalling that $v_\infty/H>0$ is the average value of $\pi:D\to \mathbb R$, the
minimizer is unique if and only if the tangent at $\pi=v_\infty/H$ is not in
the interior of an interval on which $\mathsf R^{**}$ is affine. For
$v_\infty/H$ in the open interval $(0,\pi_*)$, minimizers $\pi$ attain only the
values $0$ and $\pi_*$, on sets whose measures are chosen to fit the average. However,
by constructing the even, nonincreasing rearrangement, we find a unique
minimizer, where only the values at the two jump points
$x = \pm h = \pm v_\infty/\pi_*$ are free.
From these uniqueness results we also obtain the convergence of the full family
by a standard contradiction argument via compactness. With this, Proposition
\ref{prop3} is established.
\end{proof}
The new condition \eqref{eq:wtmu.monotone} can be checked numerically for our
example specified in \eqref{eq:DataSimple}, giving $\pi_* \approx
1.4923$ and $\pi_\circ\approx 0.6193$. Indeed, to see the desired effect of a fixed
$\pi_*$ leading to a
cataclastic zone of width
$2h=2v_\infty/\pi_*$, our condition \eqref{eq:wtmu.monotone} is sufficient, but
far from being necessary. What we really need is that $\mathsf R^{**}$ is affine on
an interval $[0,\pi_*]$, which automatically follows if $\mathsf R''(0^+)=\lim_{\pi
\searrow 0} \mathsf R''(\pi) < 0$. In fact, in general we can
consider the case $\mu(\pi,\theta)= \mu_0+A(\pi)+ B(\theta)$ and general $f_0$
and $f_1$. Using $\Theta_f(0)=\theta_\infty$ following from
$f_0(\theta_\infty)=0$, an explicit calculation gives
\[
\mathsf R''(0^+) =\widetilde\mu'(0^+) = \partial_\pi \mu(0^+,\theta_\infty) + \partial_\theta
\mu(0^+,\theta_\infty) \frac{f_1(\theta_\infty)}{f'_0(\theta_\infty)},
\]
which may be negative because of $f'_0(\theta_\infty)<0$.
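The reported values can be reproduced numerically. The following small script
is only a sketch (it assumes Python with NumPy/SciPy and uses the concrete data
\eqref{eq:DataSimple} introduced in Section~\ref{se:NumSimul}; it is not part
of the proofs):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq, minimize_scalar

# data (eq:DataSimple): mu0 = 1, A(pi) = ln(pi+1), B(theta) = ln(4*theta+1),
# theta_inf = 10, f0(theta) = 1 - theta/theta_inf, f1(theta) = 10*theta
theta_inf = 10.0
Theta_f = lambda pi: theta_inf/(1.0 + 10.0*pi*theta_inf)  # solves f0 = pi*f1
mu_t = lambda pi: 1.0 + np.log(pi + 1.0) + np.log(4.0*Theta_f(pi) + 1.0)

# pi_circ: minimizer of mu_t (switch from decreasing to increasing)
pi_circ = minimize_scalar(mu_t, bounds=(1e-8, 10.0), method='bounded').x
# pi_star: unique root beyond pi_circ of int_0^p mu_t(s) ds - p*mu_t(p) = 0
G = lambda p: quad(mu_t, 0.0, p)[0] - p*mu_t(p)
pi_star = brentq(G, pi_circ, 10.0)
print(pi_circ, pi_star)   # approx. 0.6193 and 1.4923
\end{verbatim}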
\begin{remark}[{\sl Nonuniqueness of solutions}]
\label{re:ManySols}
We want to emphasize that the uniqueness result for $\kappa=\eta=0$ at the end
of Proposition \ref{prop3} concerns only even, monotone solutions. Because of
$\kappa=\eta=0$ there are indeed infinitely many solutions, as we can
``rearrange'' the function values of $(\theta,\pi)$ freely. In the case
$v_\infty< \pi_* H$, we can choose any open set $P\subset D$ with
$\int_D 1_P\,\mathrm{d} x = 2v_\infty/\pi_*$ and the function
\[
\big(\theta(x),\pi(x)\big) = \begin{cases}
\big(\Theta_f(\pi_*),\pi_*\big)& \text{ for } x \in P,\\
\big(\theta_\infty,0\big)& \text{ for } x \in D\setminus P
\end{cases}
\]
is a solution of \eqref{eq:calA.B.0} as well.
\end{remark}
\section{Analysis of the evolutionary model}\label{sec-evol}
We now consider the evolutionary model \eqref{evol}.
The energetics \eqref{eq:energetics} behind this model can be revealed by
testing momentum balance \eqref{evol1} by $\widetilde v=v-w^\infty$ with
$w^\infty(t,x):=v_\infty(t) x /H$, the plastic flow rule \eqref{evol3} by
$\DTp$, and the damage rule \eqref{evol4} by $\DT\alpha$.
Using the Dirichlet boundary condition for the velocity at $x=\pm H$, we
have $\widetilde v(\pm H)=0$, as needed. The first test gives, in particular, the term
\begin{align}\nonumber
\int_D
\mathbb{C}(\alpha)\varepsilon\Big(v_x-\frac{v_\infty}H\Big) \,\mathrm{d} x=\int_D\mathbb{C}(\alpha)\varepsilon
(\DT\varepsilon{+}\DTp) \,\mathrm{d} x-\frac{v_\infty}H\int_D\mathbb{C}(\alpha)\varepsilon \,\mathrm{d} x\qquad\qquad
\\=\frac{\d}{\d t}\int_D\frac12\mathbb{C}(\alpha)\varepsilon^2\,\mathrm{d} x
+\int_D\mathbb{C}(\alpha)\varepsilon\DTp-\frac12\mathbb{C}'(\alpha)\varepsilon^2\DT\alpha \,\mathrm{d} x
-\frac{v_\infty}H\int_D\mathbb{C}(\alpha)\varepsilon \,\mathrm{d} x\,,
\end{align}
where also \eqref{evol2} has been used. This test applied to the inertial term gives
\begin{align}\nonumber
\int_D\varrho\DT v\Big(v-v_\infty\frac{x}H\Big) \,\mathrm{d} x=
\frac{\d}{\d t}\int_D\frac\varrho2v^2 \,\mathrm{d} x
-v_\infty\int_D\varrho\DT v \frac{x}H \,\mathrm{d} x\,.
\end{align}
Combining it with the tests of \eqref{evol3} by $\DTp$ and of \eqref{evol4} by
$\DT\alpha$ which give
\begin{subequations}\begin{align}\label{subst}
&\!\!\int_D\mathbb{C}(\alpha)\varepsilon\DTp \,\mathrm{d} x=
\int_D\mu(\DTp,\theta)|\DTp|+\eta\DTp_x^2 \,\mathrm{d} x\ \ \text{ and}
\\&
\!\!\int_D\!\!-\frac12\mathbb{C}'(\alpha)\varepsilon^2\DT\alpha \,\mathrm{d} x=
\!\int_D\!\!\DT\alpha\partial\zeta(\DT\alpha)+\Big(\frac12\mathbb{C}'(\alpha)\varepsilon^2\!
+G_{\rm c}\frac{\alpha{-}1}{\ell^2}\Big)\DT\alpha \,\mathrm{d} x
+\frac{\d}{\d t}\int_D\frac12G_{\rm c}\ell^2\alpha_x^2 \,\mathrm{d} x,
\end{align}\end{subequations}
we altogether obtain the energy balance
\begin{align}\nonumber
&\frac{\d}{\d t}\int_D\!\!\!\linesunder{\frac\varrho2v^2+\varphi(\varepsilon,\alpha)+\frac12G_{\rm c}\ell^2\alpha_x^2}{kinetic and stored}{energies}\!\!\!\!\d x\!
\\[-1em]&\hspace{11em}
+\int_D\!\!\!\lineunder{\mu(\DTp,\theta)|\DTp|+\DT\alpha\partial\zeta(\DT\alpha)+\eta\DTp_{x_{_{_{}}}}^2}{dissipation rate}\!\!\!\!\d x=\!\!\!\!\!\linesunder{\langle\tau,(v_{\infty_{_{}}},-v_\infty)\rangle}{power of}{external load}\!\!\!,
\label{engr}\end{align}
where $\tau\in\mathbb R^2$ is the traction on the boundary (i.e.\ here two
forces at $x=\pm H$) defined as a functional
$\langle\tau,(z(H),z(-H))\rangle
=\int_D\varrho\DT v z+\mathbb{C}(\alpha)\varepsilon z_x \,\mathrm{d} x$ for any
$z\in H^1(D)$, cf.\ e.g.\ \cite[Sect.6.2]{KruRou19MMCM}.
Further on, we will be interested in an initial-value problem. For this, we
prescribe some initial conditions, i.e.\
\begin{align}
\label{IC}
v(\cdot,0)=v_0\,,\ \ \ \varepsilon(\cdot,0)=\varepsilon_0\,,\ \ \
\alpha(\cdot,0)=\alpha_0\,,\ \ \text{ and }\ \
\theta(\cdot,0)=\theta_0\,.
\end{align}
A definition of the weak solutions to the particular equations/inclusions
in \eqref{evol} can be cast in a standard way, using the convexity of the
involved functionals. Let us specify, rather for illustration, the weak
formulation of the inclusion \eqref{evol3}, exploiting that
$\mu(\DTp,\theta){\rm Sign}(\DTp) = \partial_\pi R(\DTp,\theta)$, where
$R(\pi,\theta)$ is convex in
the variable $\pi=\DTp$. This leads to the variational inequality
\begin{align}\label{eq3-weak+++}
\int_0^T\!\!\int_D
R(\widetilde\pi,\theta)-\mathbb{C}(\alpha)\varepsilon(\widetilde\pi{-}\DTp)
-\eta\DTp_x(\widetilde\pi{-}\DTp)_x
\,\mathrm{d} x\d t\ge\int_0^T\!\!\int_D
R(\DTp,\theta) \,\mathrm{d} x\d t
\end{align}
to be valid for any $\widetilde\pi\in L^\infty(I{\times}D)$.
Beside the previous assumptions, we now also assume
\begin{align}
v_0\in L^2(D)\,,\ \ \ \varepsilon_0\in L^2(D)\,,\ \ \
\alpha_0\in H^1(D)\,,\ \ \
\theta_0\in H^1(D)\,.
\label{ass-IC}\end{align}
The definition of weak solutions to \eqref{evol} with \eqref{BC-evol}
and \eqref{IC} is standard and we will not write it explicitly; the
variational inequality \eqref{eq3-weak} is to hold
integrated over $I$. Furthermore, we also exploit the superlinear growth of
$R(\cdot,\theta)$ from \eqref{ass-M1}, namely
\begin{align}
\label{mu-growth}
\mu(\pi,\theta)|\pi| \geq R(\pi,\theta) \geq \varPhi(\pi) ,
\end{align}
where the first inequality is the standard estimate for $\widetilde\mu \in \partial\psi(\pi)$
with $\psi=R(\cdot,\theta)$, namely $\pi\widetilde\mu =
\psi(\pi)+\psi^*(\widetilde\mu)\geq \psi(\pi)$ as $\psi^*\geq 0$.
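As an elementary illustration of this Fenchel identity (added only for
orientation): for $\psi(\pi)=\frac12\pi^2$ one has $\partial\psi(\pi)=\{\pi\}$
and $\psi^*(\widetilde\mu)=\frac12\widetilde\mu^2$, so that for
$\widetilde\mu=\pi\in\partial\psi(\pi)$ indeed
\[
\pi\widetilde\mu=\pi^2=\tfrac12\pi^2+\tfrac12\widetilde\mu^2
=\psi(\pi)+\psi^*(\widetilde\mu)\ge\psi(\pi).
\]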
Note that the standard model \eqref{DR1+} complies
with assumption \eqref{ass-M1}.
Relying formally on the tests leading to \eqref{engr}, after integration
in time over the interval $[0,t]$ and using also integration
by parts, we obtain
\begin{align}\nonumber
&\int_D\frac\varrho2v^2(t)+\varphi(\varepsilon(t),\alpha(t))+\frac12G_{\rm c}\ell^2
\alpha_x^2(t) \,\mathrm{d} x
+\int_0^t\!\!\int_D
\mu(\DTp,\theta)|\DTp|+\DT\alpha\partial\zeta(\DT\alpha)+\eta\DTp_x^2 \,\mathrm{d} x\d t
\\[-.3em]&\qquad\qquad\nonumber
=\int_D\frac\varrho2v_0^2+\varphi(\varepsilon_0,\alpha_0)
+\frac12G_{\rm c}\ell^2[\alpha_0]_x^2 \,\mathrm{d} x
+\int_0^t\!\!\int_D\varrho\DT v w^\infty+\mathbb{C}(\alpha)\varepsilon w^\infty_x \,\mathrm{d} x\d t
\\&\qquad\qquad\nonumber=\int_D\frac\varrho2v_0^2+\varphi(\varepsilon_0,\alpha_0)
+\frac12G_{\rm c}\ell^2[\alpha_0]_x^2+\varrho v(t)\big(v_\infty(t){-}v_\infty(0)\big)\frac{x}H \,\mathrm{d} x
\\[-.3em]
&\hspace*{17em}+\!\int_0^t\!\!\int_D\!\mathbb{C}(\alpha)\varepsilon \frac{v_\infty}H
-\varrho v\DT v_\infty\frac{x}H \,\mathrm{d} x\d t.
\label{engr1}\end{align}
Moreover, the aging equation \eqref{evol5} has to be tested separately by using
the test function $\theta{-}\theta_\infty$, which has zero traces
for $x=\pm H$. Integrating the result over $[0,t]$ leads to
\begin{align}
\nonumber
\int_D \frac12\,\theta^2(t) \,\mathrm{d} x+\int_0^t\!\!\int_D\kappa\theta_x^2\,\mathrm{d} x\,\mathrm{d} t
&=\int_D\frac12\,\theta_0^2+(\theta(t){-}\theta_0)\theta_\infty \,\mathrm{d} x
\\[-.5em]&\quad+\int_0^t\!\!\int_D f_0(\theta)(\theta{-}\theta_\infty)
-|\DTp|f_1(\theta)(\theta{-}\theta_\infty)\,\mathrm{d} x\,\mathrm{d} t\,.
\label{engr2}\end{align}
When summing \eqref{engr1} and \eqref{engr2}, we can use the H\"older and a
(generalized) Young inequality to estimate the resulting right-hand side.
Actually, the only nontrivial term is
$|\DTp|f_1(\theta)(\theta_\infty{-}\theta)$ in
\eqref{engr2} and it can be estimated as
\begin{align}
\nonumber
\int_D|\DTp|f_1(\theta)(\theta_\infty{-}\theta) \,\mathrm{d} x
&\leq \int_D \frac12\varPhi\big(|\DTp|\big)
+\frac12 \varPhi^*\big(2f_1(\theta)(\theta_\infty{-}\theta)\big) \,\mathrm{d} x
\\ \label{engr3}
&\overset{\text{\eqref{mu-growth}}}\leq \int_D\frac12\mu(\DTp,\theta)|\DTp|
+\frac12\varPhi^*\big(2f_1(\theta)(\theta_\infty{-}\theta)\big) \,\mathrm{d} x\,,
\end{align}
where $\varPhi^*$ is the Fenchel-Legendre conjugate of $\varPhi$, i.e.\
$\varPhi^*(s)=\sup_{\pi \in \mathbb R} \big(\pi s - \varPhi(\pi) \big) $.
The term $\frac12\mu(\DTp,\theta)|\DTp|$ in \eqref{engr3} can then be
absorbed in the left-hand side of \eqref{engr1} while
$\frac12 \varPhi^*(2f_1(\theta)(\theta_\infty{-}\theta))$ is a priori
bounded since $0\le\theta\le\theta_\infty$. Eventually, the last term in
\eqref{engr1} can be estimated by $\varrho(1{+}|v|^2)|\DT v_\infty|$.
Assuming $v_\infty\in W^{1,1}(I)$ and using Gronwall's inequality, from the
left-hand sides of \eqref{engr1} and \eqref{engr2} we can read the a priori
estimates
\begin{subequations}
\label{est}
\begin{align}
\label{est1}
&
\|v\|_{L^\infty(I;L^2(D))}^{}\le C,
\\&\|\varepsilon\|_{L^\infty(I;L^2(D))}^{}\le C,
\\&\|p\|_{H^1(I;H^1(D))}\le C,
\\&\|\alpha\|_{L^\infty(I;H^1(D))\,\cap\,H^1(I;L^2(D))}^{}\le C,
\\&\|\theta\|_{L^\infty(I;L^2(D))\,\cap\,L^2(I;H^1(D))}^{}\le C.
\label{est4}
\end{align}
\end{subequations}
By comparison, we also obtain information about
$\DT v=(\mathbb{C}(\alpha)\varepsilon)_x/\varrho\in L^\infty(I;H^1(D)^*)$, about
$\DT\varepsilon=v_x-\DTp\in L^2(I;H^1(D)^*)$, and also
about $\DT\theta=f_0(\theta)-|\DT{p}|f_1(\theta)
+\kappa\theta_{xx}\in L^2(I;H^1(D)^*)$.
The rigorous existence proof of weak solutions is however very nontrivial and
seems even impossible for the full dynamical model \eqref{evol} with damage.
Some modifications by involving some additional dissipative terms or some
higher-order conservative terms seem necessary, cf.\
\cite[Sect.7.5]{KruRou19MMCM} or also \cite{RoSoVo13MRLF} for the model
without aging. Consistently also with the computational experiments in
Section~\ref{se:NumSimul} below, we thus present the rigorous proof only
for a model without damage, i.e.\ for $\mathbb{C}>0$ constant.
\begin{theorem}[Damage-free
case -- existence and regularity of solutions]\label{th:EvolExist}
Let {\rm(\ref{ass}a,c,d)} with $\mu$ smooth, \eqref{ass-IC}, and \eqref{mu-growth}
hold, let $\varrho>0$ be constant, and let $v_\infty\in W^{1,1}(I)$. Then:\\
\ITEM{(i)}{There is a weak solution $(v,\varepsilon,p,\theta)
\in L^\infty(I;L^2(D))^2\times
H^1(I;H^1(D))\times (L^\infty(I;L^2(D))\cap L^2(I;H^1(D)))$
to the initial-boundary-value problem for the system {\rm(\ref{evol}a-c,e)}
with the boundary conditions \eqref{BC-evol} and the initial conditions \eqref{IC}.}
\ITEM{(ii)}{If $\sup_{0\le\theta\le\theta_\infty}^{}\mu(\cdot,\theta)$ grows
at most like $\mathscr{O}(|\pi|^s)$, then these solutions are,
in fact, regular in the sense that $p\in W^{1,s}(I;H^2(D))$ and,
if $s\ge2$, also $\theta\in H^1(I;L^2(D))
\cap L^\infty(I;H^1(D))\cap L^2(I;H^2(D))$ and also each such weak
solution satisfies the energy balance \eqref{engr} without $\alpha$-terms integrated
over a time interval $[0,t]$ with any $t\in I$.}
\end{theorem}
Let us note that the $\mathscr{O}(|\pi|^s)$-growth condition in point (ii)
surely covers the model \eqref{DR1++} for any $1\le s<\infty$.
\begin{proof}[Sketch of the proof]
Actually, the above formal procedure is first to be performed for a suitable
approximation, whose solutions exist by specific arguments, and then one
passes to the limit. Imitating the split for the static problem used in the
proof of Theorem~\ref{th:ExistSteady}, we choose a staggered time
discretization. We take an equidistant partition of the time interval $I$ by
using the time step $\tau>0$, assuming $T/\tau$ to be an integer and considering a
sequence of such $\tau$'s converging to $0$. Then, recalling
$\partial_\pi R(\pi,\theta)=\mu(\pi,\theta){\rm Sign}(\pi)$, we consider a
recursive boundary-value problem for the system
\begin{subequations}
\label{disc}
\begin{align}
\label{disc1}
&\varrho\frac{v_\tau^k-v_\tau^{k-1}\!\!}\tau
-(\mathbb{C}\varepsilon_\tau^k)_x=0\,,
\\&\frac{\varepsilon_\tau^k-\varepsilon_\tau^{k-1}\!\!}\tau=(v_\tau^k)_x-\pi_\tau^k\,,
\\&\mu(\pi_\tau^k,\theta_\tau^{k-1})\xi_\tau^k=\mathbb{C}\varepsilon_\tau^k+\eta(\pi_\tau^k)_{xx}
\ \ \ \text{ with }\ \xi_\tau^k\in{\rm Sign}(\pi_\tau^k)\,,
\\&\frac{\theta_\tau^k-\theta_\tau^{k-1}\!\!}\tau=f_0(\theta_\tau^k)
-|\pi_\tau^k|f_1(\theta_\tau^k)+\kappa(\theta_\tau^k)_{xx}\,\label{disc4}
\end{align}\end{subequations}
to be solved for $k=1,2,...,T/\tau$ starting for $k=1$ from the
initial conditions $v_\tau^0=v_0$, $\varepsilon_\tau^0=\varepsilon_0$, and $\theta_\tau^0=\theta_0$.
The boundary conditions for \eqref{disc} are like in \eqref{BC} but now with
time-varying velocity $v_\infty$, i.e.\
\begin{align}\label{BC-disc}
v_\tau^k(\pm H)=\pm v_\infty^k \ \text{ with }\ v_\infty^k:=\int_{(k-1)\tau}^{k\tau}\!\!\!\frac{v_\infty(t)}\tau \,\mathrm{d} t,
\ \ \ \ \ \ \ \pi_\tau^k(\pm H)=0,\ \ \ \ \ \ \ \theta_\tau^k(\pm H)=\theta_\infty\,.
\end{align}
The system (\ref{disc}a-c) has a variational structure with a convex coercive
potential
\begin{align}
(v,\varepsilon,\pi)\mapsto\int_D\!
\varrho\frac{(v{-}v_\tau^{k-1})^2\!}{2\tau}
+\mathbb{C}\varepsilon(v_x{-}\pi)+\mathbb{C}\,\frac{(\varepsilon{-}\varepsilon_\tau^{k-1})^2\!}{2\tau}
+R(\pi,\theta_\tau^{k-1})+\frac\eta2\pi_x^2 \,\mathrm{d} x\,.
\end{align}
For a sufficiently small $\tau>0$, this potential is convex and coercive
on $L^2(D)^2\times H^1(D)$. Minimization of this functional on an
affine manifold respecting the boundary conditions $v(\pm H)=\pm v_\infty^k$
and $\pi(\pm H)=0$ gives by
the standard direct-method argument existence of an (even unique) minimizer,
let us denote
it by $(v_\tau^k,\varepsilon_\tau^k,\pi_\tau^k)\in L^2(D)^2\times H^1(D)$.
This minimizer satisfies (\ref{disc}a,b) in the weak sense
and also the inclusion $\partial_\pi R(\pi_\tau^k,\theta_\tau^{k-1})\ni
\mathbb{C}\varepsilon_\tau^k+\eta(\pi_\tau^k)_{xx}$. Therefore, there exists
$\xi_\tau^k\in{\rm Sign}(\pi_\tau^k)\subset H^1(D)^*$ such
that $\mu(\pi_\tau^k,\theta_\tau^{k-1})\xi_\tau^k=\mathbb{C}\varepsilon_\tau^k+\eta(\pi_\tau^k)_{xx}$
in the weak sense. Then we can solve \eqref{disc4} by
minimization of the convex functional
\begin{align}
\theta\mapsto\int_D\!\frac{(\theta-\theta_\tau^{k-1})^2\!}{2\tau}
+|\pi_\tau^k|\varphi_1(\theta)-\varphi_0(\theta)+\frac\kappa2\theta_x^2 \,\mathrm{d} x\,,
\end{align}
where $\varphi_i$ are the primitive functions to $f_i$, $i=0,1$. This functional
is coercive on an affine manifold of the space $H^1(D)$ respecting
the boundary condition \eqref{BC}. Let us denote its unique minimizer by
$\theta_\tau^k$.
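Indeed, a one-line check (using $\varphi_i'=f_i$) shows that the formal
Euler--Lagrange equation of this functional is exactly \eqref{disc4} with the
corresponding boundary condition:
\[
\frac{\theta_\tau^k-\theta_\tau^{k-1}}\tau
+|\pi_\tau^k|\,f_1(\theta_\tau^k)-f_0(\theta_\tau^k)
-\kappa(\theta_\tau^k)_{xx}=0.
\]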
We introduce the piecewise affine continuous and the piecewise constant
interpolants. Having $\{v_\tau^k\}_{k=0}^{T/\tau}$, we define
\begin{align}\label{def-of-interpolants}
&\overline v_\tau(t):=v_\tau^k,\ \ \ \underline v_\tau(t):=v_\tau^{k-1},
\ \text{ and }\ v_\tau(t):=\Big(\frac t\tau{-}k{+}1\Big)v_\tau^k
\!+\Big(k{-}\frac t\tau\Big)v_\tau^{k-1}
\end{align}
for $(k{-}1)\tau<t\le k\tau$ with $k=1,...,T/\tau$. Analogously,
we define also $\overline\varepsilon_\tau$, or $\underline\theta_\tau$, etc.
This allows us to write the system \eqref{disc} in a ``compact'' form:
\begin{subequations}\label{disc+}\begin{align}\label{disc1+}
&\varrho\DT v_\tau -(\mathbb{C}\overline\varepsilon_\tau)_x=0\,,
\\&\DT\varepsilon_\tau=(\overline v_\tau)_x-\overline\pi_\tau\,,\label{disc2+}
\\&\mu(\overline\pi_\tau,\underline\theta_\tau)\overline\xi_\tau=
\mathbb{C}\overline\varepsilon_\tau+\eta(\overline\pi_\tau)_{xx}
\ \ \ \text{ with }\ \overline\xi_\tau\in{\rm Sign}(\overline\pi_\tau)\,,\label{disc3+}
\\&\DT\theta_\tau=f_0(\overline\theta_\tau)
-|\overline\pi_\tau|f_1(\overline\theta_\tau)+\kappa(\overline\theta_\tau)_{xx}\,.\label{disc4+}
\end{align}\end{subequations}
By modifying appropriately the procedure which led to the a priori estimates
(\ref{est}a-c,e), we obtain here
\begin{subequations}\label{est++}\begin{align}
&\|\overline v_\tau\|_{L^\infty(I;L^2(D))}^{}\le C\,,
\\&\|\overline\varepsilon_\tau\|_{L^\infty(I;L^2(D))}^{}\le C\,,
\\&\|\overline\pi_\tau\|_{L^2(I;H^1(D))}^{}\le C\,,
\\&\label{est++theta}
\|\overline\theta_\tau\|_{L^\infty(I{\times}D)\cap L^2(I;H^1(D))}^{}\le C\,,\ \
\text{ and here also}
\\&\|\overline\xi_\tau\|_{L^\infty(I{\times}D)\cap L^2(I;H^1(D)^*)}^{}\le C\,.
\end{align}\end{subequations}
All these estimates hold also for the piecewise affine interpolants, and
\eqref{est++theta} holds also for $\underline\theta_\tau$. The last estimate is
obtained by comparison from
$\overline\xi_\tau=
(\mathbb{C}\overline\varepsilon_\tau{+}\eta(\overline\pi_\tau)_{xx})/\mu(\overline\pi_\tau,\underline\theta_\tau)$
when testing it by functions bounded in $L^2(I;H^1(D))$ and using the
smoothness of $1/\mu(\overline\pi_\tau,\underline\theta_\tau)$.
Then, by the Banach selection principle, we obtain subsequences (indexed, for
simplicity, again by $\tau$) weakly* converging in the topologies indicated in
\eqref{est++}, and we pass to the limit for $\tau\to0$; it remains to show
that this limit (let us denote it by $(v,\varepsilon,\pi,\theta,\xi)$) solves the
continuous problem with $\pi=\DTp$. For this, one uses the Aubin-Lions
compactness theorem adapted for the time-discretization method as in
\cite[Sect.\,8.2]{Roub13NPDE}. Thus we can rely on the fact that
\begin{align}\label{theta-strong}
\overline\theta_\tau\to\theta\quad\text{ strongly in }\ L^c(I{\times}D)\
\text{ for any $1\le c<\infty$}.
\end{align}
The limit passage in the linear hyperbolic equation \eqref{evol1} is due to the
weak convergence of both $v$ and $\varepsilon$, and also the limit passage in the
linear equation \eqref{evol2} is easy via weak convergence. Yet, there is one
peculiarity in the limit passage in the nonlinearity in \eqref{evol3}, for which
a strong convergence of $\varepsilon$ is needed although we do not have any information
about the spatial gradient of $\varepsilon$. The other peculiarity is the need for the
strong convergence of $\DTp$ in \eqref{evol5}, although we do
not have any information about $\DT\pi$, so that mere compactness arguments
cannot be used. These strong convergences can be obtained from the momentum equation \eqref{evol1}
and from \eqref{evol3} when using the strong monotonicity of the operators in
\eqref{evol1} and \eqref{evol3} simultaneously. As for \eqref{evol3}, note that
$\mu(\pi,\theta){\rm Sign}(\pi)=\partial_\pi R(\pi,\theta)$ and that
$R(\cdot,\theta)$ is convex, so that $\partial_\pi R(\cdot,\theta)$ is monotone. In
particular, for any $\overline\xi_\tau\in{\rm Sign}(\overline\pi_\tau)$ and
$\xi\in{\rm Sign}(\pi)$, we have
$\int_0^t\langle\overline\xi_\tau{-}\xi,\overline\pi_\tau{-}\pi\rangle \,\mathrm{d} t\ge0$, where
$\langle\cdot,\cdot\rangle$ denotes the duality pairing between
$H^1(D)^*$ and $H^1(D)$.
The usage of this monotonicity of the set-valued mapping
$\partial_\pi R(\cdot,\theta)$ should be done carefully. The time-discrete approximation
of \eqref{eq3-weak+++} gives some $\overline\pi_\tau\in L^2(I;H^1(D))$
and $\overline\xi_\tau\in L^2(I;H^1(D)^*)$ satisfying \eqref{disc3+}
together with the boundary conditions $\overline\pi_\tau(\pm H)=0$ in the weak form.
From the mentioned monotonicity and by using \eqref{disc1+} and \eqref{disc3+} tested
by $\overline v_\tau{-}v$ and $\overline\pi_\tau{-}\pi$ and integrated over a time interval
$[0,t]$ and the domain $D$, we obtain
\begin{align}\nonumber
&\int_D\frac\varrho2(v_\tau(t){-}v(t))^2
+\frac12\mathbb{C}(\varepsilon_\tau(t){-}\varepsilon(t))^2 \,\mathrm{d} x+ \int_0^t\!\!\int_D
\eta(\overline\pi_\tau{-}\pi)_x^2 \,\mathrm{d} x\d t
\\[-.3em]&\nonumber\le
\int_0^t\!\bigg(\big\langle\varrho\DT v_\tau{-}\varrho\DT v,
\overline v_\tau{-}v\big\rangle+
\big\langle\DT\varepsilon_\tau{-}\DT\varepsilon,\mathbb{C}\overline\varepsilon_\tau{-}\mathbb{C}\varepsilon\big\rangle
+\big\langle\mu(\overline\pi_\tau,\underline\theta_\tau)\overline\xi_\tau
-\mu(\pi,\underline\theta_\tau)\xi,\overline\pi_\tau{-}\pi\big\rangle
\\[-.3em]&\hspace{5em}\nonumber
+\big\langle\varrho\DT v,\overline v_\tau{-}v_\tau\big\rangle
+\big\langle\DT\varepsilon,\mathbb{C}\overline\varepsilon_\tau{-}\mathbb{C}\varepsilon_\tau\big\rangle
+\!\int_D\eta(\overline\pi_\tau{-}\pi)_x^2 \,\mathrm{d} x\bigg) \,\mathrm{d} t
\\[-.3em]&\nonumber=-\int_0^t\!\bigg(\big\langle\varrho\DT v,
\overline v_\tau{-}v\big\rangle+\big\langle\DT\varepsilon_\tau{-}\DT\varepsilon,\mathbb{C}\varepsilon\big\rangle
+\big\langle\mu(\pi,\underline\theta_\tau)\xi,\overline\pi_\tau{-}\pi\big\rangle
\\[-.5em]&\hspace{5em}
-\big\langle\varrho\DT v,\overline v_\tau{-}v_\tau\big\rangle
-\big\langle\DT\varepsilon,\mathbb{C}\overline\varepsilon_\tau{-}\mathbb{C}\varepsilon_\tau\big\rangle
+\!\int_D\!\eta\pi_x(\overline\pi_\tau{-}\pi)_x \,\mathrm{d} x\bigg) \,\mathrm{d} t\to0\,,
\label{->0}\end{align}
where $\langle\cdot,\cdot\rangle$ again denotes the duality pairing between
$H^1(D)^*$ and $H^1(D)$. The meaning of
$\langle\mu(\overline\pi_\tau,\underline\theta_\tau)\overline\xi_\tau
,\overline\pi_\tau{-}\pi\rangle$ for $\overline\xi_\tau$ valued in $H^1(D)^*$ is
rather $\langle\overline\xi_\tau,\mu(\overline\pi_\tau,
\underline\theta_\tau)(\overline\pi_\tau{-}\pi)\rangle$, relying on the fact that
$\mu(\overline\pi_\tau,\underline\theta_\tau)(\overline\pi_\tau{-}\pi)$ is valued in
$H^1(D)$; here we need $\mu$ smooth so that
$(\mu(\overline\pi_\tau,\underline\theta_\tau)(\overline\pi_\tau{-}\pi))_x
=\mu(\overline\pi_\tau,\underline\theta_\tau)(\overline\pi_\tau{-}\pi)_x
+(\mu_\pi'(\overline\pi_\tau,\underline\theta_\tau)(\overline\pi_\tau)_x+
\mu_\theta'(\overline\pi_\tau,\underline\theta_\tau)(\underline\theta_\tau)_x)
(\overline\pi_\tau{-}\pi)$ is valued in $L^2(D)$. Similarly, this applies also
to $\langle\mu(\pi,\underline\theta_\tau)\xi,\overline\pi_\tau{-}\pi\rangle$. For
the inequality in \eqref{->0} see \cite[Remark~8.11]{Roub13NPDE}. For the
equality in \eqref{->0}, we used \eqref{disc2+} together with its limit
obtained by the weak convergence, i.e.\ $\DT\varepsilon=v_x-\pi$, and also
(\ref{disc+}a,c) for the identity
\begin{align}\nonumber
&\big\langle\DT\varepsilon_\tau{-}\DT\varepsilon,\mathbb{C}\overline\varepsilon_\tau{-}\mathbb{C}\varepsilon\big\rangle
=\big\langle(\overline v_\tau{-}v)_x,\mathbb{C}\overline\varepsilon_\tau\big\rangle
-\big\langle\overline\pi_\tau{-}\pi,\mathbb{C}\overline\varepsilon_\tau\big\rangle
-\big\langle\DT\varepsilon_\tau{-}\DT\varepsilon,\mathbb{C}\varepsilon\big\rangle
\\&\nonumber\ \ \ =-\big\langle\varrho\DT v_\tau,\overline v_\tau{-}v\big\rangle
-\big\langle\mu(\overline\pi_\tau,\underline\theta_\tau)\overline\xi_\tau,\overline\pi_\tau{-}\pi\big\rangle-\!\int_D\!\eta(\overline\pi_\tau)_x(\overline\pi_\tau{-}\pi)_x\d x
-\big\langle\DT\varepsilon_\tau{-}\DT\varepsilon,\mathbb{C}\varepsilon\big\rangle\,.
\end{align}
It is important that \eqref{->0} holds for any $\xi\in{\rm Sign}(\pi)$ and, at
this moment, we do not assume that $\xi$ comes as a limit from the (sub)sequence
$\{\overline\xi_\tau\}_{\tau>0}$.
For the convergence in \eqref{->0}, we used that
$\DT v\in L^2(I;H^1(D)^*)$ while $\overline v_\tau{-}v\to0$ weakly in
$L^2(I;H^1(D))$, and that
$\DT\varepsilon_\tau{-}\DT\varepsilon\to0$ weakly in $L^2(I;H^1(D)^*)$, and eventually that
$\mu(\pi,\underline\theta_\tau)$ converges (to a limit which is not important here)
strongly in $L^c(I{\times}D)$ due to \eqref{theta-strong} while
$\overline\pi_\tau{-}\pi\to0$ weakly in $L^2(I;H^1(D))$ so that also
$\mu(\pi,\underline\theta_\tau)(\overline\pi_\tau{-}\pi)\to0$ weakly in
$L^2(I;H^1(D))$. Therefore, considering \eqref{->0} integrated over $I$,
we obtain
\begin{subequations}\label{conv-v-eps-pi}\begin{align}
&&&\overline v_\tau\to v&&\text{strongly in }\ L^2(I{\times}D)\,,&&&&\\
&&&\overline\varepsilon_\tau\to\varepsilon&&\text{strongly in }\ L^2(I{\times}D)\,,\\
&&&\overline\pi_\tau\to\pi&&\text{strongly in }\ L^2(I;H^1(D))\,.\label{conv-pi}
\end{align}\end{subequations}
In fact, by interpolation, (\ref{conv-v-eps-pi}a,b) holds even in
$L^c(I;L^2(D))$ for any $1\le c<\infty$.
For \eqref{conv-pi}, we used the strong convergence of the gradients of
$\overline\pi_\tau$ obtained in \eqref{->0} and the fixed boundary conditions, so that we do not need to rely on
the monotonicity of $\partial_\pi R(\cdot,\theta)$, which may not be strong.
Having the strong convergence \eqref{conv-v-eps-pi} at our disposal, the limit
passage is then easy, showing that the previously obtained weak limit
$(v,\varepsilon,\pi,\theta)$ is a weak solution to the system \eqref{evol}.
In particular, from the inclusion in \eqref{disc3+} one obtains
$\xi\in{\rm Sign}(\pi)$ by using maximal monotonicity of the graph of
the set-valued mapping
${\rm Sign}:L^2(I;H^1(D))\rightrightarrows L^2(I;H^1(D)^*)$
and the strong convergence \eqref{conv-pi}. Thus (i) is proved.
As to (ii), if $\mu(\pi,\theta)\le \mathscr{O}(|\pi|^s)$, then
$\eta\pi_{xx}\in\mathbb{C}\varepsilon-\mu(\pi,\theta){\rm Sign}(\pi)$ is bounded
in $L^s(I;L^2(D))$ so that $\pi\in L^s(I;H^2(D))$.
If $s\ge2$,
the procedure which led to the energy balance \eqref{engr} considered here without
$\alpha$-terms but integrated over a time interval $[0,t]$ was indeed rigorous.
This is because $v\in L^2(I;H^1(D))$, as can be seen by comparison from
\eqref{evol2}, is in duality with $\varrho\DT v\in L^2(I;H^1(D)^*)$ and with
$(\mathbb{C}\varepsilon)_x\in L^2(I;H^1(D)^*)$, so that testing the momentum equation
\eqref{evol1} and the related by-part integration is legitimate. Similar arguments
concern also the aging rule \eqref{evol5}.
Since $\eta\pi_{xx}\in L^2(I{\times}D)$ if $s\ge2$, also the
test of the plastic rate equation \eqref{evol3} by $\pi\in L^s(I{\times}D)$
is legitimate together with the related integrations by parts.
In this case when $s\ge2$, also \eqref{disc4+} can be tested by $\DT\theta_\tau$,
which gives the regularity
$\theta\in H^1(I;L^2(D))\cap L^\infty(I;H^1(D))$.
By comparison, $\kappa\theta_{xx}=\DT\theta+|\pi|f_1(\theta)-f_0(\theta)\in
L^2(I{\times}D)$, so that we also obtain $\theta\in L^2(I;H^2(D))$.
\end{proof}
\begin{remark}[{\sl Stability and time-periodic solutions}] In geodynamics the
phenomenon called \emph{episodic tremor and slip} describes time-periodic
motions in subduction zones where shorter periods of plastic slips alternate
with longer periods with slow slip events. Hence, it would be interesting to
complement our existence result for ``transient events'' governed by the
above initial-value problem by a theory for time-periodic solutions. The
aim would be to show that there is a period $t_*>0$ and a solution of the
system \eqref{evol} with the boundary conditions \eqref{BC} satisfying
$(\DT v, \DT \varepsilon, \DT \alpha, \DT\theta) \not\equiv 0$ and
\begin{align}\label{PC}
v(\cdot,t_*)=v(\cdot,0)\,,\ \ \ \varepsilon(\cdot,t_*)=\varepsilon(\cdot,0)\,,\ \ \
\alpha(\cdot,t_*)=\alpha(\cdot,0)\,,\ \text{ and }\ \
\theta(\cdot,t_*)=\theta(\cdot,0)
\end{align}
instead of \eqref{IC}.
Of course, a general question is that of stability of the steady state
solutions $(\pi,\theta)$ obtained in Section \ref{se:AnaSteady} or potentially
of such time-periodic solutions as described here. As we will see in the
following section, one indication of the existence of time-periodic solutions
is the loss of stability of the steady state solution. But because of the
complexity of the model, these questions are beyond the scope of this
paper.
\end{remark}
\begin{remark}[{\sl Asymptotics for $\eta\to0$ and $\kappa\to0$}]
Unlike the case of steady solutions of \eqref{eq} as in
Section~\ref{sec-localization}, it is not possible in the evolutionary model
\eqref{evol} to pass to the limit for $\eta\to0$. In particular, a limit
passage in the term $\mathbb{C}\varepsilon^\eta(\widetilde\pi^\eta{-}\DTp^\eta)$ occurring in
\eqref{eq3-weak+++} seems to be out of reach. The substitution of this term via
\eqref{subst} by a convex term in $\DTp$ would not help, as it is not weakly
upper semicontinuous. If also \eqref{ass-M1} holds, then like in
Propositions \ref{prop2} and \ref{prop3}, we can at least obtain some
uniform bounds, in particular for the plastic strain rate $\pi=\DTp$ in the
Orlicz space $L_\varPhi(I{\times}D)$ with $\varPhi$ from
\eqref{ass-M1}, i.e.\ $\int_I\int_D \varPhi(\pi(t,x))\,\mathrm{d} x \,\mathrm{d} t
<\infty$. Yet, the limit passage for $\eta\to0$, even while keeping
$\kappa>0$ fixed, remains intractable.
\end{remark}
\section{Illustrative numerical simulations}
\label{se:NumSimul}
We illustrate the response of the evolutionary model in Section~\ref{sec-evol}
by a simplified model derived in Section
\ref{su:SimplifMod}. This model still has exactly the same steady states
as the full model, so that all the theory of Section \ref{se:AnaSteady} applies
to it when ignoring statements about the damage variable $\alpha$. We expect
that the simplified model is still relevant as far as the
usually observed dynamical features are concerned. Moreover, it also displays the
effect of the free boundary occurring between the elastic zone and the plastic
zone. In Section \ref{su:NumSimSteady} we show by numerical simulations
that the steady states localize for $v_\infty\to 0$ in such a way
that $\pi_\mathrm{stst}$ has support (i.e.\ the so-called cataclastic zone)
in $[-h_*(v_\infty,\kappa),h_*(v_\infty,\kappa)]$ with
$h_*(v_\infty,\kappa)\sim\sqrt\kappa$ for $\kappa \to 0^+$. Moreover, we show
that, when keeping $v_\infty\neq 0$ fixed but sufficiently small, we obtain a support
with $h_*(v_\infty,\kappa) \to v_\infty/\pi_*$ for $\kappa \to 0^+$.
In Section \ref{su:NumSimODE} we study an ODE model for
scalars $\theta(t)$ and
$\sigma(t)$ which displays the effect of oscillatory behavior for $|v_\infty| <
v_\text{crit} $ while solutions converge to the unique steady state for
$|v_\infty| > v_\text{crit} $. Finally Section \ref{su:NumSimPDE}
presents
simulations for the simplified evolutionary model. In particular, we observe
again that for small nontrivial values of $|v_\infty|$ we have oscillatory
behavior, where the plastic zone is spatially and temporally localized in the
sense that the support of $\pi(t,\cdot)$ is compactly contained in $D=[-H,H]$
for all $t \in [0,T_\text{per}]$ and that $\pi(t,x)=0$ for all $x\in D$ and
all $t\in [t_1,t_2]$ for a nontrivial interval $[t_1,t_2]\subset [0,T_\text{per}]$.
For $|v_\infty| $ large, we find convergence to a steady state with a
nontrivial plastic (cataclastic) zone.
All the following results are derived from numerical experiments only.
\subsection{The simplified model without damage}
\label{su:SimplifMod}
To display the main features of our rate-and-state friction model we reduce
the full evolutionary model \eqref{evol} by making the following simplifications:
\\
\textbullet\ we neglect inertial effects (i.e.\ we set $\varrho=0$ in
\eqref{evol1}), thus\\
\mbox{}\quad making the system quasistatic but still keeping a
rate-and-state dependent plasticity;
\\
\textbullet\ we choose $\eta=0$ for the length-scale parameter in \eqref{evol3}\\
\mbox{}\quad
as analyzed in Section~\ref{sec-localization} for the steady-state solutions;
\\
\textbullet\ we neglect all damage effects through $\alpha$ and
omit \eqref{evol4} as we did in Theorem~\ref{th:EvolExist}.
\noindent
Because of $\varrho=0$, the momentum balance leads to a spatially constant
stress $\sigma(t)=\mathbb{C} \varepsilon$. As now $\mathbb{C}$ is constant, also $\varepsilon(t)$ is
spatially constant. Integrating \eqref{evol2} over $x\in D =[{-}H,H]$ and using
the boundary condition for $v$ from \eqref{BC-evol} gives the following coupled system
for $\sigma$, $\pi=\DT p$, and $\theta$:
\begin{subequations}
\label{eq:SimpMod}
\begin{align}
\label{eq:SimpMod.a}
&\frac{2H}{\mathbb{C}} \DT \sigma + \int_D \pi \,\mathrm{d} x = 2 v_\infty(t),
\\
\label{eq:SimpMod.b}
&\mu(\pi,\theta) \mathrm{Sign}(\pi) \ni \sigma,
\\
& \DT\theta = f_0(\theta)-|\pi|f_1(\theta) + \kappa \theta_{xx} , \quad
\theta(t,\pm H)=\theta_\infty.
\label{eq:SimpMod.c}
\end{align}
\end{subequations}
Throughout this section we assume that $\mu$ has the form
\[
\mu(\pi,\theta) = \mu_0 + A(\pi) + B(\theta)\ \ \text{ with }\ \
A(\pi),B(\theta)\geq 0\ \ \text{ and }\ \ A(-\pi)=A(\pi)\,;
\]
cf.\ also \eqref{DR1++}. Assuming further $A'(\pi)>0$ for $\pi>0$ we can solve
\eqref{eq:SimpMod.b} in the form
\begin{equation}
\label{eq:def.Pi}
\pi=\varPi(\sigma, \theta)\ \ \text{ with }\ \ \varPi(\sigma,\theta) = \left\{
\begin{array}{cl}
0 & \text{for }|\sigma| \leq \mu_0{+}B(\theta), \\
A^{-1}\big(\sigma{-}\mu_0{-}B(\theta)\big)& \text{for }\sigma>\mu_0{+}B(\theta), \\
\!\!-A^{-1}\big(|\sigma|{-}\mu_0{-}B(\theta)\big)& \text{for }\sigma<-\mu_0{-}B(\theta).\\
\end{array} \right.
\end{equation}
Thus, we obtain our final coupled system of a scalar ODE for $\sigma$ with a
non-locally coupled scalar parabolic PDE for $\theta$, namely
\begin{subequations}
\label{eq:SM}
\begin{align}
& \label{eq:SM.a}
\DT \sigma = \frac{\mathbb{C}}H\, v_\infty(t) - \frac{\mathbb{C}}{2H}
\int_D \varPi(\sigma,\theta) \,\mathrm{d} x,
\\
& \label{eq:SM.b}
\DT\theta = f_0(\theta)-|\varPi(\sigma,\theta)|f_1(\theta) + \kappa \theta_{xx} , \quad
\theta(t,\pm H)=\theta_\infty.
\end{align}
\end{subequations}
Here the nonsmoothness due to the plastic behavior is realized by the nonsmooth
function $\pi=\varPi(\sigma,\theta)$ defined in \eqref{eq:def.Pi}.
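For later numerical use, the function $\varPi$ can be implemented directly; the
following sketch (in Python with NumPy, where \texttt{A\_inv} stands for
$A^{-1}$) is a vectorized transcription of \eqref{eq:def.Pi}:
\begin{verbatim}
import numpy as np

def make_Pi(A_inv, B, mu0):
    # returns varPi(sigma, theta) from (eq:def.Pi); sigma scalar, theta array
    def Pi(sigma, theta):
        s = np.abs(sigma) - (mu0 + B(theta))   # excess over the yield value
        return np.sign(sigma)*np.where(s > 0.0,
                                       A_inv(np.maximum(s, 0.0)), 0.0)
    return Pi
\end{verbatim}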
For all the following simulations we choose the
following parameters and functions:
\begin{equation}
\label{eq:DataSimple}
\begin{aligned}
&H=1,\ \ \mathbb{C}=1,\ \ \theta_\infty=10,\ \ \mu_0=1,\ \
f_0(\theta)=1-\theta/\theta_\infty,
\\
&f_1(\theta)=10\, \theta,\ \ A(\pi)=\ln(|\pi|{+}1),\ \
B(\theta)=\ln(4\theta{+}1).
\end{aligned}
\end{equation}
Subsequently, we will only vary the coefficient $\kappa>0$ and the shear velocity
$v_\infty$.
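To indicate how the simulations below can be set up, here is a minimal explicit
finite-difference time stepper for \eqref{eq:SM}; this is a sketch only: the
grid, time step, and initial data are ad-hoc illustrative choices (an implicit
or adaptive scheme may be preferable during fast plastic bursts), and we use
that $A^{-1}(y)=\mathrm{e}^y{-}1$ for $A(\pi)=\ln(|\pi|{+}1)$.
\begin{verbatim}
import numpy as np

H, C, theta_inf, mu0 = 1.0, 1.0, 10.0, 1.0
kappa, v_inf = 0.04, 0.15
Nx, dt, nsteps = 201, 1e-4, 500000
x = np.linspace(-H, H, Nx); dx = x[1] - x[0]

def Pi(sigma, theta):                    # (eq:def.Pi) with the data above
    s = np.abs(sigma) - (mu0 + np.log(4.0*theta + 1.0))
    return np.sign(sigma)*np.where(s > 0.0, np.expm1(np.maximum(s, 0.0)), 0.0)

theta = np.full(Nx, theta_inf); sigma = 0.0
for n in range(nsteps):
    pi = Pi(sigma, theta)
    integral = dx*(pi.sum() - 0.5*(pi[0] + pi[-1]))   # trapezoidal rule
    sigma += dt*(C/H)*(v_inf - 0.5*integral)          # (eq:SM.a)
    lap = np.zeros(Nx)                                # Dirichlet BC at x = +-H
    lap[1:-1] = (theta[2:] - 2.0*theta[1:-1] + theta[:-2])/dx**2
    theta += dt*(1.0 - theta/theta_inf - 10.0*np.abs(pi)*theta + kappa*lap)
    theta[0] = theta[-1] = theta_inf
\end{verbatim}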
\subsection{Steady states}
\label{su:NumSimSteady}
We first discuss the steady states for \eqref{eq:SM}, which are indeed a
special case of the steady states obtained in Proposition \ref{prop2}.
Numerically, we always found exactly one steady state
$\theta_\mathrm{stst} = \Theta(v_\infty,\kappa)$, but were unable to prove its
uniqueness rigorously. When varying the parameters $v_\infty$ and $\kappa$ we
can easily observe clear trends for $(\theta_\mathrm{stst} ,\pi_\mathrm{stst} )$, where the
associated plastic flow rate is given by
$\pi_\mathrm{stst} =\varPi(\sigma_\mathrm{stst} ,\theta_\mathrm{stst} )$, see Figure
\ref{fig:De-In-Crease}. We first observe that for fixed $\kappa$ the functions
$\theta_\mathrm{stst} $ and $\pi_\mathrm{stst} $ depend
monotonically on $v_\infty$ in the
expected way, namely $\theta_\mathrm{stst} $ decreases with the shear velocity
$v_\infty$, while $\pi_\mathrm{stst} $ increases, which fits to the relation
$2v_\infty= \int_D \pi_\mathrm{stst} (v_\infty, \kappa; x) \,\mathrm{d} x$.
\begin{figure}[h]
\centerline{\scriptsize\sf Stationary profiles $\theta_\mathrm{stst} $ of the aging variable}
\begin{tabular}{cccc}
\includegraphics[width=0.23\textwidth]{THETA_ka01_all}&
\includegraphics[width=0.23\textwidth]{THETA_ka04_all}&
\includegraphics[width=0.23\textwidth]{THETA_ka16_all}&
\includegraphics[width=0.23\textwidth]{THETA_ka64_all}
\end{tabular}
\\[0.5em]
\centerline{\scriptsize\sf Stationary profiles $\pi_\mathrm{stst} $ of the plastic
strain rate\hspace{9em}}\vspace*{-0.5em}
\begin{tabular}{cccc}
\includegraphics[width=0.23\textwidth,height=0.12\textwidth]{Pi_ka01_all}&
\includegraphics[width=0.23\textwidth,height=0.146\textwidth]{Pi_ka04_all}&
\includegraphics[width=0.23\textwidth,height=0.196\textwidth]{Pi_ka16_all}&
\includegraphics[width=0.23\textwidth,height=0.196\textwidth]{Pi_ka64_all}
\\
{\small$\kappa=0.01$}&{\small$\kappa=0.04$}&{\small$\kappa=0.16$}&{\small$\kappa=0.64$}
\end{tabular}
\caption{\small\sl Each picture shows ten curves that
correspond to the shear velocities $v_\infty\in \{0.005,\,0.01,\,0.02,\,0.05,\,0.1,\,0.2,\,0.5,\,1.0,\,2.0,\,5.0\}$, respectively. The upper row shows
$\theta_\mathrm{stst} $ (decreasing with $v_\infty$) and the lower row shows
$\pi_\mathrm{stst} $ (growing with $v_\infty$).}
\label{fig:De-In-Crease}
\end{figure}
Moreover, for $v_\infty\to 0^+$ the scaled plastic rate
$\pi_\mathrm{stst}/v_\infty$ converges to a
nontrivial limit with localized support, while $\theta_\mathrm{stst} $ converges
uniformly to $\theta_\infty$. For larger and larger $v_\infty$ the plastic zone
occupies more and more of the domain $D=[{-}1,1]$ and $\theta_\mathrm{stst} $ is
very small in most of the plastic zone, namely $\theta \approx
\Theta_f(\pi)=\theta_\infty/(1{+}10 \pi \theta_\infty)\approx 1/(10 \pi)$.
When reducing the size of $\kappa$ we also see that the size of the plastic
zone shrinks. For small $v_\infty$ it can be seen that the support of
$\pi_\mathrm{stst} $ is $[{-}h_*(v_\infty,\kappa),h_*(v_\infty,\kappa)]$ with
$h_*(v_\infty,\kappa) \sim \sqrt\kappa$, see Figure \ref{fig:Support}.
\begin{figure}[h]
\centerline{\scriptsize\sf Rescaled stationary profiles $\pi_\mathrm{stst}/v_\infty $ of the
plastic strain rate}
\begin{tabular}{cccc}
\includegraphics[width=0.23\textwidth,height=0.2\textwidth]{Pi_ka01_scaled}&
\includegraphics[width=0.23\textwidth,height=0.2\textwidth]{Pi_ka04_scaled}&
\includegraphics[width=0.23\textwidth,height=0.2\textwidth]{Pi_ka16_scaled}&
\includegraphics[width=0.23\textwidth,height=0.2\textwidth]{Pi_ka64_scaled}
\\
{\small$\kappa=0.01$}&{\small$\kappa=0.04$}&{\small$\kappa=0.16$}&{\small$\kappa=0.64$}
\end{tabular}
\caption{\small\sl The figures display the rescaled plastic strain rates
$\pi_\mathrm{stst}/v_\infty$ for shear velocities $v_\infty\in
\{0.005,\,0.01,\,0.02,\,0.05,\,0.1,\,0.2,\,0.5,\,1.0,\,2.0,\,5.0\}$,
respectively. For $v_\infty\to 0$ one sees convergence to a limit shape with
minimal support $[-h_*(\kappa), h_*(\kappa)]$ where
$h_*(0.01)\approx 0.055$, $h_*(0.04)\approx 0.11$,
$h_*(0.16)\approx 0.21$, and $h_*(0.64)\approx 0.41$. Effectively, we
can see a free boundary between active cataclastic core zone and the rest of
the fault.}
\label{fig:Support}
\end{figure}
Finally, we want to study the case corresponding to Proposition
\ref{prop3}, where $v_\infty$ is kept fixed and the limit $\kappa \to 0$ is
performed. In Figure \ref{fig:Lim.kappa0} we show plots of the steady states
$(\theta_\mathrm{stst}^\kappa,\pi_\mathrm{stst}^\kappa)$ for three different values of
$v_\infty$ for a sequence of decreasing $\kappa$. We clearly see the predicted
convergence towards the limit
$(\theta_\mathrm{stst}^0,\pi_\mathrm{stst}^0)$ taking only two different values. Moreover, the
values are roughly independent of $v_\infty$, where the active plastic zone
$(-h,h)$ behaves like $h=v_\infty/\pi_*$, as proved in Proposition \ref{prop3}.
\begin{figure}
\begin{tabular}{@{}ccc@{}}
\includegraphics[width=0.31\textwidth]{Theta3_vi_04.pdf}&
\includegraphics[width=0.31\textwidth]{Theta3_vi_08.pdf}&
\includegraphics[width=0.31\textwidth]{Theta3_vi_12.pdf}
\\
$v_\infty=0.4$ & $v_\infty=0.8$ & $v_\infty= 1.2$
\\
\includegraphics[width=0.25\textwidth]{Pi3_vi_04.pdf}&
\includegraphics[width=0.25\textwidth]{Pi3_vi_08.pdf}&
\includegraphics[width=0.25\textwidth]{Pi3_vi_12.pdf}
\end{tabular}
\caption{\small\sl A study for the limit $\kappa\to 0^+$ of the steady state solutions
$(\theta_\mathrm{stst},\pi_\mathrm{stst})$. For $v_\infty \in \{0.4,0.8,1.2\}$ the profiles
are plotted for $\kappa \in \{0.03, 0.01, 0.003, 0.001, 0.0003, 0.0001
\}$. Convergence to rectangular profiles is observed.}
\label{fig:Lim.kappa0}
\end{figure}
\subsection{An ODE model showing oscillations in time }
\label{su:NumSimODE}
Oscillatory behavior is most easily seen in a simple finite dimensional model,
consisting only of $\sigma(t)$ and $\overline\theta(t)$, where we may consider
$\overline\theta(t)$ as the average of $\theta(t,x)$
over the critical plasticity region where $\pi(t) = \varPi(\sigma(t),\theta(t))$ is
positive. We also refer to the analysis of a spring-slider model in
\cite{Miel18TECI} as well as the geophysical paper \cite{AbeKat12CECS}.
Thus, our simplified model \eqref{eq:SM} is even more simplified to the ODE
system
\begin{equation}
\label{eq:lumped}
\frac{2H}{\mathbb{C}} \DT \sigma = 2v_\infty - 2h \,\varPi(\sigma,\overline\theta) \quad
\text{and} \quad
\DT{\overline\theta} = 1 - \frac{\overline\theta}{\theta_\infty}
- 10 \varPi(\sigma,\overline\theta)\,\overline\theta.
\end{equation}
Here $h \in {]0,H[}$ represents the half-width of the plastic zone, which has to be
adapted accordingly. We may consider \eqref{eq:lumped} as an evolutionary
lumped-parameter system, which in geophysical literature is often referred to
as a {\it 1-degree-of-freedom slider} and is considered as a basic test of
every new friction model.
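A minimal numerical integration of \eqref{eq:lumped} can be sketched as follows
(assuming SciPy; the initial data and tolerances are ad-hoc choices, and
$C=H=1$ as in \eqref{eq:DataSimple}):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

mu0, theta_inf, h = 1.0, 10.0, 0.3

def Pi(sigma, theta):                     # scalar version of (eq:def.Pi)
    s = abs(sigma) - (mu0 + np.log(4.0*theta + 1.0))
    return np.sign(sigma)*np.expm1(s) if s > 0.0 else 0.0

def rhs(t, y, v_inf):                     # (eq:lumped) with C = H = 1
    sigma, theta = y
    p = Pi(sigma, theta)
    return [v_inf - h*p, 1.0 - theta/theta_inf - 10.0*p*theta]

sol = solve_ivp(rhs, (0.0, 400.0), [1.9, 0.2], args=(0.12,),
                max_step=0.05, rtol=1e-8)
# v_inf = 0.12: oscillations (seismic cycles); v_inf = 0.18: convergence
# to the stable steady state; cf. Figure fig:ODE.
\end{verbatim}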
The nice feature of this ODE model is that the steady states
can be calculated explicitly, and even a stability analysis can be
performed. Indeed there is exactly one steady state, namely
\[
\overline\theta_\mathrm{stst} = \frac{\theta_\infty }{1{+}10
(v_\infty/h) \theta_\infty} \quad \text{ and } \quad
\sigma_\mathrm{stst} =\mu_0+A\Big(\frac{v_\infty}h\Big)+B(\overline\theta_\mathrm{stst} ).
\]
Instead of performing a rigorous analysis, we simply display the solution
behavior of this ODE by a few numerical results. We find that for small
positive $v_\infty$ we obtain oscillatory behavior, while for larger $v_\infty$
the solutions converge to the steady state, see Figure \ref{fig:ODE}. Indeed,
the oscillations can be interpreted physically in terms of geophysical
processes as {\it seismic cycles}.
During the oscillatory behavior there is a large part of the period where
there is no plastic slip (i.e.\ $\pi(t)=0$). In these intervals the stress is
growing linearly with a slope that is proportional to $v_\infty$, and the aging
variable $\overline\theta$ is relaxing exponentially back to its equilibrium value
$\theta_\infty$. However, if the stress reaches a critical value, then the
plastic strain rate is triggered, which leads to reduction of the aging
variable. This leads to a simultaneous weakening of the plastic yield stress
$\mu(\pi,\overline\theta)$ such that $\pi$ can grow even more. As a result the stress
is drastically reduced in a rather short time interval, and $\overline\theta$ is
reduced almost down to $0$ (refreshing). If the inertial term were
included, then these fast rupture-like processes could emit elastic waves, i.e.\
{\it earthquakes}. Because of the stress release,
the plastic strain rate reduces to $0$, and the process starts
again with slow aging and a gradual build-up of the stress.
\begin{figure}[h]
\begin{tabular}{ccc}
\includegraphics[width=0.37\textwidth]{ODE_h03_v12}&
\includegraphics[width=0.27\textwidth,trim=0 0 66 0,clip=true]{ODE_h03_v17}&
\includegraphics[width=0.27\textwidth,trim=0 0 90 0,clip=true]{ODE_h03_v18}
\\
{\small$v_\infty=0.12$}&{\small$v_\infty=0.17$}&{\small$v_\infty=0.18$}
\end{tabular}
\caption{\small\sl Solutions $(\overline\theta(t),\sigma(t))$ together with
$\pi(t)=\varPi(\sigma(t),\overline\theta(t))$ for $h=0.3$ and three different values of $v_\infty$. In the first two cases the solutions
start very close to the unstable steady state. In the third case the
solution starts far away but soon returns to the stable fixed point.}
\label{fig:ODE}
\end{figure}
In fact, choosing $h=0.3$ a closer analysis of the system shows that the steady
states are stable if and only if $v>v_\infty^{(1)}\approx 0.17462$. However,
stable oscillations are already seen for $ v< v_\infty^{(2)} \approx
0.175452$. A careful analysis of the trajectories in the phase plane for
$(\overline\theta,\sigma)$ reveals that for $v_\infty \in (v_\infty^{(1)},
v_\infty^{(2)})$ there are two periodic solutions, a smaller unstable one that
encircles the stable fixed point and a larger stable one that encircles the
unstable one, see Figure \ref{fig:TwoPeriodicOrbits}. Thus, in the small
parameter interval $(v_\infty^{(1)}, v_\infty^{(2)})$ we have coexistence of a
stable fixed point and a stable periodic orbit.
\begin{figure}[h]
\centering \begin{tikzpicture}
\node[above] at (0,0){\includegraphics[width=0.3\textwidth]{ODE_TwoCycles_Inner}};
\node[above] at (7,0){\includegraphics[width=0.37\textwidth]{ODE_TwoCycles}};
\draw (4.3,0.6) rectangle (4.7,1);
\draw (4.3,0.6) --(2.45,0.35);
\draw (4.3,1.0) --(2.45,4.8);
\node[right] at (10.2,0.35){$\sigma$};
\node[left] at (4,6){$\overline\theta$};
\end{tikzpicture}\par
\caption{\small\sl The $(\sigma,\overline\theta)$ phase plane for $h=0.3$ and $v_\infty=0.175$,
where all trajectories rotate clockwise around the fixed point
$(\sigma_\mathrm{stst} , \overline\theta_\mathrm{stst} )\approx (1.973,0.168)$.
There are two periodic solutions. The outer one is stable and is approached by
the blue trajectories from inside and outside. The unstable periodic orbit lies
between the orange and the brown trajectory.}
\label{fig:TwoPeriodicOrbits}
\end{figure}
\subsection{Convergence to steady states versus oscillations for \eqref{eq:SM}}
\label{su:NumSimPDE}
The evolutionary coupled system \eqref{eq:SM}, coupling the
parabolic PDE for the aging variable $\theta(t,x)$ to the ODE for the stress
$\sigma(t)$, displays roughly similar behavior to the lumped ODE system
\eqref{eq:lumped}. For large $|v_\infty|$ one observes convergence to the
steady states analyzed in Section \ref{se:AnaSteady} and displayed numerically
in Section \ref{su:NumSimSteady}. For small nontrivial values of $v_\infty$ one
observes oscillatory behavior. Of course, the new feature is the spatial
distribution of the plastic rate $\pi(t,x)=\varPi(\sigma(t),\theta(t,x))$ and
the aging variable $\theta(t,x)$. In most cases one observes that $\pi(t,\cdot)$
has localized support in the sense that its support is
compactly contained in $({-}H,H)$. Moreover, in the oscillatory case, we also
observe that there are large parts of the periodicity interval, in which there
is no plastic flow at all (i.e.\ $\pi=\DT p=0$), but there is aging and
slow building up of stress. Then, in sudden plastic bursts there is a
strong plastic flow that leads to stress release and refreshing, i.e.\
reduction of $\theta$ almost down to $0$ inside the cataclastic zone.
Figure \ref{fig:SM.converge} displays two simulation results featuring convergence
to a steady state.
\begin{figure}[h]
\centering
\begin{tabular}{cc@{\qquad}cc}
\includegraphics[width=0.27\textwidth]{Evol_pi_ka016_v06}\hspace*{-1.3em}&
\includegraphics[width=0.22\textwidth]{Evol_theta_ka016_v06}\hspace*{-1.3em}&
\includegraphics[width=0.27\textwidth]{Evol_pi_ka004_v02}\hspace*{-1.3em}&
\includegraphics[width=0.22\textwidth]{Evol_theta_ka004_v02}
\\
\multicolumn{2}{c}{\small$\kappa=0.16,\ v_\infty=0.6$}&
\multicolumn{2}{c}{\small$\kappa=0.04,\ v_\infty=0.2$}
\end{tabular}
\par
\caption{\small\sl Simulation of the solution $\pi=\varPi(\sigma,\theta)$ (left) and
$\theta$ (right) for \eqref{eq:SM}.
Convergence to a steady state can be observed in both cases.}
\label{fig:SM.converge}
\end{figure}
In the case $\kappa=0.04$ and the smaller shear rate $v_\infty=0.15$ one
observes oscillatory behavior. In fact, we start the solution very close to the
steady state and the solution needs some time to develop the instability, but
then it switches quickly into a periodic-looking regime,
see Figure \ref{fig:SM.osc}.
\begin{figure}[h]
\centering
\begin{tikzpicture}
\node at (0,3)
{\includegraphics[width=0.97\textwidth,trim=150 0 150 0,clip=true,angle=2]
{Evol_theta_ka004_v015none}};
\node at (-1.3,1.8) { time $t$};
\node at (7.5,3.8) {$\theta(t,x)$};
\node at (0,0)
{\includegraphics[width=0.85\textwidth,angle=4.3]{Evol_pi_ka004_v015none}};
\node at (-1.3,-1.3) { time $t$};
\node at (7.5,0.4) {$\pi(t,x)$};
\end{tikzpicture}
\vspace{-2em}
\caption{\small\sl Simulation of the solution $\theta$ (top) and
$\pi=\varPi(\sigma,\theta)$ (bottom) for \eqref{eq:SM} with $\kappa=0.04$ and
$v_\infty=0.15$. Convergence to a periodic behavior where $\pi$ is
localized in space and time can be observed.}
\label{fig:SM.osc}
\end{figure}
\bigskip\bigskip\bigskip
\paragraph*{Acknowledgments.}
A.M.\ was partially supported by DFG via the Priority Program
SPP\,2256
\emph{Variational Methods for Predicting Complex Phenomena in
Engineering Structures and Materials} (project no.\,441470105, subproject Mi 459/9-1 \emph{Analysis for thermo-mechanical models with internal
variables}).
T.R.\ is thankful for the hospitality of the Weierstra\ss{}--Institut
Berlin and also acknowledges the support of the M\v SMT \v CR
(Ministry of Education of the Czech Republic) project
CZ.02.1.01/0.0/0.0/15-003/0000493, and the institutional support RVO:
61388998 (\v CR).
\section{\label{intro}Introduction}
Predicting damage initiation and its progression in structural materials relies heavily on the knowledge of local mechanical stresses present in the material. For particular aging processes, the material damage is initiated at the grain boundaries (GBs), where intergranular microcracks form. With time, these microcracks may grow along the GBs and combine into larger macroscopic cracks, which can eventually compromise the structural integrity of the entire component under load.
\TM{Since microcracks are invisible to non-destructive inspection techniques, the detection instruments can only reveal the existence of macroscopic cracks, which roughly appear in the final $10\%$ of the component’s lifetime.}
Having accurate models for predicting the component’s susceptibility to microcracking in its earlier stages is therefore of utmost importance in many different applications, as this could reduce the costs needed for frequent inspections and replacements.
InterGranular Stress-Corrosion Cracking (IGSCC) is one of the most significant ageing-degradation mechanisms. It corresponds to the initiation and propagation of microcracks along the GBs and is common in alloys that are otherwise typically corrosion-resistant (austenitic stainless steels~\cite{nishioka2008,lemillier,stephenson2014,gupta,fujii2019}, zirconium alloys~\cite{cox,cox1990}, nickel based alloys~\cite{rooyen1975,shen1990,panter2006,IASCC_IAEA}, high strength aluminum alloys~\cite{speidel,burleigh} and ferritic steels~\cite{wang,arafin}). IGSCC is a multi-level process that includes electro-chemical, micro-mechanical and thermo-mechanical mechanisms. The activation of these mechanisms depends on material properties, corrosive environment and local stress state. It is believed that GB stresses are the driving force of intergranular cracking; they therefore need to be determined accurately in order to make quantitative predictions about IGSCC initiation.
Various approaches \TM{to IGSCC-initiation modeling are being considered}. One such approach is to treat the IGSCC phenomenon on a local GB scale, where GB-normal stresses $\snn$ \TM{can be} studied separately (decoupled) from the environmental effects that degrade the GB strength $\sigma_c$;
${\rm IGSCC}\approx\mathcal{F}(\snn)\cdot\mathcal{F}(\sigma_c)$.
A GB-normal stress $\snn$ is defined here as the \TM{component of the local} stress tensor \TM{along} the GB-normal direction, \textit{i.e.}, perpendicular to the GB plane
\footnote{In the terminology of fracture mechanics, $\snn$ corresponds to the opening-mode stress (Mode~$1$).}.
\TM{Hence}, a single stress-based criterion for a local IGSCC initiation can be assumed on every GB: IGSCC gets initiated wherever $\snn>\sigma_c$, \TM{with both these quantities being local in the sense that they in principle depend on the position of the GB within the aggregate}.
The introduced local criterion can be used to evaluate the probability that a randomly selected GB on a component’s surface, where it is in contact with the corrosive environment, is overloaded (or soon-to-be cracked). This probability can be estimated by calculating the fraction $\eta$ of GBs with $\snn>\sigma_c$ as
$\eta=\int_{\sigma_c}^{\infty}\pdf(\snn)d\snn$,
for the assumed probability-density function $\pdf(\snn)$. If the fraction of overloaded GBs exceeds a threshold value, $\eta>\eta_f$, a specimen-sized crack may develop, possibly resulting in a catastrophic failure of the component.
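As a quick illustration of this estimate, the following minimal sketch (not part of the original analysis; the Gaussian form of $\pdf(\snn)$ and all numerical values are hypothetical, and Python with SciPy is assumed) evaluates $\eta$ for a given GB strength:
\begin{verbatim}
# Minimal sketch: estimating the fraction eta of overloaded GBs,
# eta = integral of pdf(s_nn) from sigma_c to infinity, under the
# (hypothetical) assumption of a Gaussian pdf(s_nn).
from scipy.stats import norm

mu, s = 100.0, 30.0    # assumed mean and std of s_nn  [MPa]
sigma_c = 150.0        # assumed GB strength           [MPa]

eta = norm.sf(sigma_c, loc=mu, scale=s)   # survival function = 1 - CDF
print(f"fraction of overloaded GBs: eta = {eta:.4f}")  # ~0.0478
\end{verbatim}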
This approach, based on the accurate knowledge of GB-normal stresses, seems feasible when the GB strength $\sigma_c$ is known and approximately constant within the examined surface section. Unfortunately, this is not the case in real materials.
Measurements have shown that different GBs show different IGSCC sensitivities~\cite{rahimi2011,fujii2019,rahimi2009,liu2019}, implying that the GB strength $\sigma_c$ depends not only on the material and environmental properties but also strongly on the GB type;
$\sigma_c=\mathcal{F}({\rm GB\ type, material, environment})$.
Here, a GB type denotes a GB microstructure (inter-atomic arrangements in the vicinity of the GB), which affects the GB energy and, eventually, its strength $\sigma_c$. In the continuum limit, five parameters are \TM{needed to} define a GB neighborhood: four parameters are \TM{required} to specify a GB plane in \TM{crystallographic systems of} the two adjacent grains and one parameter defines a twist rotation between the associated crystal lattices about the plane normal.
\TM{In principle, the term ``GB type'' thus refers to GBs with the same GB strength. Sometimes it is convenient to specify a GB type by fewer than five parameters (\textit{e.g.}, when the values of the omitted parameters do not affect the $\snn$ distribution).
For instance, in the $[abc]$-$[def]$ GB type, with a GB-plane normal along the $[abc]$ direction in one grain and $[def]$ direction in the other grain, the twist angle can be assumed random (and thus remains unspecified).
}
In \TM{addition} to $\sigma_c$ being a function of GB type, the distributions of GB-normal stresses should also be evaluated for different GB types in order to later perform a meaningful calculation of the fraction $\eta$. Hence,
$\pdf(\snn)=\mathcal{F}({\rm GB\ type, applied\ stress, material})$.
Since exact general solutions for both the local $\snn$ and statistical $\pdf(\snn)$ are too complex to be derived analytically, researchers have restricted themselves to numerical simulations limited to a few selected materials and specific (usually \TM{uniaxial}) loading conditions.
Crystal-plasticity finite element (FE) simulations~\cite{diard2002,diard2005,kanjarla,gonzalez2014,hure2016,elshawish2018} and crystal-plasticity fast Fourier transform simulations~\cite{lebensohn2012,disen2020} have been used to obtain intergranular stresses on random GBs in either synthetic or realistic polycrystalline aggregates, providing valuable information for IGSCC initiation in those specific cases. In particular, the fluctuations of intergranular normal stresses (the widths of $\pdf(\snn)$) have been found to depend primarily on the elasto-plastic anisotropy of the grains with either cubic~\cite{gonzalez2014,hure2016,elshawish2018} or hexagonal lattice symmetries~\cite{elshawish2018}.
Although the computationally demanding simulations can provide accurate results, such an approach is impractical for the general case and provides little insight into the physics involved. Thus, efforts have been made to identify the most influential parameters affecting the GB-normal stresses on \TM{any single} GB type~\cite{west2011,elshawish2021}. In the elastic regime of grains with cubic lattice symmetry, the Zener elastic anisotropy index $A$~\cite{zener} and the effective GB stiffness $\Enn$, measuring the average stiffness of the GB neighborhood along the GB-normal direction, have been identified and demonstrated to be sufficient for quantifying normal-stress fluctuations on any GB type in a given material under uniaxial external loading~\cite{elshawish2021}. An empirical relation (still lacking a satisfactory explanation) has been established for the standard deviation of the $\snn$ \TM{distribution} evaluated on $[abc]$-$[def]$ GBs, as a function of $A$ and $\Enn$. In contrast, the mean value of the same $\snn$ \TM{distribution} has been shown to be independent of the chosen material and/or the GB type on which it is calculated.
To account for \TM{elastic--perfectly plastic} grains at applied tensile yield stress, a simple Schmid-Modified Grain-Boundary-Stress model has been proposed~\cite{west2011} to investigate the initiation of an intergranular crack, based on the normal stress acting at a GB. The model considers the combined effects of GB-plane orientation and grain orientations through their Schmid factors. It has been pointed out that intergranular cracks occur most likely at highly stressed GBs. In other similar studies~\cite{stratulat2014,zhang2019,fujii2019}, the same model has been used to discuss crack initiation in austenitic stainless steel, concluding that initiation sites coincide with the most highly stressed GBs.
Building upon partial results, limited to either specific loading conditions~\cite{west2011,elshawish2021} and/or specific grain-lattice symmetries~\cite{elshawish2021}, the goal of this study is to develop a \TM{model} of GB-normal stresses that provides accurate analytic or semi-analytic expressions for $\snn$, with the corresponding statistical measure $\pdf(\snn)$, depending on a general GB type, general applied stress and general elastic polycrystalline material. Once the knowledge of GB strength $\sigma_c$ becomes available, the resulting expressions will be directly useful in the mechanistic modeling of GB-damage initiation (such as IGSCC
\footnote{Since material and mechanical aspects of IGSCC are decoupled from the environmental factors hidden in the GB strength, this study may also be relevant for other degradation mechanisms, where GB-normal stresses are the driving force for crack initiation, such as GB sliding or fatigue.}
) and should therefore become a quick and reliable tool for all the experts dealing with local damage modeling and characterization.
The paper is structured as follows: in Sec.~\ref{sec:2} typical material and GB-type effects on GB-normal stresses are introduced. In Sec.~\ref{sec:3} the \TM{perturbative framework for predicting} GB-normal stresses is developed, providing analytical and semi-analytical models along with their solutions. In Sec.~\ref{sec:4} the upgraded models are verified with FE-simulation results. Practical implications are discussed in Sec.~\ref{sec:5} and in Sec.~\ref{sec:6} some concluding remarks are given. All technical details are deferred to the set of Appendices.
\section{Material and grain boundary type effects on intergranular normal stresses}
\label{sec:2}
The anisotropic elasticity of crystals is governed by the generalized Hooke's law, $\sigma_{ij}=C_{ijkl} \, \epsilon_{kl}$, where $C_{ijkl}$ is a 3D fourth-order stiffness tensor. Depending on the symmetry of the underlying grain lattice, $C_{ijkl}$ can be expressed in terms of two (isotropic), three (cubic), or more (up to $21$ for triclinic) independent elastic parameters. All grains in a polycrystalline aggregate are assigned the same elastic material properties, but different, \TM{random} crystallographic orientations \TM{(no texture)}.
\TM{However, for practical purposes, we artificially increase the share of GBs of a certain type in our finite element aggregate models, by imposing specific orientations to a relatively small fraction of grains.}
In the continuum limit
\footnote{
On the atomistic scale, more parameters would be required to characterize a GB \TM{by describing} the arrangement of atoms on both sides of the GB plane (\textit{e.g.}, coherent vs. non-coherent GBs).
},
a general GB type is defined by five independent parameters, which specify the orientations of two nearest grains relative to the GB plane. It can be expressed in the form of $[abc]$-$[def]$-$\Delta\omega$ GBs, where their GB plane is the $[abc]$ plane in one grain and $[def]$ plane in the other grain, with $\Delta\omega$ denoting a relative twist of the two grain orientations \TM{about} the GB normal
\footnote{GB type can also be defined by specifying less than five parameters. In such cases, \TM{the value of certain} parameters \TM{can be} assumed random. For example, misorientation GBs have only one fixed parameter and coincidence-site-lattice GBs, such as $\Sigma$3, $\Sigma$5, $\Sigma$7, have three fixed parameters~\cite{elshawish2021}.}.
Due to topological constraints, not all GBs can be assigned the same GB character. In practice, a particular $[abc]$-$[def]$-$\Delta\omega$ GB type can be ascribed to at most $\sim$17\% of the GBs in a given aggregate, with the remaining GBs being of random type (\textit{i.e.}, defined by two randomly oriented neighboring grains). A polycrystalline aggregate and two particular GB types are visualized in Fig.~\ref{fig:geom}.
\begin{figure}
\includegraphics[width=\columnwidth]{fig01}
\caption{(a) 3D periodic Voronoi aggregate with $4000$ grains used in this study. Different grains are denoted by different colors. Finite element mesh is shown for one selected grain. Visualization of two different GBs with fixed GB plane (with normal $n$) but different crystallographic orientations: (b) $[001]$-$[001]$-$30^{\circ}$ GB and (c) $[111]$-$[111]$-$30^{\circ}$ GB.}
\label{fig:geom}
\end{figure}
The constitutive equations of the generalized Hooke's law \TM{are solved numerically for a chosen uniform loading with the FE solver Abaqus~\cite{abaqus}. The obtained stresses $\sigma$, corresponding to the nearest integration points of a particular GB $k$, are then used to produce a single value $\snn(k)$ as their weighted average. Besides local stresses $\snn(k)$, the first two statistical moments of $\pdf(\snn)$, the mean value and standard deviation, are calculated for the distribution of stresses on GBs of a chosen, overrepresented $[abc]$-$[def]$-$\Delta\omega$ GB type, whose density was artificially boosted} (see Appendix~\ref{app:fem} for further details).
\begin{figure}
\includegraphics[width=0.9\columnwidth]{fig02}
\caption{(a) \TM{Normalized} local stress responses $\snn/\Sigma$ and (b) \TM{their statistical distributions} $\pdf(\snn/\Sigma)$ in a polycrystalline lithium under macroscopic tensile loading $\Sigma$ \TM{for $3$ different GB types. Large influence of a chosen GB type and poor prediction capability of the isotropic model ($\snn/\Sigma=\cos^2\theta$) are clearly visible. In panel (a) the results are shown for just $15$ randomly selected GBs of each type (GB index).}}
\label{fig:effectGB}
\end{figure}
\begin{figure}
\includegraphics[width=0.9\columnwidth]{fig03}
\caption{
\TM{As in Fig.~\ref{fig:effectGB}, but evaluated on the $[111]$-$[111]$ GB type in different materials to demonstrate the effect of their elastic properties. Panel (b) shows how the isotropic model begins to fail with the growing anisotropy of the grains.}}
\label{fig:effectMat}
\end{figure}
Figs.~\ref{fig:effectGB} and~\ref{fig:effectMat} show typical (strong) effects of different GB types and different materials on both, local stresses $\snn$ and the corresponding stress distributions $\pdf(\snn)$, for the assumed macroscopic uniaxial tensile loading $\Sigma$. \TM{In Fig.~\ref{fig:effectGB}, a comparison of different $[abc]$-$[def]$ GB types is made, with $\Delta\omega$ assumed random. Each value of GB index refers to a particular
GB within the
aggregate of fixed grain topology, shown in Fig.~\ref{fig:geom}(a).
In this way, the effect of the GB type can be isolated from
other contributions.}
While the mean stress is independent of the GB type (with $\ave{\snn}=\Sigma/3$ for all types), the stress fluctuations are much larger on the (stiffest) $[111]$-$[111]$ GBs than on the (softest) $[001]$-$[001]$ GBs~\cite{elshawish2021}.
A similar behavior is observed in Fig.~\ref{fig:effectMat}, where the effect of different material properties is isolated from other contributions by comparing $\snn$ on \TM{identical} GBs. All stress distributions $\pdf(\snn)$ are again centered around $\ave{\snn}=\Sigma/3$, \TM{while they are at the same time getting} considerably wider with increasing grain anisotropy~\cite{elshawish2021}.
In most cases depicted in Figs.~\ref{fig:effectGB} and~\ref{fig:effectMat}, a poor prediction capability of the isotropic model
\footnote{The isotropic model assumes isotropic material properties of the grains, resulting in local stresses that are equal to the applied stress.}
is observed, implying that local GB stresses depend non-trivially on the GB type, material properties and loading conditions.
\section{\label{model}Perturbative model of grain boundary normal stresses}
\label{sec:3}
\subsection{Assumptions}
To develop an accurate \TM{prediction} for $\snn$ (and the corresponding $\pdf(\snn)$), a
\TM{step-by-step} approach is taken, \TM{inspired by} perturbation theory. In this sense, the solution for $\snn^{(k)}$, starting from the trivial isotropic-grain solution $\snn^{(0)}$, is \TM{refined} in each successive step $k$ by considering \TM{the contribution of more distant grains.} To provide an analytic solution, \TM{sensible} approximations and assumptions are used. For example, following Saint Venant’s principle, the effects of the more distant neighborhood on a GB are described in less detail, using only average quantities such as the elastic grain anisotropy $A^u$~\cite{ranganathan} or the isotropic bulk stiffness $\ave{E}$. The strategy for building a perturbative model is shown schematically in Fig.~\ref{fig:pert}.
\TM{In the simplest approximation ($k=0$), the neighborhood of a chosen GB can be
modeled as isotropic, in which case the only relevant degree of freedom is the orientation of the GB plane.
In the next order iteration ($k=1$), the two (anisotropic) grains enclosing the GB are
considered, while their combined (axial) strain is assumed to be the same as if both grains were made from the (isotropic) bulk material.}
\begin{figure*}
\includegraphics[width=\columnwidth]{fig04}
\caption{\TM{Perturbation-theory based} strategy for finding GB-normal stress $\snn^{(k)}$. \TM{In each successive step $k$, a more complex GB neighborhood is taken into account. For simplicity, the scheme presented here is only 2D and subjected to tensile loading $\mathbf{\Sigma}$, but in practise a 3D case for a general uniform loading is considered.}}
\label{fig:pert}
\end{figure*}
\TM{This assumption works well for average grains, but the stiffer or softer the grains in the pair are, the more it starts to fail.
To relax that condition, in the next order iteration ($k=2$), ``buffer'' grains are introduced along the GB-normal (axial) direction. Then not only the bicrystal pair, but the whole axial chain (containing also the buffer grains) is supposed to deform as if it were made from the bulk material.} In a similar manner, buffer grains are added also along the transverse directions, forming transverse chains of grains whose axial strain is constrained by the bulk ($k=3$).
In the isotropic-grain solution ($k=0$), the GB-normal stress is equal to the externally applied stress projected onto the GB plane, $\snn^{(0)}=\Sigma_{zz}$, which for \TM{uniaxial} loading $\Sigma$ translates to $\snn^{(0)}=\Sigma \cos^2\theta$, where $\theta$ is the angle between the GB normal and the loading direction. The isotropic-grain solution may be a good \TM{initial approximation}, but \TM{it turns out to be} a poor solution for moderately and highly anisotropic materials, see Figs.~\ref{fig:effectGB} and~\ref{fig:effectMat}.
To obtain higher-order ($k>0$) solutions $\snn^{(k)}$, the effect of the two nearest grains \TM{enclosing} the GB is considered in more detail, while the effect of \TM{more distant, buffer} grains is accounted for \TM{less rigorously}. Instead of a full 3D solution, several partial 1D solutions are obtained simultaneously and properly combined to accurately approximate $\snn^{(k)}$. Schematically, the corresponding general model can be viewed in Fig.~\ref{fig:chains} as composed of one axial chain of length $L_n+2$ and four lateral chains of length $L_t+1$ crossing the two grains \TM{that are adjacent to the GB}.
\begin{figure}
\includegraphics[width=0.8\columnwidth]{fig05}
\caption{A 2D sketch of \TM{perturbative model for GB stresses}, \TM{consisting} of two anisotropic grains of unit size, \TM{enclosing} the GB, and several isotropic buffer grains of variable length, composing one axial chain of length $L_n+2$ and two (four in 3D) transverse chains of length $L_t+1$. Stresses and strains are assumed constant within the grains. Total strain of each chain is \TM{prescribed} to \TM{match} that of isotropic bulk \TM{of the same length and under the same external loading} (Voigt-like assumption on a chain\TM{-length} scale). 3D coupling of the chains with the surrounding bulk is modeled by assuming variable chain stiffness. External loading $\mathbf{\Sigma}$ is dressed by fluctuations $\mathbf{f}$.}
\label{fig:chains}
\end{figure}
The
chains are assumed decoupled from each other, \TM{but} they interact with the \TM{surrounding} bulk. \TM{The bulk is taken as} isotropic, with \TM{average (bulk) properties, such as} elastic stiffness $\ave{E}$ and Poisson's ratio $\ave{\nu}$. The chain-bulk interaction is, in the first approximation (\textit{i.e.}, without lateral 3D coupling), assumed to be along the chain direction. \TM{It constrains} the total strain of the chain
to that of the isotropic bulk. This boundary condition corresponds to the Voigt-like assumption, but on a chain\TM{-length} scale.
Buffer grains are assumed isotropic \TM{as well, but} with elastic stiffness $E_b$ and Poisson's ratio $\nu_b$, both corresponding to the average response of a chain \TM{with} $L_n$ (or $L_t$) randomly oriented grains. However, when accounting for the lateral 3D effects (cf.~Sec.~\ref{sec:3Deffects}), the chains are \TM{allowed} to interact also laterally with the bulk \TM{and in the limit of long chains} both parameters \TM{approach} those of the bulk, $E_b\sim\ave{E}$ and $\nu_b\sim\ave{\nu}$.
The two grains \TM{on either side of} the GB are assumed anisotropic, with \TM{their crystallographic orientations determining the $[abc]$-$[def]$-$\Delta\omega$ type of the corresponding GB}.
Finally, the stresses and strains are considered homogeneous within all the grains. In addition, a general analytical expression for $\snn^{(k)}$ \TM{is derived by applying a reduced set of
boundary conditions.
To facilitate a simple, closed-form solution for $\sigma_{nn}^{(k)}$, only the conditions for stresses are imposed at the GB, while those for strains are neglected.
Hence}, the stress equilibrium is fulfilled everywhere in the model, while the strain compatibility at the GB is not guaranteed.
These assumptions \TM{will be justified \textit{a posteriori} by comparing the model
results with those from numerical simulations.}
\subsection{Analytical models}
\subsubsection{General setup}
\begin{figure}
\includegraphics[width=0.8\columnwidth]{fig06}
\caption{Definition of three coordinate systems: laboratory coordinate system $(X,Y,Z)$, local-grain coordinate system $(n_1,n_2,n_3)$, and GB coordinate system $(x,y,z)$. The latter \TM{can be arbitrarily chosen with respect to the} twist angle $\tau$ \TM{about} the GB normal $\hat{n}||\hat{z}$. Passive rotations $\mathbf{R}^{\text{lab}}$ and $\mathbf{R}^{\TM{\text{cry}}}$ \TM{transform} external and local\TM{-grain} quantities, respectively, to the GB coordinate system. \TM{Since in the following, two grains will be considered, four coordinate systems will be in use, namely one crystallographic system for each grain, together with associated rotations $\mathbf{R}^{\TM{\text{cry}, abc}}$ and $\mathbf{R}^{\TM{\text{cry}, def}}$.}}
\label{fig:cs}
\end{figure}
Analytical expressions for $\snn^{(k)}$ are \TM{presented in} the GB coordinate system $(x,y,z)$ \TM{with $z$-axis along the GB normal}. All quantities expressed in the local-grain coordinate system $(n_1,n_2,n_3)$, \TM{aligned with crystallographic (eigen-)axes}, and the laboratory coordinate system $(X,Y,Z)$, therefore need to be appropriately \TM{transformed} using the following (passive) rotations $\mathbf{R}^{\TM{\text{cry}}}$ and $\mathbf{R}^{\text{lab}}$, respectively (see also Fig.~\ref{fig:cs}),
\begin{widetext}
\ba
\label{eq:Rloc}
\mathbf{R}^{\text{cry}}&=&\left(
\begin{array}{ccc}
\phantom{-}\frac{h l \cos \omega }{\sqrt{h^2+k^2} \sqrt{h^2+k^2+l^2}} - \frac{k \sin \omega}{\sqrt{h^2+k^2}} & \phantom{-}\frac{k l \cos \omega}{\sqrt{h^2+k^2} \sqrt{h^2+k^2+l^2}} + \frac{h \sin \omega}{\sqrt{h^2+k^2}} & -\frac{\sqrt{h^2+k^2} \cos \omega
}{\sqrt{h^2+k^2+l^2}} \\
-\frac{h l \sin \omega}{\sqrt{h^2+k^2} \sqrt{h^2+k^2+l^2}} - \frac{k \cos \omega}{\sqrt{h^2+k^2}} & -\frac{k l \sin \omega}{\sqrt{h^2+k^2} \sqrt{h^2+k^2+l^2}} + \frac{h \cos \omega}{\sqrt{h^2+k^2}} & \phantom{-}\frac{\sqrt{h^2+k^2} \sin \omega
}{\sqrt{h^2+k^2+l^2}} \\
\frac{h}{\sqrt{h^2+k^2+l^2}} & \frac{k}{\sqrt{h^2+k^2+l^2}} & \frac{l}{\sqrt{h^2+k^2+l^2}} \\
\end{array}
\right),\\
\label{eq:Rlab}
\mathbf{R}^{\text{lab}}&=&\left(
\begin{array}{ccc}
\phantom{-}\cos \theta \cos \psi \cos \phi - \sin \psi \sin \phi & \phantom{-}\cos \theta \sin \psi \cos \phi + \cos \psi \sin \phi & -\sin \theta \cos \phi \\
-\cos \theta \cos \psi \sin \phi - \sin \psi \cos \phi & -\cos \theta \sin \psi \sin \phi + \cos \psi \cos \phi & \phantom{-}\sin \theta \sin \phi \\
\sin \theta \cos \psi & \sin \theta \sin \psi & \cos \theta \\
\end{array}
\right).
\ea
\end{widetext}
While standard notation with three Euler angles $(\psi,\theta,\phi)$, \TM{corresponding to a sequence of rotations $\mathbf{R}_1$ about $\hat{n}_3$ (angle $\psi$), $\mathbf{R}_2$ about $\mathbf{R}_1 \hat{n}_2$ (angle $\theta$) and $\mathbf{R}_3$ about $\mathbf{R}_2 \mathbf{R}_1 \hat{n}_3 = \hat{z}$ (angle $\phi$)}, is used for matrix $\mathbf{R}^{\text{lab}}$, the rotation $\mathbf{R}^{\text{cry}}$ is expressed in terms of $(h,k,l,\omega)$, where the GB
normal corresponds to the $[h k l]$ direction
\footnote{The $[h k l]$ direction is determined by two (not three) independent parameters.}
in the local-grain coordinate system, and $\omega$ denotes a twist angle \TM{about} the GB normal. This notation is particularly useful for analyzing the response of $[abc]$-$[def]$-$\Delta\omega$ GBs. In the following, we shall always use $(x,y,z)$ to refer to the axes of the GB coordinate system and $(X,Y,Z)$ for laboratory system associated with the external loading $\mathbf{\Sigma}$. In this respect,
\be
\label{eq:sigLAB}
\mathbf{\Sigma}^{\text{lab}}=\left(
\begin{array}{ccc}
\Sigma _{XX} & \Sigma _{XY} & \Sigma _{XZ} \\
\Sigma _{XY} & \Sigma _{YY} & \Sigma _{YZ} \\
\Sigma _{XZ} & \Sigma _{YZ} & \Sigma _{ZZ} \\
\end{array}
\right),
\ee
and
\be
\label{eq:sigGB}
\mathbf{\Sigma}^{\text{GB}}=\mathbf{R}^{\text{lab}}\mathbf{\Sigma}^{\text{lab}}(\mathbf{R}^{\text{lab}})^T=
\left(
\begin{array}{ccc}
\Sigma _{xx} & \Sigma _{xy} & \Sigma _{xz} \\
\Sigma _{xy} & \Sigma _{yy} & \Sigma _{yz} \\
\Sigma _{xz} & \Sigma _{yz} & \Sigma _{zz} \\
\end{array}
\right).
\ee
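The stress projection above can be illustrated with a short numerical sketch (Python with NumPy assumed; illustrative only, not the authors' implementation), which builds $\mathbf{R}^{\text{lab}}$ from Eq.~\eqref{eq:Rlab} and extracts $\Sigma_{zz}$ in the GB frame:
\begin{verbatim}
# Minimal sketch: passive rotation of the external stress to the GB
# system, Sigma_GB = R_lab Sigma_lab R_lab^T, with R_lab built from the
# Euler angles (psi, theta, phi) exactly as in the matrix above.
import numpy as np

def R_lab(psi, theta, phi):
    ct, st = np.cos(theta), np.sin(theta)
    cp, sp = np.cos(psi), np.sin(psi)
    cf, sf = np.cos(phi), np.sin(phi)
    return np.array([
        [ ct*cp*cf - sp*sf,  ct*sp*cf + cp*sf, -st*cf],
        [-ct*cp*sf - sp*cf, -ct*sp*sf + cp*cf,  st*sf],
        [ st*cp,             st*sp,             ct   ]])

Sigma_lab = np.diag([1.0, 0.0, 0.0])     # uniaxial loading along X
R = R_lab(psi=0.3, theta=0.7, phi=0.1)   # arbitrary GB orientation
Sigma_GB = R @ Sigma_lab @ R.T
# GB-normal projection; for this loading it must equal
# sin^2(theta) cos^2(psi) * Sigma
assert np.isclose(Sigma_GB[2, 2], np.sin(0.7)**2 * np.cos(0.3)**2)
\end{verbatim}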
\TM{To find the solution of perturbative models in Fig.~\ref{fig:pert}, the number of variables needs to match the number of boundary conditions.
In the isotropic limit, $\sigma_{ij}^{(0)} = \Sigma_{ij}$, so there are no constraints and no degrees of freedom.
In a bicrystal model with (1D) axial constraint, there is only a single unknown ($\sigma_{zz}^{(1)} = \sigma_{zz}^{(2)}$), and also a single
constraint on the axial strain ($\epsilon_{zz}^{(1)} + \epsilon_{zz}^{(2)} = 2 \epsilon_{zz}^{\text{bulk}}$). The situation does not change even when buffer grains are added.
For a (3D) model in Fig.~\ref{fig:chains}}, the following set of conditions is used, constraining the axial strains of all five 1D chains:
\ba
\label{eq:constr}
\begin{split}
L_n \epsilon_{zz}^{(\TM{1_z = 2_z})} +\epsilon_{zz}^{(1)} + \epsilon_{zz}^{(2)}&=(L_n+2)\epsilon_{zz}^{\text{bulk}} , \\
L_t \epsilon_{xx}^{(1_x)} +\epsilon_{xx}^{(1)}&=(L_t+1)\epsilon_{xx}^{\text{bulk}} ,
\\
L_t \epsilon_{xx}^{(2_x)} +\epsilon_{xx}^{(2)}&=(L_t+1)\epsilon_{xx}^{\text{bulk}} ,
\\
L_t \epsilon_{yy}^{(1_y)} +\epsilon_{yy}^{(1)}&=(L_t+1)\epsilon_{yy}^{\text{bulk}} ,
\\
L_t \epsilon_{yy}^{(2_y)} +\epsilon_{yy}^{(2)}&=(L_t+1)\epsilon_{yy}^{\text{bulk}}.
\end{split}
\ea
\TM{The strain of each grain is weighted by its length, \textit{i.e.}, either $L_n, L_t\ge 0$ for buffer grains, or $1$ for unit-size GB grains.
The superscript label of each strain-tensor component (and similarly for stresses) indicates the grain to which it corresponds; $N=1,2$ for GB grains or $N_i$ for buffer grains in $i=x,y,z$ directions.}
\TM{Applying} the generalized Hooke's law to GB grain $N$, the $ii$ component of its strain tensor can be written as
\footnote{\TM{The summation indices $1,2,3$ correspond to $x,y,z$, respectively.}}
\ba
\epsilon_{ii}^{(N)}&=& \sum_{k,l=1}^{3} s_{iikl}^{\text{GB},N} \, \TM{\sigma_{kl}^{(N)}}\\
&=& \hspace{-.5cm} \sum_{k,l,m,n,o,p=1}^{3} \hspace{-.6cm} R_{im}^{\text{cry},N} R_{in}^{\text{cry},N} R_{ko}^{\text{cry},N} R_{lp}^{\text{cry},N} \, s_{mnop}^{\text{cry}} \, \TM{\sigma_{kl}^{(N)}} \nonumber,
\ea
\TM{with all stress-tensor components $\sigma_{kl}^{(N)}$ listed in Table~\ref{tab:load}. Note that shear stresses do not appear as variables in either grain, but have their values assigned ($\sigma_{ij}^{(N)}=\Sigma_{ij}$ for $i\ne j$), \textit{i.e.}, they are set equal to the components of external-stress tensor, rotated to a local GB system; cf.~Eq.~\eqref{eq:sigGB}. Out of the $6$ remaining stress components in both grains, two are set equal due to stress-continuity condition ($\sigma_{zz}^{(1)}=\sigma_{zz}^{(2)}:=\sigma_{zz}$), hence the number of unknowns (five) matches the number of constraints in Eq.~\eqref{eq:constr}.
}
The compliance tensor $s^{\text{cry}}$, expressed in the \TM{local (crystallographic) coordinate system of the grain, is readily transformed to the GB system, where the rotation matrices $\mathbf{R}^{\text{cry}}$ can be different for the two grains}. Depending on the symmetry of the grain lattice, $s^{\text{cry}}$ can be expressed as a function of a minimum of two (isotropic) and a maximum of $21$ (triclinic) independent elastic parameters. Here, no preference \TM{for} the underlying symmetry is assumed, thus keeping the approach as general as possible.
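The transformation of the compliance tensor can be made concrete with a short sketch (Python/NumPy assumed; the cubic constants are hypothetical and only exercise the machinery), which also checks the rotated directional compliance against the classical cubic formula $E_{hkl}^{-1}=S_{11}-2(S_{11}-S_{12}-S_{44}/2)(n_1^2n_2^2+n_2^2n_3^2+n_3^2n_1^2)$:
\begin{verbatim}
# Minimal sketch: building the cubic compliance tensor s^cry from the
# Voigt constants (S11, S12, S44) and rotating it to the GB frame,
# s^GB_ijkl = R_im R_jn R_ko R_lp s^cry_mnop.
import numpy as np

S11, S12, S44 = 7.7e-3, -2.9e-3, 8.6e-3   # hypothetical, in 1/GPa

def s_cry_cubic(S11, S12, S44):
    s = np.zeros((3, 3, 3, 3))
    for i in range(3):
        for j in range(3):
            if i == j:
                s[i, i, i, i] = S11
            else:
                s[i, i, j, j] = S12                        # transverse
                s[i, j, i, j] = s[i, j, j, i] = S44 / 4.0  # shear
    return s

def rotate4(s, R):
    # fourth-order (passive) tensor rotation via Einstein summation
    return np.einsum('im,jn,ko,lp,mnop->ijkl', R, R, R, R, s)

# sanity check: directional compliance 1/E(n) = s_ijkl n_i n_j n_k n_l
# must reproduce the classical cubic formula
s = s_cry_cubic(S11, S12, S44)
n = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
invE = np.einsum('ijkl,i,j,k,l->', s, n, n, n, n)
J = (n[0]*n[1])**2 + (n[1]*n[2])**2 + (n[2]*n[0])**2
assert np.isclose(invE, S11 - 2*(S11 - S12 - S44/2)*J)
\end{verbatim}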
To maintain the clarity \TM{of the manuscript}, only the functional dependence of $\epsilon_{ii}^{(N)}$ is retained here
\footnote{Parameters in $\mathcal{F}$ are grouped into three sets, \TM{separated by semicolons. They are related either to} material properties, GB orientation or loading.}
(with full analytic expressions for cubic lattice symmetry presented in Appendix~\ref{app:epsnn}),
\ba
\label{eq:eps12}
\begin{split}
\epsilon_{ii}^{(1)}=\mathcal{F}(s^{\text{cry}};a,b,c,\omega_1;\sigma_{xx}^{(1)},\sigma_{yy}^{(1)},\sigma_{zz},\Sigma_{xy},\Sigma_{xz},\Sigma_{yz}) , \\
\epsilon_{ii}^{(2)}=\mathcal{F}(s^{\text{cry}};d,e,f,\omega_2;\sigma_{xx}^{(2)},\sigma_{yy}^{(2)},\sigma_{zz},\Sigma_{xy},\Sigma_{xz},\Sigma_{yz}) .
\end{split}
\ea
\TM{Generic $(h,k,l,\omega)$ parameters} in $\mathbf{R}^{\text{cry}}$ have been replaced by specific values $(a,b,c,\omega_1)$ and $(d,e,f,\omega_2)$ in GB grains $1$ and $2$, respectively. This \TM{setting} corresponds to a well-defined GB type $[abc]$-$[def]$-$\Delta\omega$ with $\Delta\omega:=\omega_2-\omega_1$.
\TM{Similar expressions apply also to buffer grains $N_i$. The only difference is that all their stress components correspond to the projected external loading $\mathbf{\Sigma}^{\text{GB}}$, except for the axial stress $\sigma_{ii}^{(N_i)}$, which matches $\sigma_{ii}^{(N)}$ in GB grain $N$ due to stress continuity along the chain. The stress components in each of the grains are summarized in Table~\ref{tab:load}.}
\begin{table}[h]
\caption{\label{tab:load}
Assumed stress components in different grains of the model. Buffer grain \TM{label $N_i$ denotes the corresponding GB grain ($N=1,2$) and the direction of the chain to which} it belongs ($i=x,y,z$).
}
\begin{ruledtabular}
\begin{tabular}{lll}
Grain& Assigned stresses& Unknown stresses\\
\colrule
GB grain $1$ & $\sigma_{ij}^{(1)}=\Sigma_{ij}$, $i\ne j$ & $\sigma_{xx}^{(1)}$, $\sigma_{yy}^{(1)}$, $\sigma_{zz}^{(1)}$\\
GB grain $2$ & $\sigma_{ij}^{(2)}=\Sigma_{ij}$, $i\ne j$ & $\sigma_{xx}^{(2)}$, $\sigma_{yy}^{(2)}$, $\sigma_{zz}^{(2)}=\sigma_{zz}^{(1)}$\\
buffer $1_x$ & $\sigma_{ij}^{(1_x)}=\Sigma_{ij}$, $\TM{ij\ne xx}$ & $\sigma_{xx}^{(1_x)}=\sigma_{xx}^{(1)}$\\
buffer $1_y$ & $\sigma_{ij}^{(1_y)}=\Sigma_{ij}$, $\TM{ij\ne yy}$ & $\sigma_{yy}^{(1_y)}=\sigma_{yy}^{(1)}$\\
buffer $2_x$ & $\sigma_{ij}^{(2_x)}=\Sigma_{ij}$, $\TM{ij\ne xx}$ & $\sigma_{xx}^{(2_x)}=\sigma_{xx}^{(2)}$\\
buffer $2_y$ & $\sigma_{ij}^{(2_y)}=\Sigma_{ij}$, $\TM{ij\ne yy}$ & $\sigma_{yy}^{(2_y)}=\sigma_{yy}^{(2)}$\\
buffer $1_z(=2_z)$ & $\sigma_{ij}^{(1_z)}=\Sigma_{ij}$, $\TM{ij\ne zz}$ & $\sigma_{zz}^{(1_z)}=\sigma_{zz}^{(1)}$
\end{tabular}
\end{ruledtabular}
\end{table}
\TM{Sufficiently far from the GB, the grains can be treated as isotropic. This allows for much simpler expressions for the strain components $\epsilon_{ii}$ (with $i=x,y,z$) in both the buffer grains and the bulk material,}
\ba
\label{eq:epsbuf}
\epsilon_{ii}^{(N_i)}&=&\frac{1}{E_b}\left(\sigma_{ii}^{(N)}-\nu_b(\operatorname{tr}(\mathbf{\Sigma}^{\text{GB}})-\Sigma_{ii})\right),\\
\label{eq:epsbul}
\epsilon_{ii}^{\text{bulk}}&=&\frac{1}{\ave{E}}\left(\Sigma_{ii}-\ave{\nu}(\operatorname{tr}(\mathbf{\Sigma}^{\text{GB}})-\Sigma_{ii})\right).
\ea
\TM{With relevant strain components in individual grains defined in Eqs.~\eqref{eq:eps12}, \eqref{eq:epsbuf} and~\eqref{eq:epsbul}, the set of conditions in Eq.~\eqref{eq:constr}, constraining the axial strains of all five chains, can be solved analytically} for all five unknown stresses $\sigma_{ii}^{(N)}$, including the \TM{GB-normal stress} $\snn:=\sigma_{zz}$.
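For readers who prefer a numerical route, the same five constraints can be assembled and solved directly as a $5\times 5$ linear system. The sketch below (reusing \texttt{s\_cry\_cubic} and \texttt{rotate4} from the previous sketch; an illustration of the stated procedure, not the code used in this study) returns $\snn:=\sigma_{zz}$ for given GB-grain compliances rotated to the GB frame:
\begin{verbatim}
# Minimal sketch: the five chain constraints as a 5x5 linear system for
# x = (sxx1, syy1, szz, sxx2, syy2).  sGB1, sGB2 are the two GB-grain
# compliances already rotated to the GB frame (see rotate4 above);
# Sigma_GB is the external stress in the GB frame.
import numpy as np

def solve_chains(sGB1, sGB2, Sigma_GB, E_avg, nu_avg, Eb, nub, Ln, Lt):
    tr = np.trace(Sigma_GB)

    def eps_bulk(i):                 # isotropic bulk strain
        return (Sigma_GB[i, i] - nu_avg*(tr - Sigma_GB[i, i])) / E_avg

    def shear(s, i):                 # fixed shear contribution to eps_ii
        return 2*(s[i, i, 0, 1]*Sigma_GB[0, 1] +
                  s[i, i, 0, 2]*Sigma_GB[0, 2] +
                  s[i, i, 1, 2]*Sigma_GB[1, 2])

    A, b = np.zeros((5, 5)), np.zeros(5)
    # axial chain:
    #   Ln*eps_zz(buffer) + eps_zz(1) + eps_zz(2) = (Ln+2)*eps_zz(bulk)
    A[0] = [sGB1[2, 2, 0, 0], sGB1[2, 2, 1, 1],
            sGB1[2, 2, 2, 2] + sGB2[2, 2, 2, 2] + Ln/Eb,
            sGB2[2, 2, 0, 0], sGB2[2, 2, 1, 1]]
    b[0] = ((Ln + 2)*eps_bulk(2) - shear(sGB1, 2) - shear(sGB2, 2)
            + Ln*nub*(tr - Sigma_GB[2, 2])/Eb)
    # four transverse chains:
    #   Lt*eps_ii(buffer) + eps_ii(N) = (Lt+1)*eps_ii(bulk)
    chains = [(sGB1, 0, (0, 1)), (sGB2, 0, (3, 4)),
              (sGB1, 1, (0, 1)), (sGB2, 1, (3, 4))]
    for r, (s, i, cols) in enumerate(chains, start=1):
        A[r, cols[0]] += s[i, i, 0, 0]
        A[r, cols[1]] += s[i, i, 1, 1]
        A[r, 2]       += s[i, i, 2, 2]
        A[r, cols[i]] += Lt/Eb       # buffer grain along direction i
        b[r] = ((Lt + 1)*eps_bulk(i) - shear(s, i)
                + Lt*nub*(tr - Sigma_GB[i, i])/Eb)
    x = np.linalg.solve(A, b)
    return x[2]                      # s_nn := s_zz
\end{verbatim}
Note that this unsymmetrized solution inherits the dependence on the choice of the transverse $x$ and $y$ axes discussed next.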
However, the resulting $\snn$ has a significant deficiency. It depends on the choice of the \TM{local} GB coordinate system (the value of twist angle $\tau$ in Fig.~\ref{fig:cs}). This dependence originates in the prescribed directions of the four lateral chains, which are directed along the \TM{local} $x$ and $y$ axes. A different choice of $x$ and $y$ axes would produce different lateral constraints, \TM{which would result in a} different $\snn$. To avoid this ambiguity, a symmetrized lateral boundary condition is derived below.
\subsubsection{Symmetrized model}
\TM{The model is symmetrized by averaging the lateral boundary condition over all possible GB coordinate systems. The twist of the local GB system for an arbitrary angle $\tau$ about the GB normal ($z$-axis) changes how $\mathbf{\Sigma}^{\text{GB}}$ and $s^{\text{GB},N}$ are expressed in that system. Specifically, the rotation changes the Euler angles $\omega_N$ and $\phi$ in transformation matrices~\eqref{eq:Rloc} and~\eqref{eq:Rlab}, respectively,
\ba
\begin{split}
\omega_N & \to \omega_N+\tau \, ; \quad (N=1,2), \\
\phi & \to \phi + \tau,
\end{split}
\ea
which in turn affect Eqs.~\eqref{eq:eps12}, \eqref{eq:epsbuf} and~\eqref{eq:epsbul}, and make them $\tau$ dependent.
Since all twist rotations should be equivalent}, averaging over $\tau$ \TM{replaces Eq.~\eqref{eq:constr} with} new, symmetrized boundary conditions
\ba
\begin{split}
\label{eq:constrSym}
\TM{\tfrac{1}{2\pi}}\hspace{-.1cm}\int_0^{2\pi}\hspace{-.2cm}\left(L_n \epsilon_{zz}^{(\TM{1_z = 2_z})} +\epsilon_{zz}^{(1)} + \epsilon_{zz}^{(2)}\right) d\tau&=\TM{\tfrac{1}{2\pi}}\hspace{-.1cm}\int_0^{2\pi}\hspace{-.2cm}(L_n+2)\epsilon_{zz}^{\text{bulk}} d\tau , \\
\TM{\tfrac{1}{2\pi}}\hspace{-.1cm}\int_0^{2\pi}\hspace{-.2cm}\left(L_t \epsilon_{xx}^{(1_x)} +\epsilon_{xx}^{(1)}\right) d\tau&=\TM{\tfrac{1}{2\pi}}\hspace{-.1cm}\int_0^{2\pi}\hspace{-.2cm}(L_t+1)\epsilon_{xx}^{\text{bulk}}d\tau , \\
\TM{\tfrac{1}{2\pi}}\hspace{-.1cm}\int_0^{2\pi}\hspace{-.2cm}\left(L_t \epsilon_{xx}^{(2_x)} +\epsilon_{xx}^{(2)}\right) d\tau&=\TM{\tfrac{1}{2\pi}}\hspace{-.1cm}\int_0^{2\pi}\hspace{-.2cm}(L_t+1)\epsilon_{xx}^{\text{bulk}}d\tau , \\
\TM{\tfrac{1}{2\pi}}\hspace{-.1cm}\int_0^{2\pi}\hspace{-.2cm}\left(L_t \epsilon_{yy}^{(1_y)} +\epsilon_{yy}^{(1)}\right) d\tau&=\TM{\tfrac{1}{2\pi}}\hspace{-.1cm}\int_0^{2\pi}\hspace{-.2cm}(L_t+1)\epsilon_{yy}^{\text{bulk}}d\tau , \\
\TM{\tfrac{1}{2\pi}}\hspace{-.1cm}\int_0^{2\pi}\hspace{-.2cm}\left(L_t \epsilon_{yy}^{(2_y)} +\epsilon_{yy}^{(2)}\right) d\tau&=\TM{\tfrac{1}{2\pi}}\hspace{-.1cm}\int_0^{2\pi}\hspace{-.2cm}(L_t+1)\epsilon_{yy}^{\text{bulk}}d\tau .
\end{split}
\ea
Solving the above set of symmetrized equations for the five unknowns $\sigma_{ii}^{(N)}$ (with $i=x,y,z$ and $N=1,2$) provides an analytical $\snn:=\sigma_{zz}$, independent of $\tau$. However, \TM{for the most general case} the resulting expression is too cumbersome to be \TM{presented here}. \TM{Hence, we again resort to its} functional dependence
\begin{widetext}
\ba
\begin{split}
\label{eq:snnGeneral}
\snn&={\mathcal F}(s^{\text{cry}},\ave{E},\ave{\nu},E_b,\nu_b;a,b,c,\omega_1,d,e,f,\omega_2;L_n,L_t;\mathbf{\Sigma}^{\text{\TM{GB}}}) \\
&={\mathcal F}(s^{\text{cry}},\ave{E},\ave{\nu},E_b,\nu_b;a,b,c,\omega_1-\phi,d,e,f,\omega_2-\phi;L_n,L_t;\mathbf{\Sigma}^{\text{lab}},\theta,\psi) ,
\end{split}
\ea
\end{widetext}
from which it is clear that $\snn$ does not depend on the choice of the GB coordinate system, since only the differences $\omega_1-\phi$ and $\omega_2-\phi$ appear. The normal stress $\snn$ is a \TM{complicated} function of many parameters
\footnote{To account for loading fluctuations due to anisotropy of the bulk, a universal elastic anisotropy index $A^u$ should be added to the list of influencing parameters (see Sec.~\ref{sec:gauss}). \TM{On the other hand, $A^u$, $\ave{E}$ and $\ave{\nu}$ are all only functions of $s^{\text{cry}}$.}}
(\textit{e.g.}, up to \TM{$39$} independent parameters in a material with triclinic lattice symmetry and for the most general external loading). However, not all parameters are of the same significance, as shown in Sec.~\ref{sec:comp}, where $\snn$ is tested against numerical results. In order to derive a compact, but still meaningful expression, further approximations are needed.
So far, the strategy \TM{was based on} adding more complexity to the model when getting closer to the GB. In this respect, grains closest to it have been modeled in greater detail (\textit{e.g.}, employing anisotropic elasticity and mostly unknown loading conditions), while the grains further away required less modeling (\textit{e.g.}, employing isotropic elasticity and mostly known loading conditions).
With the goal to provide a compact and accurate analytical expression for $\snn$ (and the corresponding $\pdf(\snn)$), a few selected limits of the general result, Eq.~$\eqref{eq:snnGeneral}$, are investigated and discussed in more detail. Some of these limits will become very useful later, when a comparison with the numerical results is made in Sec.~\ref{sec:comp}.
\subsubsection{Isotropic limit ($k=0$)}
\label{sec:iso}
\TM{The initial (zeroth order) approximation $\snn^{(0)}$, representing the exact solution in the isotropic material limit, can be reproduced from Eq.~$\eqref{eq:snnGeneral}$ in two ways, either by assuming isotropic properties of the grains (\textit{i.e.,} by taking the appropriate $s^{\text{cry}}$) or taking the limit of very long chains ($L_n,L_t\to\infty$) with average properties ($E_b=\ave{E}$, $\nu_b=\ave{\nu}$)}, in which the chain-strain constraints become ineffective, resulting in stresses equal to external loading,
\ba
\snn^{(0)}&=&\Sigma_{zz}\\
&=&\Sigma_{XX}\sin^2\theta\cos^2\psi+\Sigma_{YY}\sin^2\theta\sin^2\psi \nonumber\\
&+&\Sigma_{ZZ}\cos^2\theta + \Sigma_{XY}\sin^2\theta\sin2\psi \nonumber\\
&+&\Sigma_{XZ}\sin2\theta\cos\psi + \Sigma_{YZ}\sin2\theta\sin\psi\nonumber.
\ea
Having a sufficient number of GBs with normals uniformly distributed on a sphere, the corresponding first two statistical moments of $\pdf(\snn^{(0)})$, the mean \TM{value} and standard deviation, can be straightforwardly expressed as
\ba
\begin{split}
\label{eq:isopdf}
\ave{\snn^{(0)}}&=&\frac{1}{3} \operatorname{tr}(\mathbf{\Sigma}^{\text{lab}}) , \\
s(\snn^{(0)})&=&\frac{2}{3\sqrt{5}}\ \Sigma^{\text{lab}}_{\text{mis}} ,
\end{split}
\ea
where \TM{$\tfrac{1}{3} \operatorname{tr}(\mathbf{\Sigma}^{\text{lab}})$ is a hydrostatic pressure, related to volume change of the aggregate,} and $\Sigma^{\text{lab}}_{\text{mis}}$ corresponds to von Mises external stress,
\TM{traditionally associated with the yielding of ductile materials. Von Mises stress is related to the deviatoric tensor (responsible for volume-preserving shape changes of the aggregate),
\ba
\begin{split}
\label{eq:mises}
\Sigma^{\text{lab}}_{\text{mis}}&:=\frac{\sqrt{3}}{\sqrt{2}} \sqrt{\operatorname{tr}\left ((\mathbf{\Sigma}^{\text{lab}}_{\text{dev}})^2 \right )} , \\
\label{eq:deviatoric}
\mathbf{\Sigma}^{\text{lab}}_{\text{dev}} &:= \mathbf{\Sigma}^{\text{lab}} - \frac{1}{3} \operatorname{tr}(\mathbf{\Sigma}^{\text{lab}}) \mathbb{1}_{3\times 3}.
\end{split}
\ea
Both $\operatorname{tr}(\mathbf{\Sigma}^{\text{lab}})$ and $\Sigma^{\text{lab}}_{\text{mis}}$ are rotational invariants and thus assume an identical form in all coordinate systems.
}
\TM{Even though Eq.~\eqref{eq:isopdf} is derived for the isotropic case ($k=0$), the same functional dependence of the first two statistical moments on $\mathbf{\Sigma}^{\text{lab}}$ is retained for all orders $k$, suggesting that the loading part can be trivially decoupled from the material and GB-type contributions. In the specific case when the external stress is of hydrostatic form (\textit{i.e.}, proportional to the identity matrix; $\mathbf{\Sigma}^{\text{lab}} := \Sigma_0 \, \mathbb{1}_{3\times 3}$), this can be easily confirmed. In that case, grain orientations have no effect, since the stress tensor is invariant under rotations. Hence, the trivial (hydrostatic) solution applies to the whole aggregate ($\snn^{(k)} = \Sigma_0$), resulting in an infinitely narrow stress (and strain) distribution. At the same time $\Sigma^{\text{lab}}_{\text{mis}} = 0$, therefore $s(\snn)\sim\Sigma^{\text{lab}}_{\text{mis}}$ applies for any material, not just an isotropic one.}
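The first two moments in Eq.~\eqref{eq:isopdf} are easy to verify by Monte-Carlo sampling of GB normals distributed uniformly on the sphere (a minimal sketch in Python/NumPy; the loading tensor below is arbitrary):
\begin{verbatim}
# Minimal sketch: Monte-Carlo check of the isotropic-limit moments.
# The mean of s_nn^(0) = n.Sigma.n over uniform normals is tr(Sigma)/3;
# its std is 2/(3 sqrt 5) * Sigma_mis.
import numpy as np

rng = np.random.default_rng(0)
Sigma = np.array([[1.0, 0.2, 0.0],
                  [0.2, 0.5, 0.1],
                  [0.0, 0.1, -0.3]])   # arbitrary symmetric loading

n = rng.normal(size=(200_000, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)   # uniform on the sphere
snn = np.einsum('ai,ij,aj->a', n, Sigma, n)

dev = Sigma - np.trace(Sigma)/3 * np.eye(3)
mis = np.sqrt(1.5 * np.trace(dev @ dev))
print(snn.mean(), np.trace(Sigma)/3)            # ~ equal
print(snn.std(),  2/(3*np.sqrt(5)) * mis)       # ~ equal
\end{verbatim}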
\subsubsection{Axially constrained bicrystal ($k=1$)}
\label{sec:bi}
The first non-trivial solution $\snn^{(1)}$ corresponds to a bicrystal, embedded axially in the isotropic bulk ($L_n\to 0$). As there are no lateral constraints imposed \TM{on} the two GB grains, this model corresponds to the $L_t\to\infty$ limit of the general model shown in Fig.~\ref{fig:chains}. However, to obtain a compact expression for $\snn^{(1)}$, another \TM{simplification is required}, which will be justified in Sec.~\ref{sec:comp}. Since we are interested in the response of $[abc]$-$[def]$-$\Delta\omega$ GBs, which have a well-defined difference of the two twist angles, $\snn^{(1)}$ is obtained by \TM{replacing $\omega_2$ in Eq.~\eqref{eq:snnGeneral} with $\omega_1+\Delta\omega$, and averaging it over $\omega_1$:}
\ba
\begin{split}
\label{eq:bi}
\snn^{(1)}&:=\frac{1}{2\pi}\int_{0}^{2\pi}\left.\left( \lim_{\substack{L_n\to0\\L_t\to\infty}}\snn\right)\right\vert_{\omega_2=\omega_1+\Delta\omega} \hspace{-1.5cm} d\omega_1\\
&= \Enn\Sigma_{zz} + \Enn\left(\nu_{12}-\ave{\nu}\right)\left(\Sigma_{xx}+\Sigma_{yy}\right) ,
\end{split}
\ea
where
\ba
\label{eq:e12nu12}
\begin{split}
\Enn&=\frac{2\ave{E}^{-1}}{E_{abc}^{-1}+E_{def}^{-1}}=\frac{2\ave{E}^{-1}}{s^{\text{GB},abc}_{3333}+s^{\text{GB},def}_{3333}} , \\
\nu_{12}&=-\frac{\ave{E}}{4}\left(s^{\text{GB},abc}_{3311}+s^{\text{GB},abc}_{3322}+s^{\text{GB},def}_{3311}+s^{\text{GB},def}_{3322}\right) ,
\end{split}
\ea
and
\ba
s^{\text{GB},hkl}_{33jj}&=&\hspace{-.4cm}\sum_{m,n,o,p=1}^3 \hspace{-.4cm} R_{3m}^{\text{cry},hkl} R_{3n}^{\text{cry},hkl} R_{jo}^{\text{cry},hkl} R_{jp}^{\text{cry},hkl} s_{mnop}^{\text{cry}} , \phantom{xxx}
\ea
for $j=1,2,3$ and $hkl = abc$ or $def$.
\TM{This approximation removes (averages out) all the twist-angle degrees of freedom. We will refer to it as the \textit{reduced} version of the model, intended to mimic the behavior observed in numerical studies.}
The derived compact expression for $\snn^{(1)}$ is the first main result of this study. It suggests that the GB-normal stress is a simple function of the loading part, \TM{contained in} $\Sigma_{xx}$, $\Sigma_{yy}$ and $\Sigma_{zz}$, and the GB\TM{-type} (and material) part, which is represented compactly by only two (composite) parameters $\Enn$ and $\nu_{12}$. While $\Enn$ has already been introduced in Ref.~\cite{elshawish2021} as an effective GB stiffness, measuring the average stiffness of the GB neighborhood along the GB-normal direction, the newly introduced $\nu_{12}$ can be seen as an effective GB Poisson's \TM{ratio}, measuring the average ratio of \TM{transverse and} axial responses \TM{(strains)} in both GB grains. Both $\Enn$ and $\nu_{12}$ are unitless and characterize the $[abc]$-$[def]$-$\Delta\omega$ GB neighborhood in terms of local material and GB-type parameters
\footnote{\TM{With the exception of $\Delta\omega$, whose influence is implicitly removed from Eq.~\eqref{eq:bi} by integration over $\omega_1$.}},
\ba
\Enn&=&\mathcal{F}(s^{\text{cry}},\ave{E},a,b,c,d,e,f) , \\
\nu_{12}&=&\mathcal{F}(s^{\text{cry}},\ave{E},a,b,c,d,e,f) .
\ea
Full analytic expressions for $\Enn$ and $\nu_{12}$ \TM{(as well as $\ave{E}$ and $\ave{\nu}$)} depend on the choice of the grain lattice symmetry (expressions for cubic lattice symmetry are given in Appendix~\ref{app:epsnn}). Note that expressions simplify considerably with more symmetric lattices. In cubic lattices, for example, a GB is fully characterized by $\Enn$ \TM{alone}, since $\nu_{12}=\ave{\nu}+\tfrac{1}{2}(\Enn^{-1}-1)$. In isotropic grains, $\Enn=1$ and $\nu_{12}=\ave{\nu}$, which recovers the $\snn^{(0)}$ solution.
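For cubic lattices, Eqs.~\eqref{eq:bi}--\eqref{eq:e12nu12} thus reduce to a few lines of code. The sketch below (Python/NumPy assumed; the compliances and bulk averages are hypothetical placeholders, not fitted material data) evaluates $\snn^{(1)}$ for a chosen $[abc]$-$[def]$ GB:
\begin{verbatim}
# Minimal sketch: the reduced bicrystal prediction for a cubic
# material.  E_abc follows from the classical cubic directional
# Young's modulus; all numerical constants are hypothetical.
import numpy as np

S11, S12, S44 = 7.7e-3, -2.9e-3, 8.6e-3   # hypothetical, in 1/GPa
E_avg, nu_avg = 200.0, 0.29               # hypothetical bulk averages

def E_dir(hkl):
    n = np.asarray(hkl, float)
    n /= np.linalg.norm(n)
    J = (n[0]*n[1])**2 + (n[1]*n[2])**2 + (n[2]*n[0])**2
    return 1.0 / (S11 - 2.0*(S11 - S12 - S44/2.0)*J)

def snn1(abc, defn, Szz, Sxx, Syy):
    E12 = (2.0/E_avg) / (1.0/E_dir(abc) + 1.0/E_dir(defn))
    nu12 = nu_avg + 0.5*(1.0/E12 - 1.0)   # cubic-lattice relation
    return E12*Szz + E12*(nu12 - nu_avg)*(Sxx + Syy)

# stiff [111]-[111] GB loaded along its normal: s_nn exceeds Sigma_zz
print(snn1([1, 1, 1], [1, 1, 1], Szz=1.0, Sxx=0.0, Syy=0.0))
\end{verbatim}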
Switching to a statistical behavior of \TM{infinitely} many $[abc]$-$[def]$-$\Delta\omega$ GBs with randomly \TM{oriented} GB planes, the first two statistical moments of $\pdf(\snn^{(1)})$, the mean \TM{value} and standard deviation, become
\ba
\begin{split}
\label{eq:bipdf}
\ave{\snn^{(1)}}&=\frac{\operatorname{tr}(\mathbf{\Sigma}^{\text{lab}})}{3}\Enn\left(1+2(\nu_{12}-\ave{\nu})\right),\\
s(\snn^{(1)})&=\frac{2\ \Sigma^{\text{lab}}_{\text{mis}}}{3\sqrt{5}} \Enn\sqrt{\left(1-\nu_{12}+\ave{\nu}\right)^2}.
\end{split}
\ea
For cubic lattices they simplify to $\ave{\snn^{(1)}}=\operatorname{tr}(\mathbf{\Sigma}^{\text{lab}})/3$ and $s(\snn^{(1)})=\Sigma^{\text{lab}}_{\text{mis}}/(3\sqrt{5})\left|1-3\Enn\right|$.
The fact that the mean stress equals $\Sigma/3$ for uniaxial loading $\Sigma$, \TM{and that the fluctuation of the GB-normal stress in cubic grains is a monotonic function of the single GB parameter $\Enn$ (although the functional dependence differs from that of Eq.~\eqref{eq:bipdf}),
has already been identified in (realistic) FE simulations~\cite{elshawish2021}.} However, the observed behavior can now be easily extended to other non-cubic lattices and to general external loading.
The accuracy of the derived expressions for local $\snn^{(1)}$, Eqs.~\eqref{eq:bi}--\eqref{eq:e12nu12}, and statistical $\pdf(\snn^{(1)})$, Eq.~\eqref{eq:bipdf}, is investigated in more detail in Sec.~\ref{sec:comp}.
\subsubsection{Axially constrained chain \TM{with} $L_n+2$ grains ($k=2$)}
\label{sec:axial}
The next-order solution $\snn^{(2)}$ corresponds to a single chain with $L_n+2$ grains, axially constrained by the isotropic bulk. The reason for adding a buffer grain of length $L_n>0$ \TM{to the bicrystal is to relax the axial strain constraint. In the previous ($k=1$) iteration, this constraint applies directly to the bicrystal, which produces too large (resp. small) stresses $\snn^{(1)}$ on very stiff (resp. soft) GBs, see Sec.~\ref{sec:comp}.}
Following the same reasoning and steps as in the bicrystal model, the resulting \textit{reduced} version is derived for a general grain-lattice symmetry and arbitrary external loading
\begin{widetext}
\ba
\begin{split}
\label{eq:chain}
\snn^{(2)}&:=\frac{1}{2\pi}\int_{0}^{2\pi}\left.\left( \lim_{L_t\to\infty}\snn\right)\right\vert_{\omega_2=\omega_1+\Delta\omega} \hspace{-1.5cm} d\omega_1 \\
&= \frac{2+L_n}{2\Enn^{-1}+L_n E_3^{-1}}\Sigma_{zz}+\frac{2}{2\Enn^{-1}+L_n E_3^{-1}} \left(\nu_{12}-\ave{\nu}-\frac{1}{2} L_n\left(\ave{\nu}-\nu_b E_3^{-1}\right)\right) \left(\Sigma_{xx}+\Sigma_{yy}\right) \\
&\approx\frac{2+L_n}{2\Enn^{-1}+L_n}\Sigma_{zz}+\frac{2 \left(\nu_{12}-\ave{\nu}\right)}{2\Enn^{-1}+L_n}\left(\Sigma_{xx}+\Sigma_{yy}\right) .
\end{split}
\ea
\end{widetext}
The same definitions of $\Enn$ and $\nu_{12}$ apply as in Eq.~\eqref{eq:e12nu12}, while $E_3:= E_b/\ave{E}$ and $\nu_b$ denote, respectively, the normalized elastic stiffness and Poisson's \TM{ratio} of the (isotropic) buffer grain. \TM{Its response corresponds to the average response of a chain with $L_n$ randomly oriented grains}
\ba
\begin{split}
\label{eq:buffer}
E_b&:= E_{L_n}^{\text{rnd}}=\ave{\frac{L_n}{\sum_{i}s^{\text{GB},i}_{3333}}}_{L_n} , \\
\nu_b&:=\nu_{L_n}^{\text{rnd}}=-\ave{\frac{\sum_{i} s^{\text{GB},i}_{1133}}{\sum_{i} s^{\text{GB},i}_{3333}}}_{L_n} .
\end{split}
\ea
The averaging $\ave{\ldots}_{L_n}$ is assumed over all \TM{possible linear configurations of $L_n$ grains with random orientations}, and the summation \TM{index $i$ runs} over the grains in each chain.
The elastic response of a buffer grain, \TM{calculated in this way, is usually} softer than that of the bulk ($E_3<1$). Nevertheless, it is convenient to assume $E_3\approx 1$ and $\nu_b\approx\ave{\nu}$. In fact, this assumption becomes realistic when the 3D effects are considered, \textit{e.g.}, the lateral coupling of the buffer grain \TM{to} the neighboring bulk (see Sec.~\ref{sec:3Deffects}).
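The chain averages in Eq.~\eqref{eq:buffer} can be estimated by direct sampling of random grain orientations (a minimal sketch reusing \texttt{s\_cry\_cubic} and \texttt{rotate4} from the earlier sketch; SciPy's random rotations are assumed):
\begin{verbatim}
# Minimal sketch: buffer-grain properties E_b, nu_b estimated by
# Monte-Carlo averaging over chains of Ln randomly oriented cubic
# grains (reuses s_cry_cubic/rotate4 and S11, S12, S44 from above).
import numpy as np
from scipy.spatial.transform import Rotation

def buffer_props(Ln=2, samples=2000):
    s = s_cry_cubic(S11, S12, S44)
    Eb, nub = [], []
    for _ in range(samples):
        Rs = Rotation.random(Ln).as_matrix()   # Ln random orientations
        sGB = [rotate4(s, R) for R in Rs]
        s3333 = sum(g[2, 2, 2, 2] for g in sGB)
        s1133 = sum(g[0, 0, 2, 2] for g in sGB)
        Eb.append(Ln / s3333)                  # chain axial stiffness
        nub.append(-s1133 / s3333)             # chain Poisson response
    return np.mean(Eb), np.mean(nub)
\end{verbatim}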
Assuming $E_3=1$ and $\nu_b=\ave{\nu}$, the mean \TM{value} and standard deviation of $\pdf(\snn^{(2)})$ become
\ba
\begin{split}
\label{eq:chainpdf}
\ave{\snn^{(2)}}&=&\frac{\operatorname{tr}(\mathbf{\Sigma}^{\text{lab}})}{3}\frac{2+L_n+4(\nu_{12}-\ave{\nu})}{2\Enn^{-1}+L_n} , \\
s(\snn^{(2)})&=&\frac{2\ \Sigma^{\text{lab}}_{\text{mis}}}{3\sqrt{5}}\frac{\sqrt{\left(2+L_n-2(\nu_{12}-\ave{\nu})\right)^2}}{2\Enn^{-1}+L_n},
\end{split}
\ea
which simplify for cubic lattices to $\ave{\snn^{(2)}}=\operatorname{tr}(\mathbf{\Sigma}^{\text{lab}})/3$, $s(\snn^{(2)})=2\Sigma^{\text{lab}}_{\text{mis}}/(3\sqrt{5}) \sqrt{\left ((3+L_n)-\Enn^{-1}\right )^2}/(2\Enn^{-1}+L_n)$.
In contrast to the bicrystal model, the $\snn^{(2)}$ expression depends also on the parameter $L_n$, which \TM{makes it} a mixture of bicrystal \TM{solution} $\snn^{(1)}$ (reproduced for $L_n\to 0$) and isotropic \TM{solution} $\snn^{(0)}$ (reproduced for $L_n\to\infty$). However, the effect of $L_n$ is negligible for GBs with $\Enn\sim 1$ and $\nu_{12}\sim\ave{\nu}$. As shown in Sec.~\ref{sec:comp}, the value $L_n\sim 2$ best replicates the numerical results.\\
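A corresponding one-function extension of the bicrystal sketch evaluates the approximate form of Eq.~\eqref{eq:chain} (again under the assumed $E_3=1$ and $\nu_b=\ave{\nu}$, reusing the hypothetical \texttt{E\_dir}, \texttt{E\_avg} and \texttt{nu\_avg} defined there):
\begin{verbatim}
# Minimal sketch: the reduced L_n-chain prediction (E_3 = 1,
# nu_b = <nu>).  Ln = 0 recovers snn1; Ln -> infinity recovers the
# isotropic limit Sigma_zz.
def snn2(abc, defn, Szz, Sxx, Syy, Ln=2.0):
    E12 = (2.0/E_avg) / (1.0/E_dir(abc) + 1.0/E_dir(defn))
    nu12 = nu_avg + 0.5*(1.0/E12 - 1.0)   # cubic-lattice relation
    denom = 2.0/E12 + Ln
    return ((2.0 + Ln)/denom * Szz
            + 2.0*(nu12 - nu_avg)/denom * (Sxx + Syy))
\end{verbatim}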
\subsubsection{Axially constrained chains \TM{with} $L_n+2$ and $L_t+1$ grains ($k=3$)}
\label{sec:extended}
The highest-order solution considered in this study is $\snn^{(3)}$. It corresponds to the complex configuration of chains shown in Fig.~\ref{fig:chains}. The axial chain consists of $L_n+2$ grains and the four transverse chains of $L_t+1$ grains. All the chains are assumed to be axially constrained to the \TM{strain of the} isotropic bulk \TM{of equal length}. \TM{In a similar fashion to the previous iterations}, the \textit{reduced} version can be derived for a general grain-lattice symmetry and \TM{arbitrary} external loading
\ba
\begin{split}
\label{eq:INS_general}
\snn^{(3)}&:= \frac{1}{2\pi}\int_{0}^{2\pi}\snn\bigg\vert_{\omega_2=\omega_1+\Delta\omega} \hspace{-1.4cm} d\omega_1 \\
&=A^{(3)}\Sigma_{zz} + B^{(3)} (\Sigma_{xx}+\Sigma_{yy}) ,
\end{split}
\ea
where, assuming $E_3=1$ and $\nu_b=\ave{\nu}$,
\begin{widetext}
\ba
\begin{split}
A^{(3)} &= \frac{(2+L_n) (s^{abc}_{tt}+\ave{E}^{-1} L_t) (s^{def}_{tt}+\ave{E}^{-1} L_t) + \ave{\nu} ((s^{abc}_{tt}+\ave{E}^{-1} L_t) s^{def}_{tl} + (s^{def}_{tt}+\ave{E}^{-1} L_t) s^{abc}_{tl})}{(2 E_{12}^{-1}+L_n) (s^{abc}_{tt}+\ave{E}^{-1} L_t) (s^{def}_{tt}+\ave{E}^{-1} L_t) - \tfrac{1}{2} \ave{E} ((s^{abc}_{tt}+\ave{E}^{-1} L_t) (s^{def}_{tl})^2 + (s^{def}_{tt}+\ave{E}^{-1} L_t) (s^{abc}_{tl})^2)} , \\
B^{(3)} &= -\frac{2 \, \ave{\nu} (s^{abc}_{tt}+\ave{E}^{-1} L_t) (s^{def}_{tt}+\ave{E}^{-1} L_t) + \tfrac{1}{2} (1+L_t-\ave{\nu}) ((s^{abc}_{tt}+\ave{E}^{-1} L_t) s^{def}_{tl} + (s^{def}_{tt}+\ave{E}^{-1} L_t) s^{abc}_{tl})}{(2 E_{12}^{-1}+L_n) (s^{abc}_{tt}+\ave{E}^{-1} L_t) (s^{def}_{tt}+\ave{E}^{-1} L_t) - \tfrac{1}{2} \ave{E} ((s^{abc}_{tt}+\ave{E}^{-1} L_t) (s^{def}_{tl})^2 + (s^{def}_{tt}+\ave{E}^{-1} L_t) (s^{abc}_{tl})^2)} , \\ \label{eq:C_trans_general}
\end{split}
\ea
\end{widetext}
for\\
\ba
\begin{split}
s^{hkl}_{tt} &:= \frac{1}{2} \left(s_{1111}^{\text{GB}, hkl} + s_{2222}^{\text{GB}, hkl}\right) + s_{1122}^{\text{GB}, hkl} , \\
s^{hkl}_{tl} &:= s_{1133}^{\text{GB}, hkl} + s_{2233}^{\text{GB}, hkl} , \\
s^{hkl}_{ll} &:= s_{3333}^{\text{GB}, hkl} := E_{hkl}^{-1} . \label{eq:combination}
\end{split}
\ea
The combinations of compliance-tensor components
\footnote{\TM{The compliance-tensor components $s_{ijkl}^{\text{GB}, hkl}$ depend on the twist angle $\omega$, but their linear combinations, defined in Eq.~\eqref{eq:combination}, do not. Hence, the reduced model solution $\snn^{(3)}$ in Eq.~\eqref{eq:INS_general} is indeed independent of $\Delta\omega$.}
},
introduced in Eq.~\eqref{eq:combination}, are related through a material-dependent (but GB-type-independent) linear combination
\ba
\begin{split}
2 s^{hkl}_{tt} + 2 s^{hkl}_{tl} + s^{hkl}_{ll} = & (s_{1111}^{\text{cry}} + s_{2222}^{\text{cry}} + s_{3333}^{\text{cry}})+ \\
&+ 2 (s_{1122}^{\text{cry}} + s_{1133}^{\text{cry}} + s_{2233}^{\text{cry}}),
\end{split}
\ea
which suggests that $\snn^{(3)}$ is a function of (at most) four local GB parameters (in addition to bulk properties $\ave{E}$, $\ave{\nu}$ and chain parameters $L_n$, $L_t$). \TM{In the $L_t\to\infty$ limit, $\snn^{(3)}$ reduces to $\snn^{(2)}$, see Eq.~\eqref{eq:chain}.}
The corresponding first two statistical moments can also be expressed analytically (but they are not shown here for brevity). They have the already familiar loading dependence,
\ba
\begin{split}
\ave{\snn^{(3)}}&\sim\frac{\operatorname{tr}(\mathbf{\Sigma}^{\text{lab}})}{3} , \\
s(\snn^{(3)})&\sim\frac{2\Sigma^{\text{lab}}_{\text{mis}}}{3\sqrt{5}} .
\end{split}
\ea
Expressions simplify further for higher lattice symmetries. For cubic lattices, for example, $A^{(3)}$ and $B^{(3)}$ become (again) only functions of the Young's moduli $E_{abc}$ and $E_{def}$ along the GB-normal direction (see Appendix~\ref{app:cubic})
\footnote{\TM{For cubic lattices, the $E_{abc}$ and $E_{def}$ parameters appear in a single combination ($\Enn$) in $\snn^{(1)}$ and $\snn^{(2)}$, while in $\snn^{(3)}$ there are two such combinations ($\Enn$ and $\Delta_{12}$), see Appendix~\ref{app:cubic}.}}.
All \TM{compact-form} solutions $\snn^{(k)}$, representing the special limits of the general solution, Eq.~\eqref{eq:snnGeneral}, are summarized in Table~\ref{tab:models}.
\begin{table*}
\caption{\label{tab:models}
A summary of derived models. \TM{Analytical solutions can be written in a compact form only in certain limits.}
}
\begin{ruledtabular}
\begin{tabular}{llllll}
\TM{$k$} & Model & Version\footnote{In contrast to the full version, the reduced version of the model \TM{eliminates} the twist-angle degrees of freedom, which makes the solution only approximate, but significantly more condensed. Note that both versions provide the same mean \TM{value} $\ave{\snn^{(k)}}$.} & Assumptions\footnote{Assumptions are taken with respect to the general solution; cf.~Eq.~\eqref{eq:snnGeneral}.}& Fitting parameters & Compact solution\footnote{Compact solutions are derived for a general grain-lattice symmetry.}\\
\colrule
$0$ & isotropic & full & $A^u=0$ or $L_n,L_t\to\infty$& - & $\snn^{(0)}=\mathcal{F}(\mathbf{\Sigma}^{\text{GB}})$\\
$1$ & bicrystal & full & $L_n\to 0,L_t\to\infty$ & - & - \\
& & reduced & $L_n\to 0,L_t\to\infty, \int d\omega_1\vert_{\omega_2=\omega_1+\Delta\omega}$ & - & $\snn^{(1)}=\mathcal{F}(\mathbf{\Sigma}^{\text{GB}},\Enn,\nu_{12},\ave{E},\ave{\nu})$\\
$2$ & $L_n$-chain & full & $L_t\to\infty$ & $L_n\ge 0$ & -\\
& & reduced & $L_t\to\infty, \int d\omega_1\vert_{\omega_2=\omega_1+\Delta\omega}$ & $L_n\ge 0$ & $\snn^{(2)}=\mathcal{F}(\mathbf{\Sigma}^{\text{GB}},\Enn,\nu_{12},\ave{E},\ave{\nu},L_n)$\\
$3$ & $L_n$-$L_t$-chain & full & - & $L_n, L_t\ge 0$ & -\\
& & reduced & $\int d\omega_1\vert_{\omega_2=\omega_1+\Delta\omega}$ & $L_n, L_t\ge 0$ & $\snn^{(3)}=\mathcal{F}(\mathbf{\Sigma}^{\text{GB}},s^{abc}_{tt},s^{abc}_{ll},s^{def}_{tt},s^{def}_{ll},\ave{E},\ave{\nu},L_n,L_t)$
\end{tabular}
\end{ruledtabular}
\end{table*}
\subsection{Models validation}
\label{sec:comp}
\begin{figure}
\includegraphics[width=0.9\columnwidth]{fig07a}
\vskip 0.5cm
\includegraphics[width=0.9\columnwidth]{fig07b}
\caption{Effect of external loading can be decoupled from other influences by a suitable choice of normalization factor (a) $\operatorname{tr}(\mathbf{\Sigma}^{\text{lab}})$ for mean value $\ave{\snn}$ and (b) $\Sigma^{\text{lab}}_{\text{mis}}$ for standard deviation $s(\snn)$. Simulation results are shown for Fe, $27$ different external loadings $\mathbf{\Sigma}^{\text{lab}}$ and three GB types. Non-normalized values are shown in the insets~(a) and~(b). Panel~(c) shows correspondence between $\mathbf{\Sigma}^{\text{lab}}$ and loading index ($1$-$27$).}
\label{fig:effectLoadNorm}
\end{figure}
In this section, the solutions $\snn^{(k)}$ of the derived models are tested against numerical results
\footnote{Having the exact constitutive (Hooke's) law, there are practically no physical uncertainties in numerical simulations besides finite-size effects, which can be diminished by using sufficiently large aggregates and sufficiently dense finite element meshes.}.
For demonstration purposes, only cubic elastic materials are chosen for comparison (see Appendix~\ref{app:mat} for the corresponding elastic properties).
Following the derived expressions, Eqs.~\eqref{eq:isopdf}, \eqref{eq:bipdf} and~\eqref{eq:chainpdf}, the mean \TM{value} $\ave{\snn}$ and standard deviation $s(\snn)$ of $\pdf(\snn)$ should depend trivially on the external loading $\mathbf{\Sigma}^{\text{lab}}$. Using the suggested normalization, $\ave{\snn}/\operatorname{tr}(\mathbf{\Sigma}^{\text{lab}})$ and $s(\snn)/\Sigma^{\text{lab}}_{\text{mis}}$, the first two statistical moments become independent of $\mathbf{\Sigma}^{\text{lab}}$, which is demonstrated in Fig.~\ref{fig:effectLoadNorm} for $27$ different loading configurations. Very good agreement
\footnote{Observed deviations from $1/3$ in Fig.~\ref{fig:effectLoadNorm}(a) are due to numerical artifacts which result from the division of two small numbers, $\ave{\snn}/\operatorname{tr}(\mathbf{\Sigma}^{\text{lab}})$, and the fact that $\ave{\snn}$ is approximate.}
between the prediction and the numerical results confirms the validity of the derived expressions being of the form $\snn^{(k)}=A^{(k)} \Sigma_{zz}+B^{(k)} (\Sigma_{xx}+\Sigma_{yy})$ for any $k$. Hence, a tensile loading $\Sigma$ will be used hereafter without loss of generality.
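As a quick plausibility check of this triviality, the following minimal Python sketch (all numerical values are hypothetical) samples uniformly distributed GB normals and verifies that any response of the form $\snn=A\,\Sigma_{zz}+B\,(\Sigma_{xx}+\Sigma_{yy})$ with $A+2B=1$ yields $\ave{\snn}/\operatorname{tr}(\mathbf{\Sigma}^{\text{lab}})\to 1/3$ for an arbitrary loading tensor:
\begin{verbatim}
# Minimal sketch (hypothetical values): mean of s_nn over random GB normals.
import numpy as np

rng = np.random.default_rng(0)
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, -1.0, 0.3],
                  [0.0, 0.3, 0.5]])     # arbitrary symmetric lab loading
A = 0.8                                  # illustrative A^(k)
B = 0.5 * (1.0 - A)                      # enforce A + 2B = 1

n = rng.normal(size=(100000, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)    # uniform GB normals
S_zz = np.einsum('ij,jk,ik->i', n, Sigma, n)     # Sigma_zz in the GB frame
s_nn = A * S_zz + B * (np.trace(Sigma) - S_zz)   # Sigma_xx+Sigma_yy = tr-S_zz

print(s_nn.mean() / np.trace(Sigma))             # approx 1/3 for any Sigma
\end{verbatim}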
\begin{figure}
\includegraphics[width=0.9\columnwidth]{fig08}
\caption{Standard deviation $s(\snn/\Sigma)$ as a function of the effective GB-stiffness parameter $\Enn$. A comparison is shown between numerical results (FE) and different model predictions from Table~\ref{tab:models}. The results are evaluated for Li on all GBs of a specific type, each thus corresponding to a certain $\Enn$ value. Six GB types are used in total (here, solid lines are unphysical and are meant only to indicate the trend).}
\label{fig:effectE12}
\end{figure}
In Fig.~\ref{fig:effectE12} the normalized standard deviation $s(\snn/\Sigma)$ is shown for polycrystalline Li (cubic symmetry) as a function of the effective GB-stiffness parameter $\Enn$, which is a single characteristic parameter of the $[abc]$-$[def]$ GB. Results of the different models from Table~\ref{tab:models} are compared with the results of finite element simulations. Li is chosen because of its very high elastic anisotropy ($A^u=7.97$), which makes the comparison more challenging.
Although none of the model predictions for $s(\snn/\Sigma)$ are very accurate, some of the models are more appropriate than others. The $L_n$-$L_t$-chain (full version) model results are grouped into three families (each with a given color) with a common axial chain length $L_n=0$, $2$ or $5$. While the response of the $L_n=0$ (red) family is too steep for all transverse chain lengths $L_t$, overestimating $s(\snn/\Sigma)$ at large $\Enn$, the response of the $L_n=2$ (green) and $L_n=5$ (blue) families is too gradual for $L_t\lesssim 2$, overestimating $s(\snn/\Sigma)$ at small $\Enn$. These models are recognized as inappropriate. In addition, all three model families show, for $L_t=0$, a sudden change in the slope of $s(\snn/\Sigma)$, which is not observed numerically, suggesting that $L_t=0$ models are also unsuitable. Most favorable are therefore the $2\lesssim L_n\lesssim 5$, $L_t\gtrsim 1$ models, which predict $s(\snn/\Sigma)$ consistently below the numerical curve. This systematic underestimation of fluctuations is compensated later in Sec.~\ref{sec:gauss} by accounting for loading fluctuations, which are generated by the external loading mediated through the anisotropic bulk surrounding the $L_n$-$L_t$-chain model (see the last stage in Fig.~\ref{fig:pert}).
For comparison, the results of the $L_n$-chain (reduced version) model are also shown in Fig.~\ref{fig:effectE12}. The advantage of the latter is the compact formulation of $\snn^{(2)}$ and its statistical moments. The resulting $s(\snn/\Sigma)$ curves show a similar dependence on $\Enn$ as the corresponding $L_n$-$L_t\to\infty$ (full version) models, however, with slightly reduced fluctuations in the mid-$\Enn$ range
\footnote{Since the responses on $[001]$-$[001]$ GB type (with corresponding $E_{12,\text{min}}$) and $[111]$-$[111]$ GB type (with corresponding $E_{12,\text{max}}$) are independent of twist angles $\omega_1, \omega_2$, the predictions of the $L_n$-chain model (reduced version) and $L_n$-$L_t$-chain model (full version, with $L_t\to\infty$) are the same.}.
Since in either case additional fluctuations need to be added to fit the numerical results, the $L_n$-chain model with $2\lesssim L_n\lesssim 5$ is also considered appropriate.
\begin{figure}
\includegraphics[width=0.9\columnwidth]{fig09}
\caption{Standard deviation $s(\snn/\Sigma)$ as a function of twist-angle difference $\Delta\omega$ associated with the $[112]$-$[112]$-$\Delta\omega$ GB type. A comparison is shown between numerical results (FE) and different model predictions from Table~\ref{tab:models}. The properties of Li are used. Note that $[112]$-$[112]$-$\Delta\omega$ GBs correspond to $\Enn=0.77$, irrespective of the value of $\Delta\omega$. Inset shows the agreement between the FE result and the response of $L_n$-$L_t$-chain model for $L_n=2$ and $L_t=1$ (note the artificial shift accounting for the missing fluctuations).}
\label{fig:effectdom}
\end{figure}
In Fig.~\ref{fig:effectdom} the response of a single $[112]$-$[112]$-$\Delta\omega$ GB type is shown in terms of $s(\snn/\Sigma)$ as a function of $\Delta\omega$, using numerical simulations and model predictions from Table~\ref{tab:models} for polycrystalline Li, so that the results can be associated with those of Fig.~\ref{fig:effectE12}. According to the numerical curve, very small variations in $s(\snn/\Sigma)$ are observed across the whole $\Delta\omega$ range, which is consistent with previous results~\cite{elshawish2021}. Using the same coloring and labeling scheme as in Fig.~\ref{fig:effectE12}, a very good agreement with simulations is achieved for the $L_n$-$L_t$-chain model with $L_n=2$ and $L_t=1$ (see the inset of Fig.~\ref{fig:effectdom}). The other two families of curves produce either too large ($L_n=0$, in red) or too small ($L_n=5$, in blue) variations across the $\Delta\omega$ range. Since the twist-angle degrees of freedom are integrated out, the response of the $L_n$-chain (reduced version) model is independent of $\Delta\omega$, which is, by design, also in good agreement with the numerical results.
The results of Figs.~\ref{fig:effectE12} and~\ref{fig:effectdom} seem to \TM{favor} the $L_n$-$L_t$-chain model with $L_n\sim 2$ and $L_t\sim 1$. This is further corroborated by noting that the overall shift in $s(\snn/\Sigma)$ (by $\sim$0.1), used to fit the simulation results in the inset of Fig.~\ref{fig:effectdom}, matches very well the gap at $\Enn=0.77$ (corresponding to the $[112]$-$[112]$-$\Delta\omega$ GB) between the two corresponding curves in Fig.~\ref{fig:effectE12}.
\begin{figure}
\includegraphics[width=0.9\columnwidth]{fig10}
\caption{(a) Mean local stress $\ave{\snn/\Sigma}$ and (b) corresponding standard deviation $s(\snn/\Sigma)$, both evaluated numerically as a function of $\cos^2\theta$ for a finite range of GB tilt angles, $\delta(\cos\theta)=0.05$. The averaging range is denoted by horizontal error bars and the averaged values by dots. Lines in panel (a) are linear fits with slope $K$. Twist angle difference $\Delta\omega$ in $[112]$-$[112]$-$\Delta\omega$ GB has a negligible influence on slope $K$ (inset (a)) but a significant effect on the standard deviation (inset (b)).}
\label{fig:effectCos2}
\end{figure}
In the following, the evaluation of the derived models is shifted from the macro- to the mesoscale using the linear correlation property, $\snn^{(k)}/\Sigma=A^{(k)}+B^{(k)}\cos^2\theta$, derived for external uniaxial loading $\Sigma$
\footnote{For cubic lattices and $E_3=1$, $A^{(2)}=(1-\Enn)/(2+L_n\Enn)$ and $B^{(2)}=1+3(\Enn-1)/(2+L_n\Enn)$.}.
A statistical analysis performed on a subset of GBs with a fixed angle $\theta$ (or $\cos^2\theta$) between the GB normal and the uniaxial loading direction is useful because it allows one to test the validity of individual parts of the expressions in $\snn^{(k)}$ (\textit{e.g.}, $A^{(k)}$ and $B^{(k)}$). Such analyses are demonstrated in Figs.~\ref{fig:effectCos2} and~\ref{fig:effectK} for polycrystalline Li.
The local mean $\ave{\snn/\Sigma}$ and standard deviation
\footnote{The $s(\snn/\Sigma)$ results from Fig.~\ref{fig:effectCos2}(b) are discussed later in Sec.~\ref{sec:gauss}.}
$s(\snn/\Sigma)$ are shown in Fig.~\ref{fig:effectCos2} as functions of $\cos^2\theta$. Due to the finite aggregate size, the mean and standard deviation are obtained at a given $\cos^2\theta$ by averaging over the Euler angles $\psi$ and $\phi$ on a finite (but small) range of GB tilt angles, $\delta(\cos\theta)=0.05$. The proposed linear trend is nicely reproduced, showing a clear effect of different GB types on the corresponding slopes $K$ of the fitted lines. In general, the slope $K$ increases with increasing GB stiffness (parameter $\Enn$, see also Fig.~\ref{fig:effectK}). However, there is a very weak effect of $\Delta\omega$ on the corresponding slope $K$ when evaluated on the $[112]$-$[112]$-$\Delta\omega$ GB. This suggests that, on average, the GB stiffness (which is independent of $\Delta\omega$, see Eq.~\eqref{eq:E12cubic}) is the main contributor to $\snn$ at a given $\cos^2\theta$.
It is interesting to note the crossing point in Fig.~\ref{fig:effectCos2}(a) at $\cos^2\theta=1/3$, at which $\snn$ becomes independent of both material and GB-type properties. This point is exactly reproduced by all non-trivial models ($\snn^{(k)}$, $k>0$). The value of $\snn$ at this point is $\Sigma/3$ (actually $\operatorname{tr}(\mathbf{\Sigma})/3$ for arbitrary loading).
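This property follows directly from the compact solutions; for instance, a short sketch using the cubic $A^{(2)}$ and $B^{(2)}$ expressions quoted in the footnote above (assuming $E_3=1$) confirms that $\snn^{(2)}/\Sigma$ equals $1/3$ at $\cos^2\theta=1/3$ irrespective of $\Enn$ and $L_n$:
\begin{verbatim}
# Sketch: the crossing point at cos^2(theta) = 1/3 (cubic lattices, E_3 = 1).
def snn2_over_sigma(cos2theta, E12, Ln):
    A = (1.0 - E12) / (2.0 + Ln * E12)               # A^(2)
    B = 1.0 + 3.0 * (E12 - 1.0) / (2.0 + Ln * E12)   # B^(2) = slope K
    return A + B * cos2theta

for E12 in (0.5, 0.77, 1.0, 1.6):                    # illustrative values
    for Ln in (0.0, 2.0, 5.0):
        assert abs(snn2_over_sigma(1.0/3.0, E12, Ln) - 1.0/3.0) < 1e-9
\end{verbatim}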
\begin{figure}
\includegraphics[width=0.9\columnwidth]{fig11}
\caption{Slope $K$, obtained from Fig.~\ref{fig:effectCos2}(a), versus the effective GB-stiffness parameter $\Enn$. A comparison is shown between numerical results (FE) and different model predictions from Table~\ref{tab:models}. The properties of Li are used. Note that the $L_n$-chain model (reduced version) and the $L_n$-$L_t$-chain model (full version with $L_t\to\infty$) provide identical slopes $K=1+3(\Enn-1)/(2+L_n\Enn)$, assuming $E_3=1$. There is no effect of $L_n$ when $\Enn=E_3= 1$.}
\label{fig:effectK}
\end{figure}
The simulation results from Fig.~\ref{fig:effectCos2}(a) are analyzed further in Fig.~\ref{fig:effectK}, where the actual dependence of the slope $K$ on $\Enn$ is presented and compared with the predictions of the models from Table~\ref{tab:models}. While the increasing trend is captured well by all the models, none of the presented curves fits the numerical result very accurately for all $\Enn$. In fact, this holds true for any combination of $L_n, L_t$ values in the $L_n$-$L_t$-chain model. The most suitable solution is chosen to be that of the $L_n$-chain model (reduced version) for $L_n=2$, which matches the true $K$ values at the two extreme $\Enn$ points. The same agreement is observed also for other cubic materials (not shown).
The $L_n$-$L_t$-chain model with $L_n\sim 2$ and $L_t\sim 1$, which has been selected as the most suitable model at the macroscale (see Figs.~\ref{fig:effectE12} and~\ref{fig:effectdom}), provides in Fig.~\ref{fig:effectK} a very similar $K(\Enn)$ response to the $L_n$-chain model for $L_n=2$. In this sense, both models seem equally acceptable; however, the latter will be preferred due to its much more compact formulation.
In summary, although the qualitative behavior of $\snn$ is well reproduced by the selected model ($\snn^{(2)}$ for $L_n\sim 2$) over a wide range of parameters (associated with external loading, material properties and GB type), two ingredients still seem to be missing. The first is related to the systematic underestimation of stress fluctuations observed on the macroscale, and the second is linked to the insufficient agreement of mean stresses on the mesoscale. Both issues are addressed in the next section.
\subsection{Model upgrades}
\subsubsection{Variable axial strain constraint and 3D effects}
\label{sec:3Deffects}
The observed inconsistency in Fig.~\ref{fig:effectK} is attributed to (i) the imposed axial strain constraint of the $L_n$-chain model and (ii) 3D effects which have been omitted in the model derivation. The 3D effects include primarily a non-zero lateral coupling of the axial grain chain with the elastic bulk. Depending on the relative axial stiffness of the chain with respect to the bulk, this coupling may effectively either increase or decrease the chain stiffness, resulting in larger or smaller $\snn$, respectively. To model this in a 1D framework, the elastic properties of both the GB grains and the buffer grain need to be amended
\footnote{Alternatively, one could try to resolve the observed inconsistency by simply assuming a variable buffer length $L_n=\mathcal{F}(\Enn,A^u)$. However, it becomes clear from Fig.~\ref{fig:effectK} that such an approach fails to produce correct slopes $K$ for $\Enn\sim 1$ (as there is no effect of $L_n$ for $\Enn=E_3= 1$). This confirms that the observed mismatch cannot be resolved solely by assuming a variable axial strain constraint, controlled by $L_n$ in Eq.~\eqref{eq:constrSym}, and that 3D effects need to be employed on $\Enn$ and $\nu_{12}$, too.}.
For the GB grains, therefore,
\ba
\begin{split}
\label{eq:dE12}
\Enn&\to \Enn+\delta\Enn , \\
\delta\Enn&=\mathcal{F}(\Enn,\nu_{12},L_n,A^u) ,
\end{split}
\ea
and similarly
\ba
\begin{split}
\label{eq:dnu12}
\nu_{12}&\to \nu_{12}+\delta\nu_{12} , \\
\delta\nu_{12}&= \mathcal{F}(\Enn,\nu_{12},L_n,A^u) .
\end{split}
\ea
The assumed functional dependence of $\delta\Enn$ in Eq.~\eqref{eq:dE12} can be explained with the help of Fig.~\ref{fig:dE12}, where field lines are used to visualize schematically the force field around the two GB grains under tensile loading. While the force lines are always parallel in the 1D model (no lateral coupling with the bulk), they concentrate within/outside the stiffer/softer (larger/smaller $\Enn$) GB grains in the 3D model. Obviously, the effect gets stronger for $\Enn\to E_{12,\text{max}}$ or $\Enn\to E_{12,\text{min}}$ and for increasing material anisotropy $A^u$. To account for more (fewer) field lines in stiffer (softer) GB grains, $\delta\Enn>0$ ($\delta\Enn<0$) should be used in the 1D modeling. However, using a non-zero $\delta\Enn$ (or $\delta\nu_{12}$) also affects the boundary condition applied on the chain scale in Eq.~\eqref{eq:constrSym}. Since the latter is regulated also by the length of the buffer grain $L_n$, $\delta\Enn$ and $L_n$ are coupled, as indicated in Eq.~\eqref{eq:dE12}.
\begin{figure}
\includegraphics[width=\columnwidth]{fig12}
\caption{Schematic view of field lines crossing through the stiff (orange) and soft (green) GB grains in 3D and 1D models.}
\label{fig:dE12}
\end{figure}
In a similar way, the properties of the buffer grain (of length $L_n$) are modified due to lateral coupling with the bulk:
\ba
\begin{split}
\label{eq:dbuffer}
E_3&\to E_3+\delta E_3\approx 1 , \\
\nu_b&\to \nu_b+\delta\nu_b\approx\ave{\nu} .
\end{split}
\ea
The above mapping follows from the fact that a chain of randomly oriented grains, when coupled laterally to the bulk, should, on average, behave similarly to the bulk itself. Equality in Eq.~\eqref{eq:dbuffer} is achieved for $L_n\to\infty$, while very small deviations are observed at $L_n=2$ (see footnote~\ref{f1}). As mentioned, this modification has already been implemented in Eq.~\eqref{eq:chain} by setting $E_3= 1$ and $\nu_b=\ave{\nu}$.
Since $E_b$ and $\nu_b$ can be evaluated (\textit{e.g.}, numerically) from Eqs.~\eqref{eq:buffer} for a given material and $L_n$, the corresponding increments can be estimated directly from Eqs.~\eqref{eq:dbuffer}. By design, the same increments should also apply to the GB grains, $(\delta\Enn,\delta\nu_{12})=(\delta E_b,\delta\nu_b)$, if $(\Enn,\nu_{12})=(E_b,\nu_b)$. Unfortunately, there seems to be no analytical approach to identify the increments for a general pair $(\Enn,\nu_{12})$. In the following, the functional dependence of $\delta\Enn$ (and $\delta E_3$) is therefore derived empirically for materials with cubic lattice symmetry, where a further simplification is possible due to the mutual dependence of $\Enn$ and $\nu_{12}$ ($\nu_{12}=\ave{\nu}+(\Enn^{-1}-1)/2$).
The expression for the reduced version of the $L_n$-chain model, Eq.~\eqref{eq:chain}, simplifies for cubic materials and general macroscopic loading to
\ba
\begin{split}
\label{eq:cubic}
\snn^{(2)}&=\frac{2+L_n}{2\Enn^{-1}+L_n E_3^{-1}}\Sigma_{zz} + \\
&+\frac{1}{2}\left(1-\frac{2+L_n}{2\Enn^{-1}+L_n E_3^{-1}}\right) \left(\Sigma_{xx}+\Sigma_{yy}\right) ,
\end{split}
\ea
which reduces further for uniaxial macroscopic loading $\Sigma$ to
\ba
\begin{split}
\snn^{(2)}/\Sigma&=\frac{1}{2}\left(1-\frac{2+L_n}{2\Enn^{-1}+L_n E_3^{-1}}\right) + \\
&+\frac{3}{2}\left(-\frac{1}{3}+\frac{2+L_n}{2\Enn^{-1}+L_n E_3^{-1}}\right) \cos^2\theta ,
\end{split}
\ea
where $\theta$ is the angle between the GB normal and the uniaxial loading direction. As discussed before, the model can be upgraded by assuming $\delta\Enn=\mathcal{F}(\Enn,L_n,A^u)$ for the two GB grains and $\delta E_3=\mathcal{F}(E_3,L_n,A^u)$ for the buffer grain. Both increments can be calculated numerically from the requirement that the resulting modified slope (the factor in front of $\cos^2\theta$),
\be
\label{eq:slopeK}
K=\frac{3}{2}\left(-\frac{1}{3}+\frac{2+L_n}{2(\Enn+\delta\Enn)^{-1}+L_n (E_3+\delta E_3)^{-1}}\right),
\ee
matches the corresponding $K^{\text{FE}}$ slope obtained from the FE simulations for different $\Enn$ values and materials (see Fig.~\ref{fig:effectK}, where the results for Li are shown)
\footnote{The corresponding increments are deduced in two steps. First, $\delta E_3$ is identified from $K(\Enn= E_3,E_3)=K^{\text{FE}}(E_3)$ for the assumed $\Enn= E_3$ and $\delta\Enn=\delta E_3$ in Eq.~\eqref{eq:slopeK}, where $E_3$ is evaluated numerically using Eq.~\eqref{eq:buffer} for a given material $A^u$ and buffer length $L_n$. In practice, the $K^{\text{FE}}(E_3)$ value is estimated by interpolating from several $K^{\text{FE}}(\Enn)$ values. Once $\delta E_3$ is known, $\delta\Enn$ is obtained from $K(\Enn,E_3)=K^{\text{FE}}(\Enn)$.}.
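For $E_3+\delta E_3$ known, the inversion of Eq.~\eqref{eq:slopeK} for the increment is available in closed form; a minimal Python sketch of this second step (the input values are hypothetical):
\begin{verbatim}
# Sketch: invert Eq. (eq:slopeK) for delta_E12, given a slope K_FE from FE.
def increment_from_slope(K_FE, E12, Ln=2.0, E3_eff=1.0):
    # K = 3/2 * (-1/3 + (2+Ln)/D)  with  D = 2/(E12+dE12) + Ln/E3_eff
    D = 1.5 * (2.0 + Ln) / (K_FE + 0.5)
    return 2.0 / (D - Ln / E3_eff) - E12

print(increment_from_slope(K_FE=0.9, E12=0.77))   # hypothetical K_FE value
\end{verbatim}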
\begin{figure}
\includegraphics[width=0.9\columnwidth]{fig13}
\caption{Calculated (dots) and fitted (lines) increments $\delta\Enn$ as a function of $\Enn$ for the chosen reduced $L_n$-chain model (with $L_n=2$) applied to various materials with cubic lattice symmetry. Fitting function, Eq.~\eqref{eq:fit}, has been \TM{chosen} based on the observed symmetry $\delta\Enn(\Enn)$ for $L_n=2$. Inset shows that very good agreement is preserved also when the symmetric $L_n=2$ fitting function is used for other $L_n$ (shown for Li).}
\label{fig:dE12cubic}
\end{figure}
The results for $\delta\Enn$ are shown in Fig.~\ref{fig:dE12cubic} for $L_n=2$ and various materials
\footnote{\label{f1}Results also show that the relative buffer stiffness, when coupled to the bulk, is bounded by $1\le E_3+\delta E_3\le 1.03$ for $L_n\ge 2$ and all the materials shown in Fig.~\ref{fig:dE12cubic}.}.
As anticipated, $\delta\Enn$ depends strongly on the GB stiffness $\Enn$, the elastic grain anisotropy $A^u$ and the buffer length $L_n$ (inset of Fig.~\ref{fig:dE12cubic}). Interestingly, for $L_n=2$ a (quasi) symmetry is recognized in the $\delta\Enn(\Enn)$ curves for all investigated materials
\footnote{The authors have not yet resolved whether the observed symmetry is a coincidence or an intrinsic property of the (reduced) $L_n$-chain model.}.
The symmetry is lost when $L_n\ne2$.
Based on the observed symmetry in Fig.~\ref{fig:dE12cubic} for $L_n=2$, the following empirical fit is proposed for all cubic materials with corresponding elastic anisotropy index $A^u$:
\ba
\begin{split}
\label{eq:fit}
\delta\Enn&=C_1-\left|\Enn-\bar{E}_{12}\right|^{C_2} , \\
\bar{E}_{12}&=\frac{1}{2} \left(E_{12,\text{min}}+E_{12,\text{max}}\right) , \\
\delta\Enn(E_{12,\text{min}})&= 0 , \\
\delta\Enn(E_{12,\text{max}})&= 0 .
\end{split}
\ea
The best agreement with FE results is obtained for
\ba
\begin{split}
\label{eq:fit2}
C_1&=0.08 (A^u)^{0.85} , \\
C_2&=\frac{\log C_1}{\log (E_{12,\text{max}}-E_{12,\text{min}})-\log 2} .
\end{split}
\ea
It seems quite surprising that the proposed fitting function, Eq.~\eqref{eq:fit}, with only two adjustable parameters ($0.08$ and $0.85$) in Eq.~\eqref{eq:fit2}, provides such a good agreement for the wide range of (cubic) materials shown in Fig.~\ref{fig:dE12cubic}. Good agreement is also retained when the $L_n=2$ fitting function is used for $L_n\ne2$ models (assuming $E_3+\delta E_3= 1$ in Eq.~\eqref{eq:slopeK}), as shown in the inset of Fig.~\ref{fig:dE12cubic}. The identified empirical relation represents the second main result of this study.
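A minimal sketch of this fit, Eqs.~\eqref{eq:fit}--\eqref{eq:fit2}; the material-specific inputs ($E_{12,\text{min}}$, $E_{12,\text{max}}$, $A^u$) are assumed to be known:
\begin{verbatim}
# Sketch of Eqs. (eq:fit)-(eq:fit2): empirical increment for cubic lattices.
import numpy as np

def delta_E12(E12, E12_min, E12_max, Au):
    C1 = 0.08 * Au**0.85
    C2 = np.log(C1) / (np.log(E12_max - E12_min) - np.log(2.0))
    Ebar = 0.5 * (E12_min + E12_max)
    # By construction, delta_E12 vanishes at both E12_min and E12_max.
    return C1 - np.abs(E12 - Ebar)**C2
\end{verbatim}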
\subsubsection{Stochastic loading fluctuations}
\label{sec:gauss}
So far, the original external loading $\Sigma^{\text{lab}}$ (also $\Sigma$) has been assigned to all GB models from Table~\ref{tab:models}. However, in reality, this assumption is true only on average. In fact, a GB and its immediate neighborhood far away from the external surfaces feel an external loading modified by fluctuations, $\Sigma+f$, where $f$ denotes the fluctuation stress tensor. The fluctuations $f$ arise as a consequence of transmitting the far-away loading $\Sigma$ to the GB neighborhood through the elastic bulk of anisotropic grains (see the last stage in Fig.~\ref{fig:pert}).
To account for loading fluctuations in the estimation of $\snn$ and $\pdf(\snn)$, it is assumed for simplicity that the fluctuation normal stress $f_{nn}$ is a random variable with Gaussian distribution $\mathcal{N}(0,s^2(f_{nn}))$, where the standard deviation depends on the external loading and on the grain anisotropy, $s(f_{nn})\approx\mathcal{F}(\Sigma,A^u)$. The dependence of $s(f_{nn})$ on internal GB degrees of freedom (\textit{e.g.}, $\theta$, $\Enn$, $\omega_1$, $\omega_2$) is neglected to a first approximation, which is supported by the results of Fig.~\ref{fig:effectCos2}(b). The latter indeed show that the standard deviation $s(\snn/\Sigma)$, evaluated on various $[abc]$-$[def]$ GB types at fixed GB tilts $\cos^2\theta$ with respect to the external tensile loading $\Sigma$, is practically independent of $\cos^2\theta$ (and thus of $\snn$ itself), but slightly dependent on GB type
\footnote{While the primary source of fluctuations in Fig.~\ref{fig:effectCos2}(b) is the anisotropic GB neighborhood, the secondary source is a finite range of GB tilt angles, $\delta(\cos\theta)=0.05$, which provides negligible contribution to $s(\snn/\Sigma)$.}.
A model of stress fluctuations is derived in Appendix~\ref{app:gauss}.
Considering that stress fluctuations are independent of stresses themselves, a new update can be proposed as
\ba
\begin{split}
\label{eq:conv}
\tilde{\sigma}_{nn}^{(k)}&=\snn^{(k)}+f_{nn}^{(k)} , \\
s^2(\tilde{\sigma}_{nn}^{(k)})&=s^2(\snn^{(k)})+s^2(f_{nn}^{(k)}) , \\
\pdf(\tilde{\sigma}_{nn}^{(k)})&=\left( \pdf\star \mathcal{N}(0,s^2(f_{nn}^{(k)}))\right)(\snn^{(k)}) ,
\end{split}
\ea
where the symbol $\star$ denotes a convolution.
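Numerically, the convolution in Eq.~\eqref{eq:conv} amounts to a Gaussian broadening of the model $\pdf$; a minimal sketch on a uniform stress grid:
\begin{verbatim}
# Sketch of Eq. (eq:conv): Gaussian broadening of a tabulated pdf(s_nn).
import numpy as np

def broaden(x, pdf, s_fnn):
    dx = x[1] - x[0]                       # uniform grid assumed
    kern = np.exp(-0.5 * ((x - x.mean()) / s_fnn)**2)
    kern /= kern.sum() * dx                # normalized Gaussian kernel
    out = np.convolve(pdf, kern, mode='same') * dx
    return out / (out.sum() * dx)          # renormalize after truncation
\end{verbatim}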
\begin{figure}
\includegraphics[width=0.9\columnwidth]{fig14}
\caption{Standard deviation of GB\TM{-stress} (normal) fluctuations $s(f_{nn}/\Sigma)$, evaluated numerically on different GB types with fixed GB tilt (and later averaged over different GB tilts, see Fig.~\ref{fig:effectCos2}(b)) and for different materials ($A^u$) under tensile loading $\Sigma$. A shaded region represents the proposed empirical domain of fluctuations. A generalization to arbitrary loading can be done by substituting $\Sigma$ with $\Sigma_{\text{mis}}$ on the vertical axis (see Appendix~\ref{app:gauss}).}
\label{fig:stdG}
\end{figure}
To identify the standard deviation $s(f_{nn}^{(k)})$ for tensile $\Sigma$, the results from Fig.~\ref{fig:effectCos2}(b) can be averaged over different GB tilts; they are shown in Fig.~\ref{fig:stdG} for different materials as a function of $A^u$. The obtained standard deviation $s(f_{nn}/\Sigma)$ is taken as a measure of the local stress fluctuations $f_{nn}$. As expected, $s(f_{nn}/\Sigma)$ increases with $A^u$, following a simple empirical law $s(f_{nn}/\Sigma)=(0.070\pm0.018) \left(A^u\right)^{0.37\mp 0.03}$. The $\pm$ sign denotes a finite width of the $s(f_{nn}/\Sigma)$ domain, which is attributed to GB internal degrees of freedom.
The empirical fit is generalized further to arbitrary loading using the familiar normalization for the second statistical moment (see Appendix~\ref{app:gauss} for more detail),
\be
\label{eq:conv2}
s(f_{nn})=\Sigma_{\text{mis}} (0.070\pm0.018) \left(A^u\right)^{0.37\mp 0.03}.
\ee
The above relation applies not only to cubic but also to non-cubic materials
\footnote{It is also interesting to note that a hydrostatic loading $\Sigma$ provides no GB stress fluctuations even in the case of anisotropic grains.}.
For example, tensile fluctuations evaluated in calcium sulfate (CaSO$_4$), with orthorhombic lattice symmetry and $A^u=2.78$, fall accurately within the proposed domain in Fig.~\ref{fig:stdG}.
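In practice, Eq.~\eqref{eq:conv2} is a one-line recipe; a sketch with the $\pm$ ambiguity exposed as an optional argument:
\begin{verbatim}
# Sketch of Eq. (eq:conv2); sign = -1, 0, +1 selects the lower edge, the
# center, or the upper edge of the empirical fluctuation domain.
def s_fnn(Sigma_mis, Au, sign=0.0):
    return Sigma_mis * (0.070 + sign * 0.018) * Au**(0.37 - sign * 0.03)
\end{verbatim}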
\section{\label{verifi}Verification of upgraded models}
\label{sec:4}
\subsection{Cubic materials}
\begin{figure*}
\includegraphics[width=0.6\columnwidth]{fig15a}
\includegraphics[width=0.6\columnwidth]{fig15b}
\includegraphics[width=0.6\columnwidth]{fig15c}
\caption{Statistical stress distributions $\pdf(\snn)$ evaluated on three different GB types in Fe for $12$ different macroscopic loadings (grouped into purely diagonal, purely shear and mixed loadings $\mathbf{\Sigma}$). An excellent agreement is shown between simulation results (solid lines) and upgraded model predictions (dashed lines) for all the cases; cf.~Eq.~\eqref{eq:cubic2}.}
\label{fig:effectLoad}
\end{figure*}
The statistical response $\pdf(\tilde{\sigma}_{nn}^{(2)})$ of the upgraded cubic GB model (using $L_n= 2$ and $E_3+\delta E_3= 1$ in Eq.~\eqref{eq:cubic}),
\ba
\begin{split}
\label{eq:cubic2}
\tilde{\sigma}_{nn}^{(2)}&=\frac{2}{(\Enn+\delta\Enn)^{-1}+1}\Sigma_{zz} + \\
&+\frac{1}{2}\left(1-\frac{2}{(\Enn+\delta\Enn)^{-1}+1}\right)\left(\Sigma_{xx}+\Sigma_{yy}\right) + \\
&+ f_{nn} ,
\end{split}
\ea
where $\delta\Enn$ is estimated by Eqs.~\eqref{eq:fit}, \eqref{eq:fit2} and $s(f_{nn})$ by Eq.~\eqref{eq:conv2}, is verified in Fig.~\ref{fig:effectLoad} for polycrystalline Fe under different macroscopic loadings $\Sigma$. The predicted $\pdf(\tilde{\sigma}_{nn}^{(2)})$ distributions are calculated numerically using Monte Carlo sampling of the two
\footnote{Since $\snn^{(k)}=A^{(k)} \Sigma_{zz}+B^{(k)} (\Sigma_{xx}+\Sigma_{yy})$ for any $k$, the third Euler angle $\phi$ drops out from the $\snn^{(k)}$ expression.}
Euler angles $(\theta,\psi)$, which are used to evaluate $\Sigma_{xx}$, $\Sigma_{yy}$ and $\Sigma_{zz}$ defined in Eq.~\eqref{eq:sigGB}. An excellent agreement with the simulation results is demonstrated, confirming the accuracy of the proposed model for an \textit{arbitrary} GB type, an \textit{arbitrary} (cubic) material and \textit{arbitrary} macroscopic loading conditions.
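For reference, a condensed Python sketch of this Monte Carlo procedure is given below; it reuses the \texttt{delta\_E12} and \texttt{s\_fnn} helpers sketched above, and the loading, GB type and extreme GB stiffnesses are hypothetical placeholders:
\begin{verbatim}
# Monte Carlo sketch of pdf(sigma_nn^(2)~) from Eq. (eq:cubic2).
import numpy as np

rng = np.random.default_rng(1)
Sigma = np.diag([1.0, 0.0, 0.0])       # tensile lab loading (Sigma_mis = 1)
E12, Au = 0.77, 7.97                   # illustrative GB type; Li anisotropy
E12_min, E12_max = 0.36, 3.13          # hypothetical extreme GB stiffnesses

N = 200000
cos_t = rng.uniform(-1.0, 1.0, N)      # random GB normals: uniform cos(theta)
psi = rng.uniform(0.0, 2.0*np.pi, N)   # and uniform psi
sin_t = np.sqrt(1.0 - cos_t**2)
n = np.stack([sin_t*np.cos(psi), sin_t*np.sin(psi), cos_t], axis=1)

S_zz = np.einsum('ij,jk,ik->i', n, Sigma, n)
E_eff = E12 + delta_E12(E12, E12_min, E12_max, Au)   # Eq. (eq:fit)
C = 2.0 / (1.0/E_eff + 1.0)                          # L_n = 2, E3+dE3 = 1
snn = C*S_zz + 0.5*(1.0 - C)*(np.trace(Sigma) - S_zz)
snn += rng.normal(0.0, s_fnn(1.0, Au), N)            # fluctuations f_nn

pdf, edges = np.histogram(snn, bins=100, density=True)
\end{verbatim}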
\begin{figure}
\includegraphics[width=0.9\columnwidth]{fig16}
\caption{Stress distributions $\pdf(\snn/\Sigma)$ evaluated on all (random) GBs in Fe with grains elongated along the $Z$-axis (elongation factor $\lambda_z$) for tensile loading $\Sigma$ along the $X$-axis. An excellent agreement is shown between simulation results (black) and model predictions (red) for all three cases; cf.~Eqs.~\eqref{eq:elong}--\eqref{eq:elong2}.}
\label{fig:elongated}
\end{figure}
In Fig.~\ref{fig:elongated} a comparison is shown for polycrystalline Fe with elongated grains, to verify the applicability of the derived models to materials with non-zero morphological texture (but with zero crystallographic texture). In this comparison, the $\pdf$ response is calculated on all GBs (random type) using the following simple relation (see Appendix~\ref{app:random}):
\be
\label{eq:elong}
\pdf_{\text{rnd}}(\tilde{\sigma}_{nn}^{(k)})\approx\pdf(\tilde{\sigma}_{nn}^{(0)}).
\ee
The response of the random GBs is calculated using the convolution of the isotropic solution $\pdf(\snn^{(0)})$ and the Gaussian distribution $\mathcal{N}(0,s^2(f_{nn}))$, with $s(f_{nn})$ from Eq.~\eqref{eq:conv2}
\footnote{Since the $\pdf$ of the FE model is calculated on all GBs of an aggregate, including those with the smallest GB areas, finite-size effects (due to poor meshing) result in wider $\pdf$ distributions. For this reason, a $\sim$40$\%$ larger $s(f_{nn})$ is used in Fig.~\ref{fig:elongated} to fit the FE results accurately.}.
The distributions are calculated numerically using Monte Carlo sampling of the two Euler angles $(\theta,\psi)$ with the following distribution functions (see Appendix~\ref{app:elongated}):
\ba
\begin{split}
\label{eq:elong2}
f(\cos\theta)&=\frac{\lambda_z}{2}\left(\frac{1}{1+(\lambda_z^2-1)\cos^2\theta}\right)^{3/2} , \\
f(\psi)&=\frac{1}{2\pi} ,
\end{split}
\ea
for $-1\le\cos\theta\le1$ and $0\le\psi\le2\pi$, with a scaling factor $\lambda_z>0$ accounting for grain elongation along the $Z$-axis ($\lambda_z=1$ denoting no scaling).
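Sampling from $f(\cos\theta)$ is straightforward, since its cumulative distribution can be inverted analytically; a minimal sketch:
\begin{verbatim}
# Sketch: inverse-transform sampling of cos(theta) from Eq. (eq:elong2).
import numpy as np

def sample_cos_theta(lam_z, size, rng=np.random.default_rng()):
    q = rng.uniform(-1.0, 1.0, size)   # q = 2*CDF(cos(theta)) - 1
    return q / np.sqrt(lam_z**2 * (1.0 - q**2) + q**2)

# lam_z = 1 recovers a uniform cos(theta), i.e. equiaxed grains.
\end{verbatim}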
Again, an excellent agreement with the simulation results is demonstrated in Fig.~\ref{fig:elongated}, which confirms the accuracy of the proposed model when applied to materials with \textit{arbitrary} morphological texture.
\begin{figure*}
\includegraphics[width=0.8\columnwidth]{fig17}
\caption{(a) Local $\snn/\Sigma$ stress response in Li under macroscopic tensile loading $\Sigma$. A comparison is shown between FE simulation (lines) and the \TM{results of three GB models (shapes). The $50$ largest GBs in a $4000$-grain aggregate (see Fig.~\ref{fig:geom}) are shown, to which a specific GB type ($[001]$-$[001]$, $[334]$-$[102]$ or $[111]$-$[111]$) is assigned in each panel. The GBs are sorted (indexed) in descending order with respect to FE results for stresses. (b) Probability distributions ($\pdf$) of discrepancy between the model prediction and FE result for GB stress, demonstrating how accurate the three GB models are (locally). All $1631$ special GBs are considered in the $\pdf$ (as opposed to $50$ shown in (a)).} Gaussian distributions with standard deviations $s(f_{nn}/\Sigma)$ from Fig.~\ref{fig:stdG} are added for comparison (no fitting has been applied).}
\label{fig:local}
\end{figure*}
In Fig.~\ref{fig:local}, the accuracy of the GB models is further tested on a local GB scale, using FE simulations of polycrystalline Li under tensile loading $\Sigma$ as a reference. In particular, three models (of increasing complexity) are compared: (i) the isotropic model $\snn^{(0)}$, (ii) the reduced and upgraded version of the $L_n$-chain ($L_n=2$, $L_t\to\infty$) model $\snn^{(2)}$ and (iii) the full version of the $L_n$-$L_t$-chain ($L_n=2$, $L_t=1$) model $\snn^{(3)}$. The accuracy of the models is tested locally by comparing $\snn^{(k)}$ values with the FE results $\snn^{\text{FE}}$ evaluated on individual GBs of a particular type (three GB types are tested in total).
According to Fig.~\ref{fig:local}, both the $\snn^{(2)}$ and $\snn^{(3)}$ models are comparable in accuracy, outperforming the simplest $\snn^{(0)}$ model on the softer $[001]$-$[001]$ and stiffer $[111]$-$[111]$ GBs. The uncertainties (deviations from the true FE values) in the $\snn^{(2)}$ model (and also the $\snn^{(3)}$ model) are of Gaussian type, with zero mean and standard deviation (exactly!) equal to $s(f_{nn})$ from Fig.~\ref{fig:stdG} (see dashed lines in Fig.~\ref{fig:local}(b)). This confirms the validity (and consistency) of the $\snn^{(2)}$ model, which is shown to be accurate up to unknown loading fluctuations $f_{nn}$ (which are substantial in Li). The latter are therefore the only
\footnote{In case of an invalid GB model, standard deviation of local stress errors would be larger than that of loading stress fluctuations, $s(\snn^{(2)}-\snn^{\text{FE}})>s(f_{nn})$.}
source of local stress uncertainties (errors), $\snn^{(2)}-\snn^{\text{FE}}\approx f_{nn}$, suggesting that $\tilde{\sigma}_{nn}^{(2)}\approx\snn^{\text{FE}}$.
\subsection{Non-cubic materials}
\begin{figure}
\includegraphics[width=0.9\columnwidth]{fig18}
\caption{\TM{$19$ representative directions $[abc]$, from which GB normals in either GB grain were selected for orthorhombic material (CaSO$_4$). They correspond to $190$ different GB types $[abc]$-$[def]$, considered in our numerical studies.} The standard stereographic triangle is shown for reference.}
\label{fig:stereo}
\end{figure}
\begin{figure}
\includegraphics[width=0.9\columnwidth]{fig19}
\caption{\TM{Simulation results (circles) for $\ave{\snn/\Sigma}$ in CaSO$_4$ under tensile loading $\Sigma$, evaluated on $190$ $[abc]$-$[def]$ GB types, constructed from the $19$ selected directions shown in Fig.~\ref{fig:stereo}. For comparison, a smooth prediction (iso-lines) is shown for $\ave{\snn^{(2)}/\Sigma}$ of the $L_n=2$ reduced model; cf.~Eq.~\eqref{eq:chainpdf}.} The inset shows the same results in 3D plot.}
\label{fig:ortho}
\end{figure}
To provide accurate stress distributions for non-cubic materials, expressions for $\delta\Enn$ and $\delta\nu_{12}$ would need to be derived (see Eqs.~\eqref{eq:dE12} and~\eqref{eq:dnu12}) to account for the variable axial strain constraint and the 3D effects missing in the $L_n$-chain model. The procedure should follow the one described for cubic materials in Sec.~\ref{sec:3Deffects}. However, this is left for future analyses.
In Fig.~\ref{fig:ortho} the simulation results for the average stress $\ave{\snn/\Sigma}$ are presented, evaluated on $190$ $[abc]$-$[def]$ GB types obtained as combinations of the $19$ directions defined in Fig.~\ref{fig:stereo}, for the orthorhombic material CaSO$_4$ under tensile loading $\Sigma$. For comparison, a smooth prediction of $\ave{\snn^{(2)}/\Sigma}$ from Eq.~\eqref{eq:chainpdf} is shown for an arbitrarily chosen $L_n=2$. A good qualitative agreement is demonstrated (without fine-tuning of $L_n$), implying that, to a good approximation, only two parameters, $\Enn$ and $\nu_{12}$, are needed to characterize the response of a general GB, in agreement with the prediction of the GB model.
\section{\label{discussion}Discussion}
\label{sec:5}
The derived $\pdf(\snn)$ distributions are not only very accurate, as demonstrated for various scenarios (see Figs.~\ref{fig:effectLoad} and~\ref{fig:elongated}), but also computationally undemanding. If they are produced numerically, using Monte Carlo sampling of GB-normal directions, the results can be immediately used for several practical applications. For instance, one could predict GB-damage initiation in complex geometries using a probabilistic approach. If the GB strength $\sigma_c$ of each GB type were known (or measured), and the stress field $\tou{\Sigma}(\tou{r})$ in the investigated component at least roughly estimated (\textit{e.g.}, in FE simulations using a homogeneous material), one could immediately obtain the probability of finding an overloaded GB of a specific type at an \textit{arbitrary} location $\tou{r}$ in the component, $P(\tou{r})=\int_{\sigma_c}^{\infty}\pdf(\snn)d\snn$, using $\tou{\Sigma}(\tou{r})$ as an input for the external loading to produce $\pdf(\snn)$. If that probability exceeded a threshold value, $P(\tou{r})>P_f$, a macroscopic-size crack might develop at $\tou{r}$, which could result in a catastrophic failure of the component. With such an approach, potentially dangerous regions, susceptible to intergranular cracking, can be quickly identified for any component and its loading. A more detailed analysis of such an application will be presented in a separate publication~\cite{elshawish2022draft}.
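A minimal numerical sketch of this probability, assuming $\pdf(\snn)$ has already been tabulated (as NumPy arrays) on a uniform stress grid for the local loading $\tou{\Sigma}(\tou{r})$:
\begin{verbatim}
# Sketch: probability of an overloaded GB, P = integral of pdf above sigma_c.
def overload_probability(x, pdf, sigma_c):
    dx = x[1] - x[0]                    # uniform stress grid assumed
    return pdf[x >= sigma_c].sum() * dx
\end{verbatim}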
In all the examples presented so far, static elastic loads have been assumed in expressions for $\snn$ and $\pdf(\snn)$. The procedure can be generalized also to dynamic stresses, provided that stress amplitudes remain in the elastic domain and inertia effects are negligible. In this respect, $\pdf(\snn)$ spectra can be used to predict even the initiation of GB-fatigue cracks~\cite{koyama2015}. Following the above procedure for static load and assuming time-dependent evolution of GB strength (due to the build-up of strain localization~\cite{koyama2015}), the probability $P(\tou{r},t)=\int_{\sigma_c(t)}^{\infty}\pdf(\snn)d\snn$ becomes time dependent too. The \TM{measurement} data can then be used, for example, to estimate \TM{how $P_f$ and GB-strength evolution $\sigma_c(t)$ change with the number of loading cycles}.
Although the semi-analytical $\snn$ expression, derived for cubic crystal lattices, provides accurate $\pdf(\snn)$ distributions for a wide range of situations, it relies not only on analytical, but also on empirical considerations (the estimation of $\delta\Enn$ and $s(f_{nn})$). A (quasi) symmetry of the $\delta\Enn(\Enn)$ curves was observed for $L_n=2$ and all investigated materials. The origin of this feature is not yet understood; it might even be accidental. Nonetheless, it can be very useful, since it makes the search for the fitting function significantly simpler (Fig.~\ref{fig:dE12cubic}). For this reason, it is important to gain a better understanding of this (quasi) symmetry for cubic lattices (and possibly even non-cubic lattices) in the future.
In a similar way, the effect of more distant grains has not been modeled explicitly. Instead, it was conveniently packed into an empirical fit of $s(f_{nn})$, which represents the amplitude of the GB-stress fluctuations (Fig.~\ref{fig:stdG}). The fact that these fluctuations are more or less independent of the stresses makes the fitting function $s(f_{nn})$ relatively simple and, most importantly, the calculation of $\pdf(\snn)$ very accurate (by applying Gaussian broadening with known width $s(f_{nn})$). However, the accuracy on a local GB scale is limited by the same $s(f_{nn})$ (representing the uncertainty of the model predictions), and the corresponding errors can be substantial in highly anisotropic materials. A possible improvement would necessarily include an exact modeling of the more distant grains (whose structure should probably be considered at a similar level of detail as the two GB grains). Unfortunately, this would probably result in very cumbersome and impractical solutions.\\
The derivation of the GB models was based on two major ideas: the perturbative approach and Saint Venant's principle. A possible alternative approach could follow one of the well-known methods used for the calculation of the effective elastic constants of polycrystals from single-crystal and structure properties. For example, in the self-consistent method invented by Kr\"oner~\cite{kroner58}, an effective stress-strain relation is derived by taking into account the boundary conditions for stresses and strains at the GBs, which are only statistically correct. Analytical results are given for macroscopically isotropic polycrystals composed of crystal grains with cubic symmetry~\cite{hershey}, and also for a general lattice symmetry~\cite{kroner58}. Replicating such an approach, the established relation between the local (single-grain) and macroscopic quantities would need to be modified to account for bi-crystal instead of single-crystal local quantities. While this might be worth trying, it has (at least) one significant shortcoming, common to all multi-scale techniques: it fails to reproduce additional degrees of freedom on a local scale (which would manifest themselves in stress fluctuations and thus in wider $\pdf(\snn)$ distributions), given that there are fewer degrees of freedom on a macroscopic scale. Therefore, additional improvements would be needed (as was done here) to obtain accurate $\pdf(\snn)$. A detailed analysis along these lines is left for future work.
\section{Conclusions}
\label{sec:6}
In this study, a perturbative \TM{model} of grain-boundary-normal stresses has been derived for an arbitrary grain-boundary type \TM{within} a general polycrystalline material, composed of randomly shaped elastic continuum grains with arbitrary lattice symmetry, and under a general uniform external loading. The \TM{constructed} perturbative models have been solved under reasonable assumptions, \TM{needed to obtain compact, yet still} accurate analytical and semi-analytical expressions for local grain-boundary-normal stresses and the corresponding statistical distributions. The strategy \TM{for} deriving the models \TM{was based on two central} concepts. Using the perturbation principle, the \TM{complexity of the model is gradually increased in each successive step, allowing us to first solve and understand simpler variants of the model}. Following the Saint Venant's principle, \TM{anisotropic elastic properties of the two grains closest to grain boundary have been considered in full}, while the effect of \TM{more distant} grains has been modeled in much smaller detail, using average quantities such as elastic grain anisotropy or bulk isotropic stiffness parameter.
The following conclusions have been reached from the solutions of derived perturbative models:
\begin{itemize}
\item The general $k$-th order solution for the local grain-boundary-normal stress is of the following form: $\tilde{\sigma}_{nn}^{(k)}=A^{(k)}\Sigma_{zz}+B^{(k)}\left(\Sigma_{xx}+\Sigma_{yy}\right)+f_{nn}^{(k)}$, where $A^{(k)}$ and $B^{(k)}$ are the analytic functions of grain-boundary type and elastic material properties, $\Sigma_{ii}$ is a \TM{diagonal} component of the external loading tensor $\mathbf{\Sigma}$, \TM{expressed in a local grain-boundary system}, and $f_{nn}^{(k)}$ is a random variable, representing loading fluctuations.
\item To a good approximation ($k=2$), the response on a chosen grain boundary can be characterized by just two parameters: $\Enn$ measures the average stiffness of grain-boundary neighborhood along the normal direction, while $\nu_{12}$ is an effective Poisson's \TM{ratio}, measuring the average ratio of \TM{transverse and} axial responses in both adjacent grains.
\item For an arbitrary lattice symmetry, $A^{(2)}$ and $B^{(2)}$ are simple functions, $\mathcal{F}(\Enn,\nu_{12},\ave{E},\ave{\nu},L_n)$, where $\ave{E}$ and $\ave{\nu}$ denote average elastic bulk properties, and $L_n\ge 0$ is a modeling parameter accounting for the \TM{amount of buffer grains}. Higher-order solutions ($k>2$) have also been obtained, but with resulting expressions too cumbersome to be useful in practice.
\item To account for 3D effects and realistic boundary conditions, a model upgrade has been proposed by assuming $\Enn\to\Enn+\delta\Enn$ and $\nu_{12}\to\nu_{12}+\delta\nu_{12}$ in the expressions for $A^{(2)}$ and $B^{(2)}$, with $\delta\Enn$ and $\delta\nu_{12}$ \TM{obtained from fitting the results of numerical simulations}. A simple empirical relation for $\delta\Enn$ (and $\delta\nu_{12}$) has been derived for materials with cubic crystal lattices.
\item To account for realistic stresses acting on a grain-boundary model, the external loading has been dressed by fluctuations, $\mathbf{\Sigma}\to\mathbf{\Sigma}+\mathbf{f}$. To a good approximation, the resulting fluctuations of the grain-boundary-normal stresses ($f_{nn}$) have been found to be independent of the stresses. Their distribution is Gaussian, with standard deviation of the form $s(f_{nn})\approx\Sigma_{\text{mis}}\mathcal{F}(A^u)$, where $\mathcal{F}(A^u)$ is an empirical function that increases with the universal elastic anisotropy index $A^u$.
\item A comparison with finite element simulations has demonstrated that the derived semi-analytical expression for a local $\tilde{\sigma}_{nn}^{(2)}$ is accurate only up to unknown stress fluctuations, \TM{\textit{i.e.}, the uncertainty of the model prediction is $s(f_{nn})$}. However, the corresponding statistical distributions, $\pdf(\tilde{\sigma}_{nn}^{(2)})$, have been shown to be very accurate. Indeed, an \textit{excellent agreement} with the simulation results has been found for arbitrary grain-boundary types in a general elastic untextured polycrystalline material
\footnote{Materials with cubic lattice symmetry have been chosen in this article for demonstration purposes only.}
under arbitrary uniform loading.
\item From the application point of view, a reliable tool has been derived for the quick and accurate calculation of grain-boundary-normal-stress distributions. We expect it to prove extremely useful for the probabilistic modeling of grain-boundary-damage initiation such as IGSCC.
\end{itemize}
\section*{Acknowledgments}
\label{sec:7}
We gratefully acknowledge the financial support provided by the Slovenian Research Agency (grant P2-0026). We also thank J\'er\'emy Hure for useful discussions and comments that helped us to improve the manuscript.
\section{Introduction}
The dynamics of a spherical shell expanding or contracting subject to a gravitational pull in a given background space is an interesting problem with applications in astrophysics, for instance the ejection of matter in the explosion of a supernova\cite{Sato}, and in cosmology, to model the structure formation of galaxies and clusters\cite{Benzin}. The dynamics of a shell moving in the background of a charged black hole were beautifully discussed in the pioneering work of Israel\cite{Israel1} and de la Cruz and Israel\cite{Israel}. These studies showed that it is in principle possible for a body collapsing to a black hole to re-emerge from a white hole in a different universe, so the analytical extensions from a charged black hole to a white one appeared as a result of the different phenomena seen by different observers: for the observer at infinity, the shell simply falls towards the hole and is infinitely red-shifted as it asymptotically approaches the exterior horizon; for the observer falling with the shell, on the other hand, it takes a finite amount of time not only to reach the horizon but to go beyond it and, under some circumstances, re-bounce.
The case of two shells colliding had even more surprises. It is remarkable that predictions of new phenomena emerge from the good old energy-momentum conservation law. Everybody knows that energy and momentum are the same before and after the collision, but an important thing to notice is that the contracting shell has a potential energy with respect to the expanding shell, and after the collision this potential energy is suddenly removed, so it has to transform into kinetic energy, thereby generating the known blue-shift effect. For the expanding shell the changes are just as radical: from an expansion in a space determined only by the gravitational mass located at the origin, say $m_D$, after the collision an observer moving with the shell finds himself bound to a greater gravitational mass, $m_D$ plus the (increased) gravitational mass of the imploding shell. The changes can be so dramatic that the observer may find himself inside a newly generated black hole!
Among the implications of these phenomena are the conclusion that white holes have a short life\cite{Eardley,Blau} (they are buried by a black hole), and the mass-inflation phenomenon\cite{P&I:90,Israel2}, which has opened a new field of research on black hole interiors; for a review see\cite{Israel3}.
The studies mentioned above were carried out using null shells as a model; there remained the question whether or not the conclusions reached were still valid in a more realistic model in which one or both shells are massive. This is the problem with which we are concerned in this paper. In the next section we present a brief review of the dynamics of massive spherical thin shells moving in curved backgrounds. In the third section we use the energy-momentum conservation law and arrive at a general constraint equation which relates the different parameters involved in the collision. We prove that in the light-like limit we recover the known results; we also present the case of a collision between a light-like shell and a massive one, and work out some particular cases of the collision of two massive shells. Finally, we present our conclusions and suggest some lines of further research.
\section{Massive shells}
The junction conditions for arbitrary boundary surfaces and the
equations of motion for a time-like thin shell have been well
understood since the works of Israel\cite{Israel1},
Barrab\`es\cite{Barrabes}, de la Cruz and Israel\cite{Israel},
Kuchar\cite{Kuchar}, Chase\cite{Chase}, Balbinot and
Poisson\cite{BB}, and Lake\cite{Lake}. In this section we give a
brief review of this subject and obtain the equations of motion for a
thin shell.
Let $\Sigma$ be a hypersurface separating a given space-time $M$ into two parts, $M^\pm$, and let ${n^\alpha}$ be the unit normal vector of $\Sigma$ pointing from ${M^-}$ to ${M^+}$. Let ${x^\alpha_{\pm}}$ be a system of coordinates in ${M^{\pm}}$ and let ${\xi^a}$ be a system of intrinsic coordinates on $\Sigma$. The vectors ${e^\alpha_{(a)}}$ tangent to $\Sigma$ are defined by
\begin{equation}
e^\alpha_{(a)}=\frac{\partial x^\alpha}{\partial \xi^a},
\end{equation}
\begin{equation}
n_\alpha \, e^\alpha_{(a)}=0,
\end{equation}
and act as projectors from $M$ onto the hypersurface $\Sigma$; from now on we will, in general, suppress the use of the $\pm$ indices.
It can be shown that the covariant derivative of any vector $A^\alpha$ tangent to $\Sigma$ has components along the vectors ${e^\alpha_{(a)}}$ and ${n^\alpha}$, and is given by:
\begin{equation}
A^\alpha_{|\beta}
\,e^\beta_{(b)}=A^a_{;b}e^\alpha_{(a)}+A^a\,K_{ab}\,n^\alpha,
\label{eq:A}
\end{equation}
where $A^\alpha=A^a \, e^\alpha_{(a)}$, the stroke denotes the covariant derivative with respect to the four-metric ${g_{\mu\nu}}$, and the semicolon denotes the covariant derivative with respect to the three-metric ${h_{ab}=e^\alpha_{(a)}e_{\alpha(b)}}$. The extrinsic curvature ${K_{ab}}$ is defined by
\begin{equation}
K_{ab}=n_{\alpha|\beta} \, e^\alpha_{(a)}e^\beta_{(b)}=-n_\alpha \,
e^\alpha_{(a)|\beta} \, e^\beta_{(b)}.\label{eq:k}
\end{equation}
Taking $A^\alpha $ as the basis vector $e^\alpha_{(d)}$ in (\ref{eq:A}), calculating the covariant derivative along the vector $e^\beta_{(c)}$, and using the Ricci commutation relations, we obtain the well-known Gauss-Codazzi equations
\begin{equation}
R_{\alpha \beta \gamma \delta} \, e^\alpha_{(a)} \,e^\beta_{(b)}
\,e^\gamma_{(c)} \,e^\delta_{(d)} = R_{abcd}- K_{ac} \, K_{bd} +
K_{bc} \, K_{ad}.\label{eq:B}
\end{equation}
\begin{equation}
R_{\alpha \beta \gamma \delta} \,n^\alpha \,e^\beta_{(b)}
\,e^\gamma_{(c)} \,e^\delta_{(d)} =K_{bc;d} - K_{bd;c}.\label{eq:C}
\end{equation}
Acting on (\ref{eq:B}) and (\ref{eq:C}) with $ h^{a b}$ and using the
relation
\begin{equation}
h^{ab} \, e^\alpha_{(a)} \,e^\beta_{(b)} = g^{\alpha \beta} -n^\alpha
\, n^{\beta},
\end{equation}
we find
\begin{equation}
{}^3R - K_{ab} \,K^{ab} + K^2 =-2 G_{\alpha \beta} \,n^\alpha
\,n^\beta ,
\end{equation}
\begin{equation}
K^b_{a;b} - K_{;a} = -G_{\alpha \beta}\, e^\alpha_{(a)} \,n^\beta ,
\end{equation}
where ${}^3R $ is the intrinsic 3-curvature invariant of $\Sigma $, $
K=h^{ab} \,K_{ab}$ and $ G_{\alpha \beta }$ is the Einstein tensor.
In general, the components of the tensor $K_{ab} $ when measured with
respect to $M^+$ and $ M^- $ may be different. In order that $\Sigma
$ be the history of a thin shell we must impose the condition that
\begin{equation}
\gamma_{ab}=K^+_{ab} - K^-_{ab} = [K_{ab}],\label{eq:gam}
\end{equation}
be non-vanishing. Such a discontinuity is related to the intrinsic stress-energy tensor of the surface, $S_{ab}$, through the Lanczos equation\cite{Israel1}:
\begin{equation}
\gamma_{ab} - \gamma \, g_{ab} = - 8\,\pi\,S_{ab},\label{eq:S}
\end{equation}
where $g_{ab}$ is the intrinsic metric of $\Sigma$ and $\gamma =
g^{ab}\,\gamma_{ab}$.
The proper surface density, $\sigma$, is defined by the following
equation:
\begin{equation}
\sigma = {S_a}^b\,u^a\,u_b ,\label{eq:sig}
\end{equation}
where $u^a$ is the vector tangent to the shell, already normalized,
$u^a\,u_a = -1$. From (\ref{eq:S}) and (\ref{eq:sig}), we can easily
find that
\begin{equation}
\gamma_{ab}\,u^a\,u^b + \gamma = - 8\,\pi\,\sigma.\label{eq:Ssig}
\end{equation}
Now we will consider the case of a spherical shell where the
intrinsic coordinates on $\Sigma$ are given by $\xi^a=(\tau, \theta,
\phi)$; $\tau$ is the proper time along the streamlines
$\theta,\phi=const.$ so the line element
$ds^2|_\Sigma$ is given by
\begin{equation}
ds^2|_\Sigma=R^2(\tau) \, d\Omega^2 - d\tau^2, \label{eq:Slin}
\end{equation}
where $d\Omega^2=d\theta^2 + \sin^2\theta \,d\phi^2$, and $R(\tau )$
is the radius of the shell. According to Birkhoff's theorem, the
line element in both $M^+$ and $M^-$ is reducible to
\begin{equation}
ds^2_\pm= {\cal H}_\pm\,dv_\pm \, ( - {\cal H}_\pm\,f_\pm\,dv_\pm +
2\, dr) + r^2 \, d\Omega^2, \label{eq:ell} \end{equation}
where ${\cal H}_\pm$ and $f_\pm$ are functions of $r$, and the
respective $v_\pm$.
The unit normal and the velocity are easily calculated and have the following expressions:
\begin{equation}
n_\alpha = \epsilon \, {\cal H}\,(-\dot R, \dot v, 0, 0),
\label{eq:nor}
\end{equation}
\begin{equation}
u^\alpha = (\dot v, \dot R, 0, 0), \label{eq:vel}
\end{equation}
where we have introduced the factor $\epsilon=\pm 1$ to indicate the increase or decrease of the radius $r$ along the normal.
At this point we are able to deduce the equation of motion for $\Sigma$ in a direct and general way. Following Lake\cite{Lake}, from eqs.~(\ref{eq:Ssig}) and (\ref{eq:Slin}) we find that
\begin{equation}
\gamma_{\theta \theta}= - 4\,\pi R^2\,\sigma,
\end{equation}
notice that $4\,\pi R^2\,\sigma$ is just the proper mass of the shell, $M(\tau )$:
\begin{equation}
4\,\pi R^2\,\sigma = M(\tau ).
\end{equation}
Now, with this last equation and recalling the definition of $\gamma_{\theta \theta}$, eq.~(\ref{eq:gam}), we obtain
\begin{equation}
{K_{\theta \theta}}^+ - {K_{\theta \theta}}^- = M(\tau
).\label{eq:k1}
\end{equation}
It proves convenient to rewrite (\ref{eq:k1}) as follows
\begin{equation}
{K^2_{\theta \theta}}^+ = \frac1{4 M^2(\tau )}\left({K^2_{\theta
\theta}}^- -
{K^2_{\theta \theta}}^+ - M^2(\tau ) \right)^2.\label{eq:kfin}
\end{equation}
From the line element of a general spherically symmetric space,
eq.~(\ref{eq:ell}), we can calculate the extrinsic curvature using
eq.~(\ref{eq:k}), to obtain
\begin{equation}
K_{\theta \theta}= \epsilon R(\tau ) \left( {\cal H}\,f\, \dot v -
\dot R \right),
\end{equation}
also, equating the line element, eq.~(\ref{eq:ell}), on both sides of the surface with the line element at the surface, $ds^2_\pm|_\Sigma = ds^2|_\Sigma$, we obtain that
\begin{equation}
{\cal H}_\pm\,\dot v_\pm ({\cal H}_\pm\,f_\pm\,\dot v_\pm - 2 \dot R
)= 1.
\end{equation}
Substituting these last two equations into eq.~(\ref{eq:kfin}), we obtain the following equation of motion:
\begin{equation}
{\dot R}^2 = (\frac{R}{2\,M})^2\,(f_+ - f_-)^2 - \frac12 (f_+ + f_-)
+
(\frac{M}{2\,R})^2. \label{eq:mot}
\end{equation}
We want to make some remarks about this equation of motion. First, since the explicit form of the stress-energy tensor of the shell, $S_{ab}$, was not used in its derivation, the result is valid for any spherically symmetric thin shell. Second, the equation is valid for any spherically symmetric background, eq.~(\ref{eq:ell}). Finally, we also want to call attention to the fact that it was not necessary to obtain the equation for the acceleration, $\ddot R(\tau )$, and then perform a first integration, as is done in other derivations of the equation of motion of a shell.
Before going into the implications of this relation in the case of a collision, we think that specializing to the Schwarzschild case can clarify the meaning of the terms that appear in eq.~(\ref{eq:mot}). In this case $f_\pm=1 - \frac{2m_{1/2}}R$, so eq.~(\ref{eq:mot}) can be put in the following form:
\begin{equation}
m_2 - m_1 =M(1-\frac{2m_1}{R}+{\dot R}^2)^{1/2}- \frac{M^2}{2 \,
R^2}.
\end{equation}
This last equation expresses the total gravitational mass of the shell; expanding the square root to first order, it can be reduced to a sum of four well-known terms: the proper mass of the shell, $M$, the kinetic energy $\frac M2{\dot R}^2 $, the mutual potential energy $ -\frac{M \, m_1}R $ and a self-potential energy $-\frac{M^2}{2\, R} $.
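To make the content of eq.~(\ref{eq:mot}) concrete in this Schwarzschild setting, the following minimal Python sketch (hypothetical masses, geometrical units $G=c=1$) integrates the contracting branch, $\dot R=-\sqrt{{\dot R}^2}$, for a shell with interior mass $m_1$ and exterior mass $m_2$; since eq.~(\ref{eq:mot}) is symmetric in $f_+$ and $f_-$, the identification of $f_\pm$ with the exterior/interior regions does not affect the result:
\begin{verbatim}
# Sketch: proper-time evolution of a contracting shell, eq. (eq:mot), with
# f_+ = 1 - 2 m2/R (exterior) and f_- = 1 - 2 m1/R (interior).
import numpy as np

def rdot2(R, m1, m2, M):
    fp, fm = 1.0 - 2.0*m2/R, 1.0 - 2.0*m1/R
    return (R/(2.0*M))**2 * (fp - fm)**2 - 0.5*(fp + fm) + (M/(2.0*R))**2

R, dtau, m1, m2, M = 20.0, 1.0e-3, 1.0, 1.6, 0.5   # hypothetical parameters
while R > 2.0*m2 and rdot2(R, m1, m2, M) >= 0.0:
    R -= np.sqrt(rdot2(R, m1, m2, M)) * dtau       # contracting branch
print(R)   # radius where the shell meets r = 2*m2 or a turning point
\end{verbatim}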
In our specific case we are dealing with two concentric spherical thin massive shells colliding without interaction and propagating in the field due to a spherically symmetric mass distribution $m_D$ near its centre (Fig.~1). In this case the space-time is separated into
four radial sectors A, B, C, D and the gravitational mass of the
in-falling shell is given by the difference $m_C-m_B$ and that of
the outgoing shell by $m_B - m_D$. The equations of motion for the
two shells before collision are
\begin{eqnarray}
\dot {r_{IV}}^2 & = & (\frac{r_{IV}}{2M_{IV}})^2\,(f_B - f_D)^2
-\frac12 (f_B + f_D) + (\frac{{M_{IV}}}{2\,{r_{IV}}})^2,\nonumber \\
\dot r_{III}^2 & = & (\frac{r_{III}}{2M_{III}})^2\,(f_C - f_B)^2
-\frac12 (f_B + f_C) + (\frac{{M_{III}}}{2\,{r_{III}}})^2, \nonumber
\\\label{eq:mota}
\end{eqnarray}
and after collision
\begin{eqnarray}
\dot {r_{II}}^2 & = & (\frac{r_{II}}{2M_{II}})^2\,(f_C - f_A)^2
-\frac12 (f_A + f_C) + (\frac{{M_{II}}}{2\,{r_{II}}})^2, \nonumber \\
\dot {r_I}^2 & = &(\frac{r_{I}}{2M_{I}})^2\,(f_A - f_D)^2 -\frac12
(f_A + f_D) + (\frac{{M_{I}}}{2\,{r_{I}}})^2,\nonumber
\\\label{eq:motb}
\end{eqnarray}
where $M_i$ $(i=I,\ldots,IV)$ are the rest masses of the shells.
\section{Conservation relation for the collision of massive shells}
\setcounter{equation}{0}
The energy-momentum conservation law can be used to obtain a relation
between the energies and momenta of the different regions before and after
the collision. This problem has been studied by several authors, e.g.
Redmount\cite{Redmount}, Dray and 't Hooft\cite{DH} and Barrab\`es,
Israel and Poisson\cite{P&I:90,BIP}, among others, who obtained an
expression for the case of a collision between null shells; this
expression has proved central in further studies, as we mentioned in
the introduction. We now proceed to derive a generalization of such a
relation to the case of a collision of massive spherical shells.
We have four surfaces, two before and two after the collision. The
normal vector to each surface is given by eq.
(\ref{eq:nor}), which can be expressed as
\begin{equation}
n_\alpha = \epsilon {\cal H}(- \dot R ,\frac{\dot R \pm \sqrt{f +
{\dot R}^2}}{f\,{\cal H}} , 0,0), \label{eq:n}
\end{equation}
where we have used the line element eq.~(\ref{eq:ell}), evaluated
at the surface, to obtain a relation between the velocities $\dot t$
and $\dot R$.
In order to proceed, we need the velocities of the shells,
$u^\alpha$, given by eq. (\ref{eq:vel}); we remark that these
velocities can be normalized with respect to one or the other
adjacent region, so we finally have eight velocities, ${u^\alpha}_i
|_{\pm} = \frac{d{x^\alpha}_i}{d\tau_i} |_{\pm}$, where plus (minus)
stands for the space to the right (left) of the shell {\it in the
direction of the motion}; see figure 1:
\begin{eqnarray}
{u^\alpha}_I |_\pm & = & (\frac{\dot {r_{I}} - \sqrt{f_{A/D} + {\dot
r_{I}}^2}}{f_{A/D}\,{\cal H}_{A/D}} , \dot r_I , 0, 0), \nonumber \\
{u^\alpha}_{II} |_\pm & = & (\frac{\dot {r_{II}} + \sqrt{f_{C/A} +
{\dot r_{II}}^2}}{f_{C/A}\,{\cal H}_{C/A}} , \dot r_{II} , 0, 0),
\nonumber \\
{u^\alpha}_{III} |_\pm & = & (\frac{\dot {r_{III}} + \sqrt{f_{C/B} +
{\dot r_{III}}^2}}{f_{C/B}\,{\cal H}_{C/B}}, \dot r_{III} , 0, 0),
\nonumber \\
{u^\alpha}_{IV} |_\pm & = & (\frac{\dot {r_{IV}} - \sqrt{f_{B/D} +
{\dot r_{IV}}^2}}{f_{B/D}\,{\cal H}_{B/D}} , \dot r_{IV} , 0, 0),
\nonumber \\
\end{eqnarray}
where $f_{M/N}$ corresponds to the respective $\pm$ sign.
The relations between the spacetime angles at which the shells
collide and move on can be given in terms of the
scalar products of the different velocities; there are four
such products, two of which are:
\begin{eqnarray}
u_{IV}|_+ \cdot u_{III}|_- & = {g_{\alpha \beta}}_B {u_{IV}}^\alpha
|_+ {u_{III}}^\beta |_- & =
\frac{\sqrt{( f_B + {\dot r_{IV}}^2)( f_B + {\dot r_{III}}^2)} +
\dot r_{IV}\dot r_{III}}{f_B} ,\nonumber \\
u_{I}|_+ \cdot u_{II}|_- & = {g_{\alpha \beta}}_A {u_{I}}^\alpha |_+
{u_{II}}^\beta |_- & = \frac{\sqrt{( f_A + {\dot r_{I}}^2)( f_A +
{\dot r_{II}}^2)} +
\dot r_{I}\dot r_{II}}{f_A} , \nonumber \\
\label{dotp}
\end{eqnarray}
Analogously, the relations between the angles can be given in terms of
the scalar products in the other two regions, C and D. Notice that in
eq.~(\ref{dotp}) the metric coefficient ${\cal H}$ of the different
regions does not appear at all.
Now, from the conservation of the 4-momentum
\begin{equation}
{p^\mu}_I + {p^\mu}_{II} = {p^\mu}_{III} + {p^\mu}_{IV},
\end{equation}
and considering a collision in which the interaction is purely
gravitational, so that the rest mass of each shell after the collision is
the same as before,
\begin{equation}
M_I=M_{III}, \mbox{\hspace{.25in}} M_{II}=M_{IV},\label{eq:mas}
\end{equation}
the conservation of momenta implies the following relation between
the velocities
\begin{equation}
u_{I}|_+ \cdot u_{II}|_- = u_{IV}|_+ \cdot u_{III}|_- ,
\end{equation}
and using (\ref{dotp}) we obtain that
\begin{equation}
\frac{\sqrt{(f_A + {\dot r_{I}}^2)( f_A + {\dot r_{II}}^2)} +
\dot r_{I}\dot r_{II}}{f_A}|_c =
\frac{\sqrt{(f_B + {\dot r_{IV}}^2)( f_B + {\dot r_{III}}^2)} +
\dot r_{IV}\dot r_{III}}{f_B}|_c \, , \label{eq:non}
\end{equation}
where we remark that this relation is evaluated at the collision 2-surface.
In order to proceed further, we have to take into account the equations
of motion, eqs.~(\ref{eq:mota}) and (\ref{eq:motb}), for each
region, which it proves helpful to rewrite as:
\begin{eqnarray}
{\dot r_{I}}^2 & = & \frac{{{\cal R}_I}^2}
{4 {r_I}^2 {M_{I}}^2} - f_A, \nonumber \\
{\dot r_{II}}^2 & = & \frac{{{\cal R}_{II}}^2}
{4 {r_{II}}^2 {M_{II}}^2} - f_A, \nonumber \\
{\dot r_{III}}^2 & = & \frac{{{\cal R}_{III}}^2}
{4 {r_{III}}^2 {M_{III}}^2} - f_B, \nonumber \\
{\dot r_{IV}}^2 & = & \frac{{{\cal R}_{IV}}^2}
{4 {r_{IV}}^2 {M_{IV}}^2} - f_B,
\end{eqnarray}
where we have defined
\begin{eqnarray}
{\cal R}_{I}& = & {r_I}^2(f_D - f_A ) - {M_{I}}^2, \nonumber \\
{\cal R}_{II} & = & {r_{II}}^2(f_C - f_A ) - {M_{II}}^2, \nonumber
\\
{\cal R}_{III} & = & {r_{III}}^2 (f_C - f_B ) - {M_{I}}^2, \nonumber
\\
{\cal R}_{IV} & = & {r_{IV}}^2(f_D - f_B ) - {M_{II}}^2.
\label{eq:ri}
\end{eqnarray}
Now we can work with the conservation relation (\ref{eq:non}),
remembering that all the $r_i$ equal $r_c$, the radius of collision,
and that the respective masses are equal, eq.~(\ref{eq:mas}). After
some manipulation we can rewrite (\ref{eq:non}) as:
\begin{equation}
{\cal A}[{\cal R}_{III}\, {\cal R}_{IV} (\alpha + \beta ) -
{\cal R}_{I}\, {\cal R}_{II}( \gamma + \beta )] =
{{r_c}}^2\,(f_B \alpha - f_A \gamma + (f_A - f_B)\beta )^2 ,
\label{eq:cc}
\end{equation}
where
\begin{eqnarray}
\alpha & = & {r_c}^4\,[{M_{I}}^2(f_C - f_A)^2 + {M_{II}}^2(f_D -
f_A)^2], \nonumber \\
\gamma & = & {r_c}^4\,[{M_{I}}^2(f_D - f_B)^2 + {M_{II}}^2(f_C -
f_B)^2], \nonumber \\
\beta & = & {M_I}^2 {M_{II}}^2[{M_I}^2 + {M_{II}}^2 - 2 {r_c}^2 (f_C
+ f_D) ],
\nonumber \\
{\cal A} & = & {\cal R}_{I}\, {\cal R}_{II} f_B - {\cal R}_{III}\,
{\cal R}_{IV} f_A.
\end{eqnarray}
Here the ${\cal R}_i$ are defined in (\ref{eq:ri}) and evaluated at $r_c$.
Equation (\ref{eq:cc}) is the constraint equation for the collision
of two massive spherical shells with arbitrary stress energy tensor,
in a general spherically symmetric background, under the assumption
that the collision is transparent, so the interaction is purely
gravitational.
In order to take the light-like limit, we proceed as follows: let us
set one of the proper masses, say $M_I$, equal to zero; after some
simplifications we obtain
\begin{eqnarray}
&(f_C - f_B)\,(f_D - f_A)[(f_C + f_D) (f_A + f_B - f_C -
f_D)\,{r_c}^2 + (f_A - f_B + f_C - f_D)\,{M_{II}}^2]\times &
\nonumber \\
&[(f_A - f_B)\,(f_A\,f_B - f_C\,f_D)\,{r_c}^2 + (f_A\,f_C - f_B\,f_D
)\,{M_{II}}^2] =& \nonumber \\
&(r_c\,M_{II})^2\,[f_B\,(f_D - f_A)^2 -f_A\,(f_C - f_B)^2]^2, &
\label{eq:mix}\end{eqnarray}
which represents the conservation equation for the case of a collision
between a null shell, in this case the imploding one, and a massive
exploding one. We analyse this case further below; to proceed
with the light-light limit we now set the other proper mass,
$M_{II}$, equal to zero, obtaining
\begin{equation}
(f_C - f_B)\,(f_D - f_A)\,(f_C + f_D)\,(f_A + f_B - f_C - f_D)(f_A -
f_B)(f_A\,f_B - f_C\,f_D) = 0,
\end{equation}
which is the conservation law for the case of a collision of two
massless shells. Now, if we suppose that the
shells actually exist, then always
$f_C \neq f_B$, $f_A \neq f_D$, and always $f_C + f_D \neq 0$, so the
last equation implies that
\begin{equation}
(f_A - f_B)(f_A + f_B - f_C - f_D)(f_A\,f_B - f_C\,f_D) = 0,
\end{equation}
which implies that one, two, or all three factors are equal to
zero, that is
\begin{equation}
f_A\,f_B = f_C\,f_D , \label{eq:null}
\end{equation}
or
\begin{equation}
f_A= f_B, \label{eq:nulla}
\end{equation}
or
\begin{equation}
f_A + f_B = f_C + f_D,\label{eq:nullb}
\end{equation}
Equation (\ref{eq:null}) is the known result for the collision of
massless shells\cite{P&I:90,Redmount,DH}. What comes as a surprise is
the fact that the conservation relation in the light-like limit
would also be satisfied if eq.~(\ref{eq:nulla}) or eq.~(\ref{eq:nullb})
holds. This implies that under the circumstances given by these
two equations, the known result (\ref{eq:null}) does not necessarily
have to be satisfied.
Since many works on the light-light shell collision have been
carried out in the Schwarzschild background, we find it interesting to study
this particular case of our master equation
(\ref{eq:cc}) in more detail. Using the fact that for Schwarzschild $f = 1 - \frac{2
m}r$ and ${\cal H} = 1$ in eq.~(\ref{eq:ell}), it can be proved that
in this case equation (\ref{eq:cc}) is in general a sixth-order
polynomial in the radius of collision, $r_c$ (actually seventh
order, but one of the roots is $r_c=0$), whose coefficients are
functions of the remaining parameters, namely the four
gravitational masses and the two proper masses. However, in the case
of a collision between a light-like shell and a massive one, with either
the null shell expanding and the massive one contracting or vice versa,
equation (\ref{eq:cc}) reduces to a third-order polynomial in $r_c$.
The case in which the proper masses are the same, $M_I=M_{II}$, also
produces some simplification in equation (\ref{eq:cc}), reducing
it to fourth order. We recall that the light-light collision reduces
to a first-order polynomial in $r_c$, namely
\begin{equation}
r_c=\frac{2(m_A\,m_B - m_C\,m_D)}{m_A + m_B - m_C -m_D}.
\end{equation}
Using numerical analysis we studied the dependence of the radius of
collision on the proper masses, for fixed values of the
gravitational ones; that is, the coefficients in eq.~(\ref{eq:cc})
were functions of $M_I$ and $M_{II}$ only, and we solved the equation
numerically. We worked with different values of the gravitational
masses, and the results seem to indicate that the radius of
collision for the light-light shell collision is a maximum; we
present an example showing this result in figure 2. The chosen values
of the gravitational masses in this figure were $m_A = 3$, $m_B =
5$, $m_C = 7$, $m_D = 0$; the radius of collision for the light-light
case is $r_c = 30$ (a quick check of this value is sketched below),
and notice how this is a maximum.
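Only as a consistency check (ours), the quoted light-light value follows at once from the first-order formula above:
\begin{verbatim}
# Light-light collision radius for the values quoted in the text.
def r_c_light_light(mA, mB, mC, mD):
    return 2.0*(mA*mB - mC*mD)/(mA + mB - mC - mD)

print(r_c_light_light(3.0, 5.0, 7.0, 0.0))   # prints 30.0
\end{verbatim}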
Now, returning to the master equation (\ref{eq:cc}), let us discuss
the mass inflation phenomenon; for details about it
see \cite{Israel2,P&I:90}. According to this model, inside the charged
black hole there is a cross flux of two null shells, the incoming
one parallel and very close to the inner horizon, which in the
unperturbed case is also the locus of the Cauchy horizon. In figure
1, line I-III would represent this in-flux. In this way, the
collision is supposed to happen close to the inner horizon of a
charged black hole, so in (\ref{eq:null}) $f_C$ goes to zero, while
$f_A$ and $f_B$, and hence their product, remain finite; the
equality then implies that the product $f_C\,f_D$ has to remain
finite, which in turn implies that $f_D$ has to diverge. This is a
practical way to see the mass inflation.
Now, as written, our master equation, eq.~(\ref{eq:cc}), does not
have the kind of behavior mentioned for the null-null case; it is not
expressed as an equality between products that would repeat the
procedure mentioned above and so there is no way to conclude that
some factors or coefficients should diverge in order to maintain the
equality.
Also, for the collision between a light-like shell and a massive
one, eq.~(\ref{eq:mix}), although simpler, does not, as it stands, exhibit
the mentioned behavior of the light-light case either. Hence, in order to say
something about mass inflation in the massive case, it is better
to do a thin-shell analysis as done by Brady et al.\cite{Brady}; this
analysis seems to point to the fact that for a null shell incoming
parallel to the inner horizon and colliding with a massive one in the
interior of a charged black hole, there would be a mass inflation
described in the leading terms by the light-light case; the respective
massive-massive collision is still under study.
\section{Conclusions}
The main conclusion of the present work is that the results obtained
from the analysis of the collision of null shells are, in general,
valid, capturing the leading behavior of the master equation
(\ref{eq:cc}). With respect to white holes, it seems that the
collision radius for null shells is a maximum, so the conclusions
reached by Eardley\cite{Eardley} do describe the leading phenomena in
the more complete analysis using the master equation, namely that if
white holes appear in a medium with matter, they will very quickly be
buried by the black hole formed after the collision. The
same seems to apply to the mass inflation phenomenon: the leading
behavior is correctly described by the null case.
Nevertheless, we want to stress that a more
realistic analysis of the collision, such as the one described in
this paper, is needed to give more solid ground to the former conclusion.
Having the full master equation (\ref{eq:cc}) is important
for analysing particular cases which might provide us with new
situations; for instance, the cases in which the known relation for
the null case is not necessarily satisfied, eqs.~(\ref{eq:nulla},
\ref{eq:nullb}), deserve a deeper study, which is currently under
way.
Finally, we want to remark that the master equation obtained,
eq.~(\ref{eq:cc}), is valid for the most general spherically
symmetric background and that it applies to a wide class of massive
thin shells, since the specific form of the stress-energy tensor
of the shell is not used in the derivation of the master equation
(\ref{eq:cc}).
\section{Acknowledgements}
It is a great pleasure to thank Werner Israel for suggesting the
problem and for fruitful discussions and encouragement; we are also
grateful to Patrick Brady for comments and suggestions. D. N. is
grateful for the awards given by External Affairs and International
Trade Canada, Government of Canada Awards, administered by the
International Council for Canadian Studies. He also thanks DGAPA and
UNAM for support. This work was also partly supported by the Natural
Sciences and Engineering Research Council of Canada. H. de O. would
like to acknowledge CAPES for financial support, and J. S.
acknowledges CNPq. Figure~2 and some calculations in this work were done using
Mathematica$^{\copyright}$; we are grateful to A. Macpherson and S.
Droz for useful suggestions during the calculations.
\section{Introduction}
Perturbative calculations of gauge invariant quantities necessarily
proceed in a gauge noninvariant manner due to the gauge-fixing
required in the Lagrangian. In order to verify the gauge-invariance
of the final result, and to check against possible errors,
computations are usually repeated for different choices of the
gauge-fixing or they are performed in a general
class of gauges labelled by an arbitrary gauge-fixing parameter.
In the latter case, one ascertains that the dependence on the
gauge-parameter drops out for physical quantities. For Yang-Mills
($YM$) theories, the complicated tensor structure of the vertices
makes calculations in a general gauge containing a gauge parameter
extremely tedious. In this paper I describe how, for pure $YM$
theories, one may perform calculations in any particular
gauge with a convenient propagator (e.g. Feynman) and yet retain a
nontrivial check on the
gauge-invariance of the result.
The idea uses the fact that pure YM theory
in two dimensional space-time is
perturbatively free. This is established by going to the
axial ($A_{1} = 0$) gauge
whence the gauge self-interactions vanish.
Since, by definition, gauge-{\it invariant}
quantities are independent of the choice of gauge-fixing, all
gauge-invariant quantities in pure $YM_{2}$ theory must vanish.
The strategy to
use this fact for calculating physical quantities in some
$D_{0}$-dimensional space-time is as follows: perform the Lorentz algebra
and loop integrals in an arbitrary number of dimensions $D$; then, if the
quantity being calculated is truly gauge-invariant, a
{\it necessary}
condition is that it should vanish at $D=2$. In this way the
dimensionality of space-time is used as a gauge-invariance
parameter.
As will be seen later, for all but one example in this paper the $D$
dependence of the Lorentz algebra gives the sole useful check on
gauge-invariance. However we will encounter an example of a
gauge-invariant quantity whose only $D$ dependence is in the loop
integral. In order to treat all possibilities in a unified manner
it is necessary to adopt prescriptions for defining the
$D \rightarrow 2$ limit in the integrals. Integrals like those
from zero-temperature
Feynman diagrams are defined in $D$ dimensions by analytic
continuation \cite{UV,IR,Col,Muta} with the $D \rightarrow
2$ limit
taken after doing the integrals. For nonzero temperature integrals
containing Bose-Einstein factors, an infrared cutoff will be
imposed.
The reader is advised that it is { \it not} the aim of this paper
to
provide, in a single attempt, a perturbative analysis of pure $YM$
theory for {\it all} $D_0 \ge D \ge 2 $ dimensions (if indeed such
a thing is possible), but rather to define a pragmatic procedure
that
connects correctly calculated gauge-invariant quantities near
$D = D_0$ to the value zero at $D =2$. The prescriptions are needed
for the loop integrals because even if the gauge-invariant
quantity
is well defined near $D= D_0$, in the limit $D \rightarrow 2$ one
will encounter infrared (IR) singularities symptomatic of
lower dimensional field theories. Of course the prescriptions
mentioned
were chosen because they gave sensible results for the examples
considered. They remain to be checked in other cases as the author
has no general proof of their validity. Note that for zero
temperature
type of integrals dimensional continuation is being used here to
extrapolate
gauge invariant quantities from $D =D_0$ down to $D =2$, in
contrast to its
usual role of regulating ultraviolet (UV) and IR
\cite{UV,IR,Col,Muta}
singularities near $D =D_0$.
The $D \rightarrow 2$ check described above cannot be used for
gauge-invariant
quantities which are dimension specific. An example is the
perturbative beta
function of $D_0 =4$ $YM$ theory which gives information about
the UV behaviour of Green's functions. Within mass-independent
renormalisation
schemes, the beta function is scheme-independent up
to second order and is manifestly gauge independent when minimal
subtraction
is used \cite{Col,Muta}. It is a dimension specific quantity
because it is
obtained from the residue of the pole, as $D \rightarrow 4$, of
the coupling
constant renormalisation factor. $YM$ theory is super
renormalisable for $D <
4$ and therefore lacks the conventional UV beta function. It
is thus
not apparent if one may sensibly
extrapolate the conventional beta function beyond an infinitesimal
range
near $D=4$.
More examples of dimension specific
gauge invariant quantities may be found in $D_0 =3$ pure $YM$
theory with
an added Chern-Simons term \cite{CS}. The Chern-Simons term is
specific
to odd dimensions and so here again one does not, in general,
expect
gauge-invariant quantities to vanish as $D \rightarrow 2$.
The nice thing about performing the Lorentz algebra in $D$
dimensions
(in addition to the integrals) is that
it takes almost no more effort than in doing it for the physical
$D_{0}$
dimensions. The benefit, as mentioned above, is that the
$D$ parameter used in a simple gauge provides one with an
algebraically
efficient way of checking gauge invariance. Of course, one may
use the $D$
parameter in conjunction with a
conventional
gauge parameter ($\alpha$) to give additional checks
and insight. The $D$ parameter is a book-keeping device
keeping track of the ``relevant'' $(D-2)$
pieces in a calculation while the $\alpha$ parameter prefaces the
``irrelevant'' pieces.
What about fermions? Clearly $QCD_{2}$ with fermions is a
nontrivial
theory \cite{GtH}. Fortunately, the contribution of fermions to
amplitudes can be kept
track of by using the usual trick of working with an arbitrary
number $N_{f}$
of copies of them. A gauge-invariant quantity must be separately gauge
invariant in the $N_{f}=0$ and $N_{f} \neq 0$ sectors. In the
first sector,
the calculations may be performed as described above using the $D$
parameter to check gauge-invariance while the $N_{f} \neq 0$ sector
can be analysed separately.
Usually diagrams with one or more
fermion lines are algebraically simpler to deal with than those
with only gluon lines so the methodology described here is not
without promise.
The idea outlined in the preceding paragraphs will be exemplified
in this paper for zero and nonzero
temperature ($T = 1/{\beta}$) pure $YM$ theory
with gauge group $SU(N_{c})$ at $D_{0} =4$.
In Sect.(2) gluon-gluon scattering at zero temperature is
considered at tree level. This is a relatively simple example since
there are no loop integrals to complicate matters. The metric
used in Sect.(2)
is Minkowskian, diag($g_{\mu \nu}$)$={(1, -1,....,-1)}$.
In Sects.(3-5) the examples are at nonzero $T$ and the metric is
Euclidean, $g_{\mu \nu} =\delta_{\mu \nu}$ (for orientation to nonzero
temperature field theory see, for example, \cite{Ber,GPY}).
The measure for loop integrals in Sects.(3-5) is
\begin{equation}
\int [dq] \equiv T \sum_{q_0} {\mbox{$\displaystyle{\int{d^{(D-1)}q
\over (2\pi)^{(D-1)}}}$}} \, , \label{meas}
\end{equation}
where the sum is over discrete Matsubara frequencies
\cite{Ber,GPY}, $q_0 =2 \pi
nT$ for gauge bosons and ghosts, $n \in \cal{Z}$.
For quantities which depend on the external momenta, an analytic
continuation
to Minkowski space is made as usual after the loop sums are done
\cite{GPY}. In Sect.(3) the one-loop gluon self-energy is
considered and the
two prescriptions for loop-integrals are introduced while in
Sect.(4) a
discussion is given of ``hard thermal loops'' and propagator
poles in $D$ dimensions. Sect.(5) considers the free energy of a
gluon
plasma to third
order. The ``plasmon'' contribution in $D$ dimensions requires
the simultaneous
use of both prescriptions introduced in Sect.(3), therefore
providing a check on
their consistency. The conclusion is in Sect.(6) while the Appendix
contains some expressions and discussion mentioned in the main
text.
The following gauges will be frequently
referred to throughout the paper : the strict Coulomb
gauge ($\xi =0$ limit of the $({\nabla}.\vec{A})^2 / 2 \xi$ gauge-
fixing), the $\alpha$-covariant
gauge with gauge-fixing term $ ({\partial}_{\mu} A_{\mu})^2 /
{2(\alpha +1)}$
and the Feynman gauge ($\alpha =0$). The Feynman rules, being
standard
\cite{Col,Muta,Ber,GPY}, will not be spelled out. $D$- vectors
will be denoted
by uppercase and have Greek indices, $Q_{\mu} = (q_{0} , \vec{q}
)$,
$q \equiv |\vec{q} |$, and the $(D-1)$
spatial components will be labelled by Roman letters ($i,j$).
Keep in
mind that in $D$ dimensions the coupling $g^2$ has a mass
dimension $(4-D)$.\\
\setcounter{equation}{0}
\section{Gluon-gluon scattering}
The scaterring amplitude $M(gg \to gg)$, for two gluons into two
gluons,
involves
at lowest order four Feynman diagrams \cite{GG}. The first comes
from
the order $g^2$ four-point vertex in the Lagrangian while the
other three are formed from
two three-point vertices tied by a propagator and represent the
usual $s,t $
and $u$ channel scatterings. The sum of the four amputated
Feynman diagrams
gives the tensor $T_{\mu \nu \sigma \tau}$, where the Lorentz
indices indicate the
external gluon legs. The gauge-invariant amplitude is then given by
\begin{equation}
M = T_{\mu \nu \sigma \tau} \epsilon_{1}^{\mu} \epsilon_{2}^{\nu}
\epsilon_{3}^{\sigma}
\epsilon_{4}^{\tau} \, . \label{M}
\end{equation}
Here $\epsilon_{(n)}^{\mu} \equiv \epsilon^{\mu}(\vec{k},\lambda_{(n)}) $
represents the
polarisation vector for the $n$-th ($n=1,2,3,4$) gluon with
{ \it physical}
polarisation
$\lambda_{(n)}$ and on-shell momentum satisfying $K^{2} = k_{0}^{2} -{\vec{k}}^2
= 0$ and $K^{\mu}
\epsilon_{\mu}(\vec{k},\lambda) =0$. In practice one usually needs the squared
amplitude
summed over initial and final spin (and colour) variables.
Choosing the basis
$\epsilon_{\mu}(\vec{k},\lambda) \equiv (0,\vec{\epsilon})$, one has the
transverse projection
operator
\begin{eqnarray}
P_{\mu \nu}(K) &=& \sum_{\lambda} \ \epsilon_{\mu}(\vec{k},\lambda)
\epsilon_{\nu}(\vec{k},\lambda) = (\delta_{ij} - {k_{i}k_{j}
\over k^2})\delta_{\mu i} \delta_{\nu j} \, . \label{P}
\end{eqnarray}
When the relation (\ref{P}), which is true in any dimension, is
used to
evaluate $\sum_{\lambda} |M|^2$ in $D$ dimensions, factors of
$D$ will appear.
For example $g^{\mu \nu} P_{\mu \nu} = (D-2)$ and so in
particular $P_{\mu \nu}
=0$ in two dimensions because then there are no transverse
states. From Ref.{\cite{ES}} one obtains
\begin{equation} {\displaystyle \Sigma_{\mbox{\small spin,colour}}}
|M(gg \to gg)|^2 = 4g^4 N_{c}^2
(N_{c}^2-1) (D-2)^2 \, \left[3 -{ut \over s^2} - {us \over t^2} -
{st \over u^2} \right] \, . \label{M2}
\end{equation}
For $D=4$ this reduces to earlier results \cite{GG}, and it also
vanishes when
$D \rightarrow 2$ as desired. However, there are two subtleties
which should be
noted. Firstly, since the on-shell gluons are massless, there are
kinematic singularities in (\ref{M2}) even for $D \ne 2$: for example,
$s = (K_{1}^{\mu} + K_{2}^{\mu})^{2} = 0 $ when $\vec{k_{1}}$ is parallel
to $\vec{k_{2}}$. As $D \rightarrow 2$, the Mandelstam
variables ($s,t,u$) vanish when the vectors
$(\vec{k_{2}}, \vec{k_{3}},
\vec{k_{4}} )$ are respectively in the same direction as $\vec{k_{1}}$.
Thus the $D \rightarrow 2$ limit
of (\ref{M2}) is unambiguous only if the
kinematic singularities are regulated.
Secondly, if one also averages over initial spins in $D$
dimensions, then
(\ref{M2}) is divided by $(D-2)$ for each of the incoming lines.
This averaging is fine if one is working near $D=4$, say
\cite{ES}, but is clearly inadvisable
if one wants to check gauge-invariance by the $D \rightarrow 2$
procedure: in the $D \to 2$ method one
should check gauge-invariant quantities before performing other
extraneous $D$-dependent operations.\\
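As an aside (our own numerical illustration, with an arbitrarily chosen kinematic point satisfying $s+t+u=0$), the overall $(D-2)^2$ behaviour of (\ref{M2}) is easy to exhibit:
\begin{verbatim}
# Numerical illustration of eq. (M2): the spin- and colour-summed
# squared amplitude carries an overall (D-2)^2 and vanishes at D = 2.
def M2_summed(D, s, t, g=1.0, Nc=3):
    u = -s - t   # massless 2 -> 2 kinematics
    bracket = 3.0 - u*t/s**2 - u*s/t**2 - s*t/u**2
    return 4*g**4*Nc**2*(Nc**2 - 1)*(D - 2)**2*bracket

print(M2_summed(4.0, s=2.0, t=-0.5))   # finite and positive at D = 4
print(M2_summed(2.0, s=2.0, t=-0.5))   # prints 0.0
\end{verbatim}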
\setcounter{equation}{0}
\section{Self-energy}
The self-energy by itself is not a gauge-invariant quantity.
However at nonzero temperature there
is a gauge-invariant piece of it which is easy to extract at low
orders. This is the inverse screening length for static electric
fields,
also called the electric mass, $m_{el}$. If $\delta^{ab}
\Pi_{\mu \nu} (k_0, \vec{k})$
is the gluon polarisation tensor at nonzero temperature, then at
lowest order one may define
\begin{eqnarray}
m_{el}^{2} &\equiv& - \Pi_{00}(0, {\vec{k} \rightarrow 0}) \ .
\label{me1}
\end{eqnarray}
\\
At $D_0 =4$, the order $(gT)^2$ result for (\ref{me1}) is well
known
\cite {GPY,Nad}. Remarkably it was found in Ref.\cite{Toi} that
the next term
of order $g^2 |\vec{k}|T$ in the low momentum expansion of
$\Pi_{00} (0, \vec{k})$ at one-loop
is independent of the $\alpha$ parameter in the
$\alpha$-covariant gauge and also has the same value in the Coulomb
gauge, thus suggesting that even this term is gauge-invariant.
I repeat here the analysis of Ref.\cite{Toi} using the $D$
parameter.
In the $\alpha$-covariant gauge one finds
for the sum of one-loop gluonic
and ghost diagrams, the relevant object
\begin{eqnarray}
\Pi_{00}(0, \vec{k}) &=& {g^{2} N_{c} \over 2} [ A_{0}(\vec{k}) +
\alpha A_{1}(\vec{k}) + {\alpha}^2 A_{2}(\vec{k})] \; ,
\label{me2}
\end{eqnarray}
where
\begin{eqnarray}
A_{0}(\vec{k}) &=& \int [dq] {2(D-2)(2q_{0}^2 -Q^2) + 4k^2 \over
Q^2 [q_{0}^2
+ (\vec{q}-\vec{k})^2] } \; , \label{me3} \\
&& \nonumber \\
A_{1}(\vec{k}) &=& \int [dq] {2[4(\vec{k}.\vec{q})^2 -2k^2 Q^2 +
2q_{0}^2k^2
] \over Q^4 [q_{0}^2 + (\vec{q}-\vec{k})^2] } \; , \label{me4} \\
&& \nonumber \\
A_{2}(\vec{k}) &=& \int [dq] {q_{0}^2 k^4 \over Q^4 [q_{0}^2
+ (\vec{q}-\vec{k})^2]^2 } \; . \label{me5}
\end{eqnarray}
\\
The only difference between the integrands in eqns.(\ref{me2} -
\ref{me5})
and the expressions studied in
\cite{Toi,KK} is the presence of the factor $(D-2)$, coming
from the Lorentz algebra, in eq.(\ref{me3}). This
factor is invisible in \cite{Toi,KK} because they work with
$D=D_{0}=4$. From the above expressions, one gets for the
electric mass squared at order $g^2$:
\begin{equation}
m_{el}^2 = g^{2} N_{c} (D-2) \int [dq] {(2q^2 -Q^2)\over Q^4} \, .
\end{equation}
After performing the frequency sum and angular integrals one
obtains
\begin{equation}
m_{el}^2 = g^2 N_{c} (D-2) \ T^{(D-2)} \omega(D) \ [ 2J(D) -I(D)]
\, , \label{mD1}
\end{equation}
where
\begin{eqnarray}
{1 \over \omega(D)} &=& 2^{(D-2)} \ \pi^{({D-1 \over 2})} \
\Gamma\left({D-1 \over 2}\right) \; , \label{ang} \\
I(D) &=& \int_{0}^{\infty} dx \ x^{(D-3)} \ n_x \, ,\label{I} \\
J(D) &=& {1 \over 2} \int_{0}^{\infty} dx \ x^{(D-3)} \
[n_x - x {d \over dx} n_x] \, , \label{J} \\
n_x &=& 1/(e^x -1) \, . \label{BEF}
\end{eqnarray}
Both of the integrals $I(D)$ and $J(D)$ are IR finite for $D > 3$.
The second term in $J(D)$
may be
integrated by parts, and the surface term dropped when $D >3$,
resulting in
\begin{equation}
J(D) = {1 \over 2} (D-1) I(D) \; \; \; , \, D > 3 \; .
\end{equation}
The integral $I$ can be written in terms of gamma and zeta
functions \cite{GR}
\begin{equation}
I(D) = \Gamma(D-2) \ \zeta(D-2) \; \; \; , \, D > 3 \; . \label{I2}
\end{equation}
Thus one may write eq.(\ref{mD1}) as
\begin{equation}
m_{el}^2 = g^2 N_{c} (D-2) \ T^{(D-2)} \omega(D) \ \Gamma(D-1)
\zeta(D-2) \; \; \; , \,
D>3 \, . \label{mD2}
\end{equation}
The divergence as $D \rightarrow 3$ shows up in the zeta-function.
In a consistent calculation at $D_0 =3$, the logarithmic
divergence in the naive expression for $m_{el}$ will be cut off by
$g^2/T$ \cite{EDH}. Suppose one continues (\ref{mD2}) down to
$D=2$. Then the result vanishes because of
the $(D-2)$ Lorentz factor. However this may
be fortuitous as it is related to the possibility of
simplifying $J(D)$ (\ref{J})
through an integration by parts, dropping a surface term, and
getting a result
proportional to $I(D)$, so that the square brackets in (\ref{mD1})
has no net singularity at $D =2$. In more complicated examples
one may not be so lucky. Therefore a prescription will now be
introduced to handle the IR singularities in integrals like
$I(D)$ and $J(D)$ above. It is simply this :
integrals with Bose-Einstein factors will be interpreted for
$D \le 3$ with an infrared cutoff $\lambda$:
\begin{equation}
\int_{0}^{\infty} \rightarrow \int_{\lambda}^{\infty} \, .
\label{pres1}
\end{equation}
That is, the lowest order electric mass is given in $D > 3$
dimensions by the expressions (\ref{mD2}) and is defined,
for the purpose of this paper, by (\ref{mD1} - \ref{BEF},
\ref{pres1}) in $D \le 3$ dimensions. The cutoff in
(\ref{pres1}) is left unspecified
since it is required here only to allow the limit
$D \rightarrow 2$ to
be taken with impunity. If one is really interested in the problem
in $D_0 \le 3$ dimensions then the cutoff must be determined self-
consistently. In this paper the interest is in gauge-invariant
quantities
near $D_0 =4$ and the prescription (\ref{pres1}) allows the
connection
to be made with the free theory at $D =2$. The prescription
(\ref{pres1})
will be tested in Sects.(4,5).
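As a numerical sanity check of eq.~(\ref{mD2}) (ours, not part of the original text): at $D=4$ the prefactors combine to the familiar value $m_{el}^2 = g^2 N_c T^2/3$, while the pole of the zeta-function as $D \to 3$ is manifest:
\begin{verbatim}
# Check of eq. (mD2), valid as written for D > 3 (here g = Nc = T = 1).
import math
from scipy.special import gamma, zeta

def m_el_squared(D, g=1.0, Nc=1.0, T=1.0):
    omega = 1.0/(2**(D - 2)*math.pi**((D - 1)/2)*gamma((D - 1)/2))
    return g**2*Nc*(D - 2)*T**(D - 2)*omega*gamma(D - 1)*zeta(D - 2)

print(m_el_squared(4.0))     # prints 0.3333... = 1/3
print(m_el_squared(3.001))   # large: the zeta-function pole at D -> 3
\end{verbatim}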
Now consider the order $|\vec{k}|T$ term in (\ref{me2}) for $D=4$.
As discussed in
\cite{Toi}, this can only arise from the infrared region of the
integrals.
That is, it only arises from the $q_{0}=0$ part of the frequency
sum
(\ref{meas}) in (\ref{me3}, \ref{me4}). For the gauge-fixing
dependent
piece (\ref{me4}), the zero
mode contains pieces exactly of order $|\vec{k}|T$ but the net
contribution vanishes after the elementary integrals are
done \cite{Toi}. The zero mode in the $\alpha$ independent piece
(\ref{me3})
contributes
\begin{eqnarray}
&&T \int {d^{(D-1)} q \over {(2 \pi)^{(D-1)} }} \left[ {-2(D-2)
\over (\vec{q} -\vec{k})^2} +
{4k^2 \over q^2 (\vec{q}-\vec{k})^2}\right] \; . \label{kT1}
\end{eqnarray}
The first term in (\ref{kT1}) vanishes by dimensional
regularisation. The
second gives, at $D=4$, the contribution proportional to
$|\vec{k}|T$
found in \cite{Toi}. In $D$ dimensions this last piece has no
$(D-2)$
factor from the Lorentz algebra but the integral is highly singular
for $D \le 3$ even when $ \vec{k} \ne 0$. As the integral is
similar to
that occurring in zero temperature field-theory (indeed (\ref{kT1})
is a contribution in the effective $(D-1)$ dimensional Euclidean
field
theory which represents the far infrared,
or infinite temperature, limit of the $D$ dimensional finite
temperature field theory \cite{GPY,JT}.) it is natural to use
dimensional
continuation methods \cite{UV,IR,Col,Muta} for its evaluation.
A standard calculation of (\ref{kT1}) yields,
\begin{equation}
T \left({D-2 \over 4} \right) { k^{(D-3)} \over
{(4 \sqrt{\pi})^{(D-4)}}} { 1 \over \Gamma(D/2) \cos(\pi D/2)} \,
. \label{kT2}
\end{equation}
Amazingly, $D=2$ is the only positive value of $D$ for which
(\ref{kT2}) vanishes. Thus the $(kT)$ term in (\ref{kT1}) at
$D=4$ {\it does} satisfy the
necessary condition for gauge-invariance once the integral is
defined by dimensional
continuation for the $D \rightarrow 2$ limit. Of course the above
analysis does not explain {\it why} the $kT$ term is
gauge-invariant. In Ref.\cite{Toi}
it was related to a higher order term in the free energy but its
direct
physical significance is unclear to the present author. It might
be
interesting also to have a general proof for the gauge-invariance
of the $kT$ term using, for example, the techniques of
Ref.\cite{LR}.
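A quick numerical scan of (\ref{kT2}) (again our own cross-check) confirms both statements: at $D=4$ it evaluates to $kT/2$, and its only zero for positive $D$ is at $D=2$:
\begin{verbatim}
# Behaviour of eq. (kT2) as a function of D (with T = k = 1).  The
# poles of 1/cos(pi*D/2) at odd D are divergences, not zeros.
import math
from scipy.special import gamma

def kT2(D, T=1.0, k=1.0):
    return (T*(D - 2)/4.0*k**(D - 3)
            /(4.0*math.sqrt(math.pi))**(D - 4)
            /(gamma(D/2.0)*math.cos(math.pi*D/2.0)))

print(kT2(4.0))   # prints 0.5, i.e. kT/2
print(kT2(2.0))   # prints -0.0 (i.e. zero), from the (D-2) factor
\end{verbatim}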
The use of dimensional continuation to evaluate zero temperature
type integrals is the second prescription that will be used in this
paper. Another example of its use will be given in Sect.(5).
Here it is noted that
with the replacement $(D-1) \to D$, the second integral in
(\ref{kT1}) occurs in the zero temperature
self-energy in $D$ dimensions.
The zero-temperature self-energy thus diverges when
$D \rightarrow 2$
(i.e. $D \to 3$ in (\ref{kT2})) but this is not worrisome since
the self-energy is a gauge-dependent object. \\
\setcounter{equation}{0}
\section{`Hard Thermal Loops' and propagator poles}
For $QCD_4$, at nonzero temperature, there are an infinite number
of bare loop diagrams which are as large as the tree amplitudes
when
the momentum entering the external legs is soft ($ \sim gT$) and
the
internal loop momentum is hard $(\sim T)$. These ``hard thermal
loops'' (HTL)
occur only at one-loop and have been extensively analysed
by Braaten and Pisarski \cite{BP} and Frenkel and Taylor
\cite{FT}. The HTL's exist for amplitudes when all the $N \geq 2$
external lines are gluons or when one pair is fermionic and the
other $(N-2)$ are gluons.
By explicit calculations \cite{BP,FT},
the HTL's were found to be the same in Coulomb,
$\alpha$-covariant and axial gauges. General proofs of
gauge-fixing independence may be constructed \cite{KKR}.
A gauge-invariant generating functional for the HTL's that was
constructed by Taylor and Wong has been cast into myriad
forms \cite {TW}. In some
recent work, Blaizot and Iancu \cite{BI} have rederived the
results of
\cite{BP,FT,TW} by analysing the kinetic equations obtained
through a
self-consistent truncation of the Schwinger-Dyson equations for
sources and fields at finite temperature.
From the expressions contained in \cite{BP} or \cite{TW,BI} one
sees that
the $N_f=0$ sector of the $N$-gluon HTL contains an overall
factor of
$(D-2)$ when the Lorentz algebra is done in $D$ dimensions.
Even the HTL's with external quark lines are seen to be
proportional
to $(D-2)$. As noted in the above papers, this is because
the HTL's, which are the
leading high temperature (and essentially classical) parts of
the one loop
diagrams, receive contributions only from the $(D-2)$ physical
transverse gluon degrees of freedom.
To consider the pure gluonic HTL's in $D$ dimensions (the $N_f
\neq 0$ sector is not of interest here), the $D$ dependence of
the integrals
must also be taken into account (see also Frenkel and Taylor
\cite{TW}). For the purpose of power
counting it is convenient to introduce the
dimensionless coupling $g_0$ in $D$ dimensions through the
relation $g^2 =
g_{0}^{2}T^{(4-D)}$, where the temperature has been chosen as
the mass scale
since that is the natural parameter in the problem. Now a hard
momenta
is of order $T$ while soft refers to $\sim g_0 T$. With this
notation one can repeat all the relevant analysis of
\cite{BP,FT,TW,BI} and show
that it remains valid for $D > 3$ dimensions.
However naive power counting suggests that
for $D \le 3$ dimensions {\it soft} thermal loops (loop momenta
$\sim g_0 T$)
are no longer suppressed relative to HTL's.
This is related to the occurrence of IR divergences; for example,
the static
limit of the HTL in the gluon self-energy \cite{KW} is simply
the electric
mass squared (\ref{me1}) which was noted in the last section to
diverge in the
naive $D \rightarrow 3$ limit. Therefore, just as in the case of
$m_{el} $ ,
for the purpose of taking the $D \rightarrow 2$ limit, HTL's are
defined
in this paper for $D \le 3$ with the infrared cutoff
(\ref{pres1}). Then they vanish as $D \rightarrow 2$ simply
because of the Lorentz algebraic factor.
Just as at zero temperature, the physical poles of the propagator
at non-zero temperature are gauge invariant \cite{KKR}.
At nonzero temperature, the real part of the gauge propagator
pole at zero external three momentum
defines the induced thermal masses for the gluons and for $D_0 =4$
the leading ($ \sim gT$) result is easily obtained at one-loop
\cite{GPY}. When using the $D$ parameter, the thermal mass will
vanish near $D =2$ as $\sim \sqrt{(D-2)}$, just like the
electric mass (\ref{me1}), when the prescription (\ref{pres1}) is
adopted. The imaginary part (at $D_0 =4$) turns
out to be of subleading order ($g^2T$) and a practical
consistent calculation
requires the Braaten-Pisarski \cite{BP} resummation using
propagators and vertices dressed with HTL's. If the calculation
of the imaginary part is done in $D$ dimensions there will be
three sources of $D$ dependence: from the HTL's in the effective
propagators and vertices, from the Lorentz algebra of the
dressed diagrams, and from the loop integral of the dressed
diagrams. It
would be interesting to see what the $D \rightarrow 2$ limit
looks like
in this case, but this will not be attempted here because the
analysis
is tedious. In the next section an example will be considered which
also involves a resummation but is easier to analyse.\\
\setcounter{equation}{0}
\section{Free energy}
The free energy is a physical quantity equal to the negative of
the pressure and is directly obtainable by
calculating bubble diagrams in perturbation theory \cite{Kap}.
Since it is physical,
it must be gauge-invariant. In the Feynman gauge, the ideal gas
pressure ($P_0$) of gluons is given by \cite{Ber,GPY}
\begin{eqnarray}
{ P_{0} V \over T } &=& (N_{c}^2 -1) \ \ln \left\{ \left[
Det (-\partial ^{2} \delta_{\mu \nu}) \right]^{-{1 \over 2}} .
Det( -\partial ^2) \right\} \label{det} \\
&=& (D-2) (N_{c}^{2} -1) \ \ln [Det(-\partial ^{2} )]^{-{1
\over 2}} \, ,
\label{IG}
\end{eqnarray}
where $V$ is the volume. The first determinant in (\ref{det})
is the contribution of gluons
while the second determinant is the ghost contribution. The first
two factors in (\ref{IG}) count the number of physical degrees of
freedom.
The remaining expression in (\ref{IG}) may be evaluated
(see appendix)
to yield the free gluonic pressure in $D$ dimensions,
\begin{equation}
P_{0} = (D-2) (N_{c}^{2}-1) \ T^{D} \ {\pi}^{-{D \over 2}} \
\Gamma(D/2)
\zeta(D) \; \; \; ,\ D > 1 \, . \label{IG2}
\end{equation}
The result is positive for $D > 2$ and vanishes smoothly in the
limit
$D \rightarrow 2$. The first singularity
appears in the zeta-function at $D =1$ when field theory
collapses to
quantum mechanics.
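Numerically (our check, with $N_c=3$): at $D=4$ eq.~(\ref{IG2}) reproduces the Stefan--Boltzmann value $(N_{c}^2-1)\pi^2 T^4/45$, and it indeed vanishes smoothly as $D \to 2$:
\begin{verbatim}
# Check of eq. (IG2) for the ideal gluon pressure (T = 1, Nc = 3).
import math
from scipy.special import gamma, zeta

def P0(D, Nc=3, T=1.0):
    return (D - 2)*(Nc**2 - 1)*T**D*math.pi**(-D/2)*gamma(D/2)*zeta(D)

print(P0(4.0))              # prints 1.7546...
print(8*math.pi**2/45)      # the same number: (Nc^2-1)*pi^2/45
print(P0(2.0 + 1e-8))       # essentially zero
\end{verbatim}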
Consider next the order
$g^2$ correction to the ideal gas pressure, $P_2$. In the Feynman
gauge one obtains after some algebra,
\begin{eqnarray}
P_2 &=& g^2 N_c (N_{c}^2 -1) \left[\int {[dq] \over Q^2}\right]^2
\left\{ -{ 1 \over 2} ({1 \over 2}) + {1 \over 8} [2D(1-D)] +
{1 \over 12} [9(D-1)] \right\} \nonumber \\
& & \label{P21} \\
& = & -\left({{D-2} \over 2}\right)^2 g^{2} N_{c}(N_{c}^{2} -1)
\left[\int
{[dq] \over Q^2}\right]^2 \label{P22} \\
& = & -\left({{D-2} \over 2}\right)^2 g^{2} N_{c}(N_{c}^{2} -1) \
T^{(2D-4)} \ {\omega}^{2}(D) \ I^{2}(D) \; . \label{P23}
\end{eqnarray}
\\
The terms within brackets in (\ref{P21}) come
respectively from the two-loop bubble
diagrams with
one, two and three gluon propagators. Shown explicitly in front
of each contribution are the symmetry factors and the minus sign
for the
ghost loop. The functions $\omega(D)$ and $I(D)$ in (\ref{P23})
are those
defined earlier in eqns.(\ref{ang}, \ref{I}). When $D>3$ one may
also use
eq.(\ref{I2}) and at $D=4$ one recovers a known
result \cite{Kap}. For $D <3$ the prescription (\ref{pres1}) is
again to be
used for the integral $I(D)$. Then the net result in (\ref{P23})
vanishes for $D=2$ as required for a gauge invariant quantity.
The main point here is that if one had made errors (for example in
the symmetry factors in (\ref{P21})), these would likely have
shown up
in the nonvanishing
of the net result at $D=2$. A similar calculation
in an $\alpha$-covariant gauge for the purposes of checking
algebra
is far more tedious, especially for the diagram
with three gluon lines. The complexity of the algebra in an
{$\alpha$}-gauge in fact increases the sources of possible errors
at intermediate
steps. As a curiosity, it might interest the reader to note that
the result (\ref{P22}) can
nevertheless also be established in an $\alpha$-covariant gauge
{\it before} doing any explicit
integrals, albeit with greater algebraic effort,
with the $\alpha$ dependence cancelling in the sum of
diagrams as required (see appendix).
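The collapse of the bracket in (\ref{P21}) to the perfect square of (\ref{P22}) is itself a useful error check; a two-line symbolic verification (ours):
\begin{verbatim}
# Check that the symmetry-factor bracket of eq. (P21) equals -((D-2)/2)^2.
import sympy as sp

D = sp.symbols('D')
bracket = (-sp.Rational(1, 2)*sp.Rational(1, 2)
           + sp.Rational(1, 8)*(2*D*(1 - D))
           + sp.Rational(1, 12)*(9*(D - 1)))

print(sp.factor(sp.expand(bracket)))             # prints -(D - 2)**2/4
print(sp.simplify(bracket + ((D - 2)/2)**2))     # prints 0
\end{verbatim}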
The next correction to the pressure in four dimensions is of
order $g^3$.
This ``plasmon'' correction is a nonperturbative contribution and
it was
computed in $QCD$ by Kapusta \cite{Kap,PlasC}. It is obtained by
summing an infinite class of IR divergent diagrams, formed by
adding two or more self-energy subdiagrams along the gluon line
of the
one-loop bubble diagram.
The leading correction ($\sim g^3$) is due to the electric mass,
$\Pi_{00}(0,\vec{k} \to 0)$. Summing the electric mass insertions
in $D$ dimensions
gives
\begin{equation}
P_3 = -{ (N_{c}^2 -1) \over 2 \beta} \int {d^{(D-1)} q \over
{(2 \pi)^{(D-1)} }}
\left[ \ln (1 + {m_{el}^2 \over q^2}) - {m_{el}^2 \over
q^2}\right] \, . \label{P31}
\end{equation}
The above expression is well defined for $D >3 $ with $m_{el}$
given by (\ref{mD2}).
The loop integral may be evaluated using zero temperature
techniques (see appendix)
to give
\begin{equation}
P_3 = { (N_{c}^2 -1) \over 2 \beta} { \Gamma({1-D \over 2})
(m_{el}^2)^{D-1 \over 2} \over (4 \pi)^{D-1 \over 2}} \; \; \;
, \ D > 3 \, . \label{P32}
\end{equation}
Since $m_{el} \sim g$, the result (\ref{P32})
is subleading, when $D >3$, to the order $g^2$ contribution
$P_2$ given
by eq.(\ref{P23}).
Also note that (\ref{P32}) is positive for $ 3<D <5$ so that it
opposes $P_2$ in that range.
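As a check of eq.~(\ref{P32}) (ours): at $D=4$ it reduces to $(N_{c}^2-1)\,T\,m_{el}^3/12\pi$, the standard plasmon term:
\begin{verbatim}
# Check of eq. (P32) at D = 4 (T = m_el = 1, Nc = 3).
import math
from scipy.special import gamma

def P3(D, Nc=3, T=1.0, mel2=1.0):
    return ((Nc**2 - 1)/2.0*T*gamma((1 - D)/2.0)
            *mel2**((D - 1)/2.0)/(4*math.pi)**((D - 1)/2.0))

print(P3(4.0))                     # prints 0.2122...
print((3**2 - 1)/(12*math.pi))     # the same number
\end{verbatim}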
In order to apply the $D \to 2$ check on (\ref{P31}) we need
to use both of
the prescriptions introduced earlier. Firstly, for $D <3$ the
electric mass $m_{el}$ is defined
by the cutoff prescription (\ref{mD1}- \ref{BEF}, \ref{pres1}).
Secondly, the loop integral
in (\ref{P31}) is IR divergent for $D<3$ and so it is defined by
the analytic
continuation prescription (the IR divergence coincides with a
physical effect : the magnetic contribution may no longer be
subleading (see appendix)). Thus one takes
the $D \to 2$ limit in (\ref{P32}) with $m_{el}$ defined by
(\ref{mD1}, \ref{pres1}).
Since $ m_{el} \sim \sqrt{(D-2)}$, therefore when $D \to 2$,
$P_3$ vanishes as
$ \sim (D-2)^{\gamma}$ with $\gamma = {1 \over 2} + {(D-2) \over
2}$. The $D$ dependent exponent
$\gamma$ is another sign of the nonperturbative nature of the
plasmon term. The nontrivial
point here is that the loop integral has not introduced any
adverse powers
of $(D-2)$ which would have had a disastrous effect for the
$D \to 2$ limit. This
example shows a successful cohabitation of the two IR
prescriptions
that were introduced for defining the $D \to 2$ limit.
\section{Conclusion}
The dimensionality of spacetime ($D$) has been
proposed and illustrated
as a possibly efficient and beneficial way to check
gauge-invariance in pure {YM}
theories. Gauge invariant quantities which are not dimension
specific should vanish as $D \to 2$.
The
converse is not necessarily true. For example, any quantity,
even if gauge
{\it variant}, when calculated in the axial gauge should vanish
as $D \to 2$ due to the free nature of pure $YM_2$.
Although in most of the examples it was the Lorentz algebra
which contained the
useful $D$-dependent information, the procedure required the
use of two
{\it ad hoc} prescriptions to define the $D \to 2$ limit in
loop integrals.
Zero-temperature-type integrals were defined by analytic
continuation while
integrals containing a Bose-Einstein factor were cut off by an
infrared regulator.
In the examples considered the prescriptions allowed one to
extrapolate gauge-invariant
quantities calculated near $D =D_0 =4$ down to $D =2$ in the
required manner.
Instead of the two prescriptions, one might try the
following single
condition: analytically continued gauge-invariant quantities
in pure $YM$
theory should be non-divergent at $D=2$. A second look at the
examples shows that
this also provides a nontrivial check.
In the absence of an {\it a priori} justification of the
prescriptions, one is actually checking both the IR prescriptions
and the gauge-invariance. Still, the analysis of gauge-invariant
structures for a variable $D$ appears instructive and one might
want to consider more examples and at higher order. It might also
be interesting to explore the $D \to 2$ procedure for
gauge-invariant
quantities correctly evaluated near
$D_0 =3$ \cite{EDH,JT} .
Fermions can be accommodated by using the number of flavours,
$N_{f}$, as a
parameter. The $N_f \neq 0$ part of any gauge-invariant
quantity must be invariant by itself. At low orders in
perturbation theory, one may even entertain the notion of
calculating the $N_f =0$ and $N_f \neq 0$ sectors with different
gauge-fixing. For example, the pure glue part can
be calculated in the Feynman-$D$ gauge while the $N_f \neq 0$
part can be calculated in the $\alpha$-gauge to check
gauge-invariance. Whether such
hybrid calculations are useful or practical should be decided
on a case by
case basis. Likewise, scalars can be coupled by taking $N_s$
copies of them.
Finally some comment on the background field gauge \cite{Dew}.
This is one way of
calculating in quantum field theory while keeping classical gauge
invariance at every step. The gauge-invariance here is with
respect to the background field $B_{\mu}$ which is introduced for
this purpose and gives no information about the physical
gauge-invariance
of any quantity calculated. In particular, the quantum part of
the action
must still be gauge fixed. Thus even here one might use the $D$
parameter
without redundancy.
{\noindent{\bf Acknowledgements}}\\
I thank J.P. Blaizot, C. Corian{\'o}, A.S. Goldhaber, E. Iancu,
H. Osborn, R.D. Pisarski and J.C. Taylor for very helpful
discussions. I also acknowledge stimulating and hospitable visits
to DAMTP-Cambridge and Martignano-Italy
during the course of this work.\\
\renewcommand{\theequation}{A.\arabic{equation}}
\setcounter{equation}{0}
\noindent{{\bf Appendix}}\\
1. Some formulae are collected here for ease of reference.
a) The bosonic ($q_0 =2 \pi n T$) sums needed in Sect.(3) are
\cite{GPY}
\begin{eqnarray}
T \sum_{q_0} {1 \over Q^2} &=& {n(q) \over q} + {1 \over 2q} \,
, \label{S1} \\
T \sum_{q_0} {1 \over Q^4} &=& {n(q) \over 2q^3 }- {1 \over 2q^2 }
{ d n(q) \over dq} + { 1 \over 4 q^3} \; , \label{S2}
\end{eqnarray}
where $n(q) = (\exp{\beta q} -1)^{-1}$ is the Bose-Einstein
factor. The last terms in the above sums are temperature
independent and drop when dimensional
regularisation is used for the $\vec{q}$-integrals.
b) The angular integrals for $(D-1)$-dimensional Euclidean space
have been defined by \cite{Muta}
\begin{equation}
\omega(D) = \int {d \Omega_{D-1} \over (2 \pi)^{(D-1)}} \equiv
\left[ 2^{(D-2)} \ \Gamma \left({D-1 \over 2}\right) \
\pi^{(D-1) \over 2}
\right]^{-1} \, .
\end{equation}
c) The zero temperature integrals in Sects.(3,5) are evaluated
using \cite{UV,IR,Col,Muta},
\begin{eqnarray}
\mbox{$\displaystyle{\int {d^{s}q \over (2\pi)^{s}}}$} {1 \over Q^2 (Q + K)^2}
&=& (4 \pi)^{-{s \over 2}} \
(K^2)^{(s-4)
\over 2} \ {\Gamma(2 -{s \over 2}) \Gamma^{2}({s \over 2} -1)
\over \Gamma(s-2)} \; \; , \\
\nonumber \\
\mbox{$\displaystyle{\int {d^{s}q \over (2\pi)^{s}}}$}
{1 \over Q^2 + M^2 } &=& (4 \pi)^{-{s \over 2}} \
(M^2)^{(s-2) \over 2}
\ \Gamma(1- {s \over 2}) \, .
\end{eqnarray}
d) Expressions containing gamma-functions can be simplified with
the following
very useful identities \cite{GR}
\begin{eqnarray}
\Gamma(1+z) &=& z \ \Gamma(z) \; ,\\
\Gamma(z) \ \Gamma(1-z) &=& {\pi \over \sin{\pi z}} \; , \\
\sqrt{\pi} \ \Gamma(2z) &=& 2^{(2z-1)} \ \Gamma(z) \
\Gamma(z + {1 \over 2}) \; .
\end{eqnarray}
\\
\noindent{2. The one-loop gluon self energy is given by}
\begin{equation}
\Pi_{\mu \nu}^{a b}(K) = \int [dq] \{ L_{\mu \nu}^{ab}(K,Q) +
{1 \over 2}
M_{\mu \nu}^{a b}(K,Q) + {1 \over 2} N_{\mu \nu}^{ab}(K,Q) \} \;
, \label{Se}
\end{equation}
where $({\mu \nu})$ are the Lorentz indices and $(ab)$ the group
indices. The symmetry
factors have been explicitly displayed. $L$ is the ghost loop
contribution
($-1$ factor included), $M$ the
tadpole diagram and $N$ is due to the tri-gluon coupling.
Expressions for $L,M$ and $N$
in $D$ dimensions may be found, for example, in \cite{Muta}.
The complete result
at zero temperature, in an $\alpha$-covariant gauge, with the
integrals done, may also
be found in \cite{Muta}. For the Landau gauge ($\alpha = -1$) the
expression is contained in Ref.\cite{DLM} which
studies $QCD$ in $2 +\epsilon$ ($\epsilon \ll 1$) dimensions and
also notes the divergence
of the self-energy as $\epsilon \to 0$.\\
\newpage
\noindent{3. The free energy.}
a) The contribution of each massless bosonic degree of freedom
to the ideal
gas pressure is
\begin{eqnarray}
P_{0}^{b} &=& -{1 \over 2 V \beta} \ln Det (- \partial^{2} ) \\
&=& -{1 \over 2} \int [dq] \ln (Q^2) \\
&=& -T \mbox{$\displaystyle{\int{d^{(D-1)}q \over (2\pi)^{(D-1)}}}$}
\ln (1- e^{- \beta q}) \\
&=& -T^{D} \ \omega(D) \int_{0}^{\infty} dx x^{(D-2)}
\ln(1- e^{-x}) \\
&=& T^{D} \ \omega(D) \ \sum_{p= 1}^{\infty} \ {1 \over p}
\int_{0}^{\infty}
dx x^{(D-2)} e^{-p x} \\
&=& T^{D} \ \omega(D) \Gamma(D-1) \zeta(D) \\
&=& T^{D} \ \pi^{-{D \over 2}} \ \zeta(D) \Gamma(D/2) \, .
\label{Pb}
\end{eqnarray}
The determinant above is evaluated with the required
periodic boundary conditions \cite{Ber,GPY}.
In the second line a $T$-independent piece was
dropped. The interchange of the integration and the power-series
summation is justified for $D >1$. Final simplification is
achieved using the
definitions of the gamma and zeta functions \cite{GR} and the
formulae
in Note (1) of this appendix. In passing it is noted that for
massless
fermions at zero chemical potential, each mode's contribution to
the ideal
pressure will turn out to be eq.(\ref{Pb}) multiplied by a
statistical factor
$(1-2^{(1-D)})$. For the case of massive particles, nonzero
chemical
potentials and background fields, see \cite{Wel}.
b) For the calculation of the order $g^2$ contribution to the
pressure,
one can save some effort and reduce errors by proceeding as follows
\begin{equation}
P_2 = \int [dk] \int [dq] D(K) \{ {1 \over 2} L + {1 \over 8} M +
{1 \over 12} N \} \, .
\end{equation}
That is, compute the expression in curly brackets first.
Here $D(K) = {\delta^{ab} \over K^2} ( \delta_{\mu \nu} + \alpha
{K^{\mu} K^{\nu}
\over K^2})$ is the free propagator in the $\alpha$-covariant gauge
and $L,M$ and
$N$ are the $\alpha$ dependent tensors used in eq.(\ref{Se}) above.
The $\alpha$ dependent pieces
of $P_2$ cancel only after frequent use of the identity
$2K.Q = (K+Q)^2 -K^2 -Q^2$, changes of sum-integration variables,
and
shifts of sum-integration variables
(assumed valid), to obtain the final answer displayed in
(\ref{P21}). Expressions similar to (\ref{P21}) in the background
Feynman gauge may be found in \cite{SH}.
c) The plasmon contribution in four dimensions has been calculated
by Kapusta \cite{Kap} in the Feynman gauge. Here I sketch the
$D$-dimensional analogue in the Coulomb gauge, using the notation of
Toimela \cite{PlasC}.
One begins with
\begin{eqnarray}
P_{plas} &=&{1 \over 2 \beta} \int [dq] \sum_{p=2}^{\infty} \
{1 \over p} Tr (-D^{c} \Pi)^{p} \\
&=& {(N_{c}^2 -1) \over 2 \beta} \int [dq] \sum_{p=2}^{\infty}
{(-1)^{p} \over p} \left[ \left(F \over q^2\right)^p +
(D-2) \left(G \over Q^2 \right)^p
\right]
\end{eqnarray}
In the above $D^{c}$ is the free propagator in the (strict)
Coulomb gauge
\begin{equation}
D^{c}_{\mu \nu} = { \delta_{\mu 0} \delta_{\nu 0} \over k^2} +
{ \delta_{\mu i}
\delta_{\nu j} \over K^2} \left( \delta_{ij} -{k_i k_j \over k^2}
\right) \, ,
\end{equation}
$\Pi$ is the one-loop self energy, $F \equiv \Pi_{00}$, and
$G$ is the transverse part of $\Pi_{ij} $ :\\
$(D-2)G = \Pi_{ij} ( \delta_{ij} -q_{i}q_{j}/q^2)$, with sum over
repeated indices.
In order to obtain the leading plasmon-like ($ > {g^4}$)
contribution from $P_{plas}$,
one need only look at the infrared region, which lies in the $q_0 = 0$
sector.
Now, in four dimensions we have $F \sim g^2 T^2$ and
$G \sim g^2 k T$. In $D$ dimensions near $D=4$ one therefore
expects $F \sim
g^2 T^{(D-2)}$ and $G \sim g^2 T k^{(D-3)}$. Using the
dimensionless
coupling $g_0$ defined by $g^2 =g_{0}^2 T^{(4-D)}$, and assuming
$g_0 \ll 1$,
consider the
contribution of soft ($\sim g_0 T$) loop momenta to the $p$-th
term in
$P_{plas}$. The electric ($F$ type) contribution will be
\begin{equation}
\sim T \left({g_{0}^2 T^{(4-D)} T^{(D-2)} \over (g_{0} T)^2}
\right)^p (g_0 T)^{(D-1)}
= T^D \ g_{0}^{(D-1)} \, ,
\end{equation}
while the magnetic ($G$ type) contribution is
\begin{equation}
\sim T \left({g_{0}^2 T^{(4-D)} T (g_0 T)^{(D-3)} \over
(g_{0} T)^2} \right)^p
(g_0 T)^{(D-1)} = T^D \ g_{0}^{(D-1)+(D-3)p} \, .
\end{equation}
The electric contribution is plasmon-like for all $p$ and for
$D < 5$.
When $D >3$, the magnetic
contribution is plasmon-like only for $p < (5-D)/(D-3)$.
Also since $p \ge 2$,
this implies $ D < 11/3 $. Thus for $3 < D < 11/3$, only the finite
number of terms, $2 \le p < (5-D)/(D-3)$, give a plasmon-like
contribution in the magnetic sector. The magnetic contribution
might also be plasmon-like for $D \le 3$.
On the other hand, it is easy to see from the equations that
for $D>3$, the magnetic contribution is always subleading to the
electric contribution. Thus the leading plasmon
contribution for $D>3$ is given by eq.(\ref{P31}) of the main
text. The integral may be evaluated by the formulae listed
in Note(1) of this appendix : The second term in (\ref{P31})
drops in dimensional
regularisation while the logarithm is integrated by considering
first its derivative with respect to $m_{el}^2$. It is amusing
to note
that the peculiar ratio $11/3$ appearing in the above analysis
occurs
in a natural but apparently unrelated way also in the beta
function. \\
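The power counting above is easy to tabulate (our sketch): the magnetic soft-loop exponent is $(D-1)+(D-3)p$, the plasmon-like window is $(D-3)p < 5-D$, and with $p \ge 2$ it closes exactly at $D=11/3$, the same ratio produced by the background-field prefactor $(7D-6)/(2D-2)$ at $D=4$ in Note (4) below:
\begin{verbatim}
# Plasmon-like magnetic orders p (if any) for sample D between 3 and 4.
def magnetic_exponent(D, p):
    return (D - 1) + (D - 3)*p

for D in (3.2, 3.5, 3.8):
    window = [p for p in range(2, 20) if (D - 3)*p < 5 - D]
    print(D, magnetic_exponent(D, 2), window)   # 3.8 gives an empty window

# p >= 2 requires 2*(D-3) < 5-D, i.e. D < 11/3 = 3.666...;
# the same 11/3 appears in (7D-6)/(2D-2) at D = 4:
print((7*4 - 6)/(2*4 - 2))                      # prints 3.666...
\end{verbatim}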
\noindent{4. Beta Function}\\
The beta function is easiest to calculate by using background
field techniques
\cite{Dew}. One first computes (see Abbott \cite{Dew} for
details and
further references)
the wavefunction renormalisation factor $Z_{B}$ of the background
field
$B_{\mu}$ obtained from its self-energy. In the background
Feynman gauge one can obtain
\begin{equation}
\Pi^{(B)}_{(\mu \nu),(ab)}(K) = -g^2 N_{c} \delta_{ab}
(K^2 g_{\mu \nu} - K_{\mu}
K_{\nu}) \ {(7D-6) \over (2D-2)} \ \mbox{$\displaystyle{\int{d^{D}q
\over (2\pi)^{D}}}$} {1 \over Q^2 (Q+K)^2} \,
. \label{SBG}
\end{equation}
Like the usual self-energy (\ref{Se}), this self-energy
(\ref{SBG}) is not
gauge-invariant. Using formulae given in the beginning of this
appendix one may
check that this background field self energy also diverges as
$D \to 2$.
The gauge-invariant information in (\ref{SBG}) comes from the
residue,
$Z_{B}^{(1)}$, of the $\epsilon = (4-D)/2$
pole in $Z_{B}$, when the integrals are computed in dimensional
regularisation. The beta function is given by $\beta(g) =
-{1 \over 2} g^2
{ \partial Z_{B}^{(1)} \over \partial g}$. The $D$ dependent
terms outside
the integrals in (\ref{SBG}) give the famous $11/3$ factor.
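As a check of the arithmetic behind this remark, setting $D=4$ in
the prefactor of (\ref{SBG}) gives
\[
{(7D-6) \over (2D-2)} \bigg|_{D=4} = {22 \over 6} = {11 \over 3} \, ,
\]
which is the coefficient appearing in the familiar one-loop result
$\beta (g) = -(11 N_{c}/3)\, g^{3}/(16\pi^{2})$ for a pure gauge
theory.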
\newpage
\baselineskip = 0.5\baselineskip
\section{Introduction}
The problem of observables is probably the key problem presently
confronting non-perturbative approaches to the quantization of the
gravitational field. The problem is difficult because it reflects, in two
different ways, the problem of making sense of a diffeomorphism
invariant quantum field theory. Already at the classical level, it is
non-trivial to construct and interpret functions of the dynamical
variables
that are invariant under spacetime diffeomorphisms. While, it is
true, one
can describe in words a certain limited number of them, what is
needed is
much more than this. First of all, we must have an infinite number
of
observables, corresponding to the number of physical degrees of
freedom of the theory. Second, to make the transition to the
quantum
theory one must know their Poisson algebra.
When we come to the quantum theory, new problems emerge
because
essentially all diffeomorphism invariant observables involve
products of local field observables, and so are not directly defined
in the representation space of any quantum field theory. They must
be defined through a limiting procedure which is analogous to the
regularization procedures of conventional quantum field theories.
Furthermore, it is necessary to confront the fact that none of the
conventional regularization or renormalization procedures can be
applied to this case. This is because they all depend on the presence
of
a background metric. It is then necessary to define new
regularization
procedures which may be applied to field theories constructed
without
a background metric. Additional background structure does come in
the
definitions of the regulated
operators; what must be shown is that in the
limits
that the regulators are removed the resulting action is finite and
background independent.
In the last year and a half, some
progress\cite{weave,review-lee} has been made on the
problem of observables in the context of a nonperturbative approach
to
quantum gravity based on the Ashtekar variables\cite{Abhay} and
the loop
representation\cite{carlolee,gambini-loop}\footnote{Reviews of
previous
work in this direction are found
in \cite{review-abhay,review-carlo,review-lee}.}.
This approach is based on taking as a starting point
a quantum kinematical framework which is based on non-Fock
representations\cite{rayner,abhaychris,review-lee} of certain
non-local observable algebras. It seems
necessary to pick non-Fock representations as the starting point
for the construction of diffeomorphism invariant quantum field
theories to avoid the dependence of the Fock structure on a fixed
background metric.
What has been learned using this approach may be summarized as
follows\cite{weave,review-lee}:
a) It seems impossible to construct background independent
{\it renormalization} procedures for products of local fields.
This is
because any
local renormalization procedure is ambiguous up to a local density.
As a
consequence, because we are working in a formalism in which the
basic local observable is a frame field, this theory has no operator
which can represent the measurement of the metric at a point.
This further
means that diffeomorphism invariant operators cannot normally be
constructed
from integrals over products of local fields.
b) Despite this, there are several non-local observables that can be
constructed as operators acting on kinematical states by a
regularization procedure appropriate to the non-perturbative
theory.
In all of the cases in which a well defined operator exists in the
limit that the regularization is removed, that operator is finite
and background independent.
Among these operators are those that measure the area of any given
surface, the volume of any given region and the spatial integral of
the
norm of any given one form. By measurements of these
observables the metric can be determined, in spite of the fact that
there is no local operator which can represent the metric.
c) The connection between background independence and finiteness
is most
likely general, as there are arguments that for any operator that
can be constructed through a point splitting regularization
procedure of the type
used in \cite{weave,review-lee}, background independence
implies finiteness\cite{carloun}.
d) The spectra of the operators that measure areas and
volumes are discrete series of rational numbers times the
appropriate Planck units.
e) Using these results, the
semiclassical limit of the theory can be understood.
Given any classical three metric, slowly varying at the
Planck scale, it is possible to construct a kinematical
quantum state
which has the property that it is an eigenstate of the
above mentioned operators and the eigenvalues agree
with the corresponding classical values in terms of that
metric, up to terms of order of the inverse of the measured
quantity in Planck units.
These results are very encouraging, but they are
subject to an important limitation. They concern the
kinematical level of the theory, which is the original state
space on which the unconstrained quantum theory is
defined. The physical states, which are those states that
satisfy the Hamiltonian and diffeomorphism constraints,
live in a subspace of this space.
It would then be very desirable to find results analogous to
these holding for physical operators. In this paper I report
results which bring us significantly closer to that goal. These
include the construction of a number of operators which
are invariant under spatial diffeomorphisms.
For example, as I will show below, it is possible to
construct a diffeomorphism invariant operator that
measures the area of surfaces which are picked out by
the values of some dynamical fields. Just as in the
kinematical case, the spectrum of this operator is discrete
and includes the integral multiples of half the Planck area.
The basic idea on which these results are based is to use
matter fields to define physical reference frames and then
use these to construct diffeomorphism invariant observables.
Of course, the idea of using matter fields
to specify dynamically a coordinate system is very
old. It goes back to
Einstein\cite{albert-snail}, who pointed out that in order to realise
the operational definition of lengths and times in terms
of rulers and clocks in general relativity, it was necessary
to consider the whole dynamical system of the rulers
and clocks, together with the gravitational field. The
application of this idea to the quantum theory was first
discussed by DeWitt\cite{bryce-snail}, and has been recently revived
by Rovelli\cite{carlo-matter,carloobserves} and by Kuchar and
Torre\cite{karel-matter}.
In a paper closely related to this one, Rovelli has used a scalar field
to pick out a set of surfaces whose areas are then measured.
In this paper I take for the matter field an
antisymmetric tensor gauge field, with dynamics as first written
down by Kalb and Ramond\cite{kalbramond}. There are two
reasons
for this. First, as we will see below,
the coupling of the antisymmetric
tensor gauge field to gravity is particularly simple in the
Ashtekar formalism, which allows us to hope that it
will be possible to get results about physical observables,
which must commute also with the Hamiltonian constraint.
The second
reason is that, as I will describe, the configurations of
the Kalb-Ramond field can be associated
with open surfaces, which has certain advantages.
Now, it is clear on the kinematical level that if one measures the
area of every two dimensional surface one determines the
spatial metric
completely. One can then imagine that if one has a finite, but
arbitrarily large, set of surfaces, one can use measurements of
their areas to make a partial measurement of the metric. Such
an arrangement can serve as a model of an apparatus that might
be used to measure the gravitational field, because indeed, any
real physical measuring device returns a finite amount of
information and thus makes only a partial measurement of a
quantum field. As I discuss below, there are a number of results
and lessons that can be learned about measurement theory for
quantum gravity by using a finite collection of surfaces as a model
of a measuring apparatus.
Once we have a set of finite spatially diffeomorphism invariant
operators,
defined by using matter fields as a quantum reference frame,
it is interesting to try to employ the same strategy to construct
physical observables\footnote{Note that, as pointed out by
Rovelli\cite{carloobserves}, there is a model diffeomorphism
invariant
theory in $3+1$ dimensions whose Hamiltonian constraint
is proportional to a linear combination of the gauge and
diffeomorphism constraints. This is the Husain-Kuchar
model\cite{husainkuchar}, which corresponds, at the classical level,
to a
limit of general relativity in which the speed of light has
been taken to zero. The physical state space of this model
is just the space of diffeomorphism invariant states of quantum
gravity and the
physical inner product is known and takes a simple form in
the loop representation\cite{husainkuchar}. The observables I
construct
here are then examples of physical observables if we
adjoin to the Husain-Kuchar model the antisymmetric
gauge field.}. One can add to the theory
additional matter degrees
of freedom which can represent physical clocks and use these to
construct
operators which commute with the Hamiltonian constraint but
describe
measurements made at particular times as measured by the physical
clock. This idea is developed in \cite{me-dieter}. Furthermore, once one
has
physical operators that correspond to measurements localized in
space
and time by the use of matter fields to form a spacetime reference
system, it is possible to give a formulation of a measurement theory
which may be applied to quantum cosmology. A sketch of such a
measurement theory is also developed there.
This paper is organized as follows. In the next section I show how to
couple an antisymmetric tensor gauge field to gravity. This is
followed
by section 3 in which I show how to quantize the tensor gauge field
in terms of a surface representation that is closely analogous to
the abelian loop representation for Maxwell
theories\footnote{Rodolfo
Gambini has kindly
informed me that many of the results of this section
were found previously
in \cite{rodolfo-antisym}.}. I then show how to combine these
results
with the loop representation of quantum gravity and how to construct
diffeomorphism
invariant states of the coupled system. Here it is also shown how
to construct
the diffeomorphism invariant operator that measures the area of the
surface picked out by the quantum state of the antisymmetric tensor
gauge field. In section 4 I consider a
straightforward extension of these results in which certain degrees of
freedom are added which result in quantum states which are labeled
by
open rather than closed surfaces. In the next section, which is section
5, I show how the
matter field may also be used to construct a diffeomorphism
invariant
loop operator. This has the effect of adding
Wilson loops of the left-handed
spacetime connection around the edges of the surfaces picked out by
the
quantum states of the matter. These results are then used
in section 6, where I show
how to construct quantum reference systems by combining the
surfaces
from a number of independent matter degrees of freedom to
construct
simplicial complexes. We find here a very interesting
correspondence
between certain piecewise-flat Regge manifolds and the elements of
a
basis of diffeomorphism invariant states of the coupled matter-
gravity
system. I also sketch an approach to a measurement theory
for quantum gravity which is based on the
results described here\cite{me-dieter}.
Finally, the implications of these results
are the subject of the conclusion.
\section{Coupling an antisymmetric tensor gauge theory to gravity}
An antisymmetric tensor gauge field\cite{kalbramond} is a two form,
$C_{ab}=-C_{ba}$ subject to a gauge transformation
generated by a one form $\Lambda_a$ by,
\begin{equation}
\delta C_{ab} = d\Lambda _{ab} .
\end{equation}
Its field strength is a three form which will be
denoted $W_{abc}=dC_{abc}$. The contribution
to the action for these fields coupled
to gravity is, in analogy with electromagnetism,
\begin{equation}
S_C= {k \over 4}
\int d^4x \sqrt{g}g^{ad}g^{be}g^{cf}W_{abc}W_{def}
\end{equation}
where $k$ is a coupling constant with dimensions of inverse action
and all the quantities are,
just for the moment, four dimensional.
In the Hamiltonian
theory\footnote{From now on we consider that the indices
$a,b,c,...$ are spatial indices, while indices $i,j,k,...$ will be
internal $SO(3)$ indices. Densities, as usual, are sometimes, but
not always, indicated by a tilde.} its conjugate momentum is
given by $\tilde{\pi}^{ab}=-\tilde{\pi}^{ba}$ so that
\begin{equation}
\{ C_{ab} (x) , \tilde{\pi}^{cd} (y) \} =
\delta^{[c}_a \delta^{d]}_b \delta^3 (y,x)
\end{equation}
where the delta function is understood to be a density with
the weight on the first entry and $\tilde{\pi}^{cd}$ is also a density.
We will also find it convenient to work with the dualized fields
\begin{equation}
\tilde{W}^*={1 \over 3!}\epsilon^{abc}W_{abc}
\end{equation}
and
\begin{equation}
\pi^*_a={1 \over 2} \epsilon_{abc}\tilde{\pi}^{bc}.
\end{equation}
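It may be useful to record the bracket of the dualized variables.
With the convention that the antisymmetrization in (3) is
unnormalized, $\delta^{[c}_a \delta^{d]}_b = \delta^{c}_a \delta^{d}_b
- \delta^{d}_a \delta^{c}_b$ (an assumption made here for
definiteness), one finds
\[
\{ C_{ab} (x) , \pi^{*}_{e} (y) \} = {1 \over 2} \epsilon_{ecd}\,
\delta^{[c}_a \delta^{d]}_b\, \delta^3 (y,x) = \epsilon_{eab}\,
\delta^3 (y,x) \, .
\]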
The gauge transform (1) is then generated by the constraint
\begin{equation}
G^d=\partial_c \tilde{\pi}^{cd} =0 \, ,
\end{equation}
which in form language reads $(d\pi^*)_{cd}=0$.
This field can be coupled to gravity in the Ashtekar formalism by
adding to
the Hamiltonian constraint the term
\begin{equation}
{\cal C}_{matter}= { k \over 2} (\tilde{W}^*)^2 + {1 \over 2k }
\pi^*_a \pi^*_b \tilde{\tilde{q}}^{ab}
\end{equation}
and adding to the diffeomorphism constraint the term
\begin{equation}
{\cal D}^{matter}_a = \pi^*_a \tilde{W}^*.
\end{equation}
We may note that the term added to the Hamiltonian
constraint is naturally a density of weight two, so that
it is polynomial without the necessity of changing the
weight of the constraint by multiplying by a power of
the determinant of the metric, as is necessary in
Maxwell or Yang-Mills theory\cite{ART}.
The antisymmetric tensor gauge field can be understood
to be a theory of surfaces in three dimensions in the same
sense that Maxwell theory is a theory of the Faraday flux
lines\cite{carloted}. By (6) there is a scalar field $\phi $
such that locally\footnote{Global considerations play no role
in this paper.}
\begin{equation}
\pi^*_a =d\phi_a
\end{equation}
The equipotential surfaces of $\phi $ define a set of surfaces
which are the analogues of the Faraday lines of
electromagnetism. Further, any two dimensional
surface $\cal S$ defines a distributional configuration of the
$\pi^{ab}$ by,
\begin{equation}
\pi^{ab}_{\cal S} (x) = \int d^2S^{ab}(\sigma ) \delta^3 (x ,{\cal S}
(\sigma ))
\end{equation}
where $\sigma$ are coordinates on the surface.
Note that $\pi^{ab}_{\cal S}$ is automatically divergence free.
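To see this, write the surface element as $d^2S^{ab}(\sigma ) =
\epsilon^{\alpha \beta} \partial_\alpha {\cal S}^a \partial_\beta
{\cal S}^b \, d^2\sigma$ (the convention assumed here); then
\[
\partial_a \pi^{ab}_{\cal S} (x) = - \int d^2\sigma \,
\partial_\alpha \left[ \epsilon^{\alpha \beta} \partial_\beta
{\cal S}^b \, \delta^3 (x , {\cal S}(\sigma )) \right] \, ,
\]
which is a total derivative in the surface coordinates and so
vanishes when $\cal S$ is closed.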
These are completely analogous to the distributional
configurations of the electric field\cite{review-lee,lee-cs}
that may be associated to a curve $\gamma$ by
\begin{equation}
\tilde{E}^a_{\gamma} (x) = \int d\gamma^a (s) \delta^3 (x,\gamma (s))
\end{equation}
We can define a diffeomorphism invariant observable which depends
on
both the metric and $\tilde{\pi}^{cd}$ which has the property that
when
$\tilde{\pi}^{cd}$
has such a distributional configuration it measures the area of that
surface.
This can be defined either by
generalizing the definition of the area
observable\cite{weave,review-lee} or more directly by
\begin{equation}
A(\pi, \tilde{E}) \equiv Q (\pi, \tilde{E}) = \int
\sqrt{\tilde{\tilde{q}}^{ab} \pi^*_a \pi^*_b}
\end{equation}
An equivalent expression for this is given by
\begin{equation}
A(\pi ,\tilde{E} ) = \lim_{N \rightarrow \infty} \sum_{i=1}^N
\sqrt{A^2_{approx} [{\cal R}_i]}
\end{equation}
where space has been partitioned into $N$ regions ${\cal R}_i$ such
that in the limit $N \rightarrow \infty$ the regions all shrink to
points. Here, the
observable that is measured on each region is defined
by\footnote{Here $T^{ab}(x,y)$ is defined as follows\cite{review-lee}.
Let there
be a procedure, based on a background flat metric, to associate a
circle
$\gamma_{x,y}$ to every two points in the three manifold $\Sigma$.
Then define
$T^{ab}(x,y) \equiv TrU_{\gamma_{xy}}(y,x) \tilde{E}^a (x)
U_{\gamma_{xy}}(x,y) \tilde{E}^b (y)$, where
$U_\gamma (x,y) \equiv Pexp G \int_\gamma A $ is parallel
transport along the curve $\gamma$ from $x$ to $y$.},
\begin{equation}
A^2_{approx} [{\cal R}] \equiv \int_{\cal R} d^3 x \int_{\cal R} d^3y
\ \ T^{ab} (x,y) \pi^*_a (x) \pi^*_b (y)
\end{equation}
To show the equivalence between these two expressions, we may
start with
(12) and regulate it the way it is done in the quantum theory by
introducing a background euclidean coordinate system and a set of
test fields $f_{\epsilon} (x,y)$ by
\begin{equation}
f_{\epsilon}(x,y) \equiv {\sqrt{q(x)} \over \epsilon^3 }
\theta [{\epsilon \over 2}-|x^1-y^1 |]
\theta [{\epsilon \over 2} -|x^2-y^2 |]\,
\theta [{\epsilon \over 2}-|x^3-y^3 |]
\end{equation}
In these coordinates
\begin{equation}
\lim_{\epsilon \rightarrow 0} f_{\epsilon} (x,y) = \delta^3 (x,y)
\end{equation}
We can then write
\begin{equation}
A(\pi , \tilde{E}) = Q(\pi,\tilde{E}) = \lim_{\epsilon \rightarrow 0}
\int d^3x
\sqrt{\int d^3 y \int d^3z T^{ab}(y,z) \pi^*_a (y) \pi^*_b (z)
f_\epsilon(x,y) f_\epsilon (x,z)}
\end{equation}
When the expression inside the square root is slowly
varying in $x$ we can
reexpress it in the following way. We divide space into
regions ${\cal R}_i$ which are cubes of volume
$\epsilon^3$ centered on the points
$x_i = (n\epsilon , m\epsilon , p \epsilon )$ for $n,m,p$ integers. We
then
write,
\begin{eqnarray}
A(\pi , \tilde{E}) &= &\lim_{\epsilon \rightarrow 0} \sum_i
\epsilon^3
\sqrt{\int d^3 y \int d^3z T^{ab}(y,z) \pi^*_a (y) \pi^*_b (z)
f_\epsilon(y,x_i) f_\epsilon (z,x_i)} \nonumber \\
&=&\lim_{N \rightarrow \infty} \sum_{i=1}^N \sqrt{A^2_{approx}
[{\cal R}_i]}
\end{eqnarray}
If we now plug into these expressions the distributional
form (10) it is straightforward to show that
\begin{equation}
A(\pi_{\cal S} , \tilde{E} )= \int_{\cal S} \sqrt{h}
\end{equation}
where $h$ is the determinant of the metric of the two
surface, which is given
by $h=\tilde{\tilde{q}}^{ab}n_a n_b$, where $n_a$ is the
unit normal of the surface.
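A sketch of this computation: for the distributional configuration
(10) the dual momentum takes the form
\[
\pi^{*}_{{\cal S},a} (x) = \int d^2\sigma \; n_a (\sigma )\,
\delta^3 (x , {\cal S}(\sigma )) \, , \qquad
n_a \equiv {1 \over 2} \epsilon_{abc}\, \epsilon^{\alpha \beta}
\partial_\alpha {\cal S}^b \partial_\beta {\cal S}^c \, ,
\]
so that each cell ${\cal R}_i$ in (18) contributes
$A^2_{approx}[{\cal R}_i] \approx \big( \int_{{\cal S} \cap {\cal R}_i}
d^2\sigma \sqrt{\tilde{\tilde{q}}^{ab} n_a n_b} \big)^2$, and the
sum over cells reproduces (19).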
\section{Quantization}
It is straightforward to construct an algebra of loops and
closed surfaces to
coordinatize the gauge constraint surface, corresponding to
the imposition of both $G=0$ and the $SU(2)$ Gauss's
law of the gravitational fields. We may associate
to every closed surface $\cal S$ a gauge invariant observable,
\begin{equation}
T[{\cal S}] \equiv e^{\imath k \int_{\cal S} C} \ \ \ .
\end{equation}
Conjugate to $T[{\cal S}]$ we have the observables
$\tilde{\pi}^{ab} (x)$
which
satisfy the algebra,
\begin{equation}
\{ T[{\cal S}] , \tilde{\pi}^{ab} (x) \} =
\imath k \int d^2 {\cal S}^{ab}
(\sigma )
\delta^3 (x , {\cal S}(\sigma )) T[{\cal S}]
\end{equation}
We would like now to construct a representation of this algebra
as an algebra of operators. We can construct a surface
representation
in which the states are functions of a set of closed surfaces
$\Psi [\{ {\cal S} \} ]$ \footnote{Such a representation was first
constructed in \cite{rodolfo-antisym}.}. In order
to implement the abelian gauge invariance we require
that these states are invariant under reparametrization
invariance and satisfy two relations. First, we require that
\begin{equation}
\Psi [ {\cal S} \circ {\cal S}^\prime ] = \Psi [ {\cal S} \cup {\cal
S}^\prime ]
\end{equation}
where ${\cal S} $ and ${\cal S}^\prime$ are any two
surfaces that touch at one point and
${\cal S} \circ {\cal S}^\prime$ is the surface made
by combining them. Second, we require that
$\Psi [ {\cal S}] = \Psi [ {\cal S}^\prime] $ whenever
$e^{\int_{\cal S} F }= e^{\int_{{\cal S}^\prime} F }$ for every
two form $F$.
We then define the representation by,
\begin{equation}
\hat{T}[{\cal S}^\prime ] \Psi [ {\cal S}]
= \Psi [ {\cal S}^\prime \cup {\cal S}]
\end{equation}
and
\begin{equation}
\hat{\pi}^{ab} (x) \Psi [{\cal S}] = \hbar k \int d^2 {\cal S}^{ab}
(\sigma )
\delta^3 (x, {\cal S}(\sigma ) ) \Psi [{\cal S}] .
\end{equation}
It then follows that the operators satisfy
\begin{equation}
[ \hat{T}[{\cal S}] , \hat{\pi}^{ab} (x) ] = -\hbar k \int d^2
{\cal S}^{ab} (\sigma )
\delta^3 (x, {\cal S}(\sigma ) )
\hat{T}[{\cal S}]
\end{equation}
We should now say a word about dimensions. In order
that the interpretation of $A[\pi ,\tilde{E}]$ as an area
work, it is necessary that
$\tilde{\pi}^{bc}$ have dimensions of inverse length,
from which it follows from (7) that $k$ have dimensions
inverse to $\hbar$ and that the dimensions of
$C_{ab}$ are $mass/length$.
This choice is consistent with both the Poisson bracket and
the requirement that the exponent in (20) be dimensionless.
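As a quick consistency check, in units where $c=1$,
\[
\left[ \int_{\cal S} C \right] = {mass \over length} \times length^2
= mass \times length = [\hbar ] \, ,
\]
so that with $[k] = [\hbar ]^{-1}$ the exponent in (20) is indeed
dimensionless.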
We now bring gravity in via the standard loop
representation\cite{carlolee}. The states are then
functions, $\Psi [\alpha , {\cal S}]$, of loops and surfaces.
We may introduce a set of bra's $<\alpha , {\cal S}|$ labeled
by loops and surfaces so that,
\begin{equation}
\Psi [\alpha , {\cal S}] =<\alpha , {\cal S}|\Psi >
\end{equation}
We then want to express the area observable (13) as a
diffeomorphism invariant operator and show that it
does indeed measure areas. It is straightforward to
show that the bras \newline
$<\alpha , {\cal S}|$ are, for
nonintersecting loops $\alpha$, eigenstates of
the operator $\hat{A}$.
This operator may be constructed by using the expression (13) as a
regularization, in the way described in detail in \cite{review-lee}.
A straightforward calculation shows that for the case
of nonintersecting loops\footnote{The appearance of the
Planck area is due to the presence of $G$ in the definition of the
parallel propagators for the Ashtekar connection, $A$, as in the
kinematical case \cite{weave,review-lee}. This is due to the fact
that it is $GA_a$ that has dimensions of inverse length. The
dimensionality of the gravitational constant
thus manifests itself in the
appearance of the Planck area in the operator algebra for quantum
gravity.}
\begin{equation}
<\alpha , {\cal S}| \hat{A}^2_{approx} [{\cal R}] =
({ \hbar kl_{Planck}^2 \over 2 })^2
I[\alpha , {\cal S} \cap {\cal R} ]^2 <\alpha , {\cal S}|
\end{equation}
where $I[\gamma , {\cal S}] $ is the intersection number given by,
\begin{equation}
I[\gamma , {\cal S}] \equiv \int d\gamma ^a (s) \int d^2 {\cal S}^{bc}
(\sigma ) \delta^3 ({\cal S}(\sigma ) , \gamma (s)) \epsilon_{abc}
\end{equation}
and
where ${\cal S} \cap {\cal R} $ means the part of the surface that lies
inside the region. It then follows from (13) that
\begin{equation}
<\alpha , {\cal S}| \hat{A} = { \hbar k l_{Planck}^2 \over 2 }
I^+[\alpha , {\cal S}] <\alpha , {\cal S}|
\end{equation}
where $I^+ [\alpha ,{\cal S}]$ represents the positive definite
unoriented
intersection number which simply counts all intersections
positively.
Thus, we see that the operator assigns to the surface
an area which is given by
$\hbar k l_{Planck}^2 / 2 $ times the number of
intersections of the loop with
the surface\footnote{When the loop $\alpha$ has intersections
at the surface ${\cal S}$ there are additional terms in the
action of the area operator\cite{review-lee}.}.
The action (29) of the area
operator is diffeomorphism invariant, because the surface is
picked
out by the configuration of the field. (One may check that
this is also the case when the loop has an intersection at
the surface.) The operator
is then well defined acting on
states of the form
\begin{equation}
\Phi [\{ \alpha , {\cal S} \} ] = < \{ \alpha , {\cal S} \} |\Phi >
\end{equation}
where $\{ ...\}$ denotes equivalence classes
under diffeomorphisms.
On the space of diffeomorphism invariant
states we can impose the natural inner product. Again, restricted
to the case of nonintersecting loops this must have the form,
\begin{equation}
<\{ \alpha , { \cal S} \} | \{ \beta , {\cal S}^{\prime} \} >
= \delta_{\{ \alpha , {\cal S} \} \{ \beta , {\cal S}^\prime \} }
\end{equation}
where the delta function is a Kronecker delta of knot classes.
The definition of the inner product on intersecting loops may
be obtained by imposing reality conditions. The complete set
of reality conditions at the diffeomorphism invariant level is
not known, but it is known that an inner product that
satisfies (31) is consistent with the requirement that
$\hat A$ be a hermitian
operator.
We may then conclude that the spectrum
of $\hat A$ is discrete.
It consists first of the series of integer multiples of
$\hbar k l_{Planck}^2 / 2$,
together with a discrete series of other
eigenvalues that come from eigenstates
similar to those discussed in \cite{review-lee} in which the
loops have intersections at
the surfaces.
Finally, we may note that if we require that the
diffeomorphism invariant operator
yield, when acting on kinematical states of the
kind described in \cite{weave,review-lee}, the
same areas as the kinematical area operator,
we get the condition that,
\begin{equation}
k={1 \over \hbar}
\end{equation}
With its coupling thus set by $\hbar$, the
antisymmetric tensor gauge field is
then in a sense purely a quantum phenomenon.
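Explicitly, substituting (32) into (29), the area eigenvalues become
\[
<\alpha , {\cal S}| \hat{A} = { l_{Planck}^2 \over 2 }\,
I^+[\alpha , {\cal S}] <\alpha , {\cal S}| \, ,
\]
so that the measured areas are integer multiples of one-half the
Planck area, matching the kinematical spectrum.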
\section{Adding a boundary}
In the next section I am going to make use of the
quantum antisymmetric
tensor gauge field to construct a quantum reference
system for measuring
the diffeomorphism invariant states of the gravitational field.
For this and
other purposes, it is convenient to have states which are
labeled by open surfaces in addition to those described in the
previous
section in
which gauge invariance restricts
the surfaces to be closed. As I will now describe, there is a
very simple way
to do this, which is analogous to the Abelian Higgs model and was
described first by Kalb and Ramond\cite{kalbramond}.
We will see that
by coupling the $C_{ab}$ field to a vector field in a way that
preserves the
gauge invariance (1) we open up the possibility for our
surfaces to have
boundaries.
Let us consider then adding to the system
described by (2) an ordinary
abelian gauge field,
$b_a$, with an Abelian gauge group given by
\begin{equation}
\delta b_a = \partial_a \phi
\end{equation}
where $\phi$ is a scalar field. We may couple this
field to the Abelian
tensor gauge field by
supplementing the gauge transformations (1) by
\begin{equation}
\delta b_a = \Lambda_a
\end{equation}
Thus, we see that this vector field can be set to zero
by a gauge transform.
A field strength for $b_a$ that is invariant under
both abelian gauge
invariances may be defined by
\begin{equation}
F_{ab} = db_{ab} -C_{ab}
\end{equation}
To define the dynamics of this coupled system we add
to the action the term
\begin{equation}
S_{b}={k\over 4} \int d^4x \sqrt{g}g^{ab} g^{cd} F_{ac} F_{bd}
\end{equation}
We can define a constrained Hamiltonian system by
adding (2) and (36) to the
gravitational action. If the conjugate momentum to the
$b_a$ is labeled as
$ \tilde{p}^a $, the diffeomorphism and gauge
constraints (8) and (6) are now
\begin{equation}
D_a = \tilde{W}^* \pi_a^* + \tilde{p}^c F_{ac}
\end{equation}
and
\begin{equation}
G^a = \partial_b \tilde{\pi}^{ab} + \tilde{p}^a
\end{equation}
The Hamiltonian constraint has additional terms,
which are given by
\begin{equation}
{1 \over 2k} \tilde{p}^a\tilde{p}^bq_{ab} +
{k \over 2} det(q) F_{ac}F_{bd}q^{ab}q^{cd}
\end{equation}
Note that the new terms are non-polynomial, when expressed
in terms of the canonical variables $\tilde{E}^a_i$, as in the
case of the Maxwell
and Yang-Mills theories\cite{ART}. (As in that case this can be
remedied by
multiplying through by
$det(q_{ab})$.) Finally, there is a new constraint,
\begin{equation}
g=\partial_c \tilde{p}^c
\end{equation}
which generates (33). This, however, is not independent of (38) as
\begin{equation}
\partial_a G^a = g
\end{equation}
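Indeed, since $\tilde{\pi}^{ab}$ is antisymmetric,
\[
\partial_a G^a = \partial_a \partial_b \tilde{\pi}^{ab} + \partial_a
\tilde{p}^a = \partial_a \tilde{p}^a = g \, .
\]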
As a result, there are now three independent gauge
constraints and six
each of canonical coordinates and momenta. Thus, the
theory now has three
degrees of freedom per point. The two additional
degrees of freedom are
reflected in the fact that in addition to the one gauge
invariant field $\tilde{W}^*$,
we now have the gauge invariant two form $F_{ab}$.
Three of these four
gauge invariant degrees of freedom are independent,
because we have
\begin{equation}
dF_{abc} = -W_{abc}
\end{equation}
As a result, we can associate gauge invariant observables
to each open surface $\cal S$. This is given by
\begin{equation}
T[ {\cal S} ] = e^{{\imath \over k} \int_{\cal S} F}
\end{equation}
The Poisson brackets of this with the canonical
momenta $\tilde{\pi}^{ab}$ and
$\tilde{p}^a$ are given by
\begin{equation}
\{ \tilde{\pi}^{ab} (x) , T[{\cal S}] \} = -{\imath \over k} \int d^2S^{ab}
(\sigma )
\delta^3 (x , S(\sigma ) ) T[{\cal S}]
\end{equation}
\begin{equation}
\{ \tilde{p}^a (x) , T[{\cal S}] \} = {\imath \over k} \int ds
\delta^3 (x , \partial S(s ) ) \dot{\partial {\cal S}}{}^a(s) T[{\cal S}]
\end{equation}
The surface representation defined by (23) and (24) can be
extended in the obvious
way. The arguments of the states are now open
surfaces and the
obvious combination laws of surfaces hold. In addition to
(24), which still
holds, there is the operator
\begin{equation}
\hat p^a (x) \Psi [{\cal S}] = {\hbar \over k}
\int ds
\delta^3 (x , \partial S(s ) ) \dot{\partial {\cal S}}{}^a(s) \Psi [{\cal S}] .
\end{equation}
Finally, one may check that the gravitational degrees of freedom
may be added and the area operator defined, so that all the
results of the previous section extend naturally to the surface
representation with boundaries.
\section{A diffeomorphism invariant loop operator}
Given that the matter fields specify a set of surfaces with boundaries,
we may imagine constructing a diffeomorphism invariant
holonomy operator, analogous to the $T^0[\alpha ]$ operators of the
kinematical theory, in which the loop $\alpha$ is given by the
boundary $\partial {\cal S}_I$ of the surface determined by the
$I$'th
matter field.
To do this we first need to construct an appropriate diffeomorphism
invariant classical observable that will measure
the holonomy of $A_a^i$ on such loops.
This can be done by using the fact that
$\tilde{p}^a$, by virtue of its being
a divergence free vector density, defines a
congruence of flows. These flows may be labeled by a two
dimensional
coordinate $\sigma^\alpha$, with $\alpha =1,2$, which may be
considered to
be scalar fields on $\Sigma$ that are constant along the flows.
The idea is to
define a generalization of the trace of the
holonomy from a curve to a congruence
by taking the infinite product of the traces of the holonomies over
each
curve in the congruence. This may be done in the following way.
Each divergence free vector density may be written as a two form in
terms of the two functions $\sigma^\alpha $ as \cite{carloted},
\begin{equation}
p^*_{ab }= (d \sigma^1 \wedge d \sigma^2 )_{ab}
\end{equation}
where the $\sigma^\alpha$ are two scalar functions that are constant
along
the curves of the congruences and so may be
taken to label them. The curves of the congruences may be written
as
$\gamma^a_p (\sigma , s) $ and satisfy,
\begin{equation}
\dot{\gamma}^a_p(\sigma , s) \equiv {d \gamma^a_p (\sigma , s)
\over ds}
= \tilde{p}^a
\end{equation}
We may note that because each $\tilde{p}^a$ is
divergence free it is the case that through every point $x$ of
$\Sigma $ there passes at most one curve of
the congruence. We will denote this curve by $\gamma_{p}(x)$.
We may
take as a convention that if no curve of the congruence passes
through $x$
we have $\gamma_{p}(x)=x$, which is just the degenerate
curve whose image is just the point $x$. Further, note that we
assume
that either appropriate boundary conditions have been imposed
which
fix the gauge at the boundary or we are working in the context of a
closed manifold $\Sigma$, for which the curves $\gamma_p (x)$ are
closed.
We may then define a
classical observable which is the trace of
the holonomy of the connection around the curve $\gamma_{p}(x)$.
\begin{equation}
W[p,A](x)= Tr U_{\gamma_{p}(x)}
\end{equation}
where $U_\gamma$ is the usual path ordered holonomy of $A$ on
the curve
$\gamma$. We may note that the observable $W[p,A](x)$
transforms as a scalar field.
We may now write a diffeomorphism and gauge invariant observable
which
is
\begin{equation}
T[p,A] \equiv e^{\int d\sigma^1 d\sigma^2 \ln
Tr U_{\gamma_{p,\sigma}}}
\end{equation}
To show that this is indeed diffeomorphism invariant, as well
as to facilitate
expressing it as a quantum operator, it is useful to rewrite
it in the following
way. Let ${\cal S}_{\tilde{p}}$ be an arbitrary two surface
subject only to the condition that it intersects each curve
in the congruence determined by
$\tilde{p}^a$ exactly once
so that ${\cal I}[\gamma_{p,\sigma},{\cal S}_{\tilde{p}}] =1 $.
Then we may write
\begin{equation}
T[p,A]= e^{\int d^2 {\cal S}^{ab}_{\tilde{p}}
p^*_{ab} \ln W[p,A] }
\end{equation}
The diffeomorphism invariance of this observable is
now manifest.
To see why this form may be translated to a
diffeomorphism
invariant quantum
operator, we may note that it reduces
to a simple form if we plug in for $\tilde{p}^a$
the distributional
divergence free vector density
\begin{equation}
\tilde{p}^a_\alpha \equiv \int ds \delta^3 (x, \alpha (s))
\dot{\alpha}^a (s) .
\end{equation}
It is then not hard to show that
\begin{equation}
T[p_{\alpha} ,A] = Tr P e^{\int_\alpha A } = T[\alpha ]
\end{equation}
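A sketch of the computation: for $\tilde{p}^a_\alpha$ the congruence
degenerates to the single curve $\alpha$, so that $W[p_\alpha ,A] =
Tr U_\alpha$ along $\alpha$, and the integral in (51) localizes,
\[
\int d^2 {\cal S}^{ab}_{\tilde{p}}\; p^{*}_{ab} \ln W[p_\alpha ,A] =
{\cal I}[\alpha , {\cal S}_{\tilde{p}}]\, \ln Tr U_\alpha =
\ln Tr U_\alpha \, ,
\]
using the normalization ${\cal I}[\gamma_{p,\sigma} , {\cal
S}_{\tilde{p}}] = 1$; exponentiating then gives (53).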
We may now define a quantum operator $\hat{T}$
corresponding to (51) by replacing
$\tilde{p}^a$ with the corresponding operator (46),
\begin{equation}
\hat{T} = T[\hat{p},\hat{A}] .
\end{equation}
As all the operators in its definition commute, there is
no ordering issue. It is then straightforward to show that
\begin{equation}
< \{ S , \gamma \} | \hat{T}=
< \{ S , \gamma \cup \partial S\} |
\end{equation}
That is, the action of the $\hat{p}$ operators in
(51) is by (46) to
turn the operator into a loop operator for the holonomy
around the
surface. The result is that what the operator $\hat{T}$
does is to
add a loop
to the diffeomorphism equivalence
class $\{ S, \gamma \}$ which
is exactly the boundary of the surface.
Thus, we have succeeded
in constructing a diffeomorphism invariant loop operator.
I close this section by noting two extensions of this result. First,
if one considers the case of Maxwell-Einstein theory, where both
fields are treated in the loop
representation\footnote{The loop representation for Maxwell
fields is described in\cite{abhaycarlo,gambini} and the coupling
of Maxwell to gravity in the Ashtekar formalism is described in
\cite{ART}.}, one has an analogous
operator, where $\tilde{p}^a$ should be taken to be just the electric
field. In this case, if a diffeomorphism invariant quantum state is
given by $\Psi [\{ \alpha , \gamma \}]$ where $\alpha $ are the
abelian
loops that represent the electromagnetic field and $\gamma$ are
the loops that represent the gravitational field and
$\hat{T}$
is
the operator just described we have
\begin{equation}
\hat{T} \Psi [\{ \alpha , \gamma \} ]=
\Psi [\{ \alpha , \gamma \cup \alpha \} ] .
\end{equation}
That is, the operator puts a loop of the self-dual gravitational
connection over each loop of the electromagnetic potential.
Second, all of the considerations of this paper apply to the system
which is gotten by taking the $G \rightarrow 0$ limit of general
relativity in the Ashtekar formalism\cite{Gtozero}. This limit yields
a chirally asymmetric theory whose phase space consists
of all self-dual configurations together with their linearized
anti-self-dual perturbations. In this case there are operators
$\tilde{e}^a_i$ and $A_a^i$ which are canonically conjugate, but the
internal gauge symmetry is the abelian $U(1)^3$ reduction of the
internal
$SU(2)$ gauge symmetry. One then has an operator analogous
to (51), which is just
\begin{equation}
T[e,A]_{G \rightarrow 0 } \equiv
e^{\int_\Sigma \tilde{e}^a_i A_a^i}
\end{equation}
The corresponding quantum operator
has the effect of increasing the winding numbers of loops that
are already present. It is also interesting to note that in this
case, $T[e,A]_{G \rightarrow 0 } $ commutes with the Hamiltonian
constraint, so that it is actually a constant of the motion
\cite{Gtozero}.
\section{A quantum reference system}
I would now like to describe how the preceding
results can be used to construct a physical interpretation
of a very large class of diffeomorphism invariant states.
As I mentioned in the introduction,
the idea of using matter fields to provide a dynamically
defined coordinate system with respect to which a
diffeomorphism invariant interpretation of the
gravitatational fields can be defined
was introduced into quantum gravity by De Witt's paper
\cite{bryce-snail} in which he applied the Bohr-Rosenfeld analysis
to the problem of the measurability of the quantum gravitational
field.
It is interesting to note that in this paper DeWitt
concluded that it was impossible to make measurements in
quantum gravity that resolved distances shorter than the
Planck scale. The results of the present paper reinforce
this result and add to it two important dimensions: first
that, at least in one approach, it is impossible to measure
things smaller than Planck scales because the fundamental
geometrical quantities are quantized in Planck units and
second, that it is areas and volumes\footnote{For the volume
operator see \cite{review-lee}.}, and not lengths,
whose measurements are so quantized.
Let us consider, for simplicity, that there are many species
of antisymmetric-tensor gauge fields, $(C_{ab}^I, b_a^I)$,
labeled by the index $I=1,...,N$, where $N$ can be taken
arbitrarily large. This is a harmless assumption as long
as we are concerned only with spatially diffeomorphism
invariant states. I will come back to this point in the conclusion.
By the straightforward extension of all the results of
the previous section, quantum states are now functions
of $N$ surfaces, ${\cal S}_I$, so that
\begin{equation}
\Psi [\{ \gamma , {\cal S}_I \} ] = < \{ \gamma , {\cal S}_I \} | \Psi >
\end{equation}
We may note that the space of diffeomorphism equivalence classes,
$\{ \gamma , {\cal S}_I \}$ of loops and $N$ labeled open
surfaces is countable\footnote{Note that each surface may be
disconnected.}. The diffeomorphism invariant
state space of quantum gravity coupled to the $N$
antisymmetric tensor gauge fields then has a
countable basis given
by
\begin{equation}
\Psi_{ \{ \alpha , {\cal S}^\prime_I \} }
[\{ \gamma , {\cal S}_I \} ] =
\delta_{ \{ \alpha , {\cal S}^\prime_I \} \{ \gamma , {\cal S}_I \} }
\end{equation}
in the case that the loop $\gamma$ is not
self-intersecting. In the intersecting case, the
form of the basis elements is more complicated because of
the presence of the non-trivial relations among intersecting loops
which result from the identities satisfied by $SU(2)$ holonomies.
For the kinematical case, these relations, and the effect on
the characteristic inner product are described in \cite{review-lee}.
For the present, diffeomorphism invariant, case they have not
yet been completely worked out. However, for the results I
will describe below it is sufficient to restrict attention to
diffeomorphism equivalence classes involving
only non-intersecting loops.
Let us now consider a particular subspace of states of this
form which are defined in the following way. Let us
consider a particular triangulation of the three
manifold, $\Sigma$, labeled $\cal T$. It consists of some
number,
$M$, of tetrahedra, labeled ${\cal T}_\alpha$, where
$\alpha =1,...,M$, that have been
joined by identifying faces. Let us call the faces
${\cal F}_I$ and let us consider only $\cal T$ that contain
exactly $N$ faces so that $I=1,...,N$.
The idea is then to use this triangulation
to construct a quantum coordinate system by identifying
each face
${\cal F}_I$ with the surface ${\cal S}_I$ which is an
excitation of the $I$'th
matter field.
We do this in the following way.
For each such triangulation of $\Sigma$ we can consider a
subspace of states, which I will denote
${\cal S}_{\cal T}$, which consists of all
states that have the form
\begin{equation}
\Psi [\{ \gamma , {\cal S}_I \} ] =
\delta_{ \{ {\cal F}_I \} \{ {\cal S}_I \} }
\psi [\{ \gamma , {\cal S}_I \} ] .
\end{equation}
where the $\delta_{ \{ {\cal F}_I \} \{ {\cal S}_I \} } $
is, again, a topological Kronecker delta that is
equal to one if and only if
each surface ${\cal S}_I$ can be
put in correspondence with the face
${\cal F}_I $ such that all the topological
relations among the surfaces are preserved.
Such
an arrangement of surfaces can be taken to
constitute a quantum reference frame. The
states in ${\cal S}_{\cal T}$ can
then take any value as we vary over the
countable set of diffeomorphism equivalence classes
in which the loops are knotted and linked with the
surfaces in $\cal T$ and with each other in all
possible diffeomorphically inequivalent ways.
If we impose an additional restriction, we can make a
correspondence between a basis for
${\cal S}_{\cal T}$ and a countable set of piecewise
flat three dimensional manifolds based on the
simplicial complex $\cal T$. This restriction is
the following: in any three dimensional simplicial
complex the number of faces, $F({\cal T})$ is greater
than or equal to the number of links,
$L({\cal T})$ \cite{alan-personal}.
For a reason that will be clear in
a moment, let us
restrict attention to $\cal T$ such that
$F({\cal T})=L({\cal T})$.
Let us then consider the characteristic basis for
${\cal S}_{\cal T}$ given by
(59) with $\{ {\cal S}_I \} = {\cal T}$. In any such
state we may then associate a definite value for
the area of each face
in ${\cal T}$, which is given by the eigenvalue
of ${\hat A}^I$.
We may then associate to each set of areas
${\cal A}^I$ a piecewise flat manifold, which I will
call ${\cal M}_{ \{ {\cal A}^I , {\cal S}_I \} }$, which is
composed of flat tetrahedra glued together with the
topology of $\cal T$ such that the areas of the faces are
given by the ${\cal A}^I$. We know that generically
this can be done, because such piecewise geometries are
determined by the edge lengths of the triangulation,
and we have assumed that the number of edges in
$\cal T$ is equal to the number of faces. Thus, we
may in general invert the $N$ relations between the
edge lengths and the areas of the faces to find the
edge lengths. However, when doing this, we need to
be careful of one point, which is the following.
Note that
we have chosen
the signs while taking the square root in (13) so that all
areas are positive. However, if we consider a tetrahedron
in $\cal T$, there is no reason for the areas of the four sides
to satisfy the tetrahedral identities, which imply that the sum
of the areas of any three sides is greater than the area of the
fourth side. This means that we cannot associate to each
tetrahedron of $\cal T$ a metrically flat tetrahedron, if
we require
that the signature of its metric be positive definite.
Instead,
we must associate a flat metric of either positive or
negative signature, depending
on whether or not the classical tetrahedral identities are
satisfied. Thus, whether a particular surface of a
particular tetrahedron is spacelike,
timelike or null depends on how the identities are satisfied
in that tetrahedron.
However, each surface bounds two tetrahedra and there is no
reason that the signature of the metric may not change as the
surface is crossed. Thus, a surface may be, for example,
timelike with respect
to its imbedding in one of the tetrahedra it bounds, and
spacelike in another, as long as the absolute values of the
areas are the same. Similarly, when the edge lengths are
determined from the areas it is necessary to use the
appropriate formula for each tetrahedron, which depends on
the signature of the metric in that tetrahedron.
Thus, the result is that the piecewise flat manifold
${\cal M}_{ \{ {\cal A}^I , {\cal S}_I \} }$ that is determined
from the $N$ areas ${\cal A}^I$ in general contains
flat tetrahedra with different signatures, patched together
so that the absolute values of the areas match. Additional
conditions, which are precisely the tetrahedral identities,
must be satisfied if the geometry of
${\cal M}_{ \{ {\cal A}^I , {\cal S}_I \} }$ is to correspond
to a positive definite metric on $\Sigma$.
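To make the inversion of the areas for the edge lengths concrete
(an illustration for the Euclidean signature case): the area of a
face with edge lengths $a$, $b$ and $c$ is given by Heron's formula,
\[
A(a,b,c) = {1 \over 4} \sqrt{(a+b+c)(-a+b+c)(a-b+c)(a+b-c)} \, ,
\]
so the four face areas of a tetrahedron are four such functions of
its six edge lengths, and in a complex with $F({\cal T}) = L({\cal
T})$ the full set of these relations may generically be inverted.
For a flat Euclidean tetrahedron the tetrahedral identities hold
automatically, so their failure for a given assignment of areas
${\cal A}^I$ is precisely what signals the change of signature
discussed above.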
We may note also that the
correspondence between the piecewise flat
three geometry,
${\cal M}_{ \{ {\cal A}^I , {\cal S}_I \} }$,
and the diffeomorphism equivalence classes
$\{ \gamma , {\cal S}_I \} $ is
not one to one. Given
${\cal M}_{ \{ {\cal A}^I , {\cal S}_I \} }$ we have
fixed only the topology of the surfaces and their
intersection numbers with the loops. There remain
a countable set of
diffeomorphism equivalence classes with
these specifications; they are
distinguished by the knotting of the loops and their
linking with each
other.
Of this remaining information, a certain
amount may be said to correspond to
information about the spacial geometry that
cannot be resolved by
measurements made using the quantum
coordinate system $\cal T$. We
may imagine further refining the quantum
reference system by introducing
new surfaces by subdividing the tetrahedra in $\cal T$.
If we consider how
this may be done while keeping the topological relations
of the loops with
themselves and with the original set of surfaces fixed,
we may see that there
is a sense in which we can obtain a more precise
measurement of the
spatial quantum geometry associated with the topology
of the loops,
$\gamma$. Of course, there is also a danger that by
subdividing too much
we may reach a point where additional surfaces
tell us nothing more about
the quantum geometry; the information about the
matter state and the quantum
geometry is entangled and cannot be easily
separated. In a
further work, I hope to return to the problem
of how to disentangle the
geometrical from the matter information in such
measurements.
At the same time, it is clear that there is
information in the topology of the
loops that is not about the spatial geometry
and so cannot be resolved by
further refinement of the simplex based on the
matter state. This includes
information about the routings through
intersections. It is clear from this
and earlier considerations that the routings
through the intersections carry
information about the degrees of freedom
conjugate to the three geometry.
One can obtain this conjugate
information by measuring the
operators $T^I \equiv T[\partial {\cal S}_I]$
defined
in the previous section. If one measures
all $N$ of these operators, rather than the
$N$ areas, one determines the
parallel transport of the Ashtekar connection
around the edges of each
of the faces of the simplex $\cal T$. Essentially,
this means
that one determines, instead of the areas of
the faces, the left handed curvatures evaluated
at the faces. There is also a classical description
that can be associated with this measurement; it is described
in \cite{me-dieter}.
I would like to close this section by describing the sense in which
the results just described suggest an approach to a measurement
theory for quantum gravity\footnote{These remarks are enlarged
in \cite{me-dieter}.}. The idea is to extend the principle
enunciated by Bohr that
what is observed in quantum mechanics must be
described in terms of
the whole system which includes a specification of both the atomic
system and the measuring apparatus.
In the case of quantum gravity,
the quantum system is no longer an atom, it is the whole spacetime
geometry. As the quantum system is no longer
microscopic, but in fact
encompasses the whole universe, we can no
longer treat the measuring instrument classically while we treat
the spacetime geometry quantum mechanically. Thus,
it is necessary
that
a measuring system that is to be used to determine
something about the spacetime
geometry must be prepared for the measurement
by putting it in some
definite quantum state.
In this paper I have described two conjugate sets of
measurements, which
determine either the areas of, or the left handed
parallel transport around, a set of $N$ surfaces.
However, the basic
features of the measurement process and how we
describe it should
extend to more general measurements.
Any measurement theory must have two components:
preparation and
measurement.
If we are to use this measuring instrument to
probe the quantum
geometry, we must prepare the quantum state of the
measuring instrument appropriately. As we are
interested in describing
the theory at a diffeomorphism invariant level, we must give a
diffeomorphism invariant specification of the quantum state of
the measuring instrument such that, when we act on the combined
gravity matter state with the area operators we measure a set of
areas which are meaningful.
Now, the requirement of diffeomorphism invariance
forbids us from
preparing the measuring system in some state and
then taking the
direct product of that "apparatus state" with a state
of the system.
Instead, the preparation of the measuring system must be
described by
restricting the system to an appropriate diffeomorphism invariant
subspace of the combined apparatus-gravity system. Thus,
what I have done above is to prepare the quantum state of the whole
system in a way appropriate to the specification of the
measuring instrument by restricting the topological relations among
the $N$ surfaces so that they are faces of a given simplex $\cal T$.
This is done by restricting the quantum state of the system, prior to
the measurement, to be of the form (60). After we have made this
restriction, we can be sure that the results of the $N$ measurements
will be a set of $N$ areas that can be ascribed to the faces of the
simplex $\cal T$. Thus, the result of the measurement of the area
operators on prepared states of the form (60) is to produce a partial
description of the spatial geometry which is given by the piecewise
flat manifold ${\cal M}_{ \{ {\cal A}^I , {\cal S}_I \} }$.
Now, the $N$ area operators $\hat{A}^I$ commute with each other,
but they do not make a complete set of commuting
observables. This is
because to each such piecewise flat manifold, which encodes the
results of the $N$ observables, there are a
countably infinite number of
diffeomorphism invariant states in the subspace
${\cal S}_{\cal T}$
which are degenerate as far as the values of the
$\hat{A}^I$ are
concerned.
We would then like to ask whether we can add
operators to the $\hat{A}^I$
to make a complete set of commuting operators.
We certainly can extend
the set, by subdividing some or all of the tetrahedra in $\cal T$ to
produce a simplicial complex with more surfaces. This would
correspond to introducing more matter fields, which would make
it possible to specify more surfaces whose area is to
be measured and
by so doing make a more refined measurement of
the quantum geometry.
But, notice that there is a natural limit to how much
one can refine one's
observations of the quantum geometry because one
can never measure
the area of any surface to be less than one-half Planck area.
Further, note that no matter how large $N$ is, and no matter how
the $N$ surfaces are arranged topologically, there are always a
countably infinite set of states associated to each measurement of
the $\hat{A}^I$'s. In this sense, it seems that one can
never construct
a physical measuring system that
suffices to extract all the information out of the quantum
gravitational field. This is, of course, just a reflection of the fact that
the quantum gravitational field has an infinite number of degrees of
freedom, while any physical measuring instrument can
only record a finite amount of information.
However, note that we have come to an expression of this
fact in a way that is completely diffeomorphism invariant. In
particular,
we have a characterization of a field with an infinite number of
degrees
of freedom in which we do not say how many degrees of freedom are
associated to each "point." This is very good, as we know that no
diffeomorphism invariant meaning can be given to a point of space
or
spacetime.
Of course, there remains one difficulty with carrying
out this type of interpretation, which is that
the problem of time in quantum gravity
must be resolved so that we know how to speak of the time of the
observation. In \cite{me-dieter} I show that
the resolution of the problem of
time may be carried out using
the ideas of spacetime diffeomorphism invariant
observables of
Rovelli\footnote{Note that such observables
have been described in the $2+1$ case by
Carlip\cite{Carlip-time}, in the Gowdy
model by Husain\cite{viqar-time} and in the Bianchi-I model by
Tate\cite{tate-time}.}
\cite{problemoftime}. To do this one constructs
the physical time dependent operators that correspond
to the
$\hat{A}^I$ and $T^I$. These
depend on a time parameter $\tau$ which is the
reading of a physical clock built into the measuring instrument.
The $N$ operators $\hat{A}^I(\tau )$ will then commute with the
Hamiltonian constraint, and so act on physical states; their
eigenvalues return the values of the areas of the $N$ surfaces when
the physical clock reads $\tau$. Given this, there seems to be no obstacle
to the observer
employing the projection postulate and
saying that the quantum state
of the matter plus gravity system is projected into a subspace of the
physical Hilbert space spanned by the appropriate eigenstates of
the $\hat{A}^I (\tau )$ and that this is something that occurs just
after the measurement is made, in spite of the fact that she and her
apparatus are living inside the quantum system under study.
Thus, despite various assertions to the contrary, there seems
to be no difficulty in applying a Copenhagen-like description of the
measuring process to the case of quantum cosmology in spite of the
fact
that the measuring instrument is inside the universe. As long as we
can
prepare the measuring instrument in such a way that the quantum
state
of the whole matter-gravity state space is inside a subspace of the
state
space associated with a particular specification of the measuring
instrument
one can assign meaning to a set of commuting observations.
The implications of this are discussed further in
\cite{me-dieter}.
Finally, we may note that one finds that the
$\hat{A}^I (\tau =0)$ are equal to the area operators constructed
in this paper\cite{me-dieter}, so that
the quantization of areas becomes a
physical prediction based on the spectra of a set of physical
operators of the theory.
\section{Conclusions}
I would like to close by making a number of comments about the
implications of the results obtained here.
1) We see that in each case, when we have succeeded in
constructing the
definition of an operator in such a way that
it is diffeomorphism invariant,
it is automatically finite. This is in accord with
the general arguments that
all spatially diffeomorphism invariant
operators must be finite
that were given previously in \cite{review-lee,carloun}.
This suggests strongly that the
problem of constructing a finite theory of quantum gravity can be to
a great extent resolved at the diffeomorphism invariant
level. The reason
is that once one imposes spatial diffeomorphism invariance
there is no
longer any physical meaning that can be given to a point in space.
As a result, although the theory still has an infinite
number of degrees
of freedom, in the sense discussed in section 6, it is no
longer meaningful
to speak of the field as having a certain number of
degrees of freedom
per point. Instead, there seems to be a natural limitation to how
many degrees of freedom there can be inside of a Planck
volume due to the
discreteness of the spectra of the geometrical operators that measure
area and
volume\cite{review-lee}.
This in
turn suggests that the problem
of finiteness has little to do with the dynamics of the
theory or the choice of matter couplings, which are
coded in the Hamiltonian
constraint.
2) We also see that the conclusions of
previous analyses of the measurement
problem in quantum gravity by DeWitt and
others are confirmed. The key
conclusion of these works was that it should
be impossible to meaningfully
resolve distance scales shorter than the Planck
scale. We see that this is
the case here, because the possible values of physical
areas that can be
obtained from a diffeomorphism invariant measurement
procedure are
quantized in units of the Planck area.
We also see that any particular configuration of the
matter fields that are
used to define the reference system can only be used
to resolve a certain
finite amount of information about the space time
geometry. This is a
consequence of using the quantum theory to describe
the reference system
as well as the gravitational field. This is certainly consistent with
the general observation that diffeomorphism
invariant measurements are
about relations between the gravitational field and the measuring
instruments. If we want a measurement system
which is able to resolve
$N$ different spatial distances, it had better come equipped with $N$
distinguishable components.
3) We see that a large class of doubts about
the physical applicability of the
description of quantum states of geometry by means of the loop
representation can now be put to rest. Note
that any spatial geometry in which the components of
the curvatures are small in Planck units can
be approximated by
a Regge
manifold in which the areas of the faces are
integral multiples of half the
Planck area. As a result we see from the
correspondence between Regge manifolds and
quantum states arrived
at in section 6 that any such spatial geometry can be associated
with a diffeomorphism invariant
quantum state in the loop representation. This allows
us to extend the
discussion of the classical limit of quantum gravity developed in
\cite{weave} to the diffeomorphism invariant level.
4) It would be very interesting to be able to
characterize the quantum
geometry associated with diffeomorphism invariant
states of the pure
gravitational field. The results obtained with matter
fields as reference
systems suggest that there should be a basis of states
which are diagonal
in some set of diffeomorphism invariant operators which
measure the three
geometry and that this basis contains the characteristic
states of non-intersecting knots and links. The problem is
to construct an
appropriate set of diffeomorphism invariant classical
observables which
are functions only of the gravitational field and translate
them into quantum
operators while preserving the diffeomorphism invariance. We
already know how to construct a few such operators,
which measure
the areas of
extremal surfaces and, in the spatially compact case,
the volume
of the universe\cite{review-lee}.
One approach to the construction of such observables
could be by mimicking
the results of this paper by constructing observables that measure
the areas
of surfaces on the faces of a given simplex, and asking that all
the areas are
extremized as the whole simplex is moved around in the geometry.
Constructions along this line are presently under study.
5) Given the present results, a new approach to the construction
of the full
dynamical theory becomes possible. This is to impose an inner
product consistent with the reality conditions at the diffeomorphism
invariant level and then project the Hamiltonian constraint into the
resulting Hilbert space of diffeomorphism invariant states.
The physical
state space would then be found as a subspace of the space of
diffeomorphism invariant states. The
main difficulties facing such an approach are the problem
of expressing
both the reality conditions and the Hamiltonian constraint in
diffeomorphism
invariant forms.
6) As I commented above, the form of the Hamiltonian constraint (7)
for gravity coupled
to the simple, massless antisymmetric tensor gauge
field is particularly simple.
It would be very interesting to see if solutions to the
Hamiltonian constraint
for the coupled gravity matter system could be obtained,
if not exactly, in the
context of some perturbative expansion. It is interesting to
note that exact solutions
can be obtained in the strong coupling limit in which $k$ is
taken to zero
(because $k$ is inverse to what would usually be written
as the coupling
constant). In
this limit, only the second term of (7) survives. It is easy
to show, using
a regularization of the type introduced in \cite{carlolee}
that one then has a class of solutions
of the form $\Psi [\{ {\cal S},\gamma \}]$ in which the loops
never intersect the
surfaces or in which the loops always lie in the surfaces.
It would be very
interesting to then develop a strong coupling expansion
to construct
approximate expressions for solutions
for finite $k$. It would also be very interesting to see if
one could recover
from the semiclassical limit of the gravity matter system
the solutions to the
Schroedinger equation described in \cite{rodolfo-antisym}.
7) It is worth noting that surfaces play an
interesting role in two
mathematical developments connected to the Ashtekar variables
and the loop representation. In \cite{baez}
Baez extends the loop representation to the case in which the
spatial manifold has a boundary, and shows that in this
case there is an interesting
algebra of operators that acts on the diffeomorphism
invariant states. In
\cite{catalouis}, Crane proposes a new interpretative
scheme for quantum gravity
in which Hilbert spaces of states coming from conformal
field theories are defined
on surfaces which are identified with observers
and measuring instruments. Both
proposals need to be completed by the construction
of explicit diffeomorphism
invariant observables associated to surfaces, and it
would be interesting
to see if the operators
described here can thus play a role in these proposals.
8) Finally, I would like to address the issue of the
use of $N$ separate
matter fields to label the operators that measure the
areas of the $N$
surfaces. This is clearly necessitated by the idealization
in which I use
the values of a field to specify a set of physical
surfaces in a very simple
way. The point is that there must be a physical
way to distinguish the
$N$ different surfaces in terms of the configurations
of the matter fields.
In real life, in which measuring instruments of
arbitrary complexity are
constructed from a small number of fields, there is
no difficulty with specifying quantum states associated
to some degree of
precision with an arbitrary number and configuration
of surfaces. In
a realistic situation the configuration is complex
enough to allow an
intrinsic labeling of the different
surfaces.\footnote{This accords with the observation
stressed by Barbour that real physical observables are
well defined because the world is sufficiently complex to
allow the events of spacetime to be distinguished
by the values of the physical fields \cite{julian-heap}.} Of course,
another issue also
arises when we construct the surfaces out of
realistic physical fields,
which is
that there will be restrictions on the accuracy of the measurements of the
areas due to the fact that matter is made out of atoms. It is not,
however,
impossible that such limitations can be overcome by a clever use of
matter and other fields to specify very small surfaces. What the
present
results suggest, however, is that no
matter how clever we are with the design
of our measuring instruments, it will be impossible to
measure the area of
any physical surface to be less than half the Planck area.
\section*{ACKNOWLEDGEMENTS}
I would like to thank Abhay Ashtekar, Rodolfo Gambini and
Carlo Rovelli for
critical readings
of a draft of this manuscript. I am very grateful to them and to
Julian Barbour and Louis Crane for crucial
conversations about this work and
the general problem of constructing
diffeomorphism invariant observables. I would also like
to thank Alan Daughton and Rafael Sorkin for conversations about
simplicial manifolds.
This work was supported by the National
Science Foundation under grants PHY90-16733 and
INT88-15209 and
by research funds provided by Syracuse University.
\section{Introduction}
\subsection{The problem}
Let $\Box$ be a convex polytope all of whose vertices belong to a lattice $M$.
The question of calculating the number of points of
$M$ contained in $\Box$ is a well-known
one in convex geometry. The oldest formula appears to be
Pick's classical result \cite{pick}, valid for arbitrary
polygons in 2 dimensions:
$$\#(\Box\cap M)={\rm Area\,}(\Box)+{1\over2}\#({\rm
boundary}(\Box)\cap
M)+1.$$
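As a concrete illustration (nothing later depends on it), Pick's formula is
easily checked by machine. The following minimal Python sketch, which assumes
only the shoelace formula for the area and a brute-force scan of lattice
points, verifies it for the triangle with vertices $(0,0)$, $(4,0)$ and
$(0,3)$:
\begin{verbatim}
from math import gcd

# Triangle with vertices (0,0), (4,0), (0,3).
verts = [(0, 0), (4, 0), (0, 3)]
edges = list(zip(verts, verts[1:] + verts[:1]))

# Area via the shoelace formula.
area = abs(sum(x1 * y2 - x2 * y1 for (x1, y1), (x2, y2) in edges)) / 2

# Lattice points on the boundary: gcd(|dx|, |dy|) per edge.
boundary = sum(gcd(abs(x2 - x1), abs(y2 - y1))
               for (x1, y1), (x2, y2) in edges)

# All lattice points, by direct enumeration of the triangle
# {x >= 0, y >= 0, 3x + 4y <= 12}.
total = sum(1 for x in range(5) for y in range(4) if 3 * x + 4 * y <= 12)

assert total == area + boundary / 2 + 1   # Pick's formula
\end{verbatim}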
Following Ehrhart's work on Hilbert polynomials, Macdonald
\cite{mac:lattice,mac:poly} subsequently generalised Pick's
formula to arbitrary $n$. His formula expresses
the volume of $\Box$ in terms of the number of lattice points
of its multiples $k\Box$ for
finitely many integers $k$. Although these formulae are valid
for arbitrary (non-convex) polytopes, they do not give any
convenient way of calculating either the volume or the
number of
lattice points of $\Box$.
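To see in the simplest terms how such counting formulae determine the volume,
note that for the standard triangle $T$ with vertices $(0,0)$, $(1,0)$,
$(0,1)$ one has $\#(kT\cap{\Bbb Z}^2)=(k+1)(k+2)/2$, a polynomial in $k$ whose
leading coefficient is ${\rm Area}(T)=1/2$. A minimal Python sketch of this
dilation count, included purely for illustration:
\begin{verbatim}
# Lattice points in the dilates k*T of the triangle T with vertices
# (0,0), (1,0), (0,1): the count is (k+1)(k+2)/2, a polynomial in k
# whose leading coefficient 1/2 is the area of T.
def count(k):
    return sum(1 for x in range(k + 1) for y in range(k + 1)
               if x + y <= k)

for k in (1, 2, 3, 10, 100):
    print(k, count(k), count(k) / k ** 2)   # the ratio tends to 1/2
\end{verbatim}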
A review of this and other problems concerning lattice
points can be found
in \cite{hammer,erdos}.
{}From an elementary point of view, for large polytopes one
expects the volume
to be a good approximation to the
number of lattice points, so that one can imagine a general
formula of the
form
\begin{equation}\label{eq:RR}
\mbox{number of points = volume + correction terms}
\end{equation}
where the correction terms are negligible in the large
limit. The formula we present here is however quite different in
nature.
\subsection{The results}
Given a parameter $\zeta$, to each extreme point $\alpha$
of a simple convex polytope $\Box$ we associate a
rational number depending on the local geometry of $\Box$ at $\alpha$. The sum
of these numbers is independent of $\zeta$ and yields the number of lattice points in $\Box$
(Theorem \ref{thm:number}). Our main formula (Theorem \ref{thm:formula-sing})
is more general, since it expresses not just the number, but {\em which\/}
points of the lattice belong to the polytope, as a finite Laurent polynomial in
$n$ variables (the lattice points corresponding to the monomials via $m\mapsto
x^m=x_1^{m_1}x_2^{m_2}\cdots x_n^{m_n}$). I give an initial form of this using
the Lefschetz fixed-point theorem for orbifolds. By expanding in Laurent series
this is shown to be equivalent to another formulation (Theorem
\ref{thm:formula-sing-b}) given by Brion \cite{brion} which does not involve
cyclotomic sums. I use this form to calculate the number of lattice points. The
volume of $\Box$ is obtained by taking the leading order terms for finer and
finer subdivisions of the lattice. The Laurent series expansions extend
Ishida's \cite{ishida} and provide a convex geometric interpretation of the
formula (Theorem \ref{thm:chi-decomposition}). This in turn suggests a proof of
the formula involving {\em no toric geometry\/} --- only convex geometry and
elementary Laurent expansions. This could be considered as a variation of
Ishida's proof \cite{ishida} based on the contractibility of convex sets.
This paper is an amplification of my 1990 transfer dissertation at Oxford
University \cite{sacha}. This was originally written whilst I was unaware of
Michel Brion's 1988 paper \cite{brion}, where a toric approach is used to
calculate the number of lattice points. There has also been a paper by Ishida
\cite{ishida} where Laurent expansions similar to mine are performed.
This is the revised version of my original which takes these works into
account. Let me briefly mention their relationship to this paper.
Brion relies on the Lefschetz-Riemann-Roch theorem for equivariant K-theory
\cite{BFQ} and obtains theorem \ref{thm:formula-sing-b}. He calculates the
number of lattice points by subdividing the tangent cones into basic cones.
The formula that I obtain using the Lefschetz fixed point theorem involves
instead cyclotomic sums for the action of the finite quotient group. By
extending Ishida's Laurent series expansions \cite{ishida} in section
\ref{sec:laurentexpansions} of this article, I prove that the two are
equivalent, and provide a combinatorial interpretation of the formula. It is
also not necessary for me to subdivide the tangent cones in order to obtain a
formula for the number of lattice points.
\subsection{The method}
Our main tool is the theory of toric varieties. This
associates a holomorphic line bundle $L_\Box$ over a complex
orbifold $X_\Box$ to any n-dimensional simple polytope $\Box$
on a lattice $M$. The variety comes equipped with the action of an algebraic
$n$-torus $T_N$ (the character group of $M$) and $L_\Box$ is equivariant with
respect to this action. Its cohomology is trivial in positive dimension,
whereas its space of sections is naturally isomorphic to a vector space
generated by the lattice points in $\Box$.
In \cite{bern,khov}, the Riemann-Roch theorem is used to
calculate the number of lattice points in $\Box$. This yields a
formula similar to equation (\ref{eq:RR}) above. The
problem with this approach, however, is that the correction
terms are not readily computable.
In this paper I follow an idea of Atiyah and exploit the
torus action. I apply Atiyah \& Bott's Lefschetz fixed point theorem
\cite{ab:lefI} --- suitably extended to orbifolds \cite{kawasaki} --- to the
(geometric endomorphism induced by the) action of $t\in T_N$ on the
$d''$-complex of $L_\Box$. The $d''$-complex is elliptic \cite{ab:lefII} and its
cohomology groups are (canonically isomorphic to) those of $(X_\Box,L_\Box)$. The
fixed points of the torus action on $X_\Box$ correspond to the extreme points of
$\Box$. The Lefschetz theorem in this case expresses the equality between the
Lefschetz number (an element of ${\Bbb C}[M]$)
and the sum of the indexes $\nu_\alpha$ for $\alpha$ in the set of extreme
points. The $\nu_\alpha$ define elements of ${\Bbb C}(M)$. The formula I obtain
initially involves a sum over the characters of the finite abelian groups which
characterise the singularities at the points $P_\alpha\in X_\Box$ corresponding to
$\alpha\in{\rm ext}\,\Box$. By studying characteristic series for cones in section
\ref{sec:laurentexpansions} I eliminate the summation over group elements.
If one restricts $t$ to the one-parameter subgroup of the torus determined by
an element $\zeta$ of its Lie algebra, one obtains an equality between a
polynomial and a sum of rational functions in one variable. When $t\to 1$ the
polynomial tends to the number of lattice points of $\Box$, and this is
given by the sum of the constant terms in the one variable Laurent series for
the rational functions: this gives theorem \ref{thm:number}. By identifying the
coefficient of the leading order terms in the asymptotic expansions of the
formula for submultiples of the lattice --- the `classical limit' in quantum
terminology --- I derive a formula for the volume of $\Box$ in Theorem
\ref{thm:volume}.
I review the toric geometry results I shall need in the first part of this
paper. The reader who is familiar with the notation in Oda \cite{oda} can {\tt
GOTO PART II}, which contains the application proper.
\subsection{Acknowledgments}
I would like to thank Michael Atiyah and Peter Kronheimer for their stimulating
ideas and encouraging support. Thanks also to Frances Kirwan for her
suggestions and to Mark Lenssen, Jorgen Andersen and Jorge Ramirez-Alfonsin
for interesting discussions. I was supported by a Rhodes Scholarship while I
did this research.
\subsection{Notation}
\label{subsec:notation}
Throughout this paper, let $N\cong {\Bbb Z}^n$ denote an n-dimensional
integral lattice, $M\cong\hom_{\Bbb Z}(N,{\Bbb Z})$ its dual and
$N_{\RR}=N\otimes_{{\Bbb Z}}{\Bbb R}$ its associated real vector space. The
complex torus $N\otimes_{{\Bbb Z}}
{\Bbb C}^\times\cong\hom_{{\Bbb Z}}(M,{\Bbb C}^\times)$ is denoted $T_N$ and the compact
real sub-torus $N\otimes S^1\subset N\otimes {\Bbb C}^\times$ is denoted $CT_N$.
If $A$ is any commutative ring with identity and $S$ any additive semi-group,
we write $A[S]$ for the group algebra of $S$ with coefficients in $A$; this is
generated by elements ${\bf e}(s)$ for $s\in S$ satisfying the relations
${\bf e}(s){\bf e}(s')={\bf e}(s+s')$. We write $A(S)$ for its total quotient ring
(i.e., its field of fractions if $A={\Bbb C}$).
Occasionally I choose coordinates $t_i$ for $T_N$.
This is equivalent to choosing generators $n_i$ for $N$. I denote the dual
generators by $m^j\in M$. Then if $\alpha\in M$ and $z \in T_N$ have
coordinates $\vect\alpha1n$ and $\vect z1n$ with respect
to the appropriate bases we have
$$\alpha(z)=z^\alpha=z_1^{\alpha_1}z_2^{\alpha_2}\cdots
z_n^{\alpha_n}.$$
This identifies ${\Bbb C}[M]$ with the Laurent polynomials in the variables $t_i$.
\newpage
\part{Toric Geometry}
The theory of toric varieties establishes correspondences
between convex geometry in $n$ real dimensions and the
geometry of compactifications of $n$-dimensional complex
tori. I refer to \cite{kempf,oda,danilov}.
Briefly, there is a functor that associates, to a pair
$(N,\Sigma)$ (where $\Sigma$ is a fan in $N$),
an irreducible normal Hausdorff complex analytic space $X_{N,\Sigma}$.
A convex polytope $\Box$ in $M$ determines
a unique fan $\Sigma$ in $N$, and we set \(X_\Box = X_{N,\Sigma} \). The
polytope contains more information than simply its cone
structure, and this determines a piecewise linear function
$h=h_{\Box}$ on the support $|\Sigma|\subset N_{\RR}$ of $\Sigma$. This
corresponds under the functorial construction above to an
equivariant line bundle $L_h$ on $X_{N,\Sigma}$, which we denote by
$L_{\Box}$.
\section{Cones and Affine Toric Varieties}
\subsection{Cones}
Let $V$ denote a vector space and $V^*$ its dual.
A {\em cone\/} in a vector space $V$ is a finite intersection
of half-spaces in $V$. Cones are always convex and polyhedral. I shall take
them to be also strongly convex, namely such that they do not contain any
non-zero subspace of $V$.
For $\ntup{v_1}{v_k} \in N_{\RR}$,
let $\gen{v_1}{v_k}$ denote the smallest cone containing
$\ntup{v_1}{v_k}$. Any cone is generated in this way. A cone
is said
to be {\em simplicial\/} if it can be generated by linearly
independent elements of $N_{\RR}$. If it can be generated by part
of a ${\Bbb Z}$-basis of $N$, then the cone is called {\em
basic\/}. Finally, a cone is
said to be {\em integral\/} with respect to $N$ if it can be
generated by elements of $N$.
When we speak of a {\em cone in a lattice\/} $N$ we mean a
cone in $N_{\RR}$ which is integral with respect to $N$. I only
consider such cones henceforth.
The {\em dimension\/} of a cone is the dimension of the subspace it
generates. By the {\em interior\/} of a cone we usually mean the relative
interior in the subspace it generates.
\subsection{Duality}
Given a subset $A\subset V$ its {\em dual\/}
$A{}^{\vee}\subset V{}^{\ast}$ is defined by:
\[ A{}^{\vee}=\{\theta\in V{}^{\ast} : \forall v\in A,\
\ip{\theta}{v} \ge 0\}.\]
\begin{prop}
\label{prop:duality}
The dual of a cone (respectively, a simplicial cone, a basic cone,
or an integral cone) is a cone (respectively a simplicial cone,
a basic cone, or an integral cone). Moreover, for
any cone $\sigma$ we consider, we have $(\sigma{}^{\vee}){}^{\vee}=\sigma$.
\end{prop}
For a proof of all the results regarding cones, see
\cite{rockaf}.
A summary of the results I require will be found in
\cite{oda}.
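In two dimensions duality is easy to make completely explicit: the dual of
$\gen{v_1}{v_2}$ is spanned by the inward normals of the two bounding rays. A
minimal Python sketch (my own illustration, assuming the generators are
primitive and the cone strongly convex):
\begin{verbatim}
# Dual of a 2-D cone <v1, v2>, spanned by the inward normals of the
# two bounding rays; rot(v) rotates v by 90 degrees.
def rot(v):
    return (-v[1], v[0])

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def dual_cone(v1, v2):
    n1 = rot(v1) if dot(rot(v1), v2) > 0 else rot((-v1[0], -v1[1]))
    n2 = rot(v2) if dot(rot(v2), v1) > 0 else rot((-v2[0], -v2[1]))
    return n1, n2

print(dual_cone((1, 0), (1, 2)))   # prints ((0, 1), (2, -1))
\end{verbatim}
One checks directly that both output vectors pair non-negatively with $v_1$
and $v_2$, as Proposition \ref{prop:duality} requires.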
\subsection{Affine Toric Varieties}
Let $\sigma$ be a cone in $N$. Recall \cite[Prop.\ 1.1]{oda} that
the subset of $M$ given by
\[ S_{\sigma} = M\cap \sigma{}^{\vee} \] is finitely generated as an
additive semigroup, generates $M$ as a group, and is
saturated. Such semigroups are in one-one correspondence
with cones in $N$.
Denote by $U_{\sigma}=U_{N,\sigma}$ the set of semigroup
homomorphisms from
$(S_{\sigma}, +)$ to $({\Bbb C}, \cdot)$, namely
\[U_{\sigma}=\{u:S_{\sigma}\to{\Bbb C}: u(0)=1,
u(m+m^{\prime})=u(m)u(m^{\prime}),\forall m,m^{\prime}
\in S_{\sigma}\}.\]
This can be given the structure of an n-dimensional
irreducible
normal complex analytic space by choosing generators
$\ntup{m_1}{m_p}$ for $S_{\sigma}$ and embedding $U_{\sigma}$ in
${\Bbb C}^p$ via the
evaluation maps ${\bf ev}(m_i) : u\mapsto u(m_i)$ on the
generators $m_i$.
The structure is inherited from the usual structure on
${\Bbb C}^p$ and is
independent of the generators chosen.
In other words, $U_{\sigma}$ is just equal to the (set of points of the) affine
scheme $\mbox{Spec}({\Bbb C}[S_{\sigma}])$. Identifying $U_{\sigma}$ with its ${\Bbb C}$-points
corresponds to identifying ${\bf ev}(m)$ with ${\bf e}(m)$. I spend little
effort making the distinction. The following proposition is easy to show
\cite[Th. 1.10]{oda}:
\begin{prop}
\label{prop:non-sing}
The variety $U_{\sigma}$ is non-singular if and only if $\sigma$
is basic.
\end{prop}
\section{Fans and General Toric Varieties}
\subsection{Faces, Fans and Gluing}
Let $\sigma$ be a cone in $N$.
\begin{dfn}
A {\em face\/} of $\sigma$ is a subset of the form
$\sigma\cap\{m_0\}{}^{\bot}$,
where $m_0\in M=\hom(N,{\Bbb Z})$ is non-negative on $\sigma$. A
face of a cone is also a cone.
\end{dfn}
We immediately have:
\begin{lemma}
\label{lemma:open}
If $\tau$ is a face of $\sigma$ then, for some $m_0\in M$,
we have
\[ U_{\tau} = \{u\inU_{\sigma} : u(m_0)\ne 0\},\]
so that $U_{\tau}$ is naturally an open subset of $U_{\sigma}$.
\end{lemma}
Given this, one constructs collections of cones (called
{\em fans\/}) which have the property that their
corresponding varieties fit together in a natural way:
\begin{dfn}
A {\em fan\/} in $N$ is a collection $\Sigma=\{\sigma:
\sigma \mbox{ a cone
in }N\}$ satisfying the following conditions:
\begin{itemize}
\item if $\tau$ is a face of $\sigma$ and $\sigma \in\Sigma$, then
$\tau\in \Sigma$.
\item $\sigma\cap\sigma^{\prime}$ is a face of both $\sigma$ and
$\sigma^{\prime}$, for all
$\sigma,\sigma^{\prime}\in\Sigma$.
\end{itemize}
\end{dfn}
The set of cones of $\Sigma$ of
dimension $k$ is
called the {\em k-skeleton\/} of $\Sigma$ and is denoted
$\Sigma^{(k)}$.
The union of all the cones of $\Sigma$ is called the {\em
support\/}
of $\Sigma$ and is denoted $|\Sigma|\subset N_{\RR}$.
\begin{thm}
\label{thm:general_toric}
The {\em toric variety\/} associated to $(N,\Sigma)$ is the
space obtained by gluing together the affine varieties
$U_{N,\sigma}$ for $\sigma\in\Sigma$, using lemma \ref{lemma:open}.
It is an n-dimensional Hausdorff complex analytic space
$X_{N,\Sigma}$ which is irreducible and normal \cite[Theorem
1.4]{oda}. It is compact if and only if $\Sigma$ is {\em
complete\/}, namely if and only if $|\Sigma|=N_{\RR}$.
\end{thm}
\subsection{The torus action}
The torus $T_N$ acts on $U_{\sigma}$ by \((t\cdot u)(m)=t(m)u(m)\),
and this gives an action on $X_{N,\Sigma}$. For $\sigma=\{0\}$, one has
$U_{\{0\}}=T_N$, and the action coincides with group
multiplication on the torus.
The $T_N$-orbits on $X_{N,\Sigma}$ are given by the
quotient algebraic tori
\begin{equation}
\label{eq:orb}
\mbox{orb}(\tau)=\hom_{\Bbb Z}(M\cap\tau{}^{\bot},{\Bbb C}^\times),
\end{equation}
for each $\tau\in\Sigma$. The orbit corresponding to $\tau$
has
dimension equal to the codimension of $\tau$ in $N_{\RR}$. It is
also
easy to see that
$U_{\sigma}$ decomposes as the disjoint union of the orbits
corresponding to its faces, and that $\mbox{orb}(\sigma)$ is the
only
closed orbit in
$U_{\sigma}$. I record a special case of this for later use:
\begin{lemma}
\label{lemma:fixpts}
The fixed points of the $T_N$ action on $X_{N,\Sigma}$ are in
one-one
correspondence with the orbits
$\mbox{orb}(\sigma)\in U_{\sigma}$, for the cones
$\sigma$ in the $n$-skeleton $\Sigma^{(n)}$.
\end{lemma}
\subsection{Functoriality}
Recall the following characterisation of toric varieties:
\begin{quote}
\em
$X$ is a toric variety if and only if it is an irreducible
normal variety, locally of finite type over ${\Bbb C}$, with a
densely embedded torus whose action on itself extends to
the whole variety.
\end{quote}
The assignment $(N,\Sigma) \mapsto X_{N,\Sigma}$ is a functor of
categories:
\begin{dfn} A {\em map of fans\/}
\(\phi:(N^{\prime},\Sigma^{\prime})\to(N,\Sigma)\) is
a ${\Bbb Z}$-linear homomorphism \(\phi:N^{\prime}\to N\)
whose scalar
extension \(\phi_{\Bbb R}:N^{\prime}_{\Bbb R}\to N_{\Bbb R}\) satisfies the
following property: for each $\sigma^{\prime}\in\Sigma^{\prime}$,
there
exists
$\sigma\in\Sigma$ such that $\phi_{\Bbb R}(\sigma^{\prime})\subset\sigma$.
\end{dfn}
\begin{thm} \cite[page 19]{oda} A map of fans
\(\phi:(N^{\prime},\Sigma^{\prime})\to(N,\Sigma)\) gives
rise to a holomorphic map
\[\phi{}_{\ast}:X_{N^{\prime},\Sigma^{\prime}}\to X_{N,\Sigma}\]
whose restriction to the open subset $T_{N^{\prime}}$
coincides
with the
homomorphism of algebraic tori
\(\phi_{{\Bbb C}^\times}:N^{\prime}\otimes_{{\Bbb Z}}{\Bbb C}^\times\to
N\otimes_{{\Bbb Z}}{\Bbb C}^\times.\)
Through
this homomorphism, $\phi{}_{\ast}$ is $(T_{N^{\prime}}, T_N)$-
equivariant.
Conversely any holomorphic map $\psi:X'\to X$ between toric
varieties which restricts to a homomorphism $\chi: T'\to T$
on the algebraic tori $T'$ and $T$ in such a way that $\psi$
is $\chi$-equivariant corresponds to a unique
${\Bbb Z}$-linear homomorphism
\(f:N^{\prime}\to N\) giving rise to a map of fans
\((N^{\prime},\Sigma^{\prime})\to(N,\Sigma)\)
such that $f{}_{\ast}=\psi$.
\end{thm}
\subsection{Finite Quotients}
I will be interested in the case when $N^{\prime}$ is a
${\Bbb Z}$-submodule of $N$ of finite index and
$\Sigma^{\prime}=\Sigma$. I
write $X^{\prime}$
and $X$ for the corresponding varieties:
\begin{prop} With the data as above, $X^{\prime}\to X$
coincides
\label{prop:quotient}
with the projection of $X^{\prime}$ with respect to the natural
action of the
finite group
\[K= N/N^{\prime} \cong\hom_{\Bbb Z}(M^{\prime}/M,{\Bbb C}^\times)=
\ker[T_{N^{\prime}}\to T_N].\]
\end{prop}
\begin{proof}
\cite[Cor. 1.16, p.22]{oda}
\end{proof}
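A minimal example to keep in mind (my own illustration): take $N={\Bbb Z}$,
$N'=2{\Bbb Z}$, and let $\Sigma$ consist of the cone $\sigma={\Bbb R}_{\ge0}$ together
with its face $\{0\}$. Then $X'\cong{\Bbb C}$ with coordinate $u={\bf e}(m')$, where
$m'$ generates $M'$, while $X\cong{\Bbb C}$ with coordinate
${\bf e}(m)={\bf e}(2m')=u^2$; the map $X'\to X$ is $u\mapsto u^2$, the quotient
by $K=N/N'\cong{\Bbb Z}/2$ acting by $u\mapsto -u$.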
\section{Toric Varieties, Equivariant Line Bundles and
Convex Polytopes}
\subsection{Polytopes}
\label{subsec:polytopes}
Recall first some basic notions of convex geometry.
A {\em convex polytope\/} $\Box$ in a vector space $V$ is a
bounded intersection of a finite number of affine half-
spaces of
$V$. The set of extreme points of $\Box$ is denoted
${\rm ext}\,\Box$. Since $\Box$ is bounded, it is equal to the convex
hull of ${\rm ext}\,\Box$.
By a {\em polytope on the lattice\/} $M$ we mean a polytope
in
$M_{\Bbb R}$ such that ${\rm ext}\,\Box\subset M$. Suppose $\Box$ is
such a
polytope, and let $\alpha$ be an extreme point. I define the
{\em (tangent) cone of $\Box$ at $\alpha$\/} to be the
cone $C_\alpha$ in $M$ given by:
\begin{equation}
\label{eq:calpha}
C_\alpha={\Bbb R}_{\ge 0}(\Box-\alpha)=\{r(v-
\alpha):r\ge0,v\in\Box\}.
\end{equation}
Let $\lambda_{\alpha}^{i}, i=1,\dots,k$ be the shortest
generators
for $C_{\alpha}$ which belong to the lattice $M$. I call
these the
{\em edges of $\Box$ emanating from}\/ $\alpha$, or simply the {\em
edge vectors for
$\Box$ at $\alpha$}. If $C_\alpha$ is simplicial (respectively,
basic), then $\Box$ is called {\em simple\/} (respectively,
{\em basic}) at $\alpha$. Henceforth, all the polytopes we
consider are convex, integral
and simple at all extreme points. They may be non-basic.
\subsection{Toric Varieties Defined by Polytopes}
\subsubsection{The Fan Defined by a Polytope}
The construction of $C_\alpha$ described in the previous
section can be generalised to show that a polytope $\Box$ in
$M$ defines a complete
fan in $N$. To each face $\Gamma$ of $\Box$ we associate the
cone
$C_{\Gamma}$ in $M$ defined by
\[ C_{\Gamma}={\Bbb R}_{\ge 0}(\Box-m_{\Gamma}),\]
where $m_{\Gamma}$ is any point in the relative interior of the
face $\Gamma$. If $\Gamma=\{\alpha\}$ we set $C_{\{\alpha\}}=C_\alpha$, as defined
previously in equation (\ref{eq:calpha}). Taking duals one obtains a collection
of cones in $N$
\[\Sigma_\Box=\{\sigma_{\Gamma}=C{}^{\vee}_{\Gamma} :
\Gamma
\mbox{ a face of }\Box\}.\]
One has the following easy lemma:
\begin{lemma}
\label{lemma:fanBX}
$\Sigma_{\Box}$ is equal to the fan consisting of the
cones
$\sigma_{\alpha}=C_{\alpha}{}^{\vee}$, for $\alpha\in{\rm ext}\,\Box$
and
all their faces. It is complete, and its n-skeleton is
\(\Sigma^{(n)}=\{\sigma_{\alpha}: \alpha\in{\rm ext}\,\Box\}\).
\end{lemma}
\subsubsection{The Variety Defined by a Polytope}
I define $X_\Box$ to be $X_{\Sigma}$, for $\Sigma=\Sigma_{\Box}$. By
\cite[Theorem
2.22]{oda}, $X_\Box$ is an orbifold (i.e., it has at worst quotient singularities)
if $\Box$ is simple.
\begin{prop}
\label{prop:XBaction}
The variety $X_\Box$ is compact, and is covered by affine
pieces
$$U_{\alpha}=U_{\sigma_{\alpha}}=\mbox{Spec}({\Bbb C}[M\cap
C_\alpha]),$$
for $\alpha\in{\rm ext}\,\Box,$ each containing a unique $T_N$--fixed
point $P_{\alpha}={\rm orb}\,(\sigma_\alpha)$ (see equation
(\ref{eq:orb})).
Furthermore, when $U_{\alpha}$ is non-singular, the weights
of
the $T_N$ action on the tangent space
$T_{P_{\alpha}}U_{\alpha}$ are given by the edge vectors
for $\Box$ at $\alpha$.
\end{prop}
\begin{proof}
The first claim follows directly from theorem
\ref{thm:general_toric} and lemmas
\ref{lemma:fixpts}
and \ref{lemma:fanBX}. For the second part, observe (prop.\
\ref{prop:non-sing} and \ref{prop:duality}) that $U_{\alpha}$
is
non-singular if and only if the edge vectors at $\alpha$
generate
$M$ as a group. The semigroup $M\cap C_\alpha$ is then free on
these generators. They correspond to the weights of $T_N$ on
$U_\alpha$, and hence, by linearity, to the weights on
$T_{P_{\alpha}}U_{\alpha}.$
\end{proof}
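The standard examples to bear in mind are the unit simplex in ${\Bbb Z}^n$, for
which $X_\Box$ is the projective space ${\Bbb P}^n$, and the unit square in
${\Bbb Z}^2$, for which $X_\Box\cong{\Bbb P}^1\times{\Bbb P}^1$; both polytopes are basic at
every vertex, so the corresponding varieties are smooth, and the fixed points
$P_\alpha$ correspond to the vertices of $\Box$.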
\subsection{Equivariant Line Bundles}
The polytope $\Box$ contains more information than the fan
$\Sigma_\Box$. This extra information turns out to be exactly
what
one needs to specify a $T_N$--equivariant line bundle $L_\Box$
over
$X_\Box$.
\subsubsection{Line Bundles and Piecewise Linear Functions}
\label{subsub:LBPLF}
In general (equivalence classes of) equivariant line bundles
over
$X_{N,\Sigma}$ are in one-one correspondence with the space
$PL(N,\Sigma)$
of {\em piecewise linear functions} on $(N,\Sigma)$, namely
functions \(h:|\Sigma|\to{\Bbb R}\) that are linear on each
$\sigma\in\Sigma$
and which take integer values on the integer points of
$|\Sigma|$.
Defining an element $h\in PL(N,\Sigma)$ involves, by
definition,
specifying an element $l_\sigma\in M$ for each
$\sigma\in\Sigma$
such that $h(n)=\ip{l_\sigma}{n}$ for all $n\in\sigma$.
These
elements determine a line bundle $L_h$
equipped with a $T_N$-action and whose projection
$L_h\toX_{\Sigma}$ is equivariant
with respect to that action. Note that in general, the
elements $l_\sigma$ are not uniquely determined by $h$, but
different choices give rise to equivariantly equivalent
bundles.
The bundle $L_h$ is defined to be trivial over the varieties
$U_\sigma$, with transition functions given by
\begin{equation}
g_{\tau\sigma}(x)={\bf e}(l_\sigma-l_\tau)(x).
\label{eq:trans}
\end{equation}
The action of $T_N$ on the piece
$U_\sigma\times{\Bbb C}\subset L_h$
is defined by
\begin{equation}
t(x,c)=(tx,{\bf e}(-l_\sigma)(t)c). \label{eq:actionL}
\end{equation}
\subsubsection{Cohomology}
The cohomology groups for equivariant line bundles
decompose
under the action of $T_N$ into weight spaces, and can be
expressed as a direct sum (see \cite[Th. 2.6]{oda}):
\[H^q(X_{\Sigma},{\cal O}_{X_{\Sigma}}(L_h))=\oplus_{m\in M}
H^q_{Z(h,m)}(N_{\RR},{\Bbb C}){\bf e}(m),\]
where $Z(h,m)=\{n\in N_{\RR}:\ip{m}{n} \ge h(n)\},$ and
$H^q_{Z(h,m)}(N_{\RR},{\Bbb C})$ denotes the $q$-th cohomology group of
$N_{\RR}$
with support in $Z(h,m)$ and coefficients in ${\Bbb C}$.
\paragraph{The Line Bundle $L_\Box$}
The polytope $\Box$ defines a piecewise linear function
$h_\Box$
on $\Sigma_\Box$ by putting $l_{\sigma_{\alpha}}=\alpha$
(and
$l_\sigma=\alpha$ for the faces $\sigma$ of
$\sigma_{\alpha}$).
The corresponding bundle is denoted $L_\Box$.
Its cohomology is given by \cite[Cor. 2.9]{oda}
\begin{equation}
\label{eq:coho}
H^q(X_\Box,{\cal O}_{X_\Box}(L_\Box))=\left\{
\begin{array}{ll}
{\Bbb C}[M]_\Box = \oplus_{m\in M\cap\Box} {\Bbb C}{\bf e}(m) & \mbox{if $q=0$} \\
0 & \mbox{otherwise}
\end{array}\right.
\end{equation}
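For instance, for the interval $\Box=[0,N]$ in $M={\Bbb Z}$ one has
$X_\Box\cong{\Bbb P}^1$ and $L_\Box\cong{\cal O}(N)$; equation (\ref{eq:coho}) then
recovers the familiar statement that ${\cal O}(N)$ has vanishing higher
cohomology and an $(N+1)$-dimensional space of sections, one weight
${\bf e}(m)$ for each lattice point $m=0,1,\dots,N$ of $\Box$.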
\newpage
\part{The Polytope Formula}
\section{The Lefschetz Fixed-Point Theorem}
Recall \cite[Theorem 4.12]{ab:lefII} the following
application of
the Lefschetz fixed point theorem to the case of holomorphic
vector bundles:
\begin{thm}
\label{thm:lefschetz}
Let $X$ be a compact complex manifold, $F$ a
holomorphic
vector bundle over $X$, $f:X\to X$ a holomorphic map with
simple
fixed points and $\phi:f{}^{\ast} F\to F$ a holomorphic bundle
homomorphism. Let $L(T)$ be the Lefschetz number of the
endomorphism $T$ of the $d''$-complex of $F$:
\[L(T)=\sum(-1)^q\,\mbox{trace}\,H^q T|_{H^q(X;F)}.\]
Then
\(L(T)=\sum_{P=f(P)}\nu_{P}\), where
\[\nu_{P}={{\mbox{trace}_{{\Bbb C}}
\phi_{P}}\over{\mbox{det}_{{\Bbb C}}(1-df_{P})}}.
\]
\end{thm}
(Recall that since $P$ is a fixed point, $\phi_{P}$ and
$df_{P}$ are
endomorphisms of $F_{P}$ and $T_{P}X$ respectively.)
\subsection{Application}
I apply this to the case where $X=X_\Box$, $L=L_\Box$ and
$f: X \to X$ is given by the action of a non-trivial element $t\in T_N$. The
fixed points are simple and are given by
$P_{\alpha}=\mbox{orb}(\sigma_{\alpha})\in
U_{\sigma_{\alpha}}$, for $\alpha\in{\rm ext}\,\Box$.
The bundle homomorphism $\phi_{t}: t{}^{\ast} L\to L$ is given by
the action of $t^{-1}$ (recall that $T_N$ acts on line bundles).
The cohomology groups are all zero, except $H^0(X_\Box,L_\Box)$ which is isomorphic
to the subspace ${\Bbb C}[M]_\Box$ of ${\Bbb C}[M]$ determined by $\Box$.
In this context, the Lefschetz number is an element of ${\Bbb C}[M]$ and the indexes
$\nu_\alpha=\nu_{P_\alpha}$ are elements of ${\Bbb C}(M).$ (As we shall see in
section~\ref{sec:laurentexpansions}, they are characteristic functions for the
tangent cones to $\Box$.)
\begin{lemma}
\label{lemma:Laction}
We have
\[{\rm trace}\,(\phi_{t})_{P_\alpha} = \alpha(t).\]
\end{lemma}
\begin{proof}
Recall (equation
(\ref{eq:actionL})), that $t$ acts on the fibres of $L$ over
$U_{\sigma}$ by
multiplication by ${\bf e}(-l_{\sigma})(t)$, where
$l_{\sigma}$ are
the elements of $M$ corresponding to $L$ as in
\ref{subsub:LBPLF}. In the present case, at a fixed point
$P_\alpha\in U_{\alpha}$ we have
$l_{\sigma_{\alpha}}=\alpha$, so $\phi_t$ acts
by ${\bf e}(-\alpha)(t^{-1})=\alpha(t)$.
\end{proof}
In the case of a basic polytope $\Box$ in $M$, applying
Theorem \ref{thm:lefschetz} directly one obtains:
\begin{thm}
\label{thm:non-sing}
For a basic simple convex polytope $\Box$ in $M$, we have
\begin{equation}\label{eq:sum-non-sing}
\sum_{m\in\Box\cap M}m(t)=\sum_{\alpha\in {\rm ext}\,\Box}
\nu_\alpha(t)
\end{equation}
where
\begin{equation}\label{eq:nu-non-sing}
\nu_\alpha(t)=
{\alpha(t)
\over
(1-{\lambda_{\alpha}^1}(t))
\cdots
(1-{\lambda_{\alpha}^n}(t))},
\end{equation}
and the vectors
$\ntup{\lambda_{\alpha}^1}{\lambda_{\alpha}^n}$ are the
edge vectors of $\Box$ at $\alpha$.
\end{thm}
\begin{proof}
The decomposition of $H^0(X_\Box;{\cal O}_{X_\Box}(L_\Box))$ given by
equation (\ref{eq:coho}) shows that the left-hand side of
equation
(\ref{eq:sum-non-sing}) is equal to the
Lefschetz number of the endomorphism induced by the
action of $t$.
Lemma \ref{lemma:Laction} and Proposition
\ref{prop:XBaction}
yield equation (\ref{eq:nu-non-sing}).
\end{proof}
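Theorem \ref{thm:non-sing} is easy to test numerically. The following minimal
Python sketch (an illustration only, with the vertex and edge data of the unit
square written out by hand) checks equation (\ref{eq:sum-non-sing}) at a
generic point $t=(t_1,t_2)$ of the torus:
\begin{verbatim}
# Fixed point formula for the unit square [0,1]^2:
#   sum over lattice points of m(t)  =  sum over vertices of
#   alpha(t) / prod_i (1 - lambda_i(t)),  at generic t = (t1, t2).
t1, t2 = 0.37 + 0.41j, -0.53 + 0.29j

def char(m):                      # m(t) = t1^m1 * t2^m2
    return t1 ** m[0] * t2 ** m[1]

# (vertex, edge vectors at that vertex), written out by hand.
data = [((0, 0), [(1, 0), (0, 1)]),
        ((1, 0), [(-1, 0), (0, 1)]),
        ((0, 1), [(1, 0), (0, -1)]),
        ((1, 1), [(-1, 0), (0, -1)])]

lhs = sum(char((m1, m2)) for m1 in (0, 1) for m2 in (0, 1))
rhs = sum(char(a) / ((1 - char(l1)) * (1 - char(l2)))
          for a, (l1, l2) in data)

assert abs(lhs - rhs) < 1e-12
\end{verbatim}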
\subsection{The Lefschetz Fixed-Point Theorem for Orbifolds}
In \cite{kawasaki} the Lefschetz formula is generalised to
orbifolds (also known as V-manifolds), using zeta-function
techniques.
As I do not need the full power of this approach, I
present an alternative, more elementary argument.
The Lefschetz fixed-point formula is essentially local in
nature, the formula for the multiplicities $\nu_\alpha$ only
involving
the properties of $f$ and $\phi$ at the point $P_\alpha$.
This fact is
clearly apparent in Atiyah and Bott's proof in \cite{ab:lefI}
(see their
remarks at the beginning of section 5, and Proposition 5.3).
To extend the
formula to orbifolds, it is sufficient therefore to extend it
to global
quotient spaces, of the form $X=X'/K$.
\begin{prop}
\label{prop:Lef-Quotient}
Suppose that a finite abelian group $K$ acts on a smooth
manifold $X'$ and equivariantly on a
holomorphic bundle $F'$ over $X'$. Let $f':X'\to X'$ and
$\phi':f'{}^{\ast} F'\to F'$ be as in Theorem \ref{thm:lefschetz},
and suppose
they are $K$-equivariant. Denote by $L'(T')$ the Lefschetz
number of the
corresponding endomorphism $T'$ of the $d''$-complex of
$F'$. Because of the
$K$-equivariance, we can
define $X=X'/K$, $f:X\to X$, $F=(F')^K$, $\phi:f{}^{\ast} F\to F$
and the
corresponding Lefschetz number
\[L(T)=\sum(-1)^q{\rm trace}\, H^qT|_{H^q(X;F)}.\]
Then we have
\begin{equation}\label{eq:L-Quotient}
L(T)={1\over{|K|}}\sum_{k\in K}L'(k\circ T).\end{equation}
\end{prop}
\begin{proof}
Note that since $T$ determines an endomorphism of the primed complex, it
makes sense to write
$L'(T)$. The claim then follows by applying the following
easy lemma of linear
algebra, recalling that $H^q(X;F)$ is just the $K$-invariant
part of
$H^q(X';F')$.
\end{proof}
\begin{lemma}
Suppose we have a linear action of a
finite abelian group $K$ on a finite-dimensional vector space
$V$, commuting with an endomorphism $T$ of $V$. Denote by
$V^K$
the $K$-invariant subspace of $V$. Then $T$ is an
endomorphism
of $V^K$ and we have
$${\rm trace}\, T|_{V^K}={1\over{|K|}}\sum_{k\in K}
{\rm trace}\, (k\circ T)|_V .$$
\end{lemma}
\begin{proof}
Define $P$ to be the following endomorphism of $V$:
$$Pv = {1\over{|K|}}\sum_{k\in K} k\cdot v.$$
Then $P^2=P$, so $P$ is the projection $V\to V^K$. Since $T$
commutes with $P$, it follows that $T$ respects the
decomposition
$V=V^K\oplus \ker P.$
Furthermore we have $${\rm trace}\, T|_{V^K}={\rm trace}\, TP|_V =
{\rm trace}\,
PT|_V ,$$
so the result follows.
\end{proof}
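The lemma is easily checked in a small case. A minimal Python sketch (my own
illustration, with $K={\Bbb Z}/2$ acting on ${\Bbb C}^2$ by swapping coordinates):
\begin{verbatim}
import numpy as np

# K = Z/2 acting on C^2 by swapping the coordinates; T commutes with K.
swap = np.array([[0, 1], [1, 0]], dtype=complex)
eye = np.eye(2, dtype=complex)
T = np.array([[2, 1], [1, 2]], dtype=complex)

# The invariant subspace V^K is spanned by (1, 1), on which T acts by 3.
average = 0.5 * (np.trace(eye @ T) + np.trace(swap @ T))
assert abs(average - 3.0) < 1e-12
\end{verbatim}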
Now, given a general orbifold $X$, at each point $P\in X$,
choose a {\em local model\/} $(U'_P,f'_P,K_P,L'_P)$ as follows:
Let $U_P$ be an $f$-invariant neighbourhood of $P$ in $X$ and
$U'_P$ be a smooth cover with an action of a finite group $K_P$,
free away from $P$, such that $U_P=U'_P/K_P$. Thus $X$ has a quotient
singularity of type $K_P$ at $P$. Let $f'_P:U'_P\to U'_P$ be a
$K_P$-equivariant lifting of $f|_{U_P}$. A line bundle $L$ over $X$ is
understood to be an invertible sheaf $L$ over $X$ such that for any $P\in X$
with local model $(U'_P,f'_P,K_P)$, there exists a line bundle $L'_P\to U'_P$
such that $L|_{U_P}=(L'_P)^{K_P}$.
With these definitions, our remarks at the beginning of the
section and Proposition
\ref{prop:Lef-Quotient} imply the following:
\begin{thm}
\label{thm:lefschetz-orbifold}
Let $X$ be a compact complex orbifold, $F$ a
holomorphic
vector bundle over $X$, $f:X\to X$ a holomorphic map with
simple
fixed points and $\phi:f{}^{\ast} F\to F$ a holomorphic bundle
homomorphism. Let $L(T)$ be the Lefschetz number of the
endomorphism $T$ of the $d''$-complex of $F$:
\[L(T)=\sum(-1)^q\,\mbox{trace}\,H^q T|_{H^q(X;F)}.\]
Then
\(L(T)=\sum_{P=f(P)}\nu_{P}\), where
\[\nu_{P}={1\over |K_P|} \sum_{k\in K_P} {{\mbox{trace}_{{\Bbb C}}
(k\circ\phi'_{P})}\over{\mbox{det}_{{\Bbb C}}(1-(k\circ df')_{P})}},
\]
and $\phi', f'$ are lifts for $\phi, f$ respectively, in the same
spirit as that of the local models above.
\end{thm}
\subsection{Singular Case}
Suppose that $\Box$ is not basic relative to $M$ at
$\alpha$. Then $X=X_\Box$ has a singularity at the point $P_\alpha$. Let
$C_\alpha$ be the cone of $\Box$ at $\alpha$ and let $\sigma_\alpha$ be the
dual cone.
\begin{dfn}
\label{dfn:dual-edge-vectors}
I define the {\em dual edge vectors for $\Box$ at\/} $\alpha$ to be the
primitive generators of the cone $\sigma_\alpha$ in $N$. When $\sigma_\alpha$
is not basic, the dual edge
vectors do not generate $N$ as a group, but instead a
sublattice $N'_\alpha$ of $N$ of finite index, which I call the {\em dual edge
lattice for $\Box$ at\/} $\alpha$.
\end{dfn}
The cone $\sigma_\alpha$ is basic with respect to
$N'_\alpha$, and the corresponding variety $X'_\alpha=
X_{\sigma_\alpha,N'_\alpha}$ is smooth at
$P_\alpha$. By Proposition
\ref{prop:quotient}, the map
$X'_\alpha \to X_\alpha = X_{\sigma_\alpha,N}$ is the quotient map by the
action of the
finite abelian group
$K_\alpha=N/N'_\alpha\cong\hom_{{\Bbb Z}}(M'_\alpha/M,{\Bbb Q}/{\Bbb Z})$. Here $M'_\alpha$
is the dual of $N'_\alpha$ and is naturally a superlattice of $M$. There is a
unique pairing $M'\times N \to {\Bbb Q}/{\Bbb Z}$ which extends the pairings $M\times N
\to {\Bbb Z}$ and $M'\times N'\to {\Bbb Z}$. We then use the morphism ${\Bbb Q}/{\Bbb Z} \to {\Bbb C}^\times$
given by the exponential map to identify $K_\alpha$ with
$\hom_{{\Bbb Z}}(M'_\alpha/M,{\Bbb C}^\times)$. If
we
identify $k\in K$ with the morphism $k:M'_\alpha\to{\Bbb Q}/{\Bbb Z}$ such that
$k(M)=0$, the action is given by
\begin{equation}
\label{eq:Kaction}
k\cdot u'(m')=\exp(2\pi i\ip{k}{m'})u'(m'),
\end{equation}
for $u'\in U_{\sigma}'$. Since the invariant part of $M'_\alpha$ under $K_\alpha$ is
$M$, the line bundles $L_\alpha$ and $L'_\alpha$ over $X_\alpha$ and
$X'_\alpha$ defined by the polytope $\Box$ are related by
$L_\alpha=L_{\alpha}'^K$. Equation (\ref{eq:coho}) shows that the cohomology of
$L_\alpha$ can be identified with the $K_\alpha$-invariant part of that of
$L_\alpha'$.
In summary, $(U'_\alpha, t, K_\alpha, L'_\alpha)$ is a local model for $X$ at
$P_\alpha$. Applying the Lefschetz formula for orbifolds, one deduces:
\begin{thm}
\label{thm:formula-sing}
For a simple convex polytope $\Box$ in $M$, we have
\begin{equation}
\label{eq:lef-fns}
\sum_{m\in\Box\cap M}m(t)=\sum_{\alpha\in {\rm ext}\,\Box}
\nu_\alpha(t)
\end{equation}
where
\begin{equation}\label{eq:nu-sing}
\nu_\alpha(t)= {1\over{|K_{\alpha}|}}\sum_{k\in K_{\alpha}}
{\alpha(t)
\over
(1-e_k(\lambda_{\alpha}^{\prime 1})
\lambda_{\alpha}^{\prime 1}(t))
\cdots
(1-e_k(\lambda_{\alpha}^{\prime n})
\lambda_{\alpha}^{\prime n}(t))},
\end{equation}
and we write $e_k(\lambda)$ for $\exp(2\pi i{\ip{k}{\lambda}}).$ Here, the
vectors $\ntup{{\lambda_{\alpha}^{\prime
1}}}{\lambda_{\alpha}^{\prime n}}$ are the edge vectors of
$\Box$ at $\alpha$ in the dual $M'_\alpha$ of the dual edge lattice $N'_\alpha$
of definition \ref{dfn:dual-edge-vectors}, and $K_\alpha$ is the finite abelian
group $N/N'_\alpha$ acting according to equation (\ref{eq:Kaction}).
\end{thm}
\section{Laurent Expansions}
\label{sec:laurentexpansions}
In this section I expand the rational functions $\nu_\alpha$
away from their poles, i.e., in the domains where
$|\lambda_\alpha^i(t)|$ is not $1$, for $i=1,\dots,n$.
This has two benefits.
Firstly, it produces another formula which does not involve sums over roots of
unity. We shall use this in calculating the number of lattice points and the
volume.
Secondly it leads us to interpret the formula as a combinatorial statement,
decomposing the (characteristic polynomial for the) polytope $\Box$ as an
algebraic sum of the (characteristic series for the) cones $C_\alpha$ for each
extreme point. Ultimately this could be used to prove the formula using
elementary convex geometric reasoning. We don't attempt this here, as Ishida
has already reduced the proof to the contractibility of convex sets
\cite{ishida}.
We begin with some general remarks about characteristic series for convex cones.
\subsection{Characteristic functions and series for convex cones}
\label{subsec:characteristic}
We recall some notation, following \cite{ishida}. Let $A$ be a commutative ring
with identity. Recall that $A[M]$ denotes the {\em group algebra of $M$\/}
generated by elements ${\bf e}(m)$ for $m\in M$ satisfying relations
${\bf e}(m){\bf e}(m')={\bf e}(m+m')$ and ${\bf e}(0)=1$. We denote by $A(M)$ the
total quotient ring of $A[M]$.
We define $A[[M]]={\rm Map}(M,A)$. Elements $f\in A[[M]]$ can also be
expressed as formal Laurent series $f=\sum_{m\in M} f(m){\bf e}(m)$ and this
defines a $A[M]$-module structure on $A[[M]]$ by:
$${\bf e}(x)(\sum f(m){\bf e}(m)) = \sum f(m-x) {\bf e}(m).$$
The relationship of $A[[M]]$ to $A(M)$ is as follows. To a given element
$\nu\in A(M)$ correspond (possibly) several elements of $A[[M]]$ called the
{\em Laurent expansions\/} of $\nu$. As we see below a convex cone $C$ in $M$
gives rise to elements $\nu^M_C\in A(M)$ and $\chi_{C\cap M}\in A[[M]]$ and the
latter is a Laurent expansion of the former.
\begin{dfn}
For $S$ a subset of $M$, we define the {\em characteristic series of $S$\/} to
be the element $\chi[S]=\chi_S$ of $A[[M]]$ corresponding to the set-theoretic
characteristic function of $S$ (the function which takes values 1 on $S$ and 0
elsewhere), namely to the series $$\chi_S = \sum_{m \in S} {\bf e}(m).$$
\end{dfn}
Let $C$ be a (strongly convex rational simplicial) cone in $M_{\Bbb R}$. We write
${\rm gen}^M_C=\{\lambda_1,\dots,\lambda_n\}$ for the primitive generators in
$M$ of $C$. The unit parallelepiped
$$Q^M_C=\{\sum a_i\lambda_i: 0\leq a_i < 1\}$$ defined by $C$ in $M$ intersects
$M$ in $\{c_1,\dots,c_{k}\}$. Here $k=|K|$, the order of the finite abelian
group which is the quotient of the dual lattice $N$ to $M$ by the lattice
generated by the primitive generators ${\rm
gen}^N_{C{}^{\vee}}=\{\sigma_1,\dots,\sigma_n\}$ of $C{}^{\vee}$ in $N$.
\begin{dfn}
For $C$ strictly convex, we define the {\em characteristic function for $C$
with respect to $M$\/} to be the following element of $A(M)$:
\begin{eqnarray*}
\nu^M_C & = & \sum_{c\in Q^M_C\cap M} {\bf e}(c) \prod_{\lambda\in{\rm gen}^M_C}
(1-{\bf e}(\lambda)){}^{-1}\\
& = & \sum_{j=1}^{|K|} {\bf e}(c_j) \prod_{i=1}^n
(1-{\bf e}(\lambda_i)){}^{-1}.
\end{eqnarray*}
For the translate of a cone $C$ by $\alpha\in M$, we set
$\nu^M_{\alpha+C}={\bf e}(\alpha)\nu^M_C$.
\end{dfn}
Denote by ${\rm PL}_A(M)$ the $A[M]$-submodule of $A[[M]]$ generated by the set
of {\em polyhedral Laurent series\/}: $$\{\chi_{C\cap M} : C \mbox{ a basic
cone in }M_{\Bbb R}\}.$$
Ishida proves the following \cite{ishida}:
\begin{prop}
There exists a unique $A[M]$-homomorphism
$$\varphi:{\rm PL}_A(M) \to A(M)$$
such that $\varphi(\chi_{C\cap M})=\nu^M_C$, for all basic cones $C$ in
$M_{\Bbb R}$.
\end{prop}
Actually, we have:
\begin{prop}
For {\em any\/} cone $C$, $\chi_{C\cap M}\in {\rm PL}_A(M)$ and
$\varphi(\chi_{C\cap M})=\nu^M_C$ for $\varphi$ defined above.
\end{prop}
\begin{proof}
This follows from the remark that any element $m\in C\cap M$ can be expressed
uniquely as $q+\sum x_i \lambda_i$ with $q\in Q^M_C\cap M$ and $x_i\in {\Bbb N}$.
\end{proof}
The existence of $\varphi$ says essentially that we lose no information by
passing from the characteristic function of a cone to its Laurent series, even
though the latter might not always have a well-defined convergence on all of
$T_N$ (in the case $A={\Bbb C}$).
\paragraph{Remark} Whereas Ishida \cite{ishida} uses open cones, we find it
more convenient to use closed ones. The correspondence between the two is of
course that $C\cap M =\cup_{F < C} ({\rm int} F)\cap M$, where the union runs
over the faces of $C$.
\subsubsection{Action of $K$}
The group $K$ acts on $M'$ and hence on $A[[M']]$ by
$$k\cdot f = \sum_{m \in M'} e_k(m)f(m){\bf e}(m),$$ and we have $A[[M]] =
A[[M']]^K$. The following elementary remark gives the relationship between the
characteristic series for $C$ with respect to the two lattices $M$ and $M'$.
\begin{prop}
\label{prop:chi}
For any cone $C$, we have
$$\chi_{C\cap M} = {1\over |K|} \sum_{k\in K} k\cdot \chi_{C\cap M'}.$$
\end{prop}
\begin{proof} Note that $(k\cdot\chi)(m')=e_k(m')\chi(m')$. Since the $e_k$, for
$k\in K$, are nothing but the characters of the finite abelian group $M'/M$, we
have $\sum_{k\in K}e_k(m)=|K|$ for $m\in M$ and $\sum_{k\in K}e_k(m')=0$ for all
$m'\not\in M$. Hence the formula
follows.
\end{proof}
By the uniqueness of $\varphi$ we deduce that the same equality holds between
the characteristic functions of $C$:
\begin{cor} For any cone $C$, we have
$$\nu^M_C={1\over |K|} \sum_{k\in K} k\cdot \nu^{M'}_C.$$
\end{cor}
\subsection{Recovery of Brion's result}
We apply the results of the previous section with $A={\Bbb C}$. Then ${\Bbb C}[M]$ is the
affine coordinate ring for the algebraic torus $T_N$ and its field of
fractions ${\Bbb C}(M)$ is the ring of rational functions on $T_N$.
The Lefschetz formula expresses the characteristic series $\chi_\Box$ of
$\Box$ as a sum of elements of ${\Bbb C}(M)$. The theorem below says that these are
simply the characteristic functions for the tangent cones of $\Box$ at its
extreme points. See \cite[Th\'eor\`eme 2.2]{brion}.
\begin{thm}
\label{thm:formula-sing-b}
Let $\Box$ be a simple convex polytope in $M$. Denote by $C_\alpha$
its tangent cone at $\alpha\in{\rm ext}\,\Box$. Then we have \begin{equation}
\label{eq:lef-fns-b}
\chi_{\Box\cap M}= \sum_{\alpha\in{\rm ext}\,\Box} \nu^M_{C_\alpha}.
\end{equation}
\end{thm}
\begin{proof}
By theorem \ref{thm:formula-sing} we have $\nu_\alpha= {1\over|K|}\sum_{k\in
K} k\cdot\nu^{M'}_{C_\alpha}$, which by the corollary of the previous section
is nothing but $\nu^M_{C_\alpha}$.
\end{proof}
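The simplest instance of the theorem is worth recording explicitly. For the
interval $\Box=[0,N]$ in $M={\Bbb Z}$, with tangent cones $C_0={\Bbb R}_{\ge0}$ and
$C_N=-{\Bbb R}_{\ge0}$ at the two endpoints, equation (\ref{eq:lef-fns-b}) reads
$$1+t+\cdots+t^N={1\over1-t}+{t^N\over1-t^{-1}},$$
an identity of rational functions which one verifies directly by putting the
right-hand side over the common denominator $1-t$.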
\subsection{Laurent expansions of $\nu_C$ and their domains of validity}
We take $A={\Bbb C}$ and give all the different possible Laurent expansions of
$\nu^M_C$ for a cone $C$. When we attempt to evaluate these on elements of
$T_N$, these series only converge on certain open subsets, which we specify here.
\subsubsection{The expansions}
We adopt the same notation as in section \ref{subsec:characteristic}. The
primitive generators of $C_\alpha$ are ${\lambda_\alpha}^i$ in $M$ and
$\lambda_\alpha^{\prime i}$ in $M'_\alpha$.
\begin{prop}[Basic Expansion] For $|\lambda_\alpha^{\prime i}(t)|<1$,
$i=1,\dots,n$, we have
\begin{equation}
\label{eq:nu-basic-expansion}
\nu_\alpha(t)=\chi_{\alpha+C_\alpha\cap M}(t).
\end{equation}
\end{prop}
\begin{proof}
Applying the elementary expansion (valid for $|z|<1$)
$$ (1-z){}^{-1}= 1 + z + z^2 + z^3 + \cdots $$
to the individual factors $(1-e_k(\lambda_{\alpha}^{\prime
i})\lambda_{\alpha}^{\prime i}(t)){}^{-1}$ gives:
$$\nu_\alpha(t)=\alpha(t)
{1\over{|K_{\alpha}|}}\sum_{k\in K_{\alpha}}(\sum_{c_1,\dots,c_n=0}^{\infty}
e_k(c\cdot \lambda'_\alpha)
(c\cdot \lambda'_\alpha)(t)),$$
where I have written $c\cdot \lambda'_\alpha$ for $\sum_{i=1}^{n}c_i
\lambda_{\alpha}^{\prime i}$. Since the series is convergent, one has
$$
\nu_\alpha(t) =
\sum_{c_1,\dots,c_n=0}^{\infty} (\alpha+c\cdot \lambda'_\alpha)(t)
{1\over{|K_{\alpha}|}}\sum_{k\in K_{\alpha}} e_k(c\cdot \lambda'_\alpha),$$
and the result follows from the proof of proposition \ref{prop:chi}.
\end{proof}
There are in fact $2^n$ different possible expansions for $\nu_\alpha(t)$
depending on whether we expand about $\lambda_\alpha^{\prime i}(t)=0$ or
$\infty$, each expansion being valid for
$ |\lambda_\alpha^{\prime i}(t)|<1$ or $>1$ respectively.
\paragraph{Notation:} Let $s$ be an $n$-tuple $s\in \{\pm1\}^n$. As a
shorthand, I will write:
\begin{eqnarray*}
\lambda'_\alpha & \stackrel{{\rm def}}{=} & (\lambda_\alpha^{\prime 1}, \dots,
\lambda_\alpha^{\prime n})\\
s\lambda'_\alpha & \stackrel{{\rm def}}{=} & (s_1 \lambda_\alpha^{\prime 1}, \dots,
s_n\lambda_\alpha^{\prime n}).
\end{eqnarray*}
I also write $\langle\lambda'_\alpha\rangle$ for the cone
$\langle\lambda_\alpha^{\prime 1},\dots,\lambda_\alpha^{\prime n}\rangle$.
I define the quantity $s_{-\kern -0.2em}\cdot\lambda'_\alpha$ by:
$$s_{-\kern -0.2em}\cdot\lambda'_\alpha = \sum_{s_i=-1} s_i\lambda_\alpha^{\prime
i}.$$
An element $m\in M'$ defines a region $T_{m}$ of $T_{N'}$ by:
$$T_{m}=\{t\in T_{N'}: |m(t)|<1\}.$$
I also write, for a cone $C$ in $M$,
$$T_C=\{t\in T_{N'} : |m(t)|<1, \forall m\in C\cap M\}.$$
Thus, for example,
$$T_{\langle\lambda'_\alpha\rangle} = T_{\lambda_\alpha^{\prime
1}}\cap\cdots\cap T_{\lambda_\alpha^{\prime n}}.$$
\begin{prop}[General Expansion]
\label{prop:general-exp}
Given $s\in \{\pm1\}^n$, we have, for $t\in T_{\langle
s\lambda'_\alpha\rangle},$
\begin{equation}
\label{eq:nu-general-exp}
\nu_\alpha(t)=(\prod_{i=1}^n s_i)\chi[{\alpha + s_{-\kern -0.2em}\cdot\lambda'_\alpha
+ \langle s\lambda'_\alpha\rangle \cap M}](t).
\end{equation}
\end{prop}
\begin{proof}
In order to expand $\nu_\alpha$ when, for some $i$, we have
$|\lambda_\alpha^{\prime i}(t)|>1,$ I use the other expansion of $(1-z){}^{-1}$,
valid for $|z|>1$:
$$(1-z){}^{-1}= -z -z^2 -z^3 - z^4 - \cdots. $$
The result follows in the same way as the basic expansion. Note that compared
to the basic expansion, the cone whose characteristic series we end up with
undergoes a reflection plus a translation: $\langle\lambda'_\alpha\rangle\cap
M$ becomes $s_{-\kern -0.2em}\cdot\lambda'_\alpha + \langle s\lambda'_\alpha\rangle
\cap M$. This is due to the shift from $1+z+z^2+\cdots$ to
$-z^1-z^2-z^3-\cdots$.
\end{proof}
\subsubsection{Consistency of expansions}
It doesn't make sense to expand all the $\nu_\alpha$ according to
(\ref{eq:nu-basic-expansion}) because the variable $t$ can't satisfy the
condition $ |\lambda_\alpha^{\prime i}(t)|<1$ for all $i$ and $\alpha$. For
one thing, if $\alpha$ and $\beta$ are two extreme vertices of $\Box$ connected
by an edge, we will have
$\lambda_\alpha^{\prime i}=-\lambda_\beta^{\prime j}$ for some $i$ and $j$, so
that $ |\lambda_\alpha^{\prime i}(t)|<1 \iff |\lambda_\beta^{\prime j}(t)|>1$.
If we can find a domain for $t\in T_{N'}$ such that {\em all\/} the expansions
we perform are valid {\em at the same time,} then when we sum up all the
$\nu_\alpha(t)$, all but a finite number of terms in the infinite series
cancel, and we get the characteristic polynomial $\chi_\Box$ evaluated on $t$.
For each $\beta\in{\rm ext}\,\Box$, we choose an element $s^\beta\in \{\pm1\}^n$, and
expand according to (\ref{eq:nu-general-exp}). We require that the set
\begin{equation}
\bigcap_{\beta\in{\rm ext}\,\Box} T_{\langle s^\beta \lambda'_\beta\rangle} = T_{\cup
\{\langle s^\beta \lambda'_\beta\rangle : {\beta\in{\rm ext}\,\Box}\}}
\end{equation}
be non-empty. I turn next to the necessary conditions for this to be so.
\subsubsection{Necessary conditions for a consistent expansion}
\label{subsub:necc-cond}
The above requirement implies, for instance, that if $\lambda^{\prime i}_\alpha
=-\lambda^{\prime j}_\beta$, as happens for adjacent vertices, then
$s^\alpha_i=-s^\beta_j$. This can be thought of graphically as choosing a
direction for each edge of the polytope $\Box$ and sticking to it throughout
the expansion. For each vertex $\alpha$ if the $i$-th edge is pointing into
$\alpha$ then we set $s^\alpha_i=-1$, if it is pointing out, we set
$s^\alpha_i=+1.$
Another necessary condition is that we choose $s^\alpha=(1,1,\dots,1)$ for some
$\alpha\in{\rm ext}\,\Box$. This can be seen easily, if one thinks for a moment of
decomposing $\chi_\Box$ as a sum of characteristic series for cones:
\begin{equation}
\label{eq:sum-chi}
\chi_\Box = \sum_{\beta\in{\rm ext}\,\Box} \pm \chi_{C'_\beta\cap M}
\end{equation}
where the cones $C'_\beta$ are obtained from the tangent cones $C_\beta$
eventually by the `reflection + translation' process prescribed in the general
expansion in proposition \ref{prop:general-exp} and the sign is determined by
the number of reflections specified by $s^\beta$. One of the cones involved
must be $C_\alpha$, for some $\alpha\in{\rm ext}\,\Box$. It will have all of its
edges pointing outwards in the above orientation and will correspond to the
characteristic series $+\chi_{C_\alpha\cap M}$. I will call this the {\em base
vertex\/} for the expansion.
The non-emptiness requirement above then implies that the following condition
on the orientations be satisfied:
\paragraph{Orientation condition} Let $\lambda^{\prime i}_\alpha$ for
$i=1,\dots p$ be any set of edges emanating from $\alpha$ that have been
oriented so that they are {\em all outgoing with respect to $\alpha$.\/ } Then
we require that for all $\beta\neq\alpha,$
\begin{equation}
\label{eq:exp-condition}
\hbox{if }\lambda^{\prime j}_\beta \in \pm \langle\lambda^{\prime
1}_\alpha,\dots,\lambda^{\prime p}_\alpha\rangle \hbox{ then }s^\beta_j=\pm
1.
\end{equation}
In words, this says that if an edge $\lambda^{\prime j}_\beta$ is a linear
combination, all of whose coefficients are of the same sign or zero, of
oriented edges $\lambda^{\prime i}_\alpha$ all going outwards from a given
vertex $\alpha$, then it should be oriented in the direction which includes it
in the cone spanned by these outgoing edges. This is because, if it were
oriented oppositely, it would mean that $T_{\langle\lambda^{\prime
1}_\alpha,\dots,\lambda^{\prime p}_\alpha\rangle} \cap T_{s^\beta_j
\lambda^{\prime j}_\beta} = \emptyset,$ since one cannot have both
$|\lambda^{\prime j}_\beta(t)|<1$ and $|-\lambda^{\prime j}_\beta(t)|<1.$
\subsubsection{Domain of validity of simultaneous expansions}
It is always possible to choose at least one orientation of the edges of $\Box$
which satisfies the orientation condition (\ref{eq:exp-condition}). Suppose we
have chosen such an orientation. For what values of $t\in T_N$ is it valid? In
order to answer this, let us first make some remarks about the regions
$T_C\subset T_N,$ for $C$ a cone in $M$.
To describe $T_C$, it is helpful to decompose $T_N$ as $CT_N\times H$,
corresponding to the Lie algebra decomposition ${\frak t}_{\Bbb C}={\frak t}\oplus
i{\frak t}$. By identifying the second factor in the Lie algebra
decomposition with $N_{\Bbb R}$, we have the exponential map
$$N_{\Bbb R} \stackrel{\exp}{\to} H.$$
\begin{lemma} If $C$ is a cone in $M$, then $T_C$ is given by
$$T_C=CT_N\times\exp(-{\rm int}(C{}^{\vee}))\subset CT_N\times H.$$
\end{lemma}
\begin{proof}
The interior of $C{}^{\vee}$ is the set of $n\in N_{\Bbb R}$ such that $\ip{n}{c}>0$
for all non-zero $c\in C.$ Under the exponential map, $CT_N\times \{-n\}$ with
$n\in{\rm int}(C{}^{\vee})$ corresponds to an orbit on which each non-zero
$c\in C\cap M$ satisfies $|c(t)|=e^{-\ip{n}{c}}<1$.
\end{proof}
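For example, if $N=M={\Bbb Z}$ and $C=\langle1\rangle$, then ${\rm int}(C^\vee)={\Bbb R}_{>0}$
and the lemma identifies $T_C$ with the punctured unit disc
$\{t\in{\Bbb C}^* : |t|<1\}$, which is exactly the region where the characteristic
series $\chi_{C\cap M}(t)=\sum_{m\geq0}t^m$ converges.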
{}From this, we see that
$$\bigcap_{\beta\in{\rm ext}\,\Box} T_{\langle s^\beta \lambda'_\beta\rangle} =
CT_N\times\exp(-{\rm int}(\sigma)),$$
where
\begin{equation}
\label{eq:sigma}
\sigma=\left(\bigcup_{\beta\in{\rm ext}\,\Box} \langle s^\beta
\lambda'_\beta\rangle\right)^\vee.
\end{equation}
If we respect condition (\ref{eq:exp-condition}), we see that
$\bigcup_{\beta\in{\rm ext}\,\Box} \langle s^\beta \lambda'_\beta\rangle$ never
contains a whole subspace, so that $\sigma$ is non-zero. The expansion
determined by the $s^\beta$ for $\beta\in{\rm ext}\,\Box$ is thus valid in the region
$T_\sigma\subset T_N$, with $\sigma$ given by equation (\ref{eq:sigma}).
\subsection{Elementary convex geometric interpretation}
According to the work we have done in the previous sections, one can prove the
extreme point formula as follows:
Begin by orienting the edges of $\Box$ such as to respect condition
(\ref{eq:exp-condition}). This defines a cone (with a sign) for each extreme
vertex, according to proposition \ref{prop:general-exp}, and the algebraic sum
of their characteristic series should yield the characteristic polynomial for
the polytope $\Box$. If one can prove this for one admissible orientation of
the edges of $\Box$, then the formula for the characteristic functions follows
by the existence of Ishida's ${\Bbb C}[M]$-homomorphism in the previous section.
This gives a proof of the formula involving only elementary convex geometry. We
won't bother with this, as Ishida \cite{ishida} already gives a proof which
reduces the problem to the contractibility of convex sets.
Instead we can deduce the following result in convex geometry:
\begin{thm}
\label{thm:chi-decomposition}
For all orientations $\{s^\alpha\}$ of the edges of $\Box$ satisfying the
orientation condition (\ref{eq:exp-condition}) we have
$$\chi_{\Box\cap M} = \sum_{\alpha\in{\rm ext}\,\Box} \pm\chi_{C^s_\alpha\cap M}$$
where $\pm=\prod_i (s^\alpha)_i$ and
$$ C^s_\alpha = \alpha + s_{-\kern -0.2em}\cdot\lambda'_\alpha + \langle
s\lambda'_\alpha\rangle.$$
\end{thm}
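As a one-dimensional sanity check of theorem \ref{thm:chi-decomposition} (with our
reading of the notation $C^s_\alpha$), take $\Box=[0,m]\subset{\Bbb R}$, $M={\Bbb Z}$,
and orient the unique edge from $0$ to $m$. The base vertex $0$ contributes
$+\chi_{\{0,1,2,\dots\}}$, while at $m$ the edge is incoming, so after the
`reflection + translation' process the tangent cone at $m$ contributes
$-\chi_{\{m+1,m+2,\dots\}}$. On the level of characteristic series this is the identity
$${1\over1-t}-{t^{m+1}\over1-t}=1+t+\cdots+t^m=\chi_{[0,m]\cap M}(t),$$
as the theorem predicts.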
\section{Number of Lattice Points and Volume}
In this section I expand the functions $\nu_\alpha(t)$
around $t=1$ and derive formulae for the number of lattice
points and volume of $\Box$.
\subsection{The Number of Lattice Points}
Equation (\ref{eq:lef-fns-b}) expresses an equality between
the finite Laurent polynomial determined by $\Box$ and a sum
of rational functions. When evaluated on $t\in T_N$ with $t\to 1$, the left-hand
side tends to the number of lattice points of $\Box$ whereas on the right-hand
side the rational functions may have poles.
I choose a one-parameter subgroup $\{\exp(s\zeta) : s\in{\Bbb R}\}$ determined by
some element $\zeta$ of the Lie algebra ${\frak t}$ of $CT_N$.
Substituting $\exp(s\zeta)$ for $t$, the formula reduces to an equality between
rational functions of $s$ --- provided I choose a one-parameter subgroup that
does not lie in the singular loci of the $\nu_\alpha$.
\begin{dfn} For short, I call $\zeta\in {\frak t}$ {\em generic\/} if
$\ip\zeta{{\lambda_\alpha}^i}\ne0$, for all $i$ and $\alpha$. (This is indeed the case
generically).
\end{dfn}
For generic $\zeta$, the functions $\nu_{\alpha,\zeta}^\Box: s \mapsto
\nu_\alpha^\Box(e^{s\zeta})$
can be expanded in Laurent series:
$$ \nu_{\alpha,\zeta}^\Box(s)=\sum_{i=-\infty}^{\infty}
\nu_{\alpha,\zeta,i}^\Box s^i,
$$
and the limit of their sum as $s\to 0$ is
given by the sum of the constant terms
$\nu_{\alpha,\zeta,0}$ in each expansion.
Denote by $C_\alpha$ the tangent cone of $\Box$ at $\alpha\in{\rm ext}\,\Box$, and by
$\lambda^i_\alpha$ for $ i=1,\dots,n$, its primitive generators in $M$. The
semi-open unit parallelepiped determined by the generators of $C_\alpha$ in $M$
is denoted
\begin{equation}
\label{eq:Qalpha}
Q_\alpha=Q^M_{C_\alpha}=\{\sum a_i{\lambda_\alpha}^i: 0\leq a_i < 1\}.
\end{equation}
We have
$$\nu^\Box_{\alpha,\zeta}(s)= {\sum_{q\in Q_\alpha\cap M}
e^{s\ip{\zeta}{\alpha+q}}
\over
(1-e^{s\ip\zeta{{\lambda_\alpha}^1}})\cdots(1-
e^{s\ip\zeta{{\lambda_\alpha}^n}} )},$$
provided $\ip\zeta{{\lambda_\alpha}^i}\ne0$.
The zeroth order term in the expansion of $\nu^\Box_{\alpha,\zeta}(s)$ is a
homogeneous function of $\zeta$ of degree zero. Writing $\nu^\Box_{\alpha,\zeta}(s)$ as
$$ {\sum_{q\in Q_\alpha\cap M} e^{s\ip{\zeta}{\alpha+q}}\over s^n\prod_i
(-\ip\zeta{{\lambda_\alpha}^i})}{\prod_i
(-s\ip\zeta{{\lambda_\alpha}^i})\over\prod_i(1-\exp(s\ip\zeta{{\lambda_\alpha}^i}))}
$$
and extracting the coefficient of $s^0$ gives
$${1\over\prod_i \ip\zeta{{\lambda_\alpha}^i}}
\sum_{j=0}^n {(-1)^j \over j!} \sum_{q\in Q_\alpha\cap M}
\ip{\zeta}{\alpha+q}^j {\cal T}_{n-j}(\ip\zeta{\lambda_\alpha}),
$$
where ${\cal T}_k $ are the {\em Todd polynomials}, homogeneous polynomials of
degree $k$ whose coefficients can be expressed in terms of the Bernoulli
numbers \cite{hirz}. They are defined by the formal series
$$\sum_{k=0}^\infty s^k {\cal T}_k(x_1,x_2,\dots) = \prod_{i\geq 1}
{sx_i\over{1-\exp(-sx_i)}}.
$$
By ${\cal T}_k(\ip\zeta{\lambda_\alpha})$ I mean
${\cal T}_k(\ip\zeta{{\lambda_\alpha}^1},\dots,\ip\zeta{{\lambda_\alpha}^n})$.
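For concreteness, the first few Todd polynomials are
$${\cal T}_0=1,\qquad {\cal T}_1={1\over2}\sum_i x_i,\qquad
{\cal T}_2={1\over12}\Bigl(\sum_i x_i^2+3\sum_{i<j}x_ix_j\Bigr),$$
as one checks by expanding each factor as
$sx/(1-\exp(-sx))=1+{sx\over2}+{(sx)^2\over12}+\cdots$ and collecting powers of $s$.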
\begin{thm}\label{thm:number}
Let $\Box$ be a simple convex lattice polytope.
Denote by $C_\alpha$ the tangent cone of $\Box$ at $\alpha\in{\rm ext}\,\Box$, and by
$\lambda^i_\alpha,$ for $ i=1,\dots,n$, the primitive generators of $C_\alpha$
in $M$.
The semi-open unit parallelepiped determined by the generators of $C_\alpha$ in
$M$ as in equation (\ref{eq:Qalpha}) is denoted $Q_\alpha$. Then, for generic
$\zeta\in {\frak t}$, the number of lattice points in $\Box$ is given by
$$\sum_{\alpha\in{\rm ext}\, \Box}{1\over\prod_i \ip\zeta{{\lambda_\alpha}^i}}
\sum_{j=0}^n {(-1)^j \over j!} \sum_{q_\alpha\in Q_\alpha\cap M}
\ip{\zeta}{\alpha+q_\alpha}^j {\cal T}_{n-j}(\ip\zeta{\lambda_\alpha}).$$
\end{thm}
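As a quick check of theorem \ref{thm:number} in dimension one, take
$\Box=[0,m]\subset{\Bbb R}$ with $M={\Bbb Z}$. At $\alpha=0$ the tangent cone is
generated by $\lambda_0=1$, at $\alpha=m$ by $\lambda_m=-1$, and in both cases
$Q_\alpha\cap M=\{0\}$. Writing $\ip\zeta{\lambda_0}=\zeta$ and
$\ip\zeta{\lambda_m}=-\zeta$, and using ${\cal T}_0=1$, ${\cal T}_1(x)=x/2$,
the formula gives
$${1\over\zeta}\,{\cal T}_1(\zeta)-{1\over\zeta}\Bigl({\cal T}_1(-\zeta)-\ip\zeta{m}\,{\cal T}_0\Bigr)
={1\over2}+{1\over2}+m=m+1,$$
which is indeed the number of lattice points of $[0,m]$.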
\paragraph{Remark 1} It might be more convenient in some cases to subdivide the
tangent cone into non-singular cones. One obtains a similar formula (see
\cite[Th\'eor\`eme 3.1]{brion}).
\paragraph{Remark 2} Putting
$t=\exp(s\zeta)$ corresponds to considering the Lefschetz number for the action
of the one-parameter subgroup $G_\zeta$ of $CT_N$ generated by $\zeta\in{\frak t} =
{\rm Lie }\,CT_N$. Generically this has a dense orbit, and therefore the same
fixed points on $X$ as the whole real torus $CT_N$, and so the Lefschetz
formula for $G_\zeta$ is the same as that obtained by substituting
$\exp(s\zeta)$ for $t$.
This is not true of course when $\ip\zeta{{\lambda_\alpha}^i}=0$, for some $i$ and $\alpha$.
Indeed in that case the group $G_\zeta$ has whole circles of fixed points.
Restricting to $G_\zeta$ corresponds to projecting the vertices and edges of
$\Box$ onto the hyperplane in $M_{\Bbb R}$ defined by the form $\zeta\in
N_{\Bbb R}$.
\subsection{The Volume}
\subsubsection{The ``Classical Limit''}
In the introduction I mentioned the fact that for larger and larger
polytopes (or finer and finer lattices) the number of points is
asymptotically equal to their volume --- I call this ``the classical limit'' by
analogy with the limit $\hbar\to 0$ in quantum mechanics. More precisely, for
any $n$-dimensional polytope $\Box$, the volume of $\Box$ is given by
\begin{equation}
\label{eq:vol_lim}
{\rm vol}_n(\Box)=\lim_{k\to\infty}{\#(k{}^{-1} M\cap
\Box)\over k^n} = \lim_{k\to\infty}{\#(M\cap k\Box)\over k^n}.
\end{equation}
Indeed \cite{mac:poly}, the function
$$H_{\Box}(k)=\#(k{}^{-1} M\cap \Box)=\#(M\cap (k\Box))$$
is a polynomial of degree $n$, for $k\in{\Bbb N}$, with leading
coefficient ${\rm vol}_n(\Box),$ and is called the {\em
Hilbert polynomial\/} for $\Box$. The polynomial $H_{\Box}$ is in fact equal to
the {\em Hilbert polynomial\/} $H_{(X_\Box,L_\Box)}$ for the pair $(X_\Box,L_\Box)$, namely
$$H_{(X_\Box,L_\Box)}(k)=\chi(X_\Box,{\cal O}_{X_\Box}(kL_\Box))=
\sum(-1)^i\dim H^i(X_\Box,{\cal O}_{X_\Box}(kL_\Box)).$$
This follows from equation (\ref{eq:coho}) and because taking tensor powers
$L_\Box^{\otimes k}$ of $L_\Box$ corresponds to taking multiples $kN$ of $N$, and
hence submultiples $k{}^{-1} M$ of $M$.
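For instance, for $\Box=[0,m]\subset{\Bbb R}$ one has
$H_\Box(k)=\#(M\cap k\Box)=km+1$, a polynomial of degree $1$ in $k$ whose leading
coefficient is ${\rm vol}_1(\Box)=m$, in agreement with (\ref{eq:vol_lim}).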
\begin{thm}
\label{thm:volume} Let $\Box$ be a simple convex lattice polytope and adopt the
same notation as theorem \ref{thm:number}. Let $|K_\alpha|$ denote the order of
the singularity of $\Box$ at $\alpha$. Then for generic $\zeta$ the volume of
$\Box$ is given by
$${\rm vol}_n(\Box)= {(-1)^n\over n!}
\sum_{\alpha\in{\rm ext}\,\Box}{\ip\zeta{\alpha}^n |K_\alpha| \over
\ip\zeta{{\lambda_\alpha}^1}\cdots\ip\zeta{{\lambda_\alpha}^n}}.$$
\end{thm}
\begin{proof} The formula follows by extracting the coefficients
of the $k^n$ terms in theorem \ref{thm:number} applied to the
polytope $k\Box$. Note that ${\rm ext}\, k\Box=k({\rm ext}\,\Box)$ and that
$C^{k\Box}_{k\alpha}=C^\Box_\alpha.$ Note that the order $|K_\alpha|$ of the
singularity at $\alpha$ is equal to the cardinality of $Q_\alpha\cap M$. See
\cite[Corollaire 2]{brion}.
\end{proof}
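To illustrate theorem \ref{thm:volume}, let $\Box=[0,m]^2\subset{\Bbb R}^2$, so that
all four vertices are non-singular and $|K_\alpha|=1$. The vertex $(0,0)$ contributes
nothing, and for generic $\zeta=(\zeta_1,\zeta_2)$ the remaining vertices give
$${\rm vol}_2(\Box)={1\over2!}\left({(m\zeta_1)^2\over-\zeta_1\zeta_2}
+{(m\zeta_2)^2\over-\zeta_1\zeta_2}
+{m^2(\zeta_1+\zeta_2)^2\over\zeta_1\zeta_2}\right)
={m^2\over2}\cdot{2\zeta_1\zeta_2\over\zeta_1\zeta_2}=m^2,$$
as expected.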
\subsubsection{The Riemann-Roch approach}
The volume of $\Box$ appears if one uses the same geometric approach based on
the $d''$-complex but directly applies the Riemann-Roch theorem, instead of
computing the Lefschetz number for the action of $t\in T$ and then letting
$t\to1$.
The Riemann-Roch theorem expresses the Euler characteristic of a
holomorphic vector bundle $E$ over a complex manifold $X$ in terms of
characteristic classes of $E$ and of (the tangent bundle of) $X$:
\begin{equation}
\chi(X,E)=\{\hbox{ch}(E)\cdot{\cal T}(X)\}[X],
\end{equation}
where ch$(E)$ and ${\cal T}(X)$ are the Chern character of $E$
and the Todd class of $X$, respectively. If $E$ has rank $n$ and
$c_1,\dots,c_n$ denote the characteristic classes of $E$ then the {\em Chern
character} can be defined by the power series
$$\sum_{i=1}^n e^{x_i}= n+\sum x_i+{\sum x_i^2\over 2!}+\cdots,$$
where the $c_i$ are to be thought of formally as the elementary symmetric
functions in the $x_i$.
Since $L_\Box$ is a line bundle and $c_1(L_\Box)$ is represented by
the K\"ahler form $\omega$, the Chern character is given by
$${\rm
ch}(L_\Box)=1+\omega+{\omega^2\over2!}+{\omega^3\over3!}+\dots+{\omega^n\over
n!}.$$
The {\em Todd class} is a polynomial in the characteristic
classes $c'_i$ of the tangent bundle of $X$. If the $c'_i$ are regarded
formally as the elementary symmetric functions of the $x'_i$ (as in the case
above), the Todd class can be expressed as
$${\cal T}(X)=\prod_i {x'_i\over 1-e^{-x'_i}}.$$
(Presumably, there is some relationship between these and the Todd polynomials
of theorem \ref{thm:number} which in this case exhibits the Riemann-Roch
formula as the ``classical limit'' of the Lefschetz fixed point formula.)
By multiplying the two series, selecting the terms of order $n$, and evaluating
them on
$[X]$, we get
$$\chi(X,L_\Box) ={\rm vol}_n(X)+ \hbox{\em lower order terms},$$
where the ``lower order terms'' are terms involving powers of $\omega$ of order
less than $n$. Again, because refining the lattice $M$ corresponds to
multiplying $\omega$, we see that $\chi(X,kL_\Box)$ is given asymptotically by
${\rm vol}_n(X)k^n$.
\section{Introduction}
The beginning of the XXI century has brought a great deal of excitement and whole new perspectives to both Mathematical Relativity and Lorentzian geometry. In particular, the first detection of gravitational waves \cite{LIGO} and the latest observations of black holes \cite{BH} have boosted the interest of exploring geometric tools that adapt well to non-smooth settings.
In the context of Lorentzian geometry, the search for an axiomatic approach to the most remarkable aspects of relativity ---such as causality--- can be traced back to the fundamental work of Kronheimer and Penrose \cite{KP}. In their approach, the main features of causality theory could be established from a small set of axioms rather than deduced from the smooth geometric structure of spacetime. Thanks to Penrose's insight, such idealizations have provided effective foundations to deal with situations where smoothness is not required, such as in the study of quantum gravity \cite{Surya} and more recently in the novel field of Lorentzian length spaces \cite{Kunzinger}.
On the other hand, we have witnessed in the past decade a renewed interest in synthetic geometric methods, arising from their extensive use in scenarios where tools stemming from differential geometry are not available. Indeed, several classical results from Riemannian geometry have been extended to the more general scope of length spaces. Basically, a length space is a metric space where the distance between any two points can be approximated by the lengths of curves joining them. Remarkably, for geodesic length spaces we are able to define a synthetic notion of curvature by comparing geodesic triangles with triangles in a suitable space form of constant curvature (see \cite{Bridson,Burago,Plaut,Shiohama} and references therein). The first attempts to establish a comparison theory for Lorentzian manifolds date back to the work of Harris in the proof of Toponogov's Splitting Theorem \cite{Harris}. In their seminal work \cite{Kunzinger}, Kunzinger and S\"amann developed a synthetic notion of (timelike) curvature bounds in a Lorentzian non-smooth context and used it to explore the nature of singularities. More recently, in \cite{Felix} we find the first detailed study of Lorentzian comparison theory, as well as further developments and techniques in the context of Lorentzian pre-length spaces. Moreover, in \cite{FS} the main analytic tools related to comparison theorems (such as properties of the exponential map) are laid out, along with fundamental results pertaining to hyperbolic angles, most notably, the triangle inequality. In the present paper we lay out the basic tools to deal with comparisons in Lorentzian length spaces in a synthetic manner analogous to the well-established theory of Alexandrov or CAT spaces, as well as some global results. In particular, we discuss three equivalent notions of curvature bounds and a first variation formula for Lorentzian length spaces with timelike curvature bounded from below by $0$. We hope these results will prove helpful in the endeavor of applying synthetic geometric methods in Relativity.
The paper is organized as follows: in section \ref{sec:pre} we set the main definitions pertaining to Lorentzian length spaces and fix the notation we will be using throughout this work. In section \ref{sec:comp} we recall the definition of bounded timelike curvature. After reviewing in section \ref{sec:nonnorm} the notion of non-normalized angle due to Alexander and Bishop \cite{Bishop}, we establish the equivalence of timelike curvature bounds and Alexandrov's convexity property in section \ref{sec:Alex}. In section \ref{sec:Toponogov} we define the notion of angle in Lorentzian pre-length spaces and use it to discuss the relation between timelike curvature bounds and the local Lorentzian version of Toponogov's property. Finally, in section \ref{sec:firstvar} we prove a global first variation formula for Lorentzian length spaces with non-negative timelike curvature bounds.
\section{Preliminaries}\label{sec:pre}
Throughout this section we will recall some basic notions in the context of Lorentzian length spaces as defined in \cite{Kunzinger}. The first ingredient consists of an axiomatic formulation of causality, close in spirit to the original definition of causal spaces first proposed in \cite{KP}.
\begin{definition}\label{defi:prel}
A \emph{Lorentzian pre-length space} is a quintuple $(X,d,\ll,\leq, \tau)$ where
\begin{enumerate}
\item $(X,d)$ is a metric space,
\item $\leq$ is a pre-order,
\item $\ll$ is a transitive relation contained in $\leq$,
\item $\tau:X\times X \to [0,\infty]$ is a lower semi-continuous function satisfying
\begin{itemize}
\item $\tau(x,z)\geq \tau(x,y) + \tau(y,z)$ for all $x\leq y \leq z$,
\item $\tau(x,y)>0$ if and only if $x\ll y$.
\end{itemize}
\end{enumerate}
\end{definition}
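The motivating example is, of course, the smooth one: $\mathbb{R}^2_1$ with the Euclidean distance $d$, the standard chronological and causal relations, and $\tau(p,q)$ defined as the Lorentzian length of the segment joining $p$ to $q$ whenever $p\leq q$ (and $\tau(p,q)=0$ otherwise) satisfies all the conditions of Definition \ref{defi:prel}; this is a special case of the smooth spacetime examples treated in \cite{Kunzinger}.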
If a pair of points satisfies $x\ll y$ ($x\le y$) we say that $x$ and $y$ are \emph{chronologically} (\emph{causally}) \emph{related}, respectively. \emph{Chronological} (\emph{causal}) \emph{future} and \emph{past sets} $I^+(x)$, $I^-(x)$ ($J^+(x)$, $J^-(x)$) are thus defined in the standard way. The function $\tau$ is called a \emph{time separation} function.
Lorentzian pre-length spaces have just enough structure to establish some of the most basic facts of causality theory. Most notable are the so-called \emph{push-up property} (if $x\ll y\le z$ or $x\le y\ll z$ then $x\ll z$; indeed, in that case $\tau(x,z)\geq \tau(x,y)+\tau(y,z)>0$) and the openness of the chronological sets $I^\pm (x)$.
A curve $\gamma:[a,b]\to X$ that is not constant on any subinterval of $[a,b]$ is called \emph{future-directed timelike} (\emph{causal}) if $\gamma$ is locally Lipschitz continuous with respect to $d$ and whenever $s,t\in [a,b]$ with $s<t$ then $\gamma(s)\ll \gamma(t)$ ($\gamma(s)\leq \gamma(t)$). Such curve is \emph{future-directed null} if it is causal and no pair of points on the curve are timelike related. Past directed curves are defined similarly.
In order to have a sensible notion of length of causal curves, we rely on the time separation function. Thus, we define the $\tau-$\emph{length} of a future-directed causal curve $\gamma$ as
\[
L_{\tau}(\gamma)= \inf\left\{ \displaystyle \sum_{i=0}^{n-1} \tau(\gamma(t_i),\gamma(t_{i+1})) : a=t_0<t_1<\cdots <t_n = b, n\in\mathbb{N} \right\}.
\]
where the infimum is taken over all possible partitions of $[a,b]$; note that, by the reverse triangle inequality, refining a partition can only decrease the sum, so the infimum is the natural choice. A future-directed causal curve $\gamma$ is \emph{maximal} if $L_{\tau}(\gamma)=\tau(\gamma(a),\gamma(b))$. Maximal curves are a cornerstone of synthetic geometry, as they are the closest analogs to geodesics.
Essentially, a \emph{Lorentzian length space} $(X,d,\ll,\leq, \tau)$ is a Lorentzian pre-length space with a local structure that resembles the one provided by normal neighborhoods and whose time separation function can be recovered from the $\tau$-length of curves. The former is achieved through the notion of \emph{localizing neighborhoods}: that is, neighborhoods $\Omega_x$ around each point $x\in X$ furnished with relations $\le_{\Omega_x}$, $\ll_{\Omega_x}$ and continuous functions $\omega_x:\Omega_x\times\Omega_x\to [0,\infty)$ such that
\begin{enumerate}
\item $(\Omega_x,d\vert_{\Omega_x\times\Omega_x},\ll_{\Omega_x},\leq_{\Omega_x}, \omega_x)$ is a Lorentzian pre-length space.
\item $I^\pm (y)\cap \Omega_x\neq\emptyset$, for all $y\in\Omega_x$.
\item All causal curves contained in $\Omega_x$ have uniformly bounded $d$-length.
\item For all $p\neq q\in \Omega_x$ with $p\le q$ there exists a future causal curve $\gamma_{pq}$ contained in $\Omega_x$ such that $L_\tau(\gamma_{pq})=\omega_x(p,q)$ and whose $\tau$-length is maximal among all future causal curves from $p$ to $q$ lying in $\Omega_x$.
\end{enumerate}
The precise definition reads as follows (refer to Definition 3.22 of \cite{Kunzinger}).
\begin{definition}\label{defi:lls}
A Lorentzian pre-length space for which:
\begin{enumerate}
\item Every point $x$ has a localizing neighborhood $\Omega_x$.
\item Every point has a neighborhood in which the $\ll$ is closed.
\item If $x\le y$ then there exists a future causal curve from $x$ to $y$.
\item $\tau (x,y)=\mathcal{T}(x,y)$ for all $x,y\in X$, where
\[
\mathcal{T}(x,y) = \sup\{L_{\tau}(\gamma): \gamma\mbox{ is a future-directed causal curve from $x$ to $y$}\},
\]
with the convention that $\mathcal{T}(x,y)=0$ when the set of future-directed causal curves from $x$ to $y$ is empty,
\end{enumerate}
is called a \emph{Lorentzian length space}.
\end{definition}
A causality theory for Lorentzian length spaces can be developed in a way that resembles the classical theory for spacetimes. In particular, a causal hierarchy can be established with the notion of global hyperbolicity at the top. Just as in the classical smooth case, a causal Lorentzian length space is \emph{globally hyperbolic} if the causal diamonds $J^+(x)\cap J^-(z)$ are compact. In this case, the time separation function $\tau : X\times X\to [0,\infty]$ is continuous and finite. Moreover, $(X,d,\ll,\leq, \tau)$ satisfies the Avez-Seifert property: for any pair of causally related points $x\le y$ there exists a maximal future causal curve $\gamma$ from $x$ to $y$ \cite{ACS,Kunzinger}.
\section{Triangle Comparison}\label{sec:comp}
At the core of synthetic geometry is the notion of triangle comparison. In a nutshell, the main idea is that curvature bounds can be recovered locally from comparisons of the most basic geometric objects (lengths and angles) with respect to those found in a two-dimensional model space. As in the Alexandrov case, geodesic triangle comparison is the key to describing curvature in the Lorentzian context. This is achieved by looking at timelike triangles in the Lorentzian space forms of constant sectional curvature $k$. We denote these models by
\[
\mathbb{M}_{k}^{L} = \left\{
\begin{array}{ll}
\mathbb{S}_{1}^{2}(r) & k=\frac{1}{r^2} \\
\mathbb{R}_{1}^{2} & k=0 \\
\mathbb{H}_{1}^{2}(r) & k=-\frac{1}{r^{2}}
\end{array}
\right.,
\]
where $\mathbb{S}_{1}^{2}(r)$ is the simply connected cover of two-dimensional de Sitter space, $\mathbb{R}_{1}^{2}$ is the two-dimensional Minkowski space and $\mathbb{H}_{1}^{2}(r)$ is the simply connected cover of the two-dimensional anti de Sitter space.
Notice that in all these cases there exist restrictions on the lengths of the sides of a triangle akin to the triangle inequality in Euclidean geometry. In fact, since we will be dealing with triangles whose sides are unique maximizing timelike segments, the occurrence of conjugate points along geodesics hinders the possibility of having such triangles with arbitrarily long side lengths. These restrictions are described in the Realizability Lemma (Lemma 4.6 of \cite{Kunzinger} or Lemma 2.1 in \cite{Bishop}), and if the side lengths of a triangle satisfy them, we say that such a triangle obeys \emph{timelike size bounds for} $k$. Roughly speaking, for $k=0$ the restriction is given by the reverse triangle inequality, while for $k<0$ we have in addition that the greatest side should be less than $\pi /\sqrt{-k}$. A timelike geodesic triangle in a model space $\mathbb{M}_{k}^{L}$ whose vertices satisfy $x\ll y\ll z$ will be denoted by $\triangle xyz$.
\begin{definition}
A \emph{timelike geodesic triangle} $(x,y,z)$ in a Lorentzian length space $(X,d,\ll,\leq, \tau)$ is a triple of points in $X$ satisfying $x\ll y\ll z$ such that $\tau(x,z)<\infty$. Its \emph{sides} are maximal future-directed causal curves $\alpha$ from $x$ to $y$, $\beta$ from $y$ to $z$ and $\gamma$ from $x$ to $z$, hence
\[
L_{\tau}(\alpha)=\tau(x,y), \quad L_{\tau}(\beta)=\tau(y,z), \quad L_{\tau}(\gamma)=\tau(x,z).
\]
A \emph{comparison triangle} for the geodesic triangle $(x,y,z)$ is a triangle $\triangle\bar{x}\bar{y}\bar{z}$ in a model space $\mathbb{M}_{k}^{L}$ with geodesic segments $\bar{\alpha}$, $\bar{\beta}$, $\bar{\gamma}$ joining its vertices, whose lengths equal those of $\alpha$, $\beta$, $\gamma$, respectively\footnote{Notice that comparison triangles are unique up to an isometry.}. In other words
\[
\bar{\tau} (\bar x,\bar y) =\tau (x,y),\quad \bar{\tau} (\bar y,\bar z) =\tau (y,z), \quad \bar{\tau} (\bar x,\bar z) =\tau (x,z).
\]
where $\bar{\tau}$ is the time separation function in the model space $\mathbb{M}_{k}^{L}$. If a point $q$ lies on a side of a timelike geodesic triangle, we denote by $\bar{q}$ the point lying on the corresponding side of the comparison triangle such that $\bar{\tau}(\bar{p},\bar{q})=\tau (p,q)$, where $p\in\{x,y\}$ is the initial point of the side.
\end{definition}
\begin{figure}[h]
\centering{
\includegraphics[scale=.3]{Diapositiva1.png}
\caption{A timelike geodesic triangle and its comparison triangle in $\mathbb{M}_{k}^{L}$}
}
\end{figure}
We now state the original definition of timelike curvature bound for Lorentzian length spaces as established in \cite{Kunzinger}.
\begin{definition}\label{defi:curvbounds}
A Lorentzian pre-length space $(X,d,\ll,\leq,\tau)$ is said to have \emph{timelike curvature bounded below by} $k\in\mathbb{R}$ if around any $p\in X$ there exists a neighborhood $U\ni p$ with the following properties:
\begin{enumerate}
\item[(i)] $\tau\vert_{U\times U}$ is finite and continuous.
\item[(ii)] For every $x\ll y$ in $U$ there exists a causal curve $\alpha$ in $U$ with $L_{\tau}(\alpha)=\tau(x,y)$.
\item[(iii)] For any timelike geodesic triangle $(x,y,z)$ in $U$, realized by maximal curves $\alpha$, $\beta$, $\gamma$ whose side lengths satisfy timelike size bounds for $k$, the following holds: if $\triangle\bar{x}\bar{y}\bar{z}$ is a comparison triangle for $(x,y,z)$ in $\mathbb{M}_{k}^{L}$ realized by timelike geodesics $\bar{\alpha}$, $\bar{\beta}$ and $\bar{\gamma}$, then whenever $p$, $q$ are points on the sides of $(x,y,z)$ and $\bar{p},\bar{q}$ are the corresponding points in $\triangle\bar{x}\bar{y}\bar{z}$, we have
\[
\tau(p,q)\leq \bar{\tau}(\bar{p},\bar{q}).
\]
If under the same hypothesis the alternative inequality
\[
\tau(p,q)\ge \bar{\tau}(\bar{p},\bar{q}).
\]
holds, we will say that $(X,d,\ll,\leq,\tau)$ has \emph{timelike curvature bounded above by} $k$. The neighborhood $U$ is called a \emph{comparison neighborhood for} $p$.
\end{enumerate}
\end{definition}
\begin{figure}[h]
\centering{
\includegraphics[scale=.3]{Diapositiva2.png}
\caption{Timelike curvature bounded from below by $k$}
}
\end{figure}
In the interest of a lighter presentation, we single out the technical conditions (i) and (ii) of Definition \ref{defi:curvbounds} in the following definition:
\begin{definition}\label{defi:curvneigh}
Let $(X,d,\ll,\leq,\tau)$ be a Lorentzian pre-length space. We say that $x\in X$ has a \emph{compatible neighborhood} $U$ if
\begin{enumerate}
\item[(i)] $\tau\vert_{U\times U}$ is finite and continuous.
\item[(ii)] For every $p\ll q$ in $U$ there exists a causal curve $\alpha$ in $U$ with $L_{\tau}(\alpha)=\tau(p,q)$.
\end{enumerate}
\end{definition}
\begin{remark}\label{GeodesicPropertiesU}
Let $U$ be a compatible neighborhood in a Lorentzian pre-length space $(X,d,\ll,\leq,\tau)$.
\begin{enumerate}
\item In virtue of Proposition 3.34, Remark 4.3 and Remark 4.8 in \cite{Kunzinger}, any maximal timelike curve in $U$ can be parametrized by arc length. Furthermore, any intermediate value of $\tau$ along $\alpha$, $\beta$ or $\gamma$ is attained. In particular, all of these properties hold when $(X,d,\ll,\leq,\tau)$ is a globally hyperbolic Lorentzian length space.
\item Let $(x,y,z)$ be a timelike geodesic triangle in $U$, realized by maximal causal curves $\alpha$, $\beta$, $\gamma$ whose side lengths satisfy timelike size bounds for $k$. If we take points $p\in \alpha$, $q\in \beta$, $r\in\gamma$ then the sides of triangle $(p,y,q)$ also satisfy timelike size bounds for $k$. Moreover, if $p\ll r$, then the sides of triangle $(x,p,r)$ satisfy timelike size bounds for $k$ and so do the sides of triangle $(r,q,z)$ provided $r\ll q$.
\end{enumerate}
\end{remark}
As can be readily seen, Definition \ref{defi:curvbounds} formalizes the intuitive notion that in the presence of positive (negative) timelike curvature, triangles look fatter (thinner) than flat triangles. In fact, this formulation agrees with the well known notions in metric geometry of Alexandrov (curvature bounded from below) and CAT (curvature bounded from above) spaces. Moreover, it is consistent with the definition of curvature bounds for semi-Riemannian manifolds proposed in \cite{Bishop}. However, there is a catch: according to \cite{Bishop}, a Lorentzian manifold having timelike curvature bounded from \emph{below} by $k$ (as a Lorentzian length space) has sectional curvature on timelike planes bounded from \emph{above} by $k$. A similar statement holds if the words below and above are interchanged.
Among the first applications of the synthetic notion of curvature in Lorentzian length spaces described in \cite{Kunzinger} we find the description of curvature singularities, a topic of great interest in the realm of Relativity. A Lorentzian length space has \emph{timelike curvature unbounded from below} (\emph{above}) if there exists a compatible neighborhood that fails to be a comparison neighborhood for all $k\in\mathbb{R}$. While this definition can be used to spot singularities (for instance, the interior region of Schwarzschild spacetime is a Lorentzian length space with timelike curvature unbounded from below), its use often requires comparisons in the large, which are at odds with the intuitive local character of curvature. As an example, a timelike funnel along a timelike curve $\lambda$ has timelike curvature unbounded from below (see Examples 3.19 and 4.21 in \cite{Kunzinger}), but both $I^-(p)$ and $I^+(q)$ are flat open subsets of it, hence Lorentzian length spaces in their own right with timelike curvature bounded ---both from below and from above--- by $0$.
Here we present a novel example of a globally hyperbolic Lorentzian length space with arbitrarily small compatible neighborhoods in which no comparison with any model space $\mathbb{M}_{k}^{L}$ is possible. Hence, in spite of being at the top of the causal ladder, any open subset of it has timelike curvature unbounded both from above and from below.
\begin{example}
Let us consider $(X,d)=(\mathbb{R}^2,d_T)$ where $d_T$ is the taxicab metric
\[
d_T((x_1,y_1),(x_2,y_2))=\vert x_1-x_2\vert +\vert y_1-y_2\vert ,
\]
and let the relations $\ll_T$, $\leq_T$ be the usual chronological and causal relations in Minkowski space $\mathbb{R}^2_1$. Furthermore, for $(x_1,y_1),(x_2,y_2)\in\mathbb{R}^{2}$ we define
\[
\tau_T((x_1,y_1),(x_2,y_2)) = \left\{
\begin{array}{ll}
y_2-y_1-\vert x_2-x_1\vert & \mbox{if $(x_1,y_1)\leq (x_2,y_2)$} \\
0 & \mbox{otherwise}
\end{array}
\right.
\]
\end{example}
We first show that $\mathbb{R}^{2,1}_T=(\mathbb{R}^{2},d_T,\ll_T,\leq_T,\tau_T)$ ---dubbed the \emph{Lorentzian taxicab space}--- is a globally hyperbolic Lorentzian length space.
A straightforward computation shows that $\tau_T$ satisfies the causal properties of a time separation as described in Definition \ref{defi:prel}; the reverse triangle inequality is spelled out below. Since $d_T$ is equivalent to the standard Euclidean metric, the topology induced by $d_T$ is Euclidean, and as a consequence $\tau_T$ is continuous. Moreover, the classes of (Lipschitz) causal curves in $\mathbb{R}^{2,1}_T$ and $\mathbb{R}^2_1$ coincide, which in turn implies that the causal diamonds of $\mathbb{R}^{2,1}_T$ are just the standard causal diamonds of $\mathbb{R}^{2}_1$. Compactness of the causal diamonds follows, and thus $\mathbb{R}^{2,1}_T$ is a globally hyperbolic pre-length space.
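For instance, the reverse triangle inequality for $\tau_T$ reduces to the triangle inequality for the absolute value: if $(x_1,y_1)\leq_T(x_2,y_2)\leq_T(x_3,y_3)$, then $\vert x_3-x_1\vert\leq\vert x_2-x_1\vert+\vert x_3-x_2\vert$ yields
\[
\tau_T((x_1,y_1),(x_3,y_3))=y_3-y_1-\vert x_3-x_1\vert \geq \tau_T((x_1,y_1),(x_2,y_2))+\tau_T((x_2,y_2),(x_3,y_3)).
\]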
Now we focus on the requirements of Definition \ref{defi:lls}. Global hyperbolicity implies causal connectivity and causal closedness. Moreover, given any point $(x,y)\in \mathbb{R}^{2,1}_T$ consider the $d_T$ open ball centered at $(x,y)$
\[
\Omega_{(x,y)} = \{(p,q)\in \mathbb{R}^{2}: \vert p-x\vert +\vert q-y\vert <1\},
\]
and $\omega_{(x,y)}=\tau_T\vert_{\Omega_{(x,y)}}$. Conditions (1) and (2) of the definition of a localizing neighborhood are immediate. In order to show (3), let $\gamma:[a,b]\to \mathbb{R}^{2,1}_T$, $\gamma(t)=(\gamma_1(t),\gamma_2(t))$, be a future causal curve in $\Omega_{(x,y)}$ and take a partition $a=t_1<t_2<\cdots < t_N=b$. Since $\gamma_2(t_{i+1})- \gamma_2(t_i)\geq \vert\gamma_1(t_i)-\gamma_1(t_{i+1})\vert$ we have
\[
\begin{array}{rcl}
\displaystyle\sum_{i=1}^{N-1} d_T(\gamma(t_i),\gamma(t_{i+1})) &=& \displaystyle\sum_{i=1}^{N-1} \vert\gamma_1(t_i)-\gamma_1(t_{i+1})\vert + \vert\gamma_2(t_i)-\gamma_2(t_{i+1})\vert \\
&\leq& \displaystyle\sum_{i=1}^{N-1} 2(\gamma_2(t_{i+1})- \gamma_2(t_i))\leq 4.
\end{array}
\]
Thus, the $d_T$ arc-length of curves is bounded.
Even though $\mathbb{R}^2_1$ and $\mathbb{R}^{2,1}_T$ share topology and causality, their geodesic structures are rather different. As opposed to the Minkowski case, the Lorentzian taxicab space admits infinitely many maximal curves joining any pair of causally related points. Indeed, given $(x_1,y_1)\le_T (x_2,y_2)$, take any future causal curve $\gamma$ joining them and a partition as above, and assume further that $\gamma_1$ is a monotone function. Thus
\[
\begin{array}{rcl}
\displaystyle\sum_{i=1}^{N-1} \tau_T(\gamma(t_i),\gamma(t_{i+1})) &=& \displaystyle\sum_{i=1}^{N-1} \gamma_2(t_{i+1})-\gamma_2(t_{i})-\vert\gamma_1(t_{i+1})-\gamma_1(t_{i})\vert \\
&=& \displaystyle\sum_{i=1}^{N-1} \left(\gamma_2(t_{i+1})-\gamma_2(t_{i})\right) - \left\vert \displaystyle\sum_{i=1}^{N-1} \gamma_1(t_{i+1})-\gamma_1(t_{i})\right\vert \\
&=& \gamma_2(b)-\gamma_2(a)-\vert\gamma_1(b)-\gamma_1(a)\vert \\
&=& \tau_T((x_1,y_1),(x_2,y_2)),
\end{array}
\]
Hence $L_{\tau_T}(\gamma)=\tau_T((x_1,y_1),(x_2,y_2))$, and
\[
\tau_T((x_1,y_1),(x_2,y_2)) = L_{\tau_{T}}(\gamma) \leq \mathcal{T}((x_1,y_1),(x_2,y_2)) \leq \tau_T((x_1,y_1),(x_2,y_2)),
\]
which shows that $\gamma$ is maximal. As an immediate consequence, condition (4) of the definition of localizing neighborhoods holds and $\mathcal{T}=\tau_T$. Thus $\mathbb{R}^{2,1}_T$ is a globally hyperbolic Lorentzian length space.
\begin{figure}[ht]
\centering{
\includegraphics[scale=.3]{Diapositiva3.png}
\caption{Infinitely many maximal causal curves joining causally related points in $\mathbb{R}^{2,1}_T$.}
}
\end{figure}
We now apply directly the definition of timelike curvature bounds to show that there exist arbitrarily small neighborhoods of $\mathbb{R}^{2,1}_T$ admitting no curvature bounds. We focus first on the case $k=0$. Let $\varepsilon>0$ and consider the triangle $(x,y,z)$ in $\mathbb{R}^{2,1}_T$ whose vertices are $x=(0,0)$, $y=(-2\varepsilon,3\varepsilon)$, $z=(\varepsilon,7\varepsilon)$, and whose sides are the linear segments connecting them. Moreover, consider the comparison triangle $\triangle\overline{x}\overline{y}\overline{z}$ in Minkowski space given by $\overline{x}=(0,0)$, $\overline{y}=(\sqrt{10}\varepsilon ,3\varepsilon)$ and $\overline{z}=(0,6\varepsilon)$.
Further let $q=( {\varepsilon}/{4},{13\varepsilon}/{2} )$ and notice it belongs to the segment joining $y$ with $z$. Its corresponding point in the comparison triangle $\triangle\overline{x}\overline{y}\overline{z}$ is $\overline{q}=( \sqrt{10}\varepsilon/4, {21\varepsilon}/{4} )$. Then
\[
\tau_T(x,q) =\displaystyle\frac{25\varepsilon}{4} > \displaystyle\frac{\sqrt{431}\varepsilon}{4} =
\overline{\tau}(\overline{x},\overline{q})
\]
On the other hand, set $q=( -{5\varepsilon}/{4},4\varepsilon )$ on the segment from $y$ to $z$ and its corresponding point $\overline{q}=( {3\sqrt{10}\varepsilon}/{4}, {15\varepsilon}/{4} )$. Thus
\[
\tau_T(x,q) = \frac{11\varepsilon}{4}< \frac{\sqrt{135}\varepsilon}{4} =
\overline{\tau}(\overline{x},\overline{q})
\]
Therefore, none of the curvature conditions of Definition \ref{defi:curvbounds} hold in a neighborhood $U$ containing a triangle isometric to $(x,y,z)$.
We can use the same choice of triangle $(x,y,z)$ and the points $q$ chosen above in order to obtain similar inequalities in the model spaces $\mathbb{M}_k^L$ with $k\neq 0$, provided that $\varepsilon$ is small enough so that $(x,y,z)$ satisfies timelike size bounds.
\begin{remark}
The above example admits a straightforward generalization: consider a metric space $(X,d_X)$ and a Lorentzian pre-length space $(Y,d_Y,\ll_Y,\leq_Y,\tau_Y)$, and take the metric $d:(X\times Y)\times (X\times Y)\to \mathbb{R}_{\geq 0}$ defined as
\[
d((a,b),(x,y)) = d_X(a,x) + d_Y(b,y).
\]
Let us set $\ll$, $\leq$ and $\tau:(X\times Y)\times (X\times Y)\to [0,\infty]$ defined as follows:
\begin{itemize}
\item $(a,b)\leq (x,y)$ if and only if $\tau_Y(b,y)\geq d_X(a,x)$ and $b\leq y$.
\item $(a,b)\ll (x,y)$ if and only if $\tau_Y(b,y)> d_X(a,x)$.
\item For $(a,b),(x,y)\in X\times Y$ we have
\[
\tau((a,b),(x,y)) = \left\{
\begin{array}{ll}
\tau_Y(b,y) -d_X(a,x) & \mbox{if $(a,b)\leq (x,y)$} \\
0 & \mbox{otherwise}
\end{array}
\right.
\]
\end{itemize}
As can be checked, the \emph{taxicab Lorentzian product} $(X\times Y,d,\ll,\leq,\tau)$ is a Lorentzian pre-length space.
\end{remark}
\section{Non-normalized angles}\label{sec:nonnorm}
According to Euclidean geometry, side lengths and angle measures are the fundamental quantities associated to a triangle. Since the notion of timelike curvature bounds involves length comparison, it is natural to ask if there are alternative formulations involving angle measurements. This is indeed the case for Alexandrov and CAT spaces. Moreover, in the context of semi-Riemannian geometry, an affirmative answer is given in Proposition 2.1 of \cite{Bishop}. Refer to \cite{Kirch} for a thorough analysis of the properties of non-normalized angles.
\begin{definition}
Let $(M,\langle \cdot,\cdot\rangle )$ be a semi-Riemannian model space of curvature $k$. For a geodesic triangle $\triangle xyz$ in $M$ with $\alpha ,\gamma :[0,1]\to M$ geodesics connecting $x$ with $y$, and $x$ with $z$, respectively, we denote $\measuredangle yxz= \langle \alpha'(0), \gamma'(0) \rangle$ and call it the \emph{non-normalized angle} at $x$.
\end{definition}
\begin{remark}
In the scenario depicted above, choose for instance a point $p=\alpha (\lambda)$, other than $x$, on the side $\alpha$. Then it follows that
\[
\measuredangle pxz =\lambda \measuredangle yxz.
\]
Hence $\measuredangle pxz$ and $\measuredangle yxz$, though not equal, only differ by the scaling factor $\lambda$, which justifies the use of the term non-normalized angle.
\end{remark}
In view of the above remark, we can relate the non-normalized angles when the endpoints vary along the sides of a geodesic triangle. We state this relation in the form of a lemma, which will be used often in the following results.
\begin{lemma}\label{RescalAngulo}
Let $\triangle xyz$ be a timelike geodesic triangle in a Lorentzian model space $\mathbb{M}_{k}^{L}$ realized by maximal timelike curves $\alpha :[0,a]\to \mathbb{M}_{k}^{L} $, $\beta :[0,b]\to \mathbb{M}_{k}^{L} $, $\gamma :[0,c]\to \mathbb{M}_{k}^{L} $ whose side lengths satisfy timelike size bounds for $k$. Then, for every $a_0\in[0,a]$, $b_0\in[0,b]$ and $c_0\in[0,c]$ we have
\begin{enumerate}
\item[(a)] $\measuredangle\alpha(a_0)xz = \frac{a_0}{a}\measuredangle yxz$ and $\measuredangle yx\gamma(c_0) = \frac{c_0}{c}\measuredangle yxz$.
\item[(b)] $\measuredangle \beta(b_0)yx = \frac{b_0}{b} \measuredangle zyx$ and $\measuredangle zy\alpha(a_0) = \frac{a-a_0}{a} \measuredangle zyx$.
\item[(c)] $\measuredangle \gamma(c_0)zy = \frac{c-c_0}{c} \measuredangle xzy$ and $\measuredangle xz\beta(b_0) = \frac{b-b_0}{b} \measuredangle xzy$.
\end{enumerate}
\end{lemma}
Two of the main results pertaining to the above notion are the Hinge Lemma (Lemma 2.2 in \cite{Bishop}) and the Straightening Lemma (Lemma 2.4 in \cite{Bishop}). We present these lemmas in a form adapted to our ends; notice that in their original formulation these results are stated using signed distances.
First, notice that the result below agrees completely with its basic Euclidean counterpart.
\begin{lemma}[Hinge Lemma for included angles]\label{HingeLemmaMk}
Let $\triangle x_1y_1z_1$ and $\triangle x_2y_2z_2$ be two timelike geodesic triangles in $\mathbb{M}^{L}_{k}$ whose sides satisfy timelike size bounds for $k$.
\begin{itemize}
\item Suppose $\bar{\tau}(x_1,y_1)=\bar{\tau}(x_2,y_2)$ and $\bar{\tau}(x_1,z_1)=\bar{\tau}(x_2,z_2)$. Then $\bar{\tau}(y_1,z_1)\leq \bar{\tau}(y_2,z_2)$ if and only if $\measuredangle y_1x_1z_1 \leq \measuredangle y_2x_2z_2$.
\item Suppose $\bar{\tau}(x_1,y_1)=\bar{\tau}(x_2,y_2)$ and $\bar{\tau}(y_1,z_1)=\bar{\tau}(y_2,z_2)$. Then $\bar{\tau}(x_1,z_1)\leq \bar{\tau}(x_2,z_2)$ if and only if
$\measuredangle x_1y_1z_1 \leq \measuredangle x_2y_2z_2$.
\item Suppose $\bar{\tau}(x_1,z_1)=\bar{\tau}(x_2,z_2)$ and $\bar{\tau}(y_1,z_1)=\bar{\tau}(y_2,z_2)$. Then $\bar{\tau}(x_1,y_1)\leq \bar{\tau}(x_2,y_2)$ if and only if
$\measuredangle y_1z_1x_1 \leq \measuredangle y_2z_2x_2$.
\end{itemize}
\end{lemma}
\begin{lemma}[Hinge Lemma for shoulder angles]\label{Hinge2}
Let $\triangle x_1y_1z_1$ and $\triangle x_2y_2z_2$ be two timelike geodesic triangles in $\mathbb{M}^{L}_{k}$ whose sides satisfy timelike size bounds for $k$. \begin{itemize}
\item Suppose $\overline{\tau}(x_1,y_1)=\overline{\tau}(x_2,y_2)$ and $\overline{\tau}(x_1,z_1)=\overline{\tau}(x_2,z_2)$. If $\overline{\tau}(y_1,z_1)\leq \overline{\tau}(y_2,z_2)$ then $\measuredangle x_2y_2z_2 \leq \measuredangle x_1y_1z_1$ or $\measuredangle x_2z_2y_2\leq \measuredangle x_1z_1y_1$.
\item Suppose $\overline{\tau}(x_1,y_1)=\overline{\tau}(x_2,y_2)$ and $\overline{\tau}(y_1,z_1)=\overline{\tau}(y_2,z_2)$. If $\overline{\tau}(x_1,z_1)\leq \overline{\tau}(x_2,z_2)$ then $\measuredangle y_2x_2z_2\leq \measuredangle y_1x_1z_1$ or $\measuredangle y_2z_2x_2 \leq \measuredangle y_1z_1x_1$.
\item Suppose $\overline{\tau}(x_1,z_1)=\overline{\tau}(x_2,z_2)$ and $\overline{\tau}(y_1,z_1)=\overline{\tau}(y_2,z_2)$. If $\overline{\tau}(x_1,y_1)\leq \overline{\tau}(x_2,y_2)$ then $\measuredangle x_2y_2z_2\leq \measuredangle x_1y_1z_1$ or $\measuredangle y_2x_2z_2 \leq \measuredangle y_1x_1z_1$.
\end{itemize}
\end{lemma}
As a first application of the notion of non-normalized angles we prove a reformulation of timelike curvature bounds that is better suited for applications. Namely, it suffices to perform triangle comparison when one of the points $p$ or $q$ in Definition \ref{defi:curvbounds} agrees with a vertex, while the other point is chosen on its opposite side.
\begin{proposition} \label{CeviansCriteria}
Let $(X,d,\ll,\leq,\tau)$ be a Lorentzian pre-length space and suppose that for every point of $X$ there exists a compatible neighborhood $U$ with the following property:
for any timelike geodesic triangle $(x,y,z)$ in $U$, realized by maximal timelike curves $\alpha$, $\beta$, $\gamma$ whose lengths satisfy timelike size bounds for $k$, whenever $p$ is a point on one side of $(x,y,z)$ and $v\in\{x,y,z\}$ is the vertex opposite to it, we have
\begin{enumerate}
\item[(i)] $\tau(v,p)\leq \bar{\tau}(\bar{v},\bar{p})$, if $\tau(v,p)>0$.
\item[(ii)] $\tau(p,v)\leq \bar{\tau}(\bar{p},\bar{v})$, if
$\tau(p,v)>0$;
\end{enumerate}
where $\triangle\bar{x}\bar{y}\bar{z}$ is a comparison triangle of $(x,y,z)$ in $\mathbb{M}_{k}^{L}$ with corresponding sides $\bar{\alpha}$, $\bar{\beta}$, $\bar{\gamma}$. Then $(X,d,\ll,\leq,\tau)$ has timelike curvature bounded below by $k$.
\end{proposition}
\begin{proof}
We will prove that $U$ is a comparison neighborhood with respect to $\mathbb{M}_{k}^{L}$. Thus, take two points $p$, $q$ on the sides of triangle $(x,y,z)$, so $\bar{p}$, $\bar{q}$ are the corresponding points on triangle $\triangle\bar{x}\bar{y}\bar{z}$.
We first focus on vertex $y$. Suppose $p\in \beta$ and $q\in\alpha$, so that $q\ll y\ll p$ and in particular $\tau(q,p)>0$. Since $x\ll y\ll p$, the sides of triangle $(x,y,p)$ satisfy timelike size bounds for $k$. Thus, let $\triangle x_1y_1p_1$ be a comparison triangle in $\mathbb{M}_{k}^{L}$ for triangle $(x,y,p)$, and let $q_1$ be the point on the side $x_1y_1$ corresponding to $q$.
\begin{figure}[h!]
\centering{
\includegraphics[scale=.3]{Diapositiva4.png}
\caption{Triangle $(x,y,z)$ and comparison triangles $\triangle \bar{x}\bar{y}\bar{z}$ and $\triangle x_1y_1p_1$.}
}
\end{figure}
By hypothesis we have $\tau(x,p)\leq \bar{\tau}(\bar{x},\bar{p})$. Then
\[
\bar{\tau}(x_1,p_1) = \tau(x,p) \leq \bar{\tau}(\bar{x},\bar{p}),
\]
and therefore by Lemma \ref{HingeLemmaMk} we have $\measuredangle \bar{p}\bar{y}\bar{x}\geq \measuredangle p_1y_1x_1$. On the other hand, using Lemma \ref{RescalAngulo} we deduce that
\[
\measuredangle \bar{p}\bar{y}\bar{q} = \frac{\bar{\tau}(\bar{q},\bar{y})}{\bar{\tau}(\bar{x},\bar{y})} \measuredangle \bar{p}\bar{y}\bar{x} \ \mbox{and} \ \measuredangle p_1y_1q_1 = \displaystyle\frac{\overline{\tau}(q_1,y_1)}{ \overline{\tau}(x_1,y_1)} \measuredangle p_1y_1x_1.
\]
Since $\bar{\tau}(\bar{x},\bar{y}) = \tau(x,y) = \bar{\tau}(x_1,y_1)$ and $\bar{\tau}(\bar{q},\bar{y}) = \tau(q,y) = \bar{\tau}(q_1,y_1)$ we conclude $\measuredangle \bar{p}\bar{y}\bar{q} \geq \measuredangle p_1y_1q_1$. Again, by Lemma \ref{HingeLemmaMk} we have $\bar{\tau}(q_1,p_1) \leq \bar{\tau}(\bar{q},\bar{p})$ and therefore
\[
\tau(q,p) \leq
\bar{\tau}(q_1,p_1) \leq \bar{\tau}(\bar{q},\bar{p}).
\]
Now we look at vertex $x$. Suppose $p\in\alpha$ and $q\in\gamma$. If $\tau(p,q)=0$ then trivially $\tau(p,q)\leq \bar{\tau}(\bar{p},\bar{q})$. So take $\tau(p,q)>0$, which implies $x\ll p \ll q$, so that the sides of triangle $(x,p,q)$ satisfy timelike size bounds for $k$. Let $\triangle x_1p_1z_1$ be a comparison triangle for $(x,p,z)$ and $q_1$ the corresponding point for $q$ in $\triangle x_1p_1z_1$.
\begin{figure}[ht]
\centering{
\includegraphics[scale=.3]{Diapositiva5.png}
\caption{Triangle $(x,y,z)$ and comparison triangles $\triangle \bar{x}\bar{y}\bar{z}$ and $\triangle x_1p_1z_1$.}
}
\end{figure}
Then $\bar{\tau}(p_1,z_1) = \tau(p,z) \leq \bar{\tau}(\bar{p}, \bar{z})$, which implies $\measuredangle p_1x_1z_1 \leq \measuredangle \bar{p}\bar{x}\bar{z}$ because of Lemma \ref{HingeLemmaMk}. Following a similar argument as for vertex $y$ we show that $\measuredangle p_1x_1q_1 \leq \measuredangle \bar{p}\bar{x}\bar{q}$ and therefore $\bar{\tau}(p_1,q_1)\leq \bar{\tau}(\bar{p}, \bar{q})$, again by Lemma \ref{HingeLemmaMk}. On the other hand, applying the hypothesis to the triangle $(x,p,z)$ we have $\tau(p,q) \leq \bar{\tau}(p_1,q_1)$, thus
\[
\tau(p,q) \leq \overline{\tau}(p_1,q_1) \leq \bar{\tau}(\bar{p}, \bar{q}).
\]
Again, the case $\tau(q,p)=0$ is trivial. If $\tau(q,p)>0$ we have $q\ll p\ll y$, and this case follows along the same lines as the case $\tau(p,q)>0$ just analyzed, with the point $y$ playing the role of $x$. Thus $\tau(q,p)\leq \bar{\tau}(\bar{q},\bar{p})$.
Finally, notice that the analysis of vertex $z$ is completely analogous to the one performed on vertex $x$, thus completing the proof.
\end{proof}
\section{Alexandrov's convexity property}\label{sec:Alex}
Let us recall that if two future directed timelike curves $\alpha,\beta :[0,a]\to M$ in a Lorentzian manifold $(M,\langle \cdot,\cdot\rangle )$ meet at a point $p=\alpha (0)=\beta (0)$, then the hyperbolic angle $\varphi$ spanned by $\alpha$ and $\beta$ at $p$ is given by the relation
\[
-\cosh \varphi = \frac{\langle \alpha^\prime (0),\beta^\prime (0)\rangle }{\vert \alpha^\prime (0)\vert \ \vert \beta^\prime (0)\vert}.
\]
In this section we provide analogous formulations adapted to the context of Lorentzian pre-length spaces. The main idea is to construct a function that plays the same role as $-\cosh \varphi$, its monotonicity being the most relevant feature for comparison purposes.
\begin{definition}
Let $\triangle\bar{p}\bar{q}\bar{r}$ be a comparison triangle in $\mathbb{M}_{k}^{L}$ for a timelike geodesic triangle $(p,q,r)$ in a Lorentzian pre-length space $(X,d,\ll,\leq,\tau)$. We define the \emph{comparison angles at} $p$, $q$, $r$ by
\[
\widetilde{\measuredangle}_{k} rpq = \measuredangle \bar{r}\bar{p}\bar{q},\quad \widetilde{\measuredangle}_{k} pqr = \measuredangle \bar{p}\bar{q}\bar{r}, \quad \widetilde{\measuredangle}_{k} qrp = \measuredangle \bar{q}\bar{r}\bar{p},
\]
respectively.
\end{definition}
\begin{definition}\label{preAngle}
Given a compatible neighborhood $U$ in a Lorentzian pre-length space $(X,d,\ll,\leq,\tau)$, take a timelike geodesic triangle $(x,y,z)$ in $U$, realized by maximal causal curves ${\alpha}:[0,a]\to X$, ${\beta}:[0,b]\to X$, ${\gamma}:[0,c]\to X$ satisfying timelike size bounds for $k$. We define the \emph{angle comparison functions} $\theta_{\alpha,\gamma}^{k}:(0,a]\times (0,c]\to \mathbb{R}$, $\theta_{\beta,\alpha}^{k}:(0,b]\times (0,a]\to \mathbb{R}$ and
$\theta_{\gamma,\beta}^{k}:(0,c]\times (0,b]\to \mathbb{R}$ as follows:
\begin{enumerate}
\item $\theta_{\alpha,\gamma}^{k}(s,t) = \displaystyle\frac{\widetilde{\measuredangle}_{k}\alpha(s)x\gamma(t)}{st}$, provided $\alpha(s)\ll \gamma(t)$ or $\gamma(t)\ll \alpha(s)$.
\item $\theta_{\gamma,\beta}^{k}(s,t) = \displaystyle\frac{\widetilde{\measuredangle}_{k}{\gamma}^*(s)z{\beta}^*(t)}{st}$, provided ${\gamma}^*(s)\ll {\beta}^*(t)$ or ${\beta}^*(t)\ll {\gamma}^*(s)$.
\item $\theta_{\beta,\alpha}^{k}(s,t) = \displaystyle\frac{\widetilde{\measuredangle}_{k}\beta(s)y{\alpha}^*(t)}{st}$, for every $(s,t)\in (0,b]\times (0,a]$.
\end{enumerate}
where $\alpha^*:[0,a]\to X$, $\alpha^*(t)=\alpha (a-t)$, denotes the reverse curve of $\alpha$, and similarly for $\beta^*$ and $\gamma^*$.
\end{definition}
It is immediate from the definition that $\theta_{\beta,\alpha}^{k}(s,t)>0$, while $\theta_{\alpha,\gamma}^{k}(s,t)<0$ and $\theta_{\gamma,\beta}^{k}(s,t)<0$.
\begin{remark}
Notice that because the relation $\ll$ is open and $\tau$ is continuous in $U$, we can always find small enough $s,t$ such that $\alpha(s)\ll \gamma(t)$ or $\gamma(t)\ll \alpha(s)$, so the conditions in part (1) of the above definition are met. The same applies to part (2); in part (3) no such condition is needed, since ${\alpha}^*(t)\ll y\ll \beta(s)$ always holds.
\end{remark}
A straightforward computation shows that when we apply Definition \ref{preAngle} to a geodesic triangle in $\mathbb{R}^2_1$, viewed as a Lorentzian pre-length space, the value of $\theta_{\alpha,\gamma}^{0}(s,t)$ is constant ---independent of $(s,t)$--- and $\cosh^{-1}(-\theta_{\alpha,\gamma}^{0})$ is precisely the hyperbolic angle $\varphi$ described above. A similar scenario holds for the functions $\theta_{\gamma,\beta}^{0}$ and $\theta_{\beta,\alpha}^{0}$, and the same is true for the model spaces $\mathbb{M}_{k}^{L}$ with $k\neq 0$. Thus, these functions are natural candidates as comparison functions for Lorentzian length spaces.
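For instance, take $x$ the origin of $\mathbb{R}^2_1$, $\alpha(s)=su$ and $\gamma(t)=tv$ with $u$, $v$ future-directed unit timelike vectors and $su\ll tv$. A timelike geodesic triangle in $\mathbb{R}^2_1$ is its own comparison triangle, so
\[
\widetilde{\measuredangle}_{0}\,\alpha(s)x\gamma(t)=\langle su,tv\rangle = st\,\langle u,v\rangle, \qquad\mbox{hence}\qquad \theta^{0}_{\alpha,\gamma}(s,t)=\langle u,v\rangle=-\cosh\varphi,
\]
independently of $(s,t)$. To show the monotonicity of these functions in general, a lemma is in order.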
\begin{lemma}\label{ComparisonPreLemma}
Let $(X,d,\ll,\leq,\tau)$ be a Lorentzian pre-length space and suppose it has timelike curvature bounded below by $k$. Let $U$ be a comparison neighborhood. Suppose that $(x,y,z)$ is a timelike geodesic triangle in $U$, realized by maximal causal curves $\alpha$, $\beta$, $\gamma$ whose lengths satisfy timelike size bounds for $k$.
\begin{enumerate}
\item For every $(s,t)\in (0,a]\times (0,c]$ such that $\alpha(s)\ll \gamma(t)$ or $\gamma(t)\ll \alpha(s)$ the following inequalities hold:
\begin{enumerate}
\item If $\alpha(s)\ll \gamma(t)$, then for all $s\geq s'$ and $t'\geq t$ we have
\[
\displaystyle\frac{\widetilde{\measuredangle}_{k} \alpha(s)x\gamma(t)}{s} \geq \displaystyle\frac{\widetilde{\measuredangle}_{k} \alpha(s')x\gamma(t)}{s'} \ \ \mbox{and} \ \
\displaystyle\frac{\widetilde{\measuredangle}_{k} \alpha(s)x\gamma(t')}{t'} \geq \displaystyle\frac{\widetilde{\measuredangle}_{k} \alpha(s)x\gamma(t)}{t},
\]
\item If $\gamma(t)\ll \alpha(s)$, then for all $s'\geq s$ and $t\geq t'$ we have
\[
\displaystyle\frac{\widetilde{\measuredangle}_{k} \alpha(s')x\gamma(t)}{s'} \geq \displaystyle\frac{\widetilde{\measuredangle}_{k} \alpha(s)x\gamma(t)}{s} \ \ \mbox{and} \ \
\displaystyle\frac{\widetilde{\measuredangle}_{k} \alpha(s)x\gamma(t)}{t} \geq \displaystyle\frac{\widetilde{\measuredangle}_{k} \alpha(s)x\gamma(t')}{t'},
\]
\end{enumerate}
\item For every $(s,t)\in (0,c]\times (0,b]$ such that ${\gamma}^*(s)\ll {\beta}^*(t)$ or ${\beta}^*(t)\ll {\gamma}^*(s)$ the following inequalities hold:
\begin{enumerate}
\item If ${\gamma}^*(s)\ll {\beta}^*(t)$, then for all $s\geq s'$ and $t'\geq t$ we have
\[
\displaystyle\frac{\widetilde{\measuredangle}_{k} {\gamma}^*(s)z{\beta}^*(t)}{s} \geq \displaystyle\frac{\widetilde{\measuredangle}_{k} {\gamma}^*(s')z{\beta}^*(t)}{s'} \ \ \mbox{and} \ \
\displaystyle\frac{\widetilde{\measuredangle}_{k} {\gamma}^*(s)z{\beta}^*(t')}{t'} \geq \displaystyle\frac{\widetilde{\measuredangle}_{k} {\gamma}^*(s)z{\beta}^*(t)}{t},
\]
\item If ${\beta}^*(t)\ll {\gamma}^*(s)$, then for all $s'\geq s$ and $t\geq t'$ we have
\[
\displaystyle\frac{\widetilde{\measuredangle}_{k} {\gamma}^*(s')z{\beta}^*(t)}{s'} \geq \displaystyle\frac{\widetilde{\measuredangle}_{k} {\gamma}^*(s)z{\beta}^*(t)}{s} \ \ \mbox{and} \ \
\displaystyle\frac{\widetilde{\measuredangle}_{k} {\gamma}^*(s)z{\beta}^*(t)}{t} \geq \displaystyle\frac{\widetilde{\measuredangle}_{k} {\gamma}^*(s)z{\beta}^*(t')}{t'},
\]
\end{enumerate}
\item For every $(s,t)\in (0,b]\times (0,a]$ we have
\[
\displaystyle\frac{\widetilde{\measuredangle}_{k} \beta(s)y{\alpha}^*(t)}{s} \geq \displaystyle\frac{\widetilde{\measuredangle}_{k} \beta(s')y{\alpha}^*(t)}{s'} \ \ \mbox{and} \ \
\displaystyle\frac{\widetilde{\measuredangle}_{k} \beta(s)y{\alpha}^*(t')}{t'} \geq \displaystyle\frac{\widetilde{\measuredangle}_{k} \beta(s)y{\alpha}^*(t)}{t},
\]
for all $s\geq s'$ and $t'\geq t$.
\end{enumerate}
\end{lemma}
\begin{proof}
For (1a), take $s\geq s'$, then $x\ll \alpha(s')\ll \alpha(s) \ll \gamma(t)$ and therefore the sides of triangle $(x,\alpha(s'),\gamma(t))$ satisfy timelike size bounds for $k$. Let $\triangle x_1p_1q_1$ and $\triangle x_2p_2q_2$ be comparison triangles for $(x,\alpha(s),\gamma(t))$ and $(x,\alpha(s'),\gamma(t))$, respectively.
\begin{figure}[ht]
\centering{
\includegraphics[scale=.3]{Diapositiva6.png}
\caption{Triangle $(x,y,z)$ and comparison triangles $\triangle x_1p_1q_1$ and $\triangle x_2p_2q_2$.}
}
\end{figure}
Let $p$ be the corresponding point in triangle $\triangle x_1p_1q_1$ for $\alpha(s')$ and denote $\theta_1= \measuredangle p_1x_1q_1$, $\theta_2 = \measuredangle p_2x_2q_2$. Since $U$ is a comparison neighborhood with respect to $\mathbb{M}_{k}^{L}$ we have
\[
\bar{\tau}(p,q_1) \geq \tau(\alpha(s'),\gamma(t)) = \bar{\tau}(p_2,q_2),
\]
thus by Lemma \ref{HingeLemmaMk} we get $\measuredangle px_1q_1 \geq \measuredangle p_2x_2q_2=\theta_2$. By Lemma \ref{RescalAngulo} we have $\displaystyle\frac{s'}{s}\theta_1 = \measuredangle px_1q_1 \geq \theta_2$. This implies
\[
\displaystyle\frac{\widetilde{\measuredangle}_{k} \alpha(s)x\gamma(t)}{s} \geq \displaystyle\frac{\widetilde{\measuredangle}_{k} \alpha(s')x\gamma(t)}{s'}.
\]
For the second inequality in (1a), fix $t'\geq t$. Then $x\ll \alpha(s) \ll \gamma(t) \ll \gamma(t')$ and the sides of $(x,\alpha(s),\gamma(t'))$ satisfy timelike size bounds for $k$. Let $\triangle x_1p_1q_1$ and $\triangle x_2p_2q_2$ be comparison triangles for triangles $(x,\alpha(s),\gamma(t'))$ and $(x,\alpha(s),\gamma(t))$, respectively.
\begin{figure}[ht]
\centering{
\includegraphics[scale=.3]{Diapositiva7.png}
\caption{Triangle $(x,y,z)$ and comparison triangles $\triangle x_1p_1q_1$ and $\triangle x_2p_2q_2$.}
}
\end{figure}
Denote by $p$ the corresponding point for $\gamma(t)$ in triangle $\triangle x_1p_1q_1$ and take $\theta_1 = \measuredangle p_1x_1q_1$, $\theta_2=\measuredangle p_2x_2q_2$. Just like in the previous case we have
\[
\bar{\tau}(p_1,p) \geq \tau(\alpha(s),\gamma(t)) = \bar{\tau}(p_2,q_2)
\]
and applying Lemma \ref{HingeLemmaMk} we deduce $\measuredangle p_1x_1p \geq \measuredangle p_2x_2q_2 = \theta_2$. Notice $\displaystyle\frac{t}{t'}\theta_1 = \measuredangle p_1x_1p \geq \theta_2$ by Lemma \ref{RescalAngulo}, which implies
\[
\displaystyle\frac{\widetilde{\measuredangle}_{k} \alpha(s)x\gamma(t')}{t'} \geq \displaystyle\frac{\widetilde{\measuredangle}_{k} \alpha(s)x\gamma(t)}{t}.
\]
We now turn to case (1b). If $\gamma(t)\ll \alpha(s)$, then $\gamma(t)\ll \alpha(s')$ for all $s'\geq s$. Let us take the comparison triangles $\triangle x_1q_1p_1$ and $\triangle x_2q_2p_2$ for triangles $(x,\gamma(t),\alpha(s'))$ and $(x,\gamma(t),\alpha(s))$, respectively. Also, the point $p$ is the corresponding one for $\alpha(s)$ in triangle $\triangle x_1q_1p_1$. Thus by the curvature hypothesis we have
\[
\overline{\tau}(q_2,p_2) = \tau(\gamma(t),\alpha(s)) \leq \overline{\tau}(q_1,p).
\]
Therefore
\[
\frac{s}{s'} \widetilde{\measuredangle}_{k} \alpha(s')x\gamma(t) = \frac{s}{s'}\measuredangle p_1x_1q_1 = \measuredangle px_1q_1 \geq \measuredangle p_2x_2q_2 = \widetilde{\measuredangle}_{k} \alpha(s)x\gamma(t),
\]
and hence
\[\displaystyle\frac{\widetilde{\measuredangle}_{k} \alpha(s')x\gamma(t)}{s'} \geq \displaystyle\frac{\widetilde{\measuredangle}_{k} \alpha(s)x\gamma(t)}{s}.\]
On the other hand, if $t\geq t'$ then $\gamma(t')\ll \alpha(s)$. Choose comparison triangles $\triangle x_1q_1p_1$ and $\triangle x_2q_2p_2$ for triangles $(x,\gamma(t),\alpha(s))$ and $(x,\gamma(t'),\alpha(s))$, respectively. Now, the point $p$ on triangle $\triangle x_1q_1p_1$ is the corresponding point for $\gamma(t')$. Since
\[
\overline{\tau}(q_2,p_2) = \tau(\gamma(t'),\alpha(s)) \leq \overline{\tau}(p,p_1),
\]
we obtain
\[
\frac{t'}{t}\widetilde{\measuredangle}_{k} \alpha(s)x\gamma(t) = \frac{t'}{t}\measuredangle p_1x_1q_1 = \measuredangle p_1x_1p \geq \measuredangle p_2x_2q_2 = \widetilde{\measuredangle}_{k} \alpha(s)x\gamma(t').
\]
In conclusion
\[\displaystyle\frac{\widetilde{\measuredangle}_{k} \alpha(s)x\gamma(t)}{t} \geq \displaystyle\frac{\widetilde{\measuredangle}_{k} \alpha(s)x\gamma(t')}{t'}.\]
Now, let us focus on cases (3a) and (3b). If $s\geq s'$ then ${\alpha}^*(t)\ll \beta(s')$. Let us take the comparison triangles $\triangle p_1y_1q_1$ and $\triangle p_2y_2q_2$ for triangles $({\alpha}^*(t), y, \beta(s))$ and $({\alpha}^*(t), y, \beta(s'))$, respectively (here observe that $\tau({\alpha}^*(t), y)=t$ by definition). Let $p$ be the corresponding point for $\beta(s')$ in triangle $\triangle p_1y_1q_1$. Then
\[
\overline{\tau}(p_2,q_2) = \tau({\alpha}^*(t),\beta(s')) \leq \overline{\tau}(p_1,p),
\]
which means
\[
\frac{s'}{s}\widetilde{\measuredangle} \beta(s)y{\alpha}^*(t) = \frac{s'}{s}\measuredangle p_1y_1q_1 = \measuredangle p_1y_1p \geq \measuredangle p_2y_2q_2 = \widetilde{\measuredangle} \beta(s')y{\alpha}^*(t).
\]
Hence
\[\displaystyle\frac{\widetilde{\measuredangle}_{k} \beta(s)y{\alpha}^*(t)}{s} \geq \displaystyle\frac{\widetilde{\measuredangle}_{k} \beta(s')y{\alpha}^*(t)}{s'}.\]
In the case $t'\geq t$ we take the comparison triangles $\triangle p_1y_1q_1$ and $\triangle p_2y_2q_2$ for triangles $({\alpha}^*(t'), y, \beta(s))$ and $({\alpha}^*(t), y,\beta(s))$, respectively (here $\tau({\alpha}^*(t'),y)=t'$ and $\tau({\alpha}^*(t),y)=t$). If $p$ is the corresponding point for ${\alpha}^*(t)$ in triangle $\triangle p_1y_1q_1$, then
\[
\overline{\tau}(p_2,q_2) = \tau({\alpha}^*(t),\beta(s)) \leq \overline{\tau}(p,q_1).
\]
This last inequality implies
\[
\frac{t}{t'}\widetilde{\measuredangle} {\alpha}^*(t')y\beta(s) = \frac{t}{t'}\measuredangle p_1y_1q_1 =\measuredangle py_1q_1 \geq \measuredangle p_2y_2q_2 = \widetilde{\measuredangle} {\alpha}^*(t)y\beta(s),
\]
thus
\[\displaystyle\frac{\widetilde{\measuredangle}_{k} \beta(s)y{\alpha}^*(t')}{t'} \geq \displaystyle\frac{\widetilde{\measuredangle}_{k} \beta(s)y{\alpha}^*(t)}{t}.
\]
Finally, cases (2a) and (2b) are analogous to (1a) and (1b) and the proof is complete.
\end{proof}
In Alexandrov geometry the monotonicity of the angle comparison functions is equivalent to the definition of curvature bounds (see for example Definition 4.3.1 of \cite{Burago} or Section 2.2 of \cite{Shiohama}). Hence, this monotonicity property is referred to as \emph{the local version of the Alexandrov convexity}. We proceed to establish a similar result in the Lorentzian context.
\begin{theorem}[Angle monotonicity] \label{MonotonicityCriterion}
Let $(X,d,\ll,\leq,\tau)$ be a Lorentzian pre-length space and suppose it has timelike curvature bounded below by $k$. Let $U$ be a comparison neighborhood. Suppose that $(x,y,z)$ is a timelike geodesic triangle in $U$, realized by maximal causal curves $\alpha$, $\beta$, $\gamma$ whose lengths satisfy timelike size bounds for $k$. For every $s,s'\in (0,\tau(x,y)]$ and $t,t'\in(0,\tau(x,z)]$ such that $s'\leq s$, $t'\leq t$ and
\begin{enumerate}
\item $\alpha(s)\ll \gamma(t)$ or $\gamma(t) \ll \alpha(s)$,
\item $\alpha(s')\ll \gamma(t')$ or $\gamma(t') \ll \alpha(s')$,
\end{enumerate}
we have the following monotonicity condition
\[
\theta^k_{\alpha,\gamma}(s',t')\leq \theta^k_{\alpha,\gamma}(s,t).
\]
Similar inequalities apply for functions $\theta^{k}_{\gamma,\beta}$ and $\theta^{k}_{\beta,\alpha}$.
\end{theorem}
\begin{proof}
Here we will deal with the case when $\alpha(s)\ll \gamma(t)$ and $\alpha(s')\ll \gamma(t')$ and similar ideas apply for the other cases. Thus, using Proposition \ref{ComparisonPreLemma} we get
\[
t\theta^{k}_{\alpha,\gamma}(s,t) =\displaystyle\frac{\widetilde{\measuredangle}_{k} \alpha(s)x\gamma(t)}{s} \geq \displaystyle\frac{\widetilde{\measuredangle}_{k} \alpha(s')x\gamma(t)}{s'} = t\theta^{k}_{\alpha,\gamma}(s',t),
\]
which implies $\theta^{k}_{\alpha,\gamma}(s,t)\geq \theta^{k}_{\alpha,\gamma}(s',t)$. On the other hand
\[
s'\theta^{k}_{\alpha,\gamma}(s',t) = \displaystyle\frac{\widetilde{\measuredangle}_{k} \alpha(s')x\gamma(t)}{t} \geq \displaystyle\frac{\widetilde{\measuredangle}_{k} \alpha(s')x\gamma(t')}{t'}= s'\theta^{k}_{\alpha,\gamma}(s',t'),
\]
and therefore $\theta^{k}_{\alpha,\gamma}(s',t)\geq \theta^{k}_{\alpha,\gamma}(s',t')$. Finally
\[
\theta^{k}_{\alpha,\gamma}(s,t)\geq \theta^{k}_{\alpha,\gamma}(s',t) \geq \theta^{k}_{\alpha,\gamma}(s',t').
\]
\end{proof}
As in the Alexandrov case, by assuming the monotonicity of $\theta^{k}_{\alpha,\gamma}$, $\theta^{k}_{\gamma,\beta}$ and $\theta^{k}_{\beta,\alpha}$ we get the converse of Theorem \ref{MonotonicityCriterion}.
\begin{theorem}
Let $(X,d,\ll,\leq,\tau)$ be a Lorentzian pre-length space
and $(x,y,z)$ be a timelike geodesic triangle in a compatible neighborhood $U$, realized by maximal timelike curves $\alpha$, $\beta$, $\gamma$ whose lengths satisfy timelike size bounds for $k$. If $\theta^{k}_{\alpha,\gamma}$, $\theta^{k}_{\gamma,\beta}$ and $\theta^{k}_{\beta,\alpha}$ are increasing functions, then $(X,d,\ll,\leq,\tau)$ has timelike curvature bounded below by $k$.
\end{theorem}
\begin{proof}
We rely on Proposition \ref{CeviansCriteria} in order to prove that $(X,d,\ll,\leq,\tau)$ has timelike curvature bounded below by $k$. Suppose $p\in \alpha$; then $p\ll y \ll z$, and let $\bar{p}\in\bar{\alpha}$ be the point such that $\tau(x,p)=\bar{\tau}(\bar{x},\bar{p}) = s$. Let $\triangle x_1p_1z_1$ be a comparison triangle for $(x,p,z)$ in $\mathbb{M}_{k}^{L}$.
\begin{figure}[ht]
\centering{
\includegraphics[scale=.3]{Diapositiva8.png}
\caption{Triangle $(x,y,z)$ and comparison triangles $\triangle \bar{x}\bar{y}\bar{z}$ and $\triangle x_1p_1z_1$.}
}
\end{figure}
Note that
\[
\displaystyle\frac{\measuredangle p_1x_1z_1}{sc} = \displaystyle\frac{\widetilde{\measuredangle}_{k} pxz }{sc} =\theta^{k}_{\alpha,\gamma}(s,c),
\]
and due to the monotonicity of $\theta^{k}_{\alpha,\gamma}$ we conclude
\[
\displaystyle\frac{\measuredangle p_1x_1z_1}{sc} = \theta^{k}_{\alpha,\gamma}(s,c)
\leq \theta^{k}_{\alpha,\gamma}(a,c)
= \displaystyle\frac{\widetilde{\measuredangle}_{k} yxz}{ac}
= \displaystyle\frac{\measuredangle \bar{y}\bar{x}\bar{z}}{ac}.
\]
Therefore $\displaystyle\frac{\measuredangle p_1x_1z_1}{sc} \leq \displaystyle\frac{\measuredangle \bar{y}\bar{x}\bar{z}}{ac}$. On the other hand, by Lemma \ref{RescalAngulo} it follows that
\[
\measuredangle \bar{y}\bar{x}\bar{z} = \frac{a}{s}\measuredangle \bar{p}\bar{x}\bar{z},
\]
thus
\[
\displaystyle\frac{\measuredangle p_1x_1z_1}{sc} \leq \displaystyle\frac{\measuredangle \bar{y}\bar{x}\bar{z}}{ac} = \displaystyle\frac{\measuredangle \bar{p}\bar{x}\bar{z}}{sc},
\]
then $\measuredangle p_1x_1z_1\leq \measuredangle \bar{p}\bar{x}\bar{z}$. Finally, applying Lemma \ref{HingeLemmaMk} in $\mathbb{M}_{k}^{L}$ we get the desired inequality $\tau(p,z)\leq \bar{\tau}(\bar{p},\bar{z})$. All the remaining cases can be worked out similarly.
\end{proof}
\section{Normalized angles in Lorentzian pre-length spaces}\label{sec:Toponogov}
So far, we have been able to find suitable angle comparison functions in our non-smooth context. Now we are ready to define the main notion of our work, namely, normalized angles for Lorentzian pre-length spaces.
\begin{definition}\label{LorentzianAngle}
Following the same assumptions of Definition \ref{preAngle} we set
\[
\measuredangle yxz = \displaystyle\lim_{s,t\to 0} \theta^{k}_{\alpha,\gamma}(s,t), \ \measuredangle xyz = \displaystyle\lim_{s,t\to 0} \theta^{k}_{\beta,\alpha}(s,t)
\]
\[
\mbox{ and } \ \measuredangle xzy = \displaystyle\lim_{s,t\to 0} \theta^{k}_{\gamma,\beta}(s,t).
\]
\end{definition}
\begin{remark}\label{ExistenceAngle}
Notice that the limits for $\measuredangle yxz$ and $\measuredangle xzy$ described above need not exist, even in the presence of lower curvature bounds, since in this case the limits may diverge to $-\infty$. On the other hand, since $\theta^k_{\beta,\alpha}(s,t)$ is positive, monotonicity guarantees that $\measuredangle xyz$ always exists.
Furthermore, as we will prove, all normalized angles exist when the vertices of a geodesic triangle are not conjugate points along their sides. In other words, if in a timelike geodesic triangle the sides can be extended past their vertices as maximizing timelike curves, then the normalized angles exist (refer to Proposition \ref{SumGreaterCero} below). In the next results we will not be assuming this mild extra hypothesis, but rather require the existence of the normalized angles.
\end{remark}
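To fix ideas, we illustrate Definition \ref{LorentzianAngle} with a minimal example in the flat model space $\mathbb{R}^{2}_{1}$, where $\tau(u,v)^2=(\Delta t)^2-(\Delta x)^2$ for $u\ll v$ and every timelike geodesic triangle coincides with its own comparison triangle; by Lemma \ref{RescalAngulo} the comparison functions $\theta^{0}$ are then constant in $(s,t)$, so the limits above exist trivially and agree with the normalized comparison angles. Take $x=(0,0)$, $y=(1,0)$ and $z=(3,1)$, so that $a=\tau(x,y)=1$, $b=\tau(y,z)=\sqrt{3}$ and $c=\tau(x,z)=2\sqrt{2}$. Using the Law of Cosines for $\mathbb{R}^{2}_{1}$ (Lemma \ref{MinkowskiLawCosines} below) we obtain
\[
\measuredangle yxz = \frac{b^2-a^2-c^2}{2ac}=-\frac{3}{2\sqrt{2}}, \quad \measuredangle xyz = \frac{c^2-a^2-b^2}{2ab}=\frac{2}{\sqrt{3}}, \quad \measuredangle xzy = \frac{a^2-b^2-c^2}{2bc}=-\frac{5}{2\sqrt{6}}.
\]
Note that the angle at the middle vertex $y$ is positive, while the angles at $x$ and $z$ are negative, in line with Remark \ref{ExistenceAngle}.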
As an immediate consequence of the definition of normalized angles we have a property called the \emph{local Lorentzian Toponogov property}.
\begin{theorem}\label{Toponogov}
For any comparison neighborhood $U$ in a Lorentzian pre-length space $(X,d,\ll,\leq,\tau)$ with curvature bounded below by $k$ and any triangle $(x,y,z)$ in $U$ we have
\[
\measuredangle yxz \leq \displaystyle\frac{\widetilde{\measuredangle}_{k}yxz}{ac}, \ \measuredangle xyz \leq \displaystyle\frac{\widetilde{\measuredangle}_{k}xyz}{ab}, \ \measuredangle xzy \leq \displaystyle\frac{\widetilde{\measuredangle}_{k}xzy}{bc}
\]
\end{theorem}
Once we have a notion of normalized angle in a Lorentzian length space, we show an adapted version of the hinge theorem.
\begin{theorem}\label{HingeTheorem}
Let $U$ be a comparison neighborhood of a Lorentzian pre-length space $(X,d,\ll,\leq,\tau)$ with curvature bounded below by $k$. Let us consider a timelike geodesic triangle $(x,y,z)$ in $U$, realized by maximal causal curves $\alpha$, $\beta$, $\gamma$ whose lengths satisfy timelike size bounds for $k$ and such that $\measuredangle yxz$ is finite and $\measuredangle \bar{y}\bar{x}\bar{z} = ac \measuredangle yxz$. Then, for any $(s,t)\in (0,a]\times (0,c]$ satisfying $\alpha(s)\ll \gamma(t)$ or $\gamma(t)\ll \alpha(s)$, we have
\[
\tau(\alpha(s),\gamma(t)) \geq \bar{\tau}(\bar{\alpha}(s),\bar{\gamma}(t)).
\]
\end{theorem}
\begin{proof}
Let us take $(s,t)\in (0,a]\times (0,c]$ such that $\alpha(s)\ll \gamma(t)$. If $\bar{\tau}(\bar{\alpha}(s),\bar{\gamma}(t))=0$ then we are done. Otherwise, suppose $\bar{\tau}(\bar{\alpha}(s),\bar{\gamma}(t))>0$. Let $\triangle x_1y_1z_1$ be a comparison triangle for the triangle $(x,\alpha(s),\gamma(t))$. Applying Lemma \ref{RescalAngulo} we get
\[
\measuredangle \bar{\alpha}(s)\bar{x}\bar{\gamma}(t) = \frac{st}{ac} \measuredangle \bar{y}\bar{x}\bar{z} = \frac{st}{ac} \left( ac \measuredangle yxz \right) = st \measuredangle yxz
\]
Because of the definition of the normalized angle we have
\[
\measuredangle \bar{\alpha}(s)\bar{x}\bar{\gamma}(t) = st \measuredangle yxz
\leq st \theta^{k}_{\alpha,\gamma}(s,t)
= st \left(\displaystyle\frac{\widetilde{\measuredangle}_{k} \alpha(s)x\gamma(t)}{st} \right)
= \widetilde{\measuredangle}_{k} \alpha(s)x\gamma(t) = \measuredangle y_1x_1z_1,
\]
therefore $\measuredangle \bar{\alpha}(s)\bar{x}\bar{\gamma}(t) \leq \measuredangle y_1x_1z_1$. By Lemma \ref{HingeLemmaMk} we conclude $\bar{\tau}(y_1,z_1)\geq \bar{\tau}(\bar{\alpha}(s),\bar{\gamma}(t))$. But $\tau(\alpha(s),\gamma(t)) = \bar{\tau}(y_1,z_1)$, thus
\[
\tau(\alpha(s),\gamma(t)) \geq \bar{\tau}(\bar{\alpha}(s),\bar{\gamma}(t)).
\]
\end{proof}
For metric length spaces of bounded curvature the following fact about the sum of adjacent angles is well known: for any geodesic segment $\gamma$ from $x$ to $y$, a point $p\in\gamma$ and a point $q\not\in\gamma$ we have
\begin{equation}\label{SumAnglesPi}
\measuredangle xpq + \measuredangle qpy = \pi
\end{equation}
In the case of Lorentzian model spaces of constant curvature we have the next lemma. A detailed proof can be found in \cite{Kirch}.
\begin{lemma}\label{SumaAngleCero}
Let us consider a triangle $\triangle pqr$ satisfying size bounds for $k$ in $\mathbb{M}_{k}^{L}$. Let $m$ be a point on the side $\beta$ joining $p$ with $r$
and suppose this geodesic $\beta$ is parametrized by $[0,1]$, so there exists $\lambda\in[0,1]$ such that $m=\beta(\lambda)$. Then
\[
(1-\lambda)\measuredangle qmp + \lambda \measuredangle rmq =0.
\]
\end{lemma}
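As a quick sanity check of the sign conventions in Lemma \ref{SumaAngleCero}, consider the following flat-space instance (computed with the Law of Cosines for $\mathbb{R}^{2}_{1}$, Lemma \ref{MinkowskiLawCosines} below). Take $p=(0,0)$, $r=(2,0)$, $m=\left(\tfrac{1}{2},0\right)$ (so $\lambda=\tfrac{1}{4}$) and $q=\left(\tfrac{1}{4},\tfrac{1}{10}\right)$, for which $\bar\tau(p,m)=\tfrac{1}{2}$, $\bar\tau(m,r)=\tfrac{3}{2}$, $\bar\tau(p,q)^2=\bar\tau(q,m)^2=\tfrac{21}{400}$ and $\bar\tau(q,r)^2=\tfrac{1221}{400}$. Then
\[
\measuredangle qmp = \frac{\bar\tau(p,q)^2-\bar\tau(p,m)^2-\bar\tau(q,m)^2}{2} = -\frac{1}{8}, \qquad \measuredangle rmq = \frac{\bar\tau(q,r)^2-\bar\tau(q,m)^2-\bar\tau(m,r)^2}{2} = \frac{3}{8},
\]
and indeed $(1-\lambda)\measuredangle qmp + \lambda \measuredangle rmq = \tfrac{3}{4}\left(-\tfrac{1}{8}\right)+\tfrac{1}{4}\cdot\tfrac{3}{8}=0$.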
One of the most important issues that need to be addressed in comparison geometry consists in relating the measures of the angles of the comparison triangles coming from the subdivision of a given triangle. The following result deals with this situation.
\begin{lemma}[Straightening lemma for shoulder angles]\label{SLemma}
Let $\triangle qpr$, $\triangle q_1p_1m_1$ and $\triangle q_2m_2r_2$ be three timelike geodesic triangles satisfying size bounds for $k$ in $\mathbb{M}_{k}^{L}$, and let $m$ be a point on the side joining $p$ to $r$. Suppose $\overline{\tau}(q,p)=\overline{\tau}(q_1,p_1)$, $\overline{\tau}(q,r)= \overline{\tau}(q_2,r_2)$, $\overline{\tau}(p,m)= \overline{\tau}(p_1,m_1)$, $\overline{\tau}(m,r)= \overline{\tau}(m_2,r_2)$ and $\overline{\tau}(q_1,m_1)= \overline{\tau}(q_2,m_2)$. If
\[
\left(1 - \frac{\overline{\tau}(p,m)}{\overline{\tau}(p,r)}\right)\widetilde{\measuredangle} q_1m_1p_1 + \frac{\overline{\tau}(p,m)}{\overline{\tau}(p,r)} \widetilde{\measuredangle} q_2m_2r_2 \geq 0,
\]
then
\[
\widetilde{\measuredangle} qpm \geq \widetilde{\measuredangle} q_1p_1m_1 \ \mbox{ and } \ \widetilde{\measuredangle} qrm \geq \widetilde{\measuredangle} q_2r_2m_2.
\]
\end{lemma}
\begin{remark}
\begin{enumerate}
\item The statement obtained by reversing all inequalities in the above Lemma holds true as well.
\item We have analogous results when the point $m$ is located on any of the remaining sides of the triangle.
\end{enumerate}
\end{remark}
We end our analysis on non-normalized angles by proving a converse of the straightening lemma that will prove useful in establishing the existence of normalized angles in Lorentzian pre-length spaces (see Proposition \ref{SumGreaterCero} below).
\begin{lemma}\label{ConverseSL}
Suppose $\triangle {q}{p}{r}$ is a triangle satisfying size bounds for $k$ in $\mathbb{M}_{k}^{L}$. Let ${m}$ be a point on the side joining ${p}$ to ${r}$, and let $\lambda=\lambda_{{m}} \in[0,1]$ be the affine parameter corresponding to the point $m$ when this side is parametrized by $[0,1]$. Let $\triangle q_1p_1m_1$ and $\triangle q_2m_2r_2$ be triangles in $\mathbb{M}_{k}^{L}$ satisfying $\overline{\tau}(q_1,m_1)=\overline{\tau}(q_2,m_2)$, $\overline{\tau}(q_1,p_1)=\overline{\tau}({q},{p})$, $\overline{\tau}( q_2,r_2)=\overline{\tau}({q},{r})$, $\overline{\tau}( p_1,m_1)=\overline{\tau}({p},{m})$ and $\overline{\tau}( m_2,r_2)=\overline{\tau}({m},{r})$. If
\[
\measuredangle {q}{p}{m}\geq \measuredangle q_1p_1m_1 \ \mbox{and} \ \measuredangle {q}{r}{m}\geq \measuredangle q_2r_2m_2
\]
then
\[
(1-\lambda)\measuredangle p_1m_1q_1 + \lambda\measuredangle r_2m_2q_2\geq 0.
\]
\end{lemma}
\begin{proof}
Since $\measuredangle {q}{p}{m}\geq \measuredangle q_1p_1m_1$, applying Lemma \ref{HingeLemmaMk} we have $\bar\tau (q,m)\leq \bar\tau ( q_1,m_1)$. Then, by Lemma \ref{Hinge2}, the shoulder angles in triangles $\triangle {q}{p}{m}$ and $\triangle q_1p_1m_1$ satisfy $\measuredangle {p}{m}{q} \leq \measuredangle p_1m_1q_1$. Similarly, $\measuredangle {q}{m}{r} \leq \measuredangle q_2m_2r_2$. In conclusion
\[
(1-\lambda)\measuredangle p_1m_1q_1 + \lambda\measuredangle r_2m_2q_2 \geq (1-\lambda)\measuredangle {p}{m}{q} + \lambda \measuredangle {q}{m}{r} =0,
\]
where the last equality follows from Lemma \ref{SumaAngleCero}.
\end{proof}
We now establish the same inequality as above for Lorentzian pre-length spaces with lower curvature bounds.
\begin{proposition}\label{SumGreaterCero}
Let $(X,d,\ll, \leq,\tau)$ be a Lorentzian pre-length space with timelike curvature bounded below by $k$. For a timelike geodesic triangle $(x,y,z)$ realized by maximal causal curves $\alpha$, $\beta$, $\gamma$ whose side lengths satisfy timelike size bounds for $k$, fix a point $m$ on $\beta$ and a maximal timelike curve $\sigma$ connecting $m$ with $x$. Then
\[
\measuredangle ymx + \measuredangle xmz \geq 0.
\]
Similar inequalities hold if $m$ is on $\alpha$ or $\gamma$.
\end{proposition}
\begin{proof}
Let $U$ be a comparison neighborhood around $m$. Let the points $q$, $p$, $r\in U$ lie on the sides $\beta_{my}$, $\sigma$ and $\beta_{mz}$, respectively, such that $p\ll q$. Thus, $(p,q,r)$ is a timelike geodesic triangle satisfying size bounds for $k$. Furthermore, set $\tau(q,m)=s$, $\tau(m,r)=t$ and $\tau(p,m)=h$.
\begin{figure}[ht]
\centering{
\includegraphics[scale=.3]{Diapositiva9.png}
\caption{Timelike geodesic triangle $(x,y,z)$ in $X$}
}
\end{figure}
Let $\triangle p_1q_1r_1$, $\triangle p_2q_2m_2$ and $\triangle p_3m_3r_3$ be comparison triangles in $\mathbb{M}^L_k$ for the triangles $(p,q,r)$, $(p,q,m)$ and $(p,m,r)$, respectively, and let $m_1$ be the point on the side of $\triangle p_1q_1r_1$ joining $q_1$ to $r_1$ corresponding to $m$. Let us denote $\bar{\tau}(p_1,m_1)=h_1$.
\begin{figure}[h]
\centering{
\includegraphics[scale=.3]{Diapositiva10.png}
\caption{Comparison triangles in $\mathbb{M}_{k}^{L}$}
}
\end{figure}
By the curvature condition we have $h_1\geq h$, thus by Lemma \ref{HingeLemmaMk} we have $\measuredangle p_1q_1m_1 \geq \measuredangle p_2q_2m_2$ and $\measuredangle p_1r_1m_1\geq \measuredangle p_3r_3m_3$. Now applying Lemma \ref{ConverseSL} we deduce
\[
\begin{array}{rcl}
\left(1-\frac{s}{s+t}\right)\measuredangle q_2m_2p_2 + \left(\frac{s}{s+t}\right) \measuredangle r_3m_3p_3 &\geq& 0 \\
\left(\frac{t}{s+t}\right)(sh\theta^{k}_{\sigma,\beta_{my}}(h,s)) + \left(\frac{s}{s+t}\right)(th\theta^{k}_{\beta_{mz},\sigma}(t,h)) &\geq& 0\\
\theta^{k}_{\sigma,\beta_{my}}(h,s) + \theta^{k}_{\beta_{mz},\sigma}(t,h) &\geq& 0
\end{array}
\]
Since $\measuredangle xmz=\lim\limits_{{t,h}\to 0}\theta^{k}_{\beta_{mz},\sigma}(t,h)$ is non-negative, the above inequality provides a lower bound for $\theta^{k}_{\sigma,\beta_{my}}(h,s)$. Thus $\measuredangle ymx$ exists as well. Taking the limit as $s,t,h$ go to $0$ we conclude the desired inequality
\[
\measuredangle ymx + \measuredangle xmz \geq 0.
\]
\end{proof}
\begin{remark}
We emphasize that Proposition \ref{SumGreaterCero} is in fact a global result (as long as timelike size bounds for $k$ are satisfied). We also would like to remark that by virtue of this result, we can show that in any Lorentzian pre-length space with lower timelike curvature bounds all three normalized angles in a geodesic triangle satisfying timelike size bounds exist and are finite, provided that its sides can be extended past their vertices as maximal timelike curves.
\end{remark}
\begin{remark}
When the normalized angles exist, their value is independent of the comparison model space $\mathbb{M}_k^L$ in which the angle comparison functions $\theta^{k}_{\alpha,\gamma}$, $\theta^{k}_{\gamma,\beta}$, $\theta^{k}_{\beta,\alpha}$ are defined. Intuitively, this is expected in the Riemannian setting, since all Riemannian model spaces are conformally equivalent and angles between curves are preserved under conformal transformations. In practice, the proof relies heavily on the Law of Cosines for the model spaces. The same is true for Lorentzian spaceforms, thus following closely the argument in Proposition 2.9 in \cite{Bridson} and using the Lorentzian Laws of Cosines (see for example \cite{Birman}, \cite{Laws}) we can show the aforementioned independence in the context of Lorentzian pre-length spaces as well.
\end{remark}
As it turns out, the inequality of Proposition \ref{SumGreaterCero} enables us to have a partial converse to Theorem \ref{Toponogov}.
\begin{theorem}
Let $(X,d,\ll,\leq,\tau)$ be a Lorentzian pre-length space such that for any geodesic triangle $(x,y,z)$, realized by maximal timelike curves $\alpha$, $\beta$, $\gamma$ in a compatible neighborhood $U$, all normalized angles exist and satisfy
\[
\measuredangle yxz \leq \displaystyle\frac{\widetilde{\measuredangle}_{k}yxz}{ac}, \ \measuredangle xyz \leq \displaystyle\frac{\widetilde{\measuredangle}_{k}xyz}{ab}, \ \measuredangle xzy \leq \displaystyle\frac{\widetilde{\measuredangle}_{k}xzy}{bc}
\]
Furthermore,
\begin{enumerate}
\item If $p\in\beta$ then $\measuredangle ypx + \measuredangle xpz \geq 0$.
\item If $p\in \alpha$ then $\measuredangle xpz + \measuredangle zpy \geq 0$.
\item If $p\in \gamma$ with $y\ll p$ or $p\ll y$ then $\measuredangle xpy + \measuredangle ypz \geq 0$.
\end{enumerate}
Then $(X,d,\ll,\leq,\tau)$ has timelike curvature bounded below by $k$.
\end{theorem}
\begin{proof}
We show that $U$ satisfies the conditions of Proposition \ref{CeviansCriteria}. Here we handle the case $p\in\beta$, since the remaining ones are completely analogous. To simplify notation, set $b_1=\tau(y,p)$, $b_2=\tau(p,z)$ (thus $b=b_1+b_2$), $h=\tau(x,p)$ and $\bar{h}=\bar{\tau}(\bar{x},\bar{p})$. We want to prove that $\bar{h}\geq h$. Let $\triangle x_1y_1p_1$ and $\triangle x_2p_2z_2$ be comparison triangles in $\mathbb{M}_{k}^{L}$ for the triangles $(x,y,p)$ and $(x,p,z)$, respectively. Thus
\[
\begin{array}{rcl}
\left(1-\frac{b_1}{b}\right)\measuredangle y_1p_1x_1 + \left(\frac{b_1}{b}\right)\measuredangle x_2p_2z_2 &=& \left(\frac{b_2}{b}\right)\measuredangle y_1p_1x_1 + \left(\frac{b_1}{b}\right)\measuredangle x_2p_2z_2 \\
&\geq & \left(\frac{b_2}{b}\right) \left(b_1h\measuredangle ypx\right) + \left(\frac{b_1}{b}\right) \left(b_2h\measuredangle xpz\right) \\
&=& \left(\frac{b_1b_2h}{b}\right) \left(\measuredangle ypx + \measuredangle xpz \right) \geq 0, \\
\end{array}
\]
therefore $\left(1-\frac{b_1}{b}\right)\measuredangle y_1p_1x_1 + \left(\frac{b_1}{b}\right)\measuredangle x_2p_2z_2 \geq 0$. Then by Lemma \ref{SLemma} we get $\measuredangle \bar{x}\bar{y}\bar{p} \geq \measuredangle x_1y_1p_1$. Using Lemma \ref{HingeLemmaMk} we conclude $\bar{\tau}(\bar{x},\bar{p})\geq \bar{\tau}(x_1,p_1)$, which implies $\bar{h}\geq h$.
\end{proof}
\section{First variation for nonnegatively curved Lorentzian length spaces}\label{sec:firstvar}
This section is devoted to proving a first variation formula for globally hyperbolic Lorentzian length spaces with timelike curvature bounded below by zero. We first show a local version for pre-length spaces and then use it for the global result. Throughout this section, we use repeatedly the Law of Cosines in Minkowski space $\mathbb{R}^2_1$, thus we state it for ease of reference.
\begin{lemma}[Law of Cosines for $\mathbb{R}^{2}_{1}$]\label{MinkowskiLawCosines}
Given a timelike geodesic triangle $\triangle xyz$ satisfying curvature bounds in $\mathbb{R}^{2}_{1}$ with $x\ll y \ll z$ and $\tau_{\mathbb{R}^{2}_{1}}$ the time separation function in $\mathbb{R}^{2}_{1}$ we have
\[
\begin{array}{rcl}
\tau_{\mathbb{R}^{2}_{1}}(y,z)^{2} &=& \tau_{\mathbb{R}^{2}_{1}}(x,y)^2 + \tau_{\mathbb{R}^{2}_{1}}(x,z)^2 + 2\measuredangle yxz, \\
\tau_{\mathbb{R}^{2}_{1}}(x,y)^{2} &=& \tau_{\mathbb{R}^{2}_{1}}(x,z)^2 + \tau_{\mathbb{R}^{2}_{1}}(y,z)^2 + 2\measuredangle xzy, \\
\tau_{\mathbb{R}^{2}_{1}}(x,z)^{2} &=& \tau_{\mathbb{R}^{2}_{1}}(x,y)^2 + \tau_{\mathbb{R}^{2}_{1}}(y,z)^2 + 2\measuredangle zyx, \\
\end{array}
\]
\end{lemma}
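It is worth recording the geometric meaning of these angles (a standard computation, stated here for orientation). Writing $y=x+a(\cosh\varphi_1,\sinh\varphi_1)$ and $z=x+c(\cosh\varphi_2,\sinh\varphi_2)$ with $a=\tau_{\mathbb{R}^{2}_{1}}(x,y)$ and $c=\tau_{\mathbb{R}^{2}_{1}}(x,z)$, expanding the Minkowski inner product gives
\[
\tau_{\mathbb{R}^{2}_{1}}(y,z)^{2} = a^{2} + c^{2} - 2ac\cosh(\varphi_2-\varphi_1),
\]
so comparing with the first identity of Lemma \ref{MinkowskiLawCosines} yields $\measuredangle yxz = -ac\cosh(\varphi_2-\varphi_1) \leq -ac$. In particular, the normalized angle at $x$ equals minus the hyperbolic cosine of the rapidity difference between the two sides.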
As an immediate consequence of Lemma \ref{MinkowskiLawCosines} we have the following result.
\begin{lemma}\label{ConvergenTriangleMinkowski}
Let $\{\triangle x_iy_iz_i\}_{i=1}^{\infty}$ be a sequence of timelike geodesic triangles in $\mathbb{R}^2_1=\mathbb{M}^L_0$ such that
\begin{enumerate}
\item $\tau_{\mathbb{R}^{2}_{1}}(x_i,y_i)=a$ and $\tau_{\mathbb{R}^{2}_{1}}(x_i,z_i)=c$.
\item
$\displaystyle\lim_{i\to\infty} x_i = x, \displaystyle\lim_{i\to\infty} y_i = y, \displaystyle\lim_{i\to\infty} z_i = z$,
\item $\lim\limits_{i\to\infty}\tau_{\mathbb{R}^{2}_{1}}(y_i,z_i)=\tau_{\mathbb{R}^{2}_{1}}(y,z)$.
\end{enumerate}
Then $\displaystyle\lim_{i\to\infty} \measuredangle y_ix_iz_i = \measuredangle yxz$.
\end{lemma}
\begin{proposition}[Semi-continuity of angles]\label{SemicontinuityAngles}
Let $(X,d,\ll,\leq,\tau)$ be a Lorentzian pre-length space with timelike curvature bounded below by $k=0$ and $U$ a comparison neighborhood with respect to $\mathbb{R}^{2}_{1}$. Let $\{(x_i,y_i,z_i)\}_{i=1}^\infty$ be a sequence of timelike geodesic triangles realized by maximal curves $\alpha_i$, $\beta_i$, $\gamma_i$ whose side lengths satisfy timelike size bounds for $0$. Suppose
\[
\displaystyle\lim_{i\to\infty} x_i = x, \quad \displaystyle\lim_{i\to\infty} y_i = y, \quad \displaystyle\lim_{i\to\infty} z_i = z,
\]
and suppose $\alpha_i\to\alpha$, $\beta_i\to\beta$, $\gamma_i\to\gamma$ uniformly, where $\alpha$, $\beta$, $\gamma$ are maximal timelike curves that satisfy timelike size bounds for $k=0$. Finally, suppose all normalized angles are finite. Then
\[
\displaystyle\limsup_{i\to\infty} \measuredangle y_ix_iz_i \leq \measuredangle yxz.
\]
\end{proposition}
\begin{proof}
Fix $\varepsilon>0$. By the definition of $\measuredangle yxz$, there exists $\delta>0$ with the following property: for every $s,t<\delta$ such that $\alpha(s)\ll \gamma(t)$ or $\gamma(t)\ll \alpha(s)$ we have
\[
\measuredangle yxz > \theta^{k}_{\alpha,\gamma}(s,t) - \varepsilon.
\]
Take $s,t<\delta$ with this property and denote $p=\alpha(s)$, $q=\gamma(t)$, $p_i=\alpha_i(s)$ and $q_i=\gamma_i(t)$. If there existed a subsequence $\{p_{i_j},q_{i_j}\}$ such that $\tau(p_{i_j},q_{i_j})=0$, then by the continuity of $\tau$ in $U$ we would have $\tau(p,q)=0$, a contradiction. Thus, we can find $N\in\mathbb{N}$ such that $\tau(p_i,q_i)>0$ for all $i\geq N$. Let $\triangle \bar{x}_{i}\bar{p}_{i}\bar{q}_{i}$ and $\triangle \bar{x}\bar{p}\bar{q}$ be comparison triangles for $(x_i,p_i,q_i)$ and $(x,p,q)$, respectively. Furthermore, $\displaystyle\lim_{i\to\infty}\tau(p_i,q_i)=\tau(p,q)$. On the other hand, observe that $\displaystyle\lim_{i\to\infty} \theta^{k}_{\alpha_i,\gamma_i}(s,t) = \theta^{k}_{\alpha,\gamma}(s,t)$, since $\measuredangle p_ix_iq_i\to \measuredangle pxq$ as $i\to\infty$ by Lemma \ref{ConvergenTriangleMinkowski}. So there exists $M\in\mathbb{N}$ such that
\[
\theta^{k}_{\alpha,\gamma}(s,t)> \theta^{k}_{\alpha_i,\gamma_i}(s,t)-\varepsilon,
\]
for all $i\geq M$. In conclusion, for all $i\geq \max\{N,M\}$ we have
\[
\measuredangle yxz \geq \theta^{k}_{\alpha,\gamma}(s,t) - \varepsilon > \theta^{k}_{\alpha_i,\gamma_i}(s,t)-2\varepsilon \geq \measuredangle y_ix_iz_i - 2\varepsilon.
\]
Hence
\[
\displaystyle\limsup_{i\to\infty} \measuredangle y_ix_iz_i \leq \measuredangle yxz.
\]
\end{proof}
\begin{proposition}\label{LimInfVF}
Let $(X,d,\ll,\leq,\tau)$ be a Lorentzian pre-length space with timelike curvature bounded below by $0$ and $U$ a comparison neighborhood with respect to $\mathbb{R}^{2}_{1}$. Consider a timelike geodesic triangle $(x,y,z)$ in $U$ realized by maximal curves $\alpha$, $\beta$, $\gamma$ whose side lengths satisfy timelike size bounds for $0$. For every $0\leq t \leq \tau(x,y)$ we define $\ell(t)=\tau(\alpha(t),z)$. Then
\[
\displaystyle\liminf_{t\to 0^{+}} \displaystyle\frac{\ell(t)-\ell(0)}{t} \geq \measuredangle yxz.
\]
\end{proposition}
\begin{proof}
Let $c=\tau (x,z)$. We only consider the case when $s\in(0,c]$ satisfies $\alpha(t)\ll \gamma(s)$, since the case $\gamma(s)\ll \alpha(t)$ is similar. Let $\triangle x_1p_1z_1$ be a comparison triangle for $(x,\alpha(t),z)$, let $q_1$ be the point on the side joining $x_1$ to $z_1$ corresponding to $\gamma(s)$, and let $\triangle x_2p_2q_2$ be a comparison triangle for $(x,\alpha(t),\gamma(s))$.
\begin{figure}[h]
\centering{
\includegraphics[scale=.3]{Diapositiva11.png}
\caption{Variation Formula.}
}
\end{figure}
Now observe that
\[
\theta_{\alpha,\gamma}(t,s)=\displaystyle\frac{\measuredangle p_2x_2q_2}{ts},
\]
and using Lemma \ref{RescalAngulo} we also have $\measuredangle p_1x_1q_1 = \frac{s}{c}\measuredangle p_1x_1z_1$. Thus, applying Lemma \ref{MinkowskiLawCosines} in triangle $\triangle p_1x_1z_1$ we get
\[
\begin{array}{rcl}
\displaystyle\frac{\measuredangle p_1x_1q_1}{ts} &=& \displaystyle\frac{ \frac{s}{c}\measuredangle p_1x_1z_1 }{ts} \\
&=& \displaystyle\frac{\measuredangle p_1x_1z_1}{ct} \\
&=& \displaystyle\frac{\ell(t)^2 - c^2-t^2}{2ct} \\
&=& \left(\displaystyle\frac{\ell(t)-c}{t}\right) \left(\displaystyle\frac{\ell(t)+c}{2c}\right) - \displaystyle\frac{t}{2c},
\end{array}
\]
therefore $\displaystyle\frac{\measuredangle p_1x_1q_1}{ts} = \left(\displaystyle\frac{\ell(t)-c}{t}\right) \left(\displaystyle\frac{\ell(t)+c}{2c}\right) - \displaystyle\frac{t}{2c}$. On the other hand, by the curvature conditions we have
\[
\overline{\tau}(p_2,q_2) = \tau(\alpha(t), \gamma(s)) \leq \overline{\tau}(p_1,q_1),
\]
which means that $\measuredangle p_2x_2q_2\leq \measuredangle p_1x_1q_1$ because of Lemma \ref{HingeLemmaMk}. In conclusion
\[
\begin{array}{rcl}
\left(\displaystyle\frac{\ell(t)-c}{t}\right) \left(\displaystyle\frac{\ell(t)+c}{2c}\right) - \displaystyle\frac{t}{2c} &=& \displaystyle\frac{\measuredangle p_1x_1q_1}{ts} \\
&\geq& \displaystyle\frac{\measuredangle p_2x_2q_2}{ts} \\
&=& \theta_{\alpha,\gamma}(t,s) \\
&\geq& \theta_{\alpha,\gamma}(t,s)- \displaystyle\frac{t}{2c} \\
&\geq& \measuredangle yxz- \displaystyle\frac{t}{2c}.
\end{array}
\]
Taking the limit as $t\to 0^{+}$, the term $\displaystyle\frac{t}{2c}$ goes to $0$ and $\ell(t)$ tends to $c$, so $\displaystyle\frac{\ell(t)+c}{2c}$ tends to $1$, establishing the desired inequality.
\end{proof}
\begin{remark}\label{maxangle}
Here we note two important things. First, in Proposition \ref{LimInfVF} we can obtain similar inequalities for the angles $\measuredangle xyz$ and $\measuredangle xzy$ using the corresponding distance functions $\ell$. Second, observe that the angle $\measuredangle yxz$ is a function depending on the geodesics $\alpha$ and $\gamma$, but the function $\ell$ does not depend on $\gamma$. So, if $\max_{\gamma}(\measuredangle yxz)$ denotes the supremum of the angles $\measuredangle yxz$ over all the timelike maximal geodesics $\gamma$ connecting $x$ with $z$, we conclude
\[
\displaystyle\liminf_{t\to 0^{+}} \displaystyle\frac{\ell(t)-\ell(0)}{t} \geq \max_{\gamma}(\measuredangle yxz).
\]
\end{remark}
\begin{theorem}[First variation formula] \label{FVFLocal}
Let $(X,d,\ll,\leq,\tau)$ be a Lorentzian pre-length space with timelike curvature bounded below by $0$ and $U$ a comparison neighborhood. Consider a timelike geodesic triangle $(x,y,z)$ in $U$ realized by maximal curves $\alpha$, $\beta$, $\gamma$ whose side lengths satisfy timelike size bounds for $0$. For every $0\leq t \leq \tau(x,y)=a$ we define $\ell(t)=\tau(\alpha(t),z)$. Assume that a sequence of future directed causal curves $\{\gamma_i\}_{i=1}^{\infty}$ converges uniformly to $\gamma$, where $\gamma_i(0)=\alpha (t_i)$ for some sequence $\{t_i\}_{i=1}^{\infty}$, $t_i\to 0$ as $i\to \infty$. Then
\[
\displaystyle\lim_{i\to \infty} \displaystyle\frac{\ell(t_i)-\ell(0)}{t_i} = \measuredangle yxz.
\]
\end{theorem}
\begin{proof}
Set $p_i=\alpha(t_i)$ and take a comparison triangle $\triangle \tilde{x}_i\tilde{p}_i\tilde{z}_i$ for $(x,p_i,z)$. Thus, by Lemma \ref{MinkowskiLawCosines} applied to triangle $\triangle \tilde{x}_i\tilde{p}_i\tilde{z}_i$ we get
\[
\measuredangle zp_ix \leq \displaystyle\frac{\measuredangle \tilde{z}_i\tilde{p}_{i} \tilde{x}_i}{\ell(t_i)\cdot t_i} = \displaystyle\frac{c^2-\ell(t_i)^2-t_i^{2} }{2\ell(t_i)\cdot t_i},
\]
therefore
\[
\left( \displaystyle\frac{\ell(t_i)-c}{t_i} \right) \left( \displaystyle\frac{\ell(t_i)+c}{2\ell(t_i)} \right) + \displaystyle\frac{t_i}{2\ell(t_i)} \leq -\measuredangle zp_ix.
\]
Because of Proposition \ref{SumGreaterCero} we conclude
\[
\left( \displaystyle\frac{\ell(t_i)-c}{t_i} \right) \left( \displaystyle\frac{\ell(t_i)+c}{2\ell(t_i)} \right) + \displaystyle\frac{t_i}{2\ell(t_i)} \leq -\measuredangle zp_ix \leq \measuredangle yp_iz,
\]
hence
\[
\begin{array}{rcl}
\displaystyle\limsup_{i\to\infty} \displaystyle\frac{\ell(t_i)-\ell(0)}{t_i} &=& \displaystyle\limsup_{i\to\infty} \left[\left( \displaystyle\frac{\ell(t_i)-c}{t_i} \right) \left( \displaystyle\frac{\ell(t_i)+c}{2\ell(t_i)} \right) + \displaystyle\frac{t_i}{2\ell(t_i)}\right] \\
&\leq& \displaystyle\limsup_{i\to\infty} \measuredangle yp_iz \leq \measuredangle yxz,
\end{array}
\]
where the last inequality holds because of Proposition \ref{SemicontinuityAngles}. Finally, the result follows by combining this bound with Proposition \ref{LimInfVF}.
\end{proof}
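As a consistency check, the formula can be verified directly in the flat model space. Let $\alpha(t)=x+tu$ and $z=x+cv$ in $\mathbb{R}^{2}_{1}$, with $u$, $v$ future directed unit timelike vectors and $\cosh\varphi=-\langle u,v\rangle$. Then
\[
\ell(t)^{2} = -\langle z-\alpha(t),z-\alpha(t)\rangle = c^{2}-2ct\cosh\varphi+t^{2},
\]
hence $\ell(t)=c-t\cosh\varphi+O(t^{2})$ and
\[
\lim_{t\to 0^{+}}\frac{\ell(t)-\ell(0)}{t} = -\cosh\varphi = \measuredangle yxz,
\]
the last equality being the observation following Lemma \ref{MinkowskiLawCosines}.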
\begin{remark}
As in Remark \ref{maxangle} we are able to conclude that in fact
\[
\displaystyle\lim_{i\to \infty} \displaystyle\frac{\ell(t_i)-\ell(0)}{t_i} = \max_{\gamma}(\measuredangle yxz),
\]
because
\[
\displaystyle\limsup_{i\to\infty} \displaystyle\frac{\ell(t_i)-\ell(0)}{t_i} \leq \measuredangle yxz \leq \max_{\gamma}(\measuredangle yxz).
\]
\end{remark}
We emphasize that the following first variation formula holds for timelike geodesic triangles of arbitrary size. Recall that in globally hyperbolic Lorentzian length spaces, the time separation $\tau$ is continuous and any two causally related points can be joined by a maximal causal curve.
\begin{theorem}\label{FVFGlobal}
Let $(X,d,\ll ,\leq, \tau)$ be a globally hyperbolic Lorentzian length space with timelike curvature bounded below by $0$. Consider a timelike geodesic triangle $(x,y,z)$ in $X$ realized by maximal curves $\alpha$, $\beta$, $\gamma$ whose side lengths satisfy timelike size bounds for $0$. For every $0\leq t \leq \tau(x,y)=a$ we define $\ell(t)=\tau(\alpha(t),z)$. Assume that a sequence of future directed causal curves $\{\gamma_i\}_{i=1}^{\infty}$ converges uniformly to $\gamma$, where $\gamma_i(0)=\alpha (t_i)$ for some sequence $\{t_i\}_{i=1}^{\infty}$, $t_i\to 0$ as $i\to \infty$. Then
\[
\displaystyle\lim_{i\to \infty} \displaystyle\frac{\ell(t_i)-\ell(0)}{t_i} = \measuredangle yxz.
\]
\end{theorem}
\begin{proof}
The proof is divided into two parts. First, let us take $U$ a comparison neighborhood with respect to $\mathbb{R}^{2}_{1}$ around $x$. Fix two points $y_0\in\alpha $ and $z_0\in\gamma$ in $U$ such that $y_0\ll z_0$ and define $\ell_{0}(t)=\tau(\alpha(t),z_0)$ for $0\leq t \leq \tau(x,y_0)$. Then by Theorem \ref{FVFLocal} we have
\[
\displaystyle\lim_{i\to\infty} \displaystyle\frac{\ell_0(t_i)-\ell_0(0)}{t_i} = \measuredangle y_0xz_0 = \measuredangle yxz.
\]
Now observe that $\ell(t_i)\geq \ell_0(t_i)+\tau(z_0,z)$, so that
\[
\displaystyle\frac{\ell(t_i)-\tau(x,z)}{t_i} \geq \displaystyle\frac{\ell_0(t_i)-(\tau(x,z)-\tau(z_0,z))}{t_i},
\]
and, since $z_0$ lies on the maximal curve $\gamma$ so that $\tau(x,z_0)=\tau(x,z)-\tau(z_0,z)=\ell_0(0)$, we conclude
\[
\displaystyle\liminf_{i\to\infty} \displaystyle\frac{\ell(t_i)-\ell(0)}{t_i}\geq
\displaystyle\lim_{i\to\infty} \displaystyle\frac{\ell_0(t_i)-\ell_0(0)}{t_i} = \measuredangle yxz.
\]
For the second part, using the continuity of $\tau$ with respect to $d$, take $i$ large enough so that $\alpha(t_i)\in U$ and choose $z_i\in \gamma$ with $\tau(\alpha(t_i),z_i)=r$ for a fixed $r>0$. Let $(\bar{x}_i,\bar{p}_i,\bar{z}_i)$ be a comparison triangle in $\mathbb{R}^{2}_{1}$ for the triangle $(x,\alpha(t_i),z_i)$. Set $c_i=\tau(x,z_i)$; then by the curvature conditions in $U$ and applying the Law of Cosines in $\mathbb{R}^{2}_{1}$ we have
\[
\measuredangle z_i\alpha(t_i)x \leq \displaystyle\frac{\measuredangle \bar{x}_i\bar{p}_i\bar{z}_i}{rt_i} = \displaystyle\frac{c_i^2-r^2-t_i^2}{2rt_i},
\]
which is equivalent to
\[
\left(\displaystyle\frac{r-c_i}{t_i}\right) \left(\displaystyle\frac{r+c_i}{2r}\right) + \displaystyle\frac{t_i}{2r} \leq -\measuredangle z_i\alpha(t_i)x \leq \measuredangle y\alpha(t_i)z_i = \measuredangle y\alpha(t_i)z,
\]
where the last inequalities hold on account of Proposition \ref{SumGreaterCero}. On the other hand, since $x\ll \alpha(t_i)\ll z_i \ll z$ we obtain $(\ell(t_i)-r)+ c_i\leq c$ and $r\leq c_i$, thus
\[
\left(\displaystyle\frac{\ell(t_i)-c}{t_i}\right)\left(\displaystyle\frac{r+r}{2r}\right) + \displaystyle\frac{t_i}{2r} \leq \left(\displaystyle\frac{r-c_i}{t_i}\right)\left(\displaystyle\frac{r+c_i}{2r}\right) + \displaystyle\frac{t_i}{2r} \leq \measuredangle y\alpha(t_i)z.
\]
Letting $t_i\to 0$ and using Proposition \ref{SemicontinuityAngles} we have
\[
\displaystyle\limsup_{i\to \infty} \displaystyle\frac{\ell(t_i)-c}{t_i} \leq \displaystyle\limsup_{i\to \infty} \measuredangle y\alpha(t_i)z \leq \measuredangle yxz,
\]
and we are done.
\end{proof}
\begin{remark}
Here we have the same situation as in Remark \ref{maxangle}, namely
\[
\displaystyle\lim_{i\to \infty} \displaystyle\frac{\ell(t_i)-\ell(0)}{t_i} = \max_{\gamma}(\measuredangle yxz).
\]
\end{remark}
\section*{Acknowledgements}
W. Barrera was partially supported by Conacyt under grants SNI 45382 and Ciencia de Frontera 21100. D. Solis was partially supported by Conacyt under grant SNI 38368. The authors are very thankful to T. Beran for insightful comments on an earlier version of this work. The authors are also very thankful to the organizers of \emph{SCRI21, a tribute to Roger Penrose} for this outstanding event.
\section{Introduction}
Warm large exoplanets, giant planets with 10--100~day orbital periods, pose a major challenge to our understanding of how planets form and evolve. Origins hypotheses developed and fine-tuned to account for the more readily discovered hot Jupiters (orbital periods $<10$~days) and the far more abundant warm sub-Neptunes find it challenging to account for warm, large exoplanets' occurrence rates, eccentricities, masses, and companion properties (e.g., \citealt{wu11,beau12,petr15,daws15b,huan16}; see Section~4.3 of \citealt{daws18} for a review). Although rarer than smaller planets and more distant giants, warm, large exoplanets are an outcome of physical processes that likely sculpt many planetary systems.
Recently some have argued for two origins channels for warm, large exoplanets (e.g., \citealt{daws13,dong14,daws15,petr16}): high eccentricity tidal migration, and a second channel that may involve disk migration and/or in situ formation. Under the hypothesis of high eccentricity tidal migration, warm, large exoplanets are planets caught in the act of migration: they began further from the star, were disturbed onto highly elliptical orbits, and are tidally circularizing to short orbital periods. However, a key piece of evidence supporting the second channel is the handful of warm, large exoplanets with nearby planets, which are incompatible with high eccentricity migration { and are not en route to becoming hot Jupiters}. Fig.~\ref{fig:arch} shows all confirmed systems with a warm, large exoplanet (mass greater than 0.25~$M_{\rm Jup}$ or radius greater than 8 Earth radii; period less than 100~days) and a companion with a $<100$~day orbital period. It is striking that most of these systems are in or near an orbital resonance, and almost all contain a known small planet on a $<10$~day orbital period, despite the low occurrence rate of such short period planets in general (e.g., \citealt{muld15}). They also happen to be some of the most iconic, well-studied exoplanet systems, probably because large and/or massive planets with short orbital periods are most amenable to transit and radial velocity characterization. Discovering and characterizing more warm, large exoplanets with nearby planets could help shed light on the nature of this second channel.
\begin{figure}
\begin{center}
\includegraphics[width=3.5in]{architecture.eps}
\caption{
\label{fig:arch}
All confirmed exoplanet systems with a warm, large exoplanet (mass greater than 0.25~$M_{\rm Jup}$ or radius greater than 8 Earth radii; orbital period less than 100~days) and one or more companions with a $<100$~day orbital period. { (The WASP-47 system satisfies these criteria but contains a hot Jupiter.)} Sizes shown are roughly proportional to planet size.
}
\end{center}
\end{figure}
The \emph{TESS}\xspace pipeline \citep{jenk16,twic18,li18} recently discovered a pair of warm, large planet candidates orbiting TOI-216\xspace. Like the other systems in Fig.~\ref{fig:arch}, the putative planets are in or near an orbital resonance. Their proximity to resonance leads to detectable transit timing variations (TTVs). Based on expected \emph{TESS}\xspace planet yields, \citet{hadd18} predicted that significant mass constraints from TTVs would be possible for of order five planets. Here we seek to validate and characterize the TOI-216\xspace planet candidates and assess what additional follow-up is necessary to test theories for their origin. We characterize the host star in Section~\ref{sec:star}. In Section~\ref{sec:lc}, we describe our analysis of the \emph{TESS}\xspace data and extraction of planet parameters. We rule out most astrophysical false positive scenarios in Section~\ref{sec:valid}. We constrain the system's orbital architecture in Section~\ref{sec:arch} -- including mutual inclination, TTVs, eccentricities, and additional transit signals -- and the planets' masses sufficiently to confirm the planets. We present our conclusions in Section~\ref{sec:discuss}.
\section{Stellar Characteristics}
\label{sec:star}
TOI-216\xspace\ is an 11.5 \emph{TESS}\xspace apparent magnitude, main sequence K-dwarf. To better refine its parameters -- particularly the metallicity -- we obtained seven spectra of TOI-216\xspace with the ANU 2.3m Echelle spectrograph over a period of 11~days in 2018 Nov. These observations were also made to broadly constrain the mass of the planets and to check for obvious astrophysical false positive scenarios, such as line blending due to background stars. The ANU 2.3m/Echelle is located at Siding Spring Observatory, Australia. The spectrograph has a spectral resolution of $\lambda / \Delta \lambda \equiv R = 23000$, covering the wavelength region of 3900--6700~\AA. Observations are bracketed by ThAr arc lamp exposures for wavelength calibration. Instrument stability issues limit the radial velocities to a typical precision of only $\sim 500\,\mathrm{m\,s}^{-1}$ for this facility. Stellar parameters for TOI-216\xspace were derived using SpecMatch \citep{Yee:2017} on the ANU 2.3m/Echelle spectra, yielding atmospheric parameters of $T_\mathrm{eff} = 5045\pm110$~K, $\log g = 4.53\pm0.12$\,dex, and $\mathrm{[Fe/H]}=-0.16\pm0.09$\,dex.
We use the approach described by \citet{daws15} to fit the observed stellar properties using the \citet{take07} and Dartmouth \citep{dott08} stellar evolution models. We perform an additional fit using the Dartmouth models to both the spectrum properties and the \emph{Gaia} DR2 parallax and apparent $g$ magnitude \citep{gaia16,gaia18}. We find that the measured atmospheric parameters are consistent with a main-sequence K-dwarf and list the derived stellar mass, radius, and density in column~2 of Table~\ref{tab:star}. The resulting values are in agreement with the \emph{TESS}\xspace Input Catalog (TIC; \citealt{stas18}) but more precise. We choose to use the Dartmouth values hereafter because the posteriors extend to a lower mass ($M_\star < 0.7 M_\odot$) than covered by the \citet{take07} models and because they allow us to fit the \emph{Gaia} DR2 parameters.
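As a quick consistency check, the derived density follows directly from the mass and radius: $\rho_\star/\rho_\odot = (M_\star/M_\odot)(R_\star/R_\odot)^{-3} = 0.77/0.747^{3} \approx 1.85$, in agreement with the value in Table~\ref{tab:star}.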
\begin{deluxetable*}{lcrrr}
\tablecaption{Stellar Parameters\tablenotemark{a} for TOI-216 \label{tab:star}}
\startdata
\\
Catalog Information \\
Parameters & Value & Source\\
\hline
~~~~R.A. (h:m:s) & 04:55:55.3 & \emph{Gaia} DR2\\
~~~~Dec. (d:m:s) & $-63$:16:36.2 & \emph{Gaia} DR2\\
~~~~Epoch & 2015.5 & \emph{Gaia} DR2 \\
~~~~Parallax (mas) & $5.59\pm0.03$ & \emph{Gaia} DR2\\
~~~~$\mu_{\mathrm{ra}}$ (mas yr$^{-1}$) & $-22.7\pm0.04$ & \emph{Gaia} DR2 \\
~~~~$\mu_{\mathrm{dec}}$ (mas yr$^{-1}$) & $-56.355\pm0.05$ & \emph{Gaia} DR2\\
~~~~$g$ magnitude & 12.163126\\
~~~~\emph{Gaia} DR2 ID & 4664811297844004352 & \\
~~~~TIC ID & 55652896 & \\
~~~~TOI ID & 216 & \\
~~~~TIC \emph{TESS}\xspace magnitude & 11.504\\
~~~~$V$ magnitude\tablenotemark{b} & 12.393\\
\hline
\hline
Spectroscopic properties \\
Parameters & Spectrum & Takeda\tablenotemark{c} & Dartmouth\tablenotemark{d} & +\emph{Gaia}\tablenotemark{d}\\
\hline
~~~~Stellar effective temperature, $T_{\rm eff}$ [K] & 5045$\pm$110 & 5056$^{+110}_{-112}$ & 5054$^{+103}_{-120}$ & 5089$^{+43}_{-45}$\\
~~~~Iron abundance, [Fe/H] & $-0.16\pm0.09$ & $-0.16\pm0.08$ & $-0.16\pm0.09$ & $-0.15^{+0.08}_{-0.09}$\\
~~~~Surface gravity, $\log g$ [cm~s$^{-2}$] & 4.53$\pm$0.12 & 4.578$^{+0.020}_{-0.023}$ & 4.58$^{+0.03}_{-0.04}$ & 4.586$^{+0.030}_{-0.035}$\\
~~~~Stellar mass, $M_{\star}$ [$M_{\odot}$] & & 0.78$^{+0.04}_{-0.02}$ & 0.76$^{+0.04}_{-0.03}$ & 0.77$^{+0.03}_{-0.03}$ \\
~~~~Stellar radius, $R_{\star}$ [$R_{\odot}$] & & 0.765$^{+0.023}_{-0.020}$ & 0.74$^{+0.04}_{-0.03}$ & 0.747$^{+0.015}_{-0.014}$\\
~~~~Stellar density, $\rho_{\star}$ [$\rho_{\odot}$] & & 1.812$^{+0.14}_{-0.146}$ & 1.995$^{+0.213}_{-0.230}$ & 1.84$^{+0.14}_{-0.15}$
\enddata
\tablenotetext{a}{As a summary statistic we report the median and 68.3\% confidence interval of the posterior distribution.}
\tablenotetext{b}{Using the relationship derived by \citet{jord10}, we compute the $V$ magnitude from the \emph{Gaia} $g$ magnitude and the Johnson-Cousins $I_C$ magnitude. We estimate the $I_C$ magnitude to be the \emph{TESS}\xspace magnitude, because the two band passes have the same center \citep{rick15}.}
\tablenotetext{c}{\citet{take07}}
\tablenotetext{d}{\citet{dott08}}
\end{deluxetable*}
\section{Light Curve Analysis}
\label{sec:lc}
TOI-216\xspace\ is located near the southern ecliptic pole, and is scheduled to be observed for 12 sectors of the first year of the \emph{TESS}\xspace Primary Mission. This paper is based on data from Sectors 1, 2, 3, 4, 5, and 6 (2018 July 25 -- 2019 January 7), during
which TOI-216\xspace\ was observed with CCD~1 on Camera~4, and from ground-based observatories.
\subsection{Data from \emph{TESS}\xspace Mission}
We use the publicly available 2-min cadence data from the \emph{TESS}\xspace Alerts, which is processed with the Science Processing Operations Center pipeline. The pipeline, a descendant of the \emph{Kepler}\xspace mission pipeline based at the NASA Ames Research Center \citep{jenk02,jenk10,jenk16}, analyzes target pixel postage stamps that are obtained for pre-selected target stars. For TOI-216\xspace, the short cadence pipeline detected two threshold crossing events at periods of 34.54~days and 17.1~days with high signal-to-noise. The candidates were also detected by the long cadence MIT Quick Look Pipeline \citep{sha19}.
\subsection{Ground-based Photometric Follow-up}
{ We used the resources of the \emph{TESS}\xspace Follow-up Observing Program (TFOP) Working Group (WG) Sub Group~1 (SG1)\footnote{\url{https://tess.mit.edu/followup/}} to collect seeing-limited time-series photometric follow-up of TOI-216.} The transit depths of both TOI-216 planet candidates, as predicted by the \emph{TESS}\xspace light curves, are deep enough to detect from the ground at high significance. Therefore our primary goal was to attempt to detect the transits using our higher spatial resolution ground-based imaging and a photometric aperture that is small enough to exclude the flux from known nearby stars that are bright enough to cause the \emph{TESS}\xspace detected events. The secondary goal was to identify or rule out potential nearby eclipsing binaries (Section~\ref{sec:valid}).
We used the {\tt TESS Transit Finder}, which is a customized version of the {\tt Tapir} software package \citep{Jensen:2013}, to schedule photometric time-series follow-up observations. We initially scheduled observations for both planet candidates according to the public linear ephemerides derived from Sectors 1 and 2 \emph{TESS}\xspace data. Our eight time-series follow-up observations are listed in Table~\ref{tab:ground}. We used the AstroImageJ software package \citep{Collins2017} for data reduction and aperture photometry for all of our follow-up photometric observations. The facilities used to collect the TOI-216 observations are: Las Cumbres Observatory (LCO) telescope network \citep{brown2013}; Hazelwood Observatory; the Myers-T50 Telescope; and El Sauce Observatory. All LCO 1~m telescopes are equipped with the Sinistro camera, with a 4k x 4k pixel Fairchild back illuminated CCD and a 26.5 x 26.5 arcmin FOV. The LCO 0.4~m telescopes are mounted with an SBIG STX6303 2048 x 3072 pixels CCD with a 19 x 29 arcmin FOV. Hazelwood is a private observatory with an f/8 Planewave Instruments CDK12 0.32 m telescope and an SBIG STT3200 2.2K$\times$1.5K CCD, giving a $20\arcmin\times13\arcmin$ field of view. The Myers-T50 is an f/6.8 PlaneWave Instruments CDK17 0.43 m Corrected Dall-Kirkham Astrograph telescope located at Siding Spring, Australia. The camera is a Finger Lakes Instruments (FLI) ProLine Series PL4710 - E2V, giving a $15\farcm5\times15\farcm5$ field of view. El Sauce is a private observatory with a Planewave CDK14 0.36 m telescope on a MI500/750F fork mount. The camera is an SBIG STT1603-3 1.5K$\times$1.0K CCD, giving a $18\farcm5\times12\farcm3$ field of view.
We observed five transits of TOI-216c\xspace at three epochs and confirmed that the transit events occur on target using follow-up apertures with radius $\sim 6\arcsec$. We conducted five TOI-216b\xspace observations at four transit epochs and ruled out the $\sim 4$ parts per thousand transit events at the public linear ephemeris. However, with the later addition of data from \emph{TESS}\xspace sectors 3 and 4 to the TTV analysis, we determined that the large TTV signal caused the transit events to egress before our follow-up observations started. We then observed an out-of-transit sequence that occurred just prior to the newly determined transit ingress time to help constrain the TTV model (since the time of transit was not observable from our available facilities).
\clearpage
\begin{table*}
\footnotesize
\caption{Observation Log}
\label{tab:ground}
\centering
\begin{tabular}{c l l c c c c c c c}
\hline\hline
\noalign{\smallskip}
\multirow{2}{*}{TOI-216} & Date & \multirow{2}{*}{Telescope}\tablenotemark{$\dag$} & \multirow{2}{*}{Filter} & ExpT & Exp & Dur. & Transit & Ap. Radius & FWHM\\
& (UTC) & & & (sec) & (N) & (min) & expected coverage & (arcsec) & (arcsec) \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\multirow{6}{*}{b\xspace}
& 2018-11-22${^\ddag}$ & LCO-SSO-0.4 & i$^\prime$ & 90 & 54 & 100 & Ingress+30$\%$ & 8.5 & 7.5 \\
& 2018-12-09${^\ddag}$ & Myers-T50 & Lum & 60 & 200 & 240 & full & 8.3 & 4.6 \\
& 2018-12-26${^\ddag}$ & LCO-SSO-1.0 & i$^\prime$ & 30 & 85 & 99 & Ingress+25$\%$ & 7.0 & 2.8 \\
& 2019-01-29${^\ddag}$ & LCO-SAAO-1.0 & r$^\prime$ & 100 & 97 & 225 & Full & 9.3 & 2.4 \\
& 2019-01-29${^\ddag}$ & LCO-SAAO-1.0 & i$^\prime$ & 25 & 181 & 198 & Full & 5.8 & 2.2 \\
& 2019-02-15 & LCO-SSO-1.0 & Zs & 60 & 160 & 236 & Out-of-Transit & 4.7 & 2.0 \\
\hline
\noalign{\smallskip}
\multirow{7}{*}{c\xspace}
& 2018-12-16 & LCO-SAAO-1.0 & i$^\prime$ & 90 & 75 & 180 & Egress+60$\%$ & 5.8 & 2.5 \\
& 2018-12-16 & LCO-SAAO-1.0 & i$^\prime$ & 39 & 331 & 450 & Full & 5.8 & 2.1 \\
& 2019-01-20 & Hazelwood-0.3 & g$^\prime$ & 240 & 101 & 449 & Egress+70$\%$ & 5.5 & 3.2 \\
& 2019-02-23 & LCO-SAAO-1.0 & Zs & 60 & 148 & 212 & Out-of-Transit & 6.2 & 2.5 \\
& 2019-02-24 & LCO-CTIO-1.0 & Zs & 60 & 150 & 213 & In-Transit & 6.2 & 2.5 \\
& 2019-02-24 & El Sauce-0.36 & Rc & 30 & 514 & 303 & Egress+90$\%$ & 5.9 & 3.7 \\
& 2019-02-24 & LCO-SSO-1.0 & Zs & 60 & 81 & 117 & Out-of-Transit & 6.2 & 2.5 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\end{tabular}
\tablenotetext{$\dag$}{Telescopes: \\
LCO-CTIO-1.0: Las Cumbres Observatory - Cerro Tololo Astronomical Observatory (1.0 m) \\
LCO-SSO-1.0: Las Cumbres Observatory - Siding Spring (1.0 m) \\
LCO-SAAO-1.0: Las Cumbres Observatory - South African Astronomical Observatory (1.0 m) \\
LCO-SSO-0.4: Las Cumbres Observatory - Siding Spring (0.4 m) \\
Myers-T50: Siding Spring Observatory - T50 (0.43 m)\\
Hazelwood-0.3: Stockdale Private Observatory - Victoria, Australia (0.32 m) \\
El Sauce-0.36: El Sauce Private Observatory - El Sauce, Chile (0.36 m) \\
}
\tablenotetext{$\ddag$}{Observations did not detect a transit event because they were scheduled using the initial public \emph{TESS}\xspace linear ephemeris. The TTV offset from the linear ephemeris is now known to be larger than the time coverage of the observations.}
\end{table*}
\subsection{Light curve fits}
We fit the transit light curves (Fig.~\ref{fig:transits}) using the TAP software \citep{gaza12}, which implements Markov Chain Monte Carlo using the \citet{mand02} transit model and the \citet{cart09} wavelet likelihood function, with the modifications described in \citet{daws14}. The results are summarized in Table~\ref{tab:216}. We use the presearch data conditioned (PDC) flux, which is corrected for systematic (e.g., instrumental) trends using cotrending basis vectors \citep{smit12,stum14}; the \citet{cart09} wavelet likelihood function (which assumes frequency$^{-1}$ noise) with free parameters for the amplitude of the red and white noise; and a linear trend fit simultaneously to each transit light curve segment with other transit parameters. We assign each instrument (\emph{TESS}\xspace, Hazelwood, LCO, El Sauce) its own set of limb darkening parameters because of the different wavebands. We use different noise parameters for \emph{TESS}\xspace, Hazelwood, LCO, and El Sauce. We adopt uniform priors on the planet-to-star radius ratio ($R_{p}/R_{\star}$), the log of the light curve stellar density $\rho_{\rm circ}$ (i.e., equivalent to the light curve parameter $d/R_\star$, where $d$ is the planet-star separation, converted to stellar density using the planet's orbital period and assuming a circular orbit), the impact parameter $b$ (which can be either negative or positive; we report $|b|$), the mid transit time, the limb darkening coefficients $q_1$ and $q_2$ \citep{kipp13}, and the slope and intercept of each transit segment's linear trend. For the Hazelwood, LCO, and El Sauce observations, we fit a linear trend to airmass instead of time.
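For reference, the conversion from $d/R_\star$ to $\rho_{\rm circ}$ is the standard one obtained from Kepler's third law for a circular orbit,
\[
\rho_{\rm circ} = \frac{3\pi}{G P^2}\left(\frac{d}{R_\star}\right)^{3},
\]
so a mismatch between $\rho_{\rm circ}$ and the true stellar density carries information about the orbital eccentricity and argument of periapsis (Section~\ref{sec:arch}).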
\begin{deluxetable*}{rrl}
\tabletypesize{\footnotesize}
\tablecaption{Planet Parameters for TOI-216b\xspace and TOI-216c\xspace Derived from the Light Curves \label{tab:216}}
\tablewidth{0pt}
\tablehead{
\colhead{Parameter} & \colhead{Value\tablenotemark{a}}}
\startdata
\hline
TOI-216b\xspace\\
Planet-to-star radius ratio, $R_{p}/R_{\star}$ &0.11 &$^{+0.04}_{-0.02}$ \\
Planet radius, $R_p$ [$R_\oplus$] & 8.6 & $^{+2.9}_{-1.9}$\\
Light curve stellar density, $\rho_{\rm circ}$ [$\rho_\odot$] &1.13 &$^{+0.29}_{-0.19}$ \\
$a/R_\star$\tablenotemark{b} &29.1&$^{+2.3}_{-1.8}$\\
Impact parameter, $|b|$ & 0.99 &$^{+0.05}_{-0.04}$\\
Sky-plane inclination, $i_{\rm sky}$ $[^\circ]$& 88.0&$^{+0.2}_{-0.2}$\\
Mid-transit times & 1325.328 &$^{+0.003}_{-0.004}$\\
&1342.431 &$^{+0.003}_{-0.003}$\\
&1359.539 &$^{+0.003}_{-0.003}$\\
&1376.631 &$^{+0.003}_{-0.003}$\\
&1393.723 &$^{+0.003}_{-0.003}$\\
&1427.879 &$^{+0.003}_{-0.003}$\\
&1444.958 &$^{+0.003}_{-0.003}$\\
&1462.031 &$^{+0.003}_{-0.003}$\\
&1479.094 &$^{+0.003}_{-0.003}$\\
&1496.155 &$^{+0.003}_{-0.003}$\\
&1513.225 &$^{+0.003}_{-0.003}$\\
\hline
TOI-216c\xspace \\
Planet-to-star radius ratio, $R_{p}/R_{\star}$ &0.1236 &$^{+0.0008}_{-0.0008}$ \\
Planet radius, $R_p$ [$R_\oplus$] & 10.2 & $^{+0.2}_{-0.2}$\\
Light curves stellar density, $\rho_{\rm circ}$ [$\rho_\odot$] &1.75 &$^{+0.04}_{-0.06}$ \\
$a/R_\star$\tablenotemark{b} &53.8&$^{+0.4}_{-0.6}$\\
Impact parameter, $|b|$ &0.11 &$^{+0.09}_{-0.00}$ \\
Sky-plane inclination, $i_{\rm sky}$ $[^\circ]$ & 89.89&$^{+0.08}_{-0.10}$\\
Mid-transit times & 1331.2851 &$^{+0.0007}_{-0.0007}$\\
&1365.8245 &$^{+0.0007}_{-0.0007}$\\
&1400.3686 &$^{+0.0007}_{-0.0007}$\\
&1434.9227&$^{+0.0007}_{-0.0007}$\\
&1469.4773&$^{+0.0007}_{-0.0007}$\\
LCO&1469.4781&$^{+0.0004}_{-0.0004}$\\
Hazelwood&1504.037 &$^{+0.002}_{-0.002}$\\
El Sauce&1538.5939 &$^{+0.0015}_{-0.0015}$\\
\hline
Minimum mutual inclination $[^\circ]$ & $1.8$ &$^{+0.2}_{-0.2}$\\
\enddata
\tablenotetext{a}{As a summary statistic we report the median and 68.3\% confidence interval of the posterior distribution.}
\tablenotetext{b}{If the planet's orbit is not circular, this corresponds to the average planet-star-separation during transit divided by the stellar radius.}
\end{deluxetable*}
\begin{deluxetable*}{rrlrlrlrl}
\tabletypesize{\footnotesize}
\tablecaption{Light Curve Parameters for the TOI-216\xspace system\label{tab:216star}}
\tablewidth{0pt}
\tablehead{
\colhead{Parameter\tablenotemark{a}} & \colhead{\emph{TESS}\xspace}&&\colhead{El Sauce}&&\colhead{LCO}&&\colhead{Hazelwood}}
\startdata
Limb darkening coefficient, $q_{1}$ & $0.33$& $^{+0.12}_{-0.09}$ &$0.5$& $^{+0.2}_{-0.2}$&$0.52$& $^{+0.15}_{-0.12}$ &$0.50$& $^{+0.23}_{-0.15}$ \\
Limb darkening coefficient, $q_{2}$ & $0.32$&$^{+0.14}_{-0.11}$&$0.30$&$^{+0.26}_{-0.16}$&$0.21$&$^{+0.08}_{-0.08}$& $0.7$&$^{+0.2}_{-0.2}$ \\
Red noise, $\sigma_r$ [ppm] & 3000 & $^{+800}_{-900}$ &10000 & $^{+4000}_{-4000}$&1500 & $^{+1600}_{-1000}$ & 4000 & $^{+3000}_{-3000}$ \\
White noise, $\sigma_w$ [ppm] & 2367&$^{+17}_{-17}$ & 3140&$^{+80}_{-80}$ & 1060&$^{+40}_{-40}$ & 2450&$^{+190}_{-190}$\\
\enddata
\tablenotetext{a}{As a summary statistic we report the mode and 68.3\% confidence interval of the posterior distribution.}
\end{deluxetable*}
\begin{figure*}
\begin{center}
\includegraphics{transits.eps}
\includegraphics{transits2.eps}
\caption{
\label{fig:transits}
Detrended light curves, color coded by transit epoch, spaced with arbitrary vertical offsets, and with a model light curve overplotted. The light curves are phased based on a constant orbital period linear ephemeris to show the TTVs. }
\end{center}
\end{figure*}
The inner planet candidate's transits are grazing, so the planet-to-star radius ratio $R_p/R_\star$ is not well-constrained. We impose a uniform prior from 0 to 0.17, with the upper limit corresponding to a radius of 0.13 solar radii. Fig.~\ref{fig:rdr} shows the covariance of $R_p/R_\star$ with the light curve stellar density $\rho_{\rm circ}$ and the impact parameter $b$. The larger the planet, the larger the impact parameter required to match the transit depth. The larger the impact parameter, the shorter the transit chord and the lower the light curve stellar density (which correlates with the transit speed) required to match the transit duration. Through its effect on $|b|$ and $\rho_{\rm circ}$, the upper limit on $R_p/R_\star$ affects our inference of the inner planet's eccentricity and the mutual inclination between the planets; in Section~\ref{sec:arch}, we will assess the sensitivity to this upper limit.
\begin{figure}
\begin{center}
\includegraphics{rdr.eps}
\caption{
\label{fig:rdr}
Draws from the posterior distribution of correlated parameters $\rho_{\rm circ}$, $R_p/R_\star$, and $|b|$ for TOI-216b\xspace, which has grazing transits. Larger $R_p/R_\star$ correspond to larger $|b|$ and smaller $\rho_{\rm circ}$.
}
\end{center}
\end{figure}
\subsection{Search for additional transit signals}
\label{subsec:search}
We ran the box-fitting least squares (BLS) algorithm on the residuals of the light curve after removing the transit signals of TOI-216b\xspace and TOI-216c\xspace. We used a duration of 2.5 hours, which corresponds to an impact parameter equal to planet c\xspace's at an orbital period of 3~days. We did not find any signal with signal-to-noise larger than 7.3. Using the per-point rms precision of 0.00233, this limit rules out any planets interior to TOI-216b\xspace with a radius larger than 2.18 $R_{\oplus}$ or planets with periods less than 3~days and radii larger than 1.17 $R_{\oplus}$. With future \emph{TESS}\xspace data from 12 sectors in total, the detection threshold for all planets interior to TOI-216b\xspace will be lowered to 1.13 $R_{\oplus}$.
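A minimal version of such a search can be sketched with the \texttt{BoxLeastSquares} implementation in \texttt{astropy}; our actual search used the pipeline residuals, whereas the time series below is a random placeholder with the quoted per-point rms.
\begin{verbatim}
import numpy as np
from astropy.timeseries import BoxLeastSquares

rng = np.random.default_rng(0)
t = np.arange(0.0, 27.4, 2.0 / 60.0 / 24.0)         # one sector, 2-min cadence
flux = 1.0 + 0.00233 * rng.standard_normal(t.size)  # stand-in for residuals

bls = BoxLeastSquares(t, flux)
periods = np.linspace(0.5, 17.0, 5000)              # interior to planet b
power = bls.power(periods, 2.5 / 24.0)              # 2.5 hr trial duration
best = np.argmax(power.power)
print(power.period[best], power.depth[best], power.depth_snr[best])
\end{verbatim}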
\section{Validation}
\label{sec:valid}
Here we seek to validate the planet candidates by ruling out false positive scenarios using follow up observations and dynamical arguments. In Section~\ref{subsec:RV}, we consider and rule out unblended astrophysical false positive scenarios using radial velocity (RV) measurements. In Section~\ref{subsec:phot}, we consider and rule out most blended false positive scenarios using photometry. We summarize the results in Section~\ref{subsec:summ}.
\subsection{Low precision radial-velocity follow up to rule out stellar companions to TOI-216\xspace}
\label{subsec:RV}
One or both transiting objects could be brown dwarf or stellar companions to TOI-216\xspace. The following astrophysical false positive scenarios can be tested through radial velocity follow up: TOI-216b\xspace and/or TOI-216c\xspace is a brown dwarf; TOI-216b\xspace (which has a poorly constrained transit depth) is an unblended stellar companion; or TOI-216b\xspace and/or TOI-216c\xspace is a blended stellar companion to TOI-216\xspace with a background or bound star diluting the transit depth.
If both objects are transiting TOI-216\xspace, but one or both is of brown dwarf or stellar mass, the system would be unstable if the objects are not in resonance or, if in resonance, the mass of the secondary would cause large TTVs incompatible with those observed (Section~\ref{sec:ttvs}). Furthermore, the brown dwarf scenario is less likely a priori: \citet{grie17} find that the occurrence rate of brown dwarfs with orbital periods less than 300~days is about 0.56\%, compared to 4.0\% for planets $>0.3 M_{\rm Jup}$ \citep{cumm08}.
We use radial velocity (RV) measurements to put mass limits on any companion to TOI-216\xspace. The spectra described in Section~\ref{sec:star} show no large radial velocity variations, with the measurements exhibiting a scatter of $470\,\mathrm{m\,s}^{-1}$. From these velocities, we derive $3\sigma$ upper limits of $\sim 18\,M_J$ on the mass of the inner object and $\sim 25\,M_J$ on the mass of the outer object. These upper limits rule out any scenario involving a stellar companion to TOI-216\xspace. The constraints also support our limit on $R_p/R_\star$ for the light curve fits for TOI-216b\xspace (Section~\ref{sec:lc}) corresponding to 1.3 $R_J$, because radii only start to increase above $\sim 1 R_J$ at around $60 M_J$ (e.g., \citealt{hatz15}, Fig.~2). The scenario in which one or both objects are brown dwarf companions is not ruled out by the RVs but will be ruled out by the TTVs in Section~\ref{sec:ttvs}.
\subsection{Photometry rules out most blended false~positive~scenarios}
\label{subsec:phot}
Analysis of systems with multiple transiting planet candidates from \emph{Kepler}\xspace has shown that the transit-like events have a higher probability of being caused by bona fide planets \cite[e.g.,][]{Lissauer2012} compared to single-planet candidate systems, lending credibility to the planetary nature of the transit-like events associated with TOI-216. However, the pixel scale of \emph{TESS}\xspace is larger than \emph{Kepler}\xspace's (${21\arcsec}$ for \emph{TESS}\xspace vs. ${4\arcsec}$ for \emph{Kepler}\xspace) and the point spread function of \emph{TESS}\xspace could be as large as ${1\arcmin}$, both of which increase the probability of contamination of the \emph{TESS}\xspace aperture by a nearby eclipsing binary. For example, a deep eclipse in a nearby faint eclipsing binary might cause a shallow transit-like detection by \emph{TESS}\xspace on the target star due to the dilutive effect of blending in the \emph{TESS}\xspace aperture.
A scenario in which both TOI-216b\xspace and TOI-216c\xspace are orbiting the same background binary is ruled out by the TTVs (Section~\ref{sec:ttvs}). One object could be a planet-mass companion to TOI-216\xspace\ and the other a background binary. Alternatively, both objects could be background binaries.
From a single sector of \emph{TESS}\xspace data, the one standard deviation centroid measurement uncertainty is $2\farcs58$ for TOI-216b\xspace and $3\farcs3$ for TOI-216c\xspace. TOI-216c\xspace would need to fully eclipse a star with Tmag 15.85 to cause the blend, and TOI-216b\xspace would need to fully eclipse a star with Tmag 17.5 to cause the blend. The brightest \emph{Gaia} DR2 object within $40\arcsec$ has \emph{Gaia} $rp$ magnitude of 16.8 and is $3\farcs768$ away and therefore is marginally compatible with a blend scenario for TOI-216b\xspace. The second brightest \emph{Gaia} object within $40\arcsec$ has \emph{Gaia} $rp$ magnitude of 17.94, which cannot cause either of the transit signals we see.
We use higher spatial resolution ground-based time-series imaging to attempt to detect the transit-like events on target and/or to identify or rule out potential nearby eclipsing binaries out to ${2.5\arcmin}$ from TOI-216. The higher spatial resolution and smaller point spread function of the ground-based observations facilitate the use of much smaller photometric apertures compared to the \emph{TESS}\xspace aperture, isolating a possible transit or eclipse signal to within a few arcseconds of the center of the follow-up aperture. From the ground, follow-up apertures exclude the flux of all known neighboring stars, except the two $\sim4\arcsec$ \emph{Gaia} DR2 neighbors. We collected observations of TOI-216c\xspace in both g$^\prime$ and i$^\prime$ filters (Section~\ref{sec:lc}) and found no obvious filter-dependent transit depth, which strengthens the case for a planetary system.
\subsection{Validation summary}
\label{subsec:summ}
In summary, we can rule out all astrophysical false positive scenarios with a couple of exceptions. First, TOI-216b\xspace could be a blended binary orbiting the 16.8 $rp$ magnitude \emph{Gaia} DR2 object, in which case it would need a 53\% transit depth. Second, TOI-216b\xspace and/or TOI-216c\xspace could be a binary orbiting a star located at the same sky position as TOI-216\xspace, creating a blend not resolved by \emph{Gaia}. However, we will show in Section~\ref{sec:ttvs} that the two transiting objects are fully compatible with causing each other's TTVs and that the TTVs have concavity in opposite directions (i.e., one planet loses orbital energy as the other gains). These false positive scenarios would require the extremely unlikely configuration in which both objects happen to have an orbital period ratio near 2:1, happen to have non-transiting companions in or near orbital resonance causing their TTVs, and the TTVs happen to have opposite signs. Therefore we consider the system to be validated.
\section{Orbital Architecture}
\label{sec:arch}
Here we explore the orbital architecture of the TOI-216\xspace system through analysis of the transit timing variations (TTVs), transit shape and duration, and limits on additional transiting planets.
\subsection{TTV overview}
Both candidates exhibit significant deviations from a linear transit time ephemeris (Fig.~\ref{fig:oc}), evidence for their mutual gravitational perturbations. These transit timing variations (TTVs) occur on two timescales. The first is the synodic timescale, $\tau_{\rm syn} = P_c\xspace/(P_c\xspace/P_b\xspace-1)$, which is the interval of time between successive planetary conjunctions.
The second -- for planets near the 2:1 resonance -- is the super-period\footnote{The super-period may be longer or shorter for planets in orbital resonance experiencing fast precession.}, $\tau_{\rm s-p}\approx|P_c\xspace/(2-P_c\xspace/P_b\xspace)|$, the timescale over which the planets have their conjunctions at the same longitude; $\tau_{\rm s-p}$ depends on the proximity of the ratio of the orbital periods to 2.
The synodic TTV signal, known as the chopping effect because it produces a saw-tooth like pattern (see \citealt{deck15} and references therein), depends on the perturbing planet's mass, which determines the strength of the kick at conjunction. To first order, the chopping effect does not depend on eccentricity.
The super-period TTV signal, known as the near-resonant effect (e.g., \citealt{lith12}), has a sinusoidal shape. The near-resonant effect generates a forced eccentricity for each planet, and the free eccentricity is an extra component that contributes to the total eccentricity. The near-resonant TTV amplitude depends on the perturbing planet's mass and the free eccentricity of the transiting and perturbing planets. To first order, the ratio of near-resonant signal amplitudes depends only on the planets' mass ratio (e.g., \citealt{lith12}'s Eqn. 14--15). Therefore TTVs covering a significant fraction of the super-period can provide a good estimate of the mass ratio.
For planets near resonance, the amplitude of the near-resonant effect is typically much larger than the amplitude of the chopping effect. Measuring the chopping and near-resonant signals for both transiting planets -- assuming there are no additional planets in the system contributing significantly to the TTVs -- would allow us to uniquely constrain their masses and eccentricities.
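For concreteness, both timescales can be evaluated from the approximate periods implied by the mid-transit times in Table~\ref{tab:216} ($P_b\xspace \approx 17.09$~days, $P_c\xspace \approx 34.55$~days); the snippet below is an illustration, not part of our fitting code.
\begin{verbatim}
P_b, P_c = 17.09, 34.55               # days, from the mid-transit times

tau_syn = P_c / (P_c / P_b - 1.0)     # synodic timescale, ~34 days
tau_sp = abs(P_c / (2.0 - P_c / P_b)) # 2:1 super-period, ~1600 days
print(tau_syn, tau_sp)
\end{verbatim}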
\begin{figure}
\begin{center}
\includegraphics{oc_new.eps}
\caption{
\label{fig:oc}
Observed mid-transit times (diamonds) with subtracted best-fit linear ephemeris for TOI-216b\xspace (top) and TOI-216c\xspace (bottom), with the best fit model overplotted (asterisks, dotted line).
}
\end{center}
\end{figure}
\subsection{Evidence for free eccentricity}
The phasing of the TTVs allows us to diagnose that at least one planet likely has significant free eccentricity. In Fig.~\ref{fig:oc_phase} we plot the TTVs as a function of phase. The top panel shows the TTVs of the inner planet phased with $2(\lambda_b\xspace-\lambda_c\xspace)$, where $\lambda$ is the mean longitude (Section \ref{subsec:fits}). If the free eccentricities are zero, the TTVs should follow a sinusoid with no phase shift \citep{deck15}. The non-phase shifted sinusoid is inconsistent with the observed TTVs of TOI-216b\xspace, so we infer at least one planet has free eccentricity. [For the outer planet, no phase shift in $\lambda_b\xspace-\lambda_c\xspace$ (Fig.~\ref{fig:oc_phase}, row~2) is necessary.] We also follow \citet{lith12} and plot the TTVs phased to $2\lambda_c\xspace-\lambda_b\xspace$ (Fig.~\ref{fig:oc_phase}, row~3; equivalent to rows~1--2 because transit times are sampled at the planets' orbital period) and find that again a phase shift is necessary to match the inner planet's observed TTVs, indicating free eccentricity for one or both planets.
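The test can be illustrated with a toy comparison between a sinusoid with the phase fixed to zero and one with a free phase offset; in our actual analysis the orbital period and first transit epoch are also free parameters, which this sketch omits.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

phase = np.linspace(0.0, 4.0 * np.pi, 11)   # 2(lambda_b - lambda_c) at
ttv = 0.01 * np.sin(phase + 0.8)            # each transit (synthetic TTVs)

def no_shift(phi, A):
    return A * np.sin(phi)

def with_shift(phi, A, phi0):
    return A * np.sin(phi + phi0)

pA, _ = curve_fit(no_shift, phase, ttv, p0=[0.01])
pB, _ = curve_fit(with_shift, phase, ttv, p0=[0.01, 0.0])
chi2 = lambda f, p: np.sum((ttv - f(phase, *p)) ** 2)
print(chi2(no_shift, pA), chi2(with_shift, pB))  # shifted fit wins here
\end{verbatim}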
\begin{figure*}
\begin{center}
\includegraphics{oc_phase_subtract_new.eps}
\includegraphics{oc_phase_subtract2_new.eps}
\caption{
\label{fig:oc_phase} Evidence for free eccentricity from TTVs plotted as function of phase, where $\lambda_i$ is the mean longitude of the $i^{\rm th}$ planet's orbit. Top row: Inner planet's TTVs. We plot the best fit non-phase shifted sinusoid as a dotted line that goes with the red observed points, and the best fit phase-shifted sinusoid as a dashed line that goes with the blue points. Note that the red and blue points are different because orbital period and first transit epoch are also free parameters. A non-phase shifted sinusoid is inconsistent with the observed TTVs of TOI-216b\xspace, so we infer the planets have free eccentricity. row~2: Outer planet's TTVs. No phase shift in $\lambda_b\xspace-\lambda_c\xspace$ is necessary. The linestyle corresponds to the same linear ephemeris as used in row~1. The orange (purple) points use the same linear ephemeris as the red (blue) points in row~1. Bottom row: TTVs phased to $2\lambda_c\xspace-\lambda_b\xspace$. For the inner planet, a phase shift is necessary to match the inner planet's observed TTVs (i.e., the red points are not well-fit by the model).
}
\end{center}
\end{figure*}
The chopping signal would appear as additional harmonics, i.e., $\lambda_b\xspace-\lambda_c\xspace$, $3(\lambda_b\xspace-\lambda_c\xspace)$, etc. for TOI-216b\xspace and $2(\lambda_b\xspace-\lambda_c\xspace)$ and $3(\lambda_b\xspace-\lambda_c\xspace)$, etc. for TOI-216c\xspace. The fact that a sinusoid goes through the data points in Fig.~\ref{fig:oc} without these additional harmonics gives us a sense that the chopping signal will not be easily measured in this dataset. There will be a degeneracy between planet masses and free eccentricity.
\subsection{A large range of best-fit planet masses}
\label{subsec:fits}
We fit the transit times using our N-body TTV integrator model \citep{daws14}. Our model contains five parameters for each planet: the mass $M$, orbital period $P$, mean longitude at epoch $\lambda$, eccentricity $e$, and argument of periapse $\omega$. For each planet, we fix the sky plane inclination $i_{\rm sky}$ to the value in Table~\ref{tab:216} and set the longitude of ascending node on the sky to $\Omega_{\rm sky}=0$. We use the conventional coordinate system where the $X-Y$ plane is the sky plane and the $Z$ axis points toward the observer. See \citet{murr10} for a helpful pedagogical description of the orbital elements.
To explore the degeneracy between mass and eccentricity, we use the Levenberg-Marquardt algorithm implemented in IDL {\tt mpfit} \citep{mark09} to minimize the $\chi^2$ on a grid of $(M_c\xspace, e_b\xspace)$. We report the total $\chi^2$ for eighteen transit times and ten free parameters, i.e., eight degrees of freedom. The resulting contour plot is shown in Fig.~\ref{fig:contour}. The lowest $\chi^2$ fits, i.e., those with $13 < \chi^2 < 18$, are possible for a range of outer planet masses ($M_c\xspace<3.0\,M_{\rm Jup}$). However, for small outer planet masses, a large range of inner planet eccentricities allows for a good fit, whereas a particular value of the eccentricity ($e_b\xspace \sim 0.13$) is necessary for larger planet masses. (See also discussions by \citealt{hadd17} and \citealt{miga18}.) Because there is so much more ``real estate'' in parameter space at low outer planet masses, a Markov Chain Monte Carlo (MCMC) will identify this type of solution as most probable. However, if we have a priori reason to suspect the outer planet is massive -- like a large transit depth -- and/or that free eccentricities are low, we could be misled.
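The structure of this grid search is simple to sketch. The toy below substitutes a near-resonant sinusoid for our N-body TTV model (which instead integrates the equations of motion), holds $(M_c\xspace, e_b\xspace)$ fixed at each grid point, and lets Levenberg-Marquardt optimize the remaining parameters; all numbers are placeholders.
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

def ttv_model(t, m_c, e_b, amp, phase):
    # toy stand-in: amplitude grows with companion mass and eccentricity
    return amp * (m_c + 0.5 * e_b) * np.sin(2.0 * np.pi * t / 1600.0 + phase)

t_obs = np.linspace(0.0, 200.0, 18)
rng = np.random.default_rng(1)
y_obs = ttv_model(t_obs, 0.5, 0.1, 0.02, 0.3) + 0.002 * rng.standard_normal(18)

m_grid, e_grid = np.linspace(0.05, 3.0, 20), np.linspace(0.0, 0.3, 20)
chi2 = np.empty((m_grid.size, e_grid.size))
for i, m_c in enumerate(m_grid):
    for j, e_b in enumerate(e_grid):
        # (M_c, e_b) held fixed; LM over the remaining free parameters
        fit = least_squares(
            lambda p: (ttv_model(t_obs, m_c, e_b, *p) - y_obs) / 0.002,
            x0=[0.01, 0.0], method="lm")
        chi2[i, j] = np.sum(fit.fun ** 2)
\end{verbatim}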
\begin{figure}
\begin{center}
\includegraphics{evsm_new2.eps}
\caption{
\label{fig:contour}
Contours of $\chi^2$ show degeneracy between the inner planet's (osculating) eccentricity and the outer planet's mass. The best-fit solutions occupy the innermost contour.
}
\end{center}
\end{figure}
Fig.~\ref{fig:others} shows how other parameters correlate with the outer planet's mass $M_c\xspace$. The mass ratio, $M_b\xspace/M_c\xspace$, of the planets is about 0.17 for $M_c\xspace < 0.5 M_{\rm Jup}$ and decreases for larger $M_c\xspace$. Solutions with $M_c\xspace < 0.5 M_{\rm Jup}$ have larger values for the eccentricity of planet c\xspace. (Note that the eccentricity plotted in Fig.~\ref{fig:contour} and \ref{fig:others} is the osculating eccentricity; we will explore how these solutions translate to free and forced eccentricities in Section~\ref{subsec:long}.) The arguments of periapse $\omega_b$ and $\omega_c$ for planets b\xspace and c\xspace also correlate with planet c\xspace's mass.
\begin{figure}
\begin{center}
\includegraphics{allothers.eps}
\caption{
\label{fig:others}
Correlations between parameters in best-fit solutions ($\chi^2 < 18$). Larger outer planet masses correspond to smaller mass ratios ($M_b\xspace/M_c\xspace$) and smaller inner planet eccentricities; outer planet mass maps to particular ranges of the argument of periapse $\omega$.
}
\end{center}
\end{figure}
\subsection{Longterm behavior of best-fit solutions}
\label{subsec:long}
We integrate the $\chi^2 < 18$ solutions for $10^6$~days using {\tt mercury6} \citep{cham96} to assess the longer term behavior (Fig.~\ref{fig:long}). We find that the resonant argument $2\lambda_c\xspace-\lambda_b\xspace-\varpi_b\xspace$ librates for the high $M_c\xspace$ ($M_c\xspace\gtrapprox0.3\,M_{\rm Jup}$) solutions but not for the lower $M_c\xspace$ solutions. Larger $M_c\xspace$ solutions have lower free and forced eccentricities for both planets (Fig.~\ref{fig:long}, rows~2--3). Period ratios $P_c\xspace/P_b\xspace$ are wide of the 2:1 resonance for the higher $M_c\xspace$ solutions. We extend the simulations to 10 Myr and find that all configurations remain stable over that interval.
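A libration check of this kind can also be reproduced with a few lines of \texttt{rebound} (we used {\tt mercury6}); the host mass and the Solution-2-like elements below are assumptions for illustration only.
\begin{verbatim}
import numpy as np
import rebound

# masses in solar units; supplying periods in days makes time = days here
sim = rebound.Simulation()
sim.add(m=0.77)                                     # host star (assumed)
sim.add(m=0.10 * 9.543e-4, P=17.09, e=0.15,
        pomega=np.radians(293), l=0.0)              # planet b (illustrative)
sim.add(m=0.57 * 9.543e-4, P=34.55, e=0.01, l=0.5)  # planet c (illustrative)
sim.move_to_com()

theta = []
for t in np.linspace(0.0, 1.0e6, 2000):             # 10^6 days
    sim.integrate(t)
    b, c = sim.particles[1], sim.particles[2]
    theta.append(2.0 * c.l - b.l - b.pomega)
theta = np.unwrap(np.mod(theta, 2.0 * np.pi))
# a bounded resonant argument indicates libration rather than circulation
print("resonant argument range [rad]:", theta.max() - theta.min())
\end{verbatim}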
\begin{figure}
\begin{center}
\includegraphics{longplot.eps}
\caption{
\label{fig:long}
Long-term ($10^6$~days) behavior of solutions with $\chi^2<11$: free eccentricity (row~1; calculated as the maximum deviation from the median eccentricity), forced eccentricity (row~2; calculated as the median eccentricity); $e_b$ and orbital resonance (row~3); and time-averaged orbital period ratio (row~4).
}
\end{center}
\end{figure}
\subsection{Transit exclusion intervals}
We use ground-based observations in which an ingress or egress for TOI-216b\xspace is excluded (Table~\ref{tab:ground}) to check solutions. Before the \emph{TESS}\xspace Sector 6 data were available, the exclusion interval on the December 26 observation ruled out some solutions. Almost all solutions based on Sectors 1--6 are consistent with no ingress or egress during the intervals in Table~\ref{tab:ground}.
\subsection{Ruling out the lowest-mass solutions with the ``photoeccentric'' effect}
The light curve stellar densities (Table~\ref{tab:216}) are similar to the true stellar density (Table~\ref{tab:star}), consistent with the planets being on nearly circular orbits. We follow \citet{daws12} to estimate the candidates' eccentricities from the light curve using the ``photoeccentric effect,'' but instead of applying the approximations appropriate for a grazing transit, we use the full Eqn. 15 from \citet{kipp10}. We find eccentricities that could be low for both candidates; their modes and 68.3\% confidence intervals are: $e_b\xspace=0.20_{-0.06}^{+0.48}, e_c\xspace = 0.025_{-0.004}^{+0.490}$ (Fig.~\ref{fig:ecc}). The medians and their 68.3\% confidence intervals are $e_b\xspace=0.30_{-0.16}^{+0.38}, e_c\xspace = 0.10_{-0.08}^{+0.41}$. High eccentricities are not ruled out, e.g., the posterior probability of $e>0.5$ is 28\% for TOI-216b\xspace and 16\% for TOI-216c\xspace. The posterior probability of an eccentricity less than 0.01 is 0.7\% for TOI-216b\xspace and 8\% for TOI-216c\xspace.
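The underlying inference is a reweighting of the eccentricity prior by how well each $(e, \omega)$ pair reproduces $g \equiv (\rho_{\rm circ}/\rho_\star)^{1/3}$, where $g(e,\omega) = (1 + e\sin\omega)/\sqrt{1-e^2}$. A minimal importance-sampling sketch, with placeholder values for $g$ and its uncertainty:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
N = 200000
e = rng.uniform(0.0, 0.9, N)              # uniform eccentricity prior
w = rng.uniform(0.0, 2.0 * np.pi, N)      # argument of periapse prior

g_model = (1.0 + e * np.sin(w)) / np.sqrt(1.0 - e ** 2)
g_obs, g_err = 1.05, 0.10                 # placeholders, not our values
wts = np.exp(-0.5 * ((g_model - g_obs) / g_err) ** 2)
post = rng.choice(e, size=5000, p=wts / wts.sum())
print(np.median(post), np.percentile(post, [15.85, 84.15]))
\end{verbatim}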
\begin{figure}
\begin{center}
\includegraphics{eccom_in_new.eps}\\
\includegraphics{eccom_out_new.eps}
\caption{
\label{fig:ecc}
Joint posterior, $\omega$ vs. $e$, for TOI-216b\xspace (top) and TOI-216c\xspace (bottom). The black (gray, light gray) contours represent the \{68.3,95,99\}\% probability density levels (i.e., 68\% of the posterior is contained within the black contour). Overplotted as a black and white dotted line is a histogram of the eccentricity posterior probability distribution marginalized over $\omega$. The transit shapes and durations are consistent with low eccentricity orbits, but moderately eccentric orbits are not ruled out for special ellipse orientations that result in similar planet-star separations to the circular case.
}
\end{center}
\end{figure}
The constraints on the eccentricity from the light curve allow us to rule out the lowest-mass solutions (Fig.~\ref{fig:exclude}). These solutions -- which correspond to an eccentric TOI-216c\xspace with its apoapse near our line of sight -- would produce a transit duration that is too long. Some higher-mass solutions that correspond to an eccentric TOI-216c\xspace with its periapse near our line of sight are also ruled out.
\begin{figure}
\begin{center}
\includegraphics{exclude_all_new2.eps}
\caption{
\label{fig:exclude}
Constraints on $g=\left(\rho_{\rm circ}/\rho_\star\right)^{1/3}$ from the light curve rule out a subset of solutions (red; inconsistent with $g$ outside the 2.5--97.5 percentile). Solutions with $\chi^2 < 18$ are plotted.
}
\end{center}
\end{figure}
\subsection{MCMC fits}
Following \citet{daws14}, we derive posteriors for the parameters using Markov Chain Monte Carlo with the Metropolis-Hastings algorithm. We incorporate the transit exclusion intervals and light curve stellar density (i.e., combining the $\rho_{\rm circ}$ posterior from the light curve and $\rho_\star$ posterior from the Dartmouth models) into the MCMC. Instead of including the orbital period and mean longitude at epoch as parameters in the MCMC, we optimize them at each jump using the Levenberg-Marquardt algorithm. We visually inspect each parameter for convergence.
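For readers unfamiliar with the sampler, a bare-bones Metropolis-Hastings loop has the following shape; the log posterior here is a stand-in for our actual one (the TTV $\chi^2$, transit exclusion intervals, the stellar density term, and the priors).
\begin{verbatim}
import numpy as np

def log_post(params):
    return -0.5 * np.sum(params ** 2)   # placeholder log posterior

rng = np.random.default_rng(3)
x = np.zeros(8)                         # e.g., (M, e, omega, ...) per planet
lp = log_post(x)
chain = []
for _ in range(20000):
    prop = x + 0.1 * rng.standard_normal(8)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept rule
        x, lp = prop, lp_prop
    chain.append(x.copy())
chain = np.array(chain)
\end{verbatim}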
We perform two fits with different priors to explore both ends of the parameter degeneracy
evident in the grid of outer mass vs. inner eccentricity (Fig.~\ref{fig:contour}). The first solution (Table~\ref{tab:ttv216}, column~1) imposes uniform priors on eccentricities and log uniform priors on mass (i.e., priors that are uniform in log space); the second (Table~\ref{tab:ttv216}, column~2) imposes uniform priors on mass and sets $e_c=0$ (which we found to yield indistinguishable results from an eccentricity prior that is uniform in log space). All other fitted parameters (orbital period, mean longitude, argument of periapse) have uniform priors. The uniform prior on mass favors the higher-mass solutions seen in Figs.~\ref{fig:contour}--\ref{fig:long}, whereas the log uniform prior on mass favors the lower-mass solutions.
Because the results are so prior-dependent (every parameter in Table~\ref{tab:ttv216} differs significantly between the two solutions except TOI-216b\xspace's eccentricity of $\sim 0.2$), we do not currently recommend adopting either solution. Instead, the MCMC approach is a way to formally separate the two types of solutions seen in the grid search and to incorporate the light curve stellar densities and transit exclusion windows into the likelihood function.
\begin{deluxetable}{rrlll}
\tablecaption{Planet Parameters for TOI-216b\xspace and TOI-216c\xspace Derived from TTVs \label{tab:ttv216} }
\tablehead{
\colhead{Parameter} & \colhead{Soln 1\tablenotemark{a,b}}&
& \colhead{Soln 2\tablenotemark{a,c}}}
\startdata
$M_b\xspace$ ($M_{\rm Jup}$) & 0.05 &$^{+0.023}_{-0.03}$ & 0.10 &$^{+0.03}_{-0.02}$\\
$M_b\xspace/M_c\xspace$ & 0.149 & $^{+0.011}_{-0.012}$& 0.133 & $^{+0.010}_{-0.010}$\\
$e_b\xspace$ & 0.214 & $^{+0.154}_{-0.048}$ & 0.15 & $^{+0.04}_{-0.03}$ \\
$\varpi_b\xspace$ (deg.) & 240&$^{+40}_{-30}$& 293&$^{+7}_{-10}$\\
$M_c\xspace$ ($M_{\rm Jup}$) & 0.26 &$^{+0.14}_{-0.17}$& 0.57 &$^{+0.21}_{-0.16}$\\
$e_c\xspace$ & 0.06 & $^{+0.11}_{-0.03}$& \\
$\varpi_c\xspace$ (deg.) &-30&$^{+30}_{-60}$\\
$\Delta \varpi$ (deg.)& -80&$^{+30}_{-30}$ \\
$2\lambda_c\xspace - \lambda_b\xspace -\varpi_c\xspace$ (deg.)&-20&$^{+40}_{-30}$\\
$2\lambda_c\xspace - \lambda_b\xspace -\varpi_b\xspace$ (deg.)&60&$^{+11}_{-14}$&41&$^{+7}_{-6}$\\
\enddata
\tablenotetext{a}{As a summary statistic we report the median and 68.3\% confidence interval of the posterior distribution.}
\tablenotetext{b}{Uniform prior on eccentricity and log uniform prior on mass}
\tablenotetext{c}{$e_c=0$ and uniform prior on mass.}
\end{deluxetable}
\subsection{Mass-radius}
\label{sec:ttvs}
We plot the two solutions on a mass-radius plot in Fig.~\ref{fig:rm}. TOI-216c\xspace's radius is comparable to other known exoplanets for both mass solutions. The same is true for TOI-216b\xspace if its radius is close to the lower limit derived from its grazing transits. However, if its radius is somewhat larger than the lower limit, the lower-mass solution would correspond to a very low density.
\begin{figure}
\begin{center}
\includegraphics{rm_toi.eps}
\caption{\label{fig:rm}
Warm (10--200~day orbital period) planets with both mass and radius measurements (exoplanets.eu), including TOI-216 (red, Solution 1; blue, Solution 2).
}
\end{center}
\end{figure}
\subsection{Predictions for future transits}
\label{subsec:predict}
In Table~\ref{tab:future}, we tabulate the predicted times for missed and future transits. For the inner planet, the predictions of the two solutions overlap within one standard deviation for each transit. However, the outer planet's transits differ between the solutions, so the next few sectors of \emph{TESS}\xspace data may help distinguish between them.
\begin{deluxetable}{rrlll}
\tablecaption{Missed and future transit times\label{tab:future}}
\tablehead{
\colhead{Solution 1\tablenotemark{a,b}}&
& \colhead{Solution 2\tablenotemark{a,c}}}
\startdata
TOI-216b\xspace\\
1530.286&$^{+0.006}_{-0.004}$ & 1530.295&$^{+0.011}_{-0.007}$ \\
1547.351&$^{+0.009}_{-0.007}$& 1547.363&$^{+0.013}_{-0.010}$ \\
1564.413&$^{+0.013}_{-0.010}$ & 1564.430&$^{+0.020}_{-0.015}$ \\
1581.479&$^{+0.019}_{-0.014}$ & 1581.50&$^{+0.02}_{-0.02}$ \\
1598.54&$^{+0.03}_{-0.02}$ & 1598.58&$^{+0.03}_{-0.03}$ \\
1615.61&$^{+0.04}_{-0.02}$ & 1615.65&$^{+0.04}_{-0.04}$ \\
TOI-216c\xspace\\
1573.09&$^{+0.03}_{-0.03}$& 1573.16&$^{+0.04}_{-0.03}$ \\
1607.63&$^{+0.04}_{-0.04}$& 1607.71&$^{+0.05}_{-0.04}$ \\
1642.18&$^{+0.04}_{-0.05}$ & 1642.26&$^{+0.05}_{-0.04}$\\
1676.72&$^{+0.05}_{-0.05}$ & 1676.82&$^{+0.06}_{-0.05}$\\
1711.26&$^{+0.05}_{-0.06}$ & 1711.37&$^{+0.07}_{-0.05}$\\
1745.81&$^{+0.06}_{-0.06}$ & 1745.92&$^{+0.07}_{-0.06}$\\
\enddata
\tablenotetext{a}{As a summary statistic we report the median and 68.3\% confidence interval of the posterior distribution.}
\tablenotetext{b}{Uniform prior on eccentricity and log uniform prior on mass}
\tablenotetext{c}{Log uniform prior on eccentricity and uniform prior on mass.}
\end{deluxetable}
\subsection{Mutual inclination}
A larger impact parameter for an inner planet than an outer planet points to at least a small mutual inclination between their orbits. The difference between TOI-216b\xspace's and TOI-216c\xspace's sky-plane inclinations (Table~\ref{tab:216}) corresponds to a mutual inclination of at least $1^\circ.90^{+0.15}_{-0.34}$ (mode; the median is $1^\circ.8^{+0.2}_{-0.3}$). This value is a minimum because we do not know the component of the mutual inclination parallel to the sky plane. Future observations of transit duration variations -- and depth changes for the grazing transit -- may allow for constraints on the full 3D orbital architecture.
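The lower limit follows from differencing the sky-plane inclinations. A toy version of that propagation, approximating the Table~\ref{tab:216} posteriors as symmetric Gaussians (the true posteriors are asymmetric):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
i_b = rng.normal(88.0, 0.2, 100000)   # deg, Gaussian approximation
i_c = rng.normal(89.89, 0.09, 100000)
i_mut_min = np.abs(i_c - i_b)         # ignores the unknown nodal term
print(np.median(i_mut_min))           # ~1.9 deg, a lower limit
\end{verbatim}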
\subsection{Comparison to other work}
While this manuscript was in preparation, we learned of a submitted paper by \cite{kipp19} on this system. We conducted the work here independently. After submitting this manuscript and revising in response to the referee report, we read the \citet{kipp19} study in order to compare our results. Our solutions are generally consistent. We infer a larger range of possible masses and eccentricities. We find a smaller radius for the outer planet due to our different stellar parameters derived from ground based spectroscopy, and a larger range of possible radii and impact parameters for the inner planet. Ground-based transits aided our work by extending the TTV baseline and filling in transit times that were missed by \emph{TESS}\xspace.
\subsection{Summary}
\label{subsec:compare}
From the TTVs alone, we end up with solutions that occupy two qualitatively different parts of parameter space. The first corresponds to a sub-Saturn-mass planet and a Neptune-mass planet with larger free eccentricities, a period ratio near 2.00, and orbits near but not in resonance. The second corresponds to a Jupiter-mass planet accompanied by a sub-Saturn-mass planet with smaller free eccentricities, a period ratio near 2.02, and libration in orbital resonance. Although the masses are not precisely constrained due to the degeneracy with eccentricity, we narrow the range of possible masses sufficiently to consider these candidates now confirmed as planets.
Although we cannot yet rule out the former solution, the latter solution has several appealing features. The period ratio falls outside the observed gap among Kepler multis \citep{fabr14}. The lower free eccentricities and libration of the resonant argument are suggestive of a dissipative process, such as disk migration, capturing the planets into resonance so that we observe them near a 2:1 period ratio. The masses are more typical of the observed radii (Fig.~\ref{fig:rm}).
\section{Discussion}
\label{sec:discuss}
TOI-216 is a system of two known transiting candidates in or near a 2:1 orbital resonance with minute-level constraints on their mid-transit times. Unlike most\footnote{See \citealt{daws14} for an example of a \emph{Kepler}\xspace warm Jupiter with ground-based mid-transit times.} \emph{Kepler}\xspace systems, the 12.393 $V$ magnitude star is sufficiently bright for ground-based follow up to play an important role in supplying additional transits and transit exclusion intervals. From the phases of the TTVs, we identified that the pair exhibits significant free eccentricity, which leads to degeneracy between eccentricities and masses. We ruled out the lowest-mass solutions using the ``photoeccentric'' effect and the highest-mass solutions using transit exclusion intervals from missed ground-based transits. Their mutual inclination may be modest (minimum $1^\circ.90^{+0.15}_{-0.34}$) but the component parallel to the sky plane is unknown. We identified two families of solutions. One solution family corresponds to lower masses (a sub-Saturn-mass planet and a Neptune-mass planet), larger eccentricities, a period ratio near 2, planets near but not in resonance, and puffy radii. The other corresponds to larger masses (a Jupiter-mass planet and a sub-Saturn-mass planet), lower eccentricities, a period ratio of 2.02, masses typical of the planets' sizes, and orbital mean motion resonant libration. We prefer the second family of solutions but cannot yet rule out the first.
\subsection{Formation and evolution}
TOI-216 joins the population of systems featuring warm, large exoplanets that could not have achieved their close-in orbits through high eccentricity tidal migration (Fig. \ref{fig:arch}). They may have formed at or near their current locations (e.g., \citealt{huan16}), or formed at wider separations and migrated in (e.g., \citealt{lee02}). Both scenarios could lead to planets in or near resonance (e.g., \citealt{dong16,macd18}). The in situ scenario would require the planets to coincidentally form with a period ratio close to 2, but in situ formation sculpted by stability can produce ratios near this value (e.g., \citealt{daws16}). For the lowest-mass solutions, formation beyond the snow line may be necessary to account for the large radii \citep{lee16}.
The planets have at least small and possibly moderate free eccentricities and mutual inclination. The free eccentricities and inclinations might result from dynamical interactions with other undetected planets in the system. For the higher-mass/low eccentricity solution, the eccentricities/inclinations are small enough to be consistent with self-stirring (e.g., \citealt{petr14}) by Neptune-mass or larger planets. The free eccentricities could even be generated by the gas disk (e.g., \citealt{duff15}). However, the free eccentricities in the lower-mass solution would require nearby, undetected giant planets to accompany the observed sub-Saturn-mass planet and Neptune-mass planet pair.
Among the eleven systems featuring a warm, large exoplanet with companions with $< 100$~day orbital period (Fig.~\ref{fig:arch}), only TOI-216 and Kepler-30 lack a detected small, short period planet (Section~\ref{subsec:search}). Whatever formation and migration scenario led to the short period planets in the other systems may not have operated here, or the planet may have been lost through stellar collision or tidal disruption. If present but non-transiting, such a planet would need to be mutually inclined to the rest of the system (for example, a non-transiting 3~day TOI-216\,d would need to be inclined by $5^\circ$ with respect to TOI-216\,c). The same stirring environment that led to free eccentricities could also have generated a mutual inclination for this interior planet. (Of course, it may be that no planet formed or migrated interior to TOI-216\,b.) More generally, the mutual inclination between b and c makes it plausible that there are non-transiting planets in the system.
\subsection{Future observations}
Future \emph{TESS}\xspace sectors will allow for additional transit timing measurements. As shown in Section~\ref{subsec:predict}, distinguishing between the two families of solutions may be possible with additional transits of the outer planet. Moreover, we can likely distinguish between the two families of solutions by measuring the masses through RV follow up: the radial velocity amplitudes are $\sim 5$~m/s and $\sim 20$~m/s for planets b\xspace and c\xspace respectively in Solution 1 (Table~\ref{tab:ttv216}) and $\sim 10$~m/s and $\sim 67$~m/s for planets b\xspace and c\xspace respectively in Solution 2 (Table~\ref{tab:ttv216}). We caution that because of the planets' period ratio and mass ordering, the RV signal alone is subject to significant degeneracy between the inner planet's mass and the outer planet's eccentricity \citep{angl10}. Combining TTVs and RVs can break this degeneracy.
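These amplitudes follow from the standard semi-amplitude formula, $K = (2\pi G/P)^{1/3} M_p \sin i \,(M_\star+M_p)^{-2/3}(1-e^2)^{-1/2}$; a sketch with an assumed host mass of $\sim$0.8~$M_\odot$, so the printed values are only approximate:
\begin{verbatim}
import numpy as np

G, M_SUN, M_JUP, DAY = 6.674e-11, 1.989e30, 1.898e27, 86400.0

def rv_K(m_p_mjup, P_days, m_star_msun=0.8, e=0.0, sini=1.0):
    m_p, m_s = m_p_mjup * M_JUP, m_star_msun * M_SUN
    return ((2.0 * np.pi * G / (P_days * DAY)) ** (1.0 / 3.0)
            * m_p * sini / (m_s + m_p) ** (2.0 / 3.0)
            / np.sqrt(1.0 - e ** 2))

# Solution 1 masses: ~5 m/s (planet b) and ~20 m/s (planet c)
print(rv_K(0.05, 17.09), rv_K(0.26, 34.55))
\end{verbatim}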
Unfortunately TOI-216\xspace does not fall within the observable part of the sky for CHEOPS. Other space-based follow up possibilities include Spitzer, particularly to detect a change in transit depth/impact parameter for the inner planet due to its precession.
We expect ground-based observations to play an essential role in follow up of TOI-216. As demonstrated here, ground-based observations can provide accurate and precise transit times for this bright star with two large transiting planets. For the larger planet in particular, ground-based transits can yield transit times that are more precise than those from \emph{TESS}\xspace data (e.g., the transit observed by LCO in Table \ref{tab:216}). We can identify in advance which transit epoch(s) would be most valuable for distinguishing among models \citep{gold18}. Ground-based transits will allow for a long baseline of observations for better constraining the planets' masses and eccentricities and possibly even detecting precession of the planets' orbits.
\acknowledgments
We thank the \emph{TESS}\xspace Mission team and follow up working group for the valuable dataset. We acknowledge the use of public \emph{TESS}\xspace Alert data from pipelines at the \emph{TESS}\xspace Science Office and at the \emph{TESS}\xspace Science Processing Operations Center. This paper includes data collected by the \emph{TESS}\xspace mission, which are publicly available from the Mikulski Archive for Space Telescopes (MAST).
This research has made use of the Exoplanet Follow-up Observation Program website, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. This work has made use of observations from the Las Cumbres Observatory network. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. Resources supporting this work were provided by the NASA High-End Computing (HEC) Program through the NASA Advanced Supercomputing (NAS) Division at Ames Research Center for the production of the SPOC data products.
We gratefully acknowledge support by NASA XRP NNX16AB50G and NASA \emph{TESS}\xspace GO 80NSSC18K1695. The Center for Exoplanets and Habitable Worlds is supported by the Pennsylvania State University, the Eberly College of Science, and the Pennsylvania Space Grant Consortium. T.D. acknowledges support from MIT's Kavli Institute as a Kavli postdoctoral fellow. K.H. acknowledges support from STFC grant ST/R000824/1. M{\v Z} acknowledges funding from the Australian Research Council (grant DP170102233).
We thank Samuel Hadden and Sarah Morrison for helpful discussions. We thank the referee for a helpful report that improved the clarity of the paper.
\bibliographystyle{apj}
\section{INTRODUCTION}
Enabling robots to operate in truly complex domains requires learning policies from a small amount of data and generalizing learned policies over different tasks. Policy search methods with low-dimensional, parametric policy representations enable data-efficient learning of local policies~\cite{deisenroth2013survey}. Contextual policy search~(CPS)~\cite{kupcsik2013data, daniel2012hierarchical} further enables generalization over different task settings by structuring the policy. CPS uses an upper-level policy $\pi(\vec \theta \vert \vec s)$ to select parameters $\vec \theta$ of a lower-level policy given context $\vec s$, where the context $\vec s$ specifies the task. The goal is to learn a policy $\pi(\vec \theta \vert \vec s)$ that maximizes the expected reward $\mathbb{E}[\mathcal{R}_{\vec s, \vec \theta}]$.
We propose to further structure the contextual policy representation by introducing a factorization of the context space. In particular, we factorize a context vector $\vec s$ into two components: (1) \emph{target contexts} $\vec s^t$ that specify task objectives, e.g. for a ball throwing task the target coordinates of the ball, and (2) \emph{environment contexts} $\vec s^e$ that characterize the environment and the system dynamics, e.g. initial position of the ball. Formally, we assume that the expected reward is given by $\mathcal{R}_{\vec s, \vec \theta} = \int p(\vec \tau \vert \vec s^e, \vec \theta) R(\vec s^t, \vec \tau) d\vec \tau$, where $\vec \tau$ is a trajectory with unknown dynamics $p(\vec \tau \vert \vec s^e, \vec \theta)$, and $R(\vec s^t, \vec \tau)$ is the reward function. The key difference between $\vec s^t$ and $\vec s^e$ is that the dynamics only depend on $\vec s^e$. We can exploit this property and re-evaluate prior experience in light of a new target context, leading to improved \mbox{data-efficiency and better generalization}.
\begin{figure}
\centering
\includegraphics[width=.35\textwidth]{fig/thrower_white.pdf}
\caption{Ball throwing task, where the robot is asked to hit target context $\vec s^t_1$ given initial position $\vec s^e$ of the ball. The robot chooses parameters $\vec \theta_1$ that generate a ball trajectory $\vec \tau \sim p(\vec \tau \vert \vec s^e, \vec \theta_1)$ landing at $\vec s^t_2$. Despite a low reward, knowing that $\vec \tau$ led to $\vec s^t_2$ is beneficial if the robot is asked again to throw near $\vec s^t_2$.
}
\label{fig:thrower}
\end{figure}
For example, assume a robot is learning to throw balls at different targets $\vec s^t$ (Fig.~\ref{fig:thrower}). The robot is asked to aim at $\vec s^t_1$. It chooses parameters $\vec \theta_1$, executes the throw, and observes a ball trajectory $\vec \tau_1$ that hits target $\vec s^t_2$, $\vec s^t_2 \neq \vec s^t_1$. This yields a reward $R(\vec s^t_1, \vec \tau_1)$. Assume the robot is now asked to aim at target $\vec s^t_2$. Standard CPS methods try to generalize prior experience solely based on the upper-level policy $\pi(\vec \theta \vert \vec s)$, e.g. by assuming that rewards obtained under similar contexts are correlated. Context factorization instead allows to treat the two context types differently. The target context $\vec s^t_2$ can be used to evaluate $R(\vec s^t_2, \vec \tau_1)$ directly, yielding the exact reward we would get for $\vec \tau_1$ when targeting $\vec s^t_2$. That is because a trajectory is independent of the target context. The same is not true for environment contexts, and thus we must rely on the upper-level policy to generalize over them.
We demonstrate the benefits of factorization by applying it to CPS approaches based on Bayesian optimization (BO) \cite{brochu2010tutorial}; however, other CPS methods would be also possible. First, we consider a passive learning setting, where the context is given to the robot, and introduce a factored variant of BO for CPS (BO-CPS)~\cite{krause2011contextual, metzen2015bayesian}. We then consider an active learning setting~\cite{fabisch2014active}, where the robot can choose the context during learning. We introduce factored contexts to ACES~\cite{metzen2015active}, a CPS method based on entropy search~\cite{hennig2012entropy}. For moderately low-dimensional search spaces, e.g. when learning pre-structured policies, such global optimization techniques achieve high data-efficiency by directly searching for the optimal parameters using a surrogate reward model.
So far we assumed that we can re-evaluate the reward function $R(\vec s^t, \vec \tau)$ for arbitrary target contexts.
This assumption is reasonable in real robot applications, where rewards typically encode objectives defined by the system designer.
However, if the agent only has access to samples from the reward function, we can still exploit factored contexts by re-evaluating the current trajectory w.r.t. the achieved outcome. This approach can be seen as an extension of hindsight experience replay (HER) \cite{andrychowicz2017hindsight}, a recently proposed data augmentation technique for goal-oriented RL algorithms.
We analyze the proposed methods first on a toy task. We then validate the benefits of context factorization on three simulated robotics environments from the OpenAI Gym~\cite{brockman2016openai}, where we employ dynamic movement primitives \cite{ijspeert2003learning} to efficiently generate trajectories. We show that context factorization is easy to implement, can be broadly applied to CPS problems, and consistently improves data-efficiency and generalization for various robotic tasks.
\section{RELATED WORK}
There are several CPS approaches that generalize over a context space. One group of work first learns different local policies and then uses supervised learning to interpolate policy parameters over contexts~\cite{da2012learning, metzen2014towards}. These methods are suitable for problems where local policies are available or easy to learn, but they are inefficient otherwise. The second group of work jointly learns local policies and generalizes over the context space~\cite{peters2007applying, kober2012reinforcement, kupcsik2013data}. These approaches were applied to a variety of real-world tasks, including playing table tennis~\cite{kober2012reinforcement}, darts~\cite{kober2012reinforcement} and hockey~\cite{kupcsik2013data}. Although all tasks involve target contexts, generalization over contexts solely relies on correlation. Similarly, CPS approaches based on BO~\cite{metzen2015bayesian, metzen2015active, pinsler2018sample, yang2018learning} learn a probabilistic reward model that generalizes over the context space through correlation. In this paper, we extend two BO approaches with factorization, namely \mbox{BO-CPS}~\cite{krause2011contextual, metzen2015bayesian} and ACES~\cite{metzen2015active}.
To the best of our knowledge, there is no prior work that explicitly factors the context space. Similar ideas are implicitly used by Kober et al. \cite{kober2012reinforcement} who learn a contextual policy for discrete targets while performing a higher-level task. While they map experience gained in one context to another, they do so for estimating discrete outcome probabilities and not for improving the policy.
GP-REPS \cite{kupcsik2017model} iteratively learns a transition model of the system using a Gaussian process (GP) \cite{rasmussen2006gaussian}, which is then used to generate trajectories offline for updating the policy. The authors consider generating additional samples for artificial contexts, but they do not define an explicit factorization.
The idea of replacing the goal of a trajectory has recently been explored in HER \cite{andrychowicz2017hindsight}, which increases data-efficiency in goal-based RL tasks with sparse rewards. The key idea is to augment the dataset with additional experience by replacing the original target context of a rollout to be the achieved outcome. Instead of replacing the target context after each rollout, we replace the target context of all previous episodes before each rollout and re-evaluate the entire dataset. Our approach additionally generalizes over environment contexts that are typical in CPS problems. If we have only access to sample rewards, we show how context factorization can be used to extend HER to CPS.
\begin{figure*}[!ht]
\begin{minipage}[t]{.49\textwidth}
\vspace{0pt}
\begin{algorithm}[H]
\caption{BO-CPS \cite{metzen2015bayesian}}
\label{alg:bo-cps}
\begin{algorithmic}
\Repeat
\State Observe $\vec s_q \sim \gamma(\vec s)$
\\
\State Learn reward model $p(R \vert \mathcal{D}, \vec s, \vec \theta)$ from $\mathcal{D}$ (Eq.~\ref{eq:gp-model})
\State Select $\vec \theta_q \sim \pi(\vec \theta \vert \vec s_q)$ (Eq.~\ref{eq:gp-ucb}, \ref{eq:bocps-policy}) using reward model
\State Execute rollout $\vec \tau \sim p(\vec \tau \vert \vec s_q, \vec \theta_q)$ with the robot
\State Add $(\vec s_q, \vec \theta_q , R(\vec s_q, \vec \tau))$ to $\mathcal{D}$
\Until{Policy $\pi$ converges}
\end{algorithmic}
\end{algorithm}
\end{minipage}
\begin{minipage}[t]{.49\textwidth}
\vspace{0pt}
\begin{algorithm}[H]
\caption{BO-FCPS (ours)}
\label{alg:bo-fcps}
\begin{algorithmic}
\Repeat
\State Observe $\vec s_q \sim \gamma(\vec s)$, where $\vec s_q = (\vec s^t_q, \vec s^e_q)$
\State Construct dataset $\mathcal{D}_q$ from $\mathcal{D}$ (Eq.~\ref{eq:map-data})
\State Learn reward model $p(R_q^t|\mathcal{D}_q, \vec s^e_q, \vec \theta)$ from $\mathcal{D}_q$ (Eq.~\ref{eq:gp-model})
\State Select $\vec \theta_q \sim \pi(\vec \theta \vert \vec s_q)$ (Eq.~\ref{eq:gp-ucb}, \ref{eq:bocps-policy}) using reward model
\State Execute rollout $\vec \tau \sim p(\vec \tau \vert \vec s^e_q, \vec \theta_q)$ with the robot
\State Add $(\vec s^e_q, \vec \theta_q , \vec \tau)$ to $\mathcal{D}$
\Until{Policy $\pi$ converges}
\end{algorithmic}
\end{algorithm}
\end{minipage}
\end{figure*}
\section{BACKGROUND}
\subsection{Bayesian Optimization for Contextual Policy Search}
In a CPS problem, the agent observes a \emph{context} $\vec s \sim \gamma(\vec s)$ before each episode, where the context specifies the task setting and $\gamma(\vec s)$ is a distribution over contexts. To solve the task, the agent maintains an upper-level policy $\pi(\vec \theta \vert \vec s)$ over parameters $\vec \theta$ of a lower-level policy, e.g. a dynamic movement primitive~\cite{ijspeert2003learning}. Executing the lower-level policy with parameters $\vec \theta$ generates a trajectory $\vec \tau \sim p(\vec \tau \vert \vec s, \vec \theta)$ that yields reward $R(\vec s, \vec \tau)$. The goal of the agent is to learn an upper-level policy that maximizes the expected reward,
\begin{equation*}
\mathbb{E}[\mathcal{R}_{\vec s, \vec \theta}] = \iiint \gamma(\vec s)\pi(\vec \theta \vert \vec s)p(\vec \tau \vert \vec s, \vec \theta) R(\vec s, \vec \tau) d\vec \tau d\vec \theta d\vec s.
\end{equation*}
BO-CPS \cite{krause2011contextual, metzen2015bayesian} frames CPS as a BO problem. BO is a global search method for optimizing real-valued functions, assuming only access to noisy sample evaluations. Starting from a prior belief about the objective, BO employs an acquisition function to guide the sampling procedure. In BO-CPS, a probabilistic reward model $p(R|\mathcal{D}, \vec s, \vec \theta)$ is learned from $N$ data samples $\mathcal{D} = \{\vec s_i, \vec \theta_i, \vec R_i\}_{i=1}^N$, which allows to evaluate potential parameters $\vec \theta$ for a query context $\vec s_q$. BO-CPS commits to a GP prior \cite{rasmussen2006gaussian} with predictive posterior $p(R|\mathcal{D}, \vec s_q, \vec \theta) = \mathcal{N}(\mu_{\vec s_q, \vec \theta}, \sigma^2_{\vec s_q, \vec \theta})$, and uses the GP-UCB acquisition function \cite{srinivas2009gaussian},
\begin{equation} \label{eq:gp-ucb}
\text{GP-UCB}(\vec s_q, \vec \theta) = \mu_{\vec s_q, \vec \theta} + \kappa \sigma_{\vec s_q, \vec \theta},
\end{equation}
where $\kappa$ trades off exploration and exploitation. The policy parameters $\vec \theta$ are selected by the upper-level policy $\pi$, which optimizes the acquisition function given the query context,
\begin{equation} \label{eq:bocps-policy}
\pi(\vec \theta \vert \vec s_q) = \delta \left(\vec \theta - \vec \theta^* \vert \vec s_q \right),
\end{equation}
where $\vec \theta^* \vert \vec s_q = \argmax_{\vec \theta} \text{GP-UCB}(\vec s_q, \vec \theta)$, and $\delta(\cdot)$ is the Dirac delta function. The algorithm is summarized in Alg.~\ref{alg:bo-cps}.
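For concreteness, a toy BO-CPS loop with a one-dimensional context and parameter: a GP reward model over $(\vec s, \vec \theta)$, GP-UCB as the acquisition function, and a grid search standing in for DIRECT. This is a sketch of the scheme above, not the implementation used in the cited work.
\begin{verbatim}
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def reward(s, theta):
    return -(theta - 0.5 * s) ** 2        # unknown to the agent

rng = np.random.default_rng(0)
D = []                                    # (s, theta, R) triples
gp = GaussianProcessRegressor(kernel=RBF(0.3), alpha=1e-4)
kappa, thetas = 2.0, np.linspace(-1.0, 1.0, 200)

for episode in range(30):
    s_q = rng.uniform(-1.0, 1.0)          # context from the environment
    if D:
        X = np.array([d[:2] for d in D])
        gp.fit(X, np.array([d[2] for d in D]))
        Xq = np.column_stack([np.full_like(thetas, s_q), thetas])
        mu, sd = gp.predict(Xq, return_std=True)
        theta_q = thetas[np.argmax(mu + kappa * sd)]   # GP-UCB rule
    else:
        theta_q = rng.uniform(-1.0, 1.0)
    D.append((s_q, theta_q, reward(s_q, theta_q)))
\end{verbatim}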
\subsection{Active Contextual Entropy Search} \label{sec:aces}
Active contextual entropy search~(ACES)~\cite{metzen2015active} is an extension of entropy search~(ES)~\cite{hennig2012entropy} to the active CPS setting, where both the parameters $\vec \theta$ and the context $\vec s$ are chosen by the agent before an episode. ACES maintains a conditional probability distribution $p(\vec \theta^* \vert \mathcal{D}, \vec s) = p(\vec \theta^* = \argmax_{\vec \theta} f(\vec s, \vec \theta) \vert \mathcal{D}, \vec s)$, expressing the belief about $\vec \theta$ being optimal in context $\vec s$. The most informative query point is chosen by maximizing the expected information gain integrated over the context space,
\begin{equation} \label{eq:aces1}
\text{ACES}(\vec s_q, \vec \theta_q) = \sum\nolimits_{c=1}^C G^{\vec s_c}(\vec s_q, \vec \theta_q),
\end{equation}
where $\{\vec s_c\}_{c=1}^C$ is a set of randomly chosen representer points. The expected information gain in context $\vec s_c$ after performing a hypothetical rollout with $(\vec s_q, \vec \theta_q)$ is given by
\begin{equation}
G^{\vec s_c}(\vec s_q, \vec \theta_q) = H[p(\vec \theta^* \vert \mathcal{D}, \vec s_c)] - \mathbb{E}\big[H[p(\vec \theta^* \vert \mathcal{D}^+, \vec s_c)]\big],
\end{equation}
where the expectation is taken over $p(R \vert \mathcal{D}, \vec s_q, \vec \theta_q, \vec s_c)$, and $\mathcal{D}^+ = \mathcal{D} \cup \{\vec s_q, \vec \theta_q, R \}$ is an updated dataset that contains the hypothetical query point.
In practice, Eq.~\ref{eq:aces1} requires further approximations, which are explained in the original work \cite{hennig2012entropy, metzen2015active}. The algorithm is summarized in Alg.~\ref{alg:aces}.
\subsection{Dynamic Movement Primitives}
Dynamic movement primitives (DMPs) are often used as lower-level policies in robot learning tasks. A DMP \cite{ijspeert2003learning} is a spring-damper system whose hyper-parameters can be flexibly adapted while retaining the general shape of the movement. These include the final position $\vec y_f$, final velocity $\dot{\vec y}_f$ and temporal scaling $\vec \tau$. The motion is further modulated by a non-linear forcing function $f_{\vec w}(z) = \vec w\tran \Phi(z)$ with basis functions $\Phi(z)$ parameterized by phase variable $z$. The parameters $\vec w$ determine the shape of the movement and can be obtained by imitation learning. Each generated DMP trajectory is followed by the control policy of the robot, which implements a low-level feedback controller.
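One common one-dimensional formulation can be sketched as follows; the gains, basis widths, and goal-scaled forcing term are conventional choices rather than the only ones.
\begin{verbatim}
import numpy as np

def dmp_rollout(y0, yf, w, T=1.0, dt=0.001,
                alpha=25.0, beta=6.25, alpha_z=3.0):
    c = np.exp(-alpha_z * np.linspace(0.0, 1.0, len(w)))  # basis centers
    h = len(w) / c                                        # width heuristic
    y, yd, z, traj = y0, 0.0, 1.0, []
    for _ in range(int(T / dt)):
        psi = np.exp(-h * (z - c) ** 2)
        f = (w @ psi) / psi.sum() * z * (yf - y0)  # forcing term f_w(z)
        ydd = alpha * (beta * (yf - y) - yd) + f   # transformation system
        z += -alpha_z * z * dt                     # canonical system (phase)
        yd += ydd * dt
        y += yd * dt
        traj.append(y)
    return np.array(traj)

traj = dmp_rollout(0.0, 1.0, w=np.zeros(10))  # w = 0: plain point attractor
\end{verbatim}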
\section{FACTORED CONTEXTUAL POLICY SEARCH}
In this section, we introduce context factorization and show how it can be integrated into CPS algorithms.
We propose to factorize a context vector $\vec s$ into two types of contexts, $\vec s = (\vec s^t, \vec s^e)$:
\begin{itemize}
\item target contexts $\vec s^t$ which specify the task objective, and
\item environment contexts $\vec s^e$ which characterize the environment and the system dynamics.
\end{itemize}
Formally, we assume that the reward function is given by $R(\vec s^t, \vec \tau)$\footnote{In general, the reward function may also depend on $\vec s^e$ and $\vec \theta$. We omit this dependence for improved readability; the principle remains the same.}, where the trajectory $\vec \tau$ is generated by unknown system dynamics, $\vec \tau \sim p(\vec \tau \vert \vec s^e, \vec \theta)$. Importantly, while the dynamics function depends on environment contexts, it does not depend on target contexts. This means that we can exchange the target context of a rollout without altering its trajectory, allowing to re-evaluate a rollout under different target contexts. For example, in our ball throwing task~(Fig.~\ref{fig:thrower}) we can re-evaluate a previously observed trajectory pretending we were aiming at a different target. We cannot do the same with environment contexts, e.g. the initial ball pose, because a different initial pose would result in a different trajectory.
In the following, we exploit factored contexts to reduce the data requirements of CPS algorithms. To what extent factorization can be exploited depends on the knowledge of the reward function $R(\vec s^t, \vec \tau)$. First, we assume that the reward function is fully known or that it can be evaluated for arbitrary targets $\vec s^t$. This allows to construct highly data-efficient algorithms, as we demonstrate on a passive (Section~\ref{sec:bo-fcps}) and active (Section~\ref{sec:faces}) CPS algorithm. In Section~\ref{sec:her}, we drop the assumption of a known reward function and propose an extension of hindsight experience replay \cite{andrychowicz2017hindsight} to the CPS setting using context factorization.
\begin{figure*}[!ht]
\begin{minipage}[t]{.49\textwidth}
\vspace{0pt}
\begin{algorithm}[H]
\caption{ACES \cite{metzen2015active}}
\label{alg:aces}
\begin{algorithmic}
\Repeat
\State Sample representer points $\{\vec s_c\}_{c=1}^C$ (Section \ref{sec:aces})
\\
\State Learn reward model $p(R \vert \mathcal{D}, \vec s, \vec \theta)$ from $\mathcal{D}$ (Eq.~\ref{eq:gp-model})
\State Select $(\vec s_q, \vec \theta_q) = \argmax_{\vec s, \vec \theta} \text{ACES}(\vec s, \vec \theta)$ (Eq.~\ref{eq:aces1})
\State Execute rollout $\vec \tau \sim p(\vec \tau \vert \vec s_q, \vec \theta_q)$ with the robot
\State Add $(\vec s_q, \vec \theta_q , R(\vec s_q, \vec \tau))$ to $\mathcal{D}$
\Until{Policy $\pi$ converges}
\end{algorithmic}
\end{algorithm}
\end{minipage}
\begin{minipage}[t]{.49\textwidth}
\vspace{0pt}
\begin{algorithm}[H]
\caption{FACES (ours)}
\label{alg:faces}
\begin{algorithmic}
\Repeat
\State Sample representer points $\{\vec s_c\}_{c=1}^C$, $\vec s_c = (\vec s^t_c, \vec s^e_c)$
\State Construct datasets $\{\mathcal{D}_c \}_{c=1}^C$ from $\mathcal{D}$ (Eq.~\ref{eq:map-data})
\State Learn reward models $\{\textrm{GP}_c \}_{c=1}^C$ from $\{\mathcal{D}_c \}_{c=1}^C$ (Eq.~\ref{eq:gp-model})
\State Select $(\vec s^e_q, \vec \theta_q) = \argmax_{\vec s^e, \vec \theta} \text{FACES}(\vec s^e, \vec \theta)$ (Eq.~\ref{eq:aces2})
\State Execute rollout $\vec \tau \sim p(\vec \tau \vert \vec s^e_q, \vec \theta_q)$ with the robot
\State Add $(\vec s^e_q, \vec \theta_q , \vec \tau)$ to $\mathcal{D}$
\Until{Policy $\pi$ converges}
\end{algorithmic}
\end{algorithm}
\end{minipage}
\end{figure*}
\subsection{Bayesian Optimization for Factored CPS} \label{sec:bo-fcps}
Context factorization can be easily incorporated into BO-CPS. The resulting algorithm, Bayesian optimization for factored contextual policy search (BO-FCPS), is shown in Alg.~\ref{alg:bo-fcps}. It maintains a dataset $\mathcal{D} = \{\vec s^e_i, \vec \theta_i, \vec \tau_i \}_{i=1}^N$\footnote{Instead of storing the entire trajectory, in practice we may only record its sufficient statistics for computing the reward, i.e. an outcome $\vec o = \phi(\vec \tau)$.} that can be used to re-evaluate past experiences for a new query context $\vec s_q = (\vec s^t_q, \vec s^e_q)$. Given reward function $R(\vec s^t, \vec \tau)$, we construct a query-specific dataset,
\begin{equation} \label{eq:map-data}
\mathcal{D}_q = \{\vec s^e_i, \vec \theta_i, R(\vec s^t_q, \vec \tau_i) \}_{i=1}^N,
\end{equation}
for learning a specialized reward model,
\begin{equation} \label{eq:gp-model}
p(R^t_q \vert \mathcal{D}_q, \vec s^e_q, \vec \theta) = \mathcal{N}(R^t_q \vert \mu_{\vec s_q, \vec \theta}, \sigma^2_{\vec s_q, \vec \theta}),
\end{equation}
before each rollout. This model is specific to the current target context $\vec s^t_q$. Jointly, the set of all possible reward models $\{p(R^t_{q'} \vert \mathcal{D}_{q'}, \vec s^e, \vec \theta)\}$ w.r.t. arbitrary targets $\vec s^t_{q'}$ generalizes directly over the target context space. Thus, each reward model only needs to generalize over environment contexts $\vec s^e$ and policy parameters $\vec \theta$, leading to a reduced input space compared to the original reward model. This has the added benefit of a smaller search space during optimization. The parameters $\vec \theta$ for context $\vec s_q$ are found by optimizing the acquisition function given the target-specific reward model (Eq.~\ref{eq:gp-ucb},~\ref{eq:bocps-policy}). We employ the DIRECT \cite{jones1993lipschitzian} algorithm for optimization, followed by \mbox{L-BFGS} \cite{byrd1995limited} to refine the result.
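To make the procedure concrete, a minimal Python sketch of one BO-FCPS iteration is given below; it assumes the known reward function is available as a callable \texttt{R}, and for simplicity it maximizes GP-UCB over a finite candidate set rather than with DIRECT and L-BFGS.
\begin{verbatim}
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def bo_fcps_step(D, R, s_t_q, s_e_q, theta_candidates, kappa=2.0):
    """One BO-FCPS iteration (sketch). D holds (s_e, theta, trajectory)."""
    # Eq. (map-data): re-evaluate every stored trajectory for the query target.
    X = np.array([np.concatenate([s_e, theta]) for s_e, theta, _ in D])
    y = np.array([R(s_t_q, tau) for _, _, tau in D])
    # Target-specific reward model over (s_e, theta) only.
    gp = GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True)
    gp.fit(X, y)
    # GP-UCB acquisition over candidate parameters for the query context.
    Xq = np.array([np.concatenate([s_e_q, th]) for th in theta_candidates])
    mu, sigma = gp.predict(Xq, return_std=True)
    return theta_candidates[int(np.argmax(mu + kappa * sigma))]
\end{verbatim}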
We can formally compare BO-FCPS and BO-CPS if we assume that both approaches share the same dataset $\mathcal{D} = \{\vec s_i, \vec \theta_i, \vec \tau_i, R(\vec s^t_i, \vec \tau_i) \}_{i=1}^N$ and the same GP hyper-parameters. In this case, the reward model of BO-FCPS evaluated at a particular target context $\vec s^t_q$ is always at least as accurate as the one learned by BO-CPS. To see this, recall that BO-FCPS differs from BO-CPS in two ways: (1) BO-FCPS re-computes the rewards for $\vec s^t_q$, and (2) BO-FCPS does not consider the rewards at other target contexts $\vec s^t_{q'} \neq \vec s^t_q$. Re-computing the rewards is trivially beneficial because BO-FCPS knows the true reward for target context $\vec s^t_q$ given a trajectory $\vec \tau$, whereas BO-CPS needs to infer the reward from correlations between target contexts. Disregarding rewards at other target contexts $\vec s^t_{q'} \neq \vec s^t_q$ does not degrade the predictive performance of our model either. That is because, for a given context-parameter pair, the only source of uncertainty w.r.t. the expected reward is in the execution of the trajectory $\vec \tau$ and its effect on the environment. Since the trajectory does not depend on the target context, evaluating the same trajectory under different target contexts does not reveal more information about the trajectory itself.
Note that while a better reward model at a given target context leads to better greedy performance, e.g. during offline evaluation, it does not necessarily imply higher cumulative rewards during learning. Furthermore, in practice both the dataset $\mathcal{D}$ and the GP hyper-parameters would be different. Empirically, BO-FCPS does achieve both higher online and offline performance, as we show in Section~\ref{sec:experiments}. We defer a more extensive theoretical analysis to future work.
\begin{figure*}[t]
\centering
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{fig/nips_env.png}
\caption{Toy cannon setup}
\label{fig:toy-setup}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{fig/toy_cannon_150-crop.pdf}
\caption{Comparison of different algorithms}
\label{fig:toy-performance}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{fig/toy_cannon_150_acqs-crop.pdf}
\caption{Comparison of different variants}
\label{fig:toy-variants}
\end{subfigure}
\caption{\textbf{(a)} Visualization of the toy cannon task. \textbf{(b)-(c)} Offline performance evaluated on a fixed set of contexts from a $15 \times 15$ grid. Results are averaged over 10 randomly generated environments. Shaded areas denote one standard deviation.
}
\end{figure*}
\subsection{Factored Active Contextual Entropy Search} \label{sec:faces}
In the active learning setting, the agent chooses both the policy parameters $\vec \theta$ and the context $\vec s$. Applying GP-UCB is problematic as it would not take the varying difficulty of tasks into account~\cite{fabisch2014active}.
Instead, we follow ACES~\cite{metzen2015active} and use an ES-based acquisition function, which aims to choose the most informative query points for global optimization~\cite{hennig2012entropy}.
We integrate factored contexts into ACES as follows. In each iteration, we map previous experience to all representer points $\{\vec s_c\}_{c=1}^C$ in the context space, i.e. we construct $C$ different datasets $\{\mathcal{D}_c \}_{c=1}^C$ as in Eq.~\ref{eq:map-data}. From these datasets, we construct a set of GP models $\{\textrm{GP}_c \}_{c=1}^C$ that we use to evaluate the ACES acquisition function. In particular, we employ the corresponding $\textrm{GP}_c: p(R_c^t|\mathcal{D}_c, \vec s^e_c, \vec \theta)$ when the expected information gain $G^{\vec s}(\vec s_q, \vec \theta_q)$ after a hypothetical query $(\vec s_q, \vec \theta_q)$ is evaluated for $\vec s = \vec s_c$. Similar to BO-FCPS, we therefore directly use the target-specific GPs instead of relying on the correlation between target contexts.
Note that the choice of the target query context $\vec s_q^t$ is irrelevant if we ignore rewards during training, and thus we only need to select $(\vec s_q^e, \vec \theta_q)$ by maximizing
\begin{equation} \label{eq:aces2}
\text{FACES}(\vec s^e_q, \vec \theta_q) = \sum\nolimits_{c=1}^C G^{\vec s_c}(\vec s^e_q, \vec \theta_q).
\end{equation}
We call the resulting algorithm factored active contextual entropy search (FACES). The algorithm is shown in Alg.~\ref{alg:faces}.
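A minimal sketch of this acquisition step, assuming a hypothetical helper \texttt{info\_gain} that returns the entropy-search estimate of $G^{\vec s_c}$ under the $c$-th target-specific GP:
\begin{verbatim}
def faces_acquisition(s_e_q, theta_q, gps, representers, info_gain):
    # Eq. (aces2): sum the expected information gain at every representer
    # context, each evaluated under its own target-specific model GP_c.
    return sum(info_gain(gp_c, s_c, s_e_q, theta_q)
               for gp_c, s_c in zip(gps, representers))
\end{verbatim}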
\subsection{Hindsight Experience Replay for Factored CPS} \label{sec:her}
So far we have assumed that the agent has access to the reward function $R(\vec s^t, \vec \tau)$. If we cannot query the reward function at arbitrary points, it is still possible to leverage factored contexts. In particular, after a rollout we replace the current target context $\vec s^t_q$ by the achieved target $\vec s^t_{\vec \tau}$ and re-evaluate the rollout, yielding reward $R(\vec s^t_{\vec \tau}, \vec \tau)$. Thus, the only requirement is to be able to obtain the sample reward $R(\vec s^t_{\vec \tau}, \vec \tau)$ in addition to the actual reward $R(\vec s^t_q, \vec \tau)$. The additional data point $(\vec s^e_q, \vec \theta_q, R(\vec s^t_{\vec \tau}, \vec \tau))$ can then be added to the training dataset $\mathcal{D}$, and standard CPS methods such as BO-CPS, cost-regularized kernel regression \cite{kober2012reinforcement} or contextual relative entropy policy search (C-REPS) \cite{daniel2012hierarchical, kupcsik2013data} can be used without further modifications. Such an approach can be seen as an extension of HER to the CPS setting; we believe we are the first to make this connection explicit.
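A minimal sketch of this relabeling step, assuming a hypothetical helper \texttt{achieved\_target} that extracts the achieved target $\vec s^t_{\vec \tau}$ (e.g. the hitting location) from a trajectory:
\begin{verbatim}
def relabel_rollout(s_e_q, theta_q, tau, s_t_q, R_sample):
    """Return the actual and the hindsight data point for dataset D.
    R_sample(s_t, tau) only needs to be observable for executed rollouts."""
    s_t_hat = achieved_target(tau)  # assumed helper: achieved target of tau
    return [(s_e_q, theta_q, s_t_q,   R_sample(s_t_q, tau)),
            (s_e_q, theta_q, s_t_hat, R_sample(s_t_hat, tau))]
\end{verbatim}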
\section{EXPERIMENTS AND RESULTS} \label{sec:experiments}
We perform experiments to answer the following questions: (a) does context factorization lead to more data-efficient learning for passive and active BO-based CPS algorithms; (b) how does the choice of acquisition function influence the performance; (c) does context factorization improve generalization; (d) is our method effective in more complex robotic domains? We address questions (a)-(c) through experiments on a toy cannon task, and (d) on three simulated tasks from the OpenAI Gym~\cite{brockman2016openai}. For the Gym tasks, we employ an extension~\cite{kober2010movement} of the DMP framework~\cite{ijspeert2003learning} to efficiently generate goal-directed trajectories.
\subsection{Toy Cannon Task}
The toy cannon task is a popular domain for evaluating CPS algorithms~\cite{da2014active, metzen2015bayesian, metzen2015active}. As shown in Fig.~\ref{fig:toy-setup}, a cannon is placed in the center of a 3D coordinate system and has to shoot at targets on the ground in the range of $[-11, 11] \times [-11, 11]$\,m. The contextual policy maps from 2D targets $\vec s^t \in \mathcal{R}^2$ to 3D launch parameters $\vec \theta \in \mathcal{R}^3$: horizontal orientation $\alpha \in [0, 2\pi]$, vertical angle $\beta \in [0.01, \pi/2 - 0.2]$ and speed $v \in [0.1, 5]$\,m/s. The reward function is given by $R(\vec s^t, \vec \tau) = -\|\vec s^t - \vec s^t_{\vec \tau} \| - 0.05 v^2$, where $\vec s^t_{\vec \tau}$ is the achieved hitting location of a trajectory $\vec \tau$. To increase the difficulty of the problem, we add Gaussian noise ($\sigma_n = 1^{\circ}$) to the desired launch angle during training and randomly place hills in the environment. The learning agent is unaware of the hills, and the target contexts carry no information on the elevation.
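For concreteness, this reward can be written as a one-line Python function; note that the launch-angle noise and the hills affect the simulated trajectory, not the reward itself.
\begin{verbatim}
import numpy as np

def cannon_reward(s_t, s_t_tau, v):
    # Negative distance to the target plus a small penalty on launch speed.
    return -np.linalg.norm(np.asarray(s_t) - np.asarray(s_t_tau)) - 0.05 * v**2
\end{verbatim}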
\begin{table}[t]
\centering
\begin{tabular}{@{}lcll@{}}
\toprule
\multicolumn{1}{c}{} & $t=50$ & $t=100$ & $t=150$ \\ \midrule
C-REPS & -496 ($\pm$ 17) & -955 ($\pm$ 56) & -1357 ($\pm$ 62) \\
BO-CPS & -461 ($\pm$ 28) & -843 ($\pm$ 70) & -1148 ($\pm$ 151) \\
BO-FCPS-HER (ours) & -447 ($\pm$ 24) & -809 ($\pm$ 71) & -1111 ($\pm$ 140) \\
BO-FCPS (ours) & \textbf{-303 ($\pm$ 34)} & \textbf{-414 ($\pm$ 44)} & ~\textbf{-499 ($\pm$ 56)} \\ \bottomrule
\end{tabular}
\caption{Online learning performance on the toy cannon task averaged over 10 random seeds. We report mean cumulative rewards obtained during the first $t$ iterations.}
\label{tab:toy-rewards}
\end{table}
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{fig/toy_cannon_150_achieved-crop.pdf}
\caption{BO-CPS}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{fig/toy_cannon_150_factorized_achieved-crop.pdf}
\caption{BO-FCPS (ours)}
\end{subfigure}
\caption{Achieved rewards in target context space after 150 episodes, where no contexts were sampled from the upper-right and lower-left corner during training.
\vspace{-1.5em}
}
\label{fig:toy-rewards}
\end{figure}
First, we compare both our factored BO approach (BO-FCPS) and our factored HER-style BO approach (BO-FCPS-HER) to standard BO-CPS. Each algorithm uses the GP-UCB acquisition function. We employ a zero-mean GP prior, $p(\vec f) \sim \text{GP}(\vec 0, k(\vec x, \vec x'))$, with squared-exponential kernel $k(\vec x, \vec x') = \sigma_f^2 \exp\left(-\frac{1}{2} (\vec x - \vec x')\tran \vec \Lambda^{-1} (\vec x - \vec x')\right)$, where $\vec x = (\vec s, \vec \theta)$, and $\vec \Lambda = \diag(\vec \ell)$ contains positive length-scale parameters $\vec \ell$. The GP hyper-parameters are optimized by maximizing the marginal likelihood. We also compare to C-REPS, which performs local policy updates instead of global optimization. We use a linear Gaussian policy with squared context features that is updated every 30 episodes subject to the relative entropy bound $\epsilon = 0.5$.
The offline performance of each algorithm is shown in Fig.~\ref{fig:toy-performance}. BO-FCPS requires only 60 episodes to find a good policy, a considerable improvement over standard BO-CPS. BO-FCPS-HER improves on BO-CPS as well, although the variance is much larger. This is because the GP hyper-parameter optimization sometimes got stuck in a local minimum, overfitting to the context variables while ignoring the influence of the parameters $\vec \theta$. We hypothesize that a full Bayesian treatment would mitigate this issue. C-REPS is not competitive on this low-dimensional task since the policy adapts too slowly. These findings are confirmed by the online performance results summarized in Table~\ref{tab:toy-rewards}.
In Fig.~\ref{fig:toy-variants}, we evaluate the dependence of BO-FCPS on the acquisition function. We compare three variants: BO-FCPS with GP-UCB~(UCB), entropy search~(ES) and a random acquisition function. Using GP-UCB over ES leads to slightly faster learning as ES tends to explore too much. Likewise, random exploration is not sufficient for data-efficient learning. We therefore focus on GP-UCB.
\begin{figure}[t]
\centering
\includegraphics[width=.32\textwidth]{fig/toy_cannon_150_active-crop.pdf}
\caption{Learning curve for active learning setting averaged over 10 randomly generated environments, evaluated offline based on contexts placed uniformly on an $8 \times 8$ grid. Shaded areas denote one standard deviation.
\vspace{-1.5em}
}
\label{fig:toy-active}
\end{figure}
\begin{figure*}[t]
\vspace{1em}
\centering
~~
\begin{subfigure}[b]{0.2\textwidth}
\includegraphics[width=\textwidth]{fig/fetchpush.png}
\end{subfigure}
\qquad\qquad\quad~
\begin{subfigure}[b]{0.2\textwidth}
\includegraphics[width=\textwidth]{fig/fetchslide.png}
\end{subfigure}
\qquad\qquad\quad~
\begin{subfigure}[b]{0.2\textwidth}
\includegraphics[width=\textwidth]{fig/thrower.png}
\end{subfigure}
\\[.5em]
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{fig/fetch_push_500_rew-crop.pdf}
\caption{FetchPush-v1}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{fig/fetch_slide_500_rew-crop.pdf}
\caption{FetchSlide-v1}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{fig/thrower_500_rew-crop.pdf}
\caption{Thrower-v2}
\end{subfigure}
\caption{Learning curves for the OpenAI Gym tasks. Policies are evaluated after every 10 rollouts, in 25 fixed contexts sampled once before the experiment. Results are averaged over 10 random seeds. Shaded areas denote one standard deviation. \vspace{-0.5em}}
\label{fig:sim-performance}
\end{figure*}
Next, we demonstrate why evaluating previous rollouts for the given target context leads to improved generalization. We only present contexts $\vec s^t \in [-11, 0] \times [0, 11]~\cup~[0, 11] \times [-11, 0]$ during training, while the agent has to generalize to the entire context space during evaluation. As depicted in Fig.~\ref{fig:toy-rewards}, BO-FCPS generalizes much better to unseen contexts because its reward model is much more accurate at locations where it has already shot. When evaluated in previously unseen contexts, the mean rewards achieved by BO-FCPS were higher by $4.0$, while the improvement was $3.05$ in contexts that were sampled during training.
Finally, we consider an active learning setting, where the agent observes an additional context variable $\mathbb{I} \in [0, 1]$ that indicates whether the learning agent should shoot or not. If $\mathbb{I} \leq 0.1$, the agent receives reward $R(\vec s^t, \vec \tau) = -\|\vec s^t - \vec s^t_{\vec \tau} \| - 0.05 v^2$ as before, and an action penalty $-\|\vec \theta \|$ otherwise. The agent should therefore actively select $\mathbb{I} \leq 0.1$, for which the reward function is much harder to learn. We compare FACES to ACES, and use 200 representer points to approximate the acquisition functions. The results are shown in Fig.~\ref{fig:toy-active}. Similar to the passive case, factorization is greatly beneficial: FACES achieves much faster learning than ACES.
\subsection{Simulated Robotic Tasks}
Finally, we apply BO-FCPS to three distinct robotics tasks from the \mbox{OpenAI} Gym \cite{brockman2016openai, plappert2018multi}, namely:
\begin{itemize}
\item \textbf{FetchPush-v1}: Push a box to a goal position.
\item \textbf{FetchSlide-v1}: Slide a puck to a goal position that is out of reach of the robot.
\item \textbf{Thrower-v2}: Throw a ball to a goal position.
\end{itemize}
The BO-FCPS algorithm is used to select the DMP parameters for performing each task according to the current context. We deviate from the original task specification of Thrower-v2 by replacing the joint-space controller with a task-space controller to reduce the dimensionality of the problem. The same controller is employed in the Fetch environments. Moreover, we use the final distance between the object and the target context as the reward function in each environment. For more details, please refer to~\cite{brockman2016openai, plappert2018multi}.
For FetchPush and FetchSlide, both the initial object position $\vec s^e \in \mathcal{R}^2$ and the desired goal position $\vec s^t \in \mathcal{R}^2$ are varied. The lower-level policy consists of two 3-dimensional task-space DMPs that are sequenced together. The first DMP is used to bring the robot arm into position to manipulate the object, where the trajectory is modulated by 25 basis functions per dimension, and the shape parameters $\vec w$ are learned by imitation. The second DMP is used to execute the actual movement (i.e. pushing, sliding), starting from the final position of the first DMP. The upper-level policy adapts the approach angle $\alpha$ of the first DMP w.r.t. the object, yielding a goal position $\vec y_1$ that is a fixed distance away from the object, and the goal position $\vec y_2$ of the second DMP. We use $\alpha \in [0, \pi], \vec y_2 \in [0, 0.4] \times [-0.4, 0.4]$ for FetchSlide and $\alpha \in [0, 2\pi], \vec y_2 \in [-0.2, 0.2]^2$ for FetchPush.
For the Thrower environment, only the desired goal position $\vec s^t \in \mathcal{R}^2$ is varied. The lower-level policy is a single 3-dimensional task-space DMP with 25 basis functions, where the shape parameters are learned by imitation. The upper-level policy selects the goal position $\vec y \in [-0.5, 0.5] \times [1, 1.5] \times [-0.5, 0.5]$ and goal velocity $\dot{\vec y} \in [0, 1]^3$ of the DMP, resulting in a 6-dimensional parameter vector $\vec \theta$.
Results are shown in Fig.~\ref{fig:sim-performance}. Our proposed approach consistently outperforms standard BO-CPS, suggesting that our earlier findings on the toy cannon task apply to more complex simulated robotic domains as well.
\section{DISCUSSION AND FUTURE WORK}
We introduced context factorization and integrated it into passive and active learning approaches for CPS with BO. The improvement we can expect from factorization depends on the characteristics of the task. It is most effective if a large part of the learning challenge lies in generalizing across contexts, as opposed to learning a good policy for a single context. In general, the larger the space of target contexts relative to the space of environment contexts and policy parameters, the more beneficial factorization is expected to be.
In this paper, we focused on BO for CPS. One shortcoming of BO is that it does not scale well to high-dimensional problems. Future work may address scalability and explore alternative acquisition functions such as predictive entropy search~\cite{hernandez2014predictive}. Since context factorization is not specific to BO, it could also be applied to other CPS algorithms, e.g. C-REPS~\cite{kupcsik2013data} or contextual CMA-ES~\cite{abdolmaleki2017contextual}. A simple approach would be to populate the dataset with additional re-evaluated samples, similar to HER. Finally, we demonstrated the benefits of factorization in extensive simulated experiments. In the future, we plan to demonstrate the approach on a real robot system as well.
\addtolength{\textheight}{-8cm}
\newpage
\section{Introduction}
In recent years the discoveries of gravitational waves (GW) by the LIGO and
Virgo Collaborations have opened a new window to the Universe~\cite{LIGO,
Virgo, GEO600, GW150914, GW151226, GW170104, GW170814, GW170817, GW170608,
GWTC-1}. KAGRA will join the global GW detector network in
2019~\cite{KAGRA} and LIGO-India in 2025~\cite{LIGO-India}, improving source
localization and parameter estimation~\cite{lrr-prospects}, while LISA
Pathfinder's exceptional performance~\cite{pathfinder-2016} -- showing that the
LISA mission is feasible -- and maturing pulsar timing arrays~\cite{IPTA-2013}
mark the beginning of multiwavelength, multiband GW astronomy.
Compact binary systems are the most prominent sources for the present and
future GW observatories. So far these events have been analyzed using
quasi-circular GW templates, as radiation-reaction effects tend to
circularize the orbits~\cite{peters-1963, peters-1964} for prototypical
sources. For such systems one can thus assume that by the time the binary
enters the sensitivity band of current ground-based detectors the eccentricity
will be close to zero. However, there are a number of astrophysical scenarios
in which binary systems could have moderate eccentricities when entering the
sensitivity band of ground-based detectors~\cite{huerta-2016, tiwari-2016,
gondan-2018-1, gondan-2018-2, rodriguez-2018, dorazio-2018, zevin-2018}.
Recently, there have been studies showing that triple interactions among black
holes can produce coalescing binaries with moderate eccentricities ($\sim 0.1$)
when entering the LIGO band~\cite{samsing-2014, samsing-2017, samsing-2018-1}
or large eccentricities ($\sim 0.9$) when entering the LISA
band~\cite{bonetti-2018}. This has major implications on how to distinguish
between binary black hole (BBH) formation channels~\cite{samsing-2018-2} and
motivates the development of waveforms valid for nonzero eccentricities.
There has been great effort to model GWs of eccentric binary systems. One
usually employs the quasi-Keplerian parametrization~\cite{damour-1985,
memmesheimer-2004} to describe the conservative binary orbits. The phasing
description, developed in Refs.~\cite{damour-2004, koenigsdoerffer-2006} and
discussed in great detail for low-eccentricity binaries in
Ref.~\cite{moore-2016}, efficiently incorporates the effects of radiation
reaction, describing the binary dynamics on three different timescales: the
orbital timescale, the periastron precession timescale, and the
radiation-reaction timescale. In addition, the secular evolution of the orbital
elements has been completed at the third post-Newtonian (3PN) order in
Refs.~\cite{arun-2008-1, arun-2008-2, arun-2009}, including hereditary effects.
Using this, several waveform models have been developed in the past
years~\cite{yunes-2009, cornish-2010, key-2011, huerta-2014, huerta-2017,
huerta-2018, gopakumar-2011, tanay-2016, hinder-2018, cao-2017, klein-2018},
for both nonspinning and spinning binaries.
In this paper, we extend the work in Ref.~\cite{mishra-2015} by computing the
tail contributions to the GW amplitudes for compact binaries in eccentric
orbits at the third post-Newtonian level. Combining our tail results with the
instantaneous ones, we then incorporate post-adiabatic
corrections~\cite{damour-2004, koenigsdoerffer-2006, moore-2016} to get a
complete waveform including radiation-reaction effects valid during the early
inspiral of the binary system. We present all our results in modified harmonic
(MH) gauge in terms of the post-Newtonian parameter $\bar{x} = ( G m \bar{\omega} /
c^3 )^{2/3}$, where $G$ denotes the gravitational constant, $c$ the speed of
light, $m$ the total mass of the binary, and $\bar{\omega}$ the adiabatic
orbital frequency (see Sec.~\ref{sec: full waveform}), as well as a certain
time eccentricity $\bar{e} = \bar{e}_t$ associated with the PN-accurate quasi-Keplerian
parametrization. To calculate the complicated tail integrals, we work within a
low-eccentricity expansion and express everything in terms of the mean anomaly
$l$ and the phase angle $\lambda$, which accounts for the periastron advance.
Compared to the results in Ref.~\cite{mishra-2015}, ours will thus not be valid
for arbitrary eccentricities. Moreover, they will need to be completed by the
memory contributions, which we will tackle in a follow-up
paper~\cite{ebersold-2019}.
This paper is structured as follows: In Sec.~\ref{sec: prerequisites} we
quickly review the basics of spherical harmonic decomposition and recall how
to connect the radiative multipole moments to the actual source moments. We
also review the conservative 3PN-accurate quasi-Keplerian
parametrization~\cite{memmesheimer-2004}. In Sec.~\ref{sec: phasing}, we
discuss how to incorporate post-adiabatic corrections~\cite{damour-2004,
koenigsdoerffer-2006} into this description. In Sec.~\ref{sec: hereditary}, we
are then in a position to calculate the various tail integrals appearing in the
source multipole moments. In Sec.~\ref{sec: full waveform}, we combine these
results with the instantaneous ones and introduce post-adiabatic corrections.
We also compare our results to the circular waveforms in
Ref.~\cite{blanchet-2008}. Finally, in Sec.~\ref{sec: summary}, we give a brief
summary of our work. Throughout this paper we mostly present results up to
$\mathcal{O}(e)$, though expressions up to $\mathcal{O}(e^6)$ for all tail and
post-adiabatic modes will be listed in a supplemental \emph{Mathematica}
file~\cite{supplement}.
\section{Construction of the waveform for compact binaries in eccentric
orbits}\label{sec: prerequisites}
\subsection{Polarizations and spherical-mode decomposition}
The gravitational waves emitted by an isolated system near future radiative
infinity are encoded in the transverse-traceless ($\textnormal{TT}$) projection
$h_{ij}^\textnormal{TT}$ of the deviation of the space-time metric $g_{\mu\nu}$ from a flat
metric $\eta_{\mu\nu}=\text{diag}(-1,1,1,1)$, in a radiative-type
Cartesian-like coordinate grid $X^\mu = (cT, \bm{X})$, at order $1/R$, where $R
= |\bm{X}|$ denotes the Euclidean distance of the vector $\bm{X}$ to the
origin. It is convenient to choose this origin at the center of mass of the full
system and to introduce the standard spherical coordinates $(\Theta, \Phi)$
associated with the so-defined Cartesian frame, for which the relation $X^i = R
\, (\cos \Phi \sin \Theta, \sin \Phi \sin\Theta, \cos\Theta)$ holds. The
radiative property of this frame ensures that a null geodesic going through the
origin at time $T_R$ will reach an observer with position $\bm{X}$ at time
$T=T_R + R/c$. If $\bm{N}(\Theta, \Phi)=\bm{X}/R$ denotes the unit direction of
that observer, the plane spanned by the vectors $\bm{P}(\Theta, \Phi)$ and
$\bm{Q}(\Theta, \Phi)$ belonging to some arbitrary direct orthonormal triad
$(\bm{N},\bm{P},\bm{Q})$ must be transverse to the direction of propagation of
wave rays.
The transverse-traceless projection $h_{ij}^\textnormal{TT}$ can be uniquely decomposed
into symmetric trace-free (STF) radiative mass-type ($U_L$) and current-type
($V_L$) multipole moments as:
\begin{align} \label{eq: hTT}
h_{ij}^\textnormal{TT} =&\; \frac{4 G}{c^2 R} \mathcal{P}_{ijab}(\bm{N})
\sum_{\ell=2}^{\infty} \frac{1}{c^\ell \ell!} \Big\{ N_{L-2} U_{ab L-2}
\nonumber\\
&- \frac{2 \ell}{c (\ell + 1)} N_{c L-2}\epsilon_{cd(a} V_{b)d L-2}
\Big\} \Big|_{T_R} + \mathcal{O} \left( \frac{1}{R^2} \right) \,.
\end{align}
Here $\mathcal{P}_{ijab} = \mathcal{P}_{ia} \mathcal{P}_{jb} - \frac{1}{2} \mathcal{P}_{ij} \mathcal{P}_{ab}$,
with $\mathcal{P}_{ij} = \delta_{ij} - N_i N_j$, is the $\textnormal{TT}$ projection operator. The
waveform is usually projected on the transverse symmetric basis $e^+_{ij} =
\frac{1}{2} (P_i P_j - Q_i Q_j)$, $e^\times_{ij} = P_{(i} Q_{j)}$,
\begin{align}
\begin{pmatrix}
h_+ \\
h_\times
\end{pmatrix}
&=
\begin{pmatrix}
e^+_{ij} \\
e^\times_{ij}
\end{pmatrix}
\, h_{ij}^\textnormal{TT} \,,
\end{align}
the resulting components being referred to as the plus and cross polarizations,
respectively. Equivalently the complex basis formed by the vector $\bm{m} =
(\bm{P} + \mathrm{i} \bm{Q}) / \sqrt{2}$ of spin weight 2 and its complex conjugate
$\overline{\bm{m}}$ of spin weight $-2$ can be used. From the transverse
trace-free character of the waveform, it follows that
\begin{align}
h &= h_+ - \mathrm{i} h_\times = h_{ij}^\textnormal{TT} \, \overline{m}^i \overline{m}^j \,.
\end{align}
From now on we shall assume that the vector $\bm{m}$ is proportional to
$\bm{m}_S = (\partial \bm{N} / \partial \theta + \mathrm{i} \sin^{-1} \! \theta \,
\partial \bm{N} / \partial \phi) / \sqrt{2}$ so that the functions adapted to
the spherical decomposition of the spin $-2$ quantity $h$ are the usual
spin-weighted spherical harmonics of weight $-2$, which will be denoted by
$Y_{-2}^{\ell m}(\Theta, \Phi)$. In our conventions, they are given by
\begin{subequations}
\begin{align}
Y_{-2}^{\ell m}(\Theta, \Phi) =&\; \sqrt{\frac{2\ell+1}{4\pi}} d_2^{\ell
m}(\Theta) e^{i m \Phi} \,,\\
d_2^{\ell m} =&\; \sum_{k=k_\text{min}}^{k_\text{max}} \frac{(-1)^k}{k!}
\nonumber\\
&\times \frac{\sqrt{(\ell+m)!(\ell-m)!(\ell+2)!(\ell-2)!}} {(k-m+2)!
(\ell+m-k)! (\ell-k-2)!}
\nonumber\\
&\times \left(\cos\frac{\Theta}{2}\right)^{2\ell+m-2k-2} \left( \sin
\frac{\Theta}{2} \right)^{2k-m+2} \,,
\end{align}
\end{subequations}
with $k_\text{min} = \max(0,m-2)$ and $k_\text{max} = \min(\ell+m,\ell-2)$.
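As a sanity check of these conventions, the finite sum above can be evaluated directly; the Python sketch below reproduces, e.g., $Y_{-2}^{22} = \sqrt{5/(64\pi)}\,(1+\cos\Theta)^2\,e^{2\mathrm{i}\Phi}$.
\begin{verbatim}
import numpy as np
from math import factorial

def d2lm(l, m, Theta):
    # Finite sum for the Wigner-type function d_2^{lm}(Theta) given above.
    out = 0.0
    for k in range(max(0, m - 2), min(l + m, l - 2) + 1):
        out += ((-1) ** k / factorial(k)
                * np.sqrt(factorial(l + m) * factorial(l - m)
                          * factorial(l + 2) * factorial(l - 2))
                / (factorial(k - m + 2) * factorial(l + m - k)
                   * factorial(l - k - 2))
                * np.cos(Theta / 2) ** (2 * l + m - 2 * k - 2)
                * np.sin(Theta / 2) ** (2 * k - m + 2))
    return out

def Y_m2(l, m, Theta, Phi):
    # Spin-weight -2 spherical harmonic in the conventions of this paper.
    return (np.sqrt((2 * l + 1) / (4 * np.pi))
            * d2lm(l, m, Theta) * np.exp(1j * m * Phi))
\end{verbatim}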
Thus, the gravitational waveform may be decomposed into spherical modes
$h^{\ell m}$ as
\begin{align}\label{eq: mode decomposition}
h_+ - \mathrm{i} h_\times &= \sum_{\ell=2}^{+\infty} \sum_{m=-\ell}^{\ell} h^{\ell
m} Y_{-2}^{\ell m} (\Theta, \Phi) \,.
\end{align}
The spherical harmonic modes $h^{\ell m}$ can be written in terms of the
radiative mass-type ($U^{\ell m}$) and current-type ($V^{\ell m}$) multipole
moments,
\begin{align}\label{eq: hlm rad mom}
h^{\ell m} &= -\frac{G}{\sqrt{2}R c^{\ell+2}} \left(U^{\ell m} -
\frac{\mathrm{i}}{c} V^{\ell m} \right) \,,
\end{align}
with the inverse relations
\begin{subequations}
\begin{align}
U^{\ell m} &= -\frac{R c^{\ell +2}}{\sqrt{2}G} \left( h^{\ell m} + (-1)^m
\overline{h}{}^{\ell -m} \right) \,,\\
V^{\ell m} &= -\frac{R c^{\ell + 3} }{\sqrt{2} \mathrm{i} G} \left( -h^{\ell m} +
(-1)^m \overline{h}{}^{\ell -m} \right) \,.
\end{align}
\end{subequations}
The radiative moments ($U^{\ell m}$, $V^{\ell m}$) are actually related to the
STF radiative moments ($U_L$, $V_L$) by
\begin{subequations}\label{eq: radiative STF}
\begin{align}
U^{\ell m} &= \frac{4}{\ell!} \sqrt{ \frac{(\ell+1) (\ell+2)}{2 \ell
(\ell-1)}} \alpha_L^{\ell m} U_L \,,\\
V^{\ell m} &= -\frac{8}{\ell !} \sqrt{ \frac{\ell (\ell+2)}{2 (\ell+1)
(\ell-1)}} \alpha_L^{\ell m} V_L \,,
\end{align}
\end{subequations}
where the $\alpha_L^{\ell m}$ denote a set of constant STF tensors that connect
the basis of spherical harmonics $Y^{\ell m}(\Theta, \Phi)$ to the set of STF
tensors $N_{\langle L \rangle}$ as
\begin{subequations}
\begin{align}
N_{\langle L \rangle}(\Theta, \Phi) &= \sum_{m=-\ell}^{\ell} \alpha_L^{\ell m}
Y^{\ell m} (\Theta, \Phi) \,,\\
Y^{\ell m}(\Theta, \Phi) &= \frac{(2\ell+1)!!}{4\pi\ell!}
\overline{\alpha}_L^{\ell m} N^{\langle L \rangle}(\Theta, \Phi) \,.
\end{align}
\end{subequations}
They can be calculated through
\begin{align}
\alpha_L^{\ell m} &= \int \mathrm{d}\Omega\; N_{\langle L \rangle} \bar{Y}^{\ell m} \,,
\end{align}
and are given explicitly in Eq.~(2.12) of Ref.~\cite{thorne-1980}.
Remarkably, for planar binaries, there exists a mode
separation~\cite{kidder-2008, faye-2012} such that $h^{\ell m}$ is completely
determined by mass-type radiative multipole moments $U^{\ell m}$ for $\ell+m$
even and by current-type radiative multipole moments $V^{\ell m}$ for $\ell+m$
odd, hence
\begin{subequations}
\begin{align}
h^{\ell m} &= -\frac{G}{\sqrt{2} R c^{\ell+2}} U^{\ell m} &&\textnormal{if
} \ell+m \textnormal{ is even} \,,\\
h^{\ell m} &= \frac{\mathrm{i} G}{\sqrt{2} R c^{\ell+3}} V^{\ell m}
&&\textnormal{if } \ell+m \textnormal{ is odd} \,.
\end{align}
\end{subequations}
Let us finally specify the choice of the Cartesian frame and polarization
vectors in the case of interest where the source is a binary system of
pointlike objects with bound orbits, since this choice will fully set the
amplitude modes computed in the present paper. We adopt the same conventions
as in Ref.~\cite{blanchet-2008}. In the absence of spin, the orbits stay in a
plane. The vector $\bm{e}_3$ is taken to be the unit normal vector orienting
the sense of the motion positively. For the polarization vector $\bm{P}$, we
pick the unit vector pointing towards the ascending node $\bm{N} \times
\bm{e}_3$, with $\bm{N}$ representing the direction of the Earth observer.
Therefore, we can also make it coincide with $\bm{e}_1$. To complete the
triads $\bm{e}_a$ and $(\bm{N},\bm{P},\bm{Q})$ we set $\bm{e}_2=\bm{e}_3 \times
\bm{e}_1$ and $\bm{Q}=\bm{N}\times\bm{P}$. Notice that, by construction,
$\bm{N}$ belongs to the plane spanned by $\{\bm{e}_2,\bm{e}_3\}$. Its spherical
coordinates, in terms of the inclination of the binary $\iota$, are thus
$(\Theta = \iota, \Phi = \pi/2)$.
\subsection{Multipole moments}\label{sec: multipole moments}
From Eqs.~(\ref{eq: hlm rad mom}--\ref{eq: radiative STF}), we see that we need
to relate the $U_L$ and $V_L$ to the actual source. In the multipolar
post-Minkowsian (MPM) post-Newtonian (PN) formalism, the radiative moments
($U_L$, $V_L$) are functionals of six sets of source moments ($I_L$,
$J_L$, $W_L$, $X_L$, $Y_L$, $Z_L$). The relations between the radiative
moments and the source moments have been obtained at the 3PN order and are
listed in Ref.~\cite{blanchet-2008}, Eqs.~(5.4--5.11).
We can split the expressions for the radiative moments into two parts,
namely the instantaneous and the hereditary parts:
\begin{align}
U_L &= U_L^\textnormal{inst} + U_L^\textnormal{hered} \,.
\end{align}
The instantaneous contributions only depend on the state of the source at a
given retarded time, while the hereditary parts depend on, and thus require
knowledge of, the entire past history of the source. At leading order, the
instantaneous parts of the radiative moments are directly related to the
source moments as
\begin{subequations}
\begin{align}
U_L^\textnormal{inst}(t_r) &= I_L^{(\ell)}(t_r) + \mathcal{O}(c^{-3}) \,,\\
V_L^\textnormal{inst}(t_r) &= J_L^{(\ell)}(t_r) + \mathcal{O}(c^{-3}) \,,
\end{align}
\end{subequations}
with $t_r$ denoting the retarded time. Corrections from the gauge
moments ($W_L$, $X_L$, $Y_L$, $Z_L$) enter at higher orders. In this work, we
will focus on the hereditary tail contributions. For a complete treatment of
the instantaneous contributions, we refer to Ref.~\cite{mishra-2015}.
To the desired accuracy, the hereditary contributions to the radiative moments
are given by
\begin{widetext}
\begin{subequations}\label{eq: U_L}
\begin{align}
\label{eq: U_ij}
U_{ij}^\textnormal{hered} (t_r) =&\; \frac{2GM}{c^3} \int_{0}^{\infty} \mathrm{d}\tau\; \left[
\ln\left( \frac{\tau}{2\tau_0} \right) + \frac{11}{12} \right]
I_{ij}^{(4)}(t_r - \tau) - \frac{2G}{7c^5} \int_{-\infty}^{t_r}
\mathrm{d}\tau\; I_{a\langle i}^{(3)}(\tau) I_{j\rangle a}^{(3)}(\tau) \nonumber\\
&+ 2 \left( \frac{GM}{c^3} \right)^2 \int_{0}^{\infty} \mathrm{d}\tau\; \left[
\ln^2\left( \frac{\tau}{2\tau_0} \right) + \frac{57}{70} \ln\left(
\frac{\tau}{2\tau_0} \right) + \frac{124627}{44100} \right]
I_{ij}^{(5)}(t_r - \tau) + \mathcal{O}(c^{-7}) \,,\\
%
\label{eq: U_ijk}
U_{ijk}^\textnormal{hered} (t_r) =&\; \frac{2GM}{c^3} \int_{0}^{\infty} \mathrm{d}\tau\;
\left[ \ln\left( \frac{\tau}{2\tau_0} \right) + \frac{97}{60} \right]
I_{ijk}^{(5)}(t_r - \tau) \nonumber\\
&+ \frac{G}{c^5} \int_{-\infty}^{t_r} \mathrm{d}\tau\; \left[ -\frac{1}{3}
I_{a\langle i}^{(3)}(\tau) I_{jk\rangle a}^{(4)}(\tau) - \frac{4}{5}
\epsilon_{ab\langle i} I_{ja}^{(3)}(\tau) J_{k\rangle b}^{(3)}(\tau) \right] +
\mathcal{O}(c^{-6}) \,,\\
%
\label{eq: U_ijkl}
U_{ijkl}^\textnormal{hered} (t_r) =&\; \frac{2GM}{c^3} \int_{0}^{\infty} \mathrm{d}\tau\;
\left[ \ln\left( \frac{\tau}{2\tau_0} \right) + \frac{59}{30} \right]
I_{ijkl}^{(6)}(t_r - \tau) + \frac{2G}{5c^3} \int_{-\infty}^{t_r}
\mathrm{d}\tau\; I_{\langle ij}^{(3)}(\tau) I_{kl\rangle}^{(3)}(\tau) + \mathcal{O}(c^{-5})
\,,\\
%
\label{eq: U_ijklm}
U_{ijklm}^\textnormal{hered} (t_r) =&\; \frac{2GM}{c^3} \int_{0}^{\infty} \mathrm{d}\tau\;
\left[ \ln\left( \frac{\tau}{2\tau_0} \right) + \frac{232}{105} \right]
I_{ijklm}^{(7)}(t_r - \tau) + \frac{20G}{21c^3} \int_{-\infty}^{t_r}
\mathrm{d}\tau\; I_{\langle ij}^{(3)}(\tau) I_{klm\rangle}^{(4)}(\tau) + \mathcal{O}(c^{-4})
\,,
\end{align}
\end{subequations}
\begin{subequations}\label{eq: V_L}
\begin{align}
\label{eq: V_ij}
V_{ij}^\textnormal{hered} (t_r) =&\; \frac{2GM}{c^3} \int_{0}^{\infty} \mathrm{d}\tau\; \left[
\ln\left( \frac{\tau}{2\tau_0} \right) + \frac{7}{6} \right]
J_{ij}^{(4)}(t_r - \tau) + \mathcal{O}(c^{-6}) \,,\\
%
\label{eq: V_ijk}
V_{ijk}^\textnormal{hered} (t_r) =&\; \frac{2GM}{c^3} \int_{0}^{\infty} \mathrm{d}\tau\;
\left[ \ln\left( \frac{\tau}{2\tau_0} \right) + \frac{5}{3} \right]
J_{ijk}^{(5)}(t_r - \tau) + \mathcal{O}(c^{-5}) \,,\\
%
\label{eq: V_ijkl}
V_{ijkl}^\textnormal{hered} (t_r) =&\; \frac{2GM}{c^3} \int_{0}^{\infty}
\mathrm{d}\tau\; \left[ \ln\left( \frac{\tau}{2\tau_0} \right) +
\frac{119}{60} \right] J_{ijkl}^{(6)}(t_r - \tau) + \mathcal{O}(c^{-4}) \,,
\end{align}
\end{subequations}
\end{widetext}
where $M = m (1 - \nu x / 2)+\mathcal{O}(c^{-4})$ is the Arnowitt-Deser-Misner (ADM)
mass of the source, $m = m_1 + m_2$ the total mass, $\nu = m_1 m_2 / m^2$ the
symmetric mass ratio, and $\tau_0$ an arbitrary length scale originally
introduced in the MPM formalism. None of the other moments contributes to the
hereditary part of the waveform~(\ref{eq: hTT}) at 3PN order, since
\begin{subequations}
\begin{align}
U_{L>5}^\textnormal{hered} &= \mathcal{O}(c^{-3}) \,, \\
V_{L>4}^\textnormal{hered} &= \mathcal{O}(c^{-3}) \,.
\end{align}
\end{subequations}
In the above hereditary contributions, there are two different types of
integrals: those with logarithms and those without. The logarithmic integral
in the first line of Eq.~(\ref{eq: U_ij}) is called the tail integral while the
one on the second line is the tails-of-tails integral. On the other hand, the
integral without a logarithmic kernel is the memory integral. Note that there
are no memory contributions to the radiative current moments $V_L$. Physically,
wave tails come from the scattering of the linear waves, generated by the
matter source, off the space-time curvature due to the total ADM mass of the
isolated system. It is a (power of) monopole-wave interaction effect with a
weak past dependence. By contrast, the memory pieces of the waves are produced
by the effective stress-energy tensor of the source radiation itself. It is a
wave-wave interaction effect with a strong past dependence~\cite{blanchet-1992}.
The expressions for the source moments ($I_L$, $J_L$) in terms of the binary
separation $r$, its time derivative $\dot{r}$, the polar angle $\phi$ of the
relative position, and its derivative $\dot{\phi}$ are now required. Observing
Eqs.~(\ref{eq: U_L}--\ref{eq: V_L}), we note that $I_{ij}$, $J_{ij}$ and
$I_{ijk}$ are needed to an accuracy of 1PN, while all other multipole moments
are only needed to leading Newtonian order. The relevant expressions are listed
in Ref.~\cite{arun-2008-2} using standard harmonic (SH) coordinates. The
logarithms appearing at 3PN order in the SH gauge can, however, be transformed
away in appropriate modified harmonic coordinates, as demonstrated in Sec.~IV~B
of Ref.~\cite{arun-2008-2}. For the hereditary parts, this will not make any
difference, as we shall only need relative 1PN-accurate expressions for
certain ($I_L$, $J_L$), but, when adding up instantaneous terms from
Ref.~\cite{mishra-2015} to our hereditary parts, we shall always work within
the MH gauge. The binary separation vector will be represented by
$x^i\equiv r\, n^i$, whereas $v^i=\mathrm{d} x^i/\mathrm{d} t$ will stand for the relative
velocity. The expressions relevant for the calculation of the hereditary parts
are
\begin{widetext}
\begin{subequations}\label{eq: I_L}
\begin{align}
\label{eq: I_ij}
I_{ij} =&\; \nu m \left( A_1\, x_{\langle ij\rangle} + A_2\, \frac{r \dot{r}}{c^2}
x_{\langle i} v_{j\rangle} + A_3\, \frac{r^2}{c^2} v_{\langle ij\rangle}\right) +
\mathcal{O}(c^{-7}) \,,\\
%
\label{eq: I_ijk}
I_{ijk} =&\; -\nu m \Delta \left( B_1\, x_{\langle ijk\rangle} + B_2\, \frac{r
\dot{r}}{c^2} x_{\langle ij} v_{k\rangle} + B_3\, \frac{r^2}{c^2} x_{\langle i}
v_{jk\rangle}\right) + \mathcal{O}(c^{-6}) \,,\\
%
\label{eq: I_ijkl}
I_{ijkl} =&\; \nu m (1-3\nu) x_{\langle ijkl\rangle} + \mathcal{O}(c^{-5}) \,,\\
%
\label{eq: I_ijklm}
I_{ijklm} =&\; -\nu m \Delta (1-2\nu) x_{\langle ijklm\rangle} + \mathcal{O}(c^{-4}) \,,
\end{align}
\end{subequations}
\begin{subequations}\label{eq: J_L}
\begin{align}
\label{eq: J_ij}
J_{ij} =&\; -\nu m \Delta \left( C_1\, \epsilon_{ab\langle i} x_{j\rangle a} v_b +
C_2\, \frac{r \dot{r}}{c^2} \epsilon_{ab\langle i} v_{j\rangle b} x_{a} \right)
+ \mathcal{O}(c^{-6}) \,,\\
%
\label{eq: J_ijk}
J_{ijk} =&\; \nu m (1-3\nu) \epsilon_{ab\langle i} x_{jk\rangle a} v_b +
\mathcal{O}(c^{-5}) \,,\\
%
\label{eq: J_ijkl}
J_{ijkl} =&\; -\nu m \Delta (1-2\nu) \epsilon_{ab\langle i} x_{jkl\rangle a} v_b +
\mathcal{O}(c^{-4}) \,,
\end{align}
\end{subequations}
where $\Delta = (m_1 - m_2) / m$ is the mass difference ratio and the constants
$A_i$, $B_i$, and $C_i$ read
\begin{subequations}
\begin{align}
A_1 =&\; 1 + \frac{1}{c^2} \left[ v^2 \left( \frac{29}{42} - \frac{29
\nu}{14} \right) + \frac{Gm}{r} \left( -\frac{5}{7} + \frac{8\nu}{7}
\right) \right] \,,\\
A_2 =&\; -\frac{4}{7} + \frac{12\nu}{7} \,,\\
A_3 =&\; \frac{11}{21} - \frac{11\nu}{7} \,,\\
B_1 =&\; 1 + \frac{1}{c^2} \left[ v^2 \left( \frac{5}{6} - \frac{19\nu}{6}
\right) + \frac{Gm}{r} \left( -\frac{5}{6} + \frac{13\nu}{6} \right)
\right] \,,\\
B_2 =&\; -(1-2\nu) \,,\\
B_3 =&\; 1-2\nu \,,\\
C_1 =&\; 1 + \frac{1}{c^2} \left[ v^2 \left( \frac{13}{28} - \frac{17
\nu}{7} \right) + \frac{Gm}{r} \left( \frac{27}{14} + \frac{15\nu}{7}
\right) \right] \,,\\
C_2 =&\; \frac{5}{28} (1-2\nu) \,.
\end{align}
\end{subequations}
\end{widetext}
\subsection{Quasi-Keplerian parametrization}\label{sec: keplerian
parametrization}
The expressions in Eqs.~(\ref{eq: I_L}--\ref{eq: J_L}) in terms of the
variables ($r$, $\dot{r}$, $\phi$, $\dot{\phi}$) are the most general ones.
Now, when calculating the tail integrals, we should replace the latter
quantities by their actual analytic time evolution for eccentric orbits. At
the third post-Newtonian order, the conservative orbital dynamics of compact
binaries in eccentric orbits is specified by providing the following
generalized quasi-Keplerian parametrization~\cite{memmesheimer-2004} for the
dynamical variables $r$ and $\phi$:
\begin{subequations}\label{eq: quasi-keplerian}
\begin{align}
r =\;& a_r \left(1 - e_r \cos u \right) \,,\\
\phi - \phi_{0} =\;& (1 + k ) v + \left(f_{4\phi} + f_{6\phi} \right) \sin
(2v) \nonumber\\
&+ \left(g_{4\phi} + g_{6\phi} \right) \sin (3v) + i_{6\phi}\sin (4v)
\nonumber\\
&+ h_{6\phi} \sin (5v) \label{eq: quasi-keplerian phi} \,,\\
\text{where} \quad
v =&\; 2 \arctan \left[\left( \frac{ 1 + e_{\phi} }{1 - e_{\phi}}
\right)^{1/2} \tan \frac{u}{2} \right] \,.
\end{align}
\end{subequations}
An interesting feature in the above equations is the presence of different
eccentricity parameters $e_r$ and $e_\phi$, introduced in such a way that the
parametrization looks ``Keplerian''. The parameter $k$ is nothing but the
periastron advance per orbital revolution. The parameters $a_r$, $e_r$, and
$e_\phi$ are the PN-accurate semi-major axis and the radial and angular
eccentricities, while $f_{4\phi}$, $f_{6\phi}$, $g_{4\phi}$, $g_{6\phi}$,
$i_{6\phi}$, and $h_{6\phi}$ are some orbital functions of the energy and
angular momentum that enter at the 2PN and 3PN orders. The explicit expressions
are available in Ref.~\cite{memmesheimer-2004}.
The eccentric anomaly $u$ is linked to the mean anomaly $l$ through the
3PN-accurate Kepler equation
\begin{align}\label{eq: 3PN_KE}
l =&\; u - e_t \sin u + \left(g_{4t} + g_{6t} \right)(v-u) \nonumber\\
&+ \left(f_{4t} + f_{6t} \right)\sin v + i_{6t} \sin (2v) + h_{6t} \sin
(3v)\,.
\end{align}
Here, $e_t$ is another eccentricity parameter, usually called the time
eccentricity, and the functions $g_{4t}$, $g_{6t}$, $f_{4t}$, $f_{6t}$,
$i_{6t}$, and $h_{6t}$ are additional 2PN and 3PN orbital functions of the
energy and angular momentum. Together, Eqs.~(\ref{eq: quasi-keplerian}) and
(\ref{eq: 3PN_KE}) fully parametrize the conservative orbital dynamics of
compact binaries on eccentric orbits. Note that we choose to express all our
equations in terms of the post-Newtonian parameter $x = (Gm\omega / c^3)^{2/3}$
and the time eccentricity $e = e_t$, with $\omega = (1+k)n$ being the orbital
frequency and $n = 2 \pi / P$ the mean motion associated with the period $P$.
In the next section, we shall introduce post-adiabatic corrections to this
quasi-Keplerian description. We will then have to replace the parameters ($x$,
$e$) with their slowly evolving counterparts ($\bar{x}$, $\bar{e}$).
The appearance of the periastron precession at first post-Newtonian order
introduces a double periodic motion on two timescales: the orbital timescale
and the precession timescale. It is thus customary to split the phase $\phi$
into an angle $\lambda$ that is linear in $l$ and an oscillatory part $W(l)$
that is $2\pi$-periodic in $l$~\cite{gopakumar-2002, damour-2004,
tessmer-2007}. This leads us to write
\begin{subequations}
\begin{align}
\phi =\;& \lambda + W(l) \,,\\
\lambda =\;& \phi_0 + (1+k)l \,,\\
W(l) =\;& (1+k)(v-l) + \left(f_{4\phi} + f_{6\phi} \right) \sin (2v)
\nonumber\\
&+ \left(g_{4\phi} + g_{6\phi} \right) \sin (3v) + i_{6\phi}\sin (4v)
\nonumber\\
&+ h_{6\phi} \sin (5v) \label{eq: quasi-keplerian W}\,,
\end{align}
\end{subequations}
with $\phi_0$ denoting the initial polar angle at $u=0$.
To evaluate the various time integrals appearing in the tail contributions to
the waveform, we will need explicit expressions for $u$ and $\phi$ in terms of
the angles $l$ and $\lambda$. This can be achieved by solving the Kepler
equation (\ref{eq: 3PN_KE}). We employ the method described in
Ref.~\cite{boetzel-2017}, which yields
\begin{subequations}\label{eq: KE solution}
\begin{align}
u &= l + \sum_{s=1}^{\infty} A_s \sin(sl) \,,\\
A_s &= \frac{2}{s} J_s(s e_t) + \sum_{j=1}^{\infty} \alpha_j \left\{
J_{s+j}(s e_t) - J_{s-j}(s e_t) \right\} \,,
\end{align}
\end{subequations}
where $J_s$ denotes the Bessel function of the first kind and the constants
$\alpha_j$ are PN-accurate functions of the energy and angular momentum
entering at the second post-Newtonian order. It remains to
display an explicit expression for the $2\pi$-periodic function $W(l)$ in terms
of $l$,
\begin{subequations}\label{eq: W solution}
\begin{align}
W(l) =\;& \sum_{s=1}^{\infty} \mathcal{W}_s \sin(sl) \,,\\
\mathcal{W}_s =\;& (1+k) B_s + \left(f_{4\phi} + f_{6\phi} \right)
\sigma_s^{2v} \nonumber\\
&+ \left(g_{4\phi} + g_{6\phi} \right) \sigma_s^{3v} + i_{6\phi}
\sigma_s^{4v} + h_{6\phi} \sigma_s^{5v} \,,
\end{align}
\end{subequations}
with the constants $B_s$ and $\sigma_s^{jv}$ given in Eqs.~(C8) and (32b) of
Ref.~\cite{boetzel-2017}. We finally find, expanding to $\mathcal{O}(x^3)$ and
$\mathcal{O}(e)$,
\begin{subequations}\label{eq: KE solution expanded}
\begin{align}
\label{eq: u solution}
u =&\; l + e \sin(l) + x^2 \left(-\frac{15}{2} + \frac{9\nu}{8} +
\frac{\nu^2}{8} \right) e\sin(l) \nonumber\\
&+ x^3 \left( -55 + \frac{104593\nu}{1680} + \frac{3\nu^2}{4} +
\frac{\nu^3}{24} \right) e \sin(l) \,,\\
\label{eq: phi solution}
\phi =&\; \lambda + 2 e \sin(l) + x (10-\nu) e\sin(l) \nonumber\\
&+ x^2 \left( 52 - \frac{235\nu}{12} + \frac{\nu^2}{12} \right) e
\sin(l) \nonumber\\
&+ x^3 \bigg( 292 + \left(-\frac{420131}{840} + \frac{287\pi^2}{32}
\right) \nu \nonumber\\
&+ \frac{521\nu^2}{24} + \frac{\nu^3}{24} \bigg) e\sin(l) \,.
\end{align}
\end{subequations}
We shall use these expressions to write the source multipole moments ($I_L$,
$J_L$) in terms of $l$ and $\lambda$.
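At Newtonian order, where the $\alpha_j$ vanish, the series solution reduces to the classic Fourier--Bessel expansion and is straightforward to evaluate numerically; the Python sketch below also includes a direct Newton--Raphson inversion of $l = u - e \sin u$ as an independent check.
\begin{verbatim}
import numpy as np
from scipy.special import jv

def ecc_anomaly_bessel(l, e, s_max=50):
    # u = l + sum_s (2/s) J_s(s e) sin(s l); Newtonian order only,
    # i.e. the alpha_j corrections entering at 2PN are dropped.
    u = l
    for s in range(1, s_max + 1):
        u += (2.0 / s) * jv(s, s * e) * np.sin(s * l)
    return u

def ecc_anomaly_newton(l, e, tol=1e-12, it_max=50):
    # Direct Newton-Raphson inversion of the Kepler equation l = u - e sin u.
    u = l
    for _ in range(it_max):
        du = (u - e * np.sin(u) - l) / (1.0 - e * np.cos(u))
        u -= du
        if abs(du) < tol:
            break
    return u
\end{verbatim}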
\section{Phasing of the orbital elements}\label{sec: phasing}
So far, we used the conservative quasi-Keplerian description of the dynamics of
nonspinning compact binaries. This analytic parametrization is possible due to
the fact that the conservative problem admits four integrals of motion, which
reduce to two when the problem is restricted to the orbital plane. In our case,
those two integrals are encoded in the two intrinsic constants $x$ and $e=e_t$.
There also exist two extrinsic constants $c_l$ and $c_\lambda$,
\begin{subequations}
\begin{align}
l(t) &= n(t-t_0) + c_l \,, \\
\lambda(t) &= (1+k) n (t-t_0) + c_\lambda \,,
\end{align}
\end{subequations}
corresponding to the initial values of the two phase angles $l$ and $\lambda$,
respectively. We now move to include phasing effects due to energy and angular
momentum loss into this quasi-Keplerian parametrization. An efficient
description of the dynamics of nonspinning compact binaries with phasing
is presented in Refs.~\cite{damour-2004,koenigsdoerffer-2006}. Following
Ref.~\cite{damour-1983}, they employ a method of \emph{variation of constants}
where the constants of motion of the conservative problem ($x$, $e$, $c_l$,
$c_\lambda$) are treated as time-varying quantities. Specifically, the
post-Newtonian parameter $x = x(t)$ and the time eccentricity $e = e(t)$ are
now genuine functions of time, while the angles $l$ and $\lambda$ are given by
\begin{subequations}
\begin{align}
l(t) &= \int_{t_0}^{t} n(t') \mathrm{d} t' + c_l(t) \,, \\
\lambda(t) &= \int_{t_0}^{t} [1+k(t')] n(t') \mathrm{d} t' + c_\lambda(t) \,.
\end{align}
\end{subequations}
To obtain the evolution of the functions $c_\alpha(t) = (x(t), e(t), c_l(t),
c_\lambda(t))$, one starts from the PN-accurate equations of motion
\begin{subequations}
\begin{align}
\dot{\bf{x}} &= \bf{v} \,, \\
\dot{\bf{v}} &= \mathcal{A}_0(\bf{x}, \bf{v}) + \mathcal{A}'(\bf{x},
\bf{v}) \,,
\end{align}
\end{subequations}
with $\mathcal{A}_0$ being the conservative and $\mathcal{A}'$ the dissipative
piece of the equations of motion. These equations are first solved neglecting
the dissipative term $\mathcal{A}'$, leading to the conservative
quasi-Keplerian description of Sec.~\ref{sec: keplerian parametrization}. The
full solution including radiation reaction is then found by varying the
``constants'' $c_\alpha(t)$, leading to differential equations of the form
\begin{align}
\frac{\mathrm{d} c_\alpha}{\mathrm{d} l} &= G_\alpha(l, c_\alpha) \,.
\end{align}
One can then introduce a two-scale decomposition of all phase variables
$c_\alpha(l)$ into a slow (radiation-reaction timescale) secular drift and a
fast (orbital timescale) periodic oscillation as
\begin{align}
c_\alpha(t) = \bar{c}_\alpha(t) + \tilde{c}_\alpha(t) \,,
\end{align}
with
\begin{subequations}
\begin{align}
\frac{\mathrm{d}\bar{c}_\alpha}{\mathrm{d} l} &= \bar{G}_\alpha(l, c_\alpha) \label{eq:
secular evolution} \,,\\
\frac{\mathrm{d}\tilde{c}_\alpha}{\mathrm{d} l} &= \tilde{G}_\alpha(l, c_\alpha) =
G_\alpha(l, c_\alpha) - \bar{G}_\alpha(l, c_\alpha) \label{eq: osc
evolution} \,,
\end{align}
\end{subequations}
$\bar{G}_\alpha$ and $\tilde{G}_\alpha$ here being the orbital averaged and
oscillatory pieces of $G_\alpha$. The secular evolution of the orbital
elements (\ref{eq: secular evolution}) can also be derived from the heuristic
balance equations $\langle \mathrm{d} E/\mathrm{d} t \rangle = - \langle \mathcal{F} \rangle$ and
$\langle \mathrm{d} J/\mathrm{d} t \rangle = - \langle \mathcal{G} \rangle$, where $\mathcal{F}$ is the
energy flux and $\mathcal{G}$ the angular momentum flux. This approach is
discussed at the 3PN order in a series of papers~\cite{arun-2008-1,
arun-2008-2, arun-2009}, which notably take care of the hereditary
contributions to the energy and angular momentum fluxes.
After the above procedure is applied, we have
\begin{subequations}\label{eq: two-scale decomp}
\begin{align}
x(t) &= \bar{x}(t) + \tilde{x}(t) \,, \\
e(t) &= \bar{e}(t) + \tilde{e}(t) \,, \\
c_l(t) &= \bar{c}_l + \tilde{c}_l(t) \,, \\
c_\lambda(t) &= \bar{c}_\lambda + \tilde{c}_\lambda(t) \,,
\end{align}
\end{subequations}
where $\bar{c}_l$ and $\bar{c}_\lambda$ are found to be true integration constants. The
secular evolution of the orbital elements $\bar{n}(t)$, $\bar{k}(t)$, $\bar{x}(t)$,
and $\bar{e}(t)$ is given in Sec.~VI of Ref.~\cite{arun-2009}. At leading order,
these equations reduce to the famous formulas by Peters and
Mathews~\cite{peters-1963,peters-1964}:
\begin{subequations}\label{eq: peters-mathews}
\begin{align}
\frac{\mathrm{d}\bar{x}}{\mathrm{d} t} &= \frac{c^3 \nu}{Gm} \frac{\bar{x}^5}{(1-\bar{e}^2)^{7/2}}
\left( \frac{64}{5} + \frac{584}{15} \bar{e}^2 + \frac{74}{15} \bar{e}^4
\right) \,,\\
\frac{\mathrm{d}\bar{e}}{\mathrm{d} t} &= -\frac{c^3 \nu}{Gm} \frac{\bar{e} \,
\bar{x}^4}{(1-\bar{e}^2)^{5/2}} \left( \frac{304}{15} + \frac{121}{15} \bar{e}^2
\right) \,.
\end{align}
\end{subequations}
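These coupled equations are readily integrated numerically. The Python sketch below does so in SI units for an illustrative equal-mass binary; the initial data and the stopping condition are arbitrary choices for demonstration.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

G, c = 6.674e-11, 2.998e8  # SI units

def peters_mathews(t, y, m, nu):
    # Leading-order secular evolution of (xbar, ebar) from the equations above.
    x, e = y
    pre = c**3 * nu / (G * m)
    dx = pre * x**5 / (1 - e**2)**3.5 * (64/5 + 584/15 * e**2 + 74/15 * e**4)
    de = -pre * e * x**4 / (1 - e**2)**2.5 * (304/15 + 121/15 * e**2)
    return [dx, de]

def stop(t, y, m, nu):        # terminate once xbar reaches 0.1
    return y[0] - 0.1
stop.terminal = True

m, nu = 60 * 1.989e30, 0.25   # illustrative 30+30 solar-mass binary
sol = solve_ivp(peters_mathews, [0, 1e4], [0.01, 0.4], args=(m, nu),
                events=stop, rtol=1e-10)
\end{verbatim}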
The periodic variations in Eqs.~(\ref{eq: two-scale decomp}) can be computed
from Eqs.~(34) and (35) of Ref.~\cite{koenigsdoerffer-2006} and are explicitly
given in Eqs.~(36). Note, though, that there is an error in the expressions for
$\tilde{c}_l$ and $\tilde{c}_\lambda$ provided by Eqs.~(36c) and (36d) of that paper. Indeed, the
periodic variations $\tilde{c}_l$ and $\tilde{c}_\lambda$ refer to the zero-average oscillatory
contributions to $c_l$ and $c_\lambda$. They are found by integrating Eqs.~(35)
and then subtracting the orbital average, i.e., finding the unique zero-average
primitive, so that we are left with a purely oscillatory solution. Now, we find
that, unfortunately, the explicit orbital averages of Eqs.~(36c) and (36d) in
Ref.~\cite{koenigsdoerffer-2006} do not give zero. This is because the
averaging of these terms is performed over the eccentric anomaly $\mathrm{d} u$,
whereas the orbital averaging requires integrating temporal variations over an
orbital period and, therefore, should be done using $\mathrm{d} l = (1 - e \cos u) \mathrm{d}
u$. We show below the corrected expressions for $\tilde{c}_l$ and $\tilde{c}_\lambda$ in terms of
$e_t = \bar{e}$, $\xi = \bar{x}^{3/2}$ and $u = \bar{u}$, as they appear in
Ref.~\cite{koenigsdoerffer-2006}:
\begin{widetext}
\begin{subequations}\label{eq: corrected cl}
\begin{align}
\tilde{c}_l =\;& -\frac{2 \xi^{5/3} \nu}{45 e_t^2} \bigg\{ \frac{144 e_t^2}{\chi}
+ \frac{18 - 258 e_t^2}{\chi^2} + \frac{-56 + 92 e_t^2 - 36
e_t^4}{\chi^3} + \frac{105 (1 - e_t^2)^2}{\chi^4} \nonumber\\
&- \frac{1}{2 (1 - e_t^2)^{1/2}} \left[ 134 - 339 e_t^2 + 288 e_t^2
\sqrt{1 - e_t^2} \right] \bigg\} + \mathcal{O}(\xi^{7/3}) \,,\\
\tilde{c}_\lambda =\;& \frac{2 \xi^{5/3} \nu}{45 e_t^2} \bigg\{ \left[
\frac{18}{\chi^2} - \frac{56 - 36 e_t^2}{\chi^3} + \frac{105 (1 -
e_t^2)}{\chi^4} \right] \sqrt{1 - e_t^2} - \frac{144 e_t^2}{\chi} -
\frac{18 - 258 e_t^2}{\chi^2} + \frac{56 - 92 e_t^2 + 36 e_t^4}{\chi^3}
- \frac{105 (1 - e_t^2)^2}{\chi^4} \nonumber\\
&- \frac{1}{2 (1 - e_t^2)} \left[ 134 - 147 e_t^2 + 288 e_t^4 - \left(
134 - 339 e_t^2 \right) \sqrt{1 - e_t^2} \right] \bigg\} +
\mathcal{O}(\xi^{7/3}) \,.
\end{align}
\end{subequations}
\end{widetext}
Similarly, we split the angles $l$ and $\lambda$ into orbital averaged and
oscillatory contributions
\begin{subequations}\label{eq: two-scale decomp 2}
\begin{align}
l(t) &= \bar{l}(t) + \tilde{l}(t) \,, \\
\lambda(t) &= \bar{\lambda}(t) + \tilde{\lambda}(t) \,,
\end{align}
\end{subequations}
with $\bar{l}(t)$ and $\bar{\lambda}(t)$ defined by
\begin{subequations}\label{eq: secular l and la}
\begin{align}
\bar{l}(t) &= \int_{t_0}^{t} \bar{n}(t')\mathrm{d} t' + \bar{c}_l \,,\\
\bar{\lambda}(t) &= \int_{t_0}^{t} [1+\bar{k}(t')] \bar{n}(t') \mathrm{d} t' + \bar{c}_\lambda \,.
\end{align}
\end{subequations}
The oscillatory contributions $\tilde{l}$ and $\tilde{\lambda}$ are calculated as in Eqs.~(39)
of Ref.~\cite{koenigsdoerffer-2006},
\begin{subequations}
\begin{align}
\tilde{l}(\bar{l}) &= \int \frac{\tilde{n}}{\bar{n}} \mathrm{d} l + \tilde{c}_l(\bar{l}) \,, \\
\tilde{\lambda}(\bar{l}) &= \int \left[ (1 + \bar{k}) \frac{\tilde{n}}{\bar{n}} +
\tilde{k} \right] \mathrm{d} l + \tilde{c}_\lambda(\bar{l}) \,,
\end{align}
\end{subequations}
where $\tilde{k} = (\partial k / \partial n) \tilde{n} + (\partial k / \partial
e_t) \tilde{e}_t$ denotes the periodic part of $k$ and the integrals again mean the
unique zero-average primitives. Equations~(40) for $\tilde{l}$ and $\tilde{\lambda}$ in
Ref.~\cite{koenigsdoerffer-2006} are erroneous, since they do not
average to zero either. We list below the corrected expressions:
\begin{widetext}
\begin{subequations}\label{eq: corrected lp}
\begin{align}
\tilde{l}(l) =\;& \frac{\xi^{5/3} \nu}{15 (1 - e_t^2)^3} \bigg\{ (602 + 673
e_t^2) \chi + (314 - 203 e_t^2 - 111 e_t^4) \ln \chi - (602 + 673
e_t^2) + \frac{-98 + 124 e_t^2 + 46 e_t^4 - 72 e_t^6}{\chi} \nonumber\\
&- \frac{105 (1 - e_t^2)^3}{\chi^2} - \frac{1}{2} \bigg[ 432 + 444
e_t^2 + 543 e_t^4 - 144 e_t^6 - (838 - 826 e_t^2 - 12 e_t^4) \sqrt{1 -
e_t^2} + (628 - 406 e_t^2 - 222 e_t^4) \nonumber\\
&\times \ln \bigg( \frac{1 + \sqrt{1 - e_t^2}}{2} \bigg) \bigg] \bigg\}
+ \frac{\xi^{5/3} \nu}{5 (1 - e_t^2)^{7/2}} \left( 96 + 292 e_t^2 + 37
e_t^4 \right) \int \left[ 2 \tan^{-1} \left( \frac{\beta_t \sin u}{1 -
\beta_t \cos u} \right) + e_t \sin u \right] \chi \mathrm{d} u \nonumber\\
&+ \tilde{c}_l(l) + \mathcal{O}(\xi^{7/3}) \,,\\
\tilde{\lambda}(l) =\;& \tilde{l}(l) - \tilde{c}_l(l) + \tilde{c}_\lambda(l) + \mathcal{O}(\xi^{7/3}) \,.
\end{align}
\end{subequations}
\end{widetext}
The errors in Eqs.~(36c), (36d), and (40) of Ref.~\cite{koenigsdoerffer-2006},
though, do not affect the other equations of that work. We refer to
Appendix~\ref{sec: integrals} for some integral relations necessary to compute
the zero-average primitives.
We finally give expressions for the oscillatory contributions $\tilde{x}$, $\tilde{e}$,
$\tilde{l}$, and $\tilde{\lambda}$ in terms of the slowly evolving variables $\bar{x}$, $\bar{e}$, and
$\bar{l}$. We list here the expressions to $\mathcal{O}(\bar{e}^2)$:
\begin{subequations}\label{eq: periodic variations}
\begin{align}
\tilde{x}(t) =\;& \nu \bar{x}^{7/2} \bar{e} \bigg[ 80 \sin(\bar{l}) + \frac{1436}{15} \bar{e}
\sin(2\bar{l}) \nonumber\\
&+ \bar{e}^2 \left( \frac{4538}{15} \sin(\bar{l}) + \frac{6022}{45} \sin(3\bar{l})
\right) \bigg] \nonumber\\
&+ \mathcal{O}(\bar{x}^{9/2}) \,, \\
\tilde{e}(t) =\;& -\nu \bar{x}^{5/2} \bigg[ \frac{64}{5} \sin(\bar{l}) + \frac{352}{15}
\bar{e} \sin(2\bar{l}) \nonumber\\
&+ \bar{e}^2 \left( \frac{1138}{15} \sin(\bar{l}) + \frac{358}{9} \sin(3\bar{l})
\right) \bigg] \nonumber\\
&+ \mathcal{O}(\bar{x}^{7/2}) \,,\\
\tilde{l}(t) =\;& -\nu \bar{x}^{5/2} \bigg[ \frac{64}{5\bar{e}} \cos(\bar{l}) +
\frac{352}{15} \cos(2\bar{l}) \nonumber\\
&+ \bar{e} \left( \frac{1654}{15} \cos(\bar{l}) + \frac{358}{9} \cos(3\bar{l})
\right) \nonumber\\
&+ \bar{e}^2 \left( \frac{694}{15} \cos(2\bar{l}) + \frac{1289}{20} \cos(4\bar{l})
\right) \bigg] \nonumber\\
&+ \mathcal{O}(\bar{x}^{7/2}) \,,\\
\tilde{\lambda}(t) =\;& -\nu \bar{x}^{5/2} \bigg[ \frac{296}{3} \bar{e} \cos(\bar{l}) +
\frac{199}{5} \bar{e}^2 \cos(2\bar{l}) \bigg] \nonumber\\
&+ \mathcal{O}(\bar{x}^{7/2}) \,.
\end{align}
\end{subequations}
These results agree with Eqs.~(4.9) of Ref.~\cite{moore-2016}, except for two
constant terms in $\tilde{l}(t)$ and $\tilde{\lambda}(t)$ that stem from the incorrect
averaging discussed above. Indeed, all our results are purely oscillatory,
zero-average functions and thus correctly describe the periodic post-adiabatic corrections.
Given the waveform in terms of the conservative quasi-Keplerian
parametrization, one can then include post-adiabatic effects by making the
simple substitutions
\begin{subequations}\label{eq: phasing subst}
\begin{align}
x &\rightarrow \bar{x} + \tilde{x} \,, \\
e &\rightarrow \bar{e} + \tilde{e} \,, \\
l &\rightarrow \bar{l} + \tilde{l} \,, \\
\lambda &\rightarrow \bar{\lambda} + \tilde{\lambda} \,.
\end{align}
\end{subequations}
As all of the periodic (tilde) contributions are of relative 2.5PN order
compared to the slowly evolving (bar) parts, we only have to make these
substitutions at leading Newtonian and 0.5PN order in the $h^{\ell m}$ to be
accurate to 3PN order. In all higher-order terms, we can simply replace the
variables ($x$, $e$, $l$, $\lambda$) by their secular evolving parts ($\bar{x}$,
$\bar{e}$, $\bar{l}$, $\bar{\lambda}$).
Note that Eq.~(\ref{eq: quasi-keplerian phi}) gives the relation between the
geometrical phase $\phi$ and the angles $l$ and $\lambda$. We can rewrite this
relation in terms of the slowly evolving angles $\bar{l}$ and $\bar{\lambda}$ and find
\begin{align}\label{eq: quasi-keplerian phi bar}
\phi =\;& \lambda + W(l) = \bar{\lambda} + \bar{W}(\bar{l}) + \tilde{\lambda} + (\tilde{v} - \tilde{l}) \,,
\end{align}
where $\bar{W}(\bar{l})$ is given by Eq.~(\ref{eq: quasi-keplerian W}), but with
all quantities on the RHS replaced with their secular evolving parts, and the
periodic variation $\tilde{v}$ of the true anomaly is given by
\begin{align}
\tilde{v} &= \frac{\partial \bar{v}}{\partial \bar{u}} \, \tilde{u} + \frac{\partial \bar{v}}{\partial \bar{e}} \, \tilde{e}
\nonumber\\
&= \frac{\sqrt{1 - \bar{e}^2}}{1 - \bar{e} \cos \bar{u}} \tilde{u} + \frac{\sin
\bar{u}}{\sqrt{1 - \bar{e}^2} (1 - \bar{e} \cos \bar{u})} \tilde{e} \,.
\end{align}
Expanded to $\mathcal{O}(\bar{x}^3)$ and $\mathcal{O}(\bar{e})$ this finally gives us
\begin{align}
\phi =&\; \bar{\lambda} + 2 \bar{e} \sin(\bar{l}) + \bar{x} (10-\nu) \bar{e} \sin(\bar{l}) \nonumber\\
&+ \bar{x}^2 \left( 52 - \frac{235\nu}{12} + \frac{\nu^2}{12} \right) \bar{e}
\sin(\bar{l}) \nonumber\\
&- \bar{x}^{5/2} \nu \left( \frac{128}{5} + \frac{888}{5} \bar{e} \cos(\bar{l})
\right) \nonumber\\
&+ \bar{x}^3 \bigg( 292 + \left(-\frac{420131}{840} + \frac{287\pi^2}{32}
\right) \nu \nonumber\\
&+ \frac{521\nu^2}{24} + \frac{\nu^3}{24} \bigg) \bar{e} \sin(\bar{l}) \,.
\end{align}
This is very similar to Eq.~(\ref{eq: phi solution}), but with the quantities
on the RHS replaced by their slowly evolving parts and with additional terms
at 2.5PN order.
\section{Hereditary Contributions}\label{sec: hereditary}
\subsection{Tail integrals}
Note that tail effects start appearing at 1.5PN order, and thus post-adiabatic
corrections to those will only enter the waveform at 4PN order and beyond. We
can thus neglect any radiation-reaction effects in this section and only
consider the conservative problem. At the end, we can then replace all
variables ($x$, $e$, $l$, $\lambda$) with their slowly evolving counterparts
($\bar{x}$, $\bar{e}$, $\bar{l}$, $\bar{\lambda}$) to get the secular evolving amplitudes.
We now employ the quasi-Keplerian parametrization introduced in Sec.~\ref{sec:
keplerian parametrization}. As we use the two angles $l$ and $\lambda$ to
parameterize the orbital motion, time derivatives of the source multipole
moments ($I_L$, $J_L$) can be calculated as
\begin{align}
\frac{\mathrm{d}}{\mathrm{d} t} =&\; n \left( \frac{\mathrm{d}}{\mathrm{d} l} + (1+k)
\frac{\mathrm{d}}{\mathrm{d}\lambda} \right) \,.
\end{align}
We use a low-eccentricity expansion to simplify expressions, so we expand
everything in powers of both $x$ and $e$. Inserting Eqs.~(\ref{eq: KE solution
expanded}) into the source multipole moments (\ref{eq: I_L}--\ref{eq: J_L}),
and substituting those into the radiative moments (\ref{eq: U_L}--\ref{eq:
V_L}) we can then easily calculate the spherical harmonic modes in terms of
$l$ and $\lambda$. We find, e.g., for the dominant $h^{22}_\textnormal{tail}$ mode
\begin{align}
h^{22}_\textnormal{tail} =&\; \frac{8 G m \nu}{c^2 R} x^{5/2} \sqrt{\frac{\pi}{5}}
\frac{x^{3/2} c^3}{Gm} \nonumber\\
&\times \int_{0}^{\infty} \mathrm{d}\tau\, \mathrm{e}^{-2\mathrm{i} (\lambda -
\lambda(\tau))} \left[ \ln\left( \frac{\tau}{2\tau_0} \right) +
\frac{11}{12} \right] \nonumber\\
&\times \bigg[ -8 + e \left( \frac{3}{2} \mathrm{e}^{\mathrm{i} (l - l(\tau))} -
\frac{81}{2} \mathrm{e}^{-\mathrm{i} (l - l(\tau))} \right) \bigg] \,,
\end{align}
where $l(\tau) = n\tau$ and $\lambda(\tau) = (1+k)n\tau$ and where we restrict
ourselves to the leading post-Newtonian order and $\mathcal{O}(e)$. All other modes
can be calculated similarly and be given as integrals over past history. These
integrals can then be solved using the standard formulas
\begin{subequations}
\begin{align}
\int_{0}^{\infty} \mathrm{d}\tau\,\mathrm{e}^{-\mathrm{i}\omega \tau} =&\; -\frac{\mathrm{i}}{\omega}
\,,\\
\int_{0}^{\infty} \mathrm{d}\tau\,\mathrm{e}^{-\mathrm{i}\omega \tau} \ln\left(
\frac{\tau}{2\tau_0} \right)=&\; \nonumber\\
-\frac{1}{\omega} \bigg( \frac{\pi}{2} \sign{\omega} \;-&\; \mathrm{i}
\left[\ln(2|\omega| \tau_0) + \gamma_\textnormal{E} \right] \bigg) \,,\\
\int_{0}^{\infty} \mathrm{d}\tau\,\mathrm{e}^{-\mathrm{i}\omega \tau} \ln^2\left(
\frac{\tau}{2\tau_0} \right)=&\; \nonumber\\
-\frac{\mathrm{i}}{\omega} \bigg( \frac{\pi^2}{6} +\bigg( \frac{\pi}{2}
\sign{\omega} \;-&\; \mathrm{i} \left[ \ln(2|\omega| \tau_0) + \gamma_\textnormal{E} \right]
\bigg)^2 \bigg) \,.
\end{align}
\end{subequations}
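These closed forms are easily checked numerically by damping the upper limit with a regulator $\mathrm{e}^{-\epsilon\tau}$ and letting $\epsilon \rightarrow 0$. A minimal Python sketch (illustrative; the values of $\omega > 0$, $\tau_0$, $\epsilon$ and the cutoff $T$ are arbitrary, chosen such that $\epsilon T \gg 1$) for the single-logarithm integral reads:
\begin{verbatim}
# Sketch: numerical check of the integral of exp(-i w t) ln(t/(2 t0)).
import numpy as np
from scipy.integrate import quad

om, tau0, eps, T = 2.0, 1.0, 2e-3, 1e4
f = lambda t: np.log(t/(2*tau0)) * np.exp(-eps*t)  # regulated integrand
# oscillatory quadrature; the integrable log singularity at t = 0 is
# handled by the adaptive routine
re = quad(f, 0, T, weight='cos', wvar=om, limit=1000)[0]
im = -quad(f, 0, T, weight='sin', wvar=om, limit=1000)[0]

exact = -(np.pi/2 - 1j*(np.log(2*om*tau0) + np.euler_gamma))/om
print(re + 1j*im, exact)   # agree up to O(eps/om) corrections
\end{verbatim}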
Note that for terms of the form $\int \mathrm{d}\tau\, \mathrm{e}^{-\mathrm{i} (\alpha \, l(\tau) +
\beta \, \lambda(\tau))} [\dots]$ we have $\omega = n(\alpha + (1+k) \beta)$.
We are now able to give the tail contributions to the spherical harmonic modes
in terms of the parameters $x$, $e = e_t$ and the angles $\phi$ and $l$. The
modes have the following structure:
\begin{align}
h^{\ell m}_\textnormal{tail} =&\; \frac{8 G m \nu}{c^2 R} x \sqrt{\frac{\pi}{5}}
\mathrm{e}^{-\mathrm{i} m\phi} H^{\ell m}_\textnormal{tail} \,.
\end{align}
The various contributions to, e.g., the $H^{22}_\textnormal{tail}$ mode are given to
$\mathcal{O}(e)$ by
\begin{widetext}
\begin{subequations}\label{eq: h22-tail}
\begin{align}
(H^{22}_\textnormal{tail})_\textnormal{1.5PN} =&\; x^{3/2} \Bigg( 2 \pi + 6 \mathrm{i} \ln
\left( \frac{x}{x_0'} \right) + e \bigg\{ \mathrm{e}^{-\mathrm{i} l} \left[ \frac{11
\pi}{4} + \frac{27 \mathrm{i}}{2} \ln \left( \frac{3}{2} \right) +
\frac{33}{4} \mathrm{i} \ln \left( \frac{x}{x_0'} \right) \right] \nonumber\\
&+\mathrm{e}^{\mathrm{i} l} \left[\frac{13 \pi }{4} + \frac{3 \mathrm{i}}{2} \ln (2) +
\frac{39}{4} \mathrm{i} \ln \left( \frac{x}{x_0'} \right) \right] \bigg\}
\Bigg) \,,\\
%
(H^{22}_\textnormal{tail})_\textnormal{2.5PN} =&\; x^{5/2} \Bigg( \pi \left(
-\frac{107}{21} + \frac{34 \nu}{21} \right) + \left( -\frac{107 \mathrm{i}}{7}
+ \frac{34 \mathrm{i} \nu}{7} \right) \ln \left( \frac{x}{x_0'} \right)
\nonumber\\
&+ e \bigg\{ \mathrm{e}^{\mathrm{i} l} \bigg[ -\frac{9 \mathrm{i}}{2} + \pi \left(
\frac{229}{168} + \frac{61 \nu}{42} \right) + \left( \frac{473 \mathrm{i}}{28}
- \frac{3 \mathrm{i} \nu}{7} \right) \ln (2) + \left( \frac{229 \mathrm{i}}{56} +
\frac{61 \mathrm{i} \nu }{14} \right) \ln \left( \frac{x}{x_0'} \right) \bigg]
\nonumber\\
&+ \mathrm{e}^{-\mathrm{i} l} \bigg[ -\frac{27 \mathrm{i}}{2} + \pi \left( -\frac{1081}{168}
+ \frac{137 \nu}{42} \right) + \left( \frac{27 \mathrm{i}}{4} + 9 \mathrm{i} \nu
\right) \ln \left(\frac{3}{2} \right) \nonumber\\
&+ \left( -\frac{1081 \mathrm{i}}{56} + \frac{137 \mathrm{i} \nu }{14} \right) \ln
\left( \frac{x}{x_0'} \right) \bigg] \bigg\} \Bigg) \,,\\
%
(H^{22}_\textnormal{tail})_\textnormal{3PN} =&\; x^3 \Bigg( -\frac{515063}{22050} +
\frac{428 \mathrm{i} \pi }{105} + \frac{2 \pi^2}{3} + \left( -\frac{428}{35} +
12 \mathrm{i} \pi \right) \ln \left( \frac{x}{x_0'} \right) - 18 \ln^2 \left(
\frac{x}{x_0'} \right) \nonumber\\
&+ e \bigg\{ \mathrm{e}^{-\mathrm{i} l} \bigg[ -\frac{515063}{7200} + \frac{749 \mathrm{i}
\pi}{60} + \frac{49 \pi^2}{24} + \left( -\frac{2889}{70} + \frac{81 \mathrm{i}
\pi}{2} \right) \ln \left( \frac{3}{2} \right) - \frac{81}{2} \ln^2
\left( \frac{3}{2} \right) \nonumber\\
&+ \left( -\frac{749}{20} + \frac{147 \mathrm{i} \pi }{4} - \frac{243}{2}
\ln \left( \frac{3}{2} \right) \right) \ln \left( \frac{x}{x_0'}
\right) - \frac{441}{8}\ln^2\left( \frac{x}{x_0'} \right) \bigg]
\nonumber\\
&+ \mathrm{e}^{\mathrm{i} l} \bigg[ -\frac{14936827}{352800} + \frac{3103 \mathrm{i} \pi
}{420} + \frac{29 \pi^2}{24} + \left( -\frac{107}{70} + \frac{3 \mathrm{i}
\pi}{2} \right) \ln (2) + \frac{3}{2} \ln^2(2) \nonumber\\
&+ \left( -\frac{3103}{140} + \frac{87 \mathrm{i} \pi }{4} - \frac{9}{2}
\ln(2) \right) \ln \left( \frac{x}{x_0'} \right) -\frac{261}{8} \ln^2
\left( \frac{x}{x_0'} \right) \bigg] \bigg\} \Bigg) \,.
\end{align}
\end{subequations}
\end{widetext}
Here, $x_0'$ is related to the arbitrary constant $\tau_0$ by
\begin{align}
x_0' &= \left( \frac{Gm}{c^3} \frac{\mathrm{e}^{11/12 - \gamma_\textnormal{E}}}{4 \tau_0}
\right)^{2/3} \,.
\end{align}
We list expressions for all $h^{\ell m}_\textnormal{tail}$ modes in a supplemental
\emph{Mathematica} file.
\subsection{Memory integrals}
The nonlinear memory effect arises from the nonlogarithmic integrals in
Eqs.~(\ref{eq: U_L}); e.g., for the $\ell = 2$ modes we have
\begin{align}
U_{ij}^\textnormal{mem} (t_r) =&\; -\frac{2G}{7c^5} \int_{-\infty}^{t_r} \mathrm{d}\tau\;
I_{a\langle i}^{(3)}(\tau) I_{j\rangle a}^{(3)}(\tau) \,.
\end{align}
There are two types of memory arising from these integrals: DC (or ``direct
current'') memory and oscillatory memory. The DC memory is a slowly increasing,
nonoscillatory contribution to the gravitational-wave amplitude, entering at
Newtonian order. This leads to a difference in the amplitude between early and
late times:
\begin{align}
\Delta h_\textnormal{mem} &= \lim_{t \rightarrow +\infty} h(t) - \lim_{t \rightarrow
-\infty} h(t) \,.
\end{align}
The oscillatory memory, on the other hand, is a normal periodic contribution
entering the gravitational-wave amplitude at higher PN order. In
Refs.~\cite{arun-2004} and~\cite{blanchet-2008}, the authors give expressions
for both leading-order DC and oscillatory memory in the circular limit. The
calculation of DC memory has been extended to 3PN order for circular binaries
in Ref.~\cite{favata-2009} and to Newtonian order for eccentric binaries
in Ref.~\cite{favata-2011}. In this paper, we will only briefly discuss the
leading-order contributions to the DC and oscillatory memory for eccentric
binaries, such that we can compare our results to the circular limit
in Ref.~\cite{blanchet-2008}. The complete post-Newtonian corrections to the
nonlinear memory are dealt with in a subsequent paper~\cite{ebersold-2019},
completing the hereditary contributions to the gravitational-wave amplitudes
for nonspinning eccentric binaries.
Following the same steps as in the previous section, we can calculate the
derivatives of the source moments, and we find, e.g., for the $20$-mode:
\begin{align}
h^{20}_\textnormal{DC} =&\; \frac{256}{7} \frac{G m \nu}{c^2 R} \sqrt{
\frac{\pi}{30}} \int_{-\infty}^{t_r} \mathrm{d} t\, \left( 1 + \frac{313}{48}
e^2 \right) x^5 \,.
\end{align}
We find that all DC memory modes will consist of such integrals of the form
\begin{align}
h^{\ell 0}_\textnormal{DC} \propto&\; \int_{-\infty}^{t_r} \mathrm{d} t\, x^p(t) \, e^q(t)
\,.
\end{align}
One can rewrite this as an integral over the eccentricity
\begin{align}\label{eq: hl0 mem integral}
h^{\ell 0}_\textnormal{DC} \propto&\; \int_{e_i}^{e(t_r)} \mathrm{d} e\,
\left( \frac{\mathrm{d} e}{\mathrm{d} t} \right)^{-1} x^p(e) \, e^q \,,
\end{align}
where $e_i$ is some initial eccentricity at early times. Solving the evolution
equations~(\ref{eq: peters-mathews}) to leading order, we find
\begin{align}
x(e) =&\; x_0 \left( \frac{e_0}{e} \right)^{12/19} \,,
\end{align}
where $x(e_0) = x_0$. We can insert this into Eq.~(\ref{eq: hl0 mem integral})
together with the evolution equation $\mathrm{d} e/\mathrm{d} t$ and integrate over $e$.
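The power law follows from dividing the two leading-order evolution equations~(\ref{eq: peters-mathews}): for small eccentricities,
\begin{align}
\frac{\mathrm{d} x}{\mathrm{d} e} = \frac{\mathrm{d} x / \mathrm{d} t}{\mathrm{d} e / \mathrm{d} t}
= -\frac{12}{19}\,\frac{x}{e} + \mathcal{O}(e^2) \,,
\end{align}
which integrates to $x \propto e^{-12/19}$.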
We then find DC memory at leading Newtonian order in the $20$-mode and
$40$-mode:
\begin{subequations}
\begin{align}
h^{20}_\textnormal{DC} =&\; \frac{8 G m \nu}{c^2 R} x \sqrt{\frac{\pi}{5}} \;
\frac{-5}{14 \sqrt{6}} \left\{ 1 - \left( \frac{e}{e_i} \right)^{12/19}
\right\} \,,\\
h^{40}_\textnormal{DC} =&\; \frac{8 G m \nu}{c^2 R} x \sqrt{\frac{\pi}{5}} \;
\frac{-1}{504 \sqrt{2}} \left\{ 1 - \left( \frac{e}{e_i}
\right)^{12/19} \right\} \,.
\end{align}
\end{subequations}
The time derivatives of the oscillatory modes are computed in the same way. We
find that they consist of integrals of the form
\begin{align}
h^{\ell m}_\textnormal{osc} \propto&\; \int_{-\infty}^{t_r} \mathrm{d} t\, x^p(t) \, e^q(t)
\, \mathrm{e}^{\mathrm{i} (s \lambda + r l)} \,,
\end{align}
which, since $x$ and $e$ evolve only on the slow radiation-reaction timescale,
can be integrated at leading order to give
\begin{align}
h^{\ell m}_\textnormal{osc} \propto&\; -\frac{\mathrm{i}}{n ( r + (1+k) s)} x^p \, e^q \,
\mathrm{e}^{\mathrm{i} (s \lambda + r l)} \,.
\end{align}
Note that there are oscillatory memory contributions entering the waveform at
1.5, 2, 2.5 and 3PN order. We list here only the 2.5 and 3PN terms that
survive in the circular limit, so as to compare our results to Ref.~\cite{blanchet-2008}. We
refer to our follow-up work~\cite{ebersold-2019} for a complete treatment of
nonlinear memory. The modes have the following structure:
\begin{align}
h^{\ell m}_\textnormal{osc} =&\; \frac{8 G m \nu}{c^2 R} x \sqrt{\frac{\pi}{5}}
\mathrm{e}^{-\mathrm{i} m\phi} H^{\ell m}_\textnormal{osc} \,.
\end{align}
The various contributions to $\mathcal{O}(e)$ are:
\begin{subequations}
\begin{align}
H^{31}_\textnormal{osc} =&\; \frac{-121\, x^3 \nu \Delta}{45 \sqrt{14}} \left( 1 + e
\left\{ \frac{301}{242} \mathrm{e}^{-\mathrm{i} l} + \mathrm{e}^{\mathrm{i} l} \right\} \right)
\,,\\
H^{33}_\textnormal{osc} =&\; \frac{11\, x^3 \nu \Delta}{27 \sqrt{210}} \left( 1 + e
\left\{ \frac{9}{2} \mathrm{e}^{-\mathrm{i} l} + \frac{3}{22} \mathrm{e}^{\mathrm{i} l} \right\}
\right) \,,\\
H^{44}_\textnormal{osc} =&\; \frac{\mathrm{i}\, x^{5/2} \nu}{9 \sqrt{35}} \left( 1 + e
\left\{ \frac{7}{5} \mathrm{e}^{-\mathrm{i} l} + 3 \mathrm{e}^{\mathrm{i} l} \right\} \right) \,,\\
H^{51}_\textnormal{osc} =&\; \frac{-13\, x^3 \nu \Delta}{63 \sqrt{385}} \left( 1 + e
\left\{ \frac{251}{208} \mathrm{e}^{-\mathrm{i} l} + \mathrm{e}^{\mathrm{i} l} \right\} \right)
\,,\\
H^{53}_\textnormal{osc} =&\; \frac{-x^3 \nu \Delta}{189 \sqrt{330}} \left( 1 + e
\left\{ \frac{201}{16} \mathrm{e}^{\mathrm{i} l} - \frac{369}{32} \mathrm{e}^{-\mathrm{i} l}
\right\} \right) \,,\\
H^{55}_\textnormal{osc} =&\; \frac{9\, x^3 \nu \Delta}{35 \sqrt{66}} \left( 1 + e
\left\{ \frac{2285}{1296} \mathrm{e}^{-\mathrm{i} l} + \frac{985}{288} \mathrm{e}^{\mathrm{i} l}
\right\} \right)\,.
\end{align}
\end{subequations}
\section{Constructing the full 3PN-accurate waveform}\label{sec: full waveform}
We now want to construct the full 3PN-accurate waveform valid during the
inspiral of a binary system. We begin by adding up the two contributions to the
spherical harmonic modes:
\begin{align}
h^{\ell m} &= (h^{\ell m})_\textnormal{inst} + (h^{\ell m})_\textnormal{hered} \,.
\end{align}
Note that we are still missing some memory contributions. These will be
computed in full in our follow-up work~\cite{ebersold-2019}, and we will give
expressions for the full waveform including memory there.
\subsection{Instantaneous parts}
The instantaneous parts $(h^{\ell m})_\textnormal{inst}$ of the spherical harmonic modes
for compact binaries in elliptical orbits have already been calculated to the
third post-Newtonian order in Ref.~\cite{mishra-2015}, although the results do
not include post-adiabatic corrections to the quasi-Keplerian parametrization.
They are given in terms of the constants of motion $x$ and $e = e_t$ and
parametrized by the eccentric anomaly $u$. We will rewrite these in terms of
the mean anomaly $l$ by using the solution to the Kepler equation~(\ref{eq: u
solution}). This gives us expressions for the instantaneous contributions to
the different modes in terms of the post-Newtonian parameter $x$ and the time
eccentricity $e$, parametrized by the angles $\phi$ and $l$. The modes again
have the following structure:
\begin{align}\label{eq: hlm inst}
h^{\ell m}_\textnormal{inst} =&\; \frac{8 G m \nu}{c^2 R} x \sqrt{\frac{\pi}{5}}
\mathrm{e}^{-\mathrm{i} m\phi} H^{\ell m}_\textnormal{inst} \,.
\end{align}
The various contributions to, e.g., the $H^{22}_\textnormal{inst}$ mode are given to
$\mathcal{O}(e)$ by
\begin{widetext}
\begin{subequations}\label{eq: h22 inst}
\begin{align}
(H^{22}_\textnormal{inst})_\textnormal{Newt} =&\; 1 + e \bigg\{ \frac{1}{4}
\mathrm{e}^{-\mathrm{i} l} + \frac{5}{4} \mathrm{e}^{\mathrm{i} l} \bigg\} \,,\\
%
(H^{22}_\textnormal{inst})_\textnormal{1PN} =&\; x \Bigg( -\frac{107}{42} + \frac{55
\nu}{42} + e \bigg\{ \mathrm{e}^{-\mathrm{i} l} \left[ -\frac{257}{168} +
\frac{169 \nu}{168} \right] + \mathrm{e}^{\mathrm{i} l} \left[ -\frac{31}{24} +
\frac{35 \nu}{24} \right] \bigg\} \Bigg) \,,\\
%
(H^{22}_\textnormal{inst})_\textnormal{2PN} =&\; x^2 \Bigg( -\frac{2173}{1512} -
\frac{1069 \nu}{216} + \frac{2047 \nu^2}{1512} + e \bigg\{
\mathrm{e}^{\mathrm{i} l} \left[ -\frac{2155}{252} - \frac{1655 \nu}{672} +
\frac{371 \nu^2}{288} \right] \nonumber\\
&+ \mathrm{e}^{-\mathrm{i} l} \left[ -\frac{4271}{756} - \frac{35131 \nu}{6048} +
\frac{421 \nu^2}{864} \right] \bigg\} \Bigg) \,,\\
%
(H^{22}_\textnormal{inst})_\textnormal{2.5PN} =&\; -x^{5/2} \mathrm{i} \nu \Bigg( \frac{56}{5}
+ e \bigg\{ \frac{7817}{420} \mathrm{e}^{\mathrm{i} l} + \frac{2579}{84}
\mathrm{e}^{-\mathrm{i} l} \bigg\} \Bigg) \,,\\
%
(H^{22}_\textnormal{inst})_\textnormal{3PN} =&\; x^3 \Bigg( \frac{761273}{13200} +
\left( -\frac{278185}{33264} + \frac{41 \pi^2}{96} \right) \nu -
\frac{20261 \nu^2}{2772} + \frac{114635 \nu^3}{99792} + \frac{856}{105}
\ln \left( \frac{x}{x_0} \right) \nonumber\\
&+ e \bigg\{ \mathrm{e}^{\mathrm{i} l} \left[ \frac{6148781}{75600} + \left(
-\frac{199855}{3024} + \frac{41 \pi^2}{48} \right) \nu - \frac{9967
\nu^2}{1008} + \frac{35579 \nu^3}{36288} + \frac{3103}{210} \ln \left(
\frac{x}{x_0} \right) \right] \nonumber\\
&+ \mathrm{e}^{-\mathrm{i} l} \left[ \frac{150345571}{831600} + \left(
-\frac{121717}{20790} - \frac{41 \pi^2}{192} \right) \nu - \frac{86531
\nu^2}{8316} - \frac{33331 \nu^3}{399168} + \frac{749}{30} \ln \left(
\frac{x}{x_0} \right) \right] \bigg\} \Bigg) \,,
\end{align}
\end{subequations}
\end{widetext}
where $x_0 = Gm/(c^3 \tau_0)$ is related to $x_0'$ by
\begin{align}\label{eq: logx0 relation}
\ln x_0' &= \frac{11}{18} -\frac{2}{3}\gamma_\textnormal{E} - \frac{4}{3} \ln 2 + \frac{2}{3}
\ln x_0 \,.
\end{align}
\subsection{Post-adiabatic corrections}
We now move to include post-adiabatic corrections into the waveform. As already
mentioned in Sec.~\ref{sec: hereditary}, post-adiabatic corrections to the
hereditary contributions will only enter at 4PN. We are thus left with
computing the corrections to the instantaneous contributions as described in
Sec.~\ref{sec: phasing}. Schematically, the substitutions in Eq.~(\ref{eq:
phasing subst}) may be described as
\begin{align}
h^{\ell m}(&x, e, l, \lambda) \nonumber\\
&\Downarrow \nonumber\\
h^{\ell m}(\bar{x} + \tilde{x}, \bar{e} &+ \tilde{e}, \bar{l} + \tilde{l}, \bar{\lambda} + \tilde{\lambda}) \nonumber\\
&\Downarrow \nonumber\\
h^{\ell m}(\bar{x}, \bar{e}, \bar{l}, \bar{\lambda}) + \bigg\{ \frac{\partial h^{\ell
m}}{\partial x} \tilde{x} \,+\,& \frac{\partial h^{\ell m}}{\partial e} \tilde{e} +
\frac{\partial h^{\ell m}}{\partial l} \tilde{l} + \frac{\partial h^{\ell
m}}{\partial \lambda} \tilde{\lambda} \bigg\} \nonumber\\
&\Downarrow \nonumber\\
h^{\ell m}(\bar{x}, \bar{e} , \bar{l}, \bar{\lambda}) \,+&\, \frac{1}{c^5} \, h^{\ell m}_\textnormal{post-ad}
(\bar{x}, \bar{e}, \bar{l}, \bar{\lambda}) \,.
\end{align}
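This linearization is conveniently automated with a computer algebra system. As a toy illustration (a sketch only: it linearizes the explicit $(x, e, l)$ dependence of the Newtonian-order mode, absorbing the prefactor $x$ into $H$ and ignoring the phase factor $\mathrm{e}^{-2\mathrm{i}\phi}$), one may write in \texttt{sympy}:
\begin{verbatim}
# Sketch: first-order expansion of a mode in the periodic (tilde) pieces.
import sympy as sp

x, e, l = sp.symbols('xbar ebar lbar', real=True)
xt, et, lt = sp.symbols('xtilde etilde ltilde', real=True)

H = x*(1 + e*(sp.Rational(1, 4)*sp.exp(-sp.I*l)
            + sp.Rational(5, 4)*sp.exp(sp.I*l)))

H_lin = H + sp.diff(H, x)*xt + sp.diff(H, e)*et + sp.diff(H, l)*lt
print(sp.expand(H_lin - H))   # the post-adiabatic correction terms
\end{verbatim}
Substituting the explicit expressions~(\ref{eq: periodic variations}) for the tilde quantities then produces contributions such as Eq.~(\ref{eq: h22-post-ad}).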
In particular, we only need to make these substitutions at leading Newtonian
and 0.5PN order. At higher orders, we simply replace the variables ($x$, $e$,
$l$, $\lambda$) by their secular evolving parts ($\bar{x}$, $\bar{e}$, $\bar{l}$, $\bar{\lambda}$)
to get the secular evolving waveform.
The post-adiabatic contributions to the different modes in terms of the secular
evolving parameters $\bar{x}$ and $\bar{e}$, parametrized by the angles $\phi$ and
$\bar{l}$, have the following form:
\begin{align}
h^{\ell m}_\textnormal{post-ad} =&\; \frac{8 G m \nu}{c^2 R} \bar{x} \sqrt{\frac{\pi}{5}}
\mathrm{e}^{-\mathrm{i} m\phi} H^{\ell m}_\textnormal{post-ad} \,.
\end{align}
For example, the $H^{22}_\textnormal{post-ad}$ mode, which arises from including the
post-adiabatic corrections in $(H^{22}_\textnormal{inst})_\textnormal{Newt}$, is given by
\begin{align}\label{eq: h22-post-ad}
H^{22}_\textnormal{post-ad} =&\; \nonumber\\
\frac{192}{5} & \bar{x}^{5/2} \mathrm{i} \nu \Bigg( 1 + \bar{e} \bigg\{ \frac{401}{72}
\mathrm{e}^{-\mathrm{i}\bar{l}} + \frac{293}{72} \mathrm{e}^{\mathrm{i}\bar{l}} \bigg\} \Bigg) \,.
\end{align}
We can combine these post-adiabatic contributions with the instantaneous ones
to get the full secular evolving instantaneous waveform in terms of the
variables ($\bar{x}$, $\bar{e}$, $\bar{l}$, $\bar{\lambda}$). The result has again the following
form:
\begin{align}
h^{\ell m}_\textnormal{inst} =&\; \frac{8 G m \nu}{c^2 R} \bar{x} \sqrt{\frac{\pi}{5}}
\mathrm{e}^{-\mathrm{i} m\phi} H^{\ell m}_\textnormal{inst} \,.
\end{align}
In e.g.~the $H^{22}_\textnormal{inst}$ mode we find that the only term that is modified is
the 2.5PN order:
\begin{align}
&(H^{22}_\textnormal{inst})_\textnormal{2.5PN} =\nonumber\\
&\quad\quad
-\bar{x}^{5/2} \mathrm{i} \nu \Bigg( 24 + \bar{e} \bigg\{ \frac{43657}{420}
\mathrm{e}^{\mathrm{i}\bar{l}} + \frac{1013}{140} \mathrm{e}^{-\mathrm{i}\bar{l}} \bigg\} \Bigg) \,.
\end{align}
All other orders are exactly as in Eqs.~(\ref{eq: h22 inst}), but with ($x$,
$e$, $l$, $\lambda$) replaced by ($\bar{x}$, $\bar{e}$, $\bar{l}$, $\bar{\lambda}$).
\subsection{Log cancellation}\label{sec: log cancel}
We observe that both instantaneous and tail terms still have some dependence on
the arbitrary constant $x_0'$ (or $x_0$). We find that this dependence on
$x_0'$ can be reabsorbed in a shift of the coordinate time $t$
\cite{blanchet-1996, arun-2004} through a redefinition of the mean anomaly as
\begin{align}
\xi &= \bar{l} - \frac{3GM}{c^3} \bar{n} \ln \Big( \frac{\bar{x}}{x_0'} \Big) \,,
\end{align}
where $M = m (1 - \nu \bar{x} / 2)$ is the ADM mass. Note that there are no
post-adiabatic corrections to $n$ and $x$ here, as phasing effects would only
enter at $1.5+2.5$PN order. This also means that both $\xi$ and $\bar{l}$ follow
the same evolution, i.e., $\mathrm{d}\xi/\mathrm{d} t = \mathrm{d}\bar{l}/\mathrm{d} t = \bar{n}$, and they
only differ by a constant factor. To simplify the final expressions, we also
introduce a redefined phase $\psi$ such that Eq.~(\ref{eq: quasi-keplerian phi
bar}) gives the relation between $\xi$ and $\psi$:
\begin{align}
\psi =\;& \bar{\lambda}_\xi + \bar{W}_\xi + \tilde{\lambda}_\xi + (\tilde{v}_\xi - \tilde{l}_\xi) \,.
\end{align}
Here
\begin{align}
\bar{\lambda}_\xi =&\; \bar{\lambda} - \frac{3GM}{c^3} (1 + \bar{k}) \bar{n} \ln \Big(
\frac{\bar{x}}{x_0'} \Big) \,,
\end{align}
is the phase $\bar{\lambda}$ evaluated at the shifted time defined by $\xi$, and
$\bar{W}_\xi$, $\tilde{\lambda}_\xi$, $\tilde{v}_\xi$, and $\tilde{l}_\xi$ are defined as in
Eq.~(\ref{eq: quasi-keplerian phi bar}), but with $\bar{l}$ replaced by $\xi$.
From this, we can easily deduce that
\begin{align}
\psi =\;& \phi + \sum_{s=1}^{\infty} \frac{1}{s!} \Bigg[ \left( \xi - \bar{l}
\right)^s \left( \frac{\mathrm{d}}{\mathrm{d}\bar{l}} \right)^s \nonumber\\
&+ \left( \bar{\lambda}_\xi - \bar{\lambda} \right)^s \left( \frac{\mathrm{d}}{\mathrm{d}\bar{\lambda}}
\right)^s \Bigg] \phi \,.
\end{align}
Note that the phase $\psi$ does not have the same geometric interpretation as
$\phi$. Expanding these equations to $\mathcal{O}(\bar{x}^3)$ and $\mathcal{O}(\bar{e})$, we find
\begin{subequations}
\begin{align}
\bar{l} =&\; \xi + 3 \left( \bar{x}^{3/2} - \bar{x}^{5/2} \left( 3 + \frac{\nu}{2}
\right) \right) \ln \Big( \frac{\bar{x}}{x_0'} \Big) \,,\\
\phi =&\; \psi + \bigg( \bar{x}^{3/2} \left( 3 + 6 \bar{e} \cos(\xi) \right)
\nonumber\\
&+ \bar{x}^{5/2} \left( -\frac{3\nu}{2} + 6 \bar{e} (2 - \nu) \cos(\xi) \right)
\bigg) \ln \Big( \frac{\bar{x}}{x_0'} \Big) \nonumber\\
&- 9 \bar{x}^3 \bar{e} \sin(\xi) \ln^2 \Big( \frac{\bar{x}}{x_0'} \Big) \,.
\end{align}
\end{subequations}
This redefinition of the time coordinate results in the cancellation of all log
terms involving the arbitrary constant $x_0'$.
\subsection{Full waveform}
The full waveform in terms of the redefined angles $\xi$ and $\psi$ -- minus
some memory contributions -- has the following form:
\begin{align}
h^{\ell m} =&\; \frac{8 G m \nu}{c^2 R} \bar{x} \sqrt{\frac{\pi}{5}}
\mathrm{e}^{-\mathrm{i} m\psi} H^{\ell m} \,.
\end{align}
The various contributions to, e.g., the $H^{22}$ mode are given to $\mathcal{O}(\bar{e})$
by
\begin{widetext}
\begin{subequations}\label{eq: Hlm inst+hered}
\begin{align}
H^{22}_\textnormal{Newt} =&\; 1 + \bar{e} \bigg\{ \frac{1}{4} \mathrm{e}^{-\mathrm{i}\xi} +
\frac{5}{4} \mathrm{e}^{\mathrm{i}\xi} \bigg\} \,,\\
%
H^{22}_\textnormal{1PN} =&\; \bar{x} \Bigg( -\frac{107}{42} + \frac{55 \nu}{42}
+ \bar{e} \bigg\{ \mathrm{e}^{-\mathrm{i}\xi} \left[ -\frac{257}{168} + \frac{169
\nu}{168} \right] + \mathrm{e}^{\mathrm{i}\xi} \left[ -\frac{31}{24} + \frac{35
\nu}{24} \right] \bigg\} \Bigg) \,,\\
%
H^{22}_\textnormal{1.5PN} =&\; \bar{x}^{3/2} \Bigg( 2 \pi + \bar{e} \bigg\{
\mathrm{e}^{-\mathrm{i} \xi} \left[ \frac{11 \pi }{4} + \frac{27 \mathrm{i}}{2} \ln \left(
\frac{3}{2} \right) \right] + \mathrm{e}^{\mathrm{i} \xi} \left[\frac{13 \pi }{4} +
\frac{3 \mathrm{i}}{2} \ln(2) \right] \bigg\} \Bigg) \,,\\
%
H^{22}_\textnormal{2PN} =&\; \bar{x}^2 \Bigg( -\frac{2173}{1512} - \frac{1069
\nu}{216} + \frac{2047 \nu^2}{1512} + \bar{e} \bigg\{ \mathrm{e}^{\mathrm{i}\xi} \left[
-\frac{2155}{252} - \frac{1655 \nu}{672} + \frac{371 \nu^2}{288}
\right] \nonumber\\
&+ \mathrm{e}^{-\mathrm{i}\xi} \left[ -\frac{4271}{756} - \frac{35131 \nu}{6048} +
\frac{421 \nu^2}{864} \right] \bigg\} \Bigg) \,,\\
%
H^{22}_\textnormal{2.5PN} =&\; \bar{x}^{5/2} \Bigg( -\frac{107 \pi}{21} +
\left( -24 \mathrm{i} + \frac{34 \pi}{21} \right) \nu \nonumber\\
&+ \bar{e} \bigg\{ \mathrm{e}^{\mathrm{i} \xi} \bigg[ -\frac{9 \mathrm{i}}{2} + \frac{229
\pi}{168} + \left( -\frac{43657 \mathrm{i}}{420} + \frac{61 \pi}{42} \right)
\nu + \left( \frac{473 \mathrm{i}}{28} - \frac{3 \mathrm{i} \nu }{7} \right) \ln (2)
\bigg] \nonumber\\
&+ \mathrm{e}^{-\mathrm{i} \xi} \bigg[ -\frac{27 \mathrm{i}}{2} -\frac{1081 \pi}{168} +
\left( -\frac{1013 \mathrm{i}}{140} + \frac{137 \pi}{42} \right) \nu + \left(
\frac{27 \mathrm{i}}{4} + 9 \mathrm{i} \nu \right) \ln \left( \frac{3}{2} \right)
\bigg] \bigg\} \Bigg) \,,\\
%
H^{22}_\textnormal{3PN} =&\; \bar{x}^3 \Bigg( \frac{27027409}{646800} +
\frac{428 \mathrm{i} \pi}{105} + \frac{2 \pi^2}{3} - \frac{856 \gamma_\textnormal{E}}{105} +
\left( -\frac{278185}{33264} + \frac{41 \pi^2}{96} \right) \nu -
\frac{20261 \nu^2}{2772} + \frac{114635 \nu^3}{99792} \nonumber\\
&- \frac{1712 \ln(2)}{105} - \frac{428 \ln(\bar{x})}{105} \nonumber\\
&+ \bar{e} \bigg\{ \mathrm{e}^{-\mathrm{i} \xi} \bigg[ \frac{219775769}{1663200} +
\frac{749 \mathrm{i} \pi}{60} + \frac{49 \pi^2}{24} - \frac{749 \gamma_\textnormal{E}}{30} +
\left( -\frac{121717}{20790} - \frac{41 \pi^2}{192}\right) \nu -
\frac{86531 \nu^2}{8316} - \frac{33331 \nu^3}{399168} \nonumber\\
&+ \left( -\frac{2889}{70} + \frac{81 \mathrm{i} \pi}{2}\right) \ln \left(
\frac{3}{2} \right) - \frac{81}{2} \ln^2 \left( \frac{3}{2} \right) -
\frac{749 \ln(2)}{15} - \frac{749 \ln(\bar{x})}{60} \bigg] \nonumber\\
&+ \mathrm{e}^{\mathrm{i} \xi} \bigg[ \frac{55608313}{1058400} + \frac{3103 \mathrm{i}
\pi}{420} + \frac{29 \pi^2}{24} - \frac{3103 \gamma_\textnormal{E}}{210} + \left(
-\frac{199855}{3024} + \frac{41 \pi^2}{48} \right) \nu -\frac{9967
\nu^2}{1008} + \frac{35579 \nu^3}{36288} \nonumber\\
&+ \left( -\frac{6527}{210} + \frac{3 \mathrm{i} \pi}{2}\right) \ln(2) +
\frac{3 \ln^2(2)}{2} - \frac{3103 \ln(\bar{x})}{420} \bigg] \bigg\} \Bigg)
\,.
\end{align}
\end{subequations}
\end{widetext}
For completeness all equations relating the different angles $\bar{l}$, $\bar{\lambda}$,
$\xi$ and $\psi$ are listed in Appendix~\ref{sec: quasi-kepl relations}.
\subsection{Quasi-Circular limit}\label{sec: circular}
We now check our results against those in Ref.~\cite{blanchet-2008} in the
quasi-circular limit. Note that the eccentricity is not a gauge-independent
quantity and one thus has to be careful when talking about the circular limit.
For a thorough discussion on different eccentricity parameters and
discrepancies between them we refer to Refs.~\cite{loutrel-2018, loutrel-2019}.
Normally, one uses the orbital averaged description for the evolution of $x$
and $e$, where one finds that the evolution equations~(\ref{eq: peters-mathews})
drive the eccentricity to zero during the inspiral. When introducing
post-adiabatic corrections, this will not be true anymore, as the eccentricity
is split into an orbital averaged part $\bar{e}$ and a periodic oscillatory part
$\tilde{e}$. The orbital averaged part $\bar{e}$ will still follow the same evolution
equations~(\ref{eq: peters-mathews}) and thus be driven to zero, but the
periodic variations $\tilde{e}$ will generally grow larger as the binary inspirals. As
discussed in Ref.~\cite{loutrel-2019}, the orbital averaged description also
breaks down in the late inspiral, failing to capture a secular growth in the
eccentricity observed when directly integrating the two-body equations of
motion.
In our case, it is reasonable to consider the circular limit as the limit where
$\bar{x} \rightarrow x$ and $\bar{e} \rightarrow 0$, with $x$ being the standard
circular frequency parameter. Then, the evolution equations~(\ref{eq:
peters-mathews}) reduce to the usual circular evolution equation
\begin{align}
\dot{x} &= \frac{64c^3 \nu}{5Gm} x^5 + \mathcal{O}(x^6)\,.
\end{align}
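Integrating this equation in closed form, with $K \equiv 64 c^3 \nu / (5 G m)$, gives the familiar chirp behavior
\begin{align}
x(t) = x_0 \left[ 1 - 4 K x_0^4 \left( t - t_0 \right) \right]^{-1/4} \,,
\end{align}
with $x(t_0) = x_0$.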
In this limit our redefined phase $\psi$ reduces to
\begin{align}
\psi|_{\bar{e}=0} &= \phi - 3 \left(1 - \frac{\nu x}{2}\right) x^{3/2} \ln
\Big( \frac{x}{x_0'} \Big) \,,
\end{align}
which matches exactly the phase $\psi$ used in Ref.~\cite{blanchet-2008}. We can
thus directly compare our results to the circular limit by setting $\bar{e} = 0$
and $\bar{x}|_{\bar{e}=0} = x$. We find, e.g., for the $h^{22}$ mode
\begin{align}
h^{22} = \frac{8Gm\nu}{c^2 R} x \sqrt{\frac{\pi}{5}} \mathrm{e}^{-2\mathrm{i}\psi} H^{22}
\,,
\end{align}
\begin{widetext}
\begin{align}
H^{22} =\;& 1 + x \left( -\frac{107}{42} + \frac{55\nu}{42} \right) + 2\pi
x^{3/2} + x^2 \left( -\frac{2173}{1512} - \frac{1069\nu}{216} +
\frac{2047\nu^2}{1512} \right) + x^{5/2} \left( -\frac{107\pi}{21} +
\left( -24\mathrm{i} + \frac{34\pi}{21} \right) \nu \right) \nonumber\\
&+ x^3 \bigg( \frac{27027409}{646800} + \frac{428\mathrm{i}\pi}{105} + \frac{2
\pi^2}{3} - \frac{856 \gamma_\textnormal{E}}{105} + \left( -\frac{278185}{33264} +
\frac{41\pi^2}{96} \right) \nu - \frac{20261 \nu^2}{2772} +
\frac{114635\nu^3}{99792} \nonumber\\
&- \frac{1712}{105} \ln(2) - \frac{428}{105} \ln(x) \bigg) \,.
\end{align}
\end{widetext}
This matches Eq.~(9.4a) of Ref.~\cite{blanchet-2008}. Similarly, we can compare
the other modes and find perfect agreement in all of them.
\section{Conclusion}\label{sec: summary}
In this work, we computed the tail contributions to the 3PN-accurate
gravitational waveform from nonspinning compact binaries on eccentric orbits.
This extends the work on instantaneous contributions in Ref.~\cite{mishra-2015}
and will be completed with the memory contributions in a follow-up
paper~\cite{ebersold-2019}. We also include post-adiabatic corrections to the
quasi-Keplerian parametrization when combining our tail results with the
instantaneous ones, giving us the full waveform (neglecting memory) that can
be compared to the circular one in the limit $e \rightarrow 0$. The tail
contributions to the $h^{22}$ mode are given at 3PN order and to $\mathcal{O}(e)$ in
Eq.~(\ref{eq: h22-tail}), the post-adiabatic corrections in Eq.~(\ref{eq:
h22-post-ad}). All other $h^{\ell m}$ modes up to $\ell = 5$ are listed in
the supplemental \emph{Mathematica} notebook~\cite{supplement}. To reiterate,
all results are in MH coordinates, which differ from the SH coordinates at 3PN
order.
Note that the instantaneous results in Ref.~\cite{mishra-2015} can be applied to
binary systems of arbitrary eccentricities, while the tail results presented
here are calculated in a small eccentricity expansion. This is due to the
complicated tail integrals over past history, which can only be analytically
calculated when decomposing the integrand into harmonics of the orbital
timescale using an eccentricity expansion. This means that our results are not
applicable for large eccentricities $e \sim 1$, though they might give
accurate results for moderate eccentricities $e \sim 0.4$ when combined with
orbital evolution equations that are not expanded in eccentricity; see, e.g.,
Ref.~\cite{klein-2018}.
\acknowledgments
We thank Riccardo Sturani for a first review. Y.~B. is supported by the Swiss
National Science Foundation and a Forschungskredit of the University of Zurich,
Grant No.~FK-18-084. Y.~B. would like to acknowledge the hospitality of the
Institut d’Astrophysique de Paris during the final stages of this collaboration.
\section{Introduction}\label{Sec1}
It is well known that Einstein's general theory of relativity has successfully explained a wide range of observations, from cosmological measurements to astrophysical systems \cite{Tipler,Shapiro}. The golden age of cosmology brought the Hubble law, the theory of large-scale structure formation, primordial nucleosynthesis, and an ever higher level of precision in describing the possible origin of the universe and its subsequent evolution. Einstein's general theory of relativity is the generalization of Newtonian gravity and is the appropriate framework for describing the structure of compact stars in strong gravitational fields. Some of these compact objects, such as pulsars, black holes and neutron stars, have densities of order $10^{14} gm/cm^3$ or higher. Schwarzschild discovered the first exact solution of the Einstein field equations for the gravitational field in the interior of a non-rotating spherical body consisting of an incompressible fluid; it is also known as the constant-density solution, with an empty exterior and vanishing pressure at the surface. Nowadays, researchers are actively studying relativistic compact stars. To model such objects, one studies solutions of the Einstein equations for static spherically symmetric configurations with different physical sources; such solutions may describe a perfect fluid, an anisotropic fluid, or dust. However, there is strong theoretical evidence that extremely dense celestial bodies are not made of perfect fluids; in several cases, objects exhibit additional physical features, for example anisotropy. The first theoretical study of the effect of anisotropy dates back to about 1922, when Jeans \cite{Jeans} considered anisotropic pressure in self-gravitating Newtonian configurations.\\
After this, Ruderman \cite{Ruderman} studied the effect of anisotropy and argued that stars may exhibit anisotropic features at very high densities, of order $10^{15} gm/cm^3$, where the nuclear interactions become relativistic. Soon after, Bowers and Liang \cite{Bowers} studied the bounded properties of relativistic anisotropic matter distributions for static spherically symmetric configurations. Since then, extensive research has been conducted on the physics of anisotropic pressures. In this connection, Dev and Gleiser \cite{Dev1,Dev2} have shown that pressure anisotropy affects the physical properties of a star, such as its mass, its structure, and its regions of high pressure. Several other analytical static solutions have already been discovered by various authors \cite{Herrera1985,Maurya1,Maurya2,Maurya3,Singh1,Singh2,Maurya4,Maurya5,Deb1,Maurya6,Maurya7,Maurya8,Singh3,Deb2,Maurya9,Maurya10,Maurya11,Maurya12,Maurya13,Maurya14,Gupta1,Mak2003}. Pioneering work was carried out by Herrera and Santos \cite{Herrera1}, who analyzed the effects of local anisotropy in self-gravitating systems. More remarkably, all possible static isotropic, anisotropic, as well as charged anisotropic solutions of Einstein's equations for the spherically symmetric line element can be determined by the general algorithms given in Refs.~\cite{Lake,Herrera2,Maurya2017}. \\
It is essential to note that both the redshift and the mass of a stellar model vary with the anisotropy. Recently, extensive efforts have been made to model physically observed astronomical objects in the presence of anisotropy, as can be seen in recent research papers \cite{Sharma,Ngubelanga,Murad1,Murad2} and the references therein. In these papers, the physical analysis reaffirms the significance of a nonzero anisotropy in the modeling of astrophysical objects. In order to construct a physically reliable object, it is necessary to find an analytical solution of the Einstein field equations for a relativistic matter distribution, which can be obtained by restricting the space-time geometry or by specifying an equation of state (EOS) for the matter distribution.
On the other hand, exact solutions of the relativistic field equations can be generated using a different approach, known as the embedding class one condition. In this connection, Riemann introduced the framework, now known as Riemannian geometry, for studying the essential geometric properties of objects. Soon afterwards, Schlaefli \cite{Schlaefli} conjectured that a Riemannian manifold with an analytic metric of positive-definite signature can be embedded locally and isometrically into a higher-dimensional flat Euclidean space.
The idea of embedding an $n$-dimensional Riemannian manifold $V_n$ locally and isometrically into an $N = n(n + 1)/2$ dimensional pseudo-Euclidean space was proved in the past by the authors of Refs.~\cite{Janet,Cartan,Burstin}. The embedding class $p$ of $V_n$ is the minimum number of extra dimensions required by the pseudo-Euclidean space, which is equal to $p = N - n = n(n-1)/2$. General relativity deals only with four-dimensional space-time; however, embedding class solutions may provide new characteristics of the gravitational field, as well as of the physics. For a relativistic four-dimensional space-time, the maximal embedding class turns out to be $p=6$. In particular, the classes of spherically and plane symmetric space-times are $p=2$ and $p=3$, respectively. The famous Friedmann-Robertson-Lemaitre space-time is of class $p=1$, while the Schwarzschild exterior and interior solutions are of class $p=2$ and class $p=1$, respectively; moreover, the Kerr metric is of class 5. In the literature \cite{Barnes1,Kumar,Barnes2,Ponce,Akbar,Abbas,Kuhfitting1,Kuhfitting2}, there are many interesting works concerning the technique of embedding a lower-dimensional Riemannian space into a higher-dimensional pseudo-Euclidean space in the framework of GR. The main consequence of embedding a Riemannian manifold corresponding to a spherically symmetric and static space-time into a pseudo-Euclidean space is the so-called Eiesland condition. This condition links both metric potentials $e^{\nu}$ and $e^{\lambda}$ in a single differential equation. It is a mathematical simplification which reduces the problem of obtaining exact solutions to a single generating function: the approach is to choose one of the gravitational potentials on physical grounds and to then integrate the Eiesland condition to fully specify the gravitational behavior of the model. In this paper we utilize the Eiesland condition to derive solutions which describe compact objects in general relativity. We subject our solutions to rigorous physical tests which ensure that they do describe physically observable objects in the universe.
The article is organized as follows: In Sec.~II we specify the interior space-time and the Einstein field equations for an anisotropic matter distribution. This section also includes the embedding class one condition along with the non-vanishing Riemann tensor components for the interior space-time. In Sec.~III, we present a generalized Finch-Skea solution for an anisotropic matter distribution using the class one condition. The nonsingular nature of the pressures and density and the bounds on the constants are given in Sec.~IV. In Sec.~V, we present the necessary and sufficient conditions that determine all the constant parameters describing the anisotropic solution; for this purpose, we match our interior space-time to the exterior space-time (Schwarzschild metric). Section~VI includes the energy conditions. In Sec.~VII, we discuss the most important features of the objects, such as the equilibrium condition via the Tolman-Oppenheimer-Volkoff equation, causality and stability through the Herrera-Abreu criterion, the adiabatic index, and the Harrison-Zeldovich-Novikov static stability criterion.
\section{Interior space-time and field equations}
The interior space-time for a static spherically symmetric configuration is chosen as,
\begin{equation}
ds^{2}=e^{\nu(r)}dt^{2}-e^{\lambda(r)}dr^{2}-r^{2}\left(d\theta^{2}+\sin^{2}\theta d\phi^{2} \right) \label{met}
\end{equation}
where $\nu$ and $\lambda$ are functions of the radial coordinate `$r$' only.\\
The Einstein field equations corresponding to an anisotropic fluid distribution become
\begin{eqnarray}
R^\mu_\nu-{1\over 2}g^\mu_\nu R &=& -{8\pi} \big[(p_t +\rho c^2)v^\mu v_\nu-p_t g^\mu_\nu+(p_r-p_t) \nonumber \\
&& \chi_\nu \chi^\mu \big] \label{fil}
\end{eqnarray}
where the symbols have their usual meanings.\\
For the space-time \eqref{met}, the field equations can be written as
\begin{eqnarray}
\frac{1-e^{-\lambda}}{r^{2}}+\frac{e^{-\lambda}\lambda'}{r} &=& 8\pi\rho \label{dens}\\
\frac{e^{-\lambda}-1}{r^{2}}+\frac{e^{-\lambda}\nu'}{r} &=& 8\pi p_{r} \label{prs}\\
e^{-\lambda}\left(\frac{\nu''}{2}+\frac{\nu'^{2}}{4}-\frac{\nu'\lambda'}{4}+\frac{\nu'-\lambda'}{2r} \right) &=& 8\pi p_t. \label{prt}
\end{eqnarray}
The measure of anisotropy is defined as $\Delta = 8\pi (p_t - p_r)$.\\
On the other hand, it was proved by Eisenhart~\cite{Eisenhart1925} that an $(n+1)$-dimensional space $V^{n+1}$ can be embedded into an $(n+2)$-dimensional pseudo-Euclidean space $E^{n+2}$, i.e., it is of embedding class one, if and only if there exists a symmetric tensor $a_{mn}$ which satisfies the following Gauss-Codazzi equations: \\
\begin{eqnarray}\label{eqcls1.1}
R_{mnpq}=2\,e\,{a_{m\,[p}}{a_{q]n}}~~~\nonumber\\ \text{and}~~~a_{m\left[n;p\right]}-{\Gamma}^q_{\left[n\,p\right]}\,a_{mq}+{{\Gamma}^q_{m}}\,{}_{[n}\,a_{p]q}=0,
\end{eqnarray}
where $e=\pm1$, $R_{mnpq}$ denotes the curvature tensor and square brackets represent antisymmetrization. Here, $a_{mn}$ are the coefficients of the second fundamental form. Moreover, a necessary and sufficient condition for embedding class one, equivalent to Eq.~(\ref{eqcls1.1}) but in a more convenient form, was given by Eiesland \cite{Eiesland1925} as
\begin{eqnarray}
R_{{0101}}R_{{2323}}=R_{{0202}}R_{{1313}}-R_{{1202}}R_{{1303}}.\label{3.2}
\end{eqnarray}
The non-vanishing components of the Riemann tensor for the spherically symmetric interior space-time (\ref{met}) are given as
\begin{eqnarray}
&& R_{{0101}}=-\frac{1}{4}\,{{\rm e}^{\nu}} \left( -\nu^{{\prime}}\lambda^{{\prime}}+{\nu^{{\prime}}}^{2}+2
\,\nu^{{\prime\prime}} \right), \nonumber \\
&& R_{{2323}}=-{r}^{2} {\sin^2 \theta} \left( 1-{{\rm e}^{-\lambda}} \right),~~
R_{{0202}}=-\frac{1}{2}\,r\nu^{{\prime}}{{\rm e}^{\nu-\lambda}},\nonumber\\
&& R_{{1313}}=-\frac{1}{2}\,\lambda^{{\prime}}r \sin^2 \theta,~~ R_{{1202}}=0,~~~ R_{{1303}}=0
\end{eqnarray}
\\
By plugging the above Riemann components into Eq.~(\ref{3.2}), we obtain a differential equation in $\nu$ and $\lambda$ of the form
\begin{eqnarray}\label{3.3}
({\lambda}^{{\prime}}-{{\nu}^{{\prime}
}})\,{\nu}^{{\prime}}\,{{\rm e}^{\lambda}}+2\,(1-{{\rm e}^{\lambda}}){\nu}^{{\prime\prime}}+{{\nu}^{{\prime}}}^{2}=0.
\end{eqnarray}
The solutions of Eq.~(\ref{3.3}) are known as ``embedding class one solutions'', and the corresponding space-times can be embedded in five-dimensional pseudo-Euclidean space.\\
On integrating Eq.~(\ref{3.3}) we get
\begin{equation}
e^{\nu}=\left(A+B\int \sqrt{e^{\lambda}-1}~dr\right)^2\label{nu1}
\end{equation}
where $A$ and $B$ are constants of integration.
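This can be seen by setting $Y = e^{\nu/2}$, which reduces Eq.~(\ref{3.3}) to
\begin{equation}
\frac{Y''}{Y'} = \frac{\lambda' \, e^{\lambda}}{2\left(e^{\lambda}-1\right)}\,, \quad \text{i.e.} \quad Y' = B\sqrt{e^{\lambda}-1}\,,
\end{equation}
which integrates directly to Eq.~(\ref{nu1}).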
By using (\ref{nu1}) we can express the anisotropy as \cite{ma1,ma2}
\begin{eqnarray}
\Delta = {\nu' \over 4e^\lambda}\left[{2\over r}-{\lambda' \over e^\lambda-1}\right]~\left[{\nu' e^\nu \over 2rB^2}-1\right]. \label{del1}
\end{eqnarray}
For the isotropic case, $\Delta=0$, and there are three possible solutions: (a) $e^\nu = C$ and $e^\lambda=1$ (not physical), (b) the Schwarzschild interior solution (not physical) and (c) the Kohler-Chao solution (a cosmological solution, as the pressure vanishes only as $r\rightarrow \infty$).
\section{A generalized solution for compact star model}
The field equations depend on the metric functions $\nu$ and $\lambda$. To construct a viable anisotropic model, we assume a generalized form of the Finch-Skea \cite{Finch} metric function $g_{rr}$ as
\begin{eqnarray}
\lambda &=& \ln (1+a r^2+b^{n-1} r^n)\label{ela}
\end{eqnarray}
where $a$ and $b$ are non-zero positive constants and $n$ is a positive integer. By substituting the value of $\lambda$ from Eq.~(\ref{ela}) into Eq.~(\ref{nu1}) we get
\begin{eqnarray}
e^\nu &=& \bigg(A-\bigg\{2 B \Big[a b (n-2) r^2 f(r) \sqrt{a b^{1-n} r^{2-n}+1}+\nonumber \\
&& (6-n) \left(a b r^2+b^n r^n\right)\Big]\bigg\} {(a+b^{n-1} r^{n-2})^{-1/2} \over b (n-6) (n+2)}\bigg)^2 \nonumber \\ \label{enu}
\end{eqnarray}
where $f(r) = ~_2F_1\left(\frac{1}{2},\frac{n-6}{2 (n-2)};\frac{10-3 n}{4-2 n};-a b^{1-n} r^{2-n}\right)$ is the Gauss hypergeometric function. The behaviour of the metric potentials is plotted in Fig.~\ref{mt}.\\
By using the metric potentials $\nu$ and $\lambda$, we directly obtain the expressions for the thermodynamic variables, namely the density, the radial and transverse pressures, and the anisotropy:
\begin{eqnarray}
8\pi \rho(r) &=& \frac{1}{\left(a r^2+b^{n-1} r^n+1\right)^2} \bigg[a^2 r^2+a \left(2 b^{n-1} r^n+3\right)\nonumber \\
&& +b^{n-1} r^{n-2} \left(b^{n-1} r^n+n+1\right)\bigg]\label{den}\\
8\pi p_r(r) &=& \Big[(n-6) b^n k(r) r^n \Big\{b \Big[2 B r \left(a r^2-n-2\right)+\nonumber \\
&& A (n+2) j(r)\Big]+2 B b^n r^{n+1}\Big\}-2 a b B (n-2) \nonumber \\
&& r^3 f(r) (a b r^2+b^n r^n) \Big] \Big[(6-n) \Big\{2 a b B r^3+A b \nonumber \\
&& (n+2) j(r)+2 B b^n r^{n+1}\Big\}+2 a b B (n-2) r^3 \nonumber \\
&& f(r) k(r)\Big]^{-1} \times \frac{b^{-n} r^{-n-2} \left(a b r^2+b^n r^n\right)}{k(r) \left(a b r^2+b^n r^n+b\right)}\label{pre1}\\
\Delta(r) &=& \frac{k(r) l(r) q(r)}{2 r^2 p(r) \left(a b r^2+b^n r^n\right) \left(a b r^2+b^n r^n+b\right)^2}\\
8\pi p_t (r) &=& 8\pi p_r+\Delta.
\end{eqnarray}
where,
\begin{eqnarray}
j(r) &=& \sqrt{a r^2+b^{n-1} r^n} \\
k(r) &=& \sqrt{a b^{1-n} r^{2-n}+1} \\
l(r) &=& 2 a^2 b^2 r^4+4 a b^{n+1} r^{n+2}+b^n r^n \big[2 b^n r^n\nonumber \\
&& +b (2-n)\big]\\
n(r) &=& b \left[B r (2 a r^2-n-2)+A (n+2) j(r)\right] \nonumber \\
&& +2 B b^n r^{n+1}\\
q(r) &=& 2 a b B (2-n) r^3 f(r) \left[a b r^2+b^n r^n\right]+(n-6) b^n \nonumber \\
&& k(r) n(r) r^n\\
p(r) &=& (n-6) \left[2 a b B r^3+A b (n+2) j(r)+2 B b^n r^{n+1}\right] \nonumber \\
&& +2 a b B (2-n) r^3 f(r) k(r)
\end{eqnarray}
The variations of the above physical quantities are shown in Figs.~\ref{fid}-\ref{fia}. For a physical system, the values of $p_r/\rho$ and $p_t/\rho$ in the interior must be less than unity (Fig.~\ref{fie}).
The other physical parameters, namely the mass, the compactness factor and the redshift, can be determined as
\begin{eqnarray}
m(r) &=& 4\pi \int r^2 \rho ~dr=\frac{r}{2} \left(1-\frac{b}{a b r^2+b^n r^n+b}\right)~~\\
u(r) &=& {2m(r) \over r}= 1-\frac{b}{a b r^2+b^n r^n+b}\\
z(r) &=& e^{-\nu/2}-1.
\end{eqnarray}
We have plotted the $M-R$ diagram in Fig.~\ref{fim}; here the radius is determined from the surface density, and the corresponding mass then follows from the boundary condition. The trend of the redshift is plotted in Fig.~\ref{fir}.
\section{Non-singular nature of the solution}
To check the physical validity of the solution, we ensure that the central values of the pressure and density are finite and positive, i.e.
\begin{eqnarray}
\rho_c &=& {3 a \over 8\pi} >0, \label{rhc}\\
p_{rc} &=& p_{tc} = \frac{\sqrt{a} \left(2 B-\sqrt{a} A\right)}{8\pi A} > 0. \label{pc}
\end{eqnarray}
It is also required that any physical fluid satisfies Zeldovich's criterion, i.e. $p_{rc}/ \rho_c \le 1$, which implies
\begin{eqnarray}
{p_{rc} \over \rho_c} = \frac{2 B-\sqrt{a} A}{3\sqrt{a}A} \le 1. \label{zel}
\end{eqnarray}
Now a physical constraint on $B/A$ arises due to (\ref{pc}) and (\ref{zel}): positivity of the central pressure requires $B/A > \sqrt{a}/2$, while Zeldovich's criterion yields $B/A \le 2\sqrt{a}$, i.e.,
\begin{eqnarray}
{\sqrt{a} \over 2} < {B \over A} \le {2\sqrt{a}}. \label{zell}
\end{eqnarray}
\section{Boundary Conditions and determination of constants}
It is necessary to match our interior space-time to the exterior Schwarzschild \cite{kar} line element
\begin{eqnarray}
ds^{2} &=& \left(1-\frac{2m}{r}\right)dt^{2}-\left(1-\frac{2m}{r}\right)^{-1}dr^{2} \nonumber \\
&& -r^{2}\big(d\theta^{2}+\sin^{2}\theta d\phi^{2} \big)
\end{eqnarray}
at the boundary $r=R$. Also, the radius must satisfy $R > 2M$ so that the configuration does not form a black hole.\\
Using the continuity of the metric coefficients $e^{\nu}$ and $e^{\lambda}$ across the boundary ($r=R$) and the vanishing of the radial pressure at the boundary, we get the following equations
\begin{eqnarray}
1-\frac{2M}{R} &=& e^{\nu_s} = e^{-\lambda_s}\label{b1}\\
p_r(r=R) &=& 0. \label{b3}
\end{eqnarray}
On using the boundary conditions (\ref{b1}) and (\ref{b3}) we obtain the value of arbitrary constants as,
\begin{eqnarray}
a &=& \frac{b^n (R-2 M) R^n-2 b M}{b R^2 (2 M-R)} \\
A &=& \sqrt{1-\frac{2 M}{R}}+\frac{2 BR^2}{b (n-6) (n+2)}\Big[b (6-n) \nonumber \\
&& \hspace{-1 mm}\sqrt{a+b^{n-1} R^{n-2}}+a (n-2) b^{\frac{3-n}{2}} f(R) R^{\frac{2-n}{2}} \Big]\\
B &=& \sqrt{1-\frac{2 M}{R}} ~\frac{b (6-n) (n+2) \sqrt{a +b^{n-1} R^{n-2}}}{2} \nonumber \\
&& \bigg[2 (n-6) b^n R^n+b (n-6) \left(a R^2-n-2\right)- \nonumber
\end{eqnarray}
\begin{eqnarray}
&& \frac{a (n-2) b^{1-n} f(R) R^{2-n} \left(a b R^2+b^n R^n\right)}{\sqrt{a b^{1-n} R^{2-n}+1}}+\nonumber \\
&& a b (n-2) R^2 f(R) \sqrt{a b^{1-n} R^{2-n}+1}+(6-n) \nonumber \\
&& \left(a b R^2+b^n R^n\right)\bigg]^{-1}
\end{eqnarray}
Here $M$ and $R$ are taken from the observed values of compact stars, while $b$ is treated as a free parameter.
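To illustrate how the matching works in practice, the following Python sketch (illustrative only: the choice $n=4$ is hypothetical, and geometrized units $G=c=1$ with lengths in km are assumed, so $M_{\odot} \simeq 1.4766$ km) computes $a$ from the matching and checks that the mass function reproduces the imposed compactness:
\begin{verbatim}
# Sketch: matching constant a and compactness check, units G = c = 1 (km).
Msun = 1.4766                    # solar mass in km
M, R = 1.97*Msun, 9.69           # PSR J1614-2230 values used in the text
b, n = 0.04, 4                   # b as in the figures; n = 4 hypothetical

a = (b**n*(R - 2*M)*R**n - 2*b*M)/(b*R**2*(2*M - R))

def mass(r):                     # m(r) from the expression above
    return r/2*(1 - b/(a*b*r**2 + b**n*r**n + b))

print(a, 2*mass(R)/R, 2*M/R)     # the last two agree identically
\end{verbatim}
The constants $A$ and $B$ then follow from the closed-form expressions above, with $f(R)$ evaluated via \texttt{scipy.special.hyp2f1}.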
\begin{figure}[t]
\centering
\resizebox{0.8\hsize}{!}{\includegraphics[width=7cm,height=4.5cm]{metr.eps}}
\caption{Variation of metric potentials w.r.t radial coordinate $r$ for $M = 1.97 M_{\odot},R = 9.69 km$ and $b = 0.04$.}\label{mt}
\end{figure}
\begin{figure}[t]
\centering
\resizebox{0.8\hsize}{!}{\includegraphics[width=7cm,height=4.5cm]{den.eps}}
\caption{Density profile of PSR J1614-2230 for $M = 1.97 M_{\odot},R = 9.69 km$ and $b = 0.04$.}\label{fid}
\end{figure}
\begin{figure}[t]
\centering
\resizebox{0.8\hsize}{!}{\includegraphics[width=7cm,height=4.5cm]{pre.eps}}
\caption{Radial and transverse pressure profile of PSR J1614-2230 for $M = 1.97 M_{\odot},R = 9.69 km$ and $b = 0.04$.}\label{fip}
\end{figure}
\section{Energy Conditions}
In this section we verify the energy conditions, namely the null energy condition (NEC), the weak energy condition (WEC), the dominant energy condition (DEC) and the strong energy condition (SEC), at all points in the interior of the star. They are satisfied if the following inequalities hold simultaneously:
\begin{eqnarray}
\text{WEC} &:& T_{\mu \nu}t^\mu t^\nu \ge 0~\mbox{or}~\rho \geq 0,~\rho+p_i \ge 0 \\
\text{NEC} &:& T_{\mu \nu}l^\mu l^\nu \ge 0~\mbox{or}~ \rho+p_i \geq 0\\
\text{DEC} &:& T_{\mu \nu}t^\mu t^\nu \ge 0 ~\mbox{or}~ \rho \ge |p_i|\\
&& \mbox{where}~~T^{\mu \nu}t_\mu \in \mbox{nonspace-like vector} \nonumber \\
\text{SEC} &:& T_{\mu \nu}t^\mu t^\nu - {1 \over 2} T^\lambda_\lambda t^\sigma t_\sigma \ge 0 ~\mbox{or}~ \rho+\sum_i p_i \ge 0. \nonumber \\
\end{eqnarray}
where $i\equiv (radial~r, transverse ~t),~t^\mu$ and $l^\mu$ are time-like vector and null vector respectively. \\
We check the energy conditions graphically: in Fig.~\ref{fiec} we plot the left-hand sides of the above inequalities, which verifies that all the energy conditions are satisfied in the stellar interior.
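Such pointwise checks are simple to script. A minimal sketch (illustrative; \texttt{rho}, \texttt{pr} and \texttt{pt} are assumed to be arrays of $\rho$, $p_r$ and $p_t$ sampled on a radial grid) is:
\begin{verbatim}
# Sketch: pointwise energy-condition checks on sampled profiles.
import numpy as np

def energy_conditions(rho, pr, pt):
    nec = (rho + pr >= 0) & (rho + pt >= 0)
    wec = nec & (rho >= 0)
    dec = (rho >= np.abs(pr)) & (rho >= np.abs(pt))
    sec = nec & (rho + pr + 2*pt >= 0)
    return {'NEC': nec.all(), 'WEC': wec.all(),
            'DEC': dec.all(), 'SEC': sec.all()}
\end{verbatim}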
\begin{figure}[t]
\centering
\resizebox{0.8\hsize}{!}{\includegraphics[width=7cm,height=4.5cm]{del.eps}}
\caption{Anisotropy profile of PSR J1614-2230 for $M = 1.97 M_{\odot},R = 9.69 km$ and $b = 0.04$.}\label{fia}
\end{figure}
\begin{figure}[t]
\centering
\resizebox{0.8\hsize}{!}{\includegraphics[width=7cm,height=4.5cm]{eos.eps}}
\caption{Equation of state parameter profiles of PSR J1614-2230 for $M = 1.97 M_{\odot},R = 9.69 km$ and $b = 0.04$.}\label{fie}
\end{figure}
\section{Stability and equilibrium of the model }
\subsection{Equilibrium under various forces}
The equilibrium of the configuration under three forces, viz. the gravitational, hydrostatic and anisotropic forces, can be analyzed through the generalized Tolman-Oppenheimer-Volkoff (TOV) equation, which is given by
\begin{equation}
-\frac{M_g(r)(\rho+p_r)}{r}e^{\frac{\nu-\lambda}{2}}-\frac{dp_r}{dr}+\frac{2}{r}(p_t-p_r)=0, \label{to1}
\end{equation}
where $M_g(r)$ represents the gravitational mass within the radius $r$, which can be derived from the Tolman-Whittaker formula and the Einstein field equations, and is defined by
\begin{eqnarray}
M_g(r) &=& 4 \pi \int_0^r \big(T^t_t-T^r_r-T^\theta_\theta-T^\phi_\phi \big) r^2 e^{\nu+\lambda \over 2}dr .\label{mg}
\end{eqnarray}
For Eqs. (\ref{dens})-(\ref{prt}), the above Eq. (\ref{mg}) reduces to
\begin{equation}
M_g(r)=\frac{1}{2}re^{(\lambda-\nu)/2}~\nu'.
\end{equation}
Plugging the value of $M_g(r)$ in equation (\ref{to1}), we get
\begin{equation}
-\frac{\nu'}{2}(\rho+p_r)-\frac{dp_r}{dr}+\frac{2}{r}(p_t-p_r)=0.
\end{equation}
\begin{figure}[t]
\centering
\resizebox{0.8\hsize}{!}{\includegraphics[width=7cm,height=4.5cm]{m-r.eps}}
\caption{M-R diagram for $a=0.001$ and $b = 0.04$.}\label{fim}
\end{figure}
\begin{figure}[t]
\centering
\resizebox{0.8\hsize}{!}{\includegraphics[width=7cm,height=4.5cm]{red.eps}}
\caption{Red-shift profiles of PSR J1614-2230 for $M = 1.97 M_{\odot},R = 9.69 km$ and $b = 0.04$.}\label{fir}
\end{figure}
\begin{figure}[t]
\centering
\resizebox{0.8\hsize}{!}{\includegraphics[width=7cm,height=4.5cm]{ec.eps}}
\caption{Energy conditions of PSR J1614-2230 for $M = 1.97 M_{\odot},R = 9.69 km$ and $b = 0.04$.}\label{fiec}
\end{figure}
\begin{figure}[t]
\centering
\resizebox{0.8\hsize}{!}{\includegraphics[width=7cm,height=4.5cm]{tov.eps}}
\caption{TOV-equation profile of PSR J1614-2230 for $M = 1.97 M_{\odot},R = 9.69 km$ and $b = 0.04$.}\label{fit}
\end{figure}
The above expression may also be written as
\begin{equation}
F_g+F_h+F_a=0,
\end{equation}
where $F_g, F_h$ and $F_a$ represents the gravitational, hydrostatics and anisotropic forces respectively and can be written as,
\begin{eqnarray}
F_g &=& -\frac{\nu'}{2}(\rho+p_r)\\
F_h &=& -\frac{dp_r}{dr}\\
F_a &=& {2\Delta \over r}.
\end{eqnarray}
The profiles of the three forces are plotted in Fig. \ref{fit}, and we can see that the system is in equilibrium.
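The force balance can also be verified numerically by finite differences. A minimal sketch (Python; stand-in profiles, to be replaced by the actual solution) reads:
\begin{verbatim}
import numpy as np

def tov_forces(r, rho, pr, pt, nu):
    Fg = -0.5 * np.gradient(nu, r) * (rho + pr)   # gravitational
    Fh = -np.gradient(pr, r)                      # hydrostatic
    Fa = 2.0 * (pt - pr) / r                      # anisotropic
    return Fg, Fh, Fa

r   = np.linspace(1e-3, 1.0, 400)                 # stand-in profiles only
nu  = np.log(1.0 + 0.2*r**2)
rho = 1.0 - 0.5*r**2
pr  = 0.3*(1.0 - r**2)
pt  = pr + 0.05*r**2
Fg, Fh, Fa = tov_forces(r, rho, pr, pt, nu)
print(np.max(np.abs(Fg + Fh + Fa)))  # ~0 only for an actual solution
\end{verbatim}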
\subsection{Causality and stability condition}
In this section we determine the speed of sound and the stability condition. For a physically acceptable model of an anisotropic fluid sphere, the radial and transverse speeds of sound should be less than 1, which is known as the causality condition. The squared radial ($v_{r}^{2}$) and transverse ($v_{t}^{2}$) speeds of sound can be obtained as
\begin{eqnarray}
v_{r}^{2} = {dp_r \over d\rho}=\alpha~~,~~v_{t}^{2} = {dp_t \over d\rho}.
\end{eqnarray}
The profiles of the radial and transverse sound speeds are plotted in Fig. \ref{fis}; the figure indicates that our model satisfies the causality condition. The stability condition proposed by Abreu \cite{abr07}, i.e. $-1 \le v_t^2-v_r^2 \le 0$, is also satisfied (Fig. \ref{fist}).
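Both checks are easily automated. A minimal sketch (Python; numerical derivatives $dp/d\rho$ via \texttt{np.gradient}, stand-in profiles) reads:
\begin{verbatim}
import numpy as np

def stability_checks(rho, pr, pt):
    vr2 = np.gradient(pr, rho)        # v_r^2 = dp_r/drho
    vt2 = np.gradient(pt, rho)        # v_t^2 = dp_t/drho
    causal = bool(((0 <= vr2) & (vr2 <= 1) &
                   (0 <= vt2) & (vt2 <= 1)).all())
    abreu  = bool(((-1 <= vt2 - vr2) & (vt2 - vr2 <= 0)).all())
    return causal, abreu

rho = np.linspace(1.0, 0.2, 300)      # stand-in profiles only
pr  = 0.25*rho**2
pt  = pr - 0.01*rho
print(stability_checks(rho, pr, pt))  # (True, True)
\end{verbatim}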
\begin{figure}[t]
\centering
\resizebox{0.8\hsize}{!}{\includegraphics[width=7cm,height=4.5cm]{sou.eps}}
\caption{Velocity of sound profiles of PSR J1614-2230 for $M = 1.97 M_{\odot},R = 9.69 km$ and $b = 0.04$. }\label{fis}
\end{figure}
\begin{figure}[t]
\centering
\resizebox{0.8\hsize}{!}{\includegraphics[width=7cm,height=4.5cm]{stab.eps}}
\caption{Stability factor ($v_t^2-v_r^2$) profiles of PSR J1614-2230 for $M = 1.97 M_{\odot},R = 9.69 km$ and $b = 0.04$.}\label{fist}
\end{figure}
\subsection{Adiabatic index and stability condition}
For a relativistic anisotropic sphere the stability is related to the adiabatic index $\Gamma$, the ratio of two specific heats, defined by \cite{cha93},
\begin{equation}
\Gamma_r=\frac{\rho+p_r}{p_r}\frac{dp_r}{d\rho}.
\end{equation}
Here $\Gamma_r>4/3$ gives the condition for the stability of a Newtonian sphere, and $\Gamma_r =4/3$ is the condition for neutral equilibrium proposed by \cite{bon64}. This condition changes for a relativistic isotropic sphere due to the regenerative effect of pressure, which renders the sphere more unstable. For an anisotropic general relativistic sphere the situation becomes more complicated, because the stability depends on the type of anisotropy. For an anisotropic relativistic sphere the stability condition is given by \cite{cha93},
\begin{equation}
\Gamma>\frac{4}{3}+\left[\frac{4}{3}\frac{(p_{ti}-p_{ri})}{|p_{ri}^\prime|r}+\frac{8\pi}{3}\frac{\rho_ip_{ri}}{|p_{ri}^\prime|}r\right]_{max},
\end{equation}
where $p_{ri}$, $p_{ti}$, and $\rho_i$ are the initial radial pressure, tangential pressure and energy density in static equilibrium satisfying (\ref{to1}). The first and last terms inside the square bracket represent the anisotropic and relativistic corrections, respectively; both quantities are positive and increase the unstable range of $\Gamma$ \cite{her92,cha93}. For this solution the adiabatic index is greater than 4/3, and hence the configuration is stable (Fig. \ref{fiad}).
\begin{figure}[t]
\centering
\resizebox{0.8\hsize}{!}{\includegraphics[width=7cm,height=4.5cm]{gamr.eps}}
\caption{Adiabatic index profiles of PSR J1614-2230 for $M = 1.97 M_{\odot},R = 9.69 km$ and $b = 0.04$.}\label{fiad}
\end{figure}
\begin{figure}[t]
\centering
\resizebox{0.8\hsize}{!}{\includegraphics[width=7cm,height=4.5cm]{m-rc.eps}}
\caption{$M-\rho_c$ profiles with $R = 10.86 km$ and $b = 0.04$.}\label{fimr}
\end{figure}
\subsection{Harrison-Zeldovich-Novikov static stability criterion}
The stability analyses of Harrison et al. \cite{har65} and of \cite{zel71} have shown that the adiabatic index of a pulsating star is the same as that of slowly deformed matter. This leads to a stable configuration only if the mass of the star increases with the central density, i.e. $\partial m /\partial \rho_c > 0$, and to an unstable one if $\partial m /\partial \rho_c < 0$.
In our solution, the mass as a function of central density can be written as
\begin{eqnarray}
m (\rho_c) &=& \frac{R}{2} \left(1-\frac{3b}{3b^n R^n+8\pi b \rho_c R^2+3b}\right), \label{mrhc}\\
{\partial m (\rho_c) \over \partial \rho_c} &=& \frac{12 \pi b^2 R^3}{\left[3 b^n R^n+b \left(8 \pi \rho_c R^2+3\right)\right]^2}> 0.
\end{eqnarray}
The satisfaction of the above condition is shown as a plot in Fig. \ref{fimr}.
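Since Eq. (mrhc) is closed-form, the criterion can also be checked directly; a minimal Python sketch (same unit conventions as above, stand-in density range) reads:
\begin{verbatim}
import numpy as np

def mass_of_rhoc(rho_c, R, b, n):     # closed-form m(rho_c) above
    D = 3.0*b**n*R**n + 8.0*np.pi*b*rho_c*R**2 + 3.0*b
    return 0.5*R*(1.0 - 3.0*b/D)

def dm_drhoc(rho_c, R, b, n):
    D = 3.0*b**n*R**n + 8.0*np.pi*b*rho_c*R**2 + 3.0*b
    return 12.0*np.pi*b**2*R**3 / D**2

rho_c = np.linspace(1e-5, 2e-3, 100)  # stand-in range (km^-2)
m = mass_of_rhoc(rho_c, R=10.86, b=0.04, n=7)
print(np.all(np.diff(m) > 0),
      np.all(dm_drhoc(rho_c, R=10.86, b=0.04, n=7) > 0))
\end{verbatim}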
\begin{figure}[t]
\centering
\resizebox{0.8\hsize}{!}{\includegraphics[width=7cm,height=4.5cm]{i-m.eps}}
\caption{Variation of the moment of inertia with respect to the mass for $a=0.001$ and $b=0.04$. The red dots represent $\big(M,I_{max} \big)$ and the blue dots $\big(M_{max},I \big)$.}\label{im}
\end{figure}
\section{Discussion and conclusion}
The solution of Einstein's field equations with $e^{-\lambda} = 1+ar^2$ was presented by Duorah-Ray \cite{duo}; however, Finch-Skea \cite{} pointed out that the Duorah-Ray (DR) solution does not satisfy the field equations. Finch-Skea (FS) therefore corrected the solution, which is hence known as the FS solution. FS not only corrected the DR solution but also carried out extensive work to describe physically realistic neutron stars. The equation of state resulting from the FS solution was also compared with Walecka's relativistic mean-field description and found to be in good agreement.
An interesting result was presented by Bhar et al. \cite{pbh}, showing that, with the assumption of an electric charge and the Adler $g_{tt}$ metric potential in the Karmarkar condition, one is led to the FS $g_{rr}$ metric potential, which is a well-behaved solution, while its neutral counterpart is not.
The current paper generalizes the FS $g_{rr}$ with the higher-order term $b^{n-1}r^n$. We have analysed the behaviour of the solution, showing its well-behaved range with respect to the parameter $n$. It is found that the solution exists and satisfies the causality condition for $n=4,~5$ and within the range $7\le n \le 12$; solutions corresponding to other values are not well behaved. The fulfillment of the static stability criterion signifies that the solution is static and stable, and the satisfaction of the TOV equation implies that the solution is in equilibrium. We have also plotted the M-R diagram for the range $7\le n \le 12$, which shows that the maximum mass increases with $n$. For $n=7$ the maximum mass is 2.643$M_\odot$ with radius 8.976 km, and for $n=12$, $M_{max} =3.063M_\odot$ with radius 10.85 km. The profile of the adiabatic index (see Fig. 9) shows that the equation of state gets stiffer for larger values of $n$, since the central values of $\Gamma_r$ are larger; this increased stiffness leads to an increase in the maximum mass.
The stiffness of an EoS is also linked with the moment of inertia of the compact star. For a star rotating uniformly with angular velocity $\Omega$, the moment of inertia is given by \cite{latt}
\begin{eqnarray}
I = {8\pi \over 3} \int_0^R r^4 (\rho+p_r) e^{(\lambda-\nu)/2} ~{\omega \over \Omega}~dr
\end{eqnarray}
where the rotational drag $\omega$ satisfies Hartle's equation \cite{hart}
\begin{eqnarray}
{d \over dr} \left(r^4 j ~{d\omega \over dr} \right) =-4r^3\omega~ {dj \over dr} .
\end{eqnarray}
with $j=e^{-(\lambda+\nu)/2}$, which has the boundary value $j(R)=1$. An approximation to the moment of inertia $I$ up to the maximum mass $M_{max}$ was given by Bejger and Haensel \cite{bejg} as
\begin{equation}
I = {2 \over 5} \Big(1+x\Big) {MR^2},
\end{equation}
where the parameter $x = (M/R)\cdot km/M_\odot$. For our solution we have plotted $I$ against the mass in Fig. \ref{im}, which shows that as $n$ increases the mass also increases, and the moment of inertia increases up to a certain value of the mass and then decreases. Therefore, we can say that as the moment of inertia increases, the stiffness of the corresponding EoS also increases. Comparing Figs. \ref{fim} and \ref{im}, we can see that the mass corresponding to $I_{max}$ is not equal to $M_{max}$ from the $M-R$ diagram; in fact, it is lower than $M_{max}$ by $\sim 3$\%. This is characteristic of EoSs without any strong high-density softening due to hyperonization or a phase transition to an exotic state \cite{bej}.
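A minimal sketch of the Bejger--Haensel estimate, with $M$ in solar masses and $R$ in km as in the text, reads:
\begin{verbatim}
def moment_of_inertia(M_sun, R_km):
    """Bejger-Haensel: I = (2/5)(1 + x) M R^2, x = (M/R) km/M_sun."""
    x = M_sun / R_km              # (M_sun/km)*(km/M_sun) -> dimensionless
    return 0.4 * (1.0 + x) * M_sun * R_km**2   # in units of M_sun km^2

print(moment_of_inertia(1.97, 9.69))
\end{verbatim}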
\section{Introduction }\label{S:1}
{\bf 1.1.} In this paper a new definition of the convolution
$\int_0^t(t-s)^{\lambda-1}b(s)ds$ with hyper-singular functions is
given. We compare this definition with the one based on distribution theory, \cite{GS}. The function $b(s)$ is assumed to be locally integrable on ${\mathbb R}_+:=[0,\infty)$. This assumption is satisfied
in the Navier-Stokes problem (NSP), see \cite{R691},
Chapter 5, where integral equations of the type
$b(t)=b_0(t)+\int_0^t (t-s)^{\lambda-1}b(s)ds$ with $\lambda=-\frac 1 4$ are of interest. Classically these integral equations do not make sense because the integrals diverge if $\lambda\le 0$. In Sections 1--4 of this paper a new definition of such integrals is given, and the solution
to the integral equation with a hyper-singular kernel is
investigated. These results are used in Section 5, where
the basic results concerning the NSP are obtained.
We analyze the NSP and prove that the NSP is not a physically correct description of the fluid mechanics problem and that the NSP does not have a solution, in general.
The words "in general" mean that if the initial velocity $v_0=0$ and the force $f=0$, then the NSP has the solution
$v(x,t)=0$ which exists for all $t\ge 0$. This meaning
is valid in Theorem 5.2, below.
For future use we define $\Phi_\lambda(t):=
\frac {t^{\lambda-1}}{\Gamma(\lambda)}$ and the convolution
$\Phi_\lambda \star b:=\int_0^t\Phi_\lambda(t-s) b(s)ds$.
Here and below $t:=t_+$, that is, $t=0$ if $t<0$ and $t=t$ if $t\ge 0$.
{\bf 1.2.} Let us give the standard definition of the singular integral
used in the distribution theory. Let
\begin{equation}\label{e100}
J:=\int_0^\infty t^{\lambda-1} \phi(t)dt,
\end{equation}
where the test function $\phi(t)\in C^\infty_0({\mathbb R})$.
Integral \eqref{e100} diverges classically (that is, in the classical sense) if $\lambda\le 0$. It is defined in distribution theory
(for example, in \cite{GS}) as follows:
\begin{equation}\label{e101}
J=\int_0^1 t^{\lambda-1} \phi(t)dt+\int_1^\infty t^{\lambda-1} \phi(t)dt:=j_1+j_2.
\end{equation}
The integral $j_2$ converges classically for any complex $\lambda
\in {\mathbb C}$ and is analytic with respect to $\lambda$.
The integral $j_1$ for $\lambda>0$ converges classically and can be written as
\begin{equation}\label{e102}
j_1=\int_0^1t^{\lambda-1}(\phi(t)-\phi(0))dt+\phi(0)\frac{t^\lambda|_0^1}{\lambda}=\int_0^1t^{\lambda-1}(\phi(t)-\phi(0))dt+\phi(0)/\lambda.
\end{equation}
The right side of \eqref{e102} admits analytic continuation with respect to $\lambda$ from Re$\lambda>0$ to the region Re$\lambda>-1$. Thus, formulas \eqref{e101} and \eqref{e102}
together define integral \eqref{e100} for Re$\lambda>-1$.
The singular integral $J$ has a simple pole at $\lambda=0$,
diverges classically for $-1< \lambda<0$, but is defined
in this region by formulas \eqref{e101} and \eqref{e102}
by analytic continuation with respect to $\lambda$.
This procedure can be continued and $J$ can be defined for
an arbitrarily large fixed negative $\lambda$, $\lambda\neq 0,-1,-2,...$.
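The regularization \eqref{e101}--\eqref{e102} is easy to check numerically. A minimal Python sketch (the test function $\phi$ below is our own choice) compares the classical and the regularized values of $j_1$ for $\lambda>0$ and evaluates the continuation for $\lambda=-\frac 1 4$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

phi = lambda t: np.exp(-t**2)      # our choice of smooth test function

def j1_direct(lam):                # converges classically only for lam > 0
    return quad(lambda t: t**(lam - 1.0) * phi(t), 0.0, 1.0)[0]

def j1_regularized(lam):           # regularized form, valid for Re lam > -1
    val = quad(lambda t: t**(lam - 1.0) * (phi(t) - phi(0.0)), 0.0, 1.0)[0]
    return val + phi(0.0) / lam

print(j1_direct(0.5), j1_regularized(0.5))   # the two values agree
print(j1_regularized(-0.25))                 # analytic continuation
\end{verbatim}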
{\bf 1.3.} Let us define the convolution
\begin{equation}\label{e103}
I(t):=\Phi_\lambda\star b:=\int_0^t\Phi_\lambda(t-s)b(s)ds.
\end{equation}
We assume that $b(t)\in L^1({\mathbb R}_+)$ and the Laplace transform
of $b$ is defined for Re$p\ge 0$ by the formula
\begin{equation}\label{e104}
L(b):=\int_0^\infty e^{-pt}b(t)dt.
\end{equation}
Let us define $L(t^{\lambda-1})$ not using the distribution theory. For $\lambda>0$ one has:
\begin{equation}\label{e105}
L(t^{\lambda-1})=\int_0^\infty e^{-pt}t^{\lambda-1}dt=
\int_0^\infty e^{-s}s^{\lambda-1}ds p^{-\lambda}=\frac {\Gamma(\lambda)}{p^{\lambda}}.
\end{equation}
It follows from \eqref{e105} and from the definition of $\Phi_{\lambda}=\frac {t^{\lambda-1}}{\Gamma(\lambda)}$ that
\begin{equation}\label{e106}
L(\Phi_\lambda)=p^{-\lambda}.
\end{equation}
The gamma function $\Gamma(\lambda)$ is analytic in $\lambda\in {\mathbb C}$ except for the simple poles at $\lambda=-n$, $n=0,1,2....$,
with the residue at $\lambda=-n$ equal to $\frac{(-1)^n}{n!}$.
It is known that
\begin{equation}\label{e107}
\Gamma(z+1)=z\Gamma(z),\quad \Gamma(z)\Gamma(1-z)=\frac \pi {\sin (\pi z)},\quad 2^{2z-1}\Gamma(z)\Gamma(z+\frac 1 2)=\pi^{1/2}\Gamma(2z).
\end{equation}
The function $\frac 1 {\Gamma(\lambda)}$ is an entire function of $\lambda$.
These properties of $\Gamma(z)$ can be found, for example, in \cite{L}.
The right side of \eqref{e105} is analytic with respect to $\lambda\in{\mathbb C}$ except for $\lambda=0,-1,-2,....$ and, therefore, defines $L(t^{\lambda-1})$ for all these $\lambda$ by analytic continuation with respect to $\lambda$ without
using the distribution theory.
Let us define the convolution $I(t)$ using its Laplace transform
\begin{equation}\label{e108}
L(I(t))=L(\Phi_\lambda)L(b)=\frac {L(b)}{p^{\lambda}}
\end{equation}
and its inverse:
\begin{equation}\label{e109}
I(t)=L^{-1}\big(L(b)p^{-\lambda}\big),
\end{equation}
where $L^{-1}$ is the inverse of the Laplace transform.
Since the null-space of $L$ is trivial, that is, the zero element, the inverse $L^{-1}$ is well defined on the range of $L$.
For $-1<\lambda<0$, in particular for $\lambda=-\frac 1 4$, formula \eqref{e109} can be interpreted as a generalized Fourier integral.
The value $\lambda=-\frac 1 4$ is very important in the NSP, see monograph \cite{R691}, Chapter 5, and Section 5 below.
We return to this question later, when we discuss the integral equations with hyper-singular integrals.
{\bf 1.4.} Let us now prove the following result that will be
used later.
{\bf Theorem 1.1.} {\em One has
\begin{equation}\label{e110}
\Phi_\lambda \star \Phi_\mu=\Phi_{\lambda+\mu}.
\end{equation}
for any $\lambda, \mu\in {\mathbb C}$. If $\lambda+\mu=0$ then
\begin{equation}\label{e111}
\Phi_0(t)=\delta(t),
\end{equation}
where $\delta(t)$ is the Dirac distribution.
}
{\em Proof.} By formulas \eqref{e106} and \eqref{e108} with $b(t)=\Phi_\mu(t)$ one gets
\begin{equation}\label{e112}
L(\Phi_\lambda\star \Phi_\mu)=\frac 1 {p^{\lambda+\mu}}.
\end{equation}
By formula \eqref{e106} one has
\begin{equation}\label{e113}
L^{-1}\Big(\frac 1{p^{\lambda+\mu}}\Big)=\Phi_{\lambda+\mu}.
\end{equation}
This proves formula \eqref{e110}.
If $\lambda+\mu=0$ then
\begin{equation}\label{e113a}
p^{-(\lambda+\mu)}=1, \quad L^{-1}1=\delta(t).
\end{equation}
This proves formula \eqref{e111}.
Theorem 1.1 is proved. \hfill$\Box$
{\bf Remark 1.1.} Let us give an alternative proof of formula \eqref{e110}.
For Re$\lambda>0$, Re$\mu>0$ one has
\begin{equation}\label{e114}
\Phi_\lambda\star \Phi_\mu=
\frac 1 {\Gamma(\lambda)\Gamma(\mu)}
\int_0^t(t-s)^{\lambda -1}s^{\mu -1}ds
=\frac{t_+^{\lambda+\mu-1}}{\Gamma(\lambda)\Gamma(\mu)}\int_0^1
(1-u)^{\lambda-1}u^{\mu -1}du=\frac{t_+^{\lambda+\mu -1}}{\Gamma(\lambda+\mu)},
\end{equation}
where the right side of \eqref{e114} is equal to $\Phi_{\lambda+\mu}$ and
we have used the known formula for the
beta function:
\begin{equation}\label{e115}
B(\lambda, \mu):=\int_0^1u^{\lambda -1}(1-u)^{\mu -1}du=
\frac{\Gamma(\lambda)\Gamma(\mu)}
{\Gamma(\lambda+\mu)}.
\end{equation}
Analytic properties of the beta function
follow from those of the gamma function.
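For classically convergent exponents the identity \eqref{e110} can be verified by direct quadrature; a minimal Python sketch (a single sample point $t$, integrable endpoint singularities handled by \texttt{scipy.integrate.quad}) reads:
\begin{verbatim}
from scipy.integrate import quad
from scipy.special import gamma

def Phi(lam, t):
    return t**(lam - 1.0) / gamma(lam)

def conv(lam, mu, t):
    return quad(lambda s: Phi(lam, t - s) * Phi(mu, s), 0.0, t)[0]

lam, mu, t = 0.75, 0.5, 2.0
print(conv(lam, mu, t), Phi(lam + mu, t))  # both equal Phi_{lam+mu}(t)
\end{verbatim}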
{\bf Remark 1.2.} Theorem 1.1 is proved in \cite{GS}, pp.150--151. Our proof differs from the proof in \cite{GS}. It is not clear how the proof in \cite{GS} is related to the definition of regularized hyper-singular integrals used in \cite{GS}.
\section{Preparation for investigation of integral equations\newline with hyper-singular kernels }\label{S:2}
In this section we start an investigation of equations of the following type
\begin{equation}\label{e1}
b(t)=b_0(t)+c\int_0^t (t-s)^{\lambda -1}b(s)ds,
\end{equation}
where $b_0$ is a smooth functions rapidly decaying with all its derivatives as $t\to \infty$, $b_0(t)=0$ if $t<0$. We are especially interested in the value $\lambda=-\frac 1 4$, because of its importance for the Navier-Stokes theory, \cite{R691}, Chapter 5, \cite{R684}, \cite{R700}.
{\em The integral in \eqref{e1} diverges in the classical sense for $\lambda\le 0$. Our aim is to define this hyper-singular integral.}
There is a regularization method to define
singular integrals $J:=\int_{{\mathbb R}}t_+^{\lambda-1}\phi(t)dt$, $\lambda\le 0$, in the distribution theory, see the Introduction, Section 1.2. The integral in \eqref{e1} is a convolution, which is defined in
\cite{GS}, p.135, as a {\em direct product
of two distributions}.
This definition
{\em is not suitable} for our purposes because $t_+^{\lambda-1}$ for any $\lambda\le 0$, $\lambda\neq 0,-1,-2,...$ is a distribution
on the space $\mathcal{K}:=C^\infty_0(\mathbb{R}_+)$ of the test functions, but
{\em it is not a distribution in the
space of the test
functions $K:=C^\infty_0({\mathbb R})$ used in \cite{GS}}.
Indeed, one can find a sequence $\phi_n\in K$ with
$\lim_{n\to \infty}\phi_n=\phi$ in
$K$, but $\lim_{n\to \infty}\int_{{\mathbb R}}
t_+^{\lambda-1} \phi_n(t)dt=\infty$
for $\lambda\le 0$, so that
$t_+^{\lambda-1}$ is not a bounded linear functional on $K$, i.e., not a distribution.
For example, the integral
$\int_0^\infty t^{\lambda-1}\phi(t)dt$ is not a bounded linear functional on $K$: take a $\phi$ which is vanishing for $t>1$, positive near $t=0$ and non-negative on $[0,1]$. Then this integral diverges at such $\phi$ and is not a bounded linear functional on $K$.
On the other hand, one can check that
$t_+^{\lambda-1}$ for any $\lambda\in R$
is a distribution (a bounded linear functional) in the space
$\mathcal{K}=C^\infty_0({\mathbb R}_+)$ with
the convergence $\phi_n\to \phi$ in
$\mathcal{K}$ defined by the following requirements:
a) the supports of all $\phi_n$ belong to an interval $[a,b]$,
$0<a\le b<\infty$,
b) $\phi_n^{(j)}\to
\phi^{(j)}$ in $C([a,b])$ for all $j=0,1,2,....$.
Indeed, the functional
$\int_0^\infty t_+^\lambda\phi(t)dt$ is linear and bounded in
$\mathcal{K}$:
\begin{equation}\label{e2a}
|\int_0^\infty t_+^\lambda\phi_n(t)dt|\le (a^\lambda+b^\lambda)
\int_a^b |\phi_n(t)|dt.
\end{equation}
A similar estimate holds for all the derivatives of $\phi_n$.
{\em Although $t_+^{-\frac 5 4}$ is a distribution in $\mathcal{K}$, the convolution
\begin{equation}\label{e2}
h:=\int_0^t(t-s)^{-\frac 5 4}b(s)ds:=
t_+^{-\frac 5 4}\star b
\end{equation}
cannot be defined similarly to the definition in
the book \cite{GS} because the function\newline
$\int_0^\infty \phi(u+s) b(s)ds$
does not, in general, belong to $\mathcal{K}$ even if
$\phi\in \mathcal{K}$.}
Let us define the convolution $h$ using the Laplace transform
\eqref{e105}. Laplace transform of distributions is studied in \cite{BP}. There one finds a definition of the Laplace transform of distributions, the Laplace transform of convolutions, tables of the Laplace transforms of distributions, in particular, formula \eqref{e105} and other information.
One has
\begin{equation}\label{e211}
L(t_+^{-\frac 5 4}\star b)=L(t_+^{-\frac 5 4})L(b).
\end{equation}
To define $L(t^{\lambda-1})$ for $\lambda\le 0$, note that for
Re$\lambda>0$ the classical definition \eqref{e105}
holds. The right side of \eqref{e105}
admits analytic continuation to the
complex plane of $\lambda$, $\lambda\neq 0,-1,-2,....$. This allows one to define integral \eqref{e105}
for any $\lambda\neq 0,-1,-2,...$.
It is known
that $\Gamma(z+1)=z\Gamma(z)$, so
\begin{equation}\label{e3a}
\Gamma(-\frac 1 4)=-4\Gamma(3/4):=-c_1, \quad c_1>0.
\end{equation}
Therefore, {\em we define}
$h$ by the formula $h=L^{-1}(Lh)$ and defining $L(h)$ as follows:
\begin{equation}\label{e4}
L(h)=-c_1p^{\frac 1 4}L(b),
\end{equation}
where formula \eqref{e105} with $\lambda=-\frac 1 4$
was used and we assume that $b$ is such that $L(b)$ can be defined. That $L(b)$ is well defined in the Navier-Stokes theory follows from the a priori estimates proved in \cite{R691}, Chapter 5 and in Section 5 below, see Theorem 5.1.
From \eqref{e4} one gets
\begin{equation}\label{e4a}
L(b)=-c_1^{-1}p^{-\frac 1 4} L(h).
\end{equation}
\section{Integral equation}\label{S:3}
Consider equation \eqref{e1}. It can be rewritten as
\begin{equation}\label{e1a}
b(t)=b_0(t)-cc_1\Phi_{\lambda}\star b,
\end{equation}
where
\begin{equation}\label{e1a1}
c_1=|\Gamma(-\frac 1 4)|, \quad \lambda=-\frac 1 4.
\end{equation}
{\bf Theorem 3.1.} {\em Equation \eqref{e1a}-\eqref{e1a1}
has a unique solution in $C(0,T)$ for any $T>0$ if $b_0$
is sufficiently smooth and rapidly decaying as $t$ grows. This solution can be obtained by iterations:}
\begin{equation}\label{e8}
b_{n+1}=-(cc_1)^{-1}\Phi_{1/4}\star b_{n} +(cc_1)^{-1}\Phi_{1/4}\star b_0, \quad
b_{n=0}=
(cc_1)^{-1}\Phi_{1/4}\star b_0, \quad b=\lim_{n\to \infty}b_n.
\end{equation}
{\em Proof.} Applying to equation \eqref{e1a} the operator $\Phi_{1/4}\star$ and using equation \eqref{e111}
one gets a Volterra-type equation
\begin{equation}\label{e8a}
\Phi_{1/4}\star b=\Phi_{1/4}\star b_0-cc_1b,
\end{equation}
or
\begin{equation}\label{e9}
b=-(cc_1)^{-1}\Phi_{1/4}\star b +(cc_1)^{-1}\Phi_{1/4} \star b_0.
\end{equation}
The operator $\Phi_\lambda\star$ with
$\lambda>0$ is a Volterra-type operator. Therefore
equation \eqref{e9} can be solved for $b$ by iterations, see Lemma 3.1 below and \cite{R691}, p.53,
Lemmas 5.10 and 5.11.
If $b_0\ge 0$ and $cc_1$ is sufficiently large, then
the solution to \eqref{e1a} is non-negative, $b\ge 0$,
see Remark 3.1 below.
Theorem 3.1 is proved.\hfill$\Box$
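A minimal numerical sketch of the iteration \eqref{e8} is given below (Python). The weakly singular convolution $\Phi_{1/4}\star b$ is discretized by product integration: $b$ is frozen on each cell and the kernel is integrated exactly, $\int_{s_j}^{s_{j+1}}(t-s)^{-3/4}ds=4[(t-s_j)^{1/4}-(t-s_{j+1})^{1/4}]$. The data $b_0$ and the constant $c$ are stand-in choices:
\begin{verbatim}
import numpy as np
from scipy.special import gamma

T, N = 5.0, 500
t = np.linspace(0.0, T, N + 1)
c, c1 = 2.0, abs(gamma(-0.25))

def conv_phi14(b):
    """(Phi_{1/4} * b)(t_i): b piecewise constant, kernel exact."""
    out = np.zeros_like(b)
    for i in range(1, len(t)):
        sl, sr = t[:i], t[1:i + 1]
        w = 4.0 * ((t[i] - sl)**0.25 - (t[i] - sr)**0.25) / gamma(0.25)
        out[i] = np.dot(w, b[:i])
    return out

b0 = np.exp(-t) * t**2                # stand-in data
rhs = conv_phi14(b0) / (c * c1)
b = rhs.copy()
for _ in range(50):                   # fixed-point iteration
    b = -conv_phi14(b) / (c * c1) + rhs
print(b[0], b[N])                     # b(0) = 0, cf. Remark 3.2
\end{verbatim}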
For convenience of the reader let us
prove the result about solving equation \eqref{e9} by iterations, mentioned above.
{\bf Lemma 3.1.} {\em The operator $Af:=\int_0^t(t-s)^pf(s)ds$ in the space
$X:=C(0,T)$ for any fixed $T\in [0,\infty)$ and $p>-1$ has spectral radius $r(A)$
equal to zero, $r(A)=0$. The equation
$f=Af+g$ is uniquely solvable in $X$. Its solution can be obtained by iterations
\begin{equation}\label{e9a}
f_{n+1}=Af_n+g, \quad f_0=g; \quad
\lim_{n\to \infty}f_n=f,
\end{equation}
for any $g\in X$ and the convergence
holds in $X$.}
{\em Proof.} The spectral radius
of a linear operator $A$ is defined
by the formula $$r(A)=\lim_{n\to \infty}\|A^n\|^{1/n}.$$ By induction one proves that
\begin{equation}\label{e9b}
|A^nf|\le t^{n(p+1)}\frac{\Gamma^n(p+1)}{\Gamma(n(p+1)+1)}\|f\|_X, \quad n\ge 1.
\end{equation}
From this formula and the known asymptotic of the gamma function
the conclusion $r(A)=0$ follows.
The convergence result \eqref{e9a}
is analogous to the well-known
statement under the assumption $\|A\|<1$.
A more detailed argument can be found in \cite{R691}, p.53.
Lemma 3.1 is proved. \hfill$\Box$
{\bf Remark 3.1.} {\em If $c>0$ is sufficiently large, then
the norm of the operator $B:=(cc_1)^{-1}\Phi_{1/4}\star$ in $C(0,T)$
is less than one: $ \|B\|<1$. In this case, $(I-B)^{-1}=\sum_{j=0}^\infty B^j$ is a positive operator.}
Let us now give another approach to solving integral equation \eqref{e1a} with $\lambda=-\frac 1 4$.
{\bf Theorem 3.2.} {\em The solution to equation \eqref{e1a}
with $\lambda=-\frac 1 4$ does exist, is unique, and belongs to $C({\mathbb R}_+)$ provided that $b_0(t)\in C({\mathbb R}_+)$ and $ |b_0(t)|+
|b'(t)|\le c(1+t)^{-2}$.}
{\em Proof.} Take the Laplace transform of equation \eqref{e1a}
with $\lambda=-\frac 1 4$, use formula \eqref{e106} to get
\begin{equation}\label{20}
L(b)=L(b_0)-cc_1p^{1/4}L(b).
\end{equation}
Thus,
\begin{equation}\label{21}
L(b)=\frac {L(b_0)}{1+cc_1p^{1/4}}
\end{equation}
Therefore
\begin{equation}\label{22}
b(t)=L^{-1}\Big(\frac {L(b_0)}{1+cc_1p^{1/4}}\Big).
\end{equation}
Let us check that
\begin{equation}\label{23}
\max_{t\ge 0}|b(t)|\le c.
\end{equation}
From our assumptions about $b_0(t)$ it follows that $|L(b_0)|
\le c(1+|p|)^{-1}$, Re$p\ge 0$. Let $p=iw$.
Since $b(t)=(2\pi)^{-1}\int_{-\infty}^{\infty}e^{iwt}L(b)dw$, one gets
\begin{equation}\label{24}
|b(t)|\le\frac c {2\pi}\int_{-\infty}^{\infty}(1+|w|)^{-1}
|1+cc_1(iw)^{1/4}|^{-1}dw<c_2,
\end{equation}
where $c_2>0$ is some constant.
Here we have used the inequality
\begin{equation}\label{24a}
\sup_{w\in {\mathbb R}}|1+cc_1(iw)^{1/4}|^{-1}\le c.
\end{equation}
Recall that by $c>0$ various constants are denoted.
Let us check \eqref{24a} for $w\ge 0$. For $w<0$ the argument is similar. One has $(iw)^{1/4}=e^{i\pi/8}w^{1/4}$, $$J:=\frac 1 {|1+C
\cos(\pi/8)+iC\sin(\pi/8)|},$$
where $C:=cc_1w^{1/4}>0$. Therefore,
$$J^{-2}=[1+C\cos(\pi/8)]^2+C^2\sin^2(\pi/8)=1+C^2+2C\cos(\pi/8)\ge 1+C^2,$$
so $J\le (1+C^2)^{-1/2}$. Consequently, inequality \eqref{e24a}
is checked; moreover, since $C=cc_1w^{1/4}$, the integrand in \eqref{24} decays like $|w|^{-5/4}$ as $|w|\to\infty$, so the integral converges.
Theorem 3.2 is proved. \hfill$\Box$
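Formula \eqref{22} can be evaluated numerically along $p=iw$. The following minimal Python sketch uses the stand-in choice $b_0(t)=e^{-t}$, so $L(b_0)=1/(p+1)$, and a truncated integration range, so the accuracy is limited; it also illustrates Remark 3.2 below, $b(0)=0$:
\begin{verbatim}
import numpy as np
from scipy.special import gamma

c, c1 = 1.0, abs(gamma(-0.25))

# Grid along p = i w; the even point count avoids w = 0 exactly.
w = np.linspace(-2.0e3, 2.0e3, 400000)
p = 1j * w
Lb0 = 1.0 / (p + 1.0)                 # L(b0) for b0(t) = exp(-t)
Lb = Lb0 / (1.0 + c * c1 * p**0.25)   # principal branch of p^{1/4}

def b_of_t(t):
    return np.trapz(np.exp(p * t) * Lb, w).real / (2.0 * np.pi)

print(b_of_t(0.0), b_of_t(1.0))       # b(0) ~ 0, cf. Remark 3.2
\end{verbatim}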
{\bf Remark 3.2.} {\em It follows from formula \eqref{e9}
that $b(0)=0$ because $\lim_{t\to 0}\Phi_{\frac 1 4}\star b_0=0$
and $\lim_{t\to 0}\Phi_{\frac 1 4}\star b=0$, which holds if $b$ is a locally integrable function bounded on ${\mathbb R}_+$.}
\section{Integral inequality}\label{S:4}
Consider the following inequality
\begin{equation}\label{e7}
q(t)\le b_0(t)+ct_+^{\lambda-1}\star q=b_0(t)-cc_1\Phi_{-\frac 1 4}\star q,
\end{equation}
where $c_1=-\Gamma(-\frac 1 4)$ for $\lambda=-\frac 14$.
Let $f=f(t)\in L^1({\mathbb R}_+)$ be some function. If
$q\le f$ then $\Phi_{1/4}\star q\le \Phi_{1/4}\star f.$
Therefore, inequality \eqref{e7} with $\lambda=-\frac 1 4$, after applying to both sides the operator $\Phi_{1/4}\star$, implies
\begin{equation}\label{e9c}
q\le -(cc_1)^{-1}\Phi_{1/4}\star q +(cc_1)^{-1}\Phi_{1/4}\star b_0.
\end{equation}
Inequality \eqref{e9c} for sufficiently large $c>0$ can be solved by iterations with the initial term
$(cc_1)^{-1}\Phi_{1/4}\star b_0$, see Remark 3.1. This
yields
\begin{equation}\label{e11}
q(t)\le b(t),
\end{equation}
where $b$ solves the integral equation \eqref{e1a}.
This follows from Theorem 4.1.
{\bf Theorem 4.1.} {\em Assume that $b$ solves \eqref{e1a}, $c>0$ is sufficiently large, $b_0(t)$ satisfies conditions stated in Theorem 3.2 and $q\ge 0$
solves inequality \eqref{e7}. Then inequality \eqref{e11} holds.}
{\em Proof.} Denote $z:=b-q$, where
\begin{equation}\label{e9cd}
b=-(cc_1)^{-1}\Phi_{1/4}\star b +(cc_1)^{-1}\Phi_{1/4}\star b_0.
\end{equation}
Then
\begin{equation}\label{e11a}
0\le z+(cc_1)^{-1}\Phi_{\frac 1 4}\star z.
\end{equation}
Solving this inequality by iterations and using Remark 3.1 one obtains \eqref{e11}. If $c>0$ is arbitrary, then this argument yields inequality \eqref{e11} for sufficiently small $t>0$ because the norm of the operator $(cc_1)^{-1}\Phi_{1/4}\star$ tends to zero when $t\to 0$.
Theorem 4.1 is proved. \hfill$\Box$
Papers \cite{R677}, \cite{R698}, \cite{R704} also deal with hyper-singular integrals.
\section{Application to the Navier-Stokes problem}\label{S:5}
In this Section we apply the results of Sections 1--4
to the Navier-Stokes problem. Especially the results of
Sections 3 and 4 will be used.
The Navier-Stokes problem
(NSP) in ${\mathbb R}^3$ is discussed in many books and papers ( see \cite{R691}, Chapter 5, and references therein).
The uniqueness of a solution in ${\mathbb R}^3$ was proved in \cite{La}, \cite{R691} and in \cite{R700} in different norms.
The existence of the solution to the NSP is discussed
in \cite{R691}.
The goal of this Section is to prove that the statement of the NSP is contradictory. Therefore, the NSP is not a
physically correct statement of the problem of fluid mechanics.
We prove that the solution to the NSP does not exist,
in general. Therefore, in this Section a negative solution to one of the millennium problems is given.
{\em What is a physically correct statement of problems
of fluid mechanics remains an open problem.}
We prove {\em the paradox in the NSP}. This paradox can be described as follows:
{\em One can have initial velocity $v(x,0)>0$ in the NSP and, nevertheless, the
solution $v(x,t)$ to this NSP must have the zero initial velocity: $v(x,0)=0$. }
This paradox proves that the statement of the NSP is contradictory, that the NSP is not a physically correct statement of the fluid mechanics problem and the solution to the NSP does not exist, in general.
The NSP in ${\mathbb R}^3$ consists of solving the equations
\begin{equation}\label{e501} v'+(v, \nabla)v=-\nabla p +\nu\Delta v +f, \quad x\in {\mathbb R}^3,\,\, t\ge 0,\quad \nabla \cdot v=0,\quad v(x,0)=v_0(x),
\end{equation}
see, for example, books \cite{La} and \cite{R691}, Chapter 5.
The vector functions $v=v(x,t)$ (the velocity) and $f=f(x,t)$ (the exterior force) and the scalar function $p=p(x,t)$,
the pressure, are assumed to decay as $|x|\to \infty$ for
$t\in {\mathbb R}_+:=[0, \infty)$.
The derivative with respect to time is denoted $v':=v_t$,
$\nu=const>0$ is the viscosity coefficient, the velocity $v=v(x,t)$ and the pressure $p=p(x,t)$ are unknown, $v_0=v(x,0)$ and $f(x,t)$ are known. It is assumed that $ \nabla \cdot v_0=0$.
Equations \eqref{e501} describe viscous incompressible fluid with density $\rho=1$.
Let us assume for simplicity that $f=0$. This does not change our arguments or our logic.
The solution to NSP \eqref{e501} solves the integral equation:
\begin{equation}\label{e502} v(x,t)=F- \int_0^tds \int_{{\mathbb R}^3} G(x-y,t-s)(v,\nabla)v dy,
\end{equation}
where $(v,\nabla)v=v_j v_{p,j}$, over the repeated indices summation is assumed and
$v_{p,j}:=\frac {\partial v_p}{\partial x_j}$.
Equation \eqref{e502} implies an integral inequality
of the type studied in Sections 3 and 4 (see also \cite{R691}, Chapter 5).
Formula for the tensor $G=G(x,t)=G_{pm}(x,t)$
is derived in \cite{R691}, p.41:
\begin{equation}\label{e5021}
G(x,t)=(2\pi)^{-3}\int_{{\mathbb R}^3} e^{i\xi \cdot x}\Big( \delta_{pm}-\frac {\xi_p \xi_m}{\xi^2}\Big)e^{-\nu \xi^2 t}d\xi.
\end{equation}
The term $F=F(x,t)$, in our case when $f=0$, depends only on the data $v_0$ (see formula (5.42) in \cite{R691}):
\begin{equation}\label{e503}
F(x,t):=\int_{{\mathbb R}^3}g(x-y,t)v_0(y)dy,
\end{equation}
where
\begin{equation}\label{e503'}
g(x,t)=\frac{e^{-\frac{|x|^2}{4\nu t}}}{(4\pi\nu t)^{3/2}}, \quad t>0;
\quad g(x,t)=0, \quad t\le 0.
\end{equation}
We assume throughout that
\begin{equation}\label{e503"}
v_0=v(x,0)>0
\end{equation}
is such that
$F$ is bounded in all the norms we use.
Let us use the Fourier transform:
\begin{equation}\label{e5044}
\tilde{v}:=\tilde{v}(\xi,t):=(2\pi)^{-3}\int_{{\mathbb R}^3}v(x,t)e^{-i\xi \cdot x}dx.
\end{equation}
Fourier transform equation \eqref{e502} and get the integral equation:
\begin{equation}\label{e504}
\tilde{v}(\xi,t)=\tilde{F}(\xi,t)-\int_0^tds \tilde{G}(\xi,t-s) \tilde{v}\bigstar (i\xi \tilde{v}),
\end{equation}
where $\bigstar$ denotes the convolution in ${\mathbb R}^3$.
For brevity we omitted the tensorial indices:
instead of $\tilde{G}_{mp}\tilde{v}_j\bigstar (i\xi_j)\tilde{v}_p$, where one sums up over the repeated indices, we wrote
$ \tilde{G} \tilde{v}\bigstar (i\xi \tilde{v})$.
From formula (5.9) in \cite{R691}, see formula \eqref{e5021} one gets:
\begin{equation}\label{e5050}
\tilde{G}(\xi,t)=(2\pi)^{-3}\Big( \delta_{pm}-\frac {\xi_p \xi_m}{\xi^2}\Big)e^{-\nu \xi^2 t}.
\end{equation}
One has $|\delta_{pm}-\frac {\xi_p \xi_m}{\xi^2}|\le c$.
Therefore,
\begin{equation}\label{e505}
|\tilde{G}(\xi,t-s)|\le ce^{-\nu (t-s) \xi^2}.
\end{equation}
We denote by $c>0$ {\em various constants} independent of $t$ and $\xi$, by $\|\tilde{v}\|$ the norm in $L^2({\mathbb R}^3)$ and by $(v,w)$ the inner product in $L^2({\mathbb R}^3)$.
Let us introduce the norm
\begin{equation}\label{e505a}
\|v\|_1:=\|v\|+\|\nabla v\|.
\end{equation}
One has
\begin{equation}\label{e505b} (2\pi)^{3/2}\|\tilde{v}\|= \|v\|, \quad (2\pi)^3\||\xi|\tilde{v}\|^2=\|\nabla v\|^2,
\end{equation}
by the Parseval equality.
{\bf Assumption A.} {\em Assume that $F(x,t)$ is a smooth function rapidly decaying together with all its derivatives. In particular,
$$\sup_{t\ge 0}\Big((1+t^m)\|F(x,t)\|_1\Big)+\sup_{t\ge 0, \xi\in {\mathbb R}^3}\left((1+t^m+|\xi|^m)|\tilde{F}(\xi,t)|\right)<c, \quad m=1,2,3.$$ }
Assumption A holds throughout Section 5 and is not repeated. It is known that
\begin{equation}\label{e507a}
\sup_{t\ge 0}\left(\|v\|+\int_0^t \|\nabla v\|^2ds\right)<c, \quad \sup_{t\ge 0}\int_0^t \|\tilde{v}|\xi|\|^2ds<c,
\end{equation}
\begin{equation}\label{e507}
\sup_{t\ge 0}(|\xi||\tilde{v}(\xi,t)|)<\infty,
\quad |\tilde{v}(\xi,t)|\le c(1+t^{1/2}), \quad \sup_{t\ge 0}\|\nabla v\|< \infty,
\end{equation}
see \cite{R691}, p.52.
{\bf Theorem 5.1.} {\em Inequalities \eqref{e507a}--\eqref{e507} hold. }
{\bf Theorem 5.2.} {\em The NSP \eqref{e501} does not have a solution, in general.}
{\bf Proof of Theorem 5.1.} Proof of Theorem 5.1 can be found in \cite{R691}. Because of the importance
of the third inequality \eqref{e507} and of its novelty, we give its proof
in detail.
Let $|\tilde{v}(\xi,t)|:=u$, $|\tilde{F}|:=\mu(\xi,t):=\mu$. From equation \eqref{e504} one gets:
\begin{equation}\label{e514a}
u\le \mu+c\int_0^t e^{-\nu (t-s)\xi^2}\|u\|\||\xi|u\|ds\le \mu+c\int_0^t e^{-\nu (t-s)\xi^2}b(s)ds, \quad b(s):=\||\xi|u\|,
\end{equation}
where the Parseval formula
\begin{equation}\label{e51410}
(2\pi)^{3/2} \|\tilde{v}\|=\|v\|<c
\end{equation}
was used.
By direct calculation one derives the following inequality:
\begin{equation}\label{e51411}
\| e^{-\nu (t-s)\xi^2}|\xi|\|\le c(t-s)^{-\frac 5 4}.
\end{equation}
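For the reader's convenience we record this direct calculation, which is a standard Gaussian integral (the constant depends only on $\nu$):
$$\| e^{-\nu (t-s)\xi^2}|\xi|\|^2=\int_{{\mathbb R}^3}|\xi|^2 e^{-2\nu (t-s)|\xi|^2}d^3\xi=4\pi\int_0^\infty k^4 e^{-2\nu (t-s)k^2}dk=\frac{3\pi^{3/2}}{2}\big[2\nu (t-s)\big]^{-5/2},$$
so that $\| e^{-\nu (t-s)\xi^2}|\xi|\|\le c(t-s)^{-\frac 5 4}$.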
It follows from this inequality and from \eqref{e514a} by multiplying by $|\xi|$
and taking the norm $\|\cdot\|$ of the resulting inequality that the following integral inequality holds:
\begin{equation}\label{e514}
b(t)\le b_0(t)+c\int_0^t (t-s)^{-\frac 5 4}b(s)ds,
\end{equation}
where
\begin{equation}\label{e51412}
b_0(t):=\||\xi|\mu(\xi,t)\|,
\quad b(s):=\||\xi|u\|.
\end{equation}
The function $b_0(t)$ is smooth and rapidly decaying due to Assumption A.
Let $\beta$ solve the following equation:
\begin{equation}\label{e514b}
\beta(t)=b_0(t)+c\int_0^t (t-s)^{-\frac 5 4}\beta(s)ds.
\end{equation}
Equation \eqref{e514b} can be written as
\begin{equation}\label{e514bb}
\beta(t)=b_0(t)-cc_1\Phi_{-\frac 14}\star \beta,
\end{equation}
where $\star$ denotes the convolution of two functions on ${\mathbb R}_+$ and $c_1=|\Gamma(-\frac 1 4)|$. The convolution
on ${\mathbb R}_+$ was defined in the Introduction. In Section 4 the relation between the solutions to integral equation \eqref{e514bb} and integral inequality \eqref{e514} was
studied and the inequality $b(t)\le \beta(t)$ was proved.
Taking the Laplace transform of equation \eqref{e514b} and using
equation \eqref{e105}
with $\lambda=-\frac 1 4$, we get
\begin{equation}\label{e514cc}
L(\Phi_{-\frac 14}\star \beta)=L(\Phi_{-\frac 14})L(\beta)=p^{1/4} L(\beta),
\end{equation}
so
\begin{equation}\label{e514ccc}
L(\beta)=L(b_0)-cc_1p^{1/4}L(\beta).
\end{equation}
Therefore,
\begin{equation}\label{e514c}
L(\beta)=\frac {L(b_0)}{1+cc_1p^{1/4}}, \quad 0\le b(t)\le \beta(t).
\end{equation}
It follows from \eqref{e514c} that
\begin{equation}\label{e514d}
b(t)\le \beta(t)=\frac 1{2\pi}\int_{-\infty}^{\infty}
e^{i\tau t}\frac {L(b_0)}{1+cc_1(i\tau)^{1/4}}d\tau\le \frac 1{\pi}\int_0^\infty
\frac {|L(b_0)|}{|1+cc_1e^{i\pi/8}\tau^{1/4}|}d\tau\le c,
\end{equation}
where the argument $p$ of the function $L(b_0)$ is equal to $i\tau$,
$p=i\tau$, and we have used the decay
$O(|\tau|^{-1})$ of $|L(b_0)|$ as a function of
$p=i\tau$ as $|\tau|\to \infty$.
This decay
follows from Assumption A and implies that
the integrand in \eqref{e514d} belongs to $L^1({\mathbb R})$
because of the following inequality, proved at the end of Section 3:
\begin{equation}\label{e6111}
\inf_{\tau\in [0,\infty)}|1+cc_1e^{i\pi/8}\tau^{1/4}|>0,
\quad cc_1>0.
\end{equation}
Thus, the third estimate \eqref{e507} of Theorem 5.1 is proved.\hfill$\Box$
{\em Proof of Theorem 5.2.} If $v_0(x)=v(x,0)\not\equiv 0$ and $\nabla \cdot v_0(x)=0$ then
$b_0(0)>0$. Apply to equation \eqref{e514b} the operator $\Phi_{1/4}\star$ and use Theorem 1.1. This yields
\begin{equation}\label{e61}
\Phi_{1/4}\star \beta=\Phi_{1/4}\star b_0-cc_1\beta(t),
\end{equation}
where formula \eqref{e105} was used, $c_1=-\Gamma(-\frac 1 4)>0$ and $\Phi_{\frac 1 4}\star\Phi_{-\frac 1 4}=\delta$, where $\delta$ is the delta-function, see formulas \eqref{e110}--\eqref{e111}. We assume that $b_0(t)$ satisfies Assumption A, so it is smooth and rapidly decaying. Then equation \eqref{e61} can be solved by iterations by Theorem 3.1
and the solution $\beta$ is also smooth. Therefore,
\begin{equation}\label{e71}
\lim_{t\to 0}\Phi_{1/4}\star \beta=0, \quad \lim_{t\to 0}\Phi_{1/4}\star b_0=0.
\end{equation}
Consequently, it follows from \eqref{e61} that
$\beta(0)=0$.
Since $0\le b(t)\le \beta(t)$, one concludes that
\begin{equation}\label{e72}
b(0)=0.
\end{equation}
This result proves that the NSP does not have a solution, in general. Indeed, starting with positive initial data, we have proved that the corresponding solution to the NSP must have initial data equal to zero. This is the {\em NSP paradox},
see \cite{R708}.
Of course, if the data $v_0(x)=v(x,0)=0$
then the solution exists for all $t\ge 0$ and is equal to zero by the uniqueness theorem, see, for example, \cite{R691}, \cite{R700}.
Other paradoxes of the theory of fluid mechanics are mentioned in \cite{La}.
Theorem 5.2 is proved. \hfill$\Box$
\newpage
\section{Introduction}
\label{sec:1}
Starting with the photoelectric effect, photon-matter interaction has
been studied for over 100 years. With the establishment of special
relativity and quantum theory, scientists can carry out many accurate calculations
to describe how photons are absorbed and emitted and how electrons
are ionized and captured. Most of the early work was based
on perturbative techniques, as light sources were so weak that only
single-photon effects were important. This accuracy was maintained until
the invention of chirped pulse amplification (CPA) for lasers in the 1980s.
Since then, the laser power density has increased by 8 orders of magnitude,
approaching $10^{22}$ W$\cdot$cm$^{-2}$, which is stronger than
the direct ionization threshold of $10^{16}\sim10^{18}$ W$\cdot$cm$^{-2}$
physics, e.g., multiphoton ionization, above threshold ionization
(ATI), high harmonic generation (HHG) and stabilization, which play
a major role in modern high energy density physics, experimental astrophysics,
attosecond physics, strong field electrodynamics and controlled fusion
etc. \cite{Voronov1965,Keldysh1965,Faisal1973,Agostini1979,Reiss1980,Gontier1980,McPherson1987,Gallagher1988,Eberly1991,Pont1990,Krause1992,Eberly1993,Corkum1993,Lewenstein1994,Birula1994,Bao1996,Spielmann1997,Sali2001,Kienberger2004,Drake2006,Brabec2008,Kim2008,Smirnova2009,Le2009,Goulielmakis2010,Nepstad2010,Birkeland2010,Popmintchev2012,Piazza2012,Becker2012,Madsen2012,Argenti2013,Yuan2013,Guo2013,Klaiber2013,Vampa2014,Popruzhenko2014,Popmintchev2015,Kfir2015,Luu2015,Bukov2015,Hassan2016}.
There are several semi-classical non-perturbative methods to describe
these phenomena, both analytical and numerical, and some experimental
observations have been explained successfully \cite{Keldysh1965,Faisal1973,Reiss1980,Gallagher1988,Krause1992,Corkum1993,Birula1994,Lewenstein1994,Bao1996,Sali2001,Brabec2008,Le2009,Nepstad2010,Becker2012,Klaiber2013}.
Keldysh proposed the first non-perturbative theory describing the
ionization process in a strong laser field \cite{Keldysh1965}. It
was then developed by Faisal and Reiss in the $S$ matrix form known
as KFR theory \cite{Faisal1973,Reiss1980}. This theory was further
developed into the rescattering methods \cite{Bao1996,Le2009}. Simple
man model is a classical model which gives an intuitive perspective
to understand the ionization \cite{Gallagher1988}. In semi-classical
framework, the well known three-step model developed by Corkum gives
a basic tool to study the strong field physics \cite{Corkum1993,Corkum11}.
There are also some models developed based on quantum path-integral
theory which output some detailed results about the transient paths
\cite{Lewenstein1994,Sali2001}. Recently, relativistic corrections
for strong field ionization was taken into consideration by Klaiber
\emph{et al.} \cite{Klaiber2013}. Different from analytical models, directly
solving the time-dependent Schr\"{o}dinger equation (TDSE) is always
a crucially important method for photon-matter interactions. By numerical
simulations, Krause \emph{et al.} obtained the cut-off law of HHG \cite{Krause1992}.
Nepstad \emph{et al.} numerically studied the two-photon ionization of helium
\cite{Nepstad2010}. Birkeland \emph{et al.} numerically studied
the stabilization of helium in intense XUV laser fields \cite{Birkeland2010}.
Based on simulation results, much information about atom and molecular
in strong field can be obtained \cite{Madsen2012,Argenti2013}. Recently,
multi-configuration methods were introduced into TDSE simulations
to treat many-electron dynamics \cite{Hochstuhl2014,Miyagi2014,Bauch2014}.
Because of the multi-scale nature of the process and the large number
of degrees of freedom involved, most of the theoretical and numerical
methods adopted various types of approximations for the Schr\"{o}dinger
equation, such as the strong field approximation \cite{Lewenstein1994},
the finite energy levels approximation \cite{Wu1995}, the independent
external field approximation \cite{Popruzhenko2014} and the single-active
electron approximation \cite{Krause1992}, which often have limited
applicability \cite{Krausz2009,Piazza2012}. To understand the
intrinsic multi-scale, complex photon-matter interactions described
by the Schr\"{o}dinger-Maxwell (SM) equations, a comprehensive model
needs to be developed by numerically solving the SM equations.
For the Maxwell equations, many numerical methods, such as the finite-difference
time-domain method has been developed \cite{Yee1966,Mur1981,Berenger1994}.
For the Schr\"{o}dinger equation, unitary algorithm has been proposed
\cite{Wu1995,Blanes2006,Kormann2008,Shen2013}. Recently, a class
of structure-preserving geometric algorithms have been developed for
simulating classical particle-field interactions described by the
Vlasov-Maxwell (VM) equations. Specifically, spatially discretized
canonical and non-canonical Poisson brackets for the VM systems
and associated symplectic time integration algorithms have been discovered
and applied \cite{Squire12,JXiao2013,JXiao2015,Xiao15-112504,He15-124503,QHong2016,He16-092108,Xiao-M2016,Morrison2017,Michael-ar}.
In this paper, we develop a new structure-preserving geometric algorithm
for numerically solving the SM equations. For this purpose, the canonical
symplectic structure of the SM equations is first established. Note
that the canonical symplectic structure presented here is more transparent
than the version given in Refs. \cite{Masiello2004,Masiello2005},
which involves complications due to a different choice of gauge. The
structure-preserving geometric algorithm is obtained by discretizing
the canonical Poisson bracket. The wavefunctions and gauge field are
discretized point-wise on an Eulerian spatial grid, and the Hamiltonian
functional is expressed as a function of the discretized fields. This
procedure generates a finite-dimensional Hamiltonian system with a
canonical symplectic structure. The number of degrees of freedom of the
discrete system for a single-electron atom is $4M$, where
$M$ is the number of grid points.
electron atoms, the discrete system has $(N+3)M$ degrees of freedom.
A symplectic splitting algorithm is developed for semi-explicit time advance.
The method inherits all the good numerical features of canonical symplectic
algorithms, such as the long-term bound on energy-momentum error.
We also design the algorithm such that it preserves unitary structure
of the Schr\"{o}dinger equation. These desirable features make the
algorithm a powerful tool in the study of photon-matter interactions
using the semi-classical model. We note the algorithm developed here
for the SM equations is inspired by the recent advances in the structure-preserving
geometric algorithms for classical particle-field interactions \cite{Squire12,JXiao2013,JXiao2015,Xiao15-112504,He15-124503,QHong2016,He16-092108,Xiao-M2016,Morrison2017,Michael-ar},
especially the canonical particle-in-cell method \cite{QHong2016}.
\section{Canonical Symplectic Structure of Schr\"{o}dinger-Maxwell Systems}
\label{sec:2}
In most strong field experiments, the atomic ensemble is weakly coupled,
which means that electrons are localized around the nuclei and there
is no direct coupling between different atoms. Electrons belong to
different atoms are well resolved. In a single-active electron atomic
ensemble, every electron can be labeled by a local atom potential.
The wavefunction is a direct product of the resolved single electron
wavefunctions. As the basic semi-classical model for photon-matter
interactions between atomic ensemble and photons, the SM equations
are
\begin{eqnarray}
i\frac{\partial}{\partial{t}}\psi_{i} & = & \hat{H}_{i}\psi_{i},\label{eq:1}\\
\partial_{\mu}F^{\mu\nu} & = & \sum_{i}\frac{4\pi}{c}J_{i}^{\nu},\label{eq:2}
\end{eqnarray}
where $\hat{H}_{i}=\frac{(\bm{P}-\bm{A})^{2}}{2}+V_{i}$ is the Hamiltonian
operator, $\bm{P}=-i\bigtriangledown$ is the canonical momentum,
$V_{i}$ is local atomic potential of the $i$-th atom, $F^{\mu\nu}=c(\partial^{\mu}A^{\nu}-\partial^{\nu}A^{\mu})$
is the electromagnetic tensor, and $c$ is the light speed in atomic
units. The subscript $i$ is electron label. The atomic potential
can assume, for example, the form of $V_{i}(\bm{x})=-\frac{Z}{|\bm{x}-\bm{x}_{i}|}$
with $Z$ being atomic number and $\bm{x}_{i}$ the position of the
atom. With metric signature $\left(+,-,-,-\right)$, in Eq.\,\eqref{eq:2},
$J_{i}^{\mu}=i\left[\psi_{i}^{*}D^{\mu}\psi_{i}-\psi_{i}(D^{\mu}\psi_{i})^{*}\right]$
is the conserved Noether current, and $D_{\mu}=\partial_{\mu}+iA_{\mu}$
is the gauge-covariant derivative. In the nonrelativistic limit, the
density $J_{i}^{0}$ reduces to $\psi_{i}^{*}\psi_{i}$, while the
current density $J_{i}^{k}$ reduces to $\frac{i}{2}\left[\psi_{i}^{*}D^{k}\psi_{i}-\psi_{i}(D^{k}\psi_{i})^{*}\right]$,
which closes the SM system. The temporal gauge
$\phi=0$ has been adopted explicitly.
The complex wavefunctions and Hamiltonian operators can be decomposed
into real and imaginary parts,
\begin{gather}
\psi_{i}=\frac{1}{\sqrt{2}}\left(\psi_{iR}+i\psi_{iI}\right),\label{eq:3}\\
\hat{H}_{i}=\hat{H}_{iR}+i\hat{H}_{iI},\\
\hat{H}_{iR}=\frac{1}{2}\left(-\bigtriangledown^{2}+\bm{A}^{2}\right)+V_{i},\thinspace\thinspace\thinspace\hat{H}_{iI}=\frac{1}{2}\bigtriangledown\cdot\bm{A}+\bm{A}\cdot\bigtriangledown.
\end{gather}
In terms of the real and imaginary components, the Schr\"{o}dinger
equation is
\begin{eqnarray}
\frac{\partial}{\partial{t}}\left(\begin{array}{c}
\psi_{iR}\\
\psi_{iI}
\end{array}\right)=\left(\begin{array}{cc}
\hat{H}_{iI} & \hat{H}_{iR}\\
-\hat{H}_{iR} & \hat{H}_{iI}
\end{array}\right)\left(\begin{array}{c}
\psi_{iR}\\
\psi_{iI}
\end{array}\right).\label{eq:7}
\end{eqnarray}
The SM system admits an infinite dimensional canonical symplectic
structure with following Poisson structure and Hamiltonian functional,
\begin{eqnarray}
\left\{ F,G\right\} & \!=\! & \int\left[\sum_{i}\left(\frac{\delta{F}}{\delta\psi_{iR}}\frac{\delta{G}}{\delta\psi_{iI}}\!-\!\frac{\delta{G}}{\delta\psi_{iR}}\frac{\delta{F}}{\delta\psi_{iI}}\right)\!+\!\frac{\delta{F}}{\delta\bm{A}}\frac{\delta{G}}{\delta\bm{Y}}\!-\!\frac{\delta{G}}{\delta\bm{A}}\frac{\delta{F}}{\delta\bm{Y}}\right]\mathrm{d}^{3}x,\label{eq:10}\\
H\left(\psi_{iR},\psi_{iI},\bm{A},\bm{Y}\right) & \!=\! & \frac{1}{2}\int\left[\sum_{i}\left(\psi_{iR}\hat{H}_{iR}\psi_{iR}\!+\!\psi_{iI}\hat{H}_{iR}\psi_{iI}\right.\right.\nonumber \\
& \, & \left.\left.\!-\!\psi_{iR}\hat{H}_{iI}\psi_{iI}\!+\!\psi_{iI}\hat{H}_{iI}\psi_{iR}\right)\!+\!4\pi\bm{Y}^{2}\!+\!\frac{1}{4\pi}\left(c\bigtriangledown\!\times\!\bm{A}\right)^{2}\right]\mathrm{d}^{3}x.\label{eq:11}
\end{eqnarray}
Here, $\bm{Y}=\dot{\bm{A}}/4\pi$ and $F,$ $G,$ and $H$ are functionals
of $\left(\psi_{iR},\psi_{iI},\bm{A},\bm{Y}\right).$ The expression
$\delta F/\delta\psi_{iR}$ is the variational derivative of the functional
$F$ with respect to $\psi_{iR}$, and other terms, e.g., $\delta F/\delta\psi_{iI}$
and $\delta F/\delta\boldsymbol{A}$, have similar meanings. The Hamiltonian
functional $H\left(\psi_{iR},\psi_{iI},\bm{A},\bm{Y}\right)$ in Eq.\,\eqref{eq:11}
is equivalent to the following expression in terms of the complex
wavefunctions,
\begin{eqnarray}
H\left(\psi_{i}^{*},\psi_{i},\bm{A},\bm{Y}\right) & = & H_{qm}+H_{em},\label{eq:8}\\
H_{qm} & = & \int\sum_{i}\psi_{i}^{*}\hat{H}_{i}\psi_{i}\mathrm{d}^{3}x,\\
H_{em} & = & \frac{1}{2}\int\left[4\pi\bm{Y}^{2}+\frac{1}{4\pi}\left(c\bigtriangledown\times\bm{A}\right)^{2}\right]\mathrm{d}^{3}x.
\end{eqnarray}
Apparently, $H_{em}$ is the Hamiltonian for the electromagnetic field,
and $H_{qm}$ is the Hamiltonian for the wavefunctions. In this infinite
dimensional Hamiltonian system, the canonical pairs are $\left(\psi_{iR},\psi_{iI}\right)$
and $\left(\bm{A},\bm{Y}\right)$ at each spatial location. Their
canonical equations are
\begin{eqnarray}
\dot{\psi_{iR}} & = & \left\{ \psi_{iR},H\right\} =\!\frac{1}{2}\bigtriangledown\cdot\bm{A}\psi_{iR}\!+\!\bm{A}\cdot\bigtriangledown\psi_{iR}\!+\!\frac{1}{2}\left(\!-\!\bigtriangledown^{2}\!+\!\bm{A}^{2}\right)\psi_{iI}\!+\!V_{i}\psi_{iI}\!,\label{eq:13}\\
\dot{\bm{A}} & = & \left\{ \bm{A},H\right\} =4\pi\bm{Y},\label{eq:14}\\
\dot{\psi_{iI}} & = & \left\{ \psi_{iI},H\right\} =\frac{1}{2}\left(\bigtriangledown^{2}\!-\!\bm{A}^{2}\right)\psi_{iR}\!-\!V_{i}\psi_{iR}\!+\!\frac{1}{2}\bigtriangledown\cdot\bm{A}\psi_{iI}\!+\!\bm{A}\cdot\bigtriangledown\psi_{iI},\label{eq:15}\\
\dot{\bm{Y}} & = & \left\{ \bm{Y},H\right\} =\bm{\mathcal{J}}\!-\!\frac{c^{2}}{4\pi}\bigtriangledown\times\bigtriangledown\times\bm{A},\label{eq:16}
\end{eqnarray}
where $\bm{\mathcal{J}}=\frac{1}{2}\sum_{i}[\psi_{iR}\bigtriangledown\psi_{iI}-\psi_{iI}\bigtriangledown\psi_{iR}-(\psi_{iR}^{2}+\psi_{iI}^{2})\bm{A}]$
is the current density. In deriving Eqs.\,\eqref{eq:13}-\eqref{eq:16},
use is made of the following expression of the total variation of
Hamiltonian,
\begin{eqnarray}
\delta H & = & \frac{1}{2}\int\sum_{i}[\left(-\bigtriangledown^{2}\psi_{iR}+\bm{A}^{2}\psi_{iR}+2V_{i}\psi_{iR}-2\bm{A}\cdot\bigtriangledown\psi_{iI}-\bigtriangledown\cdot\bm{A}\psi_{iI}\right)\delta\psi_{iR}\nonumber \\
& \, & +\left(-\bigtriangledown^{2}\psi_{iI}+\bm{A}^{2}\psi_{iI}+2V_{i}\psi_{iI}+2\bm{A}\cdot\bigtriangledown\psi_{iR}+\bigtriangledown\cdot\bm{A}\psi_{iR}\right)\delta\psi_{iI}\nonumber \\
& \, & +\left(\psi_{iR}^{2}\bm{A}+\psi_{iI}^{2}\bm{A}+\psi_{iI}\bigtriangledown\psi_{iR}-\psi_{iR}\bigtriangledown\psi_{iI}\right)\cdot\delta\bm{A}]\mathrm{d}^{3}x\nonumber \\
& \, & +\int[\frac{c^{2}}{4\pi}\bigtriangledown\times\bigtriangledown\times\bm{A}\cdot\delta\bm{A}+4\pi\bm{Y}\cdot\delta\bm{Y}]\mathrm{d}^{3}x,\label{eq:12}
\end{eqnarray}
where integration by parts have been applied with fixed fields on
the boundary.
\section{Structure-preserving Geometric Algorithms for Schr\"{o}dinger-Maxwell
Systems}
\label{sec:3}
We now present the structure-preserving geometric algorithms for numerically
solving Eqs.\,\eqref{eq:13}-\eqref{eq:16}. We discretize the fields
$\left(\psi_{iR},\psi_{iI},\bm{A},\bm{Y}\right)$ on an Eulerian spatial
grid as
\begin{eqnarray}
\bm{A}\left(\bm{x},t\right)=\sum_{J=1}^{M}\bm{A}_{J}\left(t\right)\theta\left(\bm{x}-\bm{x}_{J}\right) & ,\,\,\, & \bm{Y}\left(\bm{x},t\right)=\sum_{J=1}^{M}\bm{Y}_{J}\left(t\right)\theta\left(\bm{x}-\bm{x}_{J}\right),\label{eq:17}\\
\psi_{iR}\left(\bm{x},t\right)=\sum_{J=1}^{M}\psi_{iRJ}\left(t\right)\theta\left(\bm{x}-\bm{x}_{J}\right) & ,\,\,\, & \psi_{iI}\left(\bm{x},t\right)=\sum_{J=1}^{M}\psi_{iIJ}\left(t\right)\theta\left(\bm{x}-\bm{x}_{J}\right),\label{eq:18}
\end{eqnarray}
where the distribution function $\theta\left(\bm{x}-\bm{x}_{J}\right)$
is defined as
\begin{eqnarray}
\theta\left(\bm{x}-\bm{x}_{J}\right)=\left\{ \begin{array}{cc}
1, & |x-x_{J}|<\frac{\bigtriangleup{x}}{2},|y-y_{J}|<\frac{\bigtriangleup{y}}{2},|z-z_{J}|<\frac{\bigtriangleup{z}}{2}\\
0, & elsewhere
\end{array}\right..\label{eq:19}
\end{eqnarray}
Then, the variational derivative with respect to $\bm{A}$ is
\begin{eqnarray}
\frac{\delta{F}}{\delta\bm{A}}=\sum_{J=1}^{M}\frac{\delta\bm{A}_{J}}{\delta\bm{A}}\frac{\partial{F}}{\partial\bm{A}_{J}}=\sum_{J=1}^{M}\frac{1}{\bigtriangleup{V}}\theta\left(\bm{x}-\bm{x}_{J}\right)\frac{\partial{F}}{\partial\bm{A}_{J}},\label{eq:20}
\end{eqnarray}
and the variational derivatives with respect to $\bm{Y}$, $\psi_{iR}$
and $\psi_{iI}$ have similar expressions. Here, $\bigtriangleup V=\bigtriangleup x\bigtriangleup y\bigtriangleup z$
is the volume of each cell. The canonical Poisson bracket is discretized
as
\begin{eqnarray}
\left\{ F,G\right\} _{d}\!=\!\sum_{J=1}^{M}\left[\sum_{i}\left(\frac{\partial{F}}{\partial\psi_{iRJ}}\frac{\partial{G}}{\partial\psi_{iIJ}}\!-\!\frac{\partial{G}}{\partial\psi_{iRJ}}\frac{\partial{F}}{\partial\psi_{iIJ}}\right)\!+\!\frac{\partial{F}}{\partial\bm{A}_{J}}\frac{\partial{G}}{\partial\bm{Y}_{J}}\!-\!\frac{\partial{G}}{\partial\bm{A}_{J}}\frac{\partial{F}}{\partial\bm{Y}_{J}}\right]\frac{1}{\bigtriangleup{V}}.\label{eq:21}
\end{eqnarray}
The Hamiltonian functional is discretized as
\begin{gather}
H_{d}\left(\psi_{iRJ},\psi_{iIJ},\bm{A}_{J},\bm{Y}_{J}\right)=H_{dem}+H_{dqm},\label{eq:Hds-1}\\
H_{dem}=\frac{1}{2}\sum_{J=1}^{M}\left[4\pi\bm{Y}_{J}^{2}\!+\!\frac{1}{4\pi}\left(c\bigtriangledown_{d}\!\times\!\bm{A}\right)_{J}^{2}\right]\bigtriangleup V,\label{eq:Hdem-1}\\
H_{dqm}=\frac{1}{2}\sum_{J=1}^{M}\sum_{i}\left[-\frac{1}{2}\psi_{iRJ}\left(\bigtriangledown_{d}^{2}\psi_{iR}\right)_{J}\!-\!\frac{1}{2}\psi_{iIJ}\left(\bigtriangledown_{d}^{2}\psi_{iI}\right)_{J}\!-\!\psi_{iRJ}\bm{A}_{J}\cdot\left(\bigtriangledown_{d}\psi_{iI}\right)_{J}\right.\nonumber \\
\left.+\psi_{iIJ}\bm{A}_{J}\cdot\left(\bigtriangledown_{d}\psi_{iR}\right)_{J}\!+\!\left(\frac{1}{2}\bm{A}_{J}^{2}\!+\!V_{iJ}\right)\left(\psi_{iRJ}^{2}\!+\!\psi_{iIJ}^{2}\right)\right]\bigtriangleup V,\label{eq:Hdqm-1}
\end{gather}
where $V_{iJ}=V_{i}\left(\bm{x}_{J}\right)$, and the discrete spatial
operators are defined as
\begin{eqnarray}
\left(\bigtriangledown_{d}\psi\right)_{J}=\left(\begin{array}{c}
\frac{\psi_{i,j,k}-\psi_{i-1,j,k}}{\bigtriangleup{x}}\\
\frac{\psi_{i,j,k}-\psi_{i,j-1,k}}{\bigtriangleup{y}}\\
\frac{\psi_{i,j,k}-\psi_{i,j,k-1}}{\bigtriangleup{z}}
\end{array}\right),\label{eq:23}
\end{eqnarray}
\begin{eqnarray}
\left(\bigtriangledown_{d}\cdot\bm{A}\right)_{J}=\frac{Ax_{i,j,k}-Ax_{i-1,j,k}}{\bigtriangleup{x}}+\frac{Ay_{i,j,k}-Ay_{i,j-1,k}}{\bigtriangleup{y}}+\frac{Az_{i,j,k}-Az_{i,j,k-1}}{\bigtriangleup{z}},\label{eq:24}
\end{eqnarray}
\begin{eqnarray}
\left(\bigtriangledown_{d}\times\bm{A}\right)_{J}=\left(\begin{array}{c}
\frac{Az_{i,j,k}-Az_{i,j-1,k}}{\bigtriangleup{y}}-\frac{Ay_{i,j,k}-Ay_{i,j,k-1}}{\bigtriangleup{z}}\\
\frac{Ax_{i,j,k}-Ax_{i,j,k-1}}{\bigtriangleup{z}}-\frac{Az_{i,j,k}-Az_{i-1,j,k}}{\bigtriangleup{x}}\\
\frac{Ay_{i,j,k}-Ay_{i-1,j,k}}{\bigtriangleup{x}}-\frac{Ax_{i,j,k}-Ax_{i,j-1,k}}{\bigtriangleup{y}}
\end{array}\right),\label{eq:25}
\end{eqnarray}
\begin{eqnarray}
\left(\bigtriangledown_{d}^{2}\psi\right)_{J} & = & \frac{\psi_{i,j,k}\!-\!2\psi_{i-1,j,k}\!+\!\psi_{i-2,j,k}}{\bigtriangleup{x}^{2}}\!+\!\frac{\psi_{i,j,k}\!-\!2\psi_{i,j-1,k}\!+\!\psi_{i,j-2,k}}{\bigtriangleup{y}^{2}}\nonumber \\
& & +\frac{\psi_{i,j,k}\!-\!2\psi_{i,j,k-1}\!+\!\psi_{i,j,k-2}}{\bigtriangleup{z}^{2}}.\label{eq:26}
\end{eqnarray}
Here, the subscript $J$ denotes the grid position $(i,j,k)$. The discrete
spatial operators defined here use first-order backward-difference
schemes; higher-order spatial schemes can be developed as well.
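For concreteness, the following minimal NumPy sketch (ours, not the paper's
implementation) realizes the operators \eqref{eq:23}--\eqref{eq:26} on a
periodic grid; the names \texttt{grad\_b}, \texttt{div\_b}, \texttt{curl\_b}
and \texttt{laplacian\_b} are hypothetical labels.
\begin{verbatim}
import numpy as np

# Backward differences on a periodic grid; scalar fields are arrays of
# shape (nx, ny, nz), and np.roll(f, 1, ax) yields f[i-1] at index i.
def grad_b(psi, dx, dy, dz):
    # Discrete gradient, eq. (23).
    return np.stack([(psi - np.roll(psi, 1, 0)) / dx,
                     (psi - np.roll(psi, 1, 1)) / dy,
                     (psi - np.roll(psi, 1, 2)) / dz])

def div_b(A, dx, dy, dz):
    # Discrete divergence, eq. (24); A has shape (3, nx, ny, nz).
    return ((A[0] - np.roll(A[0], 1, 0)) / dx
            + (A[1] - np.roll(A[1], 1, 1)) / dy
            + (A[2] - np.roll(A[2], 1, 2)) / dz)

def curl_b(A, dx, dy, dz):
    # Discrete curl, eq. (25).
    Dx = lambda f: (f - np.roll(f, 1, 0)) / dx
    Dy = lambda f: (f - np.roll(f, 1, 1)) / dy
    Dz = lambda f: (f - np.roll(f, 1, 2)) / dz
    return np.stack([Dy(A[2]) - Dz(A[1]),
                     Dz(A[0]) - Dx(A[2]),
                     Dx(A[1]) - Dy(A[0])])

def laplacian_b(psi, dx, dy, dz):
    # Discrete Laplacian, eq. (26): the backward difference applied twice.
    d2 = lambda f, h, ax: (f - 2 * np.roll(f, 1, ax)
                           + np.roll(f, 2, ax)) / h**2
    return d2(psi, dx, 0) + d2(psi, dy, 1) + d2(psi, dz, 2)
\end{verbatim}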
The discrete canonical equations are
\begin{eqnarray}
\dot{\psi_{iRJ}} & = & \left\{ \psi_{iRJ},H_{d}\right\} _{d}\nonumber \\
& = & \frac{1}{2}\bm{A}_{J}\cdot\left(\bigtriangledown_{d}\psi_{iR}\right)_{J}\!-\!\frac{1}{2}\sum_{K=1}^{M}\psi_{iRK}\bm{A}_{K}\cdot\frac{\partial}{\partial\psi_{iIJ}}\left(\bigtriangledown_{d}\psi_{iI}\right)_{K}\nonumber \\
& & -\frac{1}{4}\left(\bigtriangledown_{d}^{2}\psi_{iI}\right)_{J}\!-\!\frac{1}{4}\sum_{K=1}^{M}\psi_{iIK}\frac{\partial}{\partial\psi_{iIJ}}\left(\bigtriangledown_{d}^{2}\psi_{iI}\right)_{K}\!+\!\left(\frac{1}{2}\bm{A}_{J}^{2}\!+\!V_{iJ}\right)\psi_{iIJ},\label{eq:27}\\
\dot{\bm{A}_{J}} & = & \left\{ \bm{A}_{J},H_{d}\right\} _{d}=4\pi\bm{Y}_{J},\label{eq:28}\\
\dot{\psi_{iIJ}} & = & \left\{ \psi_{iIJ},H_{d}\right\} _{d}\nonumber \\
& = & \frac{1}{4}\left(\bigtriangledown_{d}^{2}\psi_{iR}\right)_{J}\!+\!\frac{1}{4}\sum_{K=1}^{M}\psi_{iRK}\frac{\partial}{\partial\psi_{iRJ}}\left(\bigtriangledown_{d}^{2}\psi_{iR}\right)_{K}\!-\!\left(\frac{1}{2}\bm{A}_{J}^{2}\!+\!V_{iJ}\right)\psi_{iRJ}\nonumber \\
& & +\frac{1}{2}\bm{A}_{J}\cdot\left(\bigtriangledown_{d}\psi_{iI}\right)_{J}\!-\!\frac{1}{2}\sum_{K=1}^{M}\psi_{iIK}\bm{A}_{K}\cdot\frac{\partial}{\partial\psi_{iRJ}}\left(\bigtriangledown_{d}\psi_{iR}\right)_{K},\label{eq:29}\\
\dot{\bm{Y}_{J}} & = & \left\{ \bm{Y}_{J},H_{d}\right\} _{d}=\bm{\mathcal{J}}_{J}\!-\!\frac{c^{2}}{4\pi}\left(\bigtriangledown_{d}^{T}\times\bigtriangledown_{d}\times\bm{A}\right)_{J},\label{eq:30}
\end{eqnarray}
where $\bm{\mathcal{J}}_{J}=\frac{1}{2}\sum_{i}[\psi_{iRJ}\left(\bigtriangledown_{d}\psi_{iI}\right)_{J}-\psi_{iIJ}\left(\bigtriangledown_{d}\psi_{iR}\right)_{J}-\bm{A}_{J}\left(\psi_{iRJ}^{2}+\psi_{iIJ}^{2}\right)]$
is the discrete current density. The last term in Eq.~\eqref{eq:30}
is defined as
\begin{eqnarray}
\left(\bigtriangledown_{d}^{T}\times\bigtriangledown_{d}\times\bm{A}\right)_{J}=\frac{1}{2}\frac{\partial}{\partial\bm{A}_{J}}\left[\sum_{K=1}^{M}\left(\bigtriangledown_{d}\times\bm{A}\right)_{K}^{2}\right],\label{eq:yinyong}
\end{eqnarray}
which indicates that the right-hand side of Eq.~\eqref{eq:yinyong}
can be viewed as the discretized $\bigtriangledown\times\bigtriangledown\times\bm{A}$
for a well-chosen discrete curl operator $\bigtriangledown_{d}\times$.
We will use the following symplectic splitting algorithms to numerically
solve this set of discrete canonical Hamiltonian equations. In Eq.\,\eqref{eq:Hds-1},
$H_{d}$ is naturally split into two parts, each of which corresponds
to a subsystem that will be solved independently. The solution maps
of the subsystems will be combined in various ways to give the desired
algorithms for the full system. For the subsystem determined by $H_{dqm},$
the dynamic equations are
\begin{eqnarray}
\dot{\psi_{iRJ}} & = & \left\{ \psi_{iRJ},H_{dqm}\right\} _{d}\nonumber \\
& = & \frac{1}{2}\bm{A}_{J}\cdot\left(\bigtriangledown_{d}\psi_{iR}\right)_{J}\!-\!\frac{1}{2}\sum_{K=1}^{M}\psi_{iRK}\bm{A}_{K}\cdot\frac{\partial}{\partial\psi_{iIJ}}\left(\bigtriangledown_{d}\psi_{iI}\right)_{K}\nonumber \\
& & -\frac{1}{4}\left(\bigtriangledown_{d}^{2}\psi_{iI}\right)_{J}\!-\!\frac{1}{4}\sum_{K=1}^{M}\psi_{iIK}\frac{\partial}{\partial\psi_{iIJ}}\left(\bigtriangledown_{d}^{2}\psi_{iI}\right)_{K}\!+\!\left(\frac{1}{2}\bm{A}_{J}^{2}\!+\!V_{iJ}\right)\psi_{iIJ},\label{eq:psiRJ}\\
\dot{\psi_{iIJ}} & = & \left\{ \psi_{iIJ},H_{dqm}\right\} _{d}\nonumber \\
& = & \frac{1}{4}\left(\bigtriangledown_{d}^{2}\psi_{iR}\right)_{J}\!+\!\frac{1}{4}\sum_{K=1}^{M}\psi_{iRK}\frac{\partial}{\partial\psi_{iRJ}}\left(\bigtriangledown_{d}^{2}\psi_{iR}\right)_{K}\!-\!\left(\frac{1}{2}\bm{A}_{J}^{2}\!+\!V_{iJ}\right)\psi_{iRJ}\nonumber \\
& & +\frac{1}{2}\bm{A}_{J}\cdot\left(\bigtriangledown_{d}\psi_{iI}\right)_{J}\!-\!\frac{1}{2}\sum_{K=1}^{M}\psi_{iIK}\bm{A}_{K}\cdot\frac{\partial}{\partial\psi_{iRJ}}\left(\bigtriangledown_{d}\psi_{iR}\right)_{K},\label{eq:psiIJ}\\
\dot{\bm{A}_{J}} & = & \left\{ \bm{A}_{J},H_{dqm}\right\} _{d}=0,\\
\dot{\bm{Y}_{J}} & = & \left\{ \bm{Y}_{J},H_{dqm}\right\} _{d}=\bm{\mathcal{J}}_{J}.\!\label{eq:30-1}
\end{eqnarray}
Equations \eqref{eq:psiRJ} and \eqref{eq:psiIJ} can be written as
\begin{gather}
\frac{d}{dt}\left(\begin{array}{c}
\psi_{iR}\\
\psi_{iI}
\end{array}\right)=\Omega(\boldsymbol{A})\left(\begin{array}{c}
\psi_{iR}\\
\psi_{iI}
\end{array}\right),
\end{gather}
where $\Omega(\boldsymbol{A})$ is a skew-symmetric matrix. It is easy
to show that $\Omega(\boldsymbol{A})$ is also an infinitesimal generator
of the symplectic group. To preserve the unitary property of $\psi_{i}$,
we adopt the symplectic mid-point method for this subsystem, and the
one-step map $M_{qm}:(\psi_{i},\boldsymbol{A},\boldsymbol{Y})^{n}\longmapsto(\psi_{i},\boldsymbol{A},\boldsymbol{Y})^{n+1}$
is given by
\begin{gather}
\left(\begin{array}{c}
\psi_{iR}\\
\psi_{iI}
\end{array}\right)^{n+1}=\left(\begin{array}{c}
\psi_{iR}\\
\psi_{iI}
\end{array}\right)^{n}+\frac{\Delta t}{2}\Omega(\boldsymbol{A}^{n})\left[\left(\begin{array}{c}
\psi_{iR}\\
\psi_{iI}
\end{array}\right)^{n}+\left(\begin{array}{c}
\psi_{iR}\\
\psi_{iI}
\end{array}\right)^{n+1}\right],\label{eq:psin+1}\\
\boldsymbol{A}^{n+1}=\boldsymbol{A}^{n},\\
\boldsymbol{Y}^{n+1}=\boldsymbol{Y}^{n}+\Delta{t}\bm{\mathcal{J}}\left(\frac{\psi_{iR}^{n}+\psi_{iR}^{n+1}}{2},\frac{\psi_{iI}^{n}+\psi_{iI}^{n+1}}{2}\right).
\end{gather}
Equation \eqref{eq:psin+1} is a linear equation in terms of $(\psi_{iR}^{n+1},\psi_{iI}^{n+1})$.
Its solution is
\begin{gather}
\left(\begin{array}{c}
\psi_{iR}\\
\psi_{iI}
\end{array}\right)^{n+1}=Cay(\Omega(\boldsymbol{A}^{n})\frac{\Delta t}{2})\left(\begin{array}{c}
\psi_{iR}\\
\psi_{iI}
\end{array}\right)^{n},\\
Cay(\Omega(\boldsymbol{A}^{n})\frac{\Delta t}{2})=\left(1-\Omega(\boldsymbol{A}^{n})\frac{\Delta t}{2}\right)^{-1}\left(1+\Omega(\boldsymbol{A}^{n})\frac{\Delta t}{2}\right),
\end{gather}
where $Cay(S)$ denotes the Cayley transformation of matrix $S.$
It is well known that $Cay(S)$ is a symplectic rotation matrix when
$S$ is in the Lie algebra of the symplectic rotation group. Thus, the
one-step map from $\psi_{i}^{n}=\psi_{iR}^{n}+i\psi_{iI}^{n}$ to
$\psi_{i}^{n+1}=\psi_{iR}^{n+1}+i\psi_{iI}^{n+1}$ induced by $M_{qm}$
for the subsystem $H_{dqm}$ is unitary. Since $\Omega(\boldsymbol{A}^{n})\Delta t/2$
is a sparse matrix, there exist efficient algorithms to solve Eq.\,\eqref{eq:psin+1}
or to calculate $Cay(\Omega(\boldsymbol{A}^{n})\Delta t/2)$. Once
$\psi_{i}^{n+1}$ is known, $\boldsymbol{Y}^{n+1}$ can be calculated
explicitly. Thus, $M_{qm}:(\psi_{i},\boldsymbol{A},\boldsymbol{Y})^{n}\longmapsto(\psi_{i},\boldsymbol{A},\boldsymbol{Y})^{n+1}$
is a second-order symplectic method, which also preserves the unitarity
of $\psi_{i}$.
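To make the update concrete, here is a minimal sketch of this one-step map
acting on the stacked vector of real and imaginary parts, using a dense solve
for readability (the name \texttt{cayley\_step} and the dense representation
of $\Omega$ are our simplifying assumptions):
\begin{verbatim}
import numpy as np

def cayley_step(psi_vec, Omega, dt):
    # One midpoint step psi^{n+1} = Cay(Omega*dt/2) psi^n.  Solving
    # (I - dt/2 Omega) psi^{n+1} = (I + dt/2 Omega) psi^n is algebraically
    # identical to applying the Cayley transform.
    I = np.eye(Omega.shape[0])
    rhs = (I + 0.5 * dt * Omega) @ psi_vec
    return np.linalg.solve(I - 0.5 * dt * Omega, rhs)

# Unitarity check with a random skew-symmetric generator:
rng = np.random.default_rng(0)
S = rng.standard_normal((10, 10))
Omega = S - S.T                        # skew-symmetric
psi = rng.standard_normal(10)
psi1 = cayley_step(psi, Omega, dt=0.01)
assert np.isclose(psi1 @ psi1, psi @ psi)   # norm is preserved
\end{verbatim}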
For the subsystem $H_{dem},$ the dynamic equations are
\begin{eqnarray}
\dot{\psi_{iRJ}} & = & \left\{ \psi_{iRJ},H_{dem}\right\} _{d}=0,\label{eq:27-1-1}\\
\dot{\psi_{iIJ}} & = & \left\{ \psi_{iIJ},H_{dem}\right\} _{d}=0,\label{eq:29-1-1}\\
\dot{\bm{A}_{J}} & = & \left\{ \bm{A}_{J},H_{dem}\right\} _{d}=4\pi\bm{Y}_{J},\label{eq:AJ}\\
\dot{\bm{Y}_{J}} & = & \left\{ \bm{Y}_{J},H_{dem}\right\} _{d}=-\frac{c^{2}}{4\pi}\left(\bigtriangledown_{d}^{T}\times\bigtriangledown_{d}\times\bm{A}\right)_{J}.\!\label{eq:YJ}
\end{eqnarray}
Equations \eqref{eq:AJ} and \eqref{eq:YJ} are linear in terms of
$\boldsymbol{A}$ and $\boldsymbol{Y}$, and can be written as
\begin{gather}
\frac{d}{dt}\left(\begin{array}{c}
\boldsymbol{A}\\
\boldsymbol{Y}
\end{array}\right)=Q\left(\begin{array}{c}
\boldsymbol{A}\\
\boldsymbol{Y}
\end{array}\right),
\end{gather}
where $Q$ is a constant matrix. We also use the second-order symplectic
mid-point rule for this subsystem, and the one-step map $M_{em}:(\psi_{i},\boldsymbol{A},\boldsymbol{Y})^{n}\longmapsto(\psi_{i},\boldsymbol{A},\boldsymbol{Y})^{n+1}$
is given explicitly by
\begin{gather}
\left(\begin{array}{c}
\psi_{iR}\\
\psi_{iI}
\end{array}\right)^{n+1}=\left(\begin{array}{c}
\psi_{iR}\\
\psi_{iI}
\end{array}\right)^{n},\\
\left(\begin{array}{c}
\boldsymbol{A}\\
\boldsymbol{Y}
\end{array}\right)^{n+1}=Cay\left(Q\frac{\Delta t}{2}\right)\left(\begin{array}{c}
\boldsymbol{A}\\
\boldsymbol{Y}
\end{array}\right)^{n}.
\end{gather}
Since the map leaves $\psi_{i}$ unchanged, it is trivially unitary.
Given the second-order symmetric symplectic one-step maps $M_{em}$ and $M_{qm}$
for the subsystems $H_{dem}$ and $H_{dqm}$, respectively, various
symplectic algorithms for the system can be constructed by composition.
For example, a first-order algorithm for $H_{d}$ is
\begin{equation}
M(\Delta t)=M_{em}(\Delta t)\circ M_{qm}(\Delta t).
\end{equation}
A second-order symplectic symmetric method can be constructed by the
following symmetric composition,
\begin{equation}
M^{2}(\Delta t)=M_{em}(\Delta t/2)\circ M_{qm}(\Delta t)\circ M_{em}(\Delta t/2).
\end{equation}
From a $2l$-th order symplectic symmetric method $M^{2l}(\Delta t)$,
a $2(l+1)$-th order symplectic symmetric method can be constructed
as
\begin{gather}
M^{2(l+1)}(\Delta t)=M^{2l}(\alpha_{l}\Delta t)\circ M^{2l}(\beta_{l}\Delta t)\circ M^{2l}(\alpha_{l}\Delta t)\thinspace,\\
\text{with~}\alpha_{l}=\left(2-2^{1/(2l+1)}\right)^{-1},\thinspace\textrm{ and }\beta_{l}=1-2\alpha_{l}\thinspace.
\end{gather}
Obviously, the composed algorithms for the full system are symplectic
and unitary.
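Schematically, in code the compositions read as follows, with
\texttt{M\_em(state, dt)} and \texttt{M\_qm(state, dt)} standing for the
one-step maps above (hypothetical signatures; the placeholders below only
illustrate the wiring):
\begin{verbatim}
def strang(M_em, M_qm):
    # Second-order symmetric composition M_em(dt/2) o M_qm(dt) o M_em(dt/2).
    def M2(state, dt):
        state = M_em(state, 0.5 * dt)
        state = M_qm(state, dt)
        return M_em(state, 0.5 * dt)
    return M2

def triple_jump(M_2l, l):
    # Raise a symmetric method of order 2l to order 2(l+1).
    alpha = 1.0 / (2.0 - 2.0 ** (1.0 / (2 * l + 1)))
    beta = 1.0 - 2.0 * alpha
    def M_next(state, dt):
        state = M_2l(state, alpha * dt)
        state = M_2l(state, beta * dt)
        return M_2l(state, alpha * dt)
    return M_next

M_em = lambda state, dt: state   # placeholder subsystem maps
M_qm = lambda state, dt: state
M1 = lambda state, dt: M_em(M_qm(state, dt), dt)   # first-order splitting
M2 = strang(M_em, M_qm)                            # second-order
M4 = triple_jump(M2, 1)                            # fourth-order
\end{verbatim}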
\section{Numerical Examples}
\label{sec:5} As numerical examples, two semi-classical problems
have been solved using an implementation of the first-order structure-preserving
geometric algorithm described above. Simulations are carried out on
a Scientific Linux 6.3 OS with two 2.1 GHz Intel Core2 CPUs. The data
structures use a coordinate (COO) sparse format, and the BiCGSTAB
method (iteration accuracy $10^{-9}$) is used to apply
the Cayley transformation.
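A minimal sketch of such a solve, written here with SciPy's sparse tools
purely for illustration (the paper does not name a library):
\begin{verbatim}
import scipy.sparse as sp
from scipy.sparse.linalg import bicgstab

def cayley_apply(Omega, psi, dt):
    # Apply Cay(Omega*dt/2) by solving the sparse system
    # (I - dt/2 Omega) psi_new = (I + dt/2 Omega) psi with BiCGSTAB.
    I = sp.identity(Omega.shape[0], format="csr")
    A = (I - 0.5 * dt * Omega).tocsr()
    b = (I + 0.5 * dt * Omega) @ psi
    # The tolerance keyword is spelled tol= in older SciPy releases.
    psi_new, info = bicgstab(A, b, rtol=1e-9)
    assert info == 0, "BiCGSTAB did not converge"
    return psi_new
\end{verbatim}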
The first numerical example is the oscillation of a free hydrogen
atom, which has been well studied both theoretically and experimentally
\cite{Landau1965,Dirac1958}. The simulation domain is a 100$\times$100$\times$100
uniform Cartesian grid, which represents a {[}-5, 5{]}$\times${[}-5,
5{]}$\times${[}-5, 5{]} $\textrm{a.u.}^{3}$ physical space. All
boundaries are periodic. A hydrogen nucleus is fixed at the origin
and the initial wave function is a direct discretization of the ground-state
wavefunction $\psi=\frac{1}{\sqrt{\pi}}e^{-r}$. The time step is
$\bigtriangleup{t}=1.5\delta/\left(\sqrt{3}c\right)$ a.u., where $\delta=\bigtriangleup{x}=\bigtriangleup{y}=\bigtriangleup{z}=0.1$
a.u. and $c\approx137$ a.u. A total of $2\times10^{4}$ simulation
steps covers a complete oscillation cycle of the ground state. Simulation
results show the ground-state oscillation with very small numerical
noise. Due to finite-grid-size and self-field effects, the initial wave
function is not the exact numerical ground state of the discrete hydrogen
atom; it is only a good approximation, which couples weakly to other energy
levels. The real and imaginary parts of the wavefunction on the $z=0$ plane
at four different times are plotted in Fig.~\ref{fig:1}. The numerical
oscillation period is found to be 12.58 a.u., which agrees very well with
the analytical result of $4\pi$ a.u.
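A quick consistency check of these numbers (ours):
\begin{verbatim}
import math

delta = 0.1                                # grid spacing, a.u.
c = 137.0                                  # speed of light, a.u.
dt = 1.5 * delta / (math.sqrt(3.0) * c)    # time step ~ 6.32e-4 a.u.
period = 4.0 * math.pi                     # analytical period, a.u.
print(period / dt)                         # ~ 2.0e4 steps per cycle
\end{verbatim}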
\begin{figure}
\includegraphics[width=15cm,height=8cm]{fig1} \caption{\label{fig:1} Oscillation of the wavefunction for a hydrogen atom.
The real and imaginary parts of the wave function on the $z=0$ plane,
which passes through the nuclear center, are shown over one oscillation
cycle.}
\end{figure}
The mode structures at the frequency $\nu=1/4\pi$ a.u. are plotted
in Fig.~\ref{fig:2}.
\begin{figure}
\includegraphics[width=15cm,height=8cm]{fig2} \caption{\label{fig:2} Mode structure of the ground state. Real part (a) and
imaginary part (b) on the $z=0$ plane are plotted for the frequency
component at $\nu=1/4\pi$ a.u.}
\end{figure}
As expected, the structure-preserving geometric algorithm has excellent
long-term properties. The time histories of the numerical errors are plotted
in Fig.~\ref{fig:3}. After a long-term simulation, both the total probability
error and the total Hamiltonian error remain bounded by a small value.
\begin{figure}
\includegraphics[width=15cm,height=8cm]{fig3} \caption{\label{fig:3} Time-history of numerical errors. After a long-term
simulation, both total probability error (a)-(b) and total Hamiltonian
error (c)-(d) are bounded by a small value.}
\end{figure}
In the second example, we simulate the continuous ionization of a
hydrogen atom in an ultrashort intense pulse-train of electromagnetic
field. Because the light-electron speed ratio is about 137, the coupling
between a single ultrashort pulse and the atom is weak. But with the
continuous excitation by the intense pulse-train, the atom can be
ionized gradually. The computational domain and initial wavefunction
are the same as in the first example, and the time step is chosen to be
$\bigtriangleup{t}=0.1\delta/\left(\sqrt{3}c\right)$ a.u. to resolve the
scattering process. To introduce the incident pulse-train, we set the
initial gauge field to $\bm{A}^{0}=100e^{-(z+2.5)^{2}/0.25}\bm{e}_{x}$
and $\bm{Y}^{0}=0$, representing two linearly-polarized
modulated Gaussian waves which counter-propagate along the
$z$-direction. The evolution of the wave function is plotted in
Figs.~\ref{fig:4} and \ref{fig:5}, which depict the continuous ionization
process driven by the ultrashort intense pulse-train. The ionization is
indicated by the growing plane-wave components of the wavefunction.
Figure~\ref{fig:6} illustrates the evolution of the scattered gauge field,
which depends strongly on the electron polarization current.
\begin{figure}
\includegraphics[width=15cm,height=8cm]{fig4} \caption{\label{fig:4} Evolution of the wavefunction (real part). It shows
that at early times the wave function is localized and the atomic
state is maintained. After a few pulses, the wave function is slightly
modified by the gauge field and plane-wave components along the $z$-direction
appear, which marks the onset of ionization. As the pulse-train
accumulates, the wave function drifts along the $\bm{A}\times\bm{k}$
direction, and the atomic state is destroyed. The increasing plane-wave
components due to ionization can be clearly identified. In this process,
photon momentum is transferred to the electron gradually.}
\end{figure}
\begin{figure}
\includegraphics[width=15cm,height=8cm]{fig5} \caption{\label{fig:5} Evolution of the wavefunction (imaginary part). It
shows the same ionization process as in Fig.\,\ref{fig:4}.}
\end{figure}
\begin{figure}
\includegraphics[width=15cm,height=8cm]{fig6} \caption{\label{fig:6} Evolution of the $A_{z}$ component of the scattered gauge
field. The scattered field depends strongly on the electron polarization
current. It is weak relative to the incident field, which indicates
that the effect of a single atom is small. An ensemble with $10^{3}-10^{4}$
atoms will show significant effects.}
\end{figure}
To demonstrate the excellent long-term properties of the structure-preserving
geometric algorithm, the time histories of the numerical errors in this
example are plotted in Fig.~\ref{fig:7}. After a long-term simulation,
the numerical errors of the conserved quantities remain bounded by a small
value.
\begin{figure}
\includegraphics[width=15cm,height=8cm]{fig7} \caption{\label{fig:7} Time-history of numerical errors. After a long-term
simulation, both total probability error (a)-(b) and total Hamiltonian
error (c)-(d) are bounded by a small value.}
\end{figure}
\section{Conclusions}
\label{sec:6}
The structure-preserving geometric algorithms developed provide us
with a first-principle based simulation capability for the SM system
with long-term accuracy and fidelity. Two numerical examples validated
the algorithm and demonstrated its applications. This approach is particularly valuable when the
laser intensity reaches $10^{18}$ W$\cdot$cm$^{-2}$, which invalidates
many reduced or simplified theoretical and numerical models based
on perturbative analysis. For example, structure-preserving geometric
algorithms can be applied to achieve high fidelity simulations of
the HHG physics and the stabilization effect of ionization. HHG
has been partially explained by the three-step semi-classical model
and the Lewenstein model in the strong-field approximation \cite{Krause1992,Corkum1993,Lewenstein1994}.
After ionization, acceleration and recapture in a strong field, the
electron emits photons with a high-order harmonic spectrum. The step
and cutoff structures of the spectrum depend strongly on the beam
intensity, photon energy and atomic potential. With the time-dependent
wave function, the spectrum $F(\omega)=\int_{T}\int_{V}\psi^{*}(t)\ddot{\bm{x}}\psi(t)e^{i\omega{t}}\mathrm{d}^{3}x\mathrm{d}t$
can be calculated numerically. It can also be obtained by calculating the scattered gauge field spectrum
via a class of numerical probes around the potential center. Numerically
calculated wave functions also contain detailed information about
the dynamics of ionization. In a strong field, the atomic potential
is strongly dressed, and the wave function becomes delocalized;
electrons therefore have a chance of escaping into free states. Above
a certain threshold, stabilization quickly appears, i.e., the ionization
rate increases only slowly with growing beam intensity
and photon energy \cite{Pont1990,Eberly1993}. By introducing a proper
absorbing boundary condition in the simulation, the ionization rate
can be calculated as $\Gamma_{I}=\oint\frac{1}{2}(\psi_{R}\bigtriangledown\psi_{I}-\psi_{I}\bigtriangledown\psi_{R}){\cdot}\mathrm{d}\bm{S}$,
which gives a non-perturbative numerical treatment of the phenomenon.
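As a sketch of such post-processing (ours, not the paper's implementation),
the spectrum can be estimated from a sampled time series of the
dipole-acceleration expectation value via a windowed FFT:
\begin{verbatim}
import numpy as np

def hhg_spectrum(accel, dt):
    # Harmonic spectrum |F(omega)|^2 from samples of
    # a(t) = <psi(t)| x_ddot |psi(t)>; a Hann window suppresses
    # leakage from the finite observation time.
    a = np.asarray(accel) * np.hanning(len(accel))
    F = np.fft.rfft(a) * dt                 # discrete time integral
    omega = 2.0 * np.pi * np.fft.rfftfreq(len(a), dt)
    return omega, np.abs(F) ** 2
\end{verbatim}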
\section*{Acknowledgements}
\label{ack}
This research is supported by the National Natural Science Foundation
of China (NSFC-51477182, 11575185, 11575186), ITER-China Program (2015GB111003)
and Key Research Program of Frontier Sciences CAS (QYZDB-SSW-SYS004).
\section*{References}
\documentclass[12pt]{article}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{makeidx}
\usepackage{amssymb}
\usepackage{times}
\usepackage{anysize}
\setcounter{MaxMatrixCols}{10}
\marginsize{3cm}{3cm}{2cm}{2cm}
\newtheorem{satz}{Theorem}[section]
\newtheorem{definition}[satz]{Definition}
\newtheorem{lemma}[satz]{Lemma}
\newtheorem{koro}[satz]{Corollary}
\newtheorem{bemerkung}[satz]{Remark}
\newtheorem{assumption}{Assumption}
\newtheorem{proposition}[satz]{Proposition}
\newtheorem{notation}[satz]{Notation}
\newenvironment{proof}{\par\noindent {\it Proof:} \hspace{7pt}}{\hfill\hbox{\vrule width 7pt depth 0pt height 7pt}
\par\vspace{10pt}}
\newcommand{\bT}{{\mathbb T}}
\newcommand{\bN}{{\mathbb N}}
\newcommand{\bC}{{\mathbb C}}
\newcommand{\bZ}{{\mathbb Z}}
\newcommand{\bR}{{\mathbb R}}
\newcommand{\bE}{{\mathbb E}}
\newcommand{\bX}{{\mathbb X}}
\newcommand{\cB}{{\cal B}}
\newcommand{\cC}{{\cal C}}
\newcommand{\cF}{{\cal F}}
\newcommand{\cT}{{\cal T}}
\newcommand{\cU}{{\cal U}}
\newcommand{\cP}{{\cal P}}
\newcommand{\cE}{{\cal E}}
\newcommand{\cH}{{\cal H}}
\newcommand{\GC}{Gram constant}
\newcommand{\DB}{determinant bound}
\newcommand{\Mcomm}[1]{\par\noindent {\bf M: \em #1} \par\noindent}
\newcommand{\Wcomm}[1]{\par\noindent {\bf W: \em #1} \par\noindent}
\newcommand{\dB}{\delta}
\newcommand{\dbigcup}{\mathop{\displaystyle \bigcup }}
\input{tcilatex}
\begin{document}
\title{Microscopic Foundations of Ohm and Joule's Laws -- The Relevance of
Thermodynamics}
\author{J.-B. Bru \and W. de Siqueira Pedra}
\maketitle
\begin{abstract}
We give a brief historical account on microscopic explanations of electrical
conduction. One aim of this short review is to show that Thermodynamics is
fundamental to the theoretical understanding of the phenomenon. We discuss
how the 2nd law, implemented in the scope of Quantum Statistical Mechanics,
can be naturally used to give mathematical sense to conductivity of very
general quantum many--body models. This is reminiscent of original ideas of
J.P. Joule. We start with Ohm and Joule's discoveries and proceed by
describing the Drude model of conductivity. The impact of Quantum Mechanics
and the Anderson model are also discussed. The exposition is closed with the
presentation of our approach to electrical conductivity based on the 2nd law
of Thermodynamics as passivity of systems at thermal equilibrium. It led to
new rigorous results on linear conductivity of interacting fermions. One
example is the existence of so--called AC--conductivity measures for such a
physical system. These measures are, moreover, Fourier transforms of time
correlations of current fluctuations in the system. That is, the conductivity
satisfies, for a large class of quantum mechanical microscopic models,
Green--Kubo relations.
\end{abstract}
\noindent\textbf{Keywords:} Ohm's law, Joule's law, conductivity measure, 2nd law, fermions.
\section{Electrical Conductivity and Classical Physics}
\subsection{The Genesis of Ohm and Joule's laws}
G.S. Ohm was born in 1789 in Erlangen and came from a modest background (son
of a master locksmith). Nevertheless, he succeeded in learning basic
mathematics and sciences and became for about a decade teacher of
mathematics and physics in Cologne. During this time, he had been able to
elaborate his own experiments on electrical resistivity. He was originally
inspired by J. Fourier's work, published in 1822, about heat theory. Indeed,
G.S. Ohm drew a comparison between heat conduction and electrical conduction
in a metallic bar. Based on this intuition, he published a few papers on
experimental results about electrical resistivity in metals. He concluded
his work on electrical conduction with his famous theory \cite{thermo-ohm},
which was a theoretical deduction of his law from \textquotedblleft \emph{first principles}\textquotedblright . Indeed, he states that the current in
the steady regime is proportional to the voltage applied to the conducting
material. The proportionality coefficient is the conductivity (or inverse
resistivity) of the physical system. It is an empirical law which looks
almost obvious nowadays.
At that time, however, his writings were almost unknown. His book \cite{thermo-ohm} was at best completely ignored and at worst received really
negatively. Rather than scientific, some critiques were more ethical, as they
were based on a priori conceptions of what science and nature are, probably
on what L. Daston and P. Galison have called \emph{truth--to--nature} \cite{objectivity}. Quoting \cite[p. 243]{history1}: \bigskip
\noindent ...\textit{Ohm's theory, to quote one critic, was
\textquotedblleft a web of naked fancies\textquotedblright , which could
never find the semblance of support from even the most superficial
observation of facts; \textquotedblleft he who looks on the
world\textquotedblright , proceeds the writer, \textquotedblleft with the
eye of reverence must turn aside from this book as the result of an
incurable delusion, whose sole effort is to detract from the dignity of
nature\textquotedblright . ... where he had looked for commendation he found
at best complete indifference, and at worst open abuse and derision. ... The
influence of this opposition} (some school official)\textit{\ reached the
Minister of Education himself, and he, speaking officially, definitely
pronounced it as his opinion that \textquotedblleft a physicist who
professed such heresies was unworthy to teach science\textquotedblright .
\bigskip
Retrospectively, such comments in \textquotedblleft a country so well
represented in the world of science by men of eminence and
knowledge\textquotedblright ~\cite{history1} are marks of revolutionary
ideas, but it was a real bitter blow for G.S. Ohm: he gave up his teaching
position at Cologne and started six years of hard times. His work was
nevertheless occasionally cited, and rumors about Ohm's theory started to
appear in different places. This includes America, where the famous physicist
J. Henry asked his colleagues in 1833: \textquotedblleft Could you give me
any information about the theory of Ohm? Where is it to be
found?\textquotedblright\ J. Henry succeeded in having this information by
going to England in 1837 at a time when Ohm's work had already become
famous, particularly outside his own country.
Although it was at the origin of Ohm's intuition, the relation between heat
and electrical conduction was not established by Ohm himself, but by J.P.
Joule, who was born in 1818 in England. The pivotal ingredient was the broad
concept of energy. Joule's intuition seems to have been that the different
physical properties appearing in nature can be tracked by the concept of
energy. He thus studied different forms of energy in order to relate them.
The conversion of mechanical work into heat is a famous topic of such
studies. His works, although also very controversial at the beginning, were
seminal and yielded the \emph{1st law of Thermodynamics}; see, e.g.,
\cite{Thermo1}. Recall also that all mechanical work can be converted to heat but the
converse is not true, in general. This observation refers to the \emph{2nd
law of Thermodynamics} and the concept of \emph{entropy} invented by R.J.E.
Clausius in 1865.
Applied to electricity theory, Joule's intuition allowed him to establish a
relation between heat and electrical conduction. Indeed, more than one
decade after Ohm's discovery \cite{thermo-ohm} on linear electrical
conduction, the physicist J. P. Joule observed \cite{J} in 1840 that the heat
(per second) produced within an electrical circuit is proportional to the
electrical resistance and the square of the current:\bigskip
\noindent ...\textit{the calorific effects of equal quantities of
transmitted electricity are proportional to the resistances opposed to its
passage, whatever may be the length, thickness, shape, or kind of metal
which closes the circuit: and also that, coeteris paribus, these effects are
in the duplicate ratio of the quantities of transmitted electricity; and
consequently also in the duplicate ratio of the velocity of transmission.
\smallskip
\hfill \lbrack Joule, 1840]\bigskip
Nowadays, electrical conductivity usually refers to Ohm and Joule's laws.
They are indeed among the most resilient laws of (classical) electricity
theory. Materials are called ohmic or nonohmic, depending on whether they
obey Ohm's law. Both assertions are empirical laws and, as usual, they
generated at least as many theoretical problems as they solved. From a
mathematically rigorous point of view, the microscopic origin of the
phenomenological behavior of macroscopic conductors described by these laws
is still not completely understood, specially in the DC regime. Moreover, as
recent experiments show, Ohm's law is not only valid at macroscopic\emph{\
scales. Indeed, the validity of Ohm's law at the atomic scale for a purely
quantum system has experimentally been verified \cite{Ohm-exp} in 2012. Such
a behavior was unexpected \cite{Ohm-exp2}:\bigskip
\noindent \textit{...In the 1920s and 1930s, it was expected that classical
behavior would operate at macroscopic scales but would break down at the
microscopic scale, where it would be replaced by the new quantum mechanics.
The pointlike electron motion of the classical world would be replaced by
the spread out quantum waves. These quantum waves would lead to very
different behavior. ... Ohm's law remains valid, even at very low
temperatures, a surprising result that reveals classical behavior in the
quantum regime. }\smallskip
\hfill \lbrack D.K. Ferry, 2012]
\subsection{Towards a microscopic theory of electrical conduction}
In the end of the nineteenth century, the so--called classical physics
reached a very high level of completeness with Classical Mechanics,
Electrodynamics, and Thermodynamics. However, borderline problems became
manifestly more and more important and eventually led to the scientific
revolution of the beginning of the twentieth century. For instance, the
study of the link between Classical Mechanics and Thermodynamics yielded the
so--called Statistical Physics via Gibbs and Boltzmann's ingenious intuitions.
Classical Mechanics is indeed a causal theory based on elementary physical
objects satisfying Newton's laws of motion. The exceptional success of this
theory, together with new technologies like photography, propagated a new
viewpoint on science during the last part of the nineteenth century,
the so--called \emph{mechanical objectivity} \cite{objectivity}. By contrast,
Thermodynamics emphasizes the concepts of energy and entropy of macroscopic
systems. It speaks about reversible and irreversible processes, but it does
not care about the concrete system under consideration. Classical Mechanics
is in some sense a bottom--up or \textquotedblleft local\textquotedblright\
approach, whereas Thermodynamics is a top--down or global one.
In order to bridge the gap between both theories, L. Boltzmann successfully
tried to go from Classical Mechanics towards Thermodynamics via statistical
or probabilistic methods. For more details on Boltzmann's legacy, see
\cite{Boltzman}. His $H$--theorem (1872) was undoubtedly an important
achievement \cite{Boltzmanbisbis}, as it provided a mechanical explanation
of the 2nd law of Thermodynamics from the dynamics of rarefied gases.
Boltzmann's vision of \textquotedblleft atoms\textquotedblright\ as the
physical objects satisfying Newton's laws was, however, again very
controversial for a long time
\cite{Boltzmanbis}: \textquotedblleft have you seen any?\textquotedblright\
might have said the famous physicist and philosopher E. Mach as a reply to
the issue of atoms (cf. \cite[p. 20]{Boltzmanbis}). E. Mach had indeed a
philosophical approach centered on the world of sensations in a similar
spirit of mechanical objectivity, whereas L. Boltzmann also focused on
mathematical structures. See later the development of \emph{structural
objectivity} \cite{objectivity} (M. Planck (1906), B. Russell, H. Poincar\'{e},
C.S. Peirce, etc.). Similar ethical oppositions appeared in other
sciences: S. Ram\'{o}n y Cajal and C. Golgi were together Nobel laureates in
1906, but C. Golgi violently opposed S. Ram\'{o}n y Cajal's theory of
neurons (similar to Boltzmann's theory of atoms) to explain the global
system which is the brain. For more details, see \cite{cajal-goldi}. As
explained in \cite{objectivity}, the opposite conceptions of science were in
this case truth--to--nature (Golgi) and mechanical objectivity
(Ram\'{o}n y Cajal), as well as continuous (Golgi) versus discontinuous
(Ram\'{o}n y Cajal) visions.
In the same spirit as Boltzmann, it was natural to raise the question of the
microscopic origin of Ohm and Joule's laws. In 1846, W. Weber conjectured
that currents were a flow of charged fluids and in 1881, H. von Helmholtz
argued the existence of positive and negative charges as \textquotedblleft
atoms of electricity\textquotedblright . The electron was discovered in the
last years of the nineteenth century by J.J. Thomson (Nobel Prize in Physics
1906) and others; it was the first elementary particle to be discovered.
Based on the vision that current is a flow of electrons, the
celebrated Drude model was next proposed \cite{drude} in 1900 to give a
mechanical explanation of the phenomenon of conductivity. This model and its
extension, the Drude--Lorentz model (1905), are still used as microscopic
explanations of conductivity in textbooks. Indeed, although the motion of
electrons and ions is treated classically and the interaction between these
two species is modeled by perfectly elastic random collisions, this quite
elementary model provides a qualitatively good description of DC-- and
AC--conductivities in metals.
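For reference, the model's quantitative content is the textbook DC
conductivity $\sigma_{0}=ne^{2}\tau/m$ and its AC generalization
$\sigma(\omega)=\sigma_{0}/(1-i\omega\tau)$; a minimal sketch with
illustrative copper-like parameters (our numbers):
\begin{verbatim}
import numpy as np

def drude_sigma(omega, n, tau, e=1.602e-19, m=9.109e-31):
    # Drude AC conductivity sigma(omega) = sigma_0 / (1 - i omega tau),
    # with sigma_0 = n e^2 tau / m (SI units).
    sigma0 = n * e**2 * tau / m
    return sigma0 / (1.0 - 1j * omega * tau)

# Copper-like values: n ~ 8.5e28 m^-3, tau ~ 2.5e-14 s.
print(drude_sigma(0.0, n=8.5e28, tau=2.5e-14))   # ~ 6e7 S/m (DC)
\end{verbatim}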
\section{Electrical Conductivity and Quantum Mechanics}
\subsection{Emergence of Quantum Mechanics}
The main principles of physics were considered well--founded by the end
of the nineteenth century, even though there was, for instance, no
satisfactory explanation of the phenomenon of thermal radiation, first
discovered in 1860 by G. Kirchhoff. In contrast to classical physics, which deals with
continuous quantities, Planck's intuition was to introduce an intrinsic
discontinuity of energy and an unusual\footnote{With regard to Boltzmann's studies, which had meanwhile strongly influenced Planck's work. In modern terms, Planck used the celebrated Bose--Einstein statistics.}
statistics (without any conceptual foundation, in an ad hoc way) to explain thermal radiation. Assuming the existence
of a quantum of action $h$, the celebrated Planck constant, and this pivotal statistics, he derived the
well--known Planck law of thermal radiation. Inspired by Planck's ideas,
Einstein presented his famous discrete (corpuscular) theory of light to
explain the photoelectric effect.
Emission spectra of chemical elements had also been known since the
nineteenth century and no theoretical explanation was available at that
time. It became clear that electrons play a key role in this phenomenon.
However, the classical solar system model of the atom failed to explain the
emitted or absorbed radiation. Following again Planck's ideas, N. Bohr
proposed in 1913 an atomic model based on discrete energies that
characterize electron orbits. It became clear that the main principles of
classical physics were unable to describe atomic physics.
Planck's quantum of action, Einstein's quanta of light (photons), and Bohr's
atomic model could not be a simple extension of classical physics, which, in
turn, could also not be questioned in its field of validity. For almost a
decade, N. Bohr tried to reconcile the paradoxical-looking microscopic
phenomena by defining a radically different kind of logic. Bohr's concept of
complementarity gave a conceptual solution to that problem in 1928 and
revolutionized the usual vision of nature; see, e.g., \cite{chevalley}.
Classical logic should be replaced by quantum logic, as claimed \cite{BvonNeu} by G. Birkhoff and J. von Neumann in 1936.
On the level of theoretical physics, until 1925, quantum corrections were
systematically included, in a rather \emph{ad hoc} manner, into classical
theories to allow explicit discontinuous properties. Then, as explained for
instance in \cite{shrodinger}, two apparently complementary directions were
taken by W.K. Heisenberg and E. Schr\"{o}dinger, respectively, to establish
basic principles of the new
quantum physics, in contrast with the \textquotedblleft old quantum
theory\textquotedblright\ starting in 1900. Indeed, even with the so--called
correspondence principle of N. Bohr, \textquotedblleft many problems, even
quite central ones like the spectrum of helium atom, proved inaccessible to
any solution, no matter how elaborate the conversion\textquotedblright , see
\cite[p. 18]{shrodinger}.
\subsection{Quantum Fermi liquids}
Electric current is carried by electrons, purely quantum objects (W.E.
Pauli, 1925; E. Fermi, 1925; P.A.M. Dirac, 1929), whereas the Drude model
describes \emph{non--interacting classical} particles interacting with
impurities via perfectly elastic collisions. Quantum Mechanics, which
governs the microscopic world, represents a radical transformation of usual
principles of classical physics and it is therefore not at all satisfactory
to see the Drude (or the Drude--Lorentz) model as a proper microscopic
explanation of conductivity, even with good agreement with experimental
data. As one can see from the existence of superconducting phases first
discovered in 1911, electrons can show collective behaviors while satisfying
the celebrated Pauli exclusion principle.
In 1933, A. Sommerfeld and H. Bethe modified the Drude model to take
quantum effects into account. Essentially, they replaced the classical
point--like particles of Drude, carriers of electrical current, with
fermions. In particular, the carriers present quantum coherences and obey
the Fermi--Dirac statistics. However, the Drude--Sommerfeld model describes
a system of non--interacting fermions although electrons strongly interact
with each other via the Coulomb repulsion. A formal explanation of the
success of this model was given \cite{Landau} by L.D. Landau in the fifties.
His theory is based on the concept of \emph{Landau Fermi liquids} (or Fermi
liquids).
Landau's idea is, in a caricatured view, that the low--energy excited states
of a Fermi system with interparticle interactions can be mapped onto the
states of an effective non--interacting (or ideal) Fermi system. The
theoretical justification of such a behavior, i.e., the fact that
electron--electron scattering is too weak to change the momentum
distribution, results from the Pauli exclusion principle for energies near
the Fermi level. More precisely, if the system is initially in a state
close to that of an ideal system (weakly excited), then its time--dependent
state can be uniquely described by occupation numbers of
\emph{quasiparticles} (as approximate quantum numbers). Moreover, L.D.
Landau postulates the existence of a function $\mathrm{f}_{k,k^{\prime }}$,
the so--called Landau interaction function, which quantifies the energy
change of a quasiparticle of quasimomentum $k$ in the presence of a
quasiparticle of quasimomentum $k^{\prime }$. The effective mass, another
parameter of Fermi liquids,
determines the dispersion relation of quasiparticles, i.e., the energy of
quasiparticles as a function of their quasimomenta. This effective (or
phenomenological) theory has been very successful in explaining the physics
of some electron systems, called Fermi liquids. Fermion systems are called
non--Fermi liquids if their behavior does not correspond to Landau's
predictions. Non--Fermi liquid behaviors usually appear in low dimensions.
For instance, in one dimension, the celebrated Luttinger liquid replaces the
(Landau) Fermi liquid. For more details, see \cite{dia-current}.
\subsection{From theoretical physics to mathematics: The Anderson model}
Resistivity of metals is believed to be due to interparticle interactions,
but also to inhomogeneities of the conducting crystal. Disordered electron
liquids are therefore an important issue in this context. The theory of
Fermi liquids can be extended to disordered systems, but major differences
appear as compared to the (space) homogeneous systems. New properties like
the so--called \emph{Anderson localization} are consequences of strong space
inhomogeneities, even in the absence of interparticle interactions.
Anderson localization corresponds to the absence of electron transport at
strong disorder and was predicted \cite{Anderson} by the physicist P.W.
Anderson in 1958. This suggests the existence of a metal--insulator
transition in three dimensions. This theory has been experimentally
investigated and P.W. Anderson, together with N.F. Mott and J.H. van Vleck,
won the 1977 Nobel prize in physics for \textquotedblleft their fundamental
theoretical investigations of the electronic structure of magnetic and
disordered systems\textquotedblright .
The Anderson model corresponds to a single quantum particle within a random
potential. It is one of the most important types of random Schr\"{o}dinger
operators, which nowadays constitute an advanced and relatively mature
branch of mathematics. In fact, random Schr\"{o}dinger operators started to
be studied in the seventies. Anderson localization for a one--dimensional
model was first proved by I. Goldsheid, S. Molchanov and L. Pastur in 1977,
while a similar result for the one--dimensional Anderson model was obtained
in 1981 by H. Kunz and B. Souillard. It is known that, in general, the
one--dimensional Anderson model only has purely point spectrum with a
complete set of localized eigenstates (Anderson localization) and it is thus
believed that no steady current can exist in this case. For a more detailed
introduction to the Anderson model and more general random Schr\"{o}dinger
operators, see for instance the lecture notes \cite{WernerKirsch}.
Nevertheless, mathematical studies usually focus on the existence of
(dynamical or spectral) Anderson localization and, even in the absence of
interactions, there are only a few mathematical results on transport
properties of such random models that yield Ohm's law in some form.
In 2007, A. Klein, O. Lenoble and P. M\"{u}ller introduced \cite{Annale} for
the first time the concept of a \textquotedblleft conductivity
measure\textquotedblright\ for a system of non--interacting fermions
subjected to a random potential. More precisely, the authors considered the
Anderson tight--binding model in the presence of a time--dependent,
spatially homogeneous electric field that is adiabatically switched on. The
fermionic nature of charge carriers -- electrons or holes in crystals -- as
well as the thermodynamics of such systems were implemented by choosing the
Fermi--Dirac distribution as the initial density matrix of the particles. In
\cite{Annale}, only systems at zero temperature with Fermi energy lying in
the localization regime are considered, but it is shown in \cite{JMP-autre}
that a conductivity measure can also be defined without the localization
assumption and at any positive temperature. These studies can thus be seen
as a mathematical derivation of Ohm's law for space--homogeneous electric
fields having a specific time behavior.
The work \cite{Cornean} is another mathematical result on free fermions,
proving Ohm's law for graphene--like materials subjected to
space--homogeneous and time--periodic electric fields. Joule's law and heat
production are not considered, at least not explicitly, in these
mathematical studies.
\section{Electrical Conductivity and 2nd Law of Thermodynamics}
Via \cite{Annale,JMP-autre} one sees that measures (instead of functions or
other types of distributions) are the natural mathematical objects with
which to describe conductivity starting from microscopic quantum dynamics.
We claim that this is so because of the 2nd law of Thermodynamics. Indeed,
such a principle guarantees the positivity of certain quadratic forms on
external electric fields, which naturally appear when considering linear
response. By Bochner's theorem, in a convenient form, such quadratic forms
define measures. In the case of current response to external electric
fields, one gets AC--conductivity measures. This approach makes it possible
to tackle the mathematical problem of a rigorous microscopic description of
the phenomenon of linear conductivity starting from first principles.
Moreover, it is general enough to be applied to interacting systems. We
implement the 2nd law of Thermodynamics within the framework of algebraic
Quantum Mechanics, by using the
remarkable results \cite{PW} of W. Pusz and S. L. Woronowicz: We consider
the 2nd law as a first principle of Physics which supplements Quantum
Mechanics in the sense that it singles out special states of the considered
systems. Indeed, states of infinite systems that are compatible with the 2nd
law exist for a huge class of dynamics.
In fact, the 2nd law is \textquotedblleft \textit{one of the most perfect
laws in physics}\textquotedblright\ \cite[Section 1]{lieb-yngvasonPhysReport}
and it has never been faulted by reproducible experiments. Its history
starts with the works of S. Carnot in 1824. Different popular formulations
of the same principle have been stated by R.J.E. Clausius, by W. Thomson
(Lord Kelvin) and M. Planck, and by C. Carath\'{e}odory. Our study is based
on the Kelvin--Planck statement, while avoiding the concept of
\textquotedblleft cooling\textquotedblright\
\cite[p.~49]{lieb-yngvasonPhysReport}: \bigskip
\noindent \textit{No process is possible, the sole result of which is a
change in the energy of a simple system (without changing the work
coordinates) and the raising of a weight. }\bigskip
Using this formulation of the 2nd law, we define the concept of
\emph{thermal equilibrium} states by using algebraic Quantum Mechanics as the
mathematical framework. It is a well--known approach -- originally due to J.
von Neumann (cf. von Neumann algebras, $C^{\ast }$--algebras) -- that
extends the Hilbert space formulation of Quantum Mechanics. One important
result of the theory of $C^{\ast }$--algebras, obtained in the forties, is
the celebrated GNS (Gel'fand--Naimark--Segal) representation of states,
which permits a natural relation between the new algebraic formulation and
the usual Hilbert space based formulation of Quantum Mechanics to be
established. Indeed, I.E. Segal proposed to leave the Hilbert space approach
and to consider quantum observables as elements of certain involutive Banach
algebras, now known as $C^{\ast }$--algebras. The GNS representation has
also led to very important applications of the Tomita--Takesaki theory,
developed in the seventies, to Quantum Field Theory and Statistical
Mechanics. These developments mark the beginning of the algebraic approach
to Quantum Mechanics and Quantum Field Theory. For more details, see, e.g.,
\cite{Emch}.
The algebraic formulation turned out to be extremely important and fruitful
for the mathematical foundations of Quantum Statistical Mechanics and has
been an active branch of research for decades, with many works on quantum
spin and Fermi systems. See, e.g., \cite{BratteliRobinson,Israel} (spin) and
\cite{Araki-Moriya,BruPedra2,BruPedra-homog} (Fermi). Basically, it uses
some $C^{\ast }$--algebra $\mathcal{X}$, the
self--adjoint elements of which are the so--called observables of the
physical system. States on the $C^{\ast }$--algebra $\mathcal{X}$ are, by
definition, continuous linear functionals $\rho \in \mathcal{X}^{\ast }$
which are normalized and positive, i.e., $\rho (\mathbf{1})=1$ and $\rho
(A^{\ast }A)\geq 0$ for all $A\in \mathcal{X}$. They represent the state of
the physical system.
To conveniently define equilibrium states in our case,
~\cite{PW}
is pivotal because it gives a definition of equilibrium by using the
Kelvin--Planck statement via the notion of \emph{passive} states: The
internal dynamics of the system is a strongly continuous one--parameter
group $\tau \equiv \{\tau _{t}\}_{t\in {\mathbb{R}}}$ of $\ast
$--automorphisms of $\mathcal{X}$ with (generally unbounded) generator
$\delta $. Usually, $\delta $ is a dissipative and closed derivation of
$\mathcal{X}$. On this system, one applies a \emph{cyclic} process of length
$T\geq 0$, that is, a continuously differentiable family $\{A_{t}\}_{t\in
{\mathbb{R}}}\subset \mathcal{X}$ of self--adjoint elements of $\mathcal{X}$
such that $A_{t}=0$ for all $t\leq s$ and $t\geq T+s$. The perturbed
dynamics is the solution $\{\tau _{t,s}\}_{t\geq s}$ of the non--autonomous
evolution equation defined, for any $B\in \mathrm{Dom}(\delta )$, by
\[
\forall s,t\in {\mathbb{R}},\ t\geq s:\quad \partial _{t}\tau _{t,s}\left(
B\right) =\tau _{t,s}\left( \delta \left( B\right) +i\left[ A_{t},B\right]
\right) ,\quad \tau _{s,s}\left( B\right) :=B\ .
\]
The state of the system evolves as $\rho _{t}=\rho \circ \tau _{t,s}$ for
any $t\geq s$ at fixed initial state $\rho \in \mathcal{X}^{\ast }$. Then,
as explained in \cite[p. 276]{PW}, a state $\rho \in \mathcal{X}^{\ast }$ is
\emph{passive} iff the full work performed by the external device is
non--negative for all cyclic processes $\{A_{t}\}_{t\geq s}\subset \mathcal{X}$
of any time--length $T\geq 0$, i.e.,
\begin{equation}
L_{\rho }^{A}:=\int_{s}^{T+s}\rho \circ \tau _{t,s}\left( \partial
_{t}A_{t}\right) \mathrm{d}t\geq 0\ . \label{work}
\end{equation}
In this way the Kelvin--Planck statement of the 2nd law can be formulated in
precise mathematical terms. If, for any $n\in \mathbb{N}$, the product state
$\otimes _{j=1}^{n}\rho $ is passive for the compound system made of $n$
copies $(\mathcal{X}_{1},\tau _{1},\rho _{1}),$ $\ldots ,(\mathcal{X}_{n},\tau
_{n},\rho _{n})$ of the original system $(\mathcal{X},\tau ,\rho )$, then
$\rho $ is called \emph{completely passive} \cite{PW}. Such states are the
\emph{thermal equilibrium states} of our setting. Theorem 1.4 of \cite{PW}
shows that thermal equilibrium states in this sense are exactly the
\emph{KMS} (Kubo--Martin--Schwinger) states \cite{BratteliRobinson} of the
corresponding $C^{\ast }$--dynamical system.
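To make the abstract notion of passivity more concrete, the following toy
sketch (our own illustration, with an arbitrary Hamiltonian and inverse
temperature; a finite--dimensional caricature, not the infinite--volume
$C^{\ast }$--algebraic setting above) checks numerically that a Gibbs state
never delivers work under any cyclic unitary process, i.e.,
$\mathrm{Tr}(U\rho U^{\ast }H)-\mathrm{Tr}(\rho H)\geq 0$ for every unitary
$U$:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Toy check of passivity in finite dimension (illustrative values only):
# for a Gibbs state rho of a Hamiltonian H, every cyclic unitary process
# performs non-negative work, W = Tr(U rho U^+ H) - Tr(rho H) >= 0.
rng = np.random.default_rng(0)
H = np.diag([0.0, 1.0, 2.5])         # toy three-level Hamiltonian
beta = 1.3                           # inverse temperature
rho = expm(-beta * H)
rho /= np.trace(rho)                 # Gibbs (hence passive) state

W_min = np.inf
for _ in range(1000):
    A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    U = expm(1j * (A + A.conj().T))  # random unitary from a Hermitian generator
    W = np.trace(U @ rho @ U.conj().T @ H) - np.trace(rho @ H)
    W_min = min(W_min, W.real)
print(W_min)                         # non-negative, up to rounding errors
\end{verbatim}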
In our approach to electrical conduction, the $C^{\ast }$--algebra
$\mathcal{X}$ is the CAR algebra associated to the $d$--dimensional cubic
lattice $\mathfrak{L}:=\mathbb{Z}^{d}$ ($d\in \mathbb{N}$) and particles of
finite spin. The initial state is a thermal equilibrium state, and cyclic
processes are induced by electromagnetic potentials $\{\eta A_{t}\}_{t\geq
s}$ with constant strength $\eta \geq 0$ within some finite region $\Lambda $
of the lattice $\mathfrak{L}$. This leads to perturbed dynamics with
discrete magnetic Laplacians in disordered media (as in the Anderson
model). The
quadratic response with respect to $\eta $ of the full heat production or
electromagnetic work per unit volume turns out to equal $Q_{\rho
}^{A}=\varphi (A\ast \tilde{A})$, where $\varphi $ is a distribution,
$\tilde{A}(t):=A(-t)$ and $A\in C_{0}^{\infty }(\mathbb{R},\mathbb{R})$.
Indeed, we have shown \cite{OhmII, OhmV} that $|\Lambda |^{-1}L_{\rho }^{\eta
A}-\eta ^{2}Q_{\rho }^{A}$ is of order $\mathcal{O}(\eta ^{3})$, uniformly
w.r.t. $A$ and the size $|\Lambda |$ of the region $\Lambda $ where the
electromagnetic field is applied. The 2nd law, that is, (\ref{work}), then
implies that $Q_{\rho }^{A}=\varphi (A\ast \tilde{A})\geq 0$, i.e.,
$\varphi $ is a \emph{distribution of positive type}. By the
Bochner--Schwartz theorem, there is a positive measure $\tilde{\mu}$ on
$\mathbb{R}$ such that
\[
Q_{\rho }^{A}=\int_{\mathbb{R}}\mathrm{d}\tilde{\mu}(\nu )\,|\hat{A}(\nu
)|^{2}=\int_{\mathbb{R}\backslash \{0\}}\mathrm{d}\tilde{\mu}(\nu )\,\nu
^{-2}|\hat{E}(\nu )|^{2}
\]
for all $A\in C_{0}^{\infty }(\mathbb{R},\mathbb{R})$. Here, $E=-\partial
_{t}A$ is the electric field in the Weyl gauge, so that $\hat{E}(\nu )=i\nu
\hat{A}(\nu )$, and $\hat{A},\hat{E}$ are the Fourier transforms of $A,E$,
with support outside some neighborhood of $\nu =0$.
The measure $\mathrm{d}{\mu }(\nu ):=\nu ^{-2}\mathrm{d}\tilde{\mu}(\nu )$
on $\mathbb{R}\backslash \{0\}$ turns out to be the AC--conductivity measure
we are looking for, and the quantity $\mathrm{d}\mu (\nu )|\hat{E}(\nu )|^{2}$
is the heat production due to the component $\hat{E}(\nu )$ of frequency
$\nu $ of the electric field $E$, in accordance with Joule's law. It is
directly related to the passivity property of thermal equilibrium states on
the CAR algebra. In other words, the existence of AC--conductivity measures
results from the 2nd law of Thermodynamics in connection with the full
quantum microscopic dynamics of the considered system. Moreover, the
approach to linear response (quadratic from the energetic viewpoint) that we
propose also has the following technical and conceptual advantages, even in
the non--interacting case:
\begin{itemize}
\item The conductivity measure naturally appears in this framework as the
Fourier transform of current--current time correlations, that is,
four--point correlation functions. This means that Green--Kubo relations are
generally valid, from first principles.\smallskip
\item The algebraic formulation allows a clear link between macroscopic
transport properties of fermion systems and the CCR algebra of current
fluctuations associated to these systems. The latter is related to
non--commutative central limit theorems.\smallskip
\item Moreover, this approach can be naturally used to define and analyze
conductivity measures for \emph{interacting} fermions as well.\smallskip
\end{itemize}
In \cite{OhmI,OhmII} we study free lattice fermions subjected to a static
bounded potential and a time-- and space--dependent electromagnetic field.
These papers establish a form of Ohm and Joule's laws valid at microscopic
scales, uniformly with respect to the size of the region on which the
electric field is applied. This is in accordance with the validity of Ohm's
law in the quantum world, i.e., at microscopic scales, see \cite{Ohm-exp}.
The papers \cite{OhmIII,OhmIV} extend the results of \cite{OhmI,OhmII} to
macroscopic scales and are reminiscent of \cite{Annale,JMP-autre}. For more
details, see the discussions in \cite{OhmIII}. Part of the phenomenology of
the Drude model can be derived from our more detailed study \cite{OhmIV} of
macroscopic conductivity measures of free fermions in disordered media.
Therefore, \cite{OhmI,OhmII,OhmIII,OhmIV} give a complete, mathematically
rigorous, microscopic derivation of the phenomenon of linear conductivity
from first principles of Thermodynamics and Quantum Mechanics only. These
studies are restricted to non--interacting fermion systems. However, it is
believed in theoretical physics that the electric resistance of conductors
should also result from the interactions between charge carriers, and not
only from the presence of inhomogeneities (impurities). In
\cite{OhmVbis,OhmV,OhmVI}, we succeeded in extending our previous results to
fermion systems with interactions.
To conclude, we think that such an approach can be useful in other contexts,
since it gives appropriate tools to tackle mathematically what is known in
Physics as the \emph{excitation spectrum}. Indeed, the concept of excitation
spectrum is usually associated with the spectrum of a self-adjoint operator
describing the energy of the system. In condensed matter physics, this
notion mainly comes from superfluid helium 4, a quantum boson liquid which
can be described by the spectrum of collective excitations via the
celebrated Bogoliubov theory \cite{BZ}. However, there is a plethora
\cite{spectrumexcitation1,spectrumexcitation2} of other types of elementary
excitations not covered by the Bogoliubov theory. We show
\cite{OhmII,OhmIII,OhmV} that the notion of conductivity measure we defined
is nothing but a spectral measure of the generator of the dynamics in an
appropriate representation.
\section{Introduction}
General relativistic spaces filled with black holes have recently
been under scrutiny as exact cosmological models with a discrete mass
distribution which is, in some sense, uniform on large scales. The
construction of these spaces in numerical relativity has enabled
the investigation of several questions without approximations, such as
how such configurations evolve in time and what their global physical
properties are~\cite{Yoo:2012jz,Bentivegna:2012ei,Yoo:2013yea,Bentivegna:2013jta,Yoo:2014boa}.
At the same time, the numerical simulations have been complemented
by insight coming from analytical studies, which have illustrated
some general features of these spacetimes such as the behaviour of special
submanifolds~\cite{Clifton:2013jpa,Clifton:2014lha,Korzynski:2015isa},
the conditions under which they behave like the
Friedmann-Lema{\^i}tre-Robertson-Walker (FLRW) models~\cite{Korzynski:2013tea},
and the link between their behaviour and the validity of Gauss's law
in a generic theory of gravity~\cite{Fleury:2016tsz}.
In this work, we use numerical spacetimes representing black-hole
lattices (BHLs) to probe a different aspect of inhomogeneous
cosmologies, namely their optical behaviour.
As is well known, null
geodesics are the bedrock of cosmological observations: light
from distant sources is the primary tool for measuring the
Universe's density parameters, equation of state, and perturbations.
Increasing the accuracy of models of light propagation and identifying the
biases introduced by various approximation frameworks is thus
critical.
Modelling light propagation in inhomogeneous cosmologies is a long-standing
effort, which has followed two complementary courses: approximation
schemes on one hand, and toy models on the other. The best-known
approach in the former class is the Empty-Beam Approximation (EBA) of
Zeldovich~\cite{1964SvA.....8...13Z}, later generalized by
Dyer and Roeder~\cite{1972ApJ174L115D,Dyer:1973zz}. This approach
is based on the idea that different effects are at play when light propagates
in a perturbed fluid or through discretely-distributed point masses, as
different components of the curvature become dominant in either regime
(this is sometimes referred to as the \emph{Ricci-Weyl problem}~\cite{Fleury:2013sna}).
This approach provides an excellent estimate of light propagation in
Swiss-Cheese models, and can be used to constrain the fraction of
voids in a cosmological model~\cite{Fleury:2013sna,Fleury:2013uqa,Fleury:2014gha}.
The notion that discreteness may affect light propagation more
than inhomogeneity itself has also appeared in other studies, such as
those on light propagation through Schwarzschild-cell
universes~\cite{Clifton:2009jw,Clifton:2009bp,Clifton:2011mt}.
The existing literature points in a number of common directions:
first, examining individual geodesics, one concludes that the effective
value of the cosmological constant (the one obtained fitting the
spacetime to an FLRW model with the same matter density) is higher
than its microscopic value (the one appearing in the gravitational action).
Second, a statistical average of photon
trajectories usually leads to a partial suppression of this
difference. A suppression is also obtained by considering
the perturbative solution corresponding to a regular arrangement
of objects of equal mass, at least as long as the perturbative condition
is satisfied~\cite{Bruneton:2012ru}.
Though consistent in many respects, these studies are limited
by the conditions imposed on the underlying model: most of the
discrete-mass studies are either based on spherically-symmetric
building blocks or on the requirement that the objects not be too
compact. It is presently unclear what the optical properties of a more
generic space would be.
To investigate this issue and test the generality of the existing results,
in this paper we compute the photon redshift and luminosity
distance along null geodesics running through a BHL spacetime, constructed
exactly through the integration of Einstein's equation, non-perturbatively and
in three dimensions. First we compare
the result to some reference models from the FLRW class, to the Milne
cosmology, and to a generic universe in the Empty Beam Approximation (EBA)~\cite{1964SvA.....8...13Z,1972ApJ174L115D,Dyer:1973zz}.
We find that the latter provides the closest approximation to light
propagation on the BHL, and derive a simple argument to explain this result,
which in some sense extends the reasoning of~\cite{Fleury:2014gha}
to completely vacuum spacetimes.
We then turn to the question of whether it is possible to tune the
cosmological parameters in the FLRW class to improve the fit. We find, in
particular, that one can reproduce the luminosity-distance--to--redshift
relationship of a BHL with that of an FLRW model with the same average
matter density and a fictitious, time-dependent
cosmological constant $\Lambda$, and provide the first measurement of
this running in our base configuration. Finally, we study how this behaviour
depends on the BHL inhomogeneity parameter $\mu$~\cite{Bentivegna:2013jta},
which roughly corresponds to the ratio between the central mass and the
lattice spacing, and in particular we analyse the continuum limit of $\mu \to 0$.
An important factor in this discussion is the choice of light
sources and observers, as the photon frequencies and number counts
will depend on the reference frame in which they are measured.
In FLRW models there is an obvious option: the comoving sources and
observers. In inhomogeneous spaces, on the other hand, identifying a
``cosmic flow'' is trickier (when possible at all) and relies on the
somewhat arbitrary split between global cosmological evolution and
``local effects'' sourced by nearby gravitational structures.
For the purpose of this work, we sidestep this question
by noticing that, for a given geodesic, the angular and luminosity distances
can be obtained by applying a certain linear operator to the four-velocity
of the observer, with no dependence whatsoever on the motion of the light source.
It is therefore straightforward to quantify the effect of different
observer prescriptions on these observables.
Section~\ref{sec:lprop} introduces the formalism of light propagation and justifies
the approach we take in our analysis, providing some examples
in simple spacetimes. Section~\ref{sec:bhl} provides an approximate description
of light propagation in a BHL via a perturbative analysis.
We present the numerical results in section~\ref{sec:results} and in section~\ref{sec:dc} we comment on them.
We provide tests of the geodesic integrator, used for the first time
in this study, in the appendix.
We use geometric units $G=c=1$ everywhere.
\section{Fundamentals of light propagation}
\label{sec:lprop}
Let us start by considering a null ray emanating from a light
source $\cal{S}$ and reaching an observer $\cal{O}$: this curve can be
described as an affinely-parametrized null geodesic $\gamma(\lambda)$, with
$\cal{S}$ and $\cal{O}$ as end points corresponding to the affine parameter
values $\lambda_{\cal{S}}$ and $\lambda_{\cal{O}}$:
\begin{eqnarray}
\gamma(\lambda_{\cal{S}}) = \cal{S}\\
\gamma(\lambda_{\cal{O}}) = \cal{O}
\end{eqnarray}
The curve is described by the geodesic equation:
\begin{equation}
\label{eq:GE}
{ \nabla_p} p^a = 0
\end{equation}
where:
\begin{equation}
p^a = \frac{\ensuremath{\textrm{d}} x^a}{\ensuremath{\textrm{d}} \lambda}
\end{equation}
is the tangent vector to $\gamma$.
In order to measure distances with null rays, however, we need more
than a single geodesic: we need to consider a whole {\it beam} of
rays~\cite{Seitz:1994xf}, centred on $\gamma$, and study the evolution
of its cross-sectional area as it makes its way from $\cal{S}$ to
$\cal{O}$.
The time evolution of a beam's cross section is described by the
geodesic deviation equation (GDE). Let $\xi^a$ be the separation
vector between the fiducial geodesic $\gamma$ and a neighbouring
one, called $\tilde \gamma$. It satisfies
\begin{eqnarray}
\nabla_p \nabla_p \xi^a = R\UD{a}{bcd}\,p^b\,p^c\,\xi^d. \label{eq:GDE}
\end{eqnarray}
The GDE is a second order ODE for the 4--vector $\xi^a$, or equivalently
a first order ODE for $\xi^a$ and $\nabla_p \xi^a$.
It is valid for any neighbouring geodesic, but since in geometrical optics
we are only interested in null geodesics, we impose a restriction
on the solution $\xi^a(\lambda)$ of the form:
\begin{eqnarray}
p_a \nabla_p \xi^a = 0, \label{eq:null}
\end{eqnarray}
which ensures that $\tilde\gamma$ is null.
Note that if the equation above is satisfied at one point, then it is
automatically satisfied along the whole of $\gamma$ because of equation
(\ref{eq:GDE}).
Let us now restrict the geodesics under consideration to those which
lie on the same wavefront as $\gamma$, i.e.~for which the separation vector satisfies
\begin{eqnarray}
\xi^a\,p_a = 0. \label{eq:wavefront}
\end{eqnarray}
The condition above means that, for a given observer at a given time, the
photon corresponding to the geodesic $\gamma$ and the one corresponding to
$\tilde \gamma$ lie on the same 2-plane perpendicular to the direction of
propagation (see Figure~\ref{fig:wavefronts}). This condition is Lorentz-invariant, meaning that
if it is satisfied in one reference frame then it is valid in all frames.
Moreover, for null geodesics it propagates along $\gamma$, i.e.~if it is
satisfied at one time it is satisfied along the whole of $\gamma$. This
follows easily from (\ref{eq:null}) and (\ref{eq:GDE}).
\begin{figure}[!h]
\centering
\includegraphics[width=0.5\textwidth]{plots/wavefront.pdf}
\caption{The null geodesics lying on the same wavefront are those for which the photons, at any instant of time
and for any observer, lie on the same plane perpendicular to the direction of propagation given by $p_a$.
\label{fig:wavefronts}}
\end{figure}
The reason why we are interested only in geodesics which lie on the same
wavefront is that we want to study geodesics which cross at one point,
either the emission point ${\cal S}$ or the observation point ${\cal O}$.
If this is the case, then $\xi^a = 0$ at either $\lambda_{\cal O}$ or
$\lambda_{\cal S}$, so that (\ref{eq:wavefront}) is trivially satisfied there
and thus also \emph{everywhere} on $\gamma$.
By imposing (\ref{eq:null}) and (\ref{eq:wavefront}) we have effectively
reduced the number of degrees of freedom from four to three. It turns out
that a further reduction is possible.
Note that at every point we are free to add a vector proportional to $p^a$
to both $\xi^a$ and $\nabla_p\xi^a$. The former corresponds to using a
different point of \emph{the same} geodesic $\gamma$ in the definition of
the separation vector $\xi^a$, while the latter is just a rescaling of the
affine parametrization of $\gamma$. Neither transformation affects the
physical content of the equations, as long as we are in the regime of
geometrical optics. As a matter of fact, it is easy to see that equations
(\ref{eq:GDE})--(\ref{eq:wavefront}) are insensitive to these transformations
as well:
\begin{eqnarray}
&&\nabla_p \nabla_p \left(\xi^a + C(\lambda)\,p^a\right) = R\UD{a}{bcd}\,p^b\,p^c\,\xi^d + \ddot C\,p^a \\
&&\nabla_p \left(\xi^a + C(\lambda)\,p^a\right)\,p_a = \dot C\,p^a\,p_a = 0 \\
&&\left(\xi^a + C(\lambda)\,p^a\right)\,p_a = C\,p^a\,p_a = 0.
\end{eqnarray}
It follows that (\ref{eq:GDE})--(\ref{eq:wavefront}) can be reinterpreted as
equations on the space $p^\perp/p$, consisting of vectors orthogonal to
$p_a$ and divided by the relation $\xi^a \sim \eta^a \iff \xi^a = \eta^a + A\,p^a$.
We shall denote the equivalence class corresponding
to a vector $\xi^a$ in $p^\perp$ as $\left[\xi\right]^A$.
The space $p^\perp/p$ is two--dimensional and inherits the positive-definite
metric from $g_{ab}$ via the relation $\left[X\right]^A\,\left[Y\right]^B\,g_{AB}
= X^a\,Y^b\,g_{ab}$, where $X^a$ and $Y^b$ are any vectors in the tangent
space corresponding to the equivalence classes $[X]^A$ and $[Y]^B$, respectively.
It can be thought of as the space of null geodesics lying in the neighbourhood
of $\gamma$ on the same wavefront, without any specification of which point on $\gamma$ we assign to
which point of $\tilde \gamma$. It is straightforward to verify that the covariant derivative
$\nabla_p$ can also be defined as an operator on $p^\perp/p$.
In the standard formalism due to Sachs \cite{Sachs309, lrr-2004-9}, we then
introduce a frame with two spatial, orthonormal screen vectors $\xi_1^a$ and $\xi_2^a$,
both orthogonal to $p^a$ and to a timelike observer $u^a_{\cal O}$. Notice that
this is not strictly necessary:
all that matters in geometrical optics are the \emph{equivalence classes}
$\left[\xi_1\right]^A$ and $\left[\xi_2\right]^B$, which turn out to be entirely
$u^a_{\cal O}$\emph{-independent}. More precisely, for any other choice of the observer
$\tilde u^a_{\cal O}$ and the corresponding $\tilde \xi_1^a$ and $\tilde \xi_2^a$ perpendicular to
$p_a$, the classes $\left[\tilde \xi_1\right]^A$ and $\left[\tilde \xi_2\right]^B$ are
related to $\left[\xi_1\right]^A$ and $\left[\xi_2\right]^B$ via a simple spatial rotation.
The image distortion of a distant object and its angular distance can now be
calculated by finding the Jacobi matrix ${\ensuremath{\cal D}}\UD{A}{B}$ of the GDE in the space $p^\perp/p$
\begin{eqnarray}
\nabla_p \nabla_p {\ensuremath{\cal D}}\UD{A}{B} = R\UD{A}{\mu\nu C}\,p^\mu\,p^\nu\,{\ensuremath{\cal D}}\UD{C}{B} \label{eq:GDE2}
\end{eqnarray}
with the initial data of the form
\begin{eqnarray}
&&{\ensuremath{\cal D}}\UD{A}{B}(\lambda_{\cal O}) = 0 \label{eq:ID2} \\
&&\nabla_p {\ensuremath{\cal D}}\UD{A}{B}(\lambda_{\cal O}) = \delta\UD{A}{B} \nonumber
\end{eqnarray}
(see \cite{lrr-2004-9} for its geometric definition and the discussion of its properties). Note that the initial data depends on the choice of parametrization of the null
geodesic $\gamma$, because if we rescale $\lambda \mapsto C\,\lambda$, the tangent
vector rescales accordingly via $p^a \to C^{-1} p^a$. Thus ${\ensuremath{\cal D}}\UD{A}{B}$ is
parametrization-dependent. Nevertheless, the tensor product $p_\mu\,{\ensuremath{\cal D}}\UD{A}{B}$
is parametrization-independent and is therefore an intrinsic property of the light cone
centred at the observation point $\cal O$. In practice the equations (\ref{eq:GDE2})--(\ref{eq:ID2})
are solved by first introducing a Sachs frame and then using the corresponding
screen vectors $\left[ \xi_1\right]^A$ and $\left[ \xi_2\right]^B$ as a basis in $p^\perp/p$.
The image distortion seen by the observer with 4-velocity $u_{\cal O}^a$ at the
observation point is finally:
\begin{eqnarray}
I\UD{A}{B} = \left|u_{\cal O}^a\,p_a\right|\,{\ensuremath{\cal D}}\UD{A}{B}(\lambda_{\cal S})
\end{eqnarray}
while the angular distance is
\begin{eqnarray}
\label{eq:DA}
D_{\rm A}=\left|u_{\cal O}^a\,p_a\right|\,\sqrt{\left|\det {\ensuremath{\cal D}}\UD{A}{B}(\lambda_{\cal S})\right|}
\end{eqnarray}
(see also \cite{lrr-2004-9} and references therein). Note that the result does not depend on the
4-velocity of the source, while the dependence on the 4-velocity of the observer is quite simple.
For instance, it is easy to prove that, on an FLRW spacetime, observers boosted with respect to the
comoving frame measure smaller angular distances, because the quantity $\left|u_{\cal O}^a\,p_a\right|$
decreases as the boost parameter is increased. One can therefore use equation (\ref{eq:DA}) to work
out which observers (if any) would measure a specified angular distance for an object in a given
spacetime.
The luminosity distance is defined using the total energy flux from the source through a fixed area at the observation point. In the formalism above
it can be expressed as
\begin{eqnarray}
\label{eq:DL}
D_{\rm L}=\left|u_{\cal S}^a\,p_a\right|\,\sqrt{\left|\det \tilde{\ensuremath{\cal D}}\UD{A}{B}(\lambda_{\cal O})\right|}(1+z)
\end{eqnarray}
where $\tilde{\ensuremath{\cal D}}\UD{A}{B}$ satisfies (\ref{eq:GDE2}), but with the initial conditions (\ref{eq:ID2}) imposed at the source rather than at the observer,
and $z$ is the relative change in the photon frequency as it moves along the geodesic, also known as its {\it redshift}:
\begin{equation}
z = \frac{\nu_{\cal S}-\nu_{\cal O}}{\nu_{\cal O}} = \frac{u_{\cal S}^a\,p_a}{u_{\cal O}^a\,p_a} - 1.
\end{equation}
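As a simple sanity check of this formula (our own illustration, not part of
the derivation), in Minkowski space it reproduces the special--relativistic
Doppler shift for an observer receding from a source at rest:
\begin{verbatim}
import numpy as np

# z = (u_S . p)/(u_O . p) - 1 in Minkowski space, signature (-,+,+,+).
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
def dot(u, p):
    return u @ eta @ p

p = np.array([1.0, 1.0, 0.0, 0.0])         # null momentum along +x
u_source = np.array([1.0, 0.0, 0.0, 0.0])  # source at rest

v = 0.3                                    # observer receding along +x
gamma = 1.0 / np.sqrt(1.0 - v ** 2)
u_obs = gamma * np.array([1.0, v, 0.0, 0.0])

z = dot(u_source, p) / dot(u_obs, p) - 1.0
z_doppler = np.sqrt((1.0 + v) / (1.0 - v)) - 1.0  # textbook Doppler shift
print(z, z_doppler)                        # the two values agree
\end{verbatim}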
The fundamental result by Etherington \cite{springerlink:10.1007/s10714-007-0447-x} relates these quantities: the reciprocity relation reads
\begin{eqnarray}
\left|\det \tilde{\ensuremath{\cal D}}\UD{A}{B}(\lambda_{\cal O})\right| = \left|\det {\ensuremath{\cal D}}\UD{A}{B}(\lambda_{\cal S})\right|. \label{eq:Etherington}
\end{eqnarray}
It follows easily that
\begin{equation}
D_{\rm L} = (1+z)^2 D_{\rm A}.
\end{equation}
Relation (\ref{eq:Etherington}) allows one to calculate both distances by solving the GDE with the initial conditions (\ref{eq:ID2}) imposed either
at the source or at the observation point.
In this paper we have found it much simpler to impose the initial conditions
at the location of the source, and to
integrate the equations forward in time. Moreover, instead of solving the GDE directly, we simply use the geodesic tracker
to follow two additional null geodesics $\gamma_1(\lambda)$ and $\gamma_2(\lambda)$, slightly perturbed with respect to the principal one, which we denote by $\gamma_0(\lambda)$.
We specify the initial conditions for them at the source:
\begin{eqnarray}
x^a_1(\lambda_{\cal S}) &=& x^a_2(\lambda_{\cal S}) = x^a_0(\lambda_{\cal S}) \\
p_1^a(\lambda_{\cal S}) &=& p_0^a(\lambda_{\cal S}) + \epsilon \xi_1^a(\lambda_{\cal S}) \\
p_2^a(\lambda_{\cal S}) &=& p_0^a(\lambda_{\cal S}) + \epsilon \xi_2^a(\lambda_{\cal S})
\end{eqnarray}
where $x^a_I$ are the coordinates of geodesic $\gamma_I$ and $p^a_I$ is its 4-momentum.
We can then compute ${\ensuremath{\cal D}}\UD{A}{B}$ by using the fact that:
\begin{eqnarray}
{\ensuremath{\cal D}}\UD{A}{B}(\lambda) &=& \lim_{\epsilon \to 0} \frac{\sqrt{g(\lambda_{\cal S})}}{\epsilon} \left [
\begin{array}{ll}
g_{ab}(x^a_1-x^a_0)\,\xi_1^b \qquad & g_{ab}(x^a_2-x^a_0) \,\xi_1^b \\[0.5cm]
g_{ab}(x^a_1-x^a_0)\,\xi_2^b \qquad & g_{ab}(x^a_2-x^a_0) \,\xi_2^b
\end{array}
\right ]
\end{eqnarray}
where $g(\lambda_{\cal S})$ is the determinant of $g_{ab}$ at the geodesic initial
location.
This is the approach we take in the computations described in Section~\ref{sec:results}.
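As an illustration of this finite--difference construction (our own sketch,
with illustrative variable names), consider Minkowski space, where geodesics
are straight lines, the prefactor $\sqrt{g(\lambda_{\cal S})}$ equals one in
Cartesian coordinates, and the exact Jacobi matrix is $\lambda$ times the
identity:
\begin{verbatim}
import numpy as np

# Finite-difference estimate of D^A_B in Minkowski space; the exact
# result is D^A_B(lambda) = lambda * delta^A_B.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
def dot(a, b):
    return a @ eta @ b

eps = 1.0e-6
lam = 2.5                             # affine parameter at "observation"

p0 = np.array([1.0, 1.0, 0.0, 0.0])   # fiducial null momentum along +x
xi1 = np.array([0.0, 0.0, 1.0, 0.0])  # orthonormal screen vectors
xi2 = np.array([0.0, 0.0, 0.0, 1.0])

p1 = p0 + eps * xi1                   # perturbed momenta at the source
p2 = p0 + eps * xi2

# straight-line geodesics from a common source point at the origin
x0, x1, x2 = lam * p0, lam * p1, lam * p2

D = np.array([[dot(x1 - x0, xi1), dot(x2 - x0, xi1)],
              [dot(x1 - x0, xi2), dot(x2 - x0, xi2)]]) / eps
print(D)                              # approximately lam * identity
\end{verbatim}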
\subsection{Homogeneous cosmologies}
This formalism takes on a particularly simple form in the exactly
homogeneous and isotropic cosmological models (the FLRW class),
defined by the line element:
\begin{equation}
d s^2 = - d t^2 + a(t)^2 dl^2
\end{equation}
where $dl^2$ is the line element of one of the three three-dimensional
constant-curvature spaces of Euclidean signature.
In this case, geodesics can move along coordinate lines and be parametrized
by the coordinate time. In the flat case,
for instance, we can
choose $x$ as the geodesic direction (so that $\xi_1^a=a(t) \delta_y^a$ and
$\xi_2^a=a(t) \delta_z^a$, where $a(t)$ is the scale factor). The
matrix ${\ensuremath{\cal D}}\UD{A}{B}$ is then given by:
\begin{eqnarray}
{\ensuremath{\cal D}}\UD{A}{B}(t) &=& a_{\cal S} \left [
\begin{array}{cc}
a(t) x(t) & 0 \\
0 & a(t) x(t)
\end{array}
\right ]
\end{eqnarray}
where $x(t)$ is the coordinate distance travelled along the geodesic
at time $t$:
\begin{equation}
x(t)=\int_{t_{\cal S}}^{t} \frac{dt}{a(t)}
\end{equation}
Given the initial normalization $u_{\cal S}^a\,p_a=-a_{\cal S}^{-1}$, equation (\ref{eq:DL})
becomes:
\begin{equation}
\label{eq:flrwDL}
D_{\rm L}= a_{\cal O} (1+z) \int_{t_{\cal S}}^{t_{\cal O}} \frac{dt}{a(t)}
\end{equation}
Noticing that, in an FLRW model, the redshift
$z$ only depends on the ratio between the scale factor at the time of
detection and the scale factor at the time of emission:
\begin{equation}
z=\frac{a(t_{\cal O})}{a(t_{\cal S})} - 1,
\end{equation}
it is easy to show that equation (\ref{eq:flrwDL}) coincides with the
usual textbook expression for $D_{\rm L}$, which we quickly recall.
We first need to calculate the comoving distance covered by a photon between
${\cal S}$ and ${\cal O}$:
\begin{equation}
D_{\rm M}(z)=a_{\cal O} \int_{t_{\cal S}}^{t_{\cal O}} \frac{dt}{a(t)} = (1+z) S\left(\Omega_k,\int_0^z \frac{d\zeta}{H(\zeta)(1+\zeta)^2}\right),
\end{equation}
with
\begin{equation}
H(\zeta) = H_{\cal S}\sqrt{\Omega^{\cal S}_{\rm M}(1+\zeta)^{-3}+\Omega^{\cal S}_\Lambda+\Omega^{\cal S}_k(1+\zeta)^{-2}},
\end{equation}
and
\begin{eqnarray}
S(k,x) &=& \left\{
\begin{array}{ll}
k^{-1/2} \sin k^{1/2} x & \textrm{for } k > 0 \\
x & \textrm{for } k = 0 \\
|k|^{-1/2} \sinh |k|^{1/2} x & \textrm{for } k < 0
\end{array}
\right.
\end{eqnarray}
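As a quick numerical aside (ours), one can verify that the two curved
branches of $S$ join the flat one continuously as $k \to 0$:
\begin{verbatim}
import numpy as np

# The curvature function S(k, x): both branches approach x as k -> 0.
def S(k, x):
    if k > 0:
        return np.sin(np.sqrt(k) * x) / np.sqrt(k)
    if k < 0:
        return np.sinh(np.sqrt(-k) * x) / np.sqrt(-k)
    return x

x = 0.7
print(S(1e-8, x), S(0.0, x), S(-1e-8, x))  # all approximately 0.7
\end{verbatim}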
Notice that the reference values for all quantities are those at the source:
$a_{\cal S}$, $H_{\cal S}$, and $\Omega^{\cal S}_{\rm X}$ are the model's scale
factor, Hubble rate, and density parameters at the time the photon is emitted,
respectively. As is customary, we also define the curvature $\Omega$ parameter by:
\begin{equation}
\Omega^{\cal S}_k=1-\Omega^{\cal S}_{\rm M}-\Omega^{\cal S}_{\Lambda}.
\end{equation}
Notice that referring to the initial values of these parameters rather
than the final ones changes our expressions from the standard textbook
treatment. It is straightforward to show that the usual formulae are
recovered if one expresses all quantities at the source in terms of the
corresponding ones at the observer.
Having found an expression for $D_{\rm M}(z)$, we can use it to derive the apparent
luminosity $\ell$ of an object of intrinsic luminosity ${\cal L}$
(for details, see e.g.~\cite{Hogg:1999ad}):
\begin{equation}
\ell = \frac{\cal L}{4 \pi D_{\rm M}(z)^2 (1+z)^2}.
\end{equation}
Since the luminosity distance is defined as:
\begin{equation}
D_{\rm L} (z) = \sqrt{\frac{\cal L}{4 \pi \ell}},
\end{equation}
we finally obtain:
\begin{equation}
\label{eq:ldflrw}
D_{\rm L} (z) = D_{\rm M}(z) (1+z) = (1+z)^2 S \left( \Omega_k, \int_0^z \frac{d\zeta}{H(\zeta)(1+\zeta)^2} \right)
\end{equation}
This can be easily identified, on a flat background, with (\ref{eq:flrwDL}).
In homogeneous and isotropic cosmologies, therefore, the luminosity distance
only depends on the redshift, and is parametrized by global quantities such
as the matter density and the curvature of spatial slices.
In the Einstein-de Sitter (EdS) model, $D_{\rm L}$ simply reduces to
\begin{equation}
\label{eq:ldeds}
D_{\rm L} (z) = \frac{2 (1+z)^2}{H_{\cal S}} \left( (1+z)^{1/2}-1 \right)
\end{equation}
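The consistency of (\ref{eq:flrwDL}), (\ref{eq:ldflrw}) and (\ref{eq:ldeds})
is easy to verify numerically. The following sketch (our own, in
illustrative units with $a(t)=t^{2/3}$ and $a_{\cal S}=t_{\cal S}=1$)
evaluates all three expressions for the same emission and observation times:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Cross-check of the EdS luminosity distance: time integral (eq:flrwDL),
# redshift integral (eq:ldflrw) with Omega_M = 1, closed form (eq:ldeds).
t_S = 1.0                         # emission time; a(t) = t^(2/3), a_S = 1
H_S = 2.0 / (3.0 * t_S)           # source-frame Hubble rate
a = lambda t: t ** (2.0 / 3.0)

t_O = 8.0                         # observation time
z = a(t_O) / a(t_S) - 1.0         # here z = 3

I_t, _ = quad(lambda t: 1.0 / a(t), t_S, t_O)
D_time = a(t_O) * (1.0 + z) * I_t

# flat matter-only model: H(zeta) = H_S (1 + zeta)^(-3/2), so the
# integrand 1/(H(zeta) (1 + zeta)^2) simplifies to (1 + zeta)^(-1/2)/H_S
I_z, _ = quad(lambda zeta: (1.0 + zeta) ** (-0.5) / H_S, 0.0, z)
D_red = (1.0 + z) ** 2 * I_z

D_closed = 2.0 * (1.0 + z) ** 2 / H_S * (np.sqrt(1.0 + z) - 1.0)
print(D_time, D_red, D_closed)    # all equal (here 48)
\end{verbatim}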
\subsection{Inhomogeneous cosmologies}
The propagation of light in lumpy spacetimes has been studied since the
1960's with various approaches, starting with the EBA
proposed in~\cite{1964SvA.....8...13Z} and later
generalized in~\cite{1972ApJ174L115D,Dyer:1973zz}.
The key idea inspiring these studies is that, in cosmological models
where the matter is distributed in lumps, a large fraction of the light
beams would not contain matter, and would therefore not be affected
by the Ricci focusing characteristic of their FLRW counterparts.
Other limitations of the FLRW approximation and the related physical
effects were subsequently analysed, both in approximate scenarios and
in exact cosmological models (typically belonging to the Swiss-Cheese
family)~\cite{Fleury:2014gha,Seitz:1994xf,
KristianSachs,1967ApJ150737G,1969ApJ...155...89K,
1970ApJ...159..357R,Lamburt2005,2010PhRvD..82j3510B,
2011PhRvD..84d4011S,Nwankwo:2010mx,2012MNRAS.426.1121C,
2012JCAP...05..003B,Lavinto:2013exa,Troxel:2013kua,
Bagheri:2014gwa,
Peel:2014qaa}.
A few robust features of these studies, that do not depend on
the details of the models used, include that:
\begin{itemize}
\item Light sources appear reduced in size and dimmer in a lumpy
spacetime than in a homogeneous one with the same mean density;
\item The angular distance does not have a maximum, but keeps growing
all the way to the cosmic horizon;
\item The actual deceleration parameter $q_0$ is larger than in the case
where the same data is analysed with an FLRW model with the same mean
density.
\end{itemize}
Later, when we measure the $D_{\rm L}(z)$ relationship in BHL spacetimes,
we will use these features as guidelines for what to expect. Many of
them do indeed hold for such highly nonlinear spacetimes too.
In fact, as discussed at length in Section~\ref{sec:results}, the luminosity
distance in a BHL follows rather closely the EBA~\cite{1964SvA.....8...13Z},
which we report for completeness:
\begin{equation}
\label{eq:eb}
D_{\rm L}(z)=\frac{2(1+z)^2}{5 H_{\cal S}}\left( 1 - \frac{1}{(1+z)^{5/2}}\right).
\end{equation}
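As a quick check (ours, taking the expressions exactly as written above,
with the same source--frame $H_{\cal S}$ in both), the EBA formula reduces to
the Hubble law $D_{\rm L}\simeq z/H_{\cal S}$ at low redshift, matching the
FLRW behaviour at leading order:
\begin{verbatim}
import numpy as np

# Low-redshift limit of the EBA and EdS expressions as written above.
H_S = 1.0

def D_L_EBA(z):
    return 2.0 * (1.0 + z) ** 2 / (5.0 * H_S) * (1.0 - (1.0 + z) ** -2.5)

def D_L_EdS(z):
    return 2.0 * (1.0 + z) ** 2 / H_S * (np.sqrt(1.0 + z) - 1.0)

z = 1.0e-3
print(D_L_EBA(z), D_L_EdS(z), z / H_S)  # all nearly equal at low z
\end{verbatim}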
In Section \ref{sec:bhl} we will explain why the EBA
is a good approximation of the redshift--luminosity
distance in a BHL, and point out that it is equivalent to neglecting the
Ricci term in the standard geodesic deviation equation.
\subsection{Geodesics and observer classes}
As with many other quantities of interest that can be calculated in
inhomogeneous cosmologies, the calculation of $D_{\rm L}(z)$
requires the choice of a time coordinate. In general, representing
the spacetime in the geodesic gauge will lead to coordinate observers
which are diversely affected by neighbouring gravitational structures,
and may experience, e.g., light redshifting which has nothing to do with
a global, suitably defined expansion rate (an observational cosmologist
would call these {\it local effects}).
A study of light propagation in inhomogeneous spaces, especially one that is
targeted at the comparison with the FLRW class, is then left with two
possibilities: a statistical approach in which observers and sources are
distributed stochastically throughout the spacetime, and a single
$D_{\rm L}(z)$ relationship is obtained by averaging over
their locations and four-momenta; or the construction of one or more
classes of {\it cosmological} observers, based on geometry-inspired
considerations such as following the geodesics of the average
gravitational field, or geodesics with minimal deviation.
We find the latter approach more likely to yield insight into the
different gauge choices and related effects, and therefore use
it in the remainder of this paper. Statistical reasoning is,
however, also an important ingredient, as the observational data
is arguably to be modelled through a mix of different observer and source
states of motion. As statistical analyses are a tricky endeavour in
cosmology, we leave this task for future work.
Notice that the second strategy is particularly difficult to deploy
on vacuum spacetimes, as the sources of the gravitational field are
only perceived through their effect on the metric tensor, and not
through the presence of matter, so singling out a ``local''
component of the gravitational field will in some cases not even be
well defined (for a discussion of this point, see
e.g.~\cite{Marra:2012pj} or~\cite{lrr-2004-9} and references therein).
We will however exploit the existence of global (albeit discrete)
symmetries in our BHLs and only turn our attention to geodesics which
are by construction least affected by local effects: these include, for
instance, the geodesics running along the edges of the fundamental
periodic cell constituting the lattice.
\section{Light propagation in BHLs}
\label{sec:bhl}
In this section, we build an approximate model for the propagation of light
in a BHL, based on a perturbative expansion in the BHL compactness parameter.
This will serve as a qualitative analysis of the physics of the propagation of light and as a support in the interpretation of the numerical results
presented in section~\ref{sec:results}. Note that an expansion in a similar parameter has already appeared in the context of BHLs \cite{Bruneton:2012ru}, although the details are different.
\subsection{A perturbative expansion in the compactness parameter}
Let $L$ denote the characteristic size of a lattice cell, such as its initial geodesic length,
and let $M$ be a characteristic mass, i.e.~the total mass contained in a cell.
As in~\cite{Bentivegna:2013jta}, we can introduce the dimensionless parameter
\begin{eqnarray}
\mu = \frac{M}{L}
\end{eqnarray}
measuring the lattice compactness. If we additionally introduce the characteristic mass density
$\rho = M L^{-3}$, we can see that
\begin{eqnarray}
\mu = \rho L^2,
\end{eqnarray}
i.e.~it goes down to zero as we decrease the size of
a cell keeping the mass density of the corresponding FLRW model fixed. Note that $\rho$ is related to the curvature
scale of the Friedmann model via $R=\rho^{-1/2}$, so $\mu$ can be reinterpreted as the separation of scales between the
size of an individual lattice cell and the radius of curvature of the FLRW model:
\begin{eqnarray}
\mu = \frac{L^2}{R^2}.
\end{eqnarray}
Note that the definition of $\mu$ involves a certain vagueness: we may take for the mass scale $M$
the ADM mass of the black hole measured at the other end of the Einstein-Rosen bridge, but also some other related parameter.
Also the choice of the length scale involves a certain arbitrariness. At the leading order we expect this ambiguity to be irrelevant.
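For concreteness (a numerical aside of ours), the three equivalent
expressions for $\mu$ can be checked against one another using the base
configuration adopted later in section~\ref{sec:results}, $M=1$ and $L=10$
in geometric units:
\begin{verbatim}
# Consistency of the three expressions for the compactness parameter mu.
M, L = 1.0, 10.0
rho = M / L ** 3          # characteristic mass density
R = rho ** -0.5           # FLRW curvature scale, R = rho^(-1/2)
print(M / L, rho * L ** 2, L ** 2 / R ** 2)   # all equal 0.1
\end{verbatim}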
We will now show how $\mu$ can be used to find a perturbative approximation for the metric tensor of the lattice model.
The approximation is different from the standard perturbative approximation on an FLRW background, in the sense that it
does not require the density contrast $\delta$ of the matter perturbation to be small. Obviously the problem of BHLs lies beyond the validity regime of the cosmological perturbation theory, because in a BHL we are
dealing with $\delta = -1$ everywhere.
We begin by introducing a coordinate system on a single cell. Let $\ensuremath{g^{(0)}}$ denote the background FLRW metric and $x^\mu$ be the Riemannian normal coordinate system around any point $P$. The metric takes the form of
\begin{eqnarray}
\ensuremath{g^{(0)}}_{\mu\nu} = \eta_{\mu\nu} - \frac{1}{3} R_{\mu\alpha\nu\beta}\Big|_P\,x^\alpha\,x^\beta + O(x^3).
\end{eqnarray}
Since $R$ is the curvature scale of the metric, the coefficients in the expansion above are of order $R^0$ (the flat metric), $R^{-2}$
(the Riemann tensor),
$R^{-3}$ (the next term involving $\nabla_\sigma R_{\mu\alpha\nu\beta}$), and so on. The Taylor expansion
in the Riemannian normal coordinates becomes thus the expansion in negative powers of $R$.
We now introduce the rescaled coordinates $\tilde x^\mu = L^{-1}\,x^\mu$ and the
rescaled Riemann tensor at point $P$ in coordinates $x^\mu$:
\begin{eqnarray}
r_{\mu\alpha\nu\beta} = R^2\,R_{\mu\alpha\nu\beta}\Big|_P.
\end{eqnarray}
Both $\tilde x$ and $r_{\mu\alpha\nu\beta}$ are $O(1)$ in the expansion in $R$, at least within a single lattice cell
around $P$. The metric $\ensuremath{g^{(0)}}$ can be expressed in the new coordinates. The
metric tensor components in those coordinates will be denoted by $\ensuremath{{\tilde g}^{(0)}}_{\mu\nu}$,
i.e.
\begin{eqnarray}
\ensuremath{g^{(0)}} = \ensuremath{g^{(0)}}_{\mu\nu} \,\ensuremath{\textrm{d}} x^\mu\otimes\ensuremath{\textrm{d}} x^\nu =
\ensuremath{{\tilde g}^{(0)}}_{\mu\nu}\,\ensuremath{\textrm{d}} \tilde x^\mu\otimes\ensuremath{\textrm{d}} \tilde x^\nu.
\end{eqnarray}
Its expansion in $\tilde x^\mu$ takes the form of
\begin{eqnarray}
\ensuremath{{\tilde g}^{(0)}}_{\mu\nu} = L^2\left(\eta_{\mu\nu} - \frac{\mu}{3} r_{\mu\alpha\nu\beta}\,\tilde x^\alpha\,\tilde x^\beta + O(\tilde x^3 L^3)\right).
\end{eqnarray}
The first of the remaining higher-order terms $\nabla_\sigma R_{\mu\alpha\nu\beta} \,L^{3}\,\tilde x^\sigma\,
\tilde x^\alpha\,\tilde x^\beta$ contains the covariant derivative $\nabla_\sigma R_{\mu\alpha\nu\beta}\Big|_P$, which is
$O(R^{-3})$ as we noted before. Therefore the whole term in question can be re-expressed as $r_{\sigma\mu\alpha\nu\beta}\,\tilde x^\sigma\,
\tilde x^\alpha\,\tilde x^\beta\,\frac{L^3}{R^3}$, where we have defined by analogy
the rescaled derivative of the curvature $r_{\sigma\mu\alpha\nu\beta} = R^{3}\,\nabla_\sigma R_{\mu\alpha\nu\beta}$, which again is $O(1)$ in $R$. We see that the whole term turns out to be $O(\mu^{3/2})$. Similar reasoning can be applied to all higher terms, yielding
higher powers of the dimensionless parameter $\mu$.
We thus see that
\begin{eqnarray}
\ensuremath{{\tilde g}^{(0)}}_{\mu\nu} = L^2\left(\eta_{\mu\nu} - \frac{\mu}{3} r_{\mu\alpha\nu\beta}\,\tilde x^\alpha\,\tilde x^\beta + O(\mu^{3/2} )\right),
\end{eqnarray}
i.e. in the rescaled coordinates the expansion in the negative powers of $R$ turns in a natural way into an expansion in powers of $\mu$,
valid in a region of size $L$ around $P$.
We can explain the physical meaning of the expansion above in the following way: if the background metric $\ensuremath{g^{(0)}}$ has
the curvature scale of $R$, then in an appropriately picked, quasi-Cartesian coordinate system $x^\mu$ it has the Taylor expansion in which
the terms are of increasing order in $R^{-1}$. If we then pick a domain of size $L$, then the metric in this domain, again in
appropriate coordinates,
has the form of the flat metric plus perturbations from the curvature and its derivatives. A simple way to obtain a perturbation of this kind is to use the Taylor expansion we mentioned before and rescale the coordinates by $L$, which yields an expansion in
powers of $\mu^{1/2}$.
Now we can add the perturbation due to the discrete matter content. We assume the full metric to be
\begin{eqnarray}
\tilde g_{\mu\nu} = L^2\left(\eta_{\mu\nu} - \frac{\mu}{3} r_{\mu\alpha\nu\beta}\,\tilde x^\alpha\,\tilde x^\beta +
\mu\,h_{\mu\nu}\left(\tilde x^\alpha\right) + O(\mu^{3/2} )\right) \label{eq:fullg}
\end{eqnarray}
with the perturbation $h_{\mu\nu}\left(\tilde x^\alpha\right)$ of order $O(1)$ in $\mu$. Note that
the dependence on $\tilde x^\mu$ means that the characteristic physical size of the perturbation is the size of a cell, i.e.~$L$.
The Einstein tensor of the metric above is
\begin{eqnarray}
G_{\mu\nu}\left[\tilde g_{\alpha\beta}\right] = G_{\mu\nu}\left[\ensuremath{{\tilde g}^{(0)}}_{\alpha\beta}\right] +
\mu\,G'_{\mu\nu}\left[h_{\alpha\beta}\right]\left(\tilde x^\alpha\right) + O(\mu^{3/2}),
\end{eqnarray}
where $G'_{\mu\nu}[\cdot]$ is the linearisation of the Einstein tensor around a flat metric $\eta_{\mu\nu}$. In particular,
in the harmonic gauge it is simply $-\frac{1}{2}\Box h_{\alpha\beta}$.
We now return to the original, unrescaled
coordinate system, where this equation takes the form of
\begin{eqnarray}
G_{\mu\nu}\left[g_{\alpha\beta}\right] = G_{\mu\nu}\left[\ensuremath{g^{(0)}}_{\alpha\beta}\right]+
\rho\,G'_{\mu\nu}\left[h_{\alpha\beta}\right]\left(x^\alpha / L\right) + O(\mu^{3/2}),
\end{eqnarray}
i.e.~the perturbation of the Einstein tensor is $O(\rho)$, just like the Einstein tensor of the FLRW metric.
It means that this approximation works even if the density perturbation is of the order of the background energy density.
We may therefore use $h_{\mu\nu}$ to cancel the stress-energy tensor of the underlying FLRW metric everywhere except
on a single worldline.
Recall that $G_{\mu\nu}\left[\ensuremath{g^{(0)}}_{\alpha\beta}\right] = 8\pi G \rho \, u_{\mu} u_{\nu}$, where $u^{\mu} = (1,0,0,0)$ is the cosmic fluid 4-velocity.
We impose the linear PDE on the metric perturbation:
\begin{eqnarray}
G'_{\mu\nu}\left[h_{\alpha\beta}\right] = 8\pi G \left(-1 + C\delta^{(3)}(x^\alpha)\right) u_{\mu}\,u_{\nu}
\end{eqnarray}
with periodic boundary conditions and with the constant $C$ chosen so that the RHS integrates out to zero
over one cell. The solution can be obtained using Appell's $\zeta$ function~\cite{Steiner:2016tta}.
It diverges at the centre, where the approximation fails, but near the cell's boundary it is likely to work well.
The resulting approximate metric is vacuum everywhere and periodic.
\subsection{The continuum limit}
Let us now consider the metric (\ref{eq:fullg}) along with its Christoffel symbols and Riemann tensor. It is straightforward
to see that
\begin{eqnarray}
\tilde g_{\mu\nu} &=& \ensuremath{{\tilde g}^{(0)}}_{\mu\nu} + L^2\,\mu\,h_{\mu\nu}(\tilde x^\rho) \\
\Gamma\UD{\alpha}{\beta\gamma} \left[\tilde g_{\kappa\lambda}\right]&=&
\Gamma\UD{\alpha}{\beta\gamma} \left[\ensuremath{{\tilde g}^{(0)}}_{\kappa\lambda}\right] +
\mu\,{\Gamma'}\UD{\alpha}{\beta\gamma}\left[h_{\kappa\lambda}\right](\tilde x^\rho) \\
R\UD{\alpha}{\beta\gamma\delta} \left[\tilde g_{\kappa\lambda}\right] &=&
R\UD{\alpha}{\beta\gamma\delta} \left[\ensuremath{{\tilde g}^{(0)}}_{\kappa\lambda}\right] +
\mu\, {R'}\UD{\alpha}{\beta\gamma\delta} \left[h_{\kappa\lambda}\right](\tilde x^\rho).
\end{eqnarray}
We can now go back to the original, unrescaled coordinates and obtain
\begin{eqnarray}
g_{\mu\nu} &=& \ensuremath{g^{(0)}}_{\mu\nu} + \mu\,h_{\mu\nu} \left(x^\rho/L\right) \label{eq:gpert-original}\\
\Gamma\UD{\alpha}{\beta\gamma} \left[g_{\kappa\lambda}\right]&=&
\Gamma\UD{\alpha}{\beta\gamma} \left[\ensuremath{g^{(0)}}_{\kappa\lambda}\right] +
\mu^{1/2}\,\rho^{1/2}\,{\Gamma'}\UD{\alpha}{\beta\gamma}\left[h_{\kappa\lambda}\right] \left(x^\rho/L\right) \label{eq:gammapert-original}\\
R\UD{\alpha}{\beta\gamma\delta} \left[g_{\kappa\lambda}\right] &=&
R\UD{\alpha}{\beta\gamma\delta} \left[\ensuremath{g^{(0)}}_{\kappa\lambda}\right] +
\rho\, {R'}\UD{\alpha}{\beta\gamma\delta} \left[h_{\kappa\lambda}\right] \left(x^\rho/L\right)\label{eq:Riemannpert-original}
\end{eqnarray}
plus higher order terms in $\mu$. Consider now the limit $\mu \to 0$, i.e.~where the size of the perturbations decreases in comparison
to the curvature scale of the background FLRW model, or the limit where the compactness $M/L$ vanishes.
Obviously we see that the metric tensor and the Christoffel symbols converge to the FLRW values in this case, while
the curvature does not. This is due to the fact that the metric $g_{\mu\nu}$ is that of a vacuum spacetime for all positive $\mu$,
while the FLRW one is not.
This is a key observation in the study of the optical properties of a BHL, which are determined by the GDE
and are therefore sensitive to the form of the Riemann tensor.
To illustrate this point, consider first a null geodesic. It follows from equations (\ref{eq:gpert-original})--(\ref{eq:Riemannpert-original}) above that its equation has the form of a perturbed FLRW geodesic
\begin{eqnarray}
x^\mu(\lambda) = \tilde x^\mu(\lambda) + \mu^{1/2}\,\delta x^\mu(\lambda).
\end{eqnarray}
where the tilde denotes the FLRW solution without the inhomogeneities.
The parallel transport of a frame along the geodesic has a similar expansion in $\mu$:
\begin{eqnarray}
e\DU{a}{\mu}(\lambda) = \tilde e\DU{a}{\mu}(\lambda) + \mu^{1/2}\,\delta e\DU{a}{\mu}(\lambda).
\end{eqnarray}
We can now rewrite the GDE in the parallel-propagated frame along the geodesic
\begin{eqnarray}
\frac{\ensuremath{\textrm{d}}^2 X^a}{\ensuremath{\textrm{d}}\lambda^2} &=& \left(R\UD{a}{bcd} \left[\ensuremath{g^{(0)}}_{\kappa\lambda}\right] +
\rho\, {R'}\UD{a}{bcd} \left[h_{\kappa\lambda}\right]\right) p^b\,p^c + O(\mu^{1/2}).
\end{eqnarray}
We see that, already at the leading order $O(1)$ in $\mu$, we must take into account the full physical Riemann tensor instead of the simple FLRW one. In particular, since the BHLs are vacuum spacetimes,
we need to solve the Ricci-free GDE and possibly take into account the non-vanishing Weyl tensor along the way in order to calculate the angular and luminosity distance.
Neglecting the Ricci tensor in the GDE is equivalent to the EBA (for a discussion of this point, see e.g.~\cite{Fleury:2014gha}). We may thus expect the redshift--luminosity
relations for BHLs in the continuum limit to be close to the EBA.\footnote{We neglect here the finite-beam-size
effects which would become large when $\mu$ becomes very small: the beam may at some point become
wide enough to encompass a large number of
black holes. In this situation the interaction between the beam and the black holes becomes quite complicated as we
cannot use the GDE approximation any more.}
At order $O(\mu^{1/2})$ we may expect additional corrections to $D_{\rm L}$ and $D_{\rm A}$ due to higher-order contributions to the geodesic equation as
well as to the GDE. Additionally, at this order we need to take into account the impact of the inhomogeneities on the freely falling observers. In this work, we will not
concern ourselves with a quantitative analysis of these effects, but we will signal their appearance to the reader when appropriate.
\section{Results}
\label{sec:results}
In order to compute the relationship between redshift and luminosity
distance on the spacetime of an expanding BHL, we carry out
the numerical integration of the geodesic equation (with null
tangent), along with the integration of Einstein's equation
required to obtain the metric tensor. The latter operation is
performed by a code generated with the Einstein Toolkit, based on
the \texttt{Cactus}~\cite{cactus} software framework along with modules
such as \texttt{Carpet}~\cite{carpet,carpetweb}, \texttt{McLachlan}~\cite{mclachlan,kranc},
and \texttt{CT\_MultiLevel}~\cite{Bentivegna:2013xna},
as already presented in~\cite{Bentivegna:2012ei,Bentivegna:2013jta,Korzynski:2015isa}.
The geodesic integrator, on the other hand, is a new \texttt{Cactus}
module that we have written. It implements a 3+1 decomposition of the geodesic
equation in the form given in \cite{Hughes:1994ea} and we have verified it
against several exact solutions, as reported in Appendix~\ref{app:geo}.
\subsection{Initial data and evolution}
As in~\cite{Yoo:2012jz,Bentivegna:2013jta}
we first construct an initial-data configuration by solving the
Hamiltonian and momentum constraints on the cube $[-L/2,L/2]^3$ with
periodic boundary conditions. In particular, we choose free data
corresponding to conformal flatness:
\begin{equation}
\gamma_{ij} = \psi^4 \delta_{ij}
\end{equation}
and set the trace of the extrinsic curvature to
zero around the origin and to a negative constant $K_c$ near the boundaries,
with a transition region starting at a distance $l$ from the origin:
\begin{eqnarray}
K_{ij} &=& \frac{1}{3} K_{\rm c} T(r) \gamma_{ij} + \psi^{-2} \tilde A_{ij}\\
T(r) &=& \left\{
\begin{array}{ll}
0 & \textrm{for } 0 \le r \le l \\
\left(\frac{(r-l-\sigma)^6}{\sigma^6}-1\right)^6&\textrm{for } l \le r \le l + \sigma \\
1 & \textrm{for } l + \sigma \le r
\end{array}
\right.
\end{eqnarray}
where we choose $l=0.05 L$ and $\sigma=0.4 L$.
We represent the traceless part of the extrinsic curvature as:
\begin{equation}
\tilde A_{ij} = \tilde D_i X_j + \tilde D_j X_i - \frac{2}{3} \tilde \gamma_{ij} \tilde D_k X^k
\end{equation}
and the conformal factor as:
\begin{equation}
\psi = \psi_{\rm r} + \frac{M}{2r} (1-T(r)),
\end{equation}
where $M$ is the bare mass of the central black hole, and solve the constraints for $\psi_{\rm r}$ and $X^i$.
For our basic configuration, we use $L=10$ and $M=1$ as in~\cite{Bentivegna:2013jta}.
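For concreteness, the free data above can be prototyped in a few lines. The following Python sketch (an illustration only, separate from the production code generated with the Einstein Toolkit; the function names are ours) evaluates the transition function $T(r)$ and the conformal-factor ansatz, with $\psi_{\rm r}$ left as a placeholder since it is determined by solving the Hamiltonian constraint:
\begin{verbatim}
import numpy as np

L_cell = 10.0                                  # cell edge (basic configuration)
l, sigma, M = 0.05 * L_cell, 0.4 * L_cell, 1.0

def T(r):
    # Transition function: 0 for r <= l, 1 for r >= l + sigma.
    r = np.asarray(r, dtype=float)
    out = np.where(r <= l, 0.0, 1.0)
    mid = (r > l) & (r < l + sigma)
    ramp = ((r - l - sigma) ** 6 / sigma ** 6 - 1.0) ** 6
    return np.where(mid, ramp, out)

def psi(r, psi_r=1.0):
    # Conformal-factor ansatz (for r > 0); psi_r stands in for the field
    # obtained by solving the Hamiltonian constraint.
    return psi_r + M / (2.0 * r) * (1.0 - T(r))
\end{verbatim}
One can verify that $T(l)=0$ and $T(l+\sigma)=1$, so the trace of the extrinsic curvature interpolates smoothly between zero around the origin and $K_{\rm c}$ near the boundaries.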
We then proceed to the time evolution of $\gamma_{ij}$ and $K_{ij}$ using a
variant of the BSSN formulation, implemented in the \texttt{McLachlan} module,
and to the concurrent integration of the geodesic equation (\ref{eq:GE}).
\subsection{Computation of geodesics}
In order to compute geodesics in a 3+1 numerical spacetime, we first
perform a 3+1 decomposition of the geodesic equation (\ref{eq:GE}),
\begin{equation}
{ \nabla_p} p^a = 0.
\end{equation}
We decompose the geodesic tangent vector $p^a$ into its components along and orthogonal to the
unit hypersurface normal $n^a$, which we call $\sigma$ and $q^a$, respectively: $p^a = \sigma n^a + q^a$. The vector $q^a$ is spatial, i.e.~$q^an_a=0$, and $\sigma = -n_a p^a$. We use an
affine parametrisation, and $p^a$ is normalized as
\begin{eqnarray}
p^a p_a = \kappa,
\end{eqnarray}
with $\kappa = 0$ for null geodesics.
The spatial coordinates, covariant components of the tangent vector and affine parameter of the geodesic, ($x^i$, $q_i$, $\lambda$) satisfy
\begin{eqnarray}
\frac{dx^i}{dt} &=& -\beta^i + (p^0)^{-1} \gamma^{ik} q_k, \label{eqn:geo3+1x} \\
\frac{dq_i}{dt} &=& -p^0 \alpha \alpha_{,i} + q^j \beta^k_{,i} \gamma_{kj} - \frac{1}{2} (p^0)^{-1} q_l q_m \gamma^{lm}_{,i}, \label{eqn:geo3+1q} \\
\frac{d\lambda}{dt} &=& (p^0)^{-1} \label{eqn:geo3+1lambda}
\end{eqnarray}
where
\begin{eqnarray}
p^0 &=& \frac{(q_k q_j \gamma^{kj} - \kappa)^{1/2}}{\alpha}
\end{eqnarray}
is the time component of $p$ in the foliation-adapted coordinate
basis. Note that the derivative is with respect to coordinate time $t$, not the affine parameter $\lambda$. These equations are the same as those given in \cite{Hughes:1994ea}, and a derivation is outlined in Appendix~\ref{app:geo3+1}.
Given $(x^i, q_i, \lambda)$ at a time $t$,
equations (\ref{eqn:geo3+1x})--(\ref{eqn:geo3+1lambda}) determine their
evolution along a single geodesic.
The right hand sides of
eqs. (\ref{eqn:geo3+1x})--(\ref{eqn:geo3+1lambda}) are computed by
interpolating the metric quantities $\beta^i$, $\gamma_{ij}$, $\alpha$
from the evolution grid to the point $x^i(t)$ using fourth-order Lagrange
interpolation, and $(x^i(t), q_i(t), \lambda(t))$ is integrated using
a fourth-order Runge-Kutta method using the Cactus \code{MoL} component.
Additionally, the metric and various other quantities of interest are
interpolated to $x^i$, and all quantities are output as curves
parametrised by $t$ for use in any subsequent analysis once the
simulation is complete.
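To illustrate the scheme, here is a minimal Python sketch of a single integration step: it evaluates the right-hand sides of equations (\ref{eqn:geo3+1x})--(\ref{eqn:geo3+1lambda}) and advances the state $(x^i, q_i, \lambda)$ with a classical fourth-order Runge-Kutta step. The object \code{geom} is hypothetical: it stands in for the interpolation layer, supplying $\alpha$, $\beta^i$, $\gamma^{ij}$ and their spatial derivatives at a point.
\begin{verbatim}
import numpy as np

def rhs(state, geom, kappa=0.0):
    # state = (x^i, q_i, lambda); geom supplies interpolated metric data:
    #   geom.alpha(x) -> scalar,  geom.dalpha(x) -> (3,)   [alpha_{,i}]
    #   geom.beta(x)  -> (3,),    geom.dbeta(x)  -> (3,3)  [beta^k_{,i}]
    #   geom.ginv(x)  -> (3,3),   geom.dginv(x)  -> (3,3,3)[gamma^{lm}_{,i}]
    x, q = state[:3], state[3:6]
    alpha, dalpha = geom.alpha(x), geom.dalpha(x)
    beta, dbeta = geom.beta(x), geom.dbeta(x)
    ginv, dginv = geom.ginv(x), geom.dginv(x)
    p0 = np.sqrt(q @ ginv @ q - kappa) / alpha     # time component of p
    dx = -beta + (ginv @ q) / p0                   # dx^i/dt
    dq = (-p0 * alpha * dalpha                     # dq_i/dt
          + np.einsum('k,ki->i', q, dbeta)         # q_k beta^k_{,i}
          - 0.5 / p0 * np.einsum('l,m,lmi->i', q, q, dginv))
    return np.concatenate([dx, dq, [1.0 / p0]])    # last entry: dlambda/dt

def rk4_step(state, dt, geom):
    # Classical fourth-order Runge-Kutta step in coordinate time t.
    k1 = rhs(state, geom)
    k2 = rhs(state + 0.5 * dt * k1, geom)
    k3 = rhs(state + 0.5 * dt * k2, geom)
    k4 = rhs(state + dt * k3, geom)
    return state + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
\end{verbatim}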
We implement the above prescription in two new Cactus components
\code{Geodesic} and \code{ParticleUtils}. The former contains the
equations themselves, and the latter provides library-type
functionality for integrating systems of equations along curves.
A few validation tests are provided in Appendix~\ref{app:geo}.
We now face the crucial task of selecting which geodesics to track.
Let us notice that, in a space tiled with periodic cells, symmetry
implies that an obvious class of cosmological observers is formed
by observers sitting at the cell vertices. Due to the symmetry, these observers
do not exhibit any proper motion on top of the cosmic expansion, and the ratio of the
proper distances between arbitrary pairs of such observers is constant at all times.
For this study, we construct and analyse two geodesics from this class
(which we will denote $A$ and $B$), starting at
the vertex $(-L/2,-L/2,-L/2)$, with initial tangents equal to $p_a^A=(p_0^A,1,0,0)$ and
$p_a^B=(p_0^B,1/\sqrt{2},1/\sqrt{2},0)$ respectively. $p_0^A = - \alpha \sqrt{\gamma^{xx}}|_A$
and $p_0^B = - \alpha \sqrt{(\gamma^{xx}+\gamma^{yy}+2\gamma^{xy})/2}|_B$ are
chosen by the geodesic integrator to ensure that the geodesics
are null. The two geodesics are plotted in Figure~\ref{fig:z}.
In order to measure the luminosity distance along geodesics $A$ and $B$, we
evolve two further pairs of geodesics, with spatial directions given by:
\begin{eqnarray}
(1,\epsilon,0) \\
(1,0,\epsilon)
\end{eqnarray}
and
\begin{eqnarray}
\left(\frac{1-\epsilon}{\sqrt{2}},\frac{1+\epsilon}{\sqrt{2}},0 \right) \\
\left(\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}},\epsilon\right)
\end{eqnarray}
with $\epsilon=10^{-3}$, representative of two narrow beams close to each
original geodesic. We can then construct the redshift and luminosity distance
along the two beams.
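The beam cross-section can then be reconstructed from the stored trajectories. A minimal sketch (assuming all three geodesics are output at matching coordinate times, and neglecting the projection orthogonal to the beam, which suffices for an illustration) computes the proper area spanned by the two coordinate deviation vectors; up to redshift factors, fixed by the Etherington reciprocity relation and omitted here, the area distance scales as $\sqrt{\rm area}/\epsilon$:
\begin{verbatim}
import numpy as np

def beam_area(x_A, x_1, x_2, gamma):
    # x_A, x_1, x_2: positions of the central and auxiliary geodesics at
    # the same coordinate time; gamma: 3-metric interpolated to x_A.
    d1, d2 = x_1 - x_A, x_2 - x_A          # coordinate deviation vectors
    g11 = d1 @ gamma @ d1
    g22 = d2 @ gamma @ d2
    g12 = d1 @ gamma @ d2
    return np.sqrt(g11 * g22 - g12 ** 2)   # proper area of the spanned element

eps = 1.0e-3
# Area (angular diameter) distance, up to redshift factors:
#   D ~ np.sqrt(beam_area(x_A, x_1, x_2, gamma)) / eps
\end{verbatim}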
Again, we emphasize that, since we keep the source parameters fixed
and observe the time evolution of each geodesic, this setup differs from
(but is essentially equivalent to) the one usually adopted in cosmology, where
the observer is fixed and sources with different parameters are considered.
As in~\cite{Bentivegna:2013jta},
we run this configuration on a uniform grid with three different resolutions
(corresponding to $160$, $256$, and $320$ points per side) in order to estimate
the numerical error.
All results presented below are convergent to first order, consistent with the
convergence order reported for the geometric variables in~\cite{Bentivegna:2013jta}.
All curves represent the Richardson extrapolation, at this order,
of the numerical data.
The corresponding truncation error (when visible) is indicated by a shaded region
around each curve.
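For completeness, first-order Richardson extrapolation reduces to an elementary formula; here is a sketch for a pair of resolutions, assuming the error scales as the grid spacing $h \propto 1/N$:
\begin{verbatim}
import numpy as np

def richardson_first_order(f_coarse, f_fine, N_coarse, N_fine):
    # Assume f_N = f_exact + C/N + O(1/N^2) and eliminate C.
    h_c, h_f = 1.0 / N_coarse, 1.0 / N_fine
    f_exact = (h_c * f_fine - h_f * f_coarse) / (h_c - h_f)
    err = np.abs(f_exact - f_fine)   # truncation-error estimate, fine grid
    return f_exact, err

# e.g. richardson_first_order(f_256, f_320, 256, 320)
\end{verbatim}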
\subsection{Small-redshift behaviour}
For small distances $d$ from the source, we expect the photon redshift and
luminosity distance to behave respectively like
\begin{eqnarray}
z(d) &\sim& H_{\cal S} d \\
D_{\rm L}(d) &\sim& d
\end{eqnarray}
where $H_{\cal S}$ is related to
the first time derivative of the local volume element at the source location:
\begin{eqnarray}
\label{eq:Hi}
H_{\cal S} = \left . \frac{{\rm tr}(K_{ij})}{3} \right |_{\cal S}
\end{eqnarray}
(see \cite{KristianSachs}). Figure~\ref{fig:z} shows that this expectation
is confirmed by our computation.
For large $d$, however, both quantities grow faster than linearly.
Furthermore, the redshift clearly exhibits a non-monotonic behaviour engendered
by the inhomogeneous gravitational field. This is easy to explain as a small, periodic redshift
due to the photons climbing a potential hill near the vertices (away from the nearest black holes) and falling
into wells near the edge or diagonal midpoints (closer to the black holes).
Naturally, the two geodesics are
affected in different ways as they trace different paths through the gravitational
field.
\begin{figure}[!h]
\begin{center}
\centering
\begin{minipage}[b]{0.6\linewidth}
\includegraphics[width=1.0\textwidth]{plots/geoAB.pdf}
\includegraphics[width=1.2\textwidth]{plots/zd.pdf}
\includegraphics[width=1.2\textwidth]{plots/DLd.pdf}
\end{minipage}
\caption{Top: the paths of geodesics $A$ and $B$ in one of the BHL cells. The
geodesics run close to the cell edge and diagonal, respectively, at all times. Middle: photon redshift
as a function of the coordinate distance from the source. Bottom: luminosity
distance as a function of the coordinate distance from the source.
The error bars are indicated by shaded regions (when not visible, they are
included in the width of the curves).
\label{fig:z}}
\end{center}
\end{figure}
\subsection{Luminosity distance}
Due to numerical error,
the geodesics deviate from the cell edge and face
diagonal during the evolution, but remain quite close to them (the coordinate separation is less than $0.01\%$ after
three cell crossings, in both cases).
We can compare the $D_{\rm L}(z)$ relationship for geodesics $A$
and $B$ to the same quantity calculated according to four reference models:
\begin{enumerate}
\item The EdS model (equation (\ref{eq:ldeds}));
\item An FLRW model (equation (\ref{eq:ldflrw}))
with $\Omega_M=0.3$ and $\Omega_\Lambda=0.7$ (henceforth denoted $\Lambda$CDM);
\item The Milne model~\cite{MILNE01011934}, where redshift and luminosity distance are related by:
\begin{equation}
D_{\rm L}(z)=\frac{1}{H_{\cal S}} \frac{z}{(1+z)^2}\left(1+\frac{z}{2}\right);
\end{equation}
\item The estimate of $D_{\rm L}(z)$ via the EBA,
equation (\ref{eq:eb}).
\end{enumerate}
All models are fitted according to two prescriptions:
the initial scale factor $a_{\cal S}$ is always set according to
\begin{equation}
a_{\cal S} = {\rm det}(\gamma_{ij})^{1/6}|_{\cal S},
\end{equation}
while
the initial expansion rate $H_{\cal S}$ is set to either
(i)
the initial time derivative
of the proper length of the domain edge (say, the one between $(-L/2,0,0)$ and
$(L/2,0,0)$), which we call a {\it global fit}, and is the same procedure as in~\cite{Bentivegna:2013jta};
or (ii)
equation (\ref{eq:Hi}) (which we call a {\it local fit}).
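Both fitting inputs are read off the initial data at the source location; a minimal sketch, assuming $\gamma_{ij}$ and $K_{ij}$ have been interpolated there (with the sign convention for $K_{ij}$ as in the text), is:
\begin{verbatim}
import numpy as np

def local_fit(gamma, K):
    # gamma, K: 3x3 arrays at the source location.
    a_S = np.linalg.det(gamma) ** (1.0 / 6.0)        # initial scale factor
    H_S = np.trace(np.linalg.inv(gamma) @ K) / 3.0   # tr(K_ij)/3
    return a_S, H_S
\end{verbatim}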
\begin{figure}[!h]
\begin{center}
\centering
\includegraphics[width=0.7\textwidth]{plots/DL.pdf}
\includegraphics[width=0.5\textwidth]{plots/dDLAglob.pdf} \qquad \qquad \qquad \qquad \\
\includegraphics[width=0.5\textwidth]{plots/dDLBglob.pdf} \qquad \qquad \qquad \qquad \\
\includegraphics[width=0.5\textwidth]{plots/dDLAloc.pdf} \qquad \qquad \qquad \qquad \\
\includegraphics[width=0.5\textwidth]{plots/dDLBloc.pdf} \qquad \qquad \qquad \qquad \\
\caption{Luminosity distance as a function of redshift for geodesics $A$ and $B$ (top plot).
The same relationships in the EdS model, in the $\Lambda$CDM (i.e., FLRW with $\Omega_\Lambda=0.7$
and $\Omega_M=0.3$) model, in the Milne model and in the
EBA are also plotted. The four models are fitted according to the procedure
described in~\cite{Bentivegna:2013jta}, using the global expansion rate
computed from the first time derivative of the edge proper length. The relative difference
between the four models and the BHL $D_{\rm L}$ is plotted in the second and third panel.
The fourth and fifth panel illustrate the result of the same procedure, where the
four models have been fitted using the local expansion rate (\ref{eq:Hi}) instead.
On all plots, the dashed vertical lines mark the points where the geodesics
cross over the periodic boundary.
The error bars are indicated by shaded regions (when not visible, they are
included in the width of the curves or of the data points).
\label{fig:DL}}
\end{center}
\end{figure}
Figure~\ref{fig:DL} shows all the resulting curves.
We first recall that the expansion of the BHL, measured by the proper
distance of one of its cell edges, could be fitted quite well by an
EdS model with the same initial expansion, as shown in~\cite{Bentivegna:2013jta}.
The two models, however, exhibit markedly
different optical properties. For geodesic $A$, the relative difference
reaches $60\%$ by redshift $z=6$. This is not surprising: the conditions
under which these light rays
propagate in a BHL and in an EdS model are substantially different.
In the former case, for instance, null geodesics infinitesimally
close to $A$ or $B$ accelerate away from, rather than towards, them.
We notice that the EBA provides the best estimate
for $D_{\rm L}(z)$ in a BHL. We conjecture that this result is due to the
fact that this approximation can capture both the large-scale geometrical
properties of a non-empty universe and the small-scale behaviour of light
rays in vacuum. None of the other models satisfies both these conditions.
Note also that, for longer times, the EBA works better for geodesic $A$
(along the edge) than for geodesic $B$ (along the face diagonal). This is
easy to explain if we notice that, because of the 4-fold discrete rotational
symmetry around the edge, there are no
Weyl focusing effects on $A$ and therefore the GDE with the Ricci tensor
neglected and no Weyl contribution is likely
to be a good approximation for the propagation of the neighbouring light rays. On the other hand, along the face diagonal we may expect
non-vanishing Weyl lensing around the midpoint area due to the tidal distortion of the rays. Such an effect is not taken into account in the EBA.
\subsection{Fitting the FLRW class}
It is tempting to consider an FLRW cosmology with the same matter content and
initial expansion as the reference EdS, plus an additional stress-energy
contribution coming from a cosmological constant, and attempt to tune its value
to reproduce the luminosity distance in the BHL.
The left panel of
Figure~\ref{fig:OLfit} shows a plot of the required $\Omega_\Lambda$ at each $z$,
for values of $\Omega_M$ in $[0.2,1]$. The right panel shows a cross section of
this surface with the planes $\Omega_M=1$ and $\Omega_M^{\rm eff}=8 \pi M/(3 H_{\cal S}^2 L_{\rm prop}^3)$,
where $L_{\rm prop}$ is the initial proper length of a cell edge.
Notice, however, that none of these models would reproduce the expansion history of
the BHL spacetime, which follows closely that of a region of an EdS
model ($\Omega_M=1$ and $\Omega_\Lambda=0$) with the same $L_{\rm prop}$ and $H_{\cal S}$,
as discussed in~\cite{Bentivegna:2013jta}. This is the core of the fitting problem: different
properties of an inhomogeneous spacetime map to different members of the FLRW class,
and in general it will not be possible to identify a single
FLRW counterpart capable of reproducing all of the dynamical and optical aspects
of an inhomogeneous cosmology.
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{plots/omol.pdf}
\includegraphics[width=0.45\textwidth]{plots/OL.pdf}
\caption{Value of $\Omega_\Lambda$ in the best-fit FLRW cosmology, based on the
luminosity distance measured on geodesic $A$ (left), and its cross sections
with the planes $\Omega_M=1$ and $\Omega_M=\Omega_M^{\rm eff}=8 \pi M/(3 H_{\cal S}^2 L_{\rm prop}^3)$
(yellow and blue curves, respectively, in the right plot).
The error bars are indicated by shaded regions (when not visible, they are
included in the width of the curves).
\label{fig:OLfit}}
\end{center}
\end{figure}
In Figure~\ref{fig:OLfitglob}, we show the constant-$\Omega_\Lambda$ models
which best fit the $D_{\rm L}(z)$ curves for geodesics $A$ and $B$.
They are obtained for $\Omega_\Lambda^A=1.225$ and $\Omega_\Lambda^B=1.103$,
respectively. The relative difference between these models and the exact solution
is largest around $z=1$, where it reaches $30\%$.
\begin{figure}
\begin{center}
\includegraphics[width=0.7\textwidth]{plots/bestOL.pdf}
\includegraphics[width=0.5\textwidth]{plots/bestOLA.pdf} \qquad \qquad \qquad \qquad \qquad \\
\includegraphics[width=0.5\textwidth]{plots/bestOLB.pdf} \qquad \qquad \qquad \qquad \qquad \\
\caption{$D_{\rm L}(z)$ for an FLRW model with $\Omega_{\rm M}=1$,
and $\Omega_\Lambda$ equal to the best-fit values $\Omega_\Lambda^A=1.225$
and $\Omega_\Lambda^B=1.103$, as well as to a few other representative
values.
The best-fit models differ from the BHL $D_{\rm L}(z)$ at the $20\%$ level.
The error bars are indicated by shaded regions (when not visible, they are
included in the width of the curves or of the data points).
\label{fig:OLfitglob}}
\end{center}
\end{figure}
Notice that essentially all quantities discussed so far are affected by oscillations
with a substantial initial amplitude, which is subsequently damped. Similarly
to the oscillations in the redshift, we conjecture that these features are due
to the inhomogeneous gravitational field, and in particular to radiative
modes which likely originate in the oversimplified
initial-data setup we employed. In a space without an asymptotically-flat
region, it is of course difficult to test (or even formulate) this conjecture rigorously. The
compactness of the spatial hypersurfaces, furthermore, means that one cannot
simply ignore this initial transient as is customary in, e.g., binary-black-hole
simulations, as the waves cannot escape from the domain (although their
amplitude is significantly attenuated by the expansion).
The presence of this unphysical component of the gravitational field, which was
barely noticeable in the length scaling we measured in~\cite{Bentivegna:2013jta},
affects the optical properties of the BHL very prominently,
and in particular the photon redshift. Better initial-data constructions
which are free from these modes are an interesting field of investigation
which goes beyond the purpose of this work.
Finally, it is worth observing that, as mentioned in Section~\ref{sec:lprop},
different observers would measure a different luminosity distance on the same
spacetime, thereby potentially bringing the BHL result closer to the EdS
curve. A boost with respect to the lattice would, for instance, lower the value
of the distance, according to equation (\ref{eq:DA}). So would a stronger
gravitational field, as would be the case if an observer were located closer
to the centre of a lattice cell.
\subsection{Continuum limit $\mu \to 0$}
Finally, it is instructive to study how this behaviour depends on how tightly packed the BHL
is, as represented by the quantity $\mu=M/L$ introduced in Section~\ref{sec:bhl}.
For simplicity, here we use the bare mass of the central black hole as an estimate of
$M$, and the coordinate size of a cell edge as $L$.
In order to keep $M/L^3$ constant at the value of our base configuration (which had
$M=1$ and $L=10$), we need to have $\mu=M^{2/3}/10$.
As representative masses we choose $M=\{1/100,1/8,1/2,1,5\}$;
various properties of this BH family are illustrated in Table~\ref{tab:cont}.
\begin{table}[!b]
\centering
\caption{The bare mass $M$, coordinate size of a cell edge $L=10 M^{1/3}$, its proper size $L_{\rm prop}$,
and the compactness parameter $\mu=M^{2/3}/10$ for a constant-density family of BHLs.
\label{tab:cont}}
\begin{tabular}{|c|c|c|c|}
\hline
$M$ & $L$ & $L_{\rm prop}$ & $\mu$ \\
\hline
0.010 & \phantom{0}2.15 & \phantom{0}2.73 & 0.0046 \\
0.125 & \phantom{0}5.00 & \phantom{0}6.28 & 0.0250 \\
0.500 & \phantom{0}7.94 & \phantom{0}9.84 & 0.0630 \\
1.000 & 10.00 & 12.26 & 0.1000 \\
5.000 & 17.10 & 21.77 & 0.2924 \\
\hline
\end{tabular}
\end{table}
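The entries of Table~\ref{tab:cont} other than $L_{\rm prop}$ (which requires the solution of the constraints) follow from elementary scalings; for instance:
\begin{verbatim}
for M in [0.01, 0.125, 0.5, 1.0, 5.0]:
    L = 10.0 * M ** (1.0 / 3.0)      # keeps the density M / L^3 = 10^-3 fixed
    mu = M ** (2.0 / 3.0) / 10.0     # compactness mu = M / L
    print(f"M = {M:6.3f}   L = {L:6.2f}   mu = {mu:.4f}")
\end{verbatim}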
We plot the luminosity distance as a function of $\mu$ in Figures~\ref{fig:mu}
and~\ref{fig:murow}. We observe, in particular, that the difference between the
luminosity distance in a BHL and in an appropriately fitted EdS does not tend to
zero as $\mu \to 0$. The EdS model, therefore, can reproduce the large-scale
expansion history of a BHL (as illustrated numerically in~\cite{Yoo:2013yea,
Bentivegna:2013jta}, and deduced analytically in~\cite{Korzynski:2013tea}),
but is unable to fit its optical properties, even in the limit $\mu \to 0$.
The numerical result is in agreement with the result of the perturbative analysis of Section~\ref{sec:bhl},
where we identified $O(1)$ differences in the GDE of a BHL with respect to
that of an FLRW model.
This indicates that cosmological-distance estimates of a lumpy spacetime
based on a fit with the FLRW class will exhibit a systematic error,
\emph{regardless of how lumpy the spacetime is}.
These effects are substantially, but not exhaustively, captured by the EBA, as
already observed in the case of other inhomogeneous spacetimes~\cite{2012MNRAS.426.1121C,2012JCAP...05..003B,Fleury:2014gha}.
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{plots/DLmu.pdf}
\includegraphics[width=0.45\textwidth]{plots/dDLmuEdS2.pdf}
\caption{Left: luminosity distance for a family of BHLs with the same density
but varying $\mu$.
Right: residual with respect to the EdS model (fitted via the local
expansion rate) of the four lowest-mass models along with their extrapolation
for $\mu \to 0$.
\label{fig:mu}}
\end{center}
\end{figure}
\begin{figure}
\includegraphics[width=1.0\textwidth, trim=60 0 60 0, clip=true]{plots/DLmurow.pdf}
\caption{
Behaviour of the luminosity distance
at fixed redshift, for various values of $\mu$. The green triangle
represents the polynomial extrapolation of the data series for $\mu \to 0$,
while the yellow
dashed curve represents the expected luminosity distance in EdS for each
specific value of $z$.
\label{fig:murow}}
\end{figure}
Importantly, we observe that the tensor modes discussed in
Section~\ref{sec:results} intensify as $\mu \to 0$, affecting the smaller-$\mu$
BHLs to the point that it becomes impossible to identify a monotonic trend
in the luminosity distance at large $z$.
For this reason, we are forced to limit our study to very small $z$.
\section{Discussion and conclusions}
\label{sec:dc}
We have investigated the propagation of light along two special curves in
the spacetime of a BHL, constructed by numerically integrating Einstein's
equation in 3+1 dimensions.
In particular, we have measured the redshift and luminosity distance along
these curves, and compared them to the estimates of these observables
obtained in suitably fitted homogeneous models and in the EBA.
The comparison shows that the latter approximation is the one most capable
of reproducing the exact behaviour; we have built a heuristic argument to
explain this finding, based on the analysis of the different curvature
terms in the GDE.
Our finding is congruous with the conclusions of similar studies in other
inhomogeneous spacetimes~\cite{Fleury:2014gha}; in our case, however, the
models are not backreaction-free by construction, so that we can measure
all the relevant contributions to the GDE.
We have also fitted the $D_{\rm L}(z)$ relationship from the FLRW models with both a
constant and a $z$-dependent $\Lambda$ to the data, finding that a value
of $\Omega_\Lambda$ approximately equal to $\Omega_M$ reproduces the optical
properties of the BHL better than the corresponding models with $\Omega_\Lambda=0$.
In other words, in the BHL spacetime the luminosity distance for a redshift $z$
is larger than in the corresponding EdS model (the correspondence being based
on the same initial proper size and expansion rate). This is also in line
with the conclusions of previous studies~\cite{Clifton:2009bp}, and arguably
equivalent to the finding that fitting $\Omega_M$ alternatively leads to a smaller
value for this parameter~\cite{Fleury:2013uqa}.
Finally, we have examined a family of BHLs with varying BH masses and separations,
in order to estimate how our result changes as $\mu=M/L \to 0$. In this limit,
it was proven in~\cite{Korzynski:2013tea} that the expansion history of
a BHL tends to that of a flat FLRW model with the same average density.
Here, however, we find that the optical properties of a BHL exhibit a finite
deviation from the corresponding FLRW model, which reaches $5\%$ by $z=0.06$.
Owing to considerable pollution by tensor modes, which we conjecture originate
in our initial-data construction, the luminosity distance is oscillatory, and
we are unable to evaluate the continuum limit for larger $z$.
Building a picture of the mechanisms involved in these results, as well as
generalizing it to inhomogeneous spacetimes with different matter content
and density profiles, is a particularly intriguing but hard-to-approach task.
We can start to tackle it by comparing our results to a recent study~\cite{Giblin:2016mjp},
which also measured the effects of light propagation in an inhomogeneous
model which, unlike the ones considered in this work, was filled with dust.
In that investigation, percent-level deviations were detected from the
homogeneous Hubble law, which are about an order of magnitude smaller than
the deviations reported here. From the arguments presented in this paper,
we infer that the discrepancy is largely due to the different representation
of the matter filling the two models. The quantitative formulation of
this statement is a problem which we reserve for further study.
\section*{Acknowledgements}
MK and EB would like to thank the Max Planck Institute for Gravitational
Physics (Albert Einstein Institute) in Potsdam for hospitality. The work
was supported by the project \emph{``The role of small-scale inhomogeneities
in general relativity and cosmology''} (HOMING PLUS/2012-5/4), realized
within the Homing Plus programme of Foundation for Polish Science,
co-financed by the European Union from the Regional Development Fund, and by
the project ``\emph{Digitizing the universe: precision modelling for precision
cosmology}'', funded by the Italian Ministry of Education, University and Research (MIUR).
Some of the computations were performed on the Marconi cluster at CINECA.
\section{Introduction}
Respiratory infections are associated with a variety of health outcomes from mild colds to severe illness and mortality and are caused by multiple pathogens, of which viruses are believed to be the most common agents \cite{Makela1998,Clark2014}. Diagnostic tests routinely used in the UK healthcare system now permit simultaneous detection of multiple viruses and are used for monitoring respiratory viruses as part of national surveillance programs \cite{Gunson2008}. As respiratory viruses infect the same anatomical regions (i.e. the respiratory tract), it is possible that there are interactions between virus species within the host that affect the epidemiological dynamics, or alternatively, that behavioural responses, including hospitalisation, due to one virus affect the risk of infection and therefore the dynamics of another. Consequently, inference about infection risk should be carried out with consideration to the community of viruses as a whole \cite{ModellingReview1, interact3}. The existence of dependencies between viruses could mean that control measures against one virus could influence the occurrence of another virus \cite{IAV_RSV}. Thus, inferring dependencies between viruses could assist public health planning and have implications for vaccination strategies. Observational studies have indicated that respiratory viruses may impact each other's seasonality \cite{Observational3,Observational2,Observational1}. However, there is a notable lack of statistical methodology to explicitly estimate such dependencies between respiratory viruses whilst accounting for testing intensities.
Public health policy and preparedness for viral outbreaks, epidemics and pandemics depend on mathematical and statistical analysis of infectious disease data. The major challenges in this expanding field derive from its focus on prospective detection of virus outbreaks to enable effective and efficient control measures \cite{ModellingReview1}. This differs from the main focus of this paper. We aim to retrospectively infer interactions between co-circulating viruses such that when one respiratory virus reaches a peak another is relatively inactive and this cannot be explained by climate or social behaviour \cite{Interference1,Rohani1998,Rohani2003,Interference3}. Our proposed methodology centers around non-independent temporal patterns in infection risk between viruses.
Bayesian disease mapping models are a class of regression model and have received much attention in recent years for the analysis of spatial distributions of incidence data routinely collected by public health bodies \cite{Lawson01082016, IntroDM}. These models are typically used to estimate spatial patterns of disease risk over some geographical region, with several models proposed to capture spatial autocorrelations using conditional autoregressive priors \cite{RR4NoInteractions, Comparison1}. Multivariate disease mapping models provide a suitable framework for estimating dependencies between viruses as they naturally incorporate a between-disease covariance matrix \cite{Review1}. Extensions to disease mapping models have been made to include temporal patterns \cite{SpaceTime2, RR4NoInteractions} and space-time interactions \cite{SpaceTime1, RR4}. However, most disease mapping applications focus on spatial structures \cite{AutoTime}, with temporal trends in disease incidence often being overlooked although they can offer information on whether an infection is unusually active or inactive \cite{Yorshire, Outbreak2}.
Here, we extend the multivariate disease mapping framework to exploit monthly incidence data on multiple viral infections measured over several years. Within a disease mapping framework, an observed number of cases of a disease, or pathogen, at a given location is modelled in terms of the true unknown relative risk and a pre-determined expected number of cases \cite{ModellingReview1}. In the case of respiratory virus time series data, concentrating on temporal patterns, expected counts should reflect the expected number of cases of a virus in a given month, adjusted for the particular month (acting as a proxy for climate and expected seasonality) and potential demographic risk factors \cite{Sema}.
With multiple infections, multivariate models may be used to estimate between-disease dependencies through a covariance matrix by imposing a multivariate normal structure on disease risks \cite{MvDM1, WishartPrior}. The off-diagonal entries of this matrix can be used to infer spatial, or in this case temporal, covariances between disease risks by testing which entries are significantly different from zero \cite{MvDM2}.
The aim of this paper was to develop a Bayesian multivariate autoregressive model that captures both within- and between-year temporal structures. Estimating covariances in disease risk within this framework provides a novel technique for retrospectively inferring, from disease incidence data, population-level within-year temporal dependencies in excess risk of infection between distinct respiratory virus species.
\section{Respiratory virus infection time series data }
Our dataset comprises 60,971 clinical samples tested for respiratory viruses by the West of Scotland Specialist Virology Center (WoSSVC) for the Greater Glasgow and Clyde Health Board between January 2005 and December 2013. Each sample was tested by multiplex real-time RT-PCR, and test results (virus positive or negative) were available for five groups of respiratory viruses: adenovirus (AdV); coronavirus (CoV); human metapneumovirus (MPV); influenza B virus (IBV); and respiratory syncytial virus (RSV) \cite{Gunson2005}. Sampling date, patient age, patient gender and sample origin (hospital or general practice submission, which we used as a proxy for infection severity) were recorded. Multiple samples from the same patient received within a 30-day period were aggregated into a single episode of respiratory illness. The 60,971 clinical samples corresponded to 36,157 patient episodes. A patient was considered virus-positive during an episode if at least one clinical sample was positive during the 30-day window.
Whilst data are available at the individual level, we are predominantly interested in estimating non-independent patterns in temporal patterns between the five viruses at the population level. Therefore, for each virus, data were aggregated into monthly infection counts across the time frame of this study.
In total, 35\% of patients tested positive for at least one virus, and detection was most common in children aged between 1 and 5 years \cite{Sema}. Detection of any virus in a given clinical sample was most common in December and least common in August. We observed differing patterns between the five viruses (Figure~\ref{fig1}, black lines). IBV, RSV and CoV were more prevalent in winter, AdV was generally less common with a slight increase in prevalence in spring, and MPV shifted from winter peaks to summer peaks after 2010 \cite{Sema}.
\begin{figure}[h]
\centering
\includegraphics[width=10cm,height=10cm]{Figure1}
\caption{Observed (Obs) and expected (Exp) counts of the five groups of respiratory viruses between 2005 and 2013 (black and grey lines respectively). Expected counts were estimated as described in main text.}
\label{fig1}
\end{figure}
\subsection{Interactions between respiratory viruses}
Several studies have postulated interactions between viruses such that when one respiratory virus reaches a peak another is relatively inactive, and this cannot be explained by climate or social behaviour \cite{Interference1,Interference3}. Possible interactions are most apparent during epidemic periods, where a competing virus obstructs the outbreak development of another \cite{interact3,Observational3,Observational2,Observational1,interact2, Interference2}. Within a disease mapping framework, the estimated relative risks identify time points where observed numbers of infections are higher or lower than expected, with expected counts accounting for expected seasonality and risk factors associated with respiratory infection \cite{Sema} (section~\ref{section:expected}). Consequently, relative risks measure the excess risk of viral infection that cannot be explained by anticipated seasonality or patient demographics. Therefore, by inferring dependencies between viral species in terms of excess risks, we can directly infer viral interactions.
\section{Multivariate Spatio-temporal model} \label{MSTM}
Conditional autoregressive models are extensively used in the analysis of spatial data to model the relative risk of a virus or more generally a disease \cite{BDM, CAR1}. The class of Bayesian model typically used in this context is given by
\begin{equation}
\begin{split}
Y_{i}|E_{i},RR_{i} &\sim \mbox{Poisson}(E_{i}RR_{i}) \\
\log(RR_{i}) &= \alpha + \phi_{i} \nonumber
\end{split}
\end{equation}
where $Y_{i}$, $E_{i}$ and $RR_{i}$ are the observed count, expected count and relative risk for some index $i$ (for example, location or time interval) \cite{Comparison1} and $\phi=\{\phi_{1}, \ldots, \phi_{I}\}$ are spatial random effects modelled jointly through a conditional autoregressive (CAR) distribution \cite{MvDM1}
\begin{eqnarray*}
\phi &\sim& \mbox{MVN}(0, (\tau(D-\lambda W))^{-1}).
\end{eqnarray*}
Matrix $W$ is a proximity matrix, $\lambda$ a smoothing parameter, $\tau$ a measure of precision and $D$ a diagonal matrix such that $D_{i}=\sum_{i'}{W_{ii'}}$.
Extending this model to multiple viruses, or more generally multiple diseases, gives
\begin{equation}
\begin{split}
Y_{iv}|E_{iv},RR_{iv} &\sim \mbox{Poisson}(E_{iv}RR_{iv}) \\
\log(RR_{iv}) &= \alpha_{v} + \phi_{iv} \nonumber
\end{split}
\end{equation}
where $Y_{iv}$, $E_{iv}$ and $RR_{iv}$ are the observed count, expected count and relative risk of virus $v$ and $\alpha_{v}$ a virus specific intercept term. A multivariate CAR (MCAR) distribution can jointly model $\phi$ by incorporating a between virus covariance matrix $\Lambda^{-1}$ of dimension $V \times V$ (where $V$ is the total number of viruses):
\begin{eqnarray*}
\phi &\sim& \mbox{MVN}(0, [\Omega \otimes \Lambda ]^{-1}).
\end{eqnarray*}
In this case, $\Omega=D-\lambda W$, $\phi=\{\phi_{.1}, \ldots, \phi_{.V}\}$ and $\phi_{.v}=\{\phi_{1v},\ldots, \phi_{Iv}\}$ \cite{MvDM3, MvDM2}.
Temporal autocorrelations may be induced in this model, at time point $j$, through the conditional expectation of $\phi_{j}|\phi_{j-1}$
\begin{eqnarray*}
\phi_{j}|\phi_{j-1} &\sim& \mbox{MVN}(s\phi_{j-1}, [\Omega \otimes \Lambda ]^{-1}).
\end{eqnarray*}
The parameter $s$ controls the level of temporal autocorrelation such that $s=0$ implies no autocorrelation whereas $s=1$ is equivalent to a first order random walk \cite{SpaceTime1}. Typically, where temporal autocorrelations are modelled through the conditional expectation, spatial autocorrelations are modelled through the precision matrix \cite{SpaceTime1}.
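To make the construction concrete, a minimal Python sketch of a draw from this prior is given below. The interval-major ordering of $\phi$ and the use of a dense Cholesky factor are conveniences of the sketch, and $W$, $\lambda$ and $\Lambda$ are assumed given (with $\lambda$ restricted so that $\Omega$ is positive definite):
\begin{verbatim}
import numpy as np

def sample_mcar(W, lam, Lambda, rng=np.random.default_rng(0)):
    # phi ~ MVN(0, [Omega kron Lambda]^{-1}) with Omega = D - lam * W;
    # Lambda is the V x V between-virus precision matrix.
    D = np.diag(W.sum(axis=1))
    Omega = D - lam * W
    Q = np.kron(Omega, Lambda)          # joint precision, interval-major
    L = np.linalg.cholesky(Q)           # Q = L L^T
    z = rng.standard_normal(Q.shape[0])
    phi = np.linalg.solve(L.T, z)       # then Cov(phi) = Q^{-1}
    return phi.reshape(W.shape[0], Lambda.shape[0])   # [interval, virus]
\end{verbatim}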
\section{Modelling multivariate time series data}
We aim to model monthly time series count data from multiple viruses simultaneously over a nine-year period. We index over monthly time intervals, and so monthly autocorrelations are modelled in terms of the precision matrix and yearly autocorrelations in terms of the conditional expectation, in a similar fashion to the multivariate spatio-temporal model detailed in section~\ref{MSTM}. The observed count of virus $v$ in month $m$ of year $t$, $Y_{mtv}$, is modelled in terms of the expected count $E_{mtv}$ and relative risk $RR_{mtv}$:
\begin{equation}
\begin{split}
Y_{mtv}|E_{mtv},RR_{mtv} &\sim \mbox{Poisson}(E_{mtv}RR_{mtv}) \\
\log(RR_{mtv}) &= \alpha_{v} + \phi_{mtv} \nonumber
\end{split}
\end{equation}
with $\alpha_{v}$ an intercept term specific to virus $v$ and $\phi_{.t.}=\{\phi_{.t1}, \ldots, \phi_{.tV}\}$ a vector of random effects modelled conditionally through a MCAR prior
\begin{eqnarray*}
\phi_{.t.}|\phi_{.t-1.} &\sim& \mbox{MVN}(s_{v}\phi_{.t-1.}, [\Omega \otimes \Lambda]^{-1}).
\end{eqnarray*}
This parameterisation of an MCAR model captures both the seasonal trends of each virus via $\Omega$ and long-term temporal trends via $s_{1}, \ldots, s_{V}$. The conditional expectation of $\phi_{.t.}$ depends on the previous year $\phi_{.t-1.}$, capturing long-term temporal trends. By allowing dependencies between neighbouring months, we account for seasonality in viral infection frequencies.
\section{Inferring viral interactions}
We focus primarily on the estimation of covariance matrix $\Lambda^{-1}$ in order to infer potential temporal dependencies. By formally testing which off-diagonal entries of $\widehat{\Lambda}^{-1}$ are significantly different from zero, we can explicitly provide statistical support for viral interactions.
\subsection{MCAR prior specification} \label{section:precision}
The covariance structure of the MCAR distribution used to model random seasonal-temporal effects is the Kronecker product of precision matrices $\Omega$ and $\Lambda$.
The between-virus precision matrix $\Lambda$ accounts for dependencies between viral relative risks in terms of monthly trends. Wishart priors can be used for unstructured precision matrices such as $\Lambda$ \cite{WishartPrior}; however, we employed a modified Cholesky decomposition to estimate the covariance matrix $\Lambda^{-1}$:
\begin{eqnarray*}
\Lambda^{-1} &=& \Sigma\Gamma\Gamma^{T}\Sigma
\end{eqnarray*}
where $\Sigma$ was a diagonal matrix with elements proportional to viral standard deviations and $\Gamma$ a lower triangular matrix relating to viral correlations \cite{MChol1}. This parameterisation ensured the positive-definiteness of $\Lambda^{-1}$, although we note that other parameterisations are available \cite{MChol2}.
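A minimal sketch of this parameterisation is given below; the inputs (a vector of scales and a lower-triangular factor) are the sketch's own names:
\begin{verbatim}
import numpy as np

def lambda_inverse(sd, Gamma):
    # sd: length-V vector of positive scales; Gamma: V x V lower-triangular
    # matrix with non-zero diagonal. Sigma G G^T Sigma = (Sigma G)(Sigma G)^T
    # is positive definite by construction.
    Sigma = np.diag(sd)
    G = np.tril(Gamma)
    return Sigma @ G @ G.T @ Sigma
\end{verbatim}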
Matrix $\Omega$ captures seasonal trends in terms of monthly dependencies defined through a neighbourhood matrix $W$. We will consider two possible constructions of $W$.
\subsubsection*{Neighbourhood structure}
Assuming neighbouring months are more similar than distant months, $W$ can be defined such that $w_{ij}=1$ if months $i$ and $j$ are neighbouring months and $w_{ij}=0$ otherwise. In this paper, neighbours were fixed as the preceding and following three months. Taking a neighbourhood approach, we set
\begin{eqnarray*}
\Omega_{neigh} &=& D - \lambda W_{neigh}
\end{eqnarray*}
where $\lambda$ was a smoothing parameter and $D$ a $12 \times 12$ diagonal matrix with $D_{i}=\sum_{j}{w_{neigh_{ij}}}$, in other words the total number of nearest neighbours of month $i$~\cite{MvDM3,CARSAR}.
\subsubsection*{Autoregressive structure}
$W$ may be defined through a random walk \cite{RR4} or as an autoregressive process \cite{AutoTime} (denoted by $W_{auto}$). We set the $ij$th entry of $W_{auto}$ ($i \neq j$) as $\rho^{d_{ij}}$ with $d_{ij}$ the distance between months $i$ and $j$ and $\rho$ a temporal correlation parameter with $\rho<1$. We defined distance as the number of months between $i$ and $j$.
Taking an autoregressive approach, we set
\begin{eqnarray*}
\Omega_{auto} &=& D - \lambda W_{auto}
\end{eqnarray*}
with $D$ a diagonal matrix with $D_{i}=\sum_{j}{w_{auto_{ij}}}$. We note that these formulations can easily be extended \cite{MvDM3,CARpriors}.
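A sketch of both constructions for the twelve months follows; treating the year as circular, so that December and January are one month apart, is an assumption of the sketch:
\begin{verbatim}
import numpy as np

months = 12

def month_distance(i, j, n=months):
    d = abs(i - j)
    return min(d, n - d)          # wrap around the year

# Neighbourhood structure: 1 within three months, 0 otherwise.
W_neigh = np.array([[1.0 if 0 < month_distance(i, j) <= 3 else 0.0
                     for j in range(months)] for i in range(months)])

# Autoregressive structure: rho^{d_ij} with rho < 1 (rho = 0.5 for display).
rho = 0.5
W_auto = np.array([[rho ** month_distance(i, j) if i != j else 0.0
                    for j in range(months)] for i in range(months)])

D_neigh = np.diag(W_neigh.sum(axis=1))   # entries D_i = sum_j w_ij
\end{verbatim}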
\subsection{Expected counts} \label{section:expected}
We required expected counts of each virus at each time point in this study. Since individual-level data were available, a series of logistic regressions was used to estimate the probability of testing positive for a virus at a given time point. For month $m$, the log odds of virus $v$, logit$(p_{mv})$, were estimated through fixed effects of age, sex and severity (proxied by hospital or general practice submission) and a yearly random effect. The standardised probability of virus $v$ in month $m$, $p^{s}_{mv}$, was estimated as
\begin{eqnarray*}
\widehat{\mbox{p}}^{s}_{mv} &=& \sum_{a,s,l,t}{\frac{N_{aslt}\widehat{p}_{mv_{aslt}}}{N_{mv}}}.
\end{eqnarray*}
where $N_{aslt}$ was the number of people of age $a$, sex $s$ and infection severity $l$ in year $t$, $\widehat{p}_{mv_{aslt}}$ the estimated probability of a person of age $a$, sex $s$ and infection severity $l$ in year $t$ testing positive for virus $v$ in month $m$, and $N_{mv}$ the number of swabs tested for virus $v$ in month $m$. The estimated probabilities of each virus in each month are therefore standardised for age, sex and severity and account for yearly differences in circulation.
The expected count for virus $v$ in month $m$ of year $t$ was then
\begin{eqnarray*}
E_{mtv}=N_{mtv}\widehat{\mbox{p}}^{s}_{mv}
\end{eqnarray*}
with $N_{mtv}$ the number of swabs tested for virus $v$ in month $m$ of year $t$.
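A sketch of this standardisation is given below; the fitted predictor \texttt{predict\_p} and the table of stratum sizes are hypothetical, and the sketch normalises by the total stratum size, a simplification of the weighting above:
\begin{verbatim}
import numpy as np

def expected_count(strata, predict_p, month, N_tested):
    # strata: table with columns [age, sex, severity, year, N];
    # predict_p: fitted per-virus logistic model returning P(positive).
    p_hat = predict_p(month, strata["age"], strata["sex"],
                      strata["severity"], strata["year"])
    p_std = np.sum(strata["N"] * p_hat) / strata["N"].sum()  # standardised
    return N_tested * p_std                                  # E = N * p^s
\end{verbatim}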
\subsection{Full model} \label{section:fullmodel}
Let $Y_{mtv}$ denote the observed count of virus $v$ during the $m$th month of year $t$, $V$ the total number of viruses and $T$ the number of years. We modelled $Y_{mtv}$ in terms of the expected count $E_{mtv}$ and relative risk $RR_{mtv}$ and used a Cholesky decomposition to estimate covariance matrix $\Lambda^{-1}$ \cite{DBDA} (Figure~\ref{fig_fullmodel}).
\begin{figure}
\centering
\includegraphics[width=11cm,height=15cm]{Full_Model_DBDA2}
\caption{Diagram of full models used to estimate pairwise relative risk covariances. The diagram should be read from the bottom (starting with $Y_{mtv}$) to the top. All prior choices have been fully specified. Numbers indicate hyperparameter choices, for instance, mean and variance in the normal distribution, lower and upper bound in the uniform distributions and shape and rate in the gamma distribution.}
\label{fig_fullmodel}
\end{figure}
This model was implemented in JAGS \cite{jags} using the R2jags package \cite{R2jags} in R \cite{R}. All results are averaged across five independent chains. In each chain, we retained every 100th draw from 500,000 iterations after a burn-in period of 300,000 iterations. The R code used to model these data and an example of simulated data are available upon request to the first author. We note that the multivariate intrinsic Gaussian CAR prior distribution is fully specified in GeoBUGS \cite{GeoBUGS}. However, our approach allows for other parameterisations of the MCAR distribution, providing more flexibility and better suiting our specific needs. In addition, our method is easily adapted to additional parameterisations.
\section{Simulation Studies}
The specific aim of this paper was to estimate the between-virus covariance matrix $\Lambda^{-1}$. We show the validity of our proposed model (Figure~\ref{fig_fullmodel}) in modelling multivariate time series data through a three virus simulation and that this model can accurately and precisely estimate $\Lambda^{-1}$ through a five virus simulation.
\subsection{Three virus example}
We first present an analysis of simulated data for three viruses; virus 1, virus 2 and virus 3. Seasonal effects were assigned such that virus 1 peaked in winter, virus 2 peaked in summer and virus 3 had no seasonal pattern (Figure~\ref{fig_3virus_setup_a}). Virus 1 and virus 2 were negatively correlated and both viruses were independent of virus 3 (Table~\ref{table1}, true values).
The probabilities and expected counts of each virus in each month were estimated using the method described in section~\ref{section:expected}. Individual level data were simulated in order to reflect the virological diagnostic data. We simulated 200 samples per month per virus over a four year period. For each sample, an age, sex and severity were drawn from the observed virological diagnostic data distributions \cite{Sema}. Regression coefficients used to estimate the probability of each virus were drawn such that $\beta_{intercept}=0$, $\beta_{age} \sim N(0,0.1)$, $\beta_{gender} \sim N(0,0.1)$ and $\beta_{severity} \sim N(0,0.1)$ and standardised probabilities of each virus in each month were estimated (Figure~\ref{fig_3virus_setup_b}). Expected counts were taken as the product of the standardised probabilities and the number of samples (i.e. 200) (Figure~\ref{fig_3virus_setup_c}, grey lines).
Random effects $\phi$ were drawn from multivariate normal distributions with the yearly smoothing parameters and monthly smoothing parameter ($s_{1}$, $s_{2}$, $s_{3}$ and $\lambda$) set to 0.5. Seasonal dependencies were simulated through the neighbourhood matrix $W_{neigh}$ defined in section~\ref{section:precision}. We set virus-specific intercept terms $\alpha_{1}=\alpha_{2}=\alpha_{3}=0$ and calculated the relative risks of each virus at each time point.
Finally, observed counts were taken as the product of relative risks and expected counts (Figure~\ref{fig_3virus_setup_c}, black lines).
We fitted both models (Figure~\ref{fig_fullmodel}, neighbourhood and autoregressive structure) to the simulated data and found little difference in fit between the neighbourhood construction (DIC = 960.7) and the autoregressive construction (DIC = 957.6).
A significant negative covariance was estimated between virus 1 and virus 2 and non-significant covariances between virus 1 and virus 3 and between virus 2 and virus 3 using both the neighbourhood and autoregressive constructions (Table~\ref{table1}, 95\% credible intervals).
By taking the absolute difference, $d$, between the true relative risks and the estimated relative risks using $W_{neigh}$, we found that the estimated pattern of relative risks across the time frame of this simulation followed the true relative risks closely (Figure~\ref{figSS_RR}). We found very similar results using $W_{auto}$. This simulation study illustrates that our method can accurately estimate the relative risk of each virus across the time frame of the simulation and is a viable tool for modelling multivariate time series data.
\begin{figure}[!t]
\centering
\begin{subfigure}[h]{0.4\textwidth}
\centering
\includegraphics[height=5cm,width=6cm]{3Virus_effectsize2}
\caption{}
\label{fig_3virus_setup_a}
\end{subfigure}
\hspace{1.25cm}
\begin{subfigure}[h]{0.4\textwidth}
\centering
\includegraphics[height=5cm,width=6cm]{3Virus_prob}
\caption{}
\label{fig_3virus_setup_b}
\end{subfigure}
\begin{subfigure}[h]{1\textwidth}
\centering
\includegraphics[height=5cm,width=14cm]{3Virus_obsVexp_redo2}
\caption{}
\label{fig_3virus_setup_c}
\end{subfigure}
\caption{(a) Monthly effect sizes, (b) estimated monthly standardised probabilities, (c) simulated expected (grey lines) and observed (black lines) counts of virus 1, virus 2 and virus 3 in three virus simulation.}
\end{figure}
\begin{table}[t]
\begin{tabular}{|c|c|c|c|}
\hline
\mbox{parameter} & \mbox{true value} & \mbox{95\% credible interval} ($W_{neigh}$) & \mbox{95\% credible interval} ($W_{auto}$) \\ \hline
$\Lambda^{-1}_{1,2}$ & -0.5 & $(-0.89, -0.30)$ & $(-0.72,-0.36)$ \\
$\Lambda^{-1}_{1,3}$ & 0 & $(-0.39, 0.27)$ & $(-0.40, 0.32)$ \\
$\Lambda^{-1}_{2,3}$ & 0 & $(-0.32, 0.35)$ & $(-0.37, 0.39)$ \\
\hline
\end{tabular}
\caption{True values of matrix $\Lambda$ from simulation study and estimated 95\% credible intervals from full model using the neighbourhood ($W_{neigh}$, true model) and autoregressive ($W_{auto}$) structures.}
\label{table1}
\end{table}
\begin{figure}[!h]
\centering
\includegraphics[width=9cm,height=9cm]{3Virus_RR_redo}
\caption{Absolute difference $d$ between the true relative risks used in the simulation study and the estimated relative risks. The maximum difference was 0.3.}
\label{figSS_RR}
\end{figure}
\subsection{Five virus example}
To demonstrate the accuracy of the full model in estimating the between-virus covariance matrix, we simulated data from five viruses over a 15-year period (Figure~\ref{figSS2}) with
$$\Lambda^{-1}=
\left(
\begin{array}{ccccc}
1 & -0.5 & 0 & 0 & 0 \\
-0.5 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0.5 \\
0 & 0 & 0 & 0.5 & 1 \\
\end{array}
\right).
$$
Virus 1 and virus 2 were negatively correlated, virus 4 and virus 5 were positively correlated and virus 3 circulated independently.
\begin{figure}[!t]
\centering
\includegraphics[width=10cm,height=10cm]{5virus_obsVexp}
\caption{Simulated observed (black lines) and expected (grey lines) virus counts from five viruses over fifteen years.}
\label{figSS2}
\end{figure}
Data from the first three viruses were simulated under identical conditions to those described in the three virus example. Seasonal effects were assigned such that virus 4 and virus 5 peaked in autumn (Figure~\ref{figSS2}, observed counts). The probabilities and expected counts of virus 4 and virus 5 in each month were estimated using the method described in section~\ref{section:expected}. Likewise, regression coefficients were drawn such that $\beta_{intercept}=0$, $\beta_{age} \sim N(0,0.1)$, $\beta_{gender} \sim N(0,0.1)$ and $\beta_{severity} \sim N(0,0.1)$. Random effects $\phi$ were drawn from multivariate normal distributions with the yearly smoothing parameters and monthly smoothing parameter ($s_{1}$, $s_{2}$, $s_{3}$, $s_{4}$, $s_{5}$ and $\lambda$) set to 0.5. Seasonal dependencies were simulated through the neighbourhood matrix $W_{neigh}$ defined in section~\ref{section:precision}. We set virus-specific intercept terms $\alpha_{1}=\alpha_{2}=\alpha_{3}=\alpha_{4}=\alpha_{5}=0$ and calculated the relative risks of each virus at each time point. Finally, observed counts were taken as the product of relative risks and expected counts (Figure~\ref{figSS2}, black lines).
Using the full model with the neighbourhood construction, we initially estimated $\Lambda^{-1}$ using only the first year of data (Figure~\ref{figSS3}, Year 1). We then combined the first two years of data to estimate $\Lambda^{-1}$ (Figure~\ref{figSS3}, Year 2). This process was repeated, adding the current year's data at each iteration, until all fifteen years of data were combined to estimate $\Lambda^{-1}$.
For each pair of viruses, at each iteration, we assessed deviations of each covariance parameter from zero (pre-mcc) and applied a multiple comparison correction (post-mcc). P-values were used as a convenient and effective way of locating the hypothesised value of zero in the posterior distribution of each covariance parameter. We chose to control the false discovery rate (the expected proportion of falsely rejected hypotheses amongst those rejected) at a rate of 0.05 \cite{MCP}. Covariance parameters with an adjusted p-value less than 0.05 were deemed significantly different from zero and used as support for a significant covariance between the corresponding viruses.
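The correction applied here is the Benjamini--Hochberg step-up procedure; a minimal sketch, operating on the vector of p-values for the $V(V-1)/2$ covariance parameters, is:
\begin{verbatim}
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    # Reject the k smallest p-values, where k is the largest index with
    # p_(k) <= q * k / m; this controls the FDR at level q.
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, m + 1) / m
    k = (np.max(np.nonzero(below)[0]) + 1) if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject
\end{verbatim}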
Using data from the first year, we estimated non-significant covariances between all pairs of viruses (Figure~\ref{figSS3}, Year 1). Using data from two years onwards, we found varying degrees of significance in the covariance between virus 1 and virus 2, $\Lambda^{-1}_{1,2}$, virus 2 and virus 4, $\Lambda^{-1}_{2,4}$, and virus 4 and virus 5, $\Lambda^{-1}_{4,5}$ (Figure~\ref{figSS3}, Year 2 onwards).
Following the multiple comparison correction, we found significant covariances between virus 1 and virus 2, $\Lambda^{-1}_{1,2}$, and virus 4 and virus 5, $\Lambda^{-1}_{4,5}$, from year 9 onwards (Figure~\ref{figSS3}, solid navy and green lines), validating our model for long-term time series data. Again, we found very similar results using $W_{auto}$.
This example shows that this method can precisely and accurately estimate the between-virus covariance matrix given long-term time series data.
\begin{figure}[!h]
\centering
\includegraphics[width=7cm,height=7cm]{5virus_sim_corr_est}
\caption{Estimated covariance parameters significantly different from zero using data from year 1 through to year 15. Grey dots show point estimates of all covariance parameters at each time point. Solid lines show parameters significantly different from zero once multiple comparison correction (mcc) was applied. Dotted lines show parameters only significantly different from zero before multiple comparison correction (pre-mcc). }
\label{figSS3}
\end{figure}
\section{Application to virological diagnostic data}
We applied our model (Figure~\ref{fig_fullmodel}) to monthly infection count data from five respiratory viruses (AdV, CoV, MPV, IBV and RSV) across nine years in order to infer interactions between these viruses.
\subsection{Estimating $RR$ and $\Lambda^{-1}$}
Expected counts were estimated for each virus as described in section~\ref{section:expected} (Figure~\ref{fig1}, grey lines).
The relative risk of AdV infection remained relatively high between 2005 to 2010 but decreased during the summer and autumn months of 2011, 2012 and 2013 (Figure~\ref{figRR_11}, AdV). We found an increased relative risk of IBV infection during the autumn and winter periods of 2005/2006, 2010/2011 and 2012/2013 (Figure~\ref{figRR_11}, IBV). During the second half of 2009, we found a heightened risk of RSV and MPV infections. More generally, the relative risk of RSV infection peaked during late summer through to autumn from 2008 onwards whereas the risk of MPV infection shifted from winter, between 2005 and 2008, to summer, from 2011 onwards (Figure~\ref{figRR_11}, RSV and MPV).
Under the neighbourhood structure, we found a positive covariance between RSV \& MPV (adjusted p-value = 0.01) and negative covariances between IBV \& AdV (adjusted p-value = 0.049), IBV \& MPV (adjusted p-value = 0.049) and CoV \& MPV (adjusted p-value = 0.049), whereas we found a positive covariance between RSV \& MPV (adjusted p-value = 0.01) and a negative covariance between IBV \& AdV (adjusted p-value = 0.045) using the autoregressive structure (Table~\ref{figCov_11}). Under this construction, we found the adjusted p-values between IBV \& MPV and CoV \& MPV to be 0.075 and 0.073 respectively.
We found the autoregressive structure to provide a better fit to these data (DIC=2686.4) compared to the neighbourhood structure (DIC=2795.6).
\begin{figure}[t]
\centering
\includegraphics[width=10cm,height=10cm]{Figure7}
\caption{Estimated relative risks (RR) of adenovirus (AdV), coronavirus (CoV), human metapneumovirus (MPV), influenza B virus (IBV) and respiratory syncytial virus (RSV) between 2005 and 2013.}
\label{figRR_11}
\end{figure}
\begin{table}[!h]
{\begin{tabular}{|l|l|cc|}
\hline
& & $W_{neigh}$ & $W_{auto}$ \\ \hline
AdV & Cov & (-0.27, 0.45) & (-0.31, 0.41)\\
& MPV & (-0.35, 0.22) & (-0.35, 0.20)\\
& IBV & \textbf{(-0.67, -0.16)} & \textbf{(-0.68, -0.15)}\\
& RSV & (-0.37, 0.29) & (-0.32, 0.41)\\
Cov & MPV & \textbf{(-0.66, -0.11) } & (-0.66, -0.08)\\
& IBV & (-0.23, 0.45) & (-0.18, 0.43)\\
& RSV & (-0.28, 0.32) & (-0.32, 0.29)\\
MPV & IBV & \textbf{(-0.66, -0.13)} & (-0.64, -0.07)\\
& RSV & \textbf{(0.32, 0.71)} & \textbf{(0.18, 0.67)}\\
IBV & RSV & (-0.51, 0.05) & (-0.54, 0.04)\\
\hline
\end{tabular}}
\caption{Posterior density interval estimates of covariances between adenovirus (AdV), coronavirus (Cov), human metapneumovirus (MPV), influenza B virus (IBV) and respiratory syncytial virus (RSV). Covariances significantly different from zero are highlighted in bold.}
\label{figCov_11}
\end{table}
\section{Conclusion}
Humans are infected by a community of respiratory viruses that occupy a shared niche in the respiratory tract. However, the development and implementation of controls such as vaccination primarily focus on individual virus species. Interactions between viral species could mean that the control of one infection increases the incidence of another. The increasing uptake of influenza vaccines and the development of vaccines against respiratory syncytial virus and human parainfluenza virus \cite{Schmidt2011,MODJARRAD2016} highlight the importance of identifying virus interactions in order to better forecast the impact of individual control strategies on the joint epidemiological dynamics of interacting viruses.
Although observational data \cite{Observational3,Observational2,interact2, Observational1}, univariate regression models \cite{Greer2009,interact2, interact3,ModellingReview1} and time series analyses \cite{Bhattacharyya2015,IAV_RSV} indicate that there may be interactions between common respiratory viruses, there is a lack of multivariate statistical tools for examining interactions within a community of viruses. We have presented a novel multivariate autoregressive framework for modelling time series count data from multiple co-circulating viruses that allowed us to retrospectively infer statistically significant viral interactions.
The multivariate autoregressive model (Figure~\ref{fig_fullmodel}) adapts a multivariate disease mapping model, typically employed to infer relative risks of multiple spatially non-independent diseases, to model viral within- and between-year temporal dependencies. Within this framework, we mapped time intervals to retrospectively identify unusual virus activity. Usual time trends were captured through expected counts, estimated using a series of logistic regressions with age, gender and infection severity (GP or hospital) as covariates to estimate the probability of infection with a given respiratory virus within a given month \cite{LogisticRegression}. Our method captures both within- and between-year temporal dependencies \cite{Outbreak1} and is advantageous in that it naturally incorporates a between-virus covariance matrix, providing a novel basis for inferring viral interactions. We employed a modified Cholesky decomposition that guarantees the positive-definiteness of the estimated covariance matrix, and inferred interactions by identifying off-diagonal entries of this estimated matrix significantly different from zero whilst correcting for multiple comparisons.
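To illustrate why the modified Cholesky construction cannot produce an invalid covariance matrix, the sketch below uses one common parametrisation, $\Sigma = L D L^{T}$ with $L$ unit lower triangular and $D$ diagonal with positive entries; it is an illustration of ours, and the exact parametrisation used in the model may differ:
\begin{verbatim}
import numpy as np

# Any choice of the unconstrained parameters phi (off-diagonal terms of L)
# and log_d (log-diagonal of D) yields a positive-definite matrix.
def covariance_from_params(phi, log_d):
    k = len(log_d)
    L = np.eye(k)
    L[np.tril_indices(k, -1)] = phi       # strictly lower-triangular entries
    D = np.diag(np.exp(log_d))            # positive diagonal by construction
    return L @ D @ L.T

# Five viruses: k = 5, hence 5*(5-1)/2 = 10 free off-diagonal parameters.
rng = np.random.default_rng(0)
Sigma = covariance_from_params(rng.normal(size=10), rng.normal(size=5))
assert np.all(np.linalg.eigvalsh(Sigma) > 0)  # positive definite
\end{verbatim}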
Simulation studies of three and five viruses validated the application of this method to multivariate time series data and showed the method to be sufficiently powerful to infer significant interactions between viruses using the observed diagnostic data, which spanned nine years.
We found a positive covariance between RSV and MPV and a negative covariance between IBV and AdV. The neighbourhood model found additional negative covariances between IBV and MPV and between MPV and Cov (Figure~\ref{figRR_11} and Table~\ref{figCov_11}). However, the autoregressive model provided a better fit to these data.
The described method is a viable statistical tool for inferring respiratory virus interactions, leading to a better understanding of the joint epidemiological dynamics of interacting viruses. As a result, policy makers will be better informed and prepared for future changes in virus incidences resulting from vaccination programs and seasonal outbreaks.
\section*{Acknowledgments}
This work was funded by the Medical Research Council of the United Kingdom (Grant number MC\_UU\_12014/9). We are grateful for support from National Science Foundation DEB1216040, BBSRC grants BB/K01126X/1, BB/L004070/1, BB/L018926/1, BB/N013336/1, BB/L004828/1, BB/H009302/1, BB/H009175/1, the Foods Standards Agency FS101055 and the Scottish Government Rural and Environment Science and Analytical Services Division, as part of the Centre of Expertise on Animal Disease Outbreaks (EPIC). We thank Paul Johnson for his helpful comments on the manuscript.
\bibliographystyle{Vancouver}
\section*{Introduction}
\addcontentsline{toc}{section}{Introduction}
One of the major motivations of number theory is the description of rational or integral solutions of diophantine equations, which from a geometric perspective amounts to understanding the behaviour of rational or integral points on algebraic varieties. In dimension one, there are many techniques and results providing a good overview of the situation, such as the famous Faltings' theorem (for genus $\geq 2$ and algebraic points) or Siegel's theorem (for integral points and a function with at least three poles). Nevertheless, in many cases the quest for effectivity (meaning a bound on the height of these points) is still open, and effective methods are quite different from these two powerful theoretical theorems.
We focus in this paper on a method for integral points on algebraic varieties called \textit{Runge's method}, and its generalisations and applications for Siegel modular varieties.
To keep the introduction fluid, we first explain the principles behind Runge's method and its applications to Siegel modular varieties, with simplified statements and a minimum of references and details. Afterwards, we describe precisely the structure of the article, in particular where the details we first omitted are given.
On a smooth projective algebraic curve $C$ over a number field $K$, Runge's method proceeds as follows. Let $\phi \in K(C)$ be a nonconstant rational function on $C$. For any finite extension $L/K$, we denote by $M_L$ the set of places of $L$ (and by $M_L^\infty$ the archimedean ones). For $S_L$ a finite set of places of $L$ containing $M_L^\infty$, we denote the ring of $S_L$-integers of $L$ by
\[
{\mathcal O}_{L,S_L} = \{ x \in L \, \, | \, \, |x|_v \leq 1 \, \, {\textrm{for all }} \, v \in M_L \backslash S_L \}.
\]
Now, let $r_L$ be the number of orbits of poles of $\phi$ under the action of $\operatorname{Gal}(\overline{L}/L)$. The \textit{Runge condition} on a pair $(L,S_L)$ is the inequality
\begin{equation}
\label{eqRungeconditioncourbes}
|S_L|<r_L.
\end{equation}
Then, Bombieri's generalisation (\cite{BombieriGubler}, paragraph 9.6.5 and Theorem 9.6.6) of Runge's theorem, the latter being formulated only for $L=K={\mathbb{Q}}$ and $r_{\mathbb{Q}} \geq 2$, states that for every pair $(L,S_L)$ satisfying the Runge condition and every point $P \in C(L)$ such that $\phi(P) \in {\mathcal O}_{L,S_L}$, there is an \textit{absolute} bound $B$ (depending only on $C$ and $\phi$, \textit{not} on the pair $(L,S_L)$) such that
\[
h(\phi(P)) \leq B,
\]
where $h$ is the Weil height. In short, as long as the point $\phi(P)$ has few non-integrality places (the exact condition being \eqref{eqRungeconditioncourbes}), there is an absolute bound on the height of $\phi(P)$. There is a very natural justification (due to Bilu) for Bombieri's theorem: let us fix a pair $(L,S_L)$ satisfying the Runge condition and $P \in C(L)$ such that $\phi(P) \in {\mathcal O}_{L,S_L}$. For every place $v \in M_L \backslash S_L$, as $|\phi(P)|_v$ is small, the point $P$ is $v$-adically far from all orbits of poles of $\phi$. For $v \in S_L$, $P$ can be $v$-adically close to one of the orbits, but only one of them, because they are pairwise disjoint. We eliminate such an orbit if it exists, and applying the process for every $v \in S_L$, the Runge condition guarantees that at the end of the process there remains one orbit ${\mathcal O}$ which is $v$-adically far from $P$ for \textit{all} places $v \in M_L$. This in turn implies finiteness: indeed, choosing by Riemann-Roch an auxiliary function $g_{\mathcal O} \in L(C)$ whose poles are exactly the points of ${\mathcal O}$, the height $h(g_{\mathcal O}(P))$ is small because $P$ is far from the poles of $g_{\mathcal O}$ at every place, hence $P$ belongs to a finite set by the Northcott property. It is a bit more technical to obtain a bound on the height $h(\phi(P))$ (one which does not depend on $(L,S_L)$) in the general case, but the idea is the same. This justification also provides a method to bound in practice the heights of such points (when the auxiliary functions $g_{\mathcal O}$ are sufficiently well understood), which is called \textit{Runge's method}. When applicable, this method has two important assets: it gives good bounds, and it is uniform in the pairs $(L,S_L)$, which for example is not true for Baker's method.
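To fix ideas, in the original setting of Runge's theorem one has $L = K = {\mathbb{Q}}$ and $S_{\mathbb{Q}} = \{ \infty \}$, so that condition \eqref{eqRungeconditioncourbes} reads $1 < r_{\mathbb{Q}}$: it suffices that the poles of $\phi$ split into at least two orbits under the action of $\operatorname{Gal}(\overline{{\mathbb{Q}}}/{\mathbb{Q}})$.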
The goal of this paper is to find ways to transpose the ideas of Runge's method on curves to higher-dimensional varieties, where it is generally very difficult to obtain finiteness of integral or rational points, as the extent of our knowledge is much more limited. First, let us recall a previous generalisation of Bombieri's theorem in higher dimensions obtained by Levin (\cite{Levin08}, Theorem 4). To sum it up in a simple case, on a projective smooth variety $X$, the analogues of the poles of $\phi$ are effective divisors $D_1, \cdots, D_r$. We have to fix a smooth integral model ${\mathcal X}$ of $X$ over ${\mathcal O}_K$, and denote by ${\mathcal D}_1, \cdots, {\mathcal D}_r$ the Zariski closures of the divisors in this model, whose union is ${\mathcal D}$, so our integral points here are the points of $({\mathcal X} \backslash {\mathcal D}) ({\mathcal O}_{L,S_L})$. There are two major changes in higher dimension. Firstly, the divisors have to be ample (or at least big) to obtain finiteness results (this was automatic in dimension 1). Secondly, instead of the condition $|S_L| < r$ as for curves, the \textit{higher-dimensional Runge condition} is
\begin{equation}
\label{eqintroRungemultidim}
m |S_L|<r,
\end{equation}
where $m$ is the smallest number such that any $(m+1)$ divisors amongst $D_1, \cdots, D_r$ have empty common intersection. Levin's theorem states in particular that when the divisors are ample,
\[
\left( \bigcup_{\substack{(L,S_L) \\ m |S_L|< r}} \! \! \left( {\mathcal X} \backslash {\mathcal D} \right) ({\mathcal O}_{L,S_L}) \right) \, \, \, {\textrm{is finite}}.
\]
The issue with \eqref{eqintroRungemultidim} is that the maximal size of $S_L$ satisfying this condition is greatly reduced by the factor $m$, all the more so as the ampleness (or bigness) hypothesis tends to impose a lower bound on $m$. When we tried to apply Levin's theorem to some Siegel modular varieties with chosen divisors, we found that the higher-dimensional Runge condition was too restrictive (remember that $S_L$ contains the archimedean places, so $|S_L|\geq [K:{\mathbb{Q}}]/2$), hence the theorem was not applicable. This was the initial motivation for a generalisation of this theorem, called the ``tubular Runge theorem'', designed to be more flexible in terms of the Runge condition. Let us explain its principle below.
In addition to $X$ and $D_1, \cdots, D_r$, we fix a closed subvariety $Y$ of $X$ which is meant to be ``a subvariety of $X$ where the divisors $D_1, \cdots, D_r$ intersect a lot more than outside it''. More precisely, let $m_Y$ be the smallest number such that any $(m_Y+1)$ divisors amongst $D_1, \cdots, D_r$ have common intersection included in $Y$. In particular, $m_Y \leq m$, and the goal is to have $m_Y$ as small as possible without asking $Y$ to be too large. Now, we fix a ``tubular neighbourhood'' of $Y$, which is the datum of a family ${\mathcal V}=(V_v)_v$, where $v$ runs through the places of $\overline{K}$, every $V_v$ is a neighbourhood of $Y$ in the $v$-adic topology, and this family is uniformly not too small in some sense. For example, if ${\mathcal Y}$ is the Zariski closure of $Y$ in ${\mathcal X}$, we can define at a finite place $v$ the neighbourhood $V_v$ to be the set of points of ${\mathcal X}(\overline{K_v})$ reducing into ${\mathcal Y}$ modulo $v$. We say that a point $P \in X(\overline{K})$ does \emph{not} belong to ${\mathcal V}$ if $P \notin V_v$ for every place $v$ of $\overline{K}$; intuitively, this means that $P$ is $v$-adically far away from $Y$ for \emph{every} place $v$ of $\overline{K}$. Now, assume our integral points are not in ${\mathcal V}$. Then at most $m_Y$ divisors amongst $D_1, \cdots, D_r$ can be $v$-adically close to them, hence using the same principles of proof as Levin, this gives the \textit{tubular Runge condition}
\begin{equation}
\label{eqintroRungetub}
m_Y |S_L| < r.
\end{equation}
With this additional data, one can now give an idea of our tubular Runge theorem.
\begin{thmsansnom}[Simplified version of ``tubular Runge'' (Theorem \ref{thmRungetubulaire})]
\hspace*{\fill}
For $X,{\mathcal X},Y,D_1, \cdots,D_r,m_Y$ and a tubular neighbourhood ${\mathcal V}$ of $Y$ as in the paragraph above, let $({\mathcal X} \backslash {\mathcal D}) ({\mathcal O}_{L,S_L}) \backslash {\mathcal V}$ be the set of points of $({\mathcal X} \backslash {\mathcal D})({\mathcal O}_{L,S_L})$ which do not belong to ${\mathcal V}$. Then, if $D_1, \cdots, D_r$ are ample, for every such tubular neighbourhood, the set
\[
\left( \bigcup_{\substack{(L,S_L) \\ m_Y |S_L|< r}} \! \! \left( {\mathcal X} \backslash {\mathcal D} \right) ({\mathcal O}_{L,S_L}) \backslash {\mathcal V} \right) \, \, \, {\textrm{is finite}},
\]
and bounded in terms of some auxiliary height.
\end{thmsansnom}
This is a very simplified form of the theorem: one can have $D_1, \cdots, D_r$ defined on a scalar extension of $X$ and big instead of ample, and ${\mathcal X}$ normal, for example. The general (and more precise) version is Theorem \ref{thmRungetubulaire}. As the implicit bound on the height is parametrised by the tubular neighbourhood ${\mathcal V}$, it can be seen as a \textit{concentration result} rather than a finiteness one: essentially, it states that the points of $({\mathcal X} \backslash {\mathcal D}) ({\mathcal O}_{L,S_L})$ concentrate near the closed subset $Y$. As such, we have compared it to theorems of \cite{CorvajaLevinZannier}, notably Autissier's Theorem and the CLZ Theorem, in section \ref{sectiontubularRunge} (in particular, our version is made to be effective, whereas these results are based on Schmidt's subspace theorem, hence theoretically ineffective).
In the second part of the paper, we apply the method to Siegel modular varieties, both as a proof of principle and because integral points on these varieties are not very well understood, apart from the Shafarevich conjecture proved by Faltings. As we will see below, this is also a case where a natural candidate for $Y$ presents itself, thus giving tubular neighbourhoods a natural interpretation.
For $n \geq 2$, the variety denoted by $A_2(n)$ is the variety over ${\mathbb{Q}}(\zeta_n)$ parametrising triples $(A,\lambda,\alpha_n)$, where $(A,\lambda)$ is a principally polarised abelian variety of dimension 2 and $\alpha_n$ is a symplectic level $n$ structure on $(A,\lambda)$. It is a quasi-projective algebraic variety of dimension 3, and its Satake compactification (which is a projective algebraic variety) is denoted by $A_2(n)^S$, the boundary being $\partial A_2(n) = A_2(n)^S \backslash A_2(n)$. The extension of scalars $A_2(n)_{\mathbb{C}}$ is the quotient of the Siegel upper half-space ${\mathcal H}_2$ by the natural action of the symplectic congruence subgroup $\Gamma_2(n)$ of $\operatorname{Sp}_4({\mathbb{Z}})$ consisting of the matrices congruent to the identity modulo $n$. Now, we consider some divisors ($n^4/2+2$ of them) defined by the vanishing of some modular forms, specifically theta functions. One finds that they intersect a lot on the boundary $\partial A_2(n)$ ($m$ comparable to $n^4$), but when we take $Y=\partial A_2(n)$, we get $m_Y \leq n^2 - 3$, hence the \textit{tubular Runge condition}
\[
(n^2 - 3) |S_L| < \frac{n^4}{2} + 2.
\]
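For instance, for $n=2$, one has $m_Y \leq 1$ and $n^4/2+2 = 10$ divisors, so that the tubular Runge condition simply reads $|S_L| < 10$; this is the case made completely explicit in section \ref{sectionexplicitRunge}.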
Now, the application of our tubular Runge theorem gives, for every even $n \geq 2$, a finiteness result for the integral points relative to these divisors and to some tubular neighbourhoods associated to potentially bad reduction at the finite places: this is Theorem \ref{thmtubularRungegeneral}. In the special case $n=2$, as a demonstration of the effectiveness of the method, we have made this result completely explicit in Theorem \ref{thmproduitCEexplicite}. A simplified case of this theorem is the following result.
\begin{thmsansnom}
[Theorem \ref{thmproduitCEexplicite}, simplified case]
Let $K$ be either ${\mathbb{Q}}$ or a quadratic imaginary field.
Let $A$ be a principally polarised abelian surface defined over $K$, whose 2-torsion points are also defined over $K$, and having potentially good reduction at all finite places of $K$.
Then, if the semistable reduction of $A$ is a product of elliptic curves at at most 3 finite places of $K$, we have the explicit bound
\[
h_{\mathcal F}(A) \leq 1070,
\]
where $h_{\mathcal F}$ is the stable Faltings height. In particular, there are only finitely many such abelian surfaces.
\end{thmsansnom}
\setcounter{thm}{0}
To conclude this introduction, we explain the structure of the paper, emphasising where the notions sketched above are made precise and where the proofs are given.
\[
\xymatrix{
\ref{sectionnotations} \ar[d] \ar[rrd] & & \\
\ref{sectionvoistub} \ar[d] & & \ref{sectionrappelsSiegel} \ar[d] \\
\ref{sectionresultatscles} \ar[d] \ar[r] & \ref{sectiontubularRunge} \ar[rd] \ar[r]& \ref{sectionapplicationsSiegel} \ar[d] \\
\ref{sectionRungecourbes} & & \ref{sectionexplicitRunge}
}
\]
Section \ref{sectionnotations} is devoted to the notations used throughout the paper, including heights, $M_K$-constants and bounded sets (Definition \ref{defMKconstante}). We advise the reader to pay particular attention to this first section, as it introduces notations which are ubiquitous in the rest of the paper. Section \ref{sectionvoistub} is where the exact definition (Definition \ref{defvoistub}) and basic properties of tubular neighbourhoods are given. In section \ref{sectionresultatscles}, we prove the key result for the tubular Runge theorem (Proposition \ref{propcle}), essentially relying on a well-applied Nullstellensatz. For our purposes, in Proposition \ref{propreductionamplegros}, we also translate scheme-theoretical integrality in terms of auxiliary functions. In section \ref{sectionRungecourbes}, we reprove Bombieri's theorem for curves (written as Proposition \ref{propBombieri}) with Bilu's idea, as it is not yet published to our knowledge (although this is exactly the principle behind Runge's method in \cite{BiluParent09}, for example). To finish the theoretical part, we prove and discuss our tubular Runge theorem (Theorem \ref{thmRungetubulaire}) in section \ref{sectiontubularRunge}.
For the applications to Siegel modular varieties, section \ref{sectionrappelsSiegel} gathers the necessary notations and reminders on these varieties (subsection \ref{subsecabvarSiegelmodvar}), on their integral models, with some discussion of the difficulties of dealing with them in dimension at least 2 (subsection \ref{subsecfurtherpropSiegelmodvar}), and on the important notion of theta divisors on abelian varieties and their link with classical theta functions (subsection \ref{subsecthetadivabvar}). The theta functions are crucial because the divisors we use in our applications of the tubular Runge method are precisely the divisors of zeroes of some of these theta functions.
In section \ref{sectionapplicationsSiegel}, we consider the case of abelian surfaces we are interested in, especially the behaviour of theta divisors (subsection \ref{subsecthetadivabsur}), and state in subsection \ref{subsectubularRungethmabsur} the applications of the tubular Runge theorem for the varieties $A_2(n)^S$ and the divisors mentioned above (Theorems \ref{thmtubularRungeproduitCE} and \ref{thmtubularRungegeneral}).
Finally, in section \ref{sectionexplicitRunge}, we make Theorem \ref{thmtubularRungeproduitCE} explicit by computations on the ten fourth powers of the even characteristic theta constants. To do this, the places need to be split into three categories. The finite places not above 2 are treated by the theory of algebraic theta functions in subsection \ref{subsecalgebraicthetafunctions}, the archimedean places by estimates of Fourier expansions in subsection \ref{subsecarchimedeanplaces}, and the finite places above 2 (the hardest case) by the theory of Igusa invariants and polynomials built from our ten theta constants in subsection \ref{subsecplacesabove2}. The final estimates are given as Theorem \ref{thmproduitCEexplicite} in subsection \ref{subsecfinalresultRungeCEexplicite}, both in terms of a given embedding of $A_2(2)$ and in terms of the Faltings height.
The main results of this paper have been announced in the recently published note \cite{LeFourn4}, and apart from section \ref{sectionexplicitRunge} and some improvements can be found in the author's thesis manuscript \cite{LeFournthese2} (both in French).
\section*{Acknowledgements}
I am very grateful to Fabien Pazuki and Qing Liu for having kindly answered my questions and given me useful bibliographic recommendations on the subject of Igusa invariants.
\tableofcontents
\section{Notations and preliminary notions}
\label{sectionnotations}
The following notations are classical and given below for clarity. They will be used throughout the paper.
\begin{itemize}
\item[$\bullet$] $K$ is a number field.
\item[$\bullet$] $M_K$ (resp. $M_K^\infty$) is the set of places (resp. archimedean places). We also denote by $M_{\overline{K}}$ the set of places of $\overline{K}$.
\item[$\bullet$] $|\cdot|_\infty$ is the usual absolute value on ${\mathbb{Q}}$, and $|\cdot|_p$ is the place associated to $p$ prime, whose absolute value is normalised by
\[
|x|_p = p^{-\operatorname{ord}_p (x)},
\]
where $\operatorname{ord}_p (x)$ is the unique integer such that $x = p^{\operatorname{ord}_p(x)} a/b$ with $p \nmid ab$. By convention, $|0|_p=0$.
\item[$\bullet$] $|\cdot|_v$ is the absolute value on $K$ associated to $v \in M_K$, normalised to extend $|\cdot|_{v_0}$ when $v$ is above $v_0 \in M_{\mathbb{Q}}$, and the local degree is $n_v = [K_v : {\mathbb{Q}}_{v_0}]$, so that for every $x \in K^*$, one has the product formula
\[
\prod_{v \in M_K} |x|_v^{n_v} = 1.
\]
When $v$ comes from a prime ideal ${\mathfrak{p}}$ of ${\mathcal O}_K$, we indifferently write $|\cdot|_v$ and $|\cdot|_{\mathfrak{p}}$.
\item[$\bullet$] For any place $v$ of $K$, one defines the sup norm on $K^{n+1}$ by
\[
\| (x_0, \cdots, x_n) \|_v = \max_{0 \leq i \leq n} |x_i|_v.
\]
(this will be used for projective coordinates of points of $\P^n (K)$).
\item[$\bullet$] Every set of places $S \subset M_K$ we consider is finite and contains $M_K^ {\infty}$. We then define the ring of $S$-integers as
\[
{\mathcal O}_{K,S} = \{ x \in K \, \, | \, \, |x|_v \leq 1 \textrm{ for every } v \in M_K \backslash S \},
\]
in particular ${\mathcal O}_{K,M_K^{\infty}} = {\mathcal O}_K$.
\item[$\bullet$] For every $P \in \P^n (K)$, we denote by
\[
x_P=(x_{P,0}, \cdots, x_{P,n}) \in K^{n+1}
\]
any possible choice of projective coordinates for $P$, this choice being of course fixed for consistency when used in a formula or a proof.
\item[$\bullet$] The logarithmic Weil height of $P \in \P^n(K)$ is defined by
\begin{equation}
\label{eqdefinitionhauteurdeWeil}
h(P) = \frac{1}{[K : {\mathbb{Q}}]}\sum_{v \in M_K} n_v \log \| x_P \|_v,
\end{equation}
does not depend on the choice of $x_P$ nor on the number field, and satisfies the Northcott property (a small computational illustration is given after this list).
\item[$\bullet$] For every $n \geq 1$ and every $i \in \{0, \cdots,n\}$, the $i$-th coordinate open subset $U_i$ of $\P^n$ is the affine subset defined as
\begin{equation}
\label{eqdefUi}
U_i = \{ (x_0 : \cdots : x_n) \, \, | \, \, x_i \neq 0 \}.
\end{equation}
The normalisation function $\varphi_i : U_i \rightarrow {\mathbb{A}}^{n+1}$ is then defined by
\begin{equation}
\label{eqdefvarphii}
\varphi_i (x_0 : \cdots : x_n) = \left(\frac{x_0}{x_i}, \cdots, 1, \cdots, \frac{x_n}{x_i} \right).
\end{equation}
Equivalently, it means that to $P \in U_i$, we associate the choice of $x_P$ whose $i$-th coordinate is 1.
\end{itemize}
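As an illustration of these normalisations, the following sketch (ours, not part of the original text; it relies on Python with \texttt{sympy} for factorisation) computes the height of a point of $\P^n({\mathbb{Q}})$ directly from the definition:
\begin{verbatim}
from fractions import Fraction
from math import log
from sympy import primefactors

# p-adic valuation of a nonzero rational x.
def ordp(x, p):
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

# Logarithmic Weil height of (x_0 : ... : x_n) in P^n(Q): all local
# degrees n_v equal 1, |x|_p = p^(-ord_p(x)), and ||.||_v is the sup norm.
def weil_height(coords):
    coords = [Fraction(c) for c in coords]
    h = log(max(abs(c) for c in coords))   # archimedean contribution
    primes = set()
    for c in coords:
        if c != 0:
            primes.update(primefactors(abs(c.numerator)))
            primes.update(primefactors(c.denominator))
    for p in primes:
        # log ||x_P||_p = -(min_i ord_p(x_i)) * log p
        h -= min(ordp(c, p) for c in coords if c != 0) * log(p)
    return h

# Independence of the choice of projective coordinates (product formula):
assert abs(weil_height([2, 4]) - weil_height([1, 2])) < 1e-12
\end{verbatim}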
For most of our results, we need to formalize the notion that some families of sets indexed by the places $v \in M_K$ are ``uniformly bounded''. To this end, we recall some classical definitions (see \cite{BombieriGubler}, section 2.6).
\begin{defi}[$M_K$-constants and $M_K$-bounded sets]
\hspace*{\fill}
\label{defMKconstante}
\begin{itemize}
\item[$\bullet$] An \textit{$M_K$-constant} is a family ${\mathcal C} = (c_v)_{v \in M_K}$ of real numbers such that $c_v=0$ except for a finite number of places $v \in M_K$.
The $M_K$-constants form a cone in ${\mathbb{R}}^{M_K}$, stable under finite sums and coordinate-wise maxima.
\item[$\bullet$] Let $L/K$ be a finite extension. For an $M_K$-constant $(c_v)_{v \in M_K}$, we define (with abuse of notation) an $M_L$-constant $(c_w)_{w \in M_L}$ by $c_w : = c_v$ if $w|v$. Conversely, if $(c_w)_{w \in M_L}$ is an $M_L$-constant, we define (again with abuse of notation) $(c_v)_{v \in M_K}$ by $c_v := \max_{w|v} c_w$, and get in both cases the inequality
\begin{equation}
\label{eqineqinductionMKconstante}
\frac{1}{[L : {\mathbb{Q}}]} \sum_{w \in M_L} n_w c_w \leq \frac{1}{[K: {\mathbb{Q}}]} \sum_{v \in M_K} n_v c_v.
\end{equation}
\item[$\bullet$] If $U$ is an affine variety over $K$ and $E \subset U(\overline{K}) \times M_{\overline{K}}$, a regular function $f \in \overline{K}[U]$ is \textit{$M_K$-bounded on $E$} if there is an $M_K$-constant ${\mathcal C} = (c_v)_{v \in M_K}$ such that for every $(P,w) \in E$ with $w$ above $v$ in $M_K$,
\[
\log |f(P)|_w \leq c_v.
\]
\item[$\bullet$]
An \textit{$M_K$-bounded subset of $U$} is, by abuse of definition, a subset $E$ of $U(\overline{K}) \times M_{\overline{K}}$ such that every regular function $f \in \overline{K}[U]$ is $M_K$-bounded on $E$.
\end{itemize}
\end{defi}
\begin{rem}
\label{remdefMKconstantes}
There are fundamental examples to keep in mind when using these definitions:
$(a)$ For every $x \in K^*$, the family $(\log |x|_v)_{v \in M_K}$ is an $M_K$-constant.
$(b)$ In the projective space $\P^n_K$, for every $i \in \{ 0 , \cdots, n\}$, consider the set
\begin{equation}
\label{eqdefEi}
E_i = \{ (P,w) \in \P^n(\overline{K}) \times M_{\overline{K}} \, \, | \, \, |x_{P,i}|_w = \|x_P\|_w \}.
\end{equation}
The regular functions $x_j/x_i$ ($j \neq i$) on $\overline{K}[U_i]$ (notation \eqref{eqdefUi}) are trivially $M_K$-bounded (by the zero $M_K$-constant) on $E_i$, hence $E_i$ is $M_K$-bounded in $U_i$. Notice that the $E_i$ cover $\P^n (\overline{K}) \times M_{\overline{K}}$. We will also consider this set place by place, by defining for every $w \in M_{\overline{K}}$ :
\begin{equation}
\label{eqdefEiw}
E_{i,w} = \{ P \in \P^n(\overline{K}) \, \, |\, \, |x_{P,i}|_w = \|x_P\|_w \}.
\end{equation}
$(c)$ With notations \eqref{eqdefinitionhauteurdeWeil}, \eqref{eqdefUi} and \eqref{eqdefvarphii}, for a subset $E$ of $U_i(\overline{K})$, if the coordinate functions of $U_i$ are $M_K$-bounded on $E \times M_{\overline{K}}$, the height $h \circ \varphi_i$ is straightforwardly bounded on $E$ in terms of the involved $M_K$-constants. This simple observation will be the basis of our finiteness arguments.
\end{rem}
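To make example $(b)$ concrete, here is a down-to-earth check of ours at a single archimedean place, over ${\mathbb{Q}}$:
\begin{verbatim}
# On E_{i,w}, the i-th coordinate realises the sup norm, so the regular
# functions x_j/x_i have absolute value at most 1 there.
x = (2, -7, 3)
i = max(range(3), key=lambda j: abs(x[j]))    # i = 1, so (2:-7:3) lies in E_1
assert all(abs(x[j] / x[i]) <= 1 for j in range(3))
\end{verbatim}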
The following lemma is useful to split $M_K$-bounded sets in an affine cover.
\begin{lem}
\label{lemMKbornerecouvrement}
Let $U$ be an affine variety and $E$ an $M_K$-bounded set. If $(U_j)_{j \in J}$ is a finite affine open cover of $U$, there exists a cover $(E_j)_{j \in J}$ of $E$ such that every $E_j$ is $M_K$-bounded in $U_j$.
\end{lem}
\begin{proof}
This is Lemma 2.2.10 together with Remark 2.6.12 of \cite{BombieriGubler}.
\end{proof}
Let us now recall some notions about integral points on schemes and varieties.
For a finite extension $L$ of $K$, a point $P \in \P^n(L)$ and a nonzero prime ideal ${\mathfrak{P}}$ of ${\mathcal O}_L$ with residue field $k({\mathfrak{P}}) = {\mathcal O}_L/{\mathfrak{P}}$, the point $P$ extends to a unique morphism $\operatorname{Spec} {\mathcal O}_{L,{\mathfrak{P}}} \rightarrow \P^n_{{\mathcal O}_K}$, and the image of its special point is \textit{the reduction of $P$ modulo ${\mathfrak{P}}$}, denoted by $P_{\mathfrak{P}} \in \P^n (k({\mathfrak{P}}))$. It is explicitly defined as follows: after normalisation of the coordinates $x_P$ of $P$ so that they all belong to ${\mathcal O}_{L,{\mathfrak{P}}}$ and one of them to ${\mathcal O}_{L,{\mathfrak{P}}}^*$, one has
\begin{equation}
\label{eqreddansPn}
P_{\mathfrak{P}} = (x_{P,0} \! \! \mod {\mathfrak{P}} : \cdots : x_{P,n} \! \! \mod {\mathfrak{P}}) \in \P^n_{k({\mathfrak{P}})}.
\end{equation}
The following (easy) proposition expresses scheme-theoretic reduction in terms of functions (there will be another such translation in Proposition \ref{propreductionamplegros}). We write it below as it is the inspiration behind the notion of tubular neighbourhood in section \ref{sectionvoistub}.
\begin{prop}
\label{proplienreductionpointssvaluation}
Let $S$ be a finite set of places of $K$ containing $M_K^\infty$, and ${\mathcal X}$ be a projective scheme on ${\mathcal O}_{K,S}$, seen as a closed subscheme of $\P^n_{{\mathcal O}_{K,S}}$.
Let ${\mathcal Y}$ be a closed sub-${\mathcal O}_{K,S}$-scheme of ${\mathcal X}$.
Consider homogeneous generators $g_1, \cdots, g_s \in {\mathcal O}_{K,S} [X_0, \cdots, X_n]$ of the ideal of definition of ${\mathcal Y}$ in $\P^n_{{\mathcal O}_{K,S}}$. For every nonzero prime ${\mathfrak{P}}$ of ${\mathcal O}_L$ not above $S$ and every point $P \in {\mathcal X}(L)$, the reduction $P_{\mathfrak{P}}$ belongs to ${\mathcal Y}_{\mathfrak{p}} (k({\mathfrak{P}}))$ (with ${\mathfrak{p}} = {\mathfrak{P}} \cap {\mathcal O}_K$) if and only if
\begin{equation}
\label{eqecplicitereduction2}
\forall j \in \{1, \cdots, s \}, \quad
|g_j (x_P)|_{\mathfrak{P}} < \|x_P\|_{\mathfrak{P}}^{\deg g_j}.
\end{equation}
\end{prop}
\begin{proof}
For every $j \in \{1, \cdots, s\}$, by homogeneity of $g_j$, for a choice $x_P$ of coordinates for $P$ belonging to ${\mathcal O}_{L,{\mathfrak{P}}}$ with one of them in ${\mathcal O}_{L,{\mathfrak{P}}}^*$, the inequality \eqref{eqecplicitereduction2} amounts to
\[
g_j(x_{P,0}, \cdots, x_{P,n}) \equiv 0 \mod {\mathfrak{P}}.
\]
On the other hand, the reduction of $P$ modulo ${\mathfrak{P}}$ belongs to ${\mathcal Y}_{\mathfrak{p}} (\overline{k({\mathfrak{P}})})$ if and only if its coordinates satisfy the equations defining ${\mathcal Y}_{\mathfrak{p}}$ in ${\mathcal X}_{\mathfrak{p}}$, but these are exactly the equations $g_1, \cdots, g_s$ modulo ${\mathfrak{p}}$. This remark immediately gives the Proposition by \eqref{eqreddansPn}.
\end{proof}
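In down-to-earth terms, criterion \eqref{eqecplicitereduction2} only involves valuations. The following sketch (ours, for illustration over ${\mathbb{Q}}$; the generators are passed as hypothetical callables together with their degrees) tests whether a point of $\P^n({\mathbb{Q}})$ reduces into ${\mathcal Y}$ modulo a prime $p$:
\begin{verbatim}
from fractions import Fraction

# p-adic valuation of a nonzero rational x.
def ordp(x, p):
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

# P reduces into Y mod p iff |g(x_P)|_p < ||x_P||_p^(deg g) for every
# homogeneous generator g, i.e. ord_p(g(x_P)) > deg(g) * min_i ord_p(x_i).
def reduces_into_Y(coords, generators, p):
    coords = [Fraction(c) for c in coords]
    m = min(ordp(c, p) for c in coords if c != 0)
    for g, deg in generators:          # pairs (callable, degree)
        value = g(coords)
        if value != 0 and ordp(value, p) <= deg * m:
            return False
    return True

# Example: Y = {(0:0:1)}, cut out by g_1 = x_0 and g_2 = x_1 (degree 1).
gens = [(lambda x: x[0], 1), (lambda x: x[1], 1)]
assert reduces_into_Y([3, 6, 1], gens, 3)        # (3:6:1) = (0:0:1) mod 3
assert not reduces_into_Y([1, 6, 1], gens, 3)    # (1:6:1) = (1:0:1) mod 3
\end{verbatim}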
\section{Definition and properties of tubular neighbourhoods}
\label{sectionvoistub}
The explicit expression \eqref{eqecplicitereduction2} is the motivation for our definition of \textit{tubular neighbourhood}, at the core of our results.
This definition is meant to be used by exclusion : with the same notations as Proposition \ref{proplienreductionpointssvaluation}, we want to say that a point $P \in X(L)$ is \textit{not} in some
tubular neighbourhood of ${\mathcal Y}$ if it \textit{never} reduces in ${\mathcal Y}$, whatever the prime ideal ${\mathfrak{P}}$ of ${\mathcal O}_L$ is.
The main interest of this notion is that it provides us with a convenient alternative to this assumption for the places in $S$ (which are the places where the reduction is not well-defined, including the archimedean places), and also allows us to loosen up this reduction hypothesis in a nice fashion. Moreover, as the definition is function-theoretic, we only need to consider the varieties over a base field, keeping in mind that Proposition \ref{proplienreductionpointssvaluation} above makes the link with reduction at finite places.
\begin{defi}[Tubular neighbourhood]
\label{defvoistub}
\hspace*{\fill}
Let $X$ be a projective variety over $K$ and $Y$ be a closed $K$-subscheme of $X$.
We choose an embedding $X \subset \P^n_K$, a set of homogeneous generators $g_1, \cdots, g_s$ in $K[X_0, \cdots, X_n]$ of the homogeneous ideal defining $Y$ in $\P^n$ and an $M_K$-constant ${\mathcal C} = (c_v)_{v \in M_K}$.
The \textit{tubular neighbourhood of $Y$ in $X$ associated to ${\mathcal C}$ and $g_1, \cdots, g_s$} (the embedding made implicit) is the family
${{\mathcal V} = (V_w)_{w \in M_{\overline{K}}}}$ of subsets of $X(\overline{K})$ defined as follows.
For every $w \in M_{\overline{K}}$ above some $v \in M_K$, $V_w$ is the set of points $P \in X(\overline{K})$ such that
\begin{equation}
\label{eqdefvoistub}
\forall j \in \{1, \cdots,s\}, \quad \log |g_j(x_P)|_w < \deg (g_j) \cdot \log \|x_P \|_w + c_v.
\end{equation}
\end{defi}
As we said before, this definition will be ultimately used by exclusion:
\begin{defi}
\label{defhorsdunvoistub}
\hspace*{\fill}
Let $X$ be a projective variety over $K$ and $Y$ be a closed $K$-subscheme of $X$.
For any tubular neighbourhood ${\mathcal V} = (V_w)_{w \in M_{\overline{K}}}$ of $Y$, we say that a point $P \in X(\overline{K})$ \textit{ does not belong to }${\mathcal V}$ (and we denote it by $P \notin {\mathcal V}$) if
\[
\forall w \in M_{\overline{K}}, \quad P \notin V_w.
\]
\end{defi}
\begin{rem}
\hspace*{\fill}
\label{remhorsvoistub}
$(a)$ Comparing \eqref{eqecplicitereduction2} and \eqref{eqdefvoistub}, it is obvious that for the $M_K$-constant ${\mathcal C}=0$ and with the notations of Proposition \ref{proplienreductionpointssvaluation}, at the finite places $w$ not above $S$, the tubular neighbourhood $V_w$ is exactly the set of points $P \in X(\overline{K})$ reducing into ${\mathcal Y}$ modulo $w$. Furthermore, instead of dealing with arbitrary homogeneous coordinates, one can if desired manipulate normalised coordinates, which makes the term $\deg(g_j) \cdot \log \|x_P\|_w$ disappear. Actually, we will do so multiple times in the proofs later, as it amounts to covering $\P^n_{\overline{K}}$ by the bounded sets $E_i$ (notation \eqref{eqdefEi}) and thus allows us to consider affine subvarieties when needed.
$(b)$ In a topology, a set containing a neighbourhood is one as well : here, we will define everything by being out of a tubular neighbourhood, therefore allowing sets too large would be too restrictive. One can think about this definition as a family of neighbourhoods being one by one not too large but not too small, and uniformly so in the places.
$(c)$ If $Y$ is an ample divisor of $X$ and ${\mathcal V}$ is a tubular neighbourhood of $Y$, one easily sees that if $P \notin {\mathcal V}$ then $h(\psi(P))$ is bounded for some embedding $\psi$ associated to $Y$, from which we get the finiteness of the set of points $P$ of bounded degree outside of ${\mathcal V}$. This illustrates why such an assumption is only really relevant when $Y$ is of small dimension.
$(d)$ A tubular neighbourhood of $Y$ can also be seen as a family of open subsets defined by bounding strictly a global arithmetic distance function to $Y$ (see \cite{Vojtadiophapp}, paragraph 2.5).
\end{rem}
\begin{exe}
We have drawn below three different pictures of tubular neighbourhoods for the usual archimedean absolute value. We consider $\P^2({\mathbb{R}})$ with coordinates $x,y,z$, the affine open subset $U_z$ defined by $z \neq 0$, and $E_x,E_y,E_z$ the respective sets where $|x|$, $|y|$ or $|z|$ equals $\max (|x|,|y|,|z|)$. The tubular neighbourhoods are drawn in $U_z$, and the contributions of the different parts $E_x$, $E_y$ and $E_z$ are indicated.
\begin{figure}[H]
\centering
\resizebox{8cm}{8cm}
{
\begin{tikzpicture}
\draw[->] (-1,0) -- (9,0);
\draw[->] (0,-1) -- (0,9);
\fill[color=gray!20]
(0,6) -- (2,2)
-- (2,2) -- (6,0)
-- (6,0) -- (6,6)
-- (6,6) -- (0,6)
-- cycle;
\draw (2,2) node[below left]{${(2,2)}$};
\draw (2,2) node {${ \bullet}$};
\draw (0,6) node[left]{$ (0,6)$};
\draw (0,6) node {${\bullet}$};
\draw (6,0) node[above right]{${ (6,0)}$};
\draw (6,0) node {${ \bullet}$};
\draw (6,6) node[right]{${ (6,6)}$};
\draw (6,6) node {${ \bullet}$};
\draw (3,3) node[right]{${\displaystyle P}$};
\draw (3,3) node {$\bullet$};
\draw[ultra thin] (2,2) -- (6,6);
\draw (11/4,5) node[below] {${ E_y}$};
\draw (5,5/2) node[below] {${ E_x}$};
\end{tikzpicture}
}
\caption{Tubular neighbourhood of the point $P = (3:3:1)$ associated to the inequality
$\max (|x-3z|,|y-3z|) < \frac{1}{2} \max(|x|,|y|,|z|).$
}
\end{figure}
\begin{figure}[H]
\centering
\resizebox{8cm}{8cm}
{
\begin{tikzpicture}
\draw[->, very thin] (-6,0) -- (6,0);
\draw[->, very thin] (0,-6) -- (0,6);
\fill[color=gray!20, opacity=0.8]
(-1/2,1) -- (4,4)
-- (4,4) -- (6,6)
-- (6,6) -- (1,6)
-- (1,6) -- (-4/3,4/3)
(-4/3,4/3) -- (-6,-1)
-- (-6,-1) -- (-6,-6)
-- (-6,-6) -- (-4,-4)
-- (-4,-4) -- (-1,1/2)
-- (-1,1/2) -- (-1/2,1)
-- cycle;
\draw[thick] (-6,-4) -- (4,6);
\draw (-4,-4) node {${ \bullet}$};
\draw (-4,-4) node[below right] {${ \scriptstyle (-4,-4)}$};
\draw (-4/3,4/3) node[above left] {${\scriptstyle (-4/3,4/3)}$};
\draw (4,4) node[right] {${ \scriptstyle (4,4)}$};
\draw (-4/3,4/3) node {${ \bullet}$};
\draw (4,4) node {${ \bullet}$};
\draw (-1,1/2) node {${ \bullet}$};
\draw (-1,1/2) node[below] {${\scriptstyle (-1,1/2)}$};
\draw (-1/2,1) node {${ \bullet}$};
\draw (-1/2,1) node[right] {${\scriptstyle (-1/2,1)}$};
\draw (2,5) node {\textbf{\textit{L}}};
\draw (-3/4,3/4) node {${ E_z}$};
\draw[ultra thin] (-1,1) -- (-1/2,1);
\draw[ultra thin] (-1,1/2) -- (-1,1);
\draw[ultra thin] (-1,1) -- (-4/3,4/3);
\draw (1,4) node {${ E_y}$};
\draw (3,4) node {${ E_y}$};
\draw (-4,-1) node {${ E_x}$};
\draw (-4,-3) node {${E_x}$};
\end{tikzpicture}
}
\caption{Tubular neighbourhood of the line $D : x - y + 2z = 0$ associated to the inequality
$|x-y+2z| < \frac{1}{2} \max(|x|,|y|,|z|)$. The boundary of the neighbourhood is made up of segments between the indicated points.}
\end{figure}
\begin{figure}[H]
\centering
\resizebox{8cm}{8cm}
{
\begin{tikzpicture}
\draw[dashed, ultra thin] (-3,-6) -- (3,6);
\draw[dashed, ultra thin] (-3,6) -- (3,-6);
\draw[dashed, ultra thin] (-6,-3) -- (6,3);
\draw[dashed, ultra thin] (-6,3) -- (6,-3);
\draw[->, very thin] (-6,0) -- (6,0);
\draw[->, very thin] (0,-6) -- (0,6);
\draw[domain=7/6:3/2] plot (\x, {-\x + sqrt(\x +3)});
\fill[color=gray!30, opacity=0.8]
plot[domain=-17/6:1/2] (\x, {-\x + sqrt(\x*\x +2)})
-- plot[domain=1/2:1] (\x,{1/(2*\x)})
-- plot[domain=1:6] (\x, {-\x/2 + 1/\x})
-- (6,-5/2) -- (6,7/2)
-- plot [domain=6:sqrt(2)] (\x,{\x/2 + 1/\x})
-- plot [domain=sqrt(2):19/6] (\x,{\x + sqrt(\x*\x - 2)})
-- (19/6,6) -- (-17/6,6)
-- cycle;
\fill[color=gray!30, opacity=0.8]
plot[domain=-6:-1] (\x, {-\x/2 + 1/\x})
-- plot[domain=-1:-1/2] (\x,{1/(2*\x)})
-- plot[domain=-1/2:17/6] (\x, {-\x - sqrt(\x*\x +2)})
-- (17/6,-6) -- (-19/6,-6)
-- plot [domain=-19/6:{-sqrt(2)}] (\x,{\x - sqrt(\x*\x - 2)})
-- plot [domain={-sqrt(2)}:-6] (\x,{\x/2 + 1/\x})
-- (-6,-19/6) -- (-6,17/6)
-- cycle;
\draw[domain=1/6:6,samples=300, thick] plot (\x,{1/\x});
\draw[domain=-6:-1/6,samples=300, thick] plot (\x,{1/\x});
\draw (1,5) node {\textit{\textbf{H}}};
\draw[thin] (1/2,1) -- (1,1);
\draw[thin] (1,1) -- (1,1/2);
\draw[thin] (-1,-1) -- (-1,-1/2);
\draw[thin] (-1,-1) -- (-1/2,-1);
\draw (-3/4,-3/4) node {${ E_z}$};
\draw (3/4,3/4) node {${ E_z}$};
\draw[thin] (1,1) -- ({sqrt(2)},{sqrt(2)});
\draw[thin] (-1,-1) -- ({-sqrt(2)},{-sqrt(2)});
\draw (-7/2,1/2) node {${ E_x}$};
\draw (-7/2,-1) node {${ E_x}$};
\draw (-1,-4) node {${ E_y}$};
\draw (1,-4) node {${ E_y}$};
\draw (1,4) node {${ E_y}$};
\draw (-1,4) node {${ E_y}$};
\draw (7/2,-1/2) node {${ E_x}$};
\draw (7/2,1) node {${ E_x}$};
\draw ({-sqrt(2)},{-sqrt(2)}) node {$\bullet$};
\draw ({-sqrt(2)},{-sqrt(2)}) node[below left] {${\scriptscriptstyle - (\sqrt{2},\sqrt{2})}$};
\draw ({sqrt(2)},{sqrt(2)}) node {$\bullet$};
\draw ({sqrt(2)},{sqrt(2)}) node[above right] {${\scriptscriptstyle (\sqrt{2},\sqrt{2})}$};
\draw (1/2,1) node {$\bullet$};
\draw (1/2,1) node[below left] {${\scriptscriptstyle (1/2,1)}$};
\draw (1,1/2) node {$\bullet$};
\draw (1,1/2) node[below] {${\scriptscriptstyle (1,1/2)}$};
\draw (-1/2,-1) node {$\bullet$};
\draw (-1/2,-1) node[above right] {${\scriptscriptstyle (-1/2,-1)}$};
\draw (-1,-1/2) node {$\bullet$};
\draw (-1,-1/2) node[above] {${\scriptscriptstyle (-1,-1/2)}$};
\end{tikzpicture}
}
\caption{Tubular neighbourhood of the hyperbola $H : xy - z^2 = 0$ given by the inequality
$|xy-z^2| < \frac{1}{2} \max (|x|,|y|,|z|)^2$. The boundary is made up of arcs of hyperbolas between the indicated points.}
\end{figure}
\end{exe}
The notion of tubular neighbourhood does not seem very intrinsic, but as the proposition below shows, it actually is.
\begin{prop}[Characterisation of tubular neighbourhoods]
\label{propcarvoistub}
Let $X$ be a projective variety over $K$ and $Y$ a closed $K$-subscheme of $X$.
A family ${\mathcal V}=(V_w)_{w \in M_{\overline{K}}}$ is included in a tubular neighbourhood of $Y$ in $X$ if and only if for every affine open subset $U$ of $X$, every $E \subset U(\overline{K}) \times M_{\overline{K}}$ which is $M_K$-bounded in $U$, and every regular function $f \in \overline{K}[U]$ such that $f_{|Y \cap U} = 0$, there is an $M_K$-constant ${\mathcal C}$ such that
\[
\forall (P,w) \in E, \quad P \in V_w \Rightarrow \log |f(P)|_w < c_v
\]
(intuitively, this means that every function vanishing on $Y$ is ``$M_K$-small'' on ${\mathcal V}$).
\end{prop}
\begin{rem}
One can also give a criterion for containing a tubular neighbourhood (using generators in $\overline{K}[U]$ of the ideal defining $Y \cap U$). Together, these imply that the tubular neighbourhoods given by different embeddings of $X$ are essentially the same. Indeed, one can prove that for two different projective embeddings of $X$, a tubular neighbourhood as defined by the first one is squeezed (for inclusion) between two tubular neighbourhoods as defined by the second embedding.
\end{rem}
\begin{proof}
First, a family ${\mathcal V}$ satisfying this property is included in a tubular neighbourhood. Indeed, if we choose homogeneous generators $g_1, \cdots, g_s$ of the ideal defining $Y$ for some embedding of $X$ in $\P^n_K$, for every $i \in \{0, \cdots, n\}$, consider (using notations \eqref{eqdefUi}, \eqref{eqdefvarphii} and \eqref{eqdefEi}) the $M_K$-bounded set $E_i$ and the regular functions $g_j \circ \varphi_i$ on $U_i$, $1 \leq j \leq s$. By hypothesis (taking the maximum of all the $M_K$-constants for $0 \leq i \leq n$, $1 \leq j \leq s$), there is an $M_K$-constant $(c_v)_{v \in M_K}$ such that for every $w \in M_{\overline{K}}$,
\[
\forall j \in \{1, \cdots, s \}, \forall i \in \{0, \cdots, n\}, \forall P \in E_{i,w}, \textrm{ if } \, P \in V_w, \quad \log |g_j \circ \varphi_i(P)|_w < c_v
\]
because $g_j \circ \varphi_i = 0$ on $Y \cap U_i$ by construction and the $\varphi_i(P)$ are normalised coordinates for $P \in E_{i,w}$. Hence, ${\mathcal V}$ is included in the tubular neighbourhood of $Y$ in $X$ associated to ${\mathcal C}$ and the generators $g_1, \cdots, g_s$.
It now remains to prove that any tubular neighbourhood of $Y$ satisfies this characterisation, and we will do so (with the same notations as Definition \ref{defvoistub}) for the tubular neighbourhood defined by a given embedding $X \subset \P^n_K$, homogeneous equations $g_1, \cdots, g_s$ defining $Y$ in $\P^n_K$ and some $M_K$-constant ${\mathcal C}_0 = (c_{0,v})_{v \in M_K}$ (we will use multiple $M_K$-constants, hence the numbering).
Let us fix an affine open subset $U$ of $X$ and $E$ an $M_K$-bounded set on $U$. We can cover $U$ by principal affine open subsets of $X$, more precisely we can write
\[
U = \bigcup_{h \in {\mathcal F}} U_h
\]
where $h$ runs through a finite family ${\mathcal F}$ of nonzero homogeneous polynomials of $\overline{K}[X_0, \cdots, X_n]$ and
\[
U_h = \{ P \in X \, | \, h(P) \neq 0 \}.
\]
For every such $h$, the regular functions on $U_h$ are the $s/h^k$ where $s$ is homogeneous on $\overline{K}[X_0, \cdots, X_n]$ of degree $k \cdot \deg(h)$ (as $X$ is a closed subvariety of $\P^n$, the only subtlety is that identical regular functions on $U_h$ can come from different fractions $s/h^k$ but this will not matter in the following).
By Lemma \ref{lemMKbornerecouvrement}, there is a cover $E = \cup_{h \in {\mathcal F}} E_h$ such that every $E_h$ is $M_K$-bounded on $U_h$. This implies that for any $i \in \{0, \cdots, n\}$, the functions $x_i^{\deg (h)} / h \in \overline{K}[U_h]$ are $M_K$-bounded on $E_h$, therefore we have an $M_K$-constant ${\mathcal C}_{1}$ such that for all $(P,w) \in E_h$ with coordinates $x_0, \cdots, x_n$,
\begin{equation}
\label{eqfoncintercoordonnees}
\log \| x_P \|_w \leq c_{1,v} + \frac{1}{\deg(h)} \log |h(x_P)|_w.
\end{equation}
Now, let $f$ be a regular function in $\overline{K}[U]$ such that $f_{|Y \cap U} = 0$. For every $h \in {\mathcal F}$, we can write $f_{|U_h}=s/h^k$ for some homogeneous $s \in \overline{K}[X_0, \cdots, X_n]$; then the form $h \cdot s$ vanishes on $Y$ (it vanishes on $Y \cap U_h$ because $f$ does, and outside $U_h$ because of the factor $h$). Hence, we can write
\[
f_{|U_h} = \sum_{j=1}^s \frac{a_{j,h} g_{j}}{h^{k_j}}
\]
with the $a_{j,h}$ homogeneous on $\overline{K}[X_0, \cdots, X_n]$ of degree $k_j \deg(h) - \deg(g_j)$. Now, bounding the coefficients of all the $a_{j,h}$ (and the number of monomials in the archimedean case), we get an $M_K$-constant ${\mathcal C}_2$ such that for every $P \in \P^n(\overline{K})$,
\[
\log |a_{j,h} (x_P)|_w \leq c_{2,v} + \deg(a_{j,h}) \cdot \log \|x_P \|_w.
\]
Combining this inequality with \eqref{eqdefvoistub} and \eqref{eqfoncintercoordonnees}, we get that for every $h \in {\mathcal F}$, every $(P,w) \in E_h$ and every $j \in \{1, \cdots, s\}$:
\[
\textrm{ if } P \in V_w, \quad \log \left| \frac{a_{j,h} g_j}{h^{k_j}}(P) \right|_w < c_{0,v} + c_{2,v} + k_j \deg(h) \cdot c_{1,v}
\]
which after summation on $j \in \{1, \cdots, s \}$ and choice of $h$ such that $(P,w) \in E_h$ proves the result.
\end{proof}
\section{Key results}
\label{sectionresultatscles}
We will now prove the key result for Runge's method, as a consequence of the Nullstellensatz. We mainly use the projective case in the rest of the paper but the affine case is both necessary for its proof and enlightening for the method we use.
\begin{prop}[Key proposition]
\hspace*{\fill}
\label{propcle}
$(a)$ (Affine version)
Let $U$ be an affine variety over $K$ and $Y_1, \cdots, Y_r$ closed subsets of $U$ defined over $K$, with intersection $Y$. For every $\ell \in \{1, \cdots, r\}$, let $g_{\ell,1}, \cdots, g_{\ell,s_\ell}$ be generators of the ideal of definition of $Y_\ell$ in $K[U]$, and $h_1, \cdots, h_s$ generators of the ideal of definition of $Y$ in $K[U]$.
For every $M_K$-bounded set $E$ of $U$ and every $M_K$-constant ${\mathcal C}_0$, there is an $M_K$-constant ${\mathcal C}$ such that for every $(P,w) \in E$ with $w$ above $v \in M_K$, one has the following dichotomy:
\begin{equation}
\label{eqdichoaff}
\max_{\substack{1 \leq \ell \leq r \\ 1 \leq j \leq s_\ell }} \log |g_{\ell,j} (P)|_w \geq c_v \quad \textrm{or} \quad \max_{1 \leq j \leq s} \log |h_j (P)|_w < c_{0,v}.
\end{equation}
$(b)$ (Projective version)
Let $X$ be a normal projective variety over $K$ and $\phi_1, \cdots, \phi_r \in K(X)$. Let $Y$ be the closed subset of $X$ defined as the intersection of the supports of the (Weil) divisors of poles of the $\phi_i$. For every tubular neighbourhood ${\mathcal V}$ of $Y$ (Definition \ref{defvoistub}), there is an $M_K$-constant ${\mathcal C}$ depending on ${\mathcal V}$ such that for every $w \in M_{\overline{K}}$ (above $v \in M_K)$ and every $P \in X(\overline{K})$,
\begin{equation}
\label{eqdichoproj}
\min_{1 \leq \ell \leq r} \log |\phi_\ell (P)|_w \leq c_v \quad \textrm{or} \quad P \in V_w.
\end{equation}
\end{prop}
This result has an immediate corollary when $Y=\emptyset$: Lemma 5 of \cite{Levin08}, restated below.
\begin{cor}[\cite{Levin08}, Lemma 5]
\hspace*{\fill}
\label{corpasdepolecommun}
Let $X$ be a normal projective variety over $K$ and $\phi_1, \cdots, \phi_r \in K(X)$ having globally no common pole. Then, there is an $M_K$-constant ${\mathcal C}$ such that for every $w \in M_{\overline{K}}$ (above $v \in M_K)$ and every $P \in X(\overline{K})$,
\begin{equation}
\label{eqpasdepolecommun}
\min_{1 \leq \ell \leq r} \log |\phi_\ell (P)|_w \leq c_v.
\end{equation}
\end{cor}
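As a toy illustration (not needed in the sequel): on $X = \P^1_K$ with $\phi_1 = x_1/x_0$ and $\phi_2 = x_0/x_1$, which have no common pole, one has $\log |\phi_1(P)|_w + \log |\phi_2(P)|_w = 0$ whenever both values are defined and nonzero, so \eqref{eqpasdepolecommun} holds with the zero $M_K$-constant.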
\begin{rem}
\hspace*{\fill}
$(a)$ As will become clear in the proof, part $(b)$ is actually part $(a)$ applied to a good cover of $X$ by $M_K$-bounded subsets of affine open subsets of $X$ (inspired by the natural example of Remark \ref{remdefMKconstantes} $(b)$).
$(b)$ Besides the fact that the results must be uniform in the places (hence the $M_K$-constants), the principle of $(a)$ and $(b)$ is simple. For $(a)$, we would like to say that if a point $P$ is sufficiently close to $Y_1, \cdots, Y_r$ (i.e. the first part of the dichotomy is not satisfied), it must be close to a point of the intersection of the $Y_\ell$, hence the generators of the intersection should be small at $P$ (second part of the dichotomy). This is not true in the affine case, as shown for example by the hyperbola $xy=1$ and the line $y=0$ in ${\mathbb{A}}^2$, which are disjoint but get arbitrarily close to each other (hence the necessity of taking a bounded set $E$ to compactify the situation), but it works in the projective case because the closed sets are then compact.
$(c)$ Corollary \ref{corpasdepolecommun} is the key for Runge's method in the case of curves in section \ref{sectionRungecourbes}. Notice that Lemma 5 of \cite{Levin08} assumed $X$ smooth, but the proof is actually exactly the same for $X$ normal. Moreover, the argument below follows the structure of Levin's proof.
$(d)$ If we replace $Y$ by $Y' \supset Y$ and ${\mathcal V}$ by a tubular neighbourhood ${\mathcal V}'$ of $Y'$, the result remains true with the same proof, which is not surprising because tubular neighbourhood of $Y'$ are larger than tubular neighbourhoods of $Y$.
\end{rem}
\begin{proof}[Proof of Proposition \ref{propcle}]
\hspace*{\fill}
$(a)$ Since the common zero locus of all the $g_{\ell,j}$ is $Y$ by hypothesis, the Nullstellensatz in $K[U]$ provides an integer $p \in {\mathbb{N}}_{>0}$ and regular functions $f_{\ell,j,m} \in K[U]$ such that for every $m \in \{1, \cdots, s\}$,
\[
\sum_{\substack{1 \leq \ell \leq r \\ 1 \leq j \leq s_\ell}} g_{\ell,j} f_{\ell,j,m} = h_m^p.
\]
As $E$ is $M_K$-bounded on $U$, all the $f_{\ell,j,m}$ are $M_K$-bounded on $E$, hence there is an auxiliary $M_K$-constant ${\mathcal C}_1$ such that for all $(P,w) \in E$,
\[
\max_{\substack{1 \leq \ell \leq r \\ 1 \leq j \leq s_\ell \\ 1 \leq m \leq s}} \log |f_{\ell,j,m} (P)|_w \leq c_{1,v},
\]
therefore
\[
|h_m(P)^p|_w = \left| \sum_{\substack{1 \leq \ell \leq r \\ 1 \leq j \leq s_\ell}} g_{\ell,j} (P) f_{\ell,j,m} (P) \right|_w \leq N^{\delta_v} e^{c_{1,v}} \max_{\substack{1 \leq \ell \leq r \\ 1 \leq j \leq s_\ell }} |g_{\ell,j} (P)|_w
\]
where $\delta_v$ is 1 if $v$ is archimedean and 0 otherwise, and $N$ the total number of generators $g_{\ell,j}$. For fixed $w$ and $P$, either $\log |h_m(P)|_w < c_{0,v}$ for all $m \in \{ 1, \cdots, s\}$ (second part of dichotomy \eqref{eqdichoaff}), or the above inequality applied to some $m \in \{1, \cdots,s\}$ gives
\[
p \cdot c_{0,v} \leq \delta_v \log(N) + c_{1,v} + \max_{\substack{1 \leq \ell \leq r \\ 1 \leq j \leq s_\ell }} \log |g_{\ell,j} (P)|_w,
\]
which is equivalent to
\[
\max_{\substack{1 \leq \ell \leq r \\ 1 \leq j \leq s_\ell }} \log |g_{\ell,j} (P)|_w \geq p \cdot c_{0,v} - \delta_v \log(N) - c_{1,v},
\]
and taking the $M_K$-constant defined by $c_v := p \cdot c_{0,v} - \delta_v \log(N) - c_{1,v}$ for every $v \in M_K$ gives exactly the first part of dichotomy \eqref{eqdichoaff}.
$(b)$ We consider $X$ as embedded in some $\P^n_K$ so that ${\mathcal V}$ is exactly the tubular neighbourhood of $Y$ in $X$ associated to an $M_K$-constant ${\mathcal C}_0$ and generators $g_1, \cdots, g_s$ for this embedding. We will use again the notations \eqref{eqdefUi}, \eqref{eqdefvarphii} and \eqref{eqdefEi}. In particular we define $X_i := X \cap U_i$ for every $i \in \{0, \cdots, n\}$. The following argument is designed to make $Y$ appear as a common zero locus of regular functions built with the $\phi_\ell$.
For every $\ell \in \{1, \cdots, r\}$, let $D_\ell$ be the positive Weil divisor of zeroes of $\phi_\ell$ on $X$. For every $i \in \{0, \cdots, n\}$, let $I_{\ell,i}$ be the ideal of $K[X_i]$ consisting of the regular functions $h$ on the affine variety $X_i$ such that $\div(h) \geq (D_\ell)_{|X_i}$, and we choose generators $h_{\ell,i,1}, \cdots, h_{\ell,i,j_{\ell,i}}$ of this ideal. The functions $h_{\ell,i,j}/(\phi_\ell)_{|X_i}$ are then regular on $X_i$ and
\[
\forall j \in \{1, \cdots, j_{\ell,i} \}, \quad \div \left( \frac{h_{\ell,i,j}}{(\phi_\ell)_{|X_i}} \right) \geq (\phi_{\ell,i})_\infty
\]
(the divisor of poles of $\phi_\ell$ on $X_i$). By construction of $I_{\ell,i}$, the minimum (prime Weil divisor by prime Weil divisor) of the $\div(h_{\ell,i,j})$ is exactly $(D_\ell)_{|X_i}$: indeed, for every finite family of distinct prime Weil divisors $D'_1, \cdots, D'_s, D''$ on $X_i$, there is a uniformizer $h$ for $D''$ of order 0 along each of the $D'_k$, otherwise the prime ideal associated to $D''$ in $X_i$ would be included in the finite union of the others. This allows us to build, for every prime divisor $D'$ of $X_i$ not in the support of $(D_\ell)_{|X_i}$, a function $h \in I_{\ell,i}$ of order $0$ along $D'$ (and of the required order along every $D'$ in the support of $(D_\ell)_{|X_i}$). Consequently, the minimum of the divisors of the $h_{\ell,i,j} / (\phi_{\ell})_{|X_i}$, being naturally the minimum of the divisors of the $h / (\phi_{\ell})_{|X_i} \, \, ( h \in I_{\ell,i})$, is exactly $(\phi_{\ell,i})_\infty$.
Thus, by definition of $Y$, for fixed $i$, the set of common zeroes of the regular functions $h_{\ell,i,j} / (\phi_\ell)_{|X_i} \, (1 \leq \ell \leq r, 1 \leq j \leq j_{\ell,i})$ on $X_i$ is $Y \cap X_i$, so some power of the ideal of definition of $Y \cap X_i$ is contained in the ideal they generate. We apply part $(a)$ of this Proposition to the $h_{\ell,i,j} / (\phi_\ell)_{|X_i} \, (1 \leq \ell \leq r, 1 \leq j \leq j_{\ell,i})$, the $g_j \circ \varphi_i \, (1 \leq j \leq s)$ and the $M_K$-constant ${\mathcal C}_0$, which gives us an $M_K$-constant ${\mathcal C}'_i$ and the following dichotomy on $X_i$ for every $(P,w) \in E_i$:
\[
\max_{\substack{1 \leq \ell \leq r \\ 1 \leq j \leq j_{\ell,i} }} \log \left| \frac{h_{\ell,i,j}}{\phi_\ell} (P) \right|_w \geq c'_{i,v} \quad \textrm{or} \quad \max_{1 \leq j \leq s} \log |g_j \circ \varphi_i (P)|_w < c_{0,v}.
\]
Now, the $h_{\ell,i,j}$ are regular on $X_i$ hence $M_K$-bounded on $E_i$, therefore there is a second $M_K$-constant ${\mathcal C}''_i$ such that for every $(P,w) \in E_i$ :
\[
\max_{\substack{1 \leq \ell \leq r \\ 1 \leq j \leq j_{\ell,i} }} \log \left| \frac{h_{\ell,i,j}}{\phi_\ell} (P) \right|_w \geq c'_{i,v} \Longrightarrow \min_{1 \leq \ell \leq r} \log |\phi_\ell (P)|_w \leq c''_{i,v}.
\]
Taking ${\mathcal C}$ as the maximum of the $M_K$-constants ${\mathcal C}''_i, 0 \leq i \leq n$, for every $(P,w) \in X(\overline{K}) \times M_{\overline{K}}$, we choose $i$ such that $(P,w) \in E_i$ and then we have the dichotomy \eqref{eqdichoproj} by definition of the tubular neighbourhood $V_w$.
\end{proof}
To finish this section, we give the explicit link between integral points on a projective scheme (relative to a divisor) and integrality expressed via rational functions on the scheme. In particular, this recovers the definition of integral points of section 2 of \cite{Levin08}.
\begin{prop}
\hspace*{\fill}
\label{propreductionamplegros}
Let ${\mathcal X}$ be a normal projective scheme over ${\mathcal O}_{K,S}$.
$(a)$ If ${\mathcal Y}$ is an effective Cartier divisor on ${\mathcal X}$ such that ${\mathcal Y}_K$ is an ample (Cartier) divisor of ${\mathcal X}_K$, there is a projective embedding $\psi : {\mathcal X}_K \rightarrow \P^n_K$ and an $M_K$-constant ${\mathcal C}$ such that
\begin{itemize}
\item[$\bullet$] The pullback by $\psi$ of the hyperplane of equation $x_0=0$ in $\P^n_K$ is ${\mathcal Y}_K$.
\item[$\bullet$] For any finite extension $L$ of $K$ and any $w \in M_L$ not above $S$,
\begin{equation}
\label{eqlienreducschemasfonctionsample}
\forall P \in ({\mathcal X} \backslash {\mathcal Y}) ({\mathcal O}_{L,w}), \quad \log \|x_{\psi(P)} \|_w \leq c_v + \log |x_{\psi(P),0}|_w .
\end{equation}
This amounts to saying that if the coordinates by $\psi$ of such a $P$ are normalised so that the first one is 1, all the other ones have $w$-norm bounded by $e^{c_v}$.
\end{itemize}
$(b)$ If ${\mathcal Y}$ is an effective Cartier divisor on ${\mathcal X}$ such that ${\mathcal Y}_K$ is a big (Cartier) divisor of ${\mathcal X}_K$, there is a strict Zariski closed subset $Z_K$ of ${\mathcal X}_K$, a morphism $\psi : {\mathcal X}_K \backslash {\mathcal Y}_K \rightarrow \P^n_K$ which induces a closed immersion of ${\mathcal X}_K \backslash Z_K$ and an $M_K$-constant ${\mathcal C}$ such that:
\begin{itemize}
\item[$\bullet$] The pullback by $\psi$ of the hyperplane of equation $x_0=0$ in $\P^n_K$ is contained in ${\mathcal Y}_K \cup Z_K$.
\item[$\bullet$] For any finite extension $L$ of $K$ and any $w \in M_L$ not above $S$, formula \eqref{eqlienreducschemasfonctionsample} holds.
\end{itemize}
\end{prop}
\begin{rem}
\hspace*{\fill}
\label{rempropamplegros}
$(a)$ This Proposition is formulated to avoid the use of local heights, but the idea is exactly that, under the hypotheses above, $P \in ({\mathcal X} \backslash {\mathcal Y}) ({\mathcal O}_{L,w})$ implies that the local height at $w$ of $P$ for the divisor ${\mathcal Y}$ is bounded.
$(b)$ The hypotheses on ampleness (or ``bigness'') are only necessary at the generic fiber. If we considered ${\mathcal Y}$ ample on ${\mathcal X}$, it would give us a result with the zero $M_K$-constant (using an embedding over ${\mathcal O}_{K,S}$ given by ${\mathcal Y}$), and an equivalence, but this is not crucial here. Once again, the auxiliary functions replace the need for a complete understanding of what happens at the finite places.
$(c)$ The only difference between the ample and big cases is hidden in the function $\psi$: in the big case, the formula still holds but does not say much for points belonging to $Z_K$, because the morphism $\psi$ is not an embedding there.
\end{rem}
\begin{proof}[Proof of Proposition \ref{propreductionamplegros}]
\hspace*{\fill}
$(a)$ As ${\mathcal Y}_K$ is ample and effective, there is a projective embedding $\psi : {\mathcal X}_K \rightarrow \P^n_K$ such that the support of the divisor ${\mathcal Y}_K$ is exactly the inverse image of the hyperplane $x_0=0$ by $\psi$. Let us fix such an embedding and consider for every $i \in \{1, \cdots, n\}$ the coordinate functions $\phi_i := (x_i/x_0) \circ \psi$ in $ K({\mathcal X}_K)$, whose poles are contained in ${\mathcal Y}_K$ by construction. Now, we choose a tubular neighbourhood ${\mathcal V}$ of ${\mathcal Y}_K$ defined by an embedding of ${\mathcal X}$ in some projective space $\P^m_{{\mathcal O}_{K,S}}$ (which can be completely unrelated to $\psi$), homogeneous generators $g_1, \cdots, g_s$ of the ideal of definition of ${\mathcal Y}$ in $\P^m_{{\mathcal O}_{K,S}}$ and the zero $M_K$-constant. By Proposition \ref{propcle} $(b)$ applied to ${\mathcal V}$ and $\phi_j$, we obtain an $M_K$-constant ${\mathcal C}_j$ such that for every finite extension $L$ of $K$ and every $w \in M_L$ (with the notations \eqref{eqdefvarphii} and \eqref{eqdefEiw}),
\[
\forall P \in {\mathcal X}(L) , \qquad \log |\phi_j(P)|_w \leq c_{j,v} \quad {\textrm{or}} \quad P \in V_w.
\]
By construction of ${\mathcal V}$ and Proposition \ref{proplienreductionpointssvaluation}, if $w$ is not above a place of $S$ and $P \in ({\mathcal X} \backslash {\mathcal Y})({\mathcal O}_{L,w})$, we necessarily have $\log |\phi_j(P)|_w \leq c_{j,v}$. Taking the maximum of the $M_K$-constants ${\mathcal C}_1, \cdots, {\mathcal C}_n$, we obtain the Proposition in the ample case.
$(b)$ The proof for big divisors is the same as part $(a)$, except that we can only extend our function $\psi$ to ${\mathcal X}_K \backslash Z_K$ for some proper Zariski closed subset $Z_K$ such that outside of this set, $\psi$ is a closed immersion. The coordinate functions $\phi_i \in K({\mathcal X}_K)$, similarly defined, also have poles contained in ${\mathcal Y}_K$. Applying the same arguments as in part $(a)$ for points $P \in {\mathcal X}(L)$, we obtain the same result.
\end{proof}
\section{The case of curves revisited}
\label{sectionRungecourbes}
In this section, we reprove the generalisation of an old theorem of Runge \cite{Runge1887} obtained by Bombieri (\cite{BombieridecompWeil} p. 305, also rewritten as Theorem 9.6.6 in \cite{BombieriGubler}), following an idea presented by Bilu in an unpublished note and mentioned for the case $K={\mathbb{Q}}$ in \cite{SchoofCatalan} (Chapter 5). The aim of this section is therefore to give a general understanding of this idea (quite different from the original proof of Bombieri), as well as to explain how it actually gives a \textit{method} to bound the heights of integral points on curves.
It is also a good starting point to understand how the intuition behind this result can be generalised to higher dimensions, which will be done in the next section.
\begin{prop}[Bombieri, 1983]
\label{propBombieri}
Let $C$ be a smooth projective algebraic curve defined over a number field $K$ and $\phi \in K(C)$ not constant.
For any finite extension $L/K$, let $r_L$ be the number of orbits of the natural action of $\operatorname{Gal}(\overline{L}/L)$ on the poles of $\phi$. For any set of places $S_L$ of $L$ containing $M_L^{\infty}$, we say that $(L,S_L)$ satisfies the \textbf{Runge condition} if
\begin{equation}
\label{eqconditionRunge}
|S_L|<r_L.
\end{equation}
Then, the union
\begin{equation}
\label{eqreunionpointsRungecondition}
\bigcup_{\substack{(L,S_L)}} \left\{ P \in C(L) \, | \, \phi(P) \in {\mathcal O}_{L,S_L} \right\},
\end{equation}
where $(L,S_L)$ runs through all the pairs satisfying the Runge condition, is \textbf{finite} and can be explicitly bounded in terms of the height $h \circ \phi$.
\end{prop}
\begin{exe}
As a concrete example, consider the modular curve $X_0(p)$ for $p$ prime and the $j$-invariant function. This curve is defined over ${\mathbb{Q}}$ and $j$ has two rational poles (which are the cusps of $X_0(p)$), hence $r_L=2$ for any choice of $L$, and we need to ensure $|M_L^{\infty}| \leq |S_L|< 2$, i.e. $|S_L|=1$. As $|M_L^{\infty}| = 1$ exactly when $L$ has a unique archimedean place, the only possibilities satisfying the Runge condition are $L = {\mathbb{Q}}$ or $L$ imaginary quadratic, with $S_L = \{ | \cdot |_{\infty} \}$.
We thus proved in \cite{LeFourn1} that for any imaginary quadratic field $L$ and any $P \in X_0(p)(L)$ such that $j(P) \in {\mathcal O}_L$, one has
\[
\log |j(P)| \leq 2 \pi \sqrt{p} + 6 \log (p) + 8.
\]
The method for general modular curves is carried out in \cite{BiluParent09} and gives explicit estimates on the height for integral points satisfying the Runge condition. This article uses the theory of modular units and, implicitly, the same proof of Bombieri's result as the one we present below.
\end{exe}
\begin{rem}
\hspace*{\fill}
\label{remRungecourbes}
$(a)$ The claim of an explicit bound deserves a clarification: it can actually be made explicit when one knows the auxiliary functions involved in the proof below well enough (which is possible in many cases, e.g. for modular curves thanks to the modular units). Furthermore, even though the theoretical proof makes use of $M_K$-constants and results of section \ref{sectionresultatscles}, they are frequently implicit in practical cases.
$(b)$ Despite the convoluted formulation of the proof below and the many auxiliary functions needed to obtain the full result, its principle is as described in the Introduction. It also gives the framework to apply Runge's method to a given pair $(C,\phi)$.
\end{rem}
\begin{proof}[Proof of Proposition \ref{propBombieri}]
We fix $K'$ a finite Galois extension of $K$ over which every pole of $\phi$ is defined. For any two distinct poles $Q,Q'$ of $\phi$, we choose by the Riemann-Roch theorem a function $g_{Q,Q'} \in K'(C)$ whose only pole is $Q$ and which vanishes at $Q'$. For every point $P$ of $C(\overline{K})$ which is not a pole of $\phi$, one has $\operatorname{ord}_P (g_{Q,Q'}) \geq 0$, thus $g_{Q,Q'}$ belongs to the intersection of the discrete valuation rings of $\overline{K}(C)$ containing $\phi$ and $\overline{K}$ (\cite{Hartshorne}, proof of Lemma I.6.5), which is exactly the integral closure of $K[\phi]$ in $\overline{K}(C)$ (\cite{AtiyahMacDonald}, Corollary 5.22). Hence, the function $g_{Q,Q'}$ is integral over $K[\phi]$, and up to multiplication by some nonzero integer, we can and will assume it is integral over ${\mathcal O}_K[\phi]$.
For any fixed finite extension $L$ of $K$ included in $\overline{K}$, we define $f_{Q,Q',L} \in L(C)$ as the product of the conjugates of $g_{Q,Q'}$ by $\operatorname{Gal}(\overline{L}/L)$. If $Q$ and $Q'$ belong to distinct orbits of poles for $\operatorname{Gal}(\overline{L}/L)$, the function $f_{Q,Q',L}$ has as its only poles the orbit of $Q$ under $\operatorname{Gal}(\overline{K}/L)$ and vanishes at the poles of $\phi$ in the orbit of $Q'$ under $\operatorname{Gal}(\overline{K}/L)$. Notice that we thus built only finitely many different functions (even with $L$ running through all finite extensions of $K$) because each $g_{Q,Q'}$ only has finitely many conjugates under $\operatorname{Gal}(K'/K)$.
Now, let ${\mathcal O}_1, \cdots, {\mathcal O}_{r_L}$ be the orbits of poles of $\phi$ and denote for any $i \in \{1, \cdots, r_L\}$ by $f_{i,L}$ a product of the $f_{Q_i, Q'_j,L}$ where $Q_i \in {\mathcal O}_i$ and $Q'_j$ runs through representatives of the orbits (except ${\mathcal O}_i$). Again, there is a finite number of possible choices, and we obtain a function $f_{i,L} \in L(C)$ having as its only poles the orbit ${\mathcal O}_i$ and vanishing at all the other poles of $\phi$. By our construction of the $g_{Q,Q'}$ and $f_{i,L}$, we can and do choose $n \in {\mathbb{N}}_{\geq 1}$ such that for every $i \in \{ 1, \cdots, r_L \}$, $\phi f_{i,L}^n$ has exactly as poles the points of ${\mathcal O}_i$ and is integral over ${\mathcal O}_K[\phi]$. This implies that for any finite place $w \in M_L$, if $|\phi(P)|_w \leq 1$ then $|f_{i,L} (P)|_w \leq 1$, but we also need such a result for archimedean places. To do this, we apply Corollary \ref{corpasdepolecommun} to $f_{i,L}/\phi^k$ and $f_{i,L}$ (for any $i$) for some $k$ such that $f_{i,L}/\phi^k$ does not have poles at ${\mathcal O}_i$, and take the maximum of the induced $M_K$-constants (Definition \ref{defMKconstante}) for any $L$ and $1 \leq i \leq r_L$. This gives an $M_K$-constant ${\mathcal C}_0$ independent of $L$ such that
\[
\forall i \in \{1, \cdots, r_L\}, \forall w \in M_{\overline{K}}, \forall P \in C(\overline{K}), \log \min \left( \left| \frac{f_{i,L}}{\phi^k} (P)\right|_w, |f_{i,L}(P)|_w \right) \leq c_{0,v} \quad (w|v \in M_K).
\]
In particular, the consequence of interest for us here is that
\begin{equation}
\label{eqmajofilplacesarchi}
\forall i \in \{1, \cdots, r_L\}, \forall w \in M_{\overline{K}}, \forall P \in C(\overline{K}), |\phi(P)|_w \leq 1 \Rightarrow \log |f_{i,L} (P)|_w \leq c_{0,v},
\end{equation}
and we can assume $c_{0,v}$ is 0 for any finite place $v$ by integrality of the $f_{i,L}$ over ${\mathcal O}_K[\phi]$.
As the sets of poles of the $f_{i,L}$ are mutually disjoint, we reapply Corollary \ref{corpasdepolecommun} for every pair $(\phi f_{i,L}^n, \phi f_{j,L}^n)$ with $1 \leq i < j \leq r_L$, which again by taking the maximum of the induced $M_K$-constants for all the possible combinations (Definition \ref{defMKconstante}) gives an $M_K$-constant ${\mathcal C}_1$ such that for every $v \in M_K$ and every $(P,w) \in C(\overline{K}) \times M_{\overline{K}}$ with $w|v$, the inequality
\begin{equation}
\label{eqineqsaufpourunefonc}
\log |(\phi \cdot f_{i,L}^n) (P)|_w \leq c_{1,v}
\end{equation}
is true for all indices $i$ except at most one (depending on the choice of $P$ and $w$).
Let us now suppose that $(L,S_L)$ is a pair satisfying the Runge condition and $P \in C(L)$ with $\phi(P) \in {\mathcal O}_{L,S_L}$. By integrality over ${\mathcal O}_K[\phi]$, for every $i \in \{1, \cdots, r_L \}$, $|f_{i,L}(P)|_w \leq 1$ for every place $w \in M_L \backslash S_L$. For every place $w \in S_L$, there is at most one index $i$ not satisfying \eqref{eqineqsaufpourunefonc}, hence by the Runge condition and the pigeonhole principle, there remains one index $i$ (depending on $P$) such that
\begin{equation}
\label{eqmajophifiL}
\forall w \in M_L, \quad \log |\phi(P) f_{i,L}^n (P)|_w \leq c_{1,v}.
\end{equation}
With \eqref{eqmajofilplacesarchi} and \eqref{eqmajophifiL}, we have obtained all the auxiliary results we need to finish the proof. By the product formula,
\begin{eqnarray*}
0 & = & \sum_{w \in M_L} n_w \log |f_{i,L} (P)|_w \\
& = & \sum_{\substack{w \in M_L \\ |\phi(P)|_w >1 }} n_w \log |f_{i,L} (P)|_w + \sum_{\substack{w \in M_L^{\infty} \\ |\phi(P)|_w \leq 1}} n_w \log |f_{i,L} (P)|_w + \sum_{\substack{w \in M_L \! \! \backslash M_L^{\infty} \\ |\phi(P)|_w \leq 1}} n_w \log |f_{i,L} (P)|_w.
\end{eqnarray*}
Here, the first sum on the right side will be linked to the height $h \circ \phi$ and the third sum is nonpositive by integrality of the $f_{i,L}$, so we only have to bound the second sum. From \eqref{eqmajofilplacesarchi} and \eqref{eqineqinductionMKconstante}, we obtain
\[
\sum_{\substack{w \in M_L^{\infty} \\ |\phi(P)|_w \leq 1}} n_w \log |f_{i,L} (P)|_w \leq \sum_{\substack{w \in M_L^{\infty} \\ |\phi(P)|_w \leq 1}} n_w c_{0,v} \leq [L:K] \sum_{v \in M_K^{\infty}} n_v c_{0,v}.
\]
On the other hand, by \eqref{eqmajophifiL} (and \eqref{eqineqinductionMKconstante} again), we have
\begin{eqnarray*}
n \cdot \sum_{\substack{w \in M_L \\ |\phi(P)|_w >1 }} n_w \log |f_{i,L} (P)|_w & = & \sum_{\substack{w \in M_L \\ |\phi(P)|_w >1 }} n_w \log |\phi f_{i,L}^n(P)|_w - \sum_{\substack{w \in M_L \\ |\phi(P)|_w >1 }} n_w \log |\phi(P)|_w \\
& \leq & \left([L:K] \sum_{v \in M_K} n_v c_{1,v} \right) - [L:{\mathbb{Q}}] h(\phi(P)).
\end{eqnarray*}
Hence, we obtain
\begin{eqnarray*}
0 & \leq & [L:K] \sum_{v \in M_K} n_v c_{1,v} - [L:{\mathbb{Q}}] h(\phi(P)) + [L:K] n \sum_{v \in M_K^{\infty}} n_v c_{0,v},
\end{eqnarray*}
which is equivalent to
\[
h (\phi(P)) \leq \frac{1}{[K : {\mathbb{Q}}]}\sum_{v \in M_K} n_v (c_{1,v} + n c_{0,v}).
\]
We thus obtained a bound on $h (\phi(P))$ independent of the choice of $(L,S_L)$ satisfying the Runge condition. Moreover, as $S_L \supseteq M_L^{\infty}$ and $[L:{\mathbb{Q}}] \leq 2 |M_L^{\infty}|$, the Runge condition also bounds the degree:
\[
[L : {\mathbb{Q}}] \leq 2 |M_L^{\infty}| \leq 2 |S_L| < 2 r_L \leq 2 r,
\]
where $r$ is the total number of poles of $\phi$. As there are only finitely many algebraic numbers of bounded height and bounded degree and $\phi$ is not constant, we get the finiteness.
\end{proof}
\section{The main result: tubular Runge theorem}
\label{sectiontubularRunge}
We will now present our version of Runge's theorem with tubular neighbourhoods, which generalises Theorem 4 $(b)$ and $(c)$ of \cite{Levin08}. As its complete formulation is quite lengthy, we indicate the different hypotheses by the letter $H$ and the results by the letter $R$ to simplify the explanation of all the parts afterwards. The key condition for integral points, generalising the Runge condition of Proposition \ref{propBombieri}, is indicated by the letters TRC.
We recall that the crucial notion of tubular neighbourhood is explained in Definitions \ref{defvoistub} and \ref{defhorsdunvoistub}, and we advise the reader to look at the simplified version of this theorem stated in the Introduction to get more insight if necessary.
\begin{thm}[Tubular Runge theorem]
\hspace*{\fill}
\label{thmRungetubulaire}
\textbf{(H0)} Let $K$ be a number field, $S_0$ a set of places of $K$ containing $M_K^{\infty}$ and ${\mathcal O}$ the integral closure of ${\mathcal O}_{K,S_0}$ in some finite Galois extension $K'$ of $K$.
\textbf{(H1)} Let ${\mathcal X}$ be a normal projective scheme over ${\mathcal O}_{K,S_0}$ and $D_1, \cdots, D_r$ be effective Cartier divisors on ${\mathcal X}_{\mathcal O} = {\mathcal X} \times_{{\mathcal O}_{K,S_0}} {\mathcal O}$ such that $D_{\mathcal O}= \bigcup_{i=1}^r D_i$ is the scalar extension to ${\mathcal O}$ of some Cartier divisor $D$ on ${\mathcal X}$, and that $\operatorname{Gal}(K'/K)$ permutes the generic fibers $(D_i)_{K'}$. For every extension $L/K$, we denote by $r_L$ the number of orbits of $(D_1)_{K'}, \cdots, (D_r)_{K'}$ for the action of $\operatorname{Gal}(K'L/L)$.
\textbf{(H2)} Let $Y$ be a closed sub-$K$-scheme of ${\mathcal X}_K$ and ${\mathcal V}$ be a tubular neighbourhood of $Y$ in ${\mathcal X}_K$. Let $m_Y \in {\mathbb{N}}$ be the minimal number such that the intersection of any $(m_Y+1)$ of the divisors $(D_i)_{K'}$ amongst the $r$ possible ones is included in $Y_{K'}$.
\textbf{(TRC)} The \textbf{tubular Runge condition} for a pair $(L,S_L)$, where $L/K$ is finite and $S_L$ contains all the places above $S_0$, is
\[
m_Y |S_L| < r_L.
\]
Under these hypotheses and notations, the results are the following:
\textbf{(R1)} If $(D_1)_{K'}, \cdots , (D_r)_{K'}$ are ample divisors, the set
\begin{equation}
\label{eqensfiniRungetubample}
\bigcup_{(L,S_L)} \{P \in ({\mathcal X} \backslash D) ({\mathcal O}_{L,S_L}) \, | \, P \notin {\mathcal V} \},
\end{equation}
where $(L,S_L)$ goes through all the pairs satisfying the tubular Runge condition, is \textbf{finite}.
\textbf{(R2)} If $(D_1)_{K'}, \cdots, (D_r)_{K'}$ are big divisors, there exists a proper closed subset $Z_{K'}$ of ${\mathcal X}_{K'}$ such that the set
\[
\left( \bigcup_{(L,S_L)} \{P \in ({\mathcal X} \backslash D) ({\mathcal O}_{L,S_L}) \, | \, P \notin {\mathcal V} \} \right) \backslash Z_{K'} (\overline{K}),
\]
where $(L,S_L)$ goes through all the pairs satisfying the tubular Runge condition, is \textbf{finite}.
\end{thm}
We separated the comments about Theorem \ref{thmRungetubulaire} into the two remarks below: the first one explains its hypotheses and results, the second compares it with other theorems.
\begin{rem}
\hspace*{\fill}
\label{remRungetubulaire}
$(a)$
The need for the extensions of scalars to $K'$ and ${\mathcal O}$ in \textit{\textbf{(H0)}} and \textit{\textbf{(H1)}} is the analogue of the fact that the poles of $\phi$ are not necessarily $K$-rational in the case of curves, hence the assumption that the $(D_i)_{K'}$ are all conjugates by $\operatorname{Gal}(K'/K)$ and the definition of $r_L$ given in \textit{\textbf{(H1)}}. It will induce technical additions of the same flavour as the auxiliary functions $f_{Q,Q',L}$ in the proof of Bombieri's theorem (Proposition \ref{propBombieri}).
$(b)$ The motivation for the tubular Runge condition is the following: imitating the principle of proof for curves (Remark \ref{remRungecourbes} $(b)$), if $P \in ({\mathcal X} \backslash D) ({\mathcal O}_{L,S_L})$, we can say that at the places $w$ of $M_L \backslash S_L$, this point is ``$w$-adically far'' from $D$. Now, the divisors $(D_1)_{K'}, \cdots, (D_r)_{K'}$ can intersect (which does not happen for distinct points on curves), so for $w \in S_L$, this point $P$ can be ``$w$-adically close'' to many divisors at the same time. More precisely, it can be ``$w$-adically close'' to at most $m$ such divisors, where $m=m_{\emptyset}$, i.e. the largest number such that there are $m$ divisors among $D_1, \cdots, D_r$ whose set-theoretic intersection is nonempty. This number is also defined in \cite{Levin08}, but we found that for our applications, it often makes the Runge condition too strict. Therefore, we allow the use of the closed subset $Y$ in \textbf{\textit{(H2)}}, and if we assume that our point $P$ is never too close to $Y$ (i.e. $P \notin {\mathcal V}$), this $m$ goes down to $m_Y$ by definition. Thus, we only need to take out $m_Y$ divisors per place $w$ in $S_L$, hence the tubular Runge condition $m_Y |S_L|< r_L$ (see the numerical illustration after this remark). Actually, one can even mix the Runge conditions, i.e. assume that $P$ is close to $Y$ at exactly $s_1$ places, and close to one of the divisors (but not $Y$) at $s_2$ places: following along the lines of the proof below, we obtain finiteness under the Runge condition $s_1 m_{\emptyset} + s_2 m_Y < r_L$.
$(c)$ The last main difference with the case of curves is the assumption of ample or big divisors, respectively in \textit{\textbf{(R1)}} and \textit{\textbf{(R2)}}. In both cases, such an assumption is necessary twice. First, we need it to translate by Proposition \ref{propreductionamplegros} the integrality condition on schemes to an integrality expression on auxiliary functions (such as in section 2 of \cite{Levin08}) to use the machinery of $M_K$-constants and the key result (Proposition \ref{propcle}). Then, we need it to ensure that after obtaining a bound on the heights associated to the divisors, it implies finiteness (implicit in Proposition \ref{propreductionamplegros}, see also Remark \ref{rempropamplegros} $(a)$).
\end{rem}
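To make the counting in Remark \ref{remRungetubulaire} $(b)$ concrete, here is a hypothetical numerical illustration (not coming from a specific geometric situation): assume for simplicity that $r_L = r = 6$, that some three of the divisors have nonempty common intersection (so that $m_{\emptyset} = 3$), but that any such triple intersection is contained in $Y_{K'}$ while some pair of divisors meets outside $Y_{K'}$ (so that $m_Y = 2$). Then
\[
m_{\emptyset} |S_L| < r_L \iff |S_L| \leq 1, \qquad m_Y |S_L| < r_L \iff |S_L| \leq 2.
\]
As $S_L \supseteq M_L^{\infty}$, the first condition restricts $L$ to ${\mathbb{Q}}$ or an imaginary quadratic field, whereas the tubular Runge condition also allows, e.g., real quadratic fields, at the price of only considering the integral points outside ${\mathcal V}$.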
\begin{rem}
\hspace*{\fill}
\label{remcomparaisonCLZetstratification}
$(a)$ This theorem bears some resemblance to Theorem CLZ of \cite{CorvajaLevinZannier} (where our closed subset $Y$ would be the analogue of the ${\mathcal Y}$ in that article), so let us point out the differences. In Theorem CLZ, there is no hypothesis on the set of places $S_L$, no additional hypothesis of integrality (appearing for us in the form of a tubular neighbourhood), and the divisors are assumed to be normal crossing divisors, which is replaced in our case by the tubular Runge condition. As for the results themselves, the finiteness formulated by CLZ depends on the set $S_L$ (that is, it is not clear how it would prove that a union of sets such as the one in our Theorem is finite). Finally, the techniques employed are greatly different: Theorem CLZ uses Schmidt's subspace theorem, which is noneffective, whereas our method can be made effective if one knows the involved auxiliary functions. It might be possible (and worthy of interest) to build some bridges between the two results and the techniques involved.
$(b)$ Theorem \ref{thmRungetubulaire} can be seen as a stratification of Runge-like results depending on the dimension of the intersection of the involved divisors: at one extreme, the intersection is empty, and we get back Theorem 4 $(b)$ and $(c)$ of \cite{Levin08}. At the other extreme, the intersection is a divisor (ample or big), and the finiteness is automatic by the hypothesis for points not belonging to the tubular neighbourhood (see Remark \ref{remhorsvoistub}). Of course, this stratification is not relevant in the case of curves. In another perspective, for a fixed closed subset $Y$, Theorem \ref{thmRungetubulaire} is more a concentration result for integral points than a finiteness result, as it means that even if we choose a tubular neighbourhood ${\mathcal V}$ of $Y$ as small as possible around $Y$, there is only a finite number of integral points in the set \eqref{eqensfiniRungetubample}, i.e. these integral points (ignoring the hypothesis $P \notin {\mathcal V}$) must concentrate around $Y$ (at least at one of the places $w \in M_L$). Specific examples will be given in sections \ref{sectionapplicationsSiegel} and \ref{sectionexplicitRunge}.
\end{rem}
Let us now prove Theorem \ref{thmRungetubulaire}, following the ideas outlined in Remark \ref{remRungetubulaire}.
\begin{proof}[Proof of Theorem \ref{thmRungetubulaire}]
\hspace*{\fill}
\textit{\textbf{(R1)}} Let us first build the embeddings we need. For every subextension $K''$ of $K'/K$, the action of $\operatorname{Gal}(K'/K'')$ on the divisors $(D_1)_{K'}, \cdots, (D_r)_{K'}$ has orbits denoted by $O_{K'',1}, \cdots, O_{K'',r_{K''}}$. Notice that any $m_Y+1$ such orbits still have their global intersection included in $Y$: regrouping the divisors by orbits does not change this fact.
For each such orbit, the sum of its divisors is ample by hypothesis and comes from an effective Cartier divisor on ${\mathcal X}_{K''}$, hence one can choose by Proposition \ref{propreductionamplegros} an appropriate embedding $\psi_{K'',i} : {\mathcal X}_{K''} \rightarrow \P^{n_i}_{K''}$, whose coordinate functions (denoted by $\phi_{K'',i,j} = (x_j/x_0) \circ \psi_{K'',i}$, $1 \leq j \leq n_i$) are small on integral points of $({\mathcal X}_{\mathcal O} \backslash O_{K'',i})$. We will denote by ${\mathcal C}_0$ the maximum of the (induced) $M_K$-constants obtained from Proposition \ref{propreductionamplegros} for all possible $K''/K$ and orbits $O_{K'',i} \, (1 \leq i \leq r_{K''})$. The important point is that for any extension $L/K$, any $v \in M_K \backslash S_0$, any place $w \in M_L$ above $v$ and any $P \in ({\mathcal X} \backslash D) ({\mathcal O}_{L,w})$, choosing $L'=K' \cap L$, one has
\begin{equation}
\label{eqinterRungetousplongements}
\max_{\substack{1 \leq i \leq r_L \\ 1 \leq j \leq n_i}} \log |\phi_{L',i,j} (P)|_w \leq c_{0,v}.
\end{equation}
This is the first step towards a bound on the height of one of the $\psi_{K'',i} (P)$. For fixed $P$, we only have to do so for one of the $i \in \{1, \cdots, r_L \}$, as long as the bound is uniform in the choice of $(L,S_L)$ (and $P$), to obtain finiteness, as each $\psi_{K'',i}$ is an embedding. As \eqref{eqinterRungetousplongements} already takes care of the places outside $S_L$, one only needs to bound the coordinate functions at the places $w$ of $S_L$, which is what we will do now.
For a subextension $K''$ of $K'/K$ again, by hypothesis \textbf{\textit{(H2)}} (and especially the definition of $m_Y$), taking any set ${\mathcal I}$ of $m_Y+1$ couples $(i,j)$, $1 \leq i \leq r_{K''}$, $j \in \{1, \cdots, n_i\}$, with $m_Y+1$ different indices $i$, and considering the rational functions $\phi_{K'',i,j}, (i,j) \in {\mathcal I}$, whose common poles are included in $Y$ by hypothesis, we can apply Proposition \ref{propcle} to these functions and the tubular neighbourhood ${\mathcal V} = (V_w)_{w \in M_{\overline{K}}}$. Denoting by ${\mathcal C}_1$ the maximum of the induced $M_K$-constants (over all the possible $K''$ and sets ${\mathcal I}$), we just proved that for every subextension $K''$ of $K'/K$, every place $w \in M_{\overline{K}}$ (above $v \in M_K$) and any $P \in {\mathcal X}(\overline{K}) \backslash V_w$, the inequality
\begin{equation}
\label{eqineqfaussepourauplusmY}
\max_{1 \leq j \leq n_i} \log |\phi_{K'',i,j} (P)|_w \leq c_{1,v}
\end{equation}
is true except for at most $m_Y$ different indices $i \in \{1, \cdots, r_{K''} \}$.
Now, let us consider a pair $(L,S_L)$ satisfying the tubular Runge condition $m_Y |S_L| < r_L$ and denote $L' = K' \cap L$ again. For $P \in ({\mathcal X} \backslash D) ({\mathcal O}_{L,S_L})$ not belonging to ${\mathcal V}$, by \eqref{eqinterRungetousplongements}, \eqref{eqineqfaussepourauplusmY} and the tubular Runge condition, there remains an index $i \in \{1, \cdots, r_L\}$ (depending on $P$) such that
\[
\forall w \in M_L, \quad \max_{1 \leq j \leq n_i} \log |\phi_{L',i,j} (P)|_w \leq \max(c_{0,v},c_{1,v}) \quad (w | v \in M_K).
\]
This immediately gives a bound on the height of $\psi_{L',i}(P)$, independent of the choice of the pair $(L,S_L)$ (except through $L' = K' \cap L$), and this morphism is an embedding, hence the finiteness of the set of points
\[
\bigcup_{(L,S_L)} \{P \in ({\mathcal X} \backslash D) ({\mathcal O}_{L,S_L}) \, | \, P \notin {\mathcal V} \},
\]
where $(L,S_L)$ goes through all the pairs satisfying the tubular Runge condition, because $[L:{\mathbb{Q}}]$ is also bounded by this condition.
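For later reference, let us sketch the explicit form of this bound (assuming, as in the proof of Proposition \ref{propBombieri}, the inequality $\sum_{w | v} n_w \leq [L:K] n_v$ underlying \eqref{eqineqinductionMKconstante}): with the standard Weil height on $\P^{n_i}$,
\[
h(\psi_{L',i}(P)) = \frac{1}{[L:{\mathbb{Q}}]} \sum_{w \in M_L} n_w \log \max \left(1, \max_{1 \leq j \leq n_i} |\phi_{L',i,j}(P)|_w \right) \leq \frac{1}{[K:{\mathbb{Q}}]} \sum_{v \in M_K} n_v \max(c_{0,v}, c_{1,v},0),
\]
which is indeed uniform in $(L,S_L)$.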
\textit{\textbf{(R2)}}
The proof is the same as for \textit{\textbf{(R1)}}, except that we have to exclude a closed subset of ${\mathcal X}_{K'}$ for every big divisor involved, and their union will be denoted by $Z_{K'}$. The arguments above hold for every point $P \notin Z_{K'} (\overline{K})$ (both for the expression of integrality by auxiliary functions, and for the conclusion and finiteness outside of this closed subset), using again Propositions \ref{propreductionamplegros} and \ref{propcle}.
\end{proof}
\section{Reminders on Siegel modular varieties}
\label{sectionrappelsSiegel}
In this section, we recall the classical constructions and results for the Siegel modular varieties, parametrising principally polarised abelian varieties with a level structure. Most of these results are extracted (or easily deduced) from the following general references: Chapter V of \cite{CornellSilvermanArithmeticGeometry} for the basic notions on abelian varieties, \cite{Debarre99} for the complex tori, their line bundles, theta functions and moduli spaces, Chapter II of \cite{MumfordTata} for the classical complex theta functions and \cite{MumfordTataII} for their links with theta divisors, and Chapter V of \cite{ChaiFaltings} for abelian schemes and their moduli spaces.
Unless specified, all the vectors of ${\mathbb{Z}}^g, {\mathbb{R}}^g$ and ${\mathbb{C}}^g$ are assumed to be row vectors.
\subsection{Abelian varieties and Siegel modular varieties}
\label{subsecabvarSiegelmodvar}
\begin{defi}[Abelian varieties and polarisation]
\hspace*{\fill}
\label{defibaseabvar}
\begin{itemize}
\item[$\bullet$] An \textit{abelian variety} $A$ over a field $k$ is a connected projective algebraic group over $k$. Each abelian variety $A_{/k}$ has a dual abelian variety denoted by $\widehat{A} = \operatorname{Pic}^0 (A/k)$ (\cite{CornellSilvermanArithmeticGeometry}, section V.9).
\item[$\bullet$] A \textit{principal polarisation} is an isomorphism $\lambda : A \rightarrow \widehat{A}$ such that there exists a line bundle $L$ on $A_{\overline{k}}$ with $\dim H^0(A_{\overline{k}},L)=1$ and $\lambda$ is the morphism
\[
\fonction{\lambda}{A_{\overline{k}}}{\widehat{A_{\overline{k}}}}{x}{T_x^* L \otimes L^{-1}}
\]
(\cite{CornellSilvermanArithmeticGeometry}, section V.13).
\item[$\bullet$] Given a pair $(A,\lambda)$, for every $n \geq 1$ prime to $\textrm{char}(k)$, we can define the \textit{Weil pairing}
\[
A[n] \times A[n] \rightarrow \mu_n (\overline{k}),
\]
where $A[n]$ is the $n$-torsion of $A(\overline{k})$ and $\mu_n$ the group of $n$-th roots of unity in $\overline{k}$. It is alternating and nondegenerate (\cite{CornellSilvermanArithmeticGeometry}, section V.16).
\item[$\bullet$] Given a pair $(A,\lambda)$, for $n \geq 1$ prime to $\textrm{char}(k)$, a \textit{symplectic level $n$ structure on }$A[n]$ is a basis $\alpha_n$ of $A[n]$ in which the matrix of the Weil pairing is
\[
J = \begin{pmatrix} 0 & I_g \\
- I_g & 0
\end{pmatrix}.
\]
\item[$\bullet$] Two triples $(A,\lambda,\alpha_n)$ and $(A',\lambda',\alpha'_n)$ of principally polarised abelian varieties over $k$ with level $n$ structures are \textit{isomorphic} if there is an isomorphism of abelian varieties $\phi : A \rightarrow A'$ such that $\phi^* \lambda' = \lambda$ and $\phi^* \alpha'_n = \alpha_n$.
\end{itemize}
\end{defi}
In the case of complex abelian varieties, the previous definitions can be made more explicit.
\begin{defi}[Complex abelian varieties and symplectic group]
\hspace*{\fill}
\label{deficomplexabvar}
Let $g \geq 1$.
\begin{itemize}
\item[$\bullet$] The \textit{Siegel upper half-space of degree }$g$, denoted by ${\mathcal H}_g$, is the set of matrices
\begin{equation}
\label{eqdefdemiespaceSiegel}
{\mathcal H}_g := \{ \tau \in M_g ({\mathbb{C}}) \, | \, {}^t \tau = \tau \, \, \textrm{and} \, \, \Im \tau >0 \},
\end{equation}
where $\Im \tau >0$ means that this symmetric matrix of $M_g ({\mathbb{R}})$ is positive definite. This space is an open subset of $M_g({\mathbb{C}})$.
\item[$\bullet$] For any $\tau \in {\mathcal H}_g$, we define
\begin{equation}
\Lambda_\tau := {\mathbb{Z}}^g + {\mathbb{Z}}^g \tau \quad \textrm{and} \quad A_\tau := {\mathbb{C}}^g / \Lambda_\tau.
\end{equation}
Let $L_\tau$ be the line bundle on $A_\tau$ defined as the quotient of ${\mathbb{C}}^g \times {\mathbb{C}}$ by the action of $\Lambda_\tau$ given by
\begin{equation}
\label{eqdeffibresurAtau}
\forall p,q \in {\mathbb{Z}}^g, \quad (p \tau + q) \cdot (z,t) = \left(z+p \tau + q, e^{ - i \pi p \tau {}^tp - 2 i \pi p {}^t z} t \right).
\end{equation}
Then, $L_\tau$ is an ample line bundle on $A_\tau$ such that $\dim H^0 (A_\tau,L_\tau)=1$, hence $A_\tau$ is a complex abelian variety and $L_\tau$ induces a principal polarisation denoted by $\lambda_\tau$ on $A_\tau$ (see for example \cite{Debarre99}, Theorem VI.1.3). We also denote by $\pi_\tau : {\mathbb{C}}^g \rightarrow A_\tau$ the quotient morphism.
\item[$\bullet$] For every $n \geq 1$, the Weil pairing $w_{\tau,n}$ associated to $(A_\tau,\lambda_\tau)$ on $A_\tau[n]$ is defined by
\[
\fonction{w_{\tau,n}}{A_\tau[n] \times A_\tau[n]}{\mu_n ({\mathbb{C}})}{(\overline{x},\overline{y})}{e^{ 2 i \pi n w_\tau(x,y)}},
\]
where $x,y \in {\mathbb{C}}^g$ have images $\overline{x}, \overline{y}$ by $\pi_\tau$, and $w_\tau$ is the ${\mathbb{R}}$-bilinear form on ${\mathbb{C}}^g \times {\mathbb{C}}^g$ (so that $w_\tau(\Lambda_\tau \times \Lambda_\tau) = {\mathbb{Z}}$) defined by
\[
w_\tau (x,y) := \Re(x) \cdot \Im (\tau)^{-1} \cdot {}^t \Im(y) - \Re(y) \cdot \Im (\tau)^{-1} \cdot {}^t \Im(x)
\]
(also readily checked by making explicit the construction of the Weil pairing).
\item[$\bullet$] Let $(e_1, \cdots, e_g)$ be the canonical basis of ${\mathbb{C}}^g$. The family
\begin{equation}
\label{eqdefalphataun}
(\pi_\tau(e_1/n), \cdots, \pi_\tau(e_g/n), \pi_\tau(e_1 \cdot \tau/n), \cdots, \pi_\tau(e_g \cdot \tau/n))
\end{equation}
is a symplectic level $n$ structure on $(A_\tau, \lambda_\tau)$, denoted by $\alpha_{\tau,n}$.
\item[$\bullet$] For any commutative ring $A$, the \textit{symplectic group of order $g$ over $A$}, denoted by $\operatorname{Sp}_{2g} (A)$, is the subgroup of $\operatorname{GL}_{2g}(A)$ defined by
\begin{equation}
\label{eqdefgroupesymplectique}
\operatorname{Sp}_{2g} (A) := \{ M \in \operatorname{GL}_{2g} (A) \, \, | \, \, {}^t M J M = J \}, \qquad J:= \begin{pmatrix} 0 & I_g \\- I_g & 0 \end{pmatrix}.
\end{equation}
For every $n \geq 1$, the \textit{symplectic principal subgroup of degree $g$ and level $n$}, denoted by $\Gamma_g(n)$, is the subgroup of $\operatorname{Sp}_{2g} ({\mathbb{Z}})$ consisting of the matrices congruent to $I_{2g}$ modulo $n$. For every $\gamma = \begin{pmatrix}A & B \\ C & D \end{pmatrix} \in \operatorname{Sp}_{2g} ({\mathbb{R}})$ and every $\tau \in {\mathcal H}_g$, we define
\begin{equation}
\label{eqdefjetactionsymplectique}
j_\gamma (\tau) = C \tau + D \in \operatorname{GL}_g ({\mathbb{C}}), \quad \textrm{and} \quad \gamma \cdot \tau = (A \tau + B)(C \tau + D)^{-1},
\end{equation}
which defines a left action by biholomorphisms of $\operatorname{Sp}_{2g} ({\mathbb{R}})$ on ${\mathcal H}_g$, and $(\gamma,\tau) \mapsto j_\gamma(\tau)$ is a left cocycle for this action, i.e. $j_{\gamma \gamma'}(\tau) = j_\gamma (\gamma' \cdot \tau) j_{\gamma'}(\tau)$ (\cite{Klingen}, Proposition I.1).
\item[$\bullet$] For every $g \geq 2$, $n \geq 1$ and $k \geq 1$, a \textit{Siegel modular form of degree $g$, level $n$ and weight $k$} is a holomorphic function $f$ on ${\mathcal H}_g$ such that
\begin{equation}
\label{eqdefSiegelmodularform}
\forall \gamma \in \Gamma_g(n), \quad f (\gamma \cdot z) = \det(j_\gamma(z))^k f(z).
\end{equation}
\end{itemize}
\end{defi}
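To fix ideas on these definitions, consider the case $g=1$: for $M = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$, the condition ${}^t M J M = J$ amounts to $ad - bc = 1$, so that
\[
\operatorname{Sp}_{2} (A) = \operatorname{SL}_2(A), \qquad \gamma \cdot \tau = \frac{a \tau + b}{c \tau + d}, \qquad j_\gamma(\tau) = c \tau + d,
\]
and we recover the usual action of $\operatorname{SL}_2({\mathbb{R}})$ on the upper half-plane ${\mathcal H}_1$, together with the classical automorphy factor of modular forms.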
The reason for this seemingly partial description of the complex abelian varieties is that the $(A_\tau,\lambda_\tau)$ described above actually make up all the principally polarised complex abelian varieties up to isomorphism. The following results can be found in Chapter VI of \cite{Debarre99} except the last point which is straightforward.
\begin{defiprop}[Uniformisation of complex abelian varieties]
\hspace*{\fill}
\label{defipropuniformcomplexabvar}
\begin{itemize}
\item[$\bullet$] Every principally polarised complex abelian variety of dimension $g$ with symplectic structure of level $n$ is isomorphic to some triple $(A_\tau, \lambda_\tau,\alpha_{\tau,n})$ where $\tau \in {\mathcal H}_g$.
\item[$\bullet$] For every $n \geq 1$, two triples $(A_\tau,\lambda_\tau,\alpha_{\tau,n})$ and $(A_{\tau'},\lambda_{\tau'},\alpha_{\tau',n})$ are isomorphic if and only if there exists $\gamma \in \Gamma_g(n)$ such that $\gamma \cdot \tau = \tau'$, and then such an isomorphism is given by
\[
\fonctionsansnom{A_\tau}{A_{\tau'}}{z \! \mod \Lambda_\tau}{z \! \cdot j_\gamma(\tau)^{-1} \mod \Lambda_{\tau'}}.
\]
\item[$\bullet$] The \textit{Siegel modular variety of degree $g$ and level $n$} is the quotient $A_g(n)_{\mathbb{C}} := \Gamma_g(n) \backslash {\mathcal H}_g$. From the previous result, it is the moduli space of principally polarised complex abelian varieties of dimension $g$ with a symplectic level $n$ structure. As a quotient, it also inherits a structure of normal analytic space (with finite quotient singularities) of dimension $g(g+1)/2$, because $\Gamma_g(n)$ acts properly discontinuously on ${\mathcal H}_g$.
\item[$\bullet$] For every positive divisor $m$ of $n$, the natural morphism $A_g(n)_{\mathbb{C}} \rightarrow A_g(m)_{\mathbb{C}}$ induced by the identity of ${\mathcal H}_g$ corresponds in terms of moduli to multiplying the symplectic basis $\alpha_{\tau,n}$ by $n/m$, thus obtaining $\alpha_{\tau,m}$.
\item[$\bullet$] For every $g \geq 1$ and $n \geq 1$, the quotient of ${\mathcal H}_g \times {\mathbb{C}}$ by the action of $\Gamma_g(n)$ defined as
\begin{equation}
\label{eqdeffibreL}
\gamma \cdot (\tau,t) = (\gamma \cdot \tau, t / \det( j_\gamma(\tau)))
\end{equation}
is a variety over $A_g(n)_{\mathbb{C}}$ denoted by $L$. For $k$ large enough (or if $n \geq 3$, for every $k$), $L^{\otimes k}$ is a line bundle over $A_g(n)_{\mathbb{C}}$, hence $L$ is a ${\mathbb{Q}}$-line bundle over $A_g(n)_{\mathbb{C}}$, called the \textit{line bundle of modular forms of weight one} over $A_g(n)_{\mathbb{C}}$. By definition \eqref{eqdefSiegelmodularform}, for every $k \geq 1$, the global sections of $L^{\otimes k}$ are the Siegel modular forms of degree $g$, level $n$ and weight $k$.
\end{itemize}
\end{defiprop}
Let us now present the compactification of $A_g(n)_{\mathbb{C}}$ we will use, that is, the Satake compactification (for a complete description, see section 3 of \cite{Namikawa}).
\begin{defiprop}[Satake compactification]
\hspace*{\fill}
\label{defipropSatakecompactification}
Let $g \geq 1$ and $n \geq 1$. The normal analytic space $A_g(n)_{\mathbb{C}}$ admits a compactification called \textit{Satake compactification} and denoted by $A_g(n)^S_{\mathbb{C}}$, satisfying the following properties.
$(a)$ $A_g(n)^S_{\mathbb{C}}$ is a compact normal analytic space (of dimension $g(g+1)/2$, with finite quotient singularities) containing $A_g(n)_{\mathbb{C}}$ as an open subset, whose boundary $\partial A_g(n)_{\mathbb{C}} := A_g(n)^S_{\mathbb{C}} \backslash A_g(n)_{\mathbb{C}}$ is of codimension $g$ (see \cite{CartanSatake57} for details).
$(b)$ As a normal analytic space, $A_g(n)^S_{\mathbb{C}}$ is a projective algebraic variety. More precisely, for ${\textrm{M}}_g(n)$ the graded ring of Siegel modular forms of degree $g$ and level $n$, $A_g(n)^S_{\mathbb{C}}$ is canonically isomorphic to $\operatorname{Proj}_{\mathbb{C}} {\textrm{M}}_g (n)$ (\cite{CartanPlongements57}, ``théorème fondamental'').
In particular, one can obtain naturally $A_g(n)^S_{\mathbb{C}}$ by fixing for some large enough weight $k$ a basis of modular forms of ${\textrm{M}}_g(n)$ of weight $k$ and evaluating them all on $A_g(n)_{\mathbb{C}}$ to embed it in a projective space, so that $A_g(n)^S_{\mathbb{C}}$ is the closure of the image of the embedding in this projective space.
$(c)$ The ${\mathbb{Q}}$-line bundle $L$ of modular forms of weight 1 on $A_g(n)_{\mathbb{C}}$ extends naturally to an ample ${\mathbb{Q}}$-line bundle on $A_g(n)^S_{\mathbb{C}}$, still denoted by $L$ (this is a direct consequence of $(b)$).
\end{defiprop}
\subsection{Further properties of Siegel modular varieties}
\label{subsecfurtherpropSiegelmodvar}
As we are interested in the reduction of abelian varieties over number fields, one needs a good model of $A_g(n)_{\mathbb{C}}$ over integer rings, as well as some knowledge of the geometry of $A_g(n)_{\mathbb{C}}$. The integral models below and their properties are given in Chapter V of \cite{ChaiFaltings}.
\begin{defi}[Abelian schemes]
\hspace*{\fill}
\label{defabelianscheme}
$(a)$ An \textit{abelian scheme} $A \rightarrow S$ is a smooth proper group scheme whose fibers are geometrically connected. It also has a natural \textit{dual} abelian scheme $\widehat{A} = \operatorname{Pic}^0 (A/S)$, and it is \textit{principally polarised} if it is endowed with an isomorphism $\lambda : A \rightarrow \widehat{A}$ such that at every geometric point $\overline{s}$ of $S$, the induced isomorphism $\lambda_{\overline{s}} : A_{\overline{s}} \rightarrow \widehat{A}_{\overline{s}}$ is a principal polarisation of $A_{\overline{s}}$.
$(b)$ A \textit{symplectic structure of level $n \geq 1$} on a principally polarised abelian scheme $(A,\lambda)$ over a ${\mathbb{Z}}[\zeta_n,1/n]$-scheme $S$ is the datum of an isomorphism of group schemes $A[n] \rightarrow ({\mathbb{Z}}/n{\mathbb{Z}})^{2g}$, which is symplectic with respect to $\lambda$ and the canonical pairing on $({\mathbb{Z}}/n{\mathbb{Z}})^{2g}$ given by the matrix $J$ (as in \eqref{eqdefgroupesymplectique}).
\end{defi}
\begin{defiprop}[Algebraic moduli spaces]
\hspace*{\fill}
\label{defipropalgmodulispaces}
For every integers $g \geq 1$ and $n \geq 1$ :
$(a)$ The Satake compactification $A_g(n)^S_{\mathbb{C}}$ has an integral model ${\mathcal A}_g (n)^S$ over ${\mathbb{Z}}[\zeta_n, 1/n]$ which contains as a dense open subscheme the (coarse, if $n \leq 2$) moduli space ${\mathcal A}_g(n)$ over ${\mathbb{Z}}[\zeta_n,1/n]$ of principally polarised abelian schemes of dimension $g$ with a symplectic structure of level $n$. This scheme ${\mathcal A}_g(n)^S$ is normal, proper and of finite type over ${\mathbb{Z}}[\zeta_n,1/n]$ (\cite{ChaiFaltings}, Theorem V.2.5).
$(b)$ For every divisor $m$ of $n$, we have canonical degeneracy morphisms ${\mathcal A}_g(n)^S \rightarrow {\mathcal A}_g(m)^S$ extending the morphisms of Definition \ref{defipropuniformcomplexabvar}.
\end{defiprop}
Before tackling our own problem, let us provide some context on the divisors on $A_g(n)^S_{\mathbb{C}}$, as a taste of the difficulties to overcome.
\begin{defi}[Rational Picard group]
\hspace*{\fill}
For every normal algebraic variety $X$ on a field $K$, the \textit{rational Picard group} of $X$ is the ${\mathbb{Q}}$-vector space
\[
\operatorname{Pic}(X)_{\mathbb{Q}} := \operatorname{Pic}(X) \otimes_{\mathbb{Z}} {\mathbb{Q}}.
\]
\end{defi}
\begin{prop}[Rational Picard groups of Siegel modular varieties]
\hspace*{\fill}
\label{proprationalPicardSiegel}
Let $g \geq 2$ and $n \geq 1$.
$(a)$ Every Weil divisor on $A_g(n)_{\mathbb{C}}$ or $A_g(n)^S_{\mathbb{C}}$ is, up to some multiple, a Cartier divisor, hence their rational Picard group is also their Weil divisor class group tensored by ${\mathbb{Q}}$.
$(b)$ For $g=3$, the rational Picard groups of $A_3(n)^S_{\mathbb{C}}$ and $A_3(n)_{\mathbb{C}}$ are equal to ${\mathbb{Q}} \cdot L$ for every $n \geq 1$.
$(c)$ For $g=2$, one has $\operatorname{Pic}_{\mathbb{Q}} (A_2(1)^S_{\mathbb{C}}) = {\mathbb{Q}} \cdot L$.
\end{prop}
This result has the following immediate corollary, because $L$ is ample on $A_g(n)^S_{\mathbb{C}}$ for every $g \geq 2$ and every $n \geq 1$ (Definition-Proposition \ref{defipropSatakecompactification} $(c)$).
\begin{cor}[Ample and big divisors on Siegel modular varieties]
\hspace*{\fill}
A ${\mathbb{Q}}$-divisor on $A_g(n)_{\mathbb{C}}$ or $A_g(n)^S_{\mathbb{C}}$ with $g=3$ (or $g=2$ and $n=1$) is ample if and only if it is big if and only if it is equivalent to $a \cdot L$ with $a>0$.
\end{cor}
\begin{rem}
\label{remampledifficilepourA2}
We did not mention the case of modular curves (also difficult, but treated by different methods): the point here is that the cases $g \geq 3$ are surprisingly much more uniform because then $\operatorname{Pic}(A_g(n)^S_{\mathbb{C}}) = \operatorname{Pic} (A_g(1)^S_{\mathbb{C}})$. The reason is that some rigidity appears from $g \geq 3$ on (essentially by the general arguments of \cite{Borel81}), whereas for $g=2$, the situation seems very complex already for small levels (see for example $n=3$ in \cite{HoffmanWeintraub00}).
This is why ampleness (or bigness) is in general hard to determine for given divisors of $A_2(n)$, $n >1$. In the following, we consider specific divisors (namely, divisors of zeroes of theta functions), whose ampleness will not be hard to prove.
\end{rem}
\begin{proof}[Proof of Proposition \ref{proprationalPicardSiegel}]
\hspace*{\fill}
$(a)$ This is true for the $A_g(n)^S_{\mathbb{C}}$ by \cite{ArtalBartolo14} as they only have finite quotient singularities (this result actually seems to have been generally assumed for a long time). Now, as $\partial A_g(n)^S_{\mathbb{C}}$ is of codimension at least 2, the two varieties $A_g(n)^S_{\mathbb{C}}$ and $A_g(n)_{\mathbb{C}}$ have the same Weil and Cartier divisors, hence the same rational Picard groups.
$(b)$ This is a consequence of general results of \cite{Borel81} further refined in \cite{Weissauer92} (it can even be generalised to every $g \geq 3$).
$(c)$ This comes from the computations of section III.9 of \cite{Mumford83} (for another compactification, called toroidal), from which we extract the result for $A_2(1)_{\mathbb{C}}$ by a classical restriction theorem (\cite{Hartshorne}, Proposition II.6.5) because the boundary for this compactification is irreducible of codimension 1. The result for $A_2(1)^S_{\mathbb{C}}$ is then the same because the boundary is of codimension 2.
\end{proof}
\subsection{Theta divisors on abelian varieties and moduli spaces}
\label{subsecthetadivabvar}
We will now define the useful notions for our integral points problem.
\begin{defi}[Theta divisor on an abelian variety]
\hspace*{\fill}
\label{defithetadivisorabvar}
Let $k$ be an algebraically closed field and $A$ an abelian variety over $k$.
Let $L$ be an ample symmetric line bundle on $A$ inducing a principal polarisation $\lambda$ on $A$. A \textit{theta function associated to} $(A,L)$ is a nonzero global section $\vartheta_{A,L}$ of $L$. The \textit{theta divisor associated to }$(A,L)$, denoted by $\Theta_{A,L}$, is the divisor of zeroes of $\vartheta_{A,L}$, well-defined and independent of the choice of $\vartheta_{A,L}$ because $\dim H^0(A,L)^2=\deg(\lambda) = 1$.
\end{defi}
The theta divisor is in fact determined by the polarisation $\lambda$ itself, up to a finite ambiguity we make clear below.
\begin{prop}
\hspace*{\fill}
\label{propambiguitedivthetaAL}
Let $k$ be an algebraically closed field and $A$ an abelian variety over $k$.
Two ample symmetric line bundles $L$ and $L'$ on $A$ inducing a principal polarisation induce the same one if and only if $L' \cong T_x^* L$ for some $x \in A[2]$, and then
\[
\Theta_{A,L'} = \Theta_{A,L} + x.
\]
\end{prop}
\begin{proof}
For any line bundle $L$ on $A$, let us define
\[
\fonction{\lambda_L}{A}{\widehat{A} = \operatorname{Pic}^0(A)}{x}{T_x^* L \otimes L^{-1}}.
\]
This is a group morphism and the map $L \mapsto \lambda_L$ is additive from $\operatorname{Pic}(A)$ to $\operatorname{Hom}(A,\widehat{A})$, with kernel $\operatorname{Pic}^0(A)$ (\cite{MumfordAbVar}, Chapter II, Corollary 4 and what follows, along with section II.8). Moreover, when $L$ is ample, the morphism $\lambda_L$ is the polarisation associated to $L$, in particular surjective. Now, for every $x \in A(k)$, if $L' \cong T_x^* L$, then $L' \otimes L^{-1}$ belongs to $\operatorname{Pic}^0(A)$, therefore $\lambda_{L'} = \lambda_{L}$. Conversely, if $\lambda_{L'} = \lambda_L$, one has $L' \otimes L^{-1} \in \operatorname{Pic}^0(A)$, hence if $L$ is ample, by surjectivity, there is $x \in A(k)$ such that $L' \cong T_x^* L$. Finally, if $L$ and $L'$ are symmetric, since $[-1]^* L \cong L$ and $[-1]^* L' \cong L'$, we obtain $T_{-x}^* L \cong T_x^* L$, but as $\lambda_L$ is an isomorphism, this implies $[2] \cdot x = 0$, hence $x \in A[2]$.
Therefore, for $\vartheta_{A,L}$ a nonzero section of $L$, $T_x^* \vartheta_{A,L}$ can be identified with a nonzero section of $L'$, hence
\[
\Theta_{A,L'} = \Theta_{A,L} - x = \Theta_{A,L} + x.
\]
\end{proof}
When $\textrm{char}(k) \neq 2$, adding to a principally polarised abelian variety $(A,\lambda)$ of dimension $g$ the datum $\alpha_2$ of a symplectic structure of level 2, we can determine a unique ample symmetric line bundle $L$ by the following process, called the \textit{Igusa correspondence}, devised in \cite{Igusa67bis}. To any ample symmetric Weil divisor $D$ defining a principal polarisation, one can associate bijectively a quadratic form $q_D$ from $A[2]$ to $\{ \pm 1 \}$ called \textit{even}, which means that the sum of its values on $A[2]$ is $2^{g}$ (\cite{Igusa67bis}, Theorem 2 and the previous arguments). On the other hand, the datum $\alpha_2$ also determines an even quadratic form $q_{\alpha_2}$, by associating to an $x \in A[2]$ with coordinates $(a,b) \in ({\mathbb{Z}}/2{\mathbb{Z}})^{2g}$ in the basis $\alpha_2$ of $A[2]$ the value
\begin{equation}
\label{eqcorrespondanceIgusa}
q_{\alpha_2}(x) = (-1)^{a {}^t b}.
\end{equation}
We now only have to choose the unique ample symmetric divisor $D$ such that $q_D = q_{\alpha_2}$ and the line bundle $L$ associated to $D$.
By construction of this correspondence (\cite{Igusa67bis}, p. 823), a point $x \in A[2]$ with coordinates $(a,b) \in ({\mathbb{Z}}/2{\mathbb{Z}})^{2g}$ in $\alpha_2$ automatically belongs to $\Theta_{A,L}$ (with $L$ associated to $(A,\lambda,\alpha_2)$) if $a{}^t b= 1 \mod 2$. A point of $A[2]$ with coordinates $(a,b)$ such that $a {}^t b = 0 \mod 2$ can also belong to $\Theta_{A,L}$, but then with even multiplicity.
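For instance, for $g=1$, the quadratic form $q_{\alpha_2}$ of \eqref{eqcorrespondanceIgusa} takes the values
\[
q_{\alpha_2}(0,0) = q_{\alpha_2}(1,0) = q_{\alpha_2}(0,1) = 1, \qquad q_{\alpha_2}(1,1) = -1,
\]
whose sum is $2 = 2^g$, so it is indeed even, and the unique point with $a {}^t b = 1 \bmod 2$ is exactly the one appearing in the theta divisor of an elliptic curve (Lemma \ref{lemdivthetaCE} below).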
This allows us to get rid of the ambiguity in the choice of an ample symmetric $L$ in the following, as soon as we have a symplectic level 2 structure (or finer); the result below is a reformulation of Theorem 2 of \cite{Igusa67bis}.
\begin{defiprop}[Theta divisor canonically associated to a symplectic even level structure]
\label{defipropthetadiviseurcanonique}
\hspace*{\fill}
Let $n \geq 2$ be even and $k$ an algebraically closed field whose characteristic does not divide $n$.
For $(A,\lambda,\alpha_n)$ a principally polarised abelian variety of dimension $g$ with symplectic structure of level $n$ (Definition \ref{deficomplexabvar}), there is, up to isomorphism, a unique ample symmetric line bundle $L$ inducing $\lambda$ and associated by the Igusa correspondence to the symplectic basis of $A[2]$ induced by $\alpha_n$. The \textit{theta divisor associated to} $(A,\lambda,\alpha_n)$, denoted by $\Theta_{A,\lambda,\alpha_n}$, is then the theta divisor associated to $(A,L)$.
\end{defiprop}
The Runge-type theorem we give in section \ref{sectionapplicationsSiegel} (Theorem \ref{thmtubularRungegeneral}) focuses on principally polarised abelian surfaces $(A,\lambda)$ over a number field $K$ whose theta divisor does not contain any $n$-torsion point of $A$ (except 2-torsion points: as we will see, containing some of these is automatic). This will imply (Proposition \ref{propnombrepointsdivthetajacobienne}) that $A$ is not a product of elliptic curves, but the latter is not a sufficient condition, as pointed out for example in \cite{BoxallGrant}.
We will once again start with the complex case to figure out how such a condition can be formulated on the moduli spaces, using complex theta functions (\cite{MumfordTata}, Chapter II).
\begin{defiprop}[Complex theta functions]
\hspace*{\fill}
\label{defipropcomplexthetafunctions}
Let $g \geq 1$.
The holomorphic function $\Theta$ on ${\mathbb{C}}^g \times {\mathcal H}_g$ is defined by the series (uniformly convergent on compact subsets)
\begin{equation}
\label{eqdefserietheta}
\Theta(z,\tau) = \sum_{n \in {\mathbb{Z}}^g} e^{ i \pi n \tau {}^t n + 2 i \pi n {}^t z}.
\end{equation}
For any $a,b \in {\mathbb{R}}^g$, we also define the holomorphic function $\Theta_{a,b}$ by
\begin{equation}
\label{eqdefseriethetaab}
\Theta_{a,b}(z,\tau) = \sum_{n \in {\mathbb{Z}}^g} e^{ i \pi (n+a) \tau {}^t (n+a) + 2 i \pi (n+a) {}^t (z+b)}.
\end{equation}
For a fixed $\tau \in {\mathcal H}_g$, one defines $\Theta_\tau : z \mapsto \Theta(z,\tau)$ and similarly for $\Theta_{a,b,\tau}$. These functions have the following properties.
$(a)$ For every $a,b \in {\mathbb{R}}^g$,
\begin{equation}
\label{eqthetaabenfonctiontheta}
\Theta_{a,b,\tau} (z) = e^{i \pi a \tau {}^t a + 2 i \pi a {}^t (z+b)} \Theta_\tau(z + a \tau + b).
\end{equation}
$(b)$ For every $p,q \in {\mathbb{Z}}^g$,
\begin{equation}
\label{eqfoncthetaptranslation}
\Theta_{a,b,\tau}(z+p\tau + q) = e^{- i \pi p \tau {}^t p - 2 i \pi p{}^t z + 2 i \pi (a{}^t q - b {}^t p) } \Theta_{a,b,\tau} (z).
\end{equation}
$(c)$ Let us denote by $\vartheta$ and $\vartheta_{a,b}$ the \textit{normalised theta-constants}, which are the holomorphic functions on ${\mathcal H}_g$ defined by
\begin{equation}
\label{eqdefthetaconstantes}
\vartheta(\tau) : = \Theta(0,\tau) \quad {\textrm{and}} \quad \vartheta_{a,b} (\tau) := e^{ - i \pi a {}^t b} \Theta_{a,b} (0,\tau).
\end{equation}
These theta functions satisfy the following modularity property: with the notations of Definition \ref{deficomplexabvar},
\begin{equation}
\label{eqmodularitethetaconstantes}
\forall \gamma \in \Gamma_g(2), \quad \vartheta_{a,b} (\gamma \cdot \tau) = \zeta_8(\gamma) e^{ i \pi (a,b) {}^t V_\gamma} \sqrt{\det j_\gamma(\tau)} \, \vartheta_{(a,b)\gamma} (\tau),
\end{equation}
where $\zeta_8(\gamma)$ (an $8$-th root of unity) and $V_\gamma \in {\mathbb{Z}}^{2g}$ only depend on $\gamma$ and on the choice of determination of the square root of $\det j_\gamma(\tau)$.
In particular, for every even $n \geq 2$, if $(na,nb) \in {\mathbb{Z}}^{2g}$, the function $\vartheta_{a,b}^{8n}$ is a Siegel modular form of degree $g$, level $n$ and weight $4n$, which only depends on $(a,b) \! \mod {\mathbb{Z}}^{2g}$.
\end{defiprop}
\begin{proof}
The convergence of these series as well as their functional equations \eqref{eqthetaabenfonctiontheta} and \eqref{eqfoncthetaptranslation} are classical and can be found in section II.1 of \cite{MumfordTata}.
The modularity property \eqref{eqmodularitethetaconstantes} (also classical) is a particular case of the computations of section II.5 of \cite{MumfordTata} (we do not need here the general formula for $\gamma \in \operatorname{Sp}_{2g} ({\mathbb{Z}})$).
Finally, by direct computations on the series defining $\Theta_{a,b}$, one readily obtains that
\[
\vartheta_{a+p,b+q} = e^{2 i \pi (a{}^t q - b{}^t p)} \vartheta_{a,b}.
\]
Therefore, if $(na,nb) \in {\mathbb{Z}}^{2g}$, the function $\vartheta_{a,b}^n$ only depends on $(a,b) \! \mod {\mathbb{Z}}^{2g}$. Now, raising the modularity formula \eqref{eqmodularitethetaconstantes} to the power $8n$, one eliminates the eighth root of unity, and if $\gamma \in \Gamma_g(n)$, one has $(a,b) \gamma = (a,b) \mod {\mathbb{Z}}^{2g}$, hence $\vartheta_{a,b}^{8n}$ is a Siegel modular form of weight $4n$ for $\Gamma_g(n)$.
\end{proof}
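For $g=1$, these definitions recover classical objects: the series \eqref{eqdefserietheta} is the Jacobi theta function and, for the characteristic $(a,b)=(1/2,1/2)$, the series \eqref{eqdefseriethetaab} becomes
\[
\Theta(z,\tau) = \sum_{n \in {\mathbb{Z}}} e^{i \pi n^2 \tau + 2 i \pi n z}, \qquad \Theta_{\frac12,\frac12}(z,\tau) = \sum_{n \in {\mathbb{Z}}} e^{i \pi \left(n+\frac12\right)^2 \tau + 2 i \pi \left(n+\frac12\right)\left(z+\frac12\right)},
\]
the second being an odd function of $z$; this oddness will be used again in section \ref{subsecthetadivabsur}.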
There is of course an explicit link between the theta functions and the notion of theta divisor, which we explain now with the notations of Definition \ref{deficomplexabvar}.
\begin{prop}[Theta divisor and theta functions]
\hspace*{\fill}
\label{propliendivthetafonctiontheta}
Let $\tau \in {\mathcal H}_g$.
The line bundle $L_\tau$ is ample and symmetric on $A_\tau$, and defines a principal polarisation on $A_\tau$. It is also the line bundle canonically associated to the level 2 structure $\alpha_{\tau,2}$ and its polarisation by the Igusa correspondence (Definition-Proposition \ref{defipropthetadiviseurcanonique}).
Furthermore, the global sections of $L_\tau$ canonically identify with the scalar multiples of $\Theta_{\tau}$, hence the theta divisor associated to $(A_\tau, \lambda_\tau,\alpha_{\tau,2})$ is exactly the divisor of zeroes of $\Theta_{\tau}$ modulo $\Lambda_\tau$.
Thus, for every $a,b \in {\mathbb{R}}^g$, the point $\pi_\tau(a \tau + b)$ belongs to $\Theta_{A_\tau, \lambda_\tau, \alpha_{\tau,2}}$ if and only if $\vartheta_{a,b} (\tau)=0$.
\end{prop}
\begin{rem}
The proof below that $L_\tau$ is the line bundle associated to $(A_\tau, \lambda_\tau, \alpha_{\tau,2})$ is a bit technical, but one has to suspect that Igusa normalised his correspondence by \eqref{eqcorrespondanceIgusa} exactly to make it work.
\end{rem}
\begin{proof}
One can easily see that $L_\tau$ is symmetric by writing $[-1]^* L_\tau$ as a quotient of ${\mathbb{C}}^g \times {\mathbb{C}}$ by an action of $\Lambda_\tau$, then checking that it is the same as \eqref{eqdeffibresurAtau}. Then, by simple connectedness of ${\mathbb{C}}^g$, the global sections of $L_\tau$ lift through the quotient morphism ${\mathbb{C}}^g \times {\mathbb{C}} \rightarrow L_\tau$ into functions $z \mapsto (z,f(z))$, and the holomorphic functions $f$ thus obtained are exactly the functions satisfying the functional equation \eqref{eqfoncthetaptranslation} for $a=b=0$ because of \eqref{eqdeffibresurAtau}, hence the same functional equation as $\Theta_{\tau}$. This identification is also compatible with the associated divisors, hence $\Theta_{A_\tau,L_\tau}$ is the divisor of zeroes of $\Theta_{\tau}$ modulo $\Lambda_\tau$. For more details on theta functions and line bundles, see \cite{Debarre99}, Chapters IV, V and section VI.2.
We now have to check that Igusa correspondence indeed associates $L_\tau$ to $(A_\tau,\lambda_\tau,\alpha_{\tau,2})$. With the notations of the construction of this correspondence (\cite{Igusa67bis}, pp.822, 823 and 833), one sees that the meromorphic function $\psi_x$ on $A_\tau$ (depending on $L_\tau$) associated to $x \in A_\tau[2]$ has divisor $[2]^* T_x^* \Theta_{A_\tau,L_\tau} - [2]^* \Theta_{A_\tau,L_\tau}$, hence it is (up to a constant) the meromorphic function induced on $A_\tau$ by
\[
f_x(z) = \frac{\Theta_{a,b,\tau} (2z)}{\Theta_\tau(2z)} \quad {\textrm{where}} \, \, x= a \tau + b \mod \Lambda_\tau.
\]
Now, the quadratic form $q$ associated to $L_\tau$ is defined by the identity
\[
f_x(-z) = q(x) f_x(z)
\]
for every $z \in {\mathbb{C}}^g$, but $\Theta_\tau$ is even hence
\[
f_x(-z) = e^{4 i \pi a^t b} f_x(z)
\]
by formula \eqref{eqthetaabenfonctiontheta}. Now, the coordinates of $x$ in $\alpha_{\tau,2}$ are exactly $(2b,2a) \mod {\mathbb{Z}}^{2g}$ by definition, hence $q=q_{\alpha_{\tau,2}}$.
Let us finally make the explicit link between zeroes of theta-constants and theta divisors: using the argument above, the divisor of zeroes of $\Theta_\tau$ modulo $\Lambda_\tau$ is exactly $\Theta_{A_\tau,L_\tau}$, hence $\Theta_{A_\tau,\lambda_\tau,\alpha_{\tau,2}}$ by what we just proved for the Igusa correspondence. This implies that for every $z \in {\mathbb{C}}^g$, $\Theta_\tau(z)=0$ if and only if $\pi_\tau(z)$ belongs to $\Theta_{A_\tau,\lambda_\tau, \alpha_{\tau,2}}$, and as $\vartheta_{a,b} (\tau)$ is a nonzero multiple of $\Theta_\tau(a \tau + b)$, we finally have that $\vartheta_{a,b}(\tau)=0$ if and only if $\pi_\tau(a\tau+b)$ belongs to $\Theta_{A_\tau,\lambda_\tau, \alpha_{\tau,2}}$.
\end{proof}
\section{Applications of the main result on a family of Siegel modular varieties}
\label{sectionapplicationsSiegel}
We now have almost enough definitions to state the problem which we will consider for our Runge-type result (Theorem \ref{thmtubularRungegeneral}). We consider theta divisors on abelian surfaces, and their torsion points.
\subsection{The specific situation for theta divisors on abelian surfaces}
\label{subsecthetadivabsur}
As an introduction and a preliminary result, let us first treat the case of theta divisors on elliptic curves.
\begin{lem}[Theta divisor on an elliptic curve]
\label{lemdivthetaCE}
\hspace*{\fill}
Let $E$ be an elliptic curve over an algebraically closed field $k$ with $\textrm{char}(k) \neq 2$, and $L$ an ample symmetric line bundle defining the principal polarisation on $E$.
The effective divisor $\Theta_{E,L}$ consists of a single 2-torsion point of $E$, with multiplicity one. More precisely, if $(e_1,e_2)$ is the basis of $E[2]$ associated by Igusa correspondence to $L$ (Definition-Proposition \ref{defipropthetadiviseurcanonique}),
\begin{equation}
\label{eqdivthetaCEexplicite}
\Theta_{E,L} = [e_1 + e_2].
\end{equation}
\end{lem}
\begin{rem}
In the complex case, this can simply be obtained by proving that $\Theta_{1/2,1/2,\tau}$ is odd for every $\tau \in {\mathcal H}_1$, hence vanishes at $0$, and has no other zeroes (by a residue theorem for example), then using Proposition \ref{propliendivthetafonctiontheta}.
\end{rem}
\begin{proof}
By the Riemann-Roch theorem on $E$, the divisor $\Theta_{E,L}$ is effective of degree 1 because $h^0(E,L)=1$. Now, as explained before when discussing Igusa correspondence, for $a,b \in {\mathbb{Z}}$, the point $a e_1 + b e_2$ automatically belongs to $\Theta_{E,L}$ if $a b = 1 \mod 2$, hence $\Theta_{E,L} = [e_1+ e_2]$.
\end{proof}
This allows us to describe the theta divisor of a product of two elliptic curves.
\begin{prop}[Theta divisor on a product of two elliptic curves]
\label{propdivthetaproduitCE}
\hspace*{\fill}
Let $k$ be an algebraically closed field with $\textrm{char}(k) \neq 2$.
Let $(A,L)$ with $A=E_1 \times E_2$ a product of elliptic curves over $k$ and $L$ an ample symmetric line bundle on $A$ inducing the product principal polarisation on $A$.
The divisor $\Theta_{A,L}$ is then of the shape
\begin{equation}
\label{eqdivthetaproduitCE}
\Theta_{A,L} = \{x_1\} \times E_2 + E_1 \times \{x_2\},
\end{equation}
with $x_i \in E_i[2]$ for $i=1,2$. In particular, this divisor has a (unique) singular point of multiplicity two at $(x_1,x_2)$, and:
$(a)$ There are exactly seven 2-torsion points of $A$ belonging to $\Theta_{A,L}$: the six points given by the coordinates $(a,b) \in ({\mathbb{Z}}/2{\mathbb{Z}})^4$ such that $a{}^t b= 1$ in a basis giving $\Theta_{A,L}$ by Igusa correspondence, and the seventh point $(x_1,x_2)$.
$(b)$ For every even $n \geq 2$ which is nonzero in $k$, the number of $n$-torsion (but not $2$-torsion) points of $A$ belonging to $\Theta_{A,L}$ is exactly $2(n^2-4)$.
\end{prop}
\begin{proof}
By construction of $(A,L)$, a global section of $L$ corresponds to a tensor product of global sections of the line bundles inducing the principal polarisations on $E_1$ and $E_2$, hence the shape of $\Theta_{A,L}$ is a consequence of Lemma \ref{lemdivthetaCE}.
We readily deduce $(a)$ and $(b)$ from this shape, using that the intersection of the two components of $\Theta_{A,L}$ is a 2-torsion point of even multiplicity for the quadratic form, hence different from the six other ones.
\end{proof}
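As a sanity check of the count in $(b)$, one can identify each $E_i[n]$ with $({\mathbb{Z}}/n{\mathbb{Z}})^2$ and enumerate by brute force the $n$-torsion points on a divisor of the shape \eqref{eqdivthetaproduitCE}; the short Python script below (where the choice of the $2$-torsion points $x_1,x_2$ is arbitrary and purely illustrative) confirms the value $2(n^2-4)$.
\begin{verbatim}
# Count the n-torsion, non-2-torsion points on {x1} x E2 + E1 x {x2},
# identifying each E_i[n] with (Z/n)^2; expected answer: 2*(n^2 - 4).
n = 6                                   # any even n
x1, x2 = (n // 2, 0), (0, n // 2)       # arbitrary 2-torsion points
pts = set()
for u in range(n):
    for v in range(n):
        pts.add((x1[0], x1[1], u, v))   # {x1} x E2[n]
        pts.add((u, v, x2[0], x2[1]))   # E1[n] x {x2}
not2tors = sum(1 for p in pts if any(2 * c % n for c in p))
print(not2tors, 2 * (n ** 2 - 4))       # both print 64 for n = 6
\end{verbatim}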
To explain the result for abelian surfaces which are not products of elliptic curves, we recall below a fundamental result.
\begin{prop}[Shapes of principally polarised abelian surfaces]
\label{propsurfabnonproduitCEetdivtheta}
\hspace*{\fill}
Let $k$ be any field.
A principally polarised abelian surface $(A,\lambda)$ over $k$ is, after a finite extension of scalars, either the product of two elliptic curves (with its natural product polarisation), or the jacobian $J$ of a hyperelliptic curve $C$ of genus 2 (with its canonical principal polarisation). In the second case, for the Albanese embedding $\phi_x : C \rightarrow J$ with base-point $x$ and an ample symmetric line bundle $L$ on $J$ inducing $\lambda$, the divisor $\Theta_{J,L}$ is irreducible, and it is actually a translate of $\phi_x(C)$ by some point of $J(\overline{k})$.
\end{prop}
\begin{proof}
This proposition (together with the dimension 3 case, for the curious reader) is the main topic of \cite{OortUeno} (remarkably, its proof starts with the complex case and geometric arguments before using scheme and descent techniques to extend it to all fields).
\end{proof}
Let us now fix an algebraically closed field $k$ with $\textrm{char}(k) \neq 2$.
Let $C$ be a hyperelliptic curve of genus 2, and $\iota$ its hyperelliptic involution. This curve has exactly six Weierstrass points (the fixed points of $\iota$, by definition), and we fix one of them, denoted by $\infty$. For the Albanese morphism $\phi_\infty$, the divisor $\phi_\infty(C)$ is stable under $[-1]$ because the divisor $[x] + [\iota(x)] - 2 [\infty]$ is principal for every $x \in C$. As $\Theta_{J,L}$ is also symmetric and a translate of $\phi_\infty(C)$, we know that $\Theta_{J,L} = T_x^* (\phi_\infty(C))$ for some $x \in J[2]$.
This tells us that understanding the points of $\Theta_{J,L}$ amounts to understanding how the curve $C$ behaves when embedded in its jacobian (in particular, how its points add).
It is a difficult problem to know which torsion points of $J$ belong to the theta divisor (see \cite{BoxallGrant} for example), but we will only need to bound their number here, with the following result.
\begin{prop}
\hspace*{\fill}
\label{propnombrepointsdivthetajacobienne}
Let $k$ be an algebraically closed field with $\textrm{char}(k) \neq 2$.
Let $C$ be a hyperelliptic curve of genus 2 over $k$ with jacobian $J$, and $\infty$ a fixed Weierstrass point of $C$.
We denote by $\widetilde{C}$ the image of $C$ in $J$ by the associated embedding $\phi_\infty : x \mapsto \overline{[x] - [\infty]}$.
$(a)$ The set $\widetilde{C}$ is stable under $[-1]$, and the map
\[
\fonctionsansnom{\operatorname{Sym}^2(\widetilde{C})}{J}{\{P,Q\}}{P+Q}
\]
is injective outside the fiber above 0.
$(b)$ There are exactly six 2-torsion points of $J$ belonging to $\widetilde{C}$: they are the images of the Weierstrass points, equivalently the points of coordinates $(a,b) \in (({\mathbb{Z}}/2{\mathbb{Z}})^2)^2$ such that $a{}^t b= 1$ in a basis giving $\widetilde{C}$ by Igusa correspondence.
$(c)$ For any even $n \geq 2$ which is nonzero in $k$, the number of $n$-torsion points of $J$ belonging to $\widetilde{C}$ is bounded by $\sqrt{2} n^2 + \frac{1}{2}$.
\end{prop}
\begin{rem}
This proposition is not exactly a new result, and its principle can be found (with slightly different formulations) in Theorem 1.3 of \cite{BoxallGrant} or in Lemma 5.1 of \cite{Pazuki12}. In the latter, it is presented as a consequence of the Abel-Jacobi theorem over ${\mathbb{C}}$; we give here a more detailed proof, which is also readily valid over any field. The problem of counting (or bounding) torsion points on the theta divisor has interested many people, e.g. \cite{BoxallGrant} and very recently \cite{Torsionthetadivisors} in general dimension. Notice that the results above give the expected bound in the case $g=2$, but we do not know by how much the bound $\sqrt{2} n^2$ can be lowered in the case of jacobians.
\end{rem}
\begin{proof}
As $\infty$ is a Weierstrass point, the divisor $2 [\infty]$ is canonical. Conversely, if a degree two divisor $D$ satisfies $\ell(D) := \dim H^0 (C,{\mathcal O}_C(D)) \geq 2$, then it is canonical. Indeed, by the Riemann-Roch theorem, this implies that $\ell(2 [\infty] - D) \geq 1$, but this divisor is of degree 0, hence it is principal and $D$ is canonical. Now, let $x,y,z,t$ be four points of $C$ such that $\phi_\infty(x) + \phi_\infty(y) = \phi_\infty(z) + \phi_\infty(t) $ in $J$. This implies that $[x] + [y] - [z] - [t]$ is the divisor of some function $f$, and then either $f$ is constant (i.e. $\{x,y\} = \{z,t\}$), or $\ell([z] + [t]) \geq 2$, hence $[z] + [t]$ is canonical by the argument above, and in this case the points $P =\phi_\infty(z)$ and $Q=\phi_\infty(t)$ of $\widetilde{C}$ satisfy $P+Q=0$ in $J$, which proves $(a)$.
Now, for $n \geq 2$ even, let us denote $\widetilde{C} [n] := \widetilde{C} \cap J[n]$. The summing map from $\widetilde{C} [n]^2$ to $J[n]$ has a fiber of cardinality $|\widetilde{C}[n]|$ above 0 and of cardinality at most 2 above any other point of $J[n]$ by $(a)$, hence the quadratic inequality
\[
|\widetilde{C}[n]|^2 \leq |\widetilde{C}[n]| + 2 (n^4 - 1),
\]
whose resolution gives $|\widetilde{C}[n]| \leq (1 + \sqrt{8 n^4 - 7})/2 \leq \sqrt{2}\, n^2 + 1/2$, which is $(c)$. In the case $n=2$, it is enough to see that $ 2 \phi_\infty(x) = 0$ if and only if $2 [x]$ is canonical, if and only if $x$ is a Weierstrass point, which gives $(b)$.
\end{proof}
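To give numerical substance to $(c)$: for $n=2$, the bound reads $|\widetilde{C}[2]| \leq \sqrt{2} \cdot 4 + 1/2 < 7$, consistently with the exact count of six points in $(b)$, and for $n=4$ it gives $|\widetilde{C}[4]| \leq 23$.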
We can now define the divisors we will consider for our Runge-type theorem, with the following notation.
\textbf{Convention}
\hspace*{\fill}
Until the end of this article, the expression ``a couple $(a,b) \in ({\mathbb{Z}} /n {\mathbb{Z}})^4$ (resp. ${\mathbb{Z}}^4, {\mathbb{Q}}^4$)'' is shorthand for the row vector with four coefficients whose first two coefficients make up $a \in ({\mathbb{Z}}/n{\mathbb{Z}})^2$ (resp. ${\mathbb{Z}}^2$, ${\mathbb{Q}}^2$) and whose last two make up $b$.
\begin{defiprop}[Theta divisors on $A_2(n)^S_{\mathbb{C}}$]
\hspace*{\fill}
\label{defipropdivthetaA2ncomplexes}
Let $n \in {\mathbb{N}}_{\geq 2}$ be even.
$(a)$ A couple $(a,b) \in ({\mathbb{Z}}/n{\mathbb{Z}})^4$ is called \textit{regular} if it is \textit{not} of the shape $((n/2)a',(n/2)b')$ with $(a',b') \in (({\mathbb{Z}}/2{\mathbb{Z}})^2)^2$ such that $a' {}^t b' = 1 \mod 2$. There are exactly 6 couples $(a,b)$ not satisfying this condition, which we call \textit{singular}.
$(b)$ If $(a,b) \in ({\mathbb{Z}}/n{\mathbb{Z}})^4$ is regular, for every lift $(\widetilde{a}, \widetilde{b}) \in {\mathbb{Z}}^4$ of $(a,b)$, the function $\vartheta_{\widetilde{a}/n, \widetilde{b}/n}^{8n}$ is a \textit{nonzero} Siegel modular form of degree 2, weight $4n$ and level $n$, independent of the choice of lifts. The \textit{theta divisor associated to $(a,b)$}, denoted by $(D_{n,a,b})_{\mathbb{C}}$, is the Weil divisor of zeroes of this Siegel modular form on $A_2(n)^S_{\mathbb{C}}$.
$(c)$ For $(a,b)$ and $(a',b')$ regular couples in $({\mathbb{Z}}/n{\mathbb{Z}})^4$, the Weil divisors $(D_{n,a,b})_{\mathbb{C}}$ and $(D_{n,a',b'})_{\mathbb{C}}$ are equal if and only if $(a,b) = \pm (a',b')$. Hence, the set of regular couples defines exactly $n^4/2 + 2$ pairwise distinct Weil divisors.
\end{defiprop}
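For instance, for $n=2$, every regular couple is $2$-torsion, hence equal to its own opposite, and we recover $2^4/2+2 = 10$ pairwise distinct divisors, one for each of the ten even theta constants which will reappear in section \ref{sectionexplicitRunge}.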
\begin{rem}
The singular couples correspond to what are called \textit{odd characteristics} by Igusa. The proof below uses Fourier expansions to figure out which theta functions are nontrivial or proportional, but we conjecture the stronger result that $(D_{n,a,b})_{\mathbb{C}}$ and $(D_{n,a',b'})_{\mathbb{C}}$ are set-theoretically distinct (i.e. even without counting the multiplicities) unless $(a,b) = \pm (a',b')$. Such a result seems natural as the image of a curve into its jacobian should generically not have any other symmetry than $[-1]$, but we could not obtain it by looking at the simpler case (in $A_2(n)^S_{\mathbb{C}}$) of the products of elliptic curves: if $(a,b)$ and $(a',b')$ are both multiples of a primitive vector $v \in (1/n){\mathbb{Z}}^4$, it is tedious but straightforward to see that the theta constants $\vartheta_{a,b}$ and $\vartheta_{a',b'}$ vanish on the same products of elliptic curves. Hence, to prove that the reduced divisors of $(D_{n,a,b})_{\mathbb{C}}$ and $(D_{n,a',b'})_{\mathbb{C}}$ are distinct unless $(a,b) = \pm (a',b')$, one needs to exhibit a curve $C$ whose jacobian, isomorphic to $A_\tau$, contains $\pi_\tau(a \tau + b)$ but not $\pi_\tau(a' \tau + b')$ in its theta divisor.
Notice that this will not be a problem for us later because all our arguments for Runge are set-theoretic, and Propositions \ref{propdivthetaproduitCE} and \ref{propnombrepointsdivthetajacobienne} are not modified if some of the divisors taken into account are equal.
\end{rem}
\begin{proof}[Proof of Definition-Proposition \ref{defipropdivthetaA2ncomplexes}]
$(a)$ By construction, for any even $n \geq 2$, the number of singular couples $(a,b) \in ({\mathbb{Z}}/n{\mathbb{Z}})^4$ is the number of couples $(a',b') \in ({\mathbb{Z}}/2{\mathbb{Z}})^4$ such that $a' {}^t b' = 1 \mod 2$, and we readily see there are exactly six of them, namely
\[
(0101), (1010), (1101), (1110), (1011) \textrm{ and }(0111).
\]
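This list is immediate to verify by brute force, reading each bit string as $(a_1 a_2 b_1 b_2)$; for instance, in Python:
\begin{verbatim}
from itertools import product
# quadruples (a1 a2 b1 b2) in (Z/2)^4 with a ^t b = a1*b1 + a2*b2 odd
odd = ["%d%d%d%d" % (a1, a2, b1, b2)
       for a1, a2, b1, b2 in product((0, 1), repeat=4)
       if (a1 * b1 + a2 * b2) % 2 == 1]
print(odd)   # ['0101', '0111', '1010', '1011', '1101', '1110']
\end{verbatim}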
For $(b)$ and $(c)$, the modularity of the function comes from Definition-Proposition \ref{defipropcomplexthetafunctions} $(c)$, hence we only have to prove that it is nonzero when $(a,b)$ is regular. To do this, we will use the Fourier expansion of this modular form (for more details on Fourier expansions of Siegel modular forms, see chapter 4 of \cite{Klingen}), and simply prove that it has nonzero coefficients. This is also how we will prove that the $\vartheta_{a,b}$ are pairwise non-proportional.
To shorten the notations, given an initial couple $(a,b) \in ({\mathbb{Z}}/n{\mathbb{Z}})^4$, we consider instead $(\tilde{a}/n, \tilde{b}/n) \in {\mathbb{Q}}^4$ for some lift $(\tilde{a}, \tilde{b})$ of $(a,b)$ in ${\mathbb{Z}}^4$, and by abuse of notation we denote it again by $(a,b)$ for simplicity. Regularity of the couple translates into the fact that $(a,b)$ avoids six possible values modulo ${\mathbb{Z}}^4$, namely
\[
\left(0 \frac{1}{2}0\frac{1}{2}\right),\left(\frac{1}{2}0\frac{1}{2}0 \right),\left(\frac{1}{2}\frac{1}{2}0\frac{1}{2}\right), \left(\frac{1}{2}\frac{1}{2}\frac{1}{2}0\right), \left(\frac{1}{2}0\frac{1}{2}\frac{1}{2}\right) \textrm{ and } \left(0\frac{1}{2}\frac{1}{2}\frac{1}{2}\right)
\]
by $(a)$, which we will assume now. We also fix $n \in {\mathbb{N}}$ even such that $(na,nb) \in {\mathbb{Z}}^4$.
Recall that
\begin{equation}
\label{eqformulationsimplevarthetapourdevFourier}
\vartheta_{a,b} (\tau) = e^{i \pi a^t b} \sum_{k \in {\mathbb{Z}}^2} e^{ i \pi (k+a) \tau{}^t(k+a) + 2 i \pi k^t b}
\end{equation}
by \eqref{eqthetaabenfonctiontheta} and \eqref{eqdefthetaconstantes}. Therefore, for any symmetric matrix $S \in M_2({\mathbb{Z}})$ such that $S/(2n^2)$ is half-integral (i.e. with integer coefficients on the diagonal, and half-integers otherwise), we have
\[
\forall \tau \in {\mathcal H}_2, \quad \vartheta_{a,b} (\tau + S) = \vartheta_{a,b} (\tau),
\]
because for every $k \in {\mathbb{Z}}^2$,
\[
(k+a)S^t (k+a) \in 2 {\mathbb{Z}}.
\]
Hence, the function $\vartheta_{a,b}$ admits a Fourier expansion of the form
\[
\vartheta_{a,b} (\tau) = \sum_{T} a_T e^{ 2 i \pi \operatorname{Tr}(T\tau)},
\]
where $T$ runs through all the matrices of $S_2({\mathbb{Q}})$ such that $(2n^2) T$ is half-integral. This Fourier expansion is unique, because for any $\tau \in {\mathcal H}_2$ and any $T$, we have
\[
(2 n^2) a_T = \int_{[0,1]^4} \vartheta_{a,b} (\tau +x) e^{- 2 i \pi \operatorname{Tr}(T(\tau+x))} dx.
\]
In particular, the function $\vartheta_{a,b}$ is zero if and only if all its Fourier coefficients $a_T$ are zero, hence we now compute these coefficients, which are almost immediately given by \eqref{eqformulationsimplevarthetapourdevFourier}. For $a=(a_1,a_2) \in {\mathbb{Q}}^2$ and $k=(k_1,k_2) \in {\mathbb{Z}}^2$, let us define
\[
T_{a,k} = \begin{pmatrix}
(k_1+a_1)^2 & (k_1+a_1)(k_2 + a_2) \\ (k_1+a_1)(k_2+a_2) & (k_2+a_2)^2
\end{pmatrix},
\]
so that
\begin{equation}
\label{eqpresquedevFouriervartheta}
\vartheta_{a,b} (\tau) = e^{ i \pi a{}^t b} \sum_{k \in {\mathbb{Z}}^2} e^{ 2 i \pi k {}^t b} e^{ i \pi \operatorname{Tr}(T_{a,k} \tau)}
\end{equation}
by construction. It is not yet exactly the Fourier expansion, because we have to gather the $T_{a,k}$ giving the same matrix $T$ (and this is where we will use regularity). Clearly,
\[
T_{a,k} = T_{a',k'} \Longleftrightarrow (k+a) = \pm (k' +a').
\]
If $2a \notin {\mathbb{Z}}^2$, the function $k \mapsto T_{a,k}$ is injective, so \eqref{eqpresquedevFouriervartheta} is the Fourier expansion of $\vartheta_{a,b}$, with clearly nonzero coefficients, hence $\vartheta_{a,b}$ is nonzero.
If $2a = A \in {\mathbb{Z}}^2$, for every $k,k'\in {\mathbb{Z}}^2$, we have $(k+a) = \pm (k'+a)$ if and only if $k=k'$ or $k+k' = -A$, so the Fourier expansion of $\vartheta_{a,b}$ is
\begin{equation}
\label{eqpdevFouriervarthetamauvaiscas}
\vartheta_{a,b} (\tau) = \frac{e^{i \pi a^t b}}{2} \sum_{T} \sum_{\substack{k,k' \in {\mathbb{Z}}^2 \\ T_{a,k} = T_{a,k'} = T}} (e^{ 2 i \pi k^t b} + e^{ 2 i \pi (-A-k)^t b}) e^{ i \pi \operatorname{Tr}(T \tau)}.
\end{equation}
Therefore, the coefficients of this Fourier expansion are all zero if and only if, for every $k \in {\mathbb{Z}}^2$,
\[
e^{ 2 i \pi (2 k + A)^t b} = -1,
\]
i.e. if and only if $b \in (1/2) {\mathbb{Z}}^2$ and $(-1)^{4 a^t b} = -1$, and this is exactly the singularity of the couple $(a,b)$, which proves $(b)$.
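For instance, for the singular couple $(a,b) = ((\tfrac{1}{2},0),(\tfrac{1}{2},0))$ (the characteristic $(1010)$ above), one has $A = (1,0)$ and $(2k+A){}^t b = k_1 + \tfrac{1}{2}$ for every $k = (k_1,k_2) \in {\mathbb{Z}}^2$, so that $e^{2 i \pi (2k+A){}^t b} = -1$ identically and all the Fourier coefficients of $\vartheta_{a,b}$ indeed vanish.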
Now, let $(a,b)$ and $(a',b')$ be regular couples in $(1/n) {\mathbb{Z}}^4$ (translated into ${\mathbb{Q}}^4$ as above), such that $(na,nb)$ and $(na',nb')$ modulo ${\mathbb{Z}}^4$ have the same associated theta divisor on $A_2(n)^S_{\mathbb{C}}$. Then, the function
\[
\frac{\vartheta_{a,b}^{8n}}{\vartheta_{a',b'}^{8n}}
\]
induces a meromorphic function on $A_2(n)^S_{\mathbb{C}}$ whose divisor is $0$, hence a constant function, which implies that $\vartheta_{a,b} = \lambda \vartheta_{a',b'}$ for some $\lambda \in {\mathbb{C}}^*$.
As these functions depend (up to a constant) only on $(a,b)$ and $(a',b') \! \mod {\mathbb{Z}}^4$, one can assume that all the coefficients of $(a,b)$ and $(a',b')$ belong to $[-1/2,1/2[$, and we assume first that $a,a' \notin (1/2){\mathbb{Z}}^2$. Looking at the Fourier expansions \eqref{eqpresquedevFouriervartheta} gives that for every $k \in {\mathbb{Z}}^2$,
\[
e^{i \pi a^t b + 2 i \pi k^t b} = \lambda e^{i \pi a'^t b' + 2 i \pi k^t b'}.
\]
Hence, we have $b= b' \mod {\mathbb{Z}}^2$, which in turn gives $a= a' \mod {\mathbb{Z}}^2$. The same argument when $a$ or $a'$ belongs to $(1/2) {\mathbb{Z}}^2$ gives, by \eqref{eqpdevFouriervarthetamauvaiscas}, the possibilities $b=-b' \mod {\mathbb{Z}}^2$ and $a=-a' \mod {\mathbb{Z}}^2$.
Hence, we proved that if $\vartheta_{a,b}$ and $\vartheta_{a',b'}$ are proportional, then $(a,b)= \pm (a',b') \mod {\mathbb{Z}}^4$, and the converse is straightforward.
\end{proof}
These divisors have the following properties.
\begin{prop}[Properties of the $(D_{n,a,b})_{\mathbb{C}}$]
\hspace*{\fill}
\label{propproprietesDnabcomplexes}
Let $n \in {\mathbb{N}}_{\geq 2}$ be even.
$(a)$ For every regular $(a,b) \in ({\mathbb{Z}}/n{\mathbb{Z}})^4$, the divisor $(D_{n,a,b})_{\mathbb{C}}$ is ample.
$(b)$ For $n=2$, the ten divisors $(D_{2,a,b})_{\mathbb{C}}$ are set-theoretically pairwise disjoint outside the boundary $\partial A_2(2)_{\mathbb{C}} := A_2(2)^S_{\mathbb{C}} \backslash A_2(2)_{\mathbb{C}}$, and their union is exactly the set of moduli of products of elliptic curves (with any symplectic basis of the 2-torsion).
$(c)$ For $(A,\lambda,\alpha_n)$ a principally polarised complex abelian surface with symplectic structure of level $n$:
\begin{itemize}
\item If $(A,\lambda)$ is a product of elliptic curves, the moduli of $(A,\lambda,\alpha_n)$ belongs to exactly $n^2 - 3$ divisors $(D_{n,a,b})_{\mathbb{C}}$.
\item Otherwise, the moduli of $(A,\lambda,\alpha_n)$ belongs to at most $(\sqrt{2}/2)n^2 + 1/4$ divisors $(D_{n,a,b})_{\mathbb{C}}$.
\end{itemize}
\end{prop}
\begin{proof}
\hspace*{\fill}
$(a)$ The divisor $(D_{n,a,b})_{\mathbb{C}}$ is by definition the Weil divisor of zeroes of a Siegel modular form of degree 2, weight $4n$ and level $n$, hence of a section of $L^{\otimes 4n}$ on $A_2(n)_{\mathbb{C}}^S$. As $L$ is ample on $A_2(n)^S_{\mathbb{C}}$ (Definition-Proposition \ref{defipropSatakecompactification} $(c)$), the divisor $(D_{n,a,b})_{\mathbb{C}}$ is ample.
Now, we know that every complex pair $(A,\lambda)$ is isomorphic to some $(A_\tau,\lambda_\tau)$ with $\tau \in {\mathcal H}_2$ (Definition-Proposition \ref{defipropuniformcomplexabvar}). If $(A,\lambda)$ is a product of elliptic curves, the theta divisor of $(A,\lambda,\alpha_2)$ contains exactly seven 2-torsion points (Proposition \ref{propdivthetaproduitCE}), only one of which comes from a regular couple, i.e. $(A,\lambda,\alpha_2)$ is contained in exactly one of the ten divisors. If $(A,\lambda)$ is not a product of elliptic curves, it is a jacobian (Proposition \ref{propsurfabnonproduitCEetdivtheta}) and the theta divisor of $(A,\lambda,\alpha_2)$ only contains the six points coming from singular couples (Proposition \ref{propnombrepointsdivthetajacobienne}), i.e. $(A,\lambda,\alpha_2)$ does not belong to any of the ten divisors, which proves $(b)$.
To prove $(c)$, we use the same propositions for general $n$, keeping in mind that we only count as one the divisors coming from opposite values of $(a,b)$: for products of elliptic curves, this gives $2(n^2 - 4)/2 + 1 = n^2 - 3$ divisors (the $1$ coming from the unique regular $2$-torsion point of the theta divisor), and for jacobians, this gives at most $(\sqrt{2}/2)n^2 + 1/4$ (all the $2$-torsion points of the theta divisor come from singular couples here).
\end{proof}
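To fix ideas, for $n=4$, the moduli of a product of elliptic curves thus belongs to exactly $4^2 - 3 = 13$ of the $4^4/2 + 2 = 130$ divisors $(D_{4,a,b})_{\mathbb{C}}$, whereas the moduli of a jacobian belongs to at most $(\sqrt{2}/2) \cdot 4^2 + 1/4 < 12$ of them.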
We will now give the natural divisors extending $(D_{n,a,b})_{\mathbb{C}}$ on the integral models ${\mathcal A}_2(n)$ (Definition-Proposition \ref{defipropalgmodulispaces}).
\begin{defi}
\hspace*{\fill}
\label{defidivthetasurespacemoduleentier}
Let $n \in {\mathbb{N}}_{\geq 2}$ be even.
For every regular $(a,b) \in ({\mathbb{Z}}/n{\mathbb{Z}})^4$, the divisor $(D_{n,a,b})_{\mathbb{C}}$ is the geometric fiber at ${\mathbb{C}}$ of an effective Weil divisor $D_{n,a,b}$ on ${\mathcal A}_2(n)$, such that the moduli of a triple $(A,\lambda,\alpha_n)$ (over a field $k$ of characteristic prime to $n$) belongs to $D_{n,a,b} (k)$ if and only if the point of $A[n](\overline{k})$ of coordinates $(a,b)$ for $\alpha_n$ belongs to the theta divisor $\Theta_{A,\lambda,\alpha_n}$ (Definition-Proposition \ref{defipropthetadiviseurcanonique}).
\end{defi}
\begin{proof}
\hspace*{\fill}
This amounts to giving an algebraic construction of the $D_{n,a,b}$ with the desired properties. The following arguments are extracted from Remark I.5.2 of \cite{ChaiFaltings}. Let $\pi : A \rightarrow S$ be an abelian scheme and ${\mathcal L}$ a symmetric invertible sheaf on $A$, relatively ample over $S$ and inducing a principal polarisation on $A$. If $s : S \rightarrow A$ is a section of $A$ over $S$, the evaluation at $s$ induces an ${\mathcal O}_S$-module isomorphism between $\pi_*{\mathcal L}$ and $s^*{\mathcal L}$. Now, if $s$ is of $n$-torsion in $A$, for $e : S \rightarrow A$ the zero section, the sheaf $(s^* {\mathcal L})^{\otimes 2n}$ is isomorphic to $(e^* {\mathcal L})^{\otimes 2n}$, i.e. trivial. We denote by $\omega_{A/S}$ the invertible sheaf on $S$ obtained as the determinant of the sheaf of invariant differential forms on $A$, and the computations of Theorem I.5.1 and Remark I.5.2 of \cite{ChaiFaltings} give $8 \pi_* {\mathcal L} = - 4 \omega_{A/S}$ in $\operatorname{Pic}(S)$. Consequently, the evaluation at $s$ defines (after a choice of trivialisation of $(e^* {\mathcal L})^{\otimes 2n}$ and raising to the power $8n$) a section of $\omega_{A/S}^ {\otimes 4n}$. Applying this result to the universal abelian scheme (stack if $n \leq 2$) ${\mathcal X}_2(n)$ over ${\mathcal A}_2(n)$, for every $(a,b) \in ({\mathbb{Z}}/n{\mathbb{Z}})^4$, the section defined by the point of coordinates $(a,b)$ for the $n$-structure on ${\mathcal X}_2(n)$ induces a global section $s_{a,b}$ of $\omega_{{\mathcal X}_2(n)/{\mathcal A}_2(n)}^{\otimes 4n}$, and we define $D_{n,a,b}$ as the Weil divisor of zeroes of this section. It remains to check that it satisfies these properties.
Let $(A,\lambda,\alpha_n)$ be a triple over a field $k$ of characteristic prime to $n$, and $L$ the ample line bundle associated to it by Definition-Proposition \ref{defipropthetadiviseurcanonique}. By construction, its moduli belongs to $D_{n,a,b}$ if and only if the unique (up to a constant) nonzero global section of $L$ vanishes at the point of $A[n]$ of coordinates $(a,b)$ in $\alpha_n$, hence if and only if this point belongs to $\Theta_{A,\lambda,\alpha_n}$.
Finally, we see that the process described above, applied to the universal abelian variety ${\mathcal X}_2(n)_{\mathbb{C}}$ over ${\mathcal A}_2(n)_{\mathbb{C}}$ (by means of the explicit description of the line bundles as quotients), gives (up to invertible holomorphic functions) the functions $\vartheta_{\widetilde{a}/n,\widetilde{b}/n}^{8n}$, which proves that $(D_{n,a,b})_{\mathbb{C}}$ is indeed the geometric fiber of $D_{n,a,b}$ (it is easier to see that their complex points are the same, by Proposition \ref{propproprietesDnabcomplexes} $(c)$ and the above characterisation applied to the field ${\mathbb{C}}$).
If one does not want to use stacks for $n=2$, one can consider for $(a,b) \in ({\mathbb{Z}}/2{\mathbb{Z}})^4$ the divisor $D_{4,2a,2b}$ which is the pullback of $D_{2,a,b}$ by the degeneracy morphism $A_2(4) \rightarrow A_2(2)$.
\end{proof}
\subsection{Tubular Runge theorems for abelian surfaces and their theta divisors}
\label{subsectubularRungethmabsur}
We can now prove a family of tubular Runge theorems for the theta divisors $D_{n,a,b}$ (for even $n \geq 2$).
We will state the case $n=2$ first because its moduli interpretation is easier, but the proofs are the same, as we explain below.
In the following results, the \textit{boundary} of $A_2(n)^S_{\mathbb{C}}$ is defined as $\partial A_2(n)^S_{\mathbb{C}} := A_2(n)^S_{\mathbb{C}} \backslash A_2(n)_{\mathbb{C}}$.
\begin{thm}[Tubular Runge for products of elliptic curves on ${\mathcal A}_2(2)^S$]
\hspace*{\fill}
\label{thmtubularRungeproduitCE}
Let $U$ be an open neighbourhood of $\partial A_2(2)^S_{\mathbb{C}}$ in $A_2(2)^S_{\mathbb{C}}$ for the natural complex topology.
For any such $U$, we define ${\mathcal E}(U)$ as the set of moduli $P$ of triples $(A,\lambda,\alpha_2)$ in ${\mathcal A}_2(2) (\overline{\Q})$ such that (choosing a number field of definition $L$ of the moduli):
\vspace{-0.2cm}
\begin{itemize}
\item The abelian surface $A$ has potentially good reduction at every finite place $w \in M_L$ (\textbf{tubular condition for finite places}).
\vspace{-0.2cm}
\item For any embedding $\sigma : L \rightarrow {\mathbb{C}}$, the image $P_\sigma$ of $P$ in ${\mathcal A}_2(2)_{\mathbb{C}}$ is outside of $U$ (\textbf{tubular condition for archimedean places}).
\vspace{-0.2cm}
\item The number $s_L$ of non-integrality places of $P$, i.e. places $w \in M_L$ such that
\begin{itemize}
\vspace{-0.2cm}
\item either $w$ is archimedean or above $2$,
\vspace{-0.2cm}
\item or the semistable reduction modulo $w$ of $(A,\lambda)$ is a product of elliptic curves
\end{itemize}
\vspace{-0.2cm}
satisfies the \textbf{tubular Runge condition}
\vspace{-0.2cm}
\[
s_L < 10.
\]
\end{itemize}
\vspace{-0.2cm}
Then, for every choice of $U$, the set ${\mathcal E}(U)$ is \textbf{finite}.
\end{thm}
\begin{thm}[Tubular Runge for theta divisors on ${\mathcal A}_2(n)^S$]
\hspace*{\fill}
\label{thmtubularRungegeneral}
Let $n \geq 4$ be even.
Let $U$ be an open neighbourhood of $\partial A_2(n)^S_{\mathbb{C}}$ in $A_2(n)^S_{\mathbb{C}}$ for the natural complex topology.
For any such $U$, we define ${\mathcal E}(U)$ as the set of moduli $P$ of triples $(A,\lambda,\alpha_n)$ in ${\mathcal A}_2(n) (\overline{\Q})$ such that (choosing a number field of definition $L \supset {\mathbb{Q}}(\zeta_n)$ of the triple):
\vspace{-0.2cm}
\begin{itemize}
\item The abelian surface $A$ has potentially good reduction at every finite place $w \in M_L$ (\textbf{tubular condition for finite places}).
\vspace{-0.2cm}
\item For any embedding $\sigma : L \rightarrow {\mathbb{C}}$, the image $P_\sigma$ of $P$ in ${\mathcal A}_2(n)_{\mathbb{C}}$ is outside of $U$ (\textbf{tubular condition for archimedean places}).
\vspace{-0.2cm}
\item The number $s_L$ of non-integrality places of $P$, i.e. places $w \in M_L$ such that
\vspace{-0.2cm}
\begin{itemize}
\vspace{-0.2cm}
\item either $w$ is archimedean or above a prime factor of $n$,
\vspace{-0.2cm}
\item or the theta divisor of the semistable reduction modulo $w$ of $(A,\lambda,\alpha_n)$ contains an $n$-torsion point which is not one of the six points coming from odd characteristics,
\end{itemize}
\vspace{-0.2cm}
satisfies the \textbf{tubular Runge condition}
\[
(n^2 - 3) s_L < \frac{n^4}{2} + 2.
\]
\end{itemize}
\vspace{-0.2cm}
Then, for every choice of $U$, the set of points ${\mathcal E}(U)$ is \textbf{finite}.
\end{thm}
\begin{rem}
We put emphasis on the conditions given in the theorems to make it easier to identify how they follow from our main result, Theorem \ref{thmRungetubulaire}. The tubular conditions (archimedean and finite) mean that our points $P$ do not belong to some tubular neighbourhood ${\mathcal V}$ of the boundary. We of course chose the boundary as the closed subset to exclude because of its modular interpretation at the finite places. The places above $M_L^{\infty}$ or a prime factor of $n$ are automatically places of non-integrality for our divisors because the model ${\mathcal A}_2(n)$ is not defined at these places. Finally, the second way of being a place of non-integrality comes straightforwardly from the moduli interpretation of the divisors $D_{n,a,b}$ (Definition \ref{defidivthetasurespacemoduleentier}). All this is detailed in the proof below.
To give an example of how we can obtain an explicit result in practice, we prove in section \ref{sectionexplicitRunge} an explicit (and even theoretically better) version of Theorem \ref{thmtubularRungeproduitCE}.
It would be more satisfying (and easier to express) to give a tubular Runge theorem for which the divisors considered are exactly the irreducible components parametrising the products of elliptic curves. Unfortunately, except for $n=2$, there is a serious obstruction because those divisors are not ample, and there are even reasons to suspect they are not big. We have explained in Remark \ref{remampledifficilepourA2} why proving the ampleness for general divisors on $A_2(n)^S_{\mathbb{C}}$ is difficult.
It would also be morally satisfying to give a better interpretation of the moduli of $D_{n,a,b}$ for $n >2$, i.e. not in terms of the theta divisor, but maybe of the structure of the abelian surface if possible (nontrivial endomorphisms? isogenous to products of elliptic curves?). As far as the author knows, the understanding of abelian surfaces admitting some nontrivial torsion points on their theta divisor is still very limited.
Finally, to give an idea of the margin the tubular Runge condition leaves for $n>2$ (in terms of the number of places which are not ``taken'' by the automatically bad places), we can easily see that the number of places of ${\mathbb{Q}}(\zeta_n)$ which are archimedean or above a prime factor of $n$ is less than $n/2$. Hence, we can find extensions $L$ of ${\mathbb{Q}}(\zeta_n)$ of degree $n$ such that points defined over them can still satisfy the tubular Runge condition. This is also where using the full strength of the tubular Runge theorem is crucial: for $n=2$, one can compute that some points of the boundary are contained in 6 different divisors $D_{2,a,b}$, and for general even $n$, a similar analysis gives that the intersection number $m_{\emptyset}$ is quartic in $n$, which leaves far less margin for the places of non-integrality (or even none at all).
\end{rem}
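To fix ideas with $n = 4$: the tubular Runge condition reads $13 \, s_L < 130$, i.e. $s_L \leq 9$, while ${\mathbb{Q}}(\zeta_4) = {\mathbb{Q}}(i)$ has exactly one archimedean place and one place above $2$, both automatically places of non-integrality; the condition therefore tolerates up to $7$ further places where the theta divisor of the reduction contains a $4$-torsion point other than the six coming from odd characteristics.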
\begin{proof}[Proof of Theorems \ref{thmtubularRungeproduitCE} and \ref{thmtubularRungegeneral}]
\hspace*{\fill}
As announced, this result is an application of the tubular Runge theorem (Theorem \ref{thmRungetubulaire}) to ${\mathcal A}_2(n)^S_{{\mathbb{Q}}(\zeta_n)}$ (Definition-Proposition \ref{defipropalgmodulispaces}) and the divisors $D_{n,a,b}$ (Definition \ref{defidivthetasurespacemoduleentier}), whose properties will be used without specific mention. We reuse the notations of the hypotheses of Theorem \ref{thmRungetubulaire} to explain carefully how it is applied.
\textbf{\textit{(H0)}} The field of definition of $A_2(n)^S_{\mathbb{C}}$ is ${\mathbb{Q}}(\zeta_n)$, and the ring over which our model ${\mathcal A}_2(n)^S$ is built is ${\mathbb{Z}} [ \zeta_n, 1/n]$, hence $S_0$ is made up of all the archimedean places and the places above prime factors of $n$. There is no need for a finite extension here as all the $D_{n,a,b}$ are divisors on ${\mathcal A}_2(n)^S$.
\textbf{\textit{(H1)}} The model ${\mathcal A}_2(n)^S$ is indeed normal projective, and we know that the $D_{n,a,b}$ are effective Weil divisors, hence Cartier divisors up to multiplication by some constant by Proposition \ref{proprationalPicardSiegel}. For any finite extension $L$ of ${\mathbb{Q}}(\zeta_n)$, the number of orbits $r_L$ is the number of divisors $D_{n,a,b}$ (as they are divisors on the base model), i.e. $n^4/2 + 2$ (Definition-Proposition \ref{defipropdivthetaA2ncomplexes} $(c)$).
\textbf{\textit{(H2)}} The chosen closed subset $Y$ of ${\mathcal A}_2(n)^S_{{\mathbb{Q}}(\zeta_n)}$ is the boundary, namely
\[
\partial {\mathcal A}_2(n)^S_{{\mathbb{Q}}(\zeta_n)} = {\mathcal A}_2(n)^S_{{\mathbb{Q}}(\zeta_n)} \backslash {\mathcal A}_2(n)_{{\mathbb{Q}}(\zeta_n)}.
\]
We have to prove that the tubular conditions given above correspond to a tubular neighbourhood. To do this, let ${\mathcal Y}$ be the boundary ${\mathcal A}_2(n)^S \backslash {\mathcal A}_2(n)$ and $g_1, \cdots, g_s$ homogeneous generators of the ideal of definition of ${\mathcal Y}$ after having fixed a projective embedding of ${\mathcal A}_2(n)$. Let us find an $M_{{\mathbb{Q}}(\zeta_n)}$-constant such that ${\mathcal E}(U)$ avoids the tubular neighbourhood of $\partial {\mathcal A}_2(n)^S_{{\mathbb{Q}}(\zeta_n)}$ in $A_2(n)^S_{{\mathbb{Q}}(\zeta_n)}$ associated to ${\mathcal C}$ and $g_1, \cdots, g_s$. For the places $w$ neither archimedean nor above a prime factor of $n$, the fact that $P = (A,\lambda,\alpha_n)$ does not reduce in $Y$ modulo $w$ is exactly equivalent to $A$ having potentially good reduction at $w$, hence we can choose $c_v= 0$ for the places $v$ of ${\mathbb{Q}}(\zeta_n)$ which are not archimedean and do not divide $n$. For archimedean places, belonging to $U$ for an embedding $\sigma : L \rightarrow {\mathbb{C}}$ implies that $g_1, \cdots, g_s$ are small, and we just have to choose $c_v$ strictly larger than the maximum of the norms of the $g_i(U \cap V_j)$ (in the natural affine covering $(V_j)_j$ of the projective space), independent of the choice of $v \in M_{{\mathbb{Q}}(\zeta_n)}^\infty$. Finally, we have to consider the places above a prime factor of $n$. To do this, we only have to recall that having potentially good reduction can be read off the integrality of some quotients of the Igusa invariants at finite places, and these invariants are modular forms on $\Gamma_2(1)$. We can add those which vanish on the boundary to the homogeneous generators $g_1, \cdots, g_s$ and take $c_v=0$ for these places as well. This is explicitly done in part \ref{subsecplacesabove2} for $A_2(2)$.
\textbf{\textit{(TRC)}} As said before, there are $n^4/2 + 2$ divisors to consider, and their generic fibers are ample by Proposition \ref{propproprietesDnabcomplexes}. Furthermore, by Propositions \ref{propdivthetaproduitCE} and \ref{propnombrepointsdivthetajacobienne}, outside the boundary, at most $n^2 - 3$ of them can have a nonempty common intersection, and this exact number is attained only for products of elliptic curves (as $n^2 - 3 = 2(n^2 - 4)/2 + 1$, separating the regular 2-torsion couples from the regular non-2-torsion couples up to $\pm 1$).
This gives the tubular Runge condition
\[
(n^2 - 3) s_L < n^4/2 + 2,
\]
which concludes the proof.
For $n=2$, the union of the ten $D_{2,a,b}$ consists of the moduli of products of elliptic curves, and they are pairwise disjoint outside $\partial A_2(2)$ (Proposition \ref{propproprietesDnabcomplexes} $(b)$), hence the simply-expressed condition $s_L<10$ in this case.
\end{proof}
\section{The explicit Runge result for level two}
\label{sectionexplicitRunge}
To finish this paper, we improve and make explicit the finiteness result of Theorem \ref{thmtubularRungeproduitCE}, as a proof of principle of the method.
Before stating Theorem \ref{thmproduitCEexplicite}, we need some notation. In level two, the auxiliary functions are deduced from the ten even theta constants with characteristics in $\frac{1}{2}{\mathbb{Z}}^4$, namely the functions $\Theta_{m/2} (\tau)$ (notation \eqref{eqdefseriethetaab}), with the quadruples $m$ going through
\begin{equation}
\label{eqevenchartheta}
E = \{(0000),(0001),(0010),(0011),(0100),(0110),(1000),(1001),(1100),(1111) \}
\end{equation}
(see subsections \ref{subsecthetadivabvar} and \ref{subsecthetadivabsur} for details).
We recall (\cite{vdG82}, Theorem 5.2) that these functions define an embedding
\begin{equation}
\label{eqdefplongementpsi}
\fonction{\psi}{A_2(2)}{\P^9}{\overline{\tau}}{(\Theta_{m/2}^4 (\tau))_{m \in E}}
\end{equation}
which induces an isomorphism between $A_2(2)^S_{\mathbb{C}}$ and the subvariety of $\P^9$ (with coordinates indexed by $m \in E$) defined by the linear equations
\begin{eqnarray}
x_{1000} - x_{1100} + x_{1111} - x_{1001} & = & 0 \\
x_{0000} - x_{0001} - x_{0110} - x_{1100} & = & 0 \\
x_{0110} - x_{0010} + x_{1111} + x_{0011} & = & 0 \\
x_{0100} - x_{0000} + x_{1001} + x_{0011} & = & 0 \\
x_{0100} - x_{1000} + x_{0001} - x_{0010} & = & 0
\end{eqnarray}
(which makes it a subvariety of $\P^4$) together with the quartic equation
\begin{equation}
\left( \sum_{m \in E} x_m^2 \right)^2 - 4 \sum_{m \in E} x_m^4 = 0.
\end{equation}
\begin{rem}
For the attentive reader, the first linear equation has sign $(+1)$ in $x_{1111}$ whereas it is $(-1)$ in \cite{vdG82}, as there seems to be a typographic mistake there: we noticed it during our computations on Sage in part \ref{subsecplacesabove2} and recovered the right sign from Igusa's relations (\cite{Igusa64bis}, Lemma 1 combined with the proof of Theorem 1).
\end{rem}
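These identities lend themselves to a direct numerical sanity check from truncated theta series. The following Python sketch does this at an arbitrary sample point of ${\mathcal H}_2$; it assumes the standard series with characteristics $a = (m_1,m_2)/2$, $b = (m_3,m_4)/2$ for $\Theta_{m/2}$ (any discrepancy of phase convention between references disappears in the fourth powers for even $m$), and the two residuals printed, for the first linear equation with the corrected sign and for the quartic equation, should be of the order of the truncation error.
\begin{verbatim}
import cmath

E = ["0000", "0001", "0010", "0011", "0100",
     "0110", "1000", "1001", "1100", "1111"]

def theta4(m, tau, N=8):
    # Fourth power of the truncated series
    #   sum over k in Z^2 of exp(i*pi*(k+a) tau ^t(k+a) + 2i*pi*(k+a) ^t b)
    a = (int(m[0]) / 2, int(m[1]) / 2)
    b = (int(m[2]) / 2, int(m[3]) / 2)
    s = 0
    for k1 in range(-N, N + 1):
        for k2 in range(-N, N + 1):
            v = (k1 + a[0], k2 + a[1])
            q = (v[0] * (tau[0][0] * v[0] + tau[0][1] * v[1])
                 + v[1] * (tau[1][0] * v[0] + tau[1][1] * v[1]))
            s += cmath.exp(1j * cmath.pi * q
                           + 2j * cmath.pi * (v[0] * b[0] + v[1] * b[1]))
    return s ** 4

tau = [[0.3 + 1.1j, 0.1 + 0.2j],
       [0.1 + 0.2j, -0.2 + 0.9j]]        # a sample point of H_2
x = {m: theta4(m, tau) for m in E}
print(abs(x["1000"] - x["1100"] + x["1111"] - x["1001"]))  # ~ 0 (truncation error)
print(abs(sum(v ** 2 for v in x.values()) ** 2
          - 4 * sum(v ** 4 for v in x.values())))          # ~ 0 (truncation error)
\end{verbatim}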
There is a natural definition for a tubular neighbourhood of $Y = \partial A_2(2)$: for a finite place $v$, as in Theorem \ref{thmtubularRungeproduitCE}, we choose $V_v$ as the set of triples $P = \overline{(A,\lambda,\alpha_2)}$ where $A$ has potentially bad reduction modulo $v$. To complete it at the archimedean places, we use the classical fundamental domain ${\mathcal F}_2$ for the action of $\operatorname{Sp}_4({\mathbb{Z}})$ on ${\mathcal H}_2$ (see \cite{Klingen}, section I.2 for details). Given a real parameter $t \geq \sqrt{3}/2$, the neighbourhood $V(t)$ of $\partial A_2(2)_{\mathbb{C}}^S$ in $A_2(2)^S_{\mathbb{C}}$ consists of the points $P$ whose lift $\tau$ in ${\mathcal F}_2$ (for the usual quotient morphism ${\mathcal H}_2 \rightarrow A_2(1)_{\mathbb{C}}$) satisfies $\Im (\tau_4) \geq t$, where $\tau_4$ is the lower-right coefficient of $\tau$. We choose $V(t)$ as the archimedean component of the tubular neighbourhood for every archimedean place. The reader familiar with the construction of the Satake compactification will have already seen such neighbourhoods of the boundary.
Notice that for a point $P=\overline{(A,\lambda,\alpha_2)} \in A_2(2)(K)$, the abelian surface $A$ is only defined over a finite extension $L$ of $K$, but for prime ideals ${\mathfrak{P}}_1$ and ${\mathfrak{P}}_2$ of ${\mathcal O}_L$ above the same prime ideal ${\mathfrak{P}}$ of ${\mathcal O}_K$, the reductions of $A$ modulo ${\mathfrak{P}}_1$ and ${\mathfrak{P}}_2$ are of the same type because $P \in A_2(2) (K)$. This gives a meaning to the phrase ``semistable reduction of $A$ modulo ${\mathfrak{P}}$'' below.
\begin{thm}
\label{thmproduitCEexplicite}
Let $K$ be a number field and $P=\overline{(A,\lambda,\alpha_2)} \in A_2(2)(K)$ where $A$ has potentially good reduction at every finite place.
Let $s_P$ be the number of prime ideals ${\mathfrak{P}}$ of ${\mathcal O}_K$ such that the semistable reduction of $A$ modulo ${\mathfrak{P}}$ is a product of elliptic curves. We denote by $h_{\mathcal F}$ the stable Faltings height of $A$.
$(a)$ If $K={\mathbb{Q}}$ or an imaginary quadratic field and
\[
s_P<4
\]
then
\[
h(\psi(P)) \leq 10.75, \quad h_{\mathcal F}(A) \leq 1070.
\]
$(b)$ Let $t \geq \sqrt{3}/2$ be a real number. If for any embedding $\sigma : K \rightarrow {\mathbb{C}}$, the point $P_\sigma \in A_2(2)_{\mathbb{C}}$ does not belong to $V(t)$, and
\[
s_P + |M_K^{\infty}| < 10
\]
then
\[
h(\psi(P)) \leq 4 \pi t + 6.14, \quad h_{\mathcal F}(A) \leq 2 \pi t + 535 \log(2 \pi t + 9).
\]
\end{thm}
The Runge condition of $(b)$ is a straightforward application of our tubular Runge theorem. For $(a)$, we did not assume anything on the point $P$ at the (unique) archimedean place, which eliminates six divisors when applying Runge's method, hence the different Runge condition (see Remark \ref{remRungetubulaire} $(b)$).
The principle of proof is very simple: we apply Runge's method to bound the height of $\psi(P)$ when $P$ satisfies the conditions of Theorem \ref{thmtubularRungeproduitCE}, and using the link between this height and the Faltings height given in (\cite{Pazuki12b}, Corollary 1.3), we know we will obtain a bound of the shape
\[
h_{\mathcal F}(A) \leq f(t)
\]
where $f$ is an explicit function of $t$, for every point $P$ satisfying the conditions of Theorem \ref{thmtubularRungeproduitCE}.
At the places of good reduction not dividing 2, the contribution to the height is easy to compute thanks to the theory of algebraic theta functions devised in \cite{Mumford66} and \cite{Mumford67}. The theory will be sketched in part \ref{subsecalgebraicthetafunctions}, resulting in Proposition \ref{propalgthetafoncetreduchorsde2}.
For the archimedean places, preexisting estimates due to Streng for the Fourier expansions of the ten theta functions allow us to make explicit how at most one of them can be too small compared to the others when we are outside $V(t)$. This is the topic of part \ref{subsecarchimedeanplaces}.
For the places above 2, the theory of algebraic theta functions cannot be applied. To bypass this problem, we use the Igusa invariants (which behave in a well-understood fashion under reduction in any characteristic) and prove that the theta functions are algebraic and ``almost integral'' over the ring of these Igusa invariants, with explicit coefficients. Combining these two facts in part \ref{subsecplacesabove2}, we will obtain Proposition \ref{propbornesfoncthetaaudessus2}, a less sharp avatar of Proposition \ref{propalgthetafoncetreduchorsde2}, but an explicit one nonetheless.
Finally, we put together these estimates in part \ref{subsecfinalresultRungeCEexplicite} and obtain the stated bounds on $h \circ \psi$ and the Faltings height.
\subsection{Algebraic theta functions and the places of potentially good reduction outside of 2}
\label{subsecalgebraicthetafunctions}
The goal of this part is the following result.
\begin{prop}
\label{propalgthetafoncetreduchorsde2}
Let $K$ be a number field and ${\mathfrak{P}}$ a maximal ideal of ${\mathcal O}_K$, of residue field $k({\mathfrak{P}})$ with characteristic different from 2.
Let $P = \overline{(A, \lambda,\alpha_2)} \in A_2(2)(K)$. Then, $\psi(P) \in \P^9(K)$ and:
$(a)$ If the semistable reduction of $A$ modulo ${\mathfrak{P}}$ is a product of elliptic curves, the reduction of $\psi(P)$ modulo ${\mathfrak{P}}$ has exactly one zero coordinate; in other words, every coordinate of $\psi(P)$ has the same ${\mathfrak{P}}$-adic norm except one, which is strictly smaller.
$(b)$ If the semistable reduction of $A$ modulo ${\mathfrak{P}}$ is the jacobian of a hyperelliptic curve, the reduction of $\psi(P)$ modulo ${\mathfrak{P}}$ has no zero coordinate; in other words, every coordinate of $\psi(P)$ has the same ${\mathfrak{P}}$-adic norm.
\end{prop}
To link $\psi(P)$ with the intrinsic behaviour of $A$, we use the theory of algebraic theta functions, devised in \cite{Mumford66} and \cite{Mumford67} (see also \cite{DavidPhilippon} and \cite{Pazuki12b}). As it would be neither very useful nor enlightening to go into detail or repeat known results, we only mention them briefly here.
In the following, $A$ is an abelian variety of dimension $g$ over a field $k$ and $L$ an ample symmetric line bundle on $A$ inducing a principal polarisation $\lambda$. We also fix an even $n \geq 2$, assuming that all the points of $2n$-torsion of $A$ are defined over $k$ and that $\textrm{char}(k)$ does not divide $n$ (in particular, we always assume $\textrm{char}(k) \neq 2$). Let us define formally the Heisenberg group ${\mathcal G}(\underline{n})$ as the set
\[
{\mathcal G}(\underline{n}) := k^* \times ({\mathbb{Z}}/n{\mathbb{Z}})^g \times ({\mathbb{Z}}/n{\mathbb{Z}})^g
\]
equipped with the group law
\[
(\alpha,a,b) \cdot (\alpha',a',b') := (\alpha \alpha' e^{\frac{2 i \pi}{n} a{}^t b'},a+a',b+b')
\]
(contrary to the convention of \cite{Mumford66}, p.294, we identified the dual of $({\mathbb{Z}}/n{\mathbb{Z}})^g$ with itself). Recall that $A[n]$ is exactly the group of elements $x$ of $A(\overline{k})$ such that $T_x^* (L^{\otimes n}) \cong L^{\otimes n}$: indeed, it is the kernel of the morphism $\lambda_{L^{\otimes n}} = n \lambda$ from $A$ to $\widehat{A}$ (see the proof of Proposition \ref{propambiguitedivthetaAL}).
\begin{proof}
Given the datum of a \textit{theta structure} on $L^{\otimes n}$, i.e. an isomorphism $\beta: {\mathcal G}(L^{\otimes n}) \cong {\mathcal G}(\underline{n})$ which is the identity on $k^*$ (see \cite{Mumford66}, p. 289 for the definition of ${\mathcal G}(L^{\otimes n})$), one has a natural action of ${\mathcal G}(\underline{n})$ on $\Gamma(A,L^{\otimes n})$ (a consequence of Proposition 3 and Theorem 2 of \cite{Mumford66}), hence for $n \geq 4$ the following projective embedding of $A$:
\begin{equation}
\label{eqplongementA}
\fonction{\psi_\beta}{A}{\P^{n^{2g} - 1}_k}{x}{\left( ((1,a,b)\cdot ( s_0^{\otimes n})) (x) \right)_{a,b \in ({\mathbb{Z}}/n{\mathbb{Z}})^g}},
\end{equation}
where $s_0$ is a nonzero element of $\Gamma(A,L)$, hence unique up to a multiplicative scalar (therefore $\psi_\beta$ only depends on $\beta$). This embedding is not exactly the same as the one defined in (\cite{Mumford66}, p. 298) (it has more coordinates), but the principle does not change at all. One calls \textit{Mumford coordinates of $(A,L)$ associated to $\beta$} the projective point $\psi_\beta(0) \in \P^{n^{2g}-1}(k)$.
Now, one has the following commutative diagram whose rows are canonical exact sequences (\cite{Mumford66}, Corollary of Theorem 1)
\[
\xymatrix{
0 \ar[r] & k^* \ar[d]^{=} \ar[r] & {\mathcal G}(L^{\otimes n}) \ar[r] \ar[d]^{\beta}& A[n] \ar[d]^{\alpha_n} \ar[r] & 0 \\
0 \ar[r] & k^* \ar[r] & {\mathcal G}(\underline{n}) \ar[r] & ({\mathbb{Z}}/n{\mathbb{Z}})^{2g} \ar[r] & 0,
}
\]
where $\alpha_n$ is a symplectic level $n$ structure on $A[n]$ (Definition \ref{defibaseabvar}), called \textit{the symplectic level $n$ structure induced by $\beta$}. Moreover, for every $x \in A(k)$, the coordinates of $\psi_\beta (x)$ are (up to constant values for each coordinate, only depending on $\beta$) the $\vartheta_{A,L} ([n] x +\alpha_{n}^{-1} (a,b))$ (see Definition \ref{defithetadivisorabvar}). In particular, for any $a,b \in ({\mathbb{Z}}/n {\mathbb{Z}})^g$,
\begin{equation}
\label{eqliencoordonneesMumfordetdivtheta}
\psi_\beta(0)_{a,b} = 0 \Leftrightarrow \alpha_n^{-1} (a,b) \in \Theta_{A,L}.
\end{equation}
Furthermore, for two theta structures $\beta,\beta'$ on $L^{\otimes n}$ inducing $\alpha_n$, one sees that $\beta' \circ \beta^{-1}$ is of the shape $(\alpha,a,b) \mapsto (\alpha \cdot f(a,b),a,b)$, where $f$ has values in $n$-th roots of unity, hence $\psi_\beta$ and $\psi_{\beta'}$ only differ multiplicatively by $n$-th roots of unity.
Conversely, given the datum of a symplectic structure $\alpha_{2n}$ on $A[2n]$, there exists a unique \textit{symmetric theta structure} on $L^{\otimes n}$ which is \textit{compatible} with some symmetric theta structure on $L^{\otimes 2n}$ inducing $\alpha_{2n}$ (\cite{Mumford66}, p.317 and Remark 3 p.319). We call it the \textit{theta structure on $L^{\otimes n}$ induced by $\alpha_{2n}$}. Thus, we just proved that the datum of a symmetric theta structure on $L^{\otimes n}$ is intermediate between a level $2n$ symplectic structure and a level $n$ symplectic structure (the exact congruence group is easily identified as $\Gamma_g(n,2n)$ with the notations of \cite{Igusa66}).
Now, for a triple $(A,L,\alpha_{2n})$ (notations of subsection \ref{subsecabvarSiegelmodvar}), when $A$ is a complex abelian variety, there exists $\tau \in {\mathcal H}_g$ such that this triple is isomorphic to $(A_\tau,L_\tau,\alpha_{\tau,2n})$ (Definition-Proposition \ref{defipropuniformcomplexabvar}). By definition of $L_\tau$ as a quotient \eqref{eqdeffibresurAtau}, the sections of $L_\tau ^{\otimes n}$ canonically identify to holomorphic functions $\vartheta$ on ${\mathbb{C}}^g$ such that
\begin{equation}
\label{eqsectionsLtaupuissancen}
\forall p,q \in {\mathbb{Z}}^g, \forall z \in {\mathbb{C}}^g, \quad \vartheta(z + p \tau + q) = e^{ - i \pi n p \tau {}^t p - 2 i \pi n p {}^t z} \vartheta(z),
\end{equation}
and through this identification one sees (after some tedious computations) that the symmetric theta structure $\beta_\tau$ on $L_\tau^{\otimes n}$ induced by $\alpha_{\tau,2n}$ acts by
\[
((\alpha,a,b) \cdot \vartheta) (z) = \alpha \exp\left( \frac{i \pi}{n} \widetilde{a} \tau {}^t \widetilde{a} + \frac{2 i \pi}{n} \widetilde{a}{}^t (z+\widetilde{b}) \right) \vartheta \left( z+\frac{\widetilde{a}}{n} \tau + \frac{\widetilde{b}}{n} \right),
\]
where $\widetilde{a},\widetilde{b}$ are lifts of $a,b$ in ${\mathbb{Z}}^g$ (the result does not depend on this choice by \eqref{eqsectionsLtaupuissancen}). Therefore, comparing $\psi_\beta$ with the theta functions with characteristics (formula \eqref{eqthetaabenfonctiontheta}), the Mumford coordinates of $(A,L,\alpha_{2n})$ (with the induced theta structure $\beta$ on $L^{\otimes n}$) are \textit{exactly} the projective coordinates
\[
\left( \Theta_{\widetilde{a}/n,\widetilde{b}/n}^n(\tau) \right)_{(a,b) \in \frac{1}{n} {\mathbb{Z}}^{2g} / {\mathbb{Z}}^{2g}} \in \P^{n^{2g}-1} ({\mathbb{C}}),
\]
where the choices of lifts $\widetilde{a}$ and $\widetilde{b}$ for $a$ and $b$ still do not matter.
In particular, for every $\tau \in {\mathcal H}_2$, the point $\psi(\tau)$ can be intrinsically given as the squares of Mumford coordinates for $\beta_\tau$, where the six odd characteristics (whose coordinates vanish everywhere) are taken out. The result only depends on the isomorphism class of $(A_\tau,L_\tau,\alpha_{\tau,2})$, as expected.
Finally, as demonstrated in paragraph 6 of \cite{Mumford67} (especially the Theorem p. 83), the theory of theta structures (and the associated Mumford coordinates) extends to abelian schemes (Definition \ref{defabelianscheme}) (still outside characteristics dividing $2n$), and the Mumford coordinates in this context lead to an embedding of the associated moduli space in a projective space as long as the \textit{type} of the sheaf is a multiple of 8 (which for us amounts to $8|n$). Here, fixing a principally polarised abelian variety $A$ over a number field $K$ and ${\mathfrak{P}}$ a prime ideal of ${\mathcal O}_K$ not above 2, this theory means that, given a symmetric theta structure on $(A,L)$ for $L^{\otimes n}$ where $8|n$, if $A$ has good reduction modulo ${\mathfrak{P}}$, this theta structure has a natural reduction to a theta structure on the reduction $(A_{{\mathfrak{P}}}, L_{{\mathfrak{P}}})$ for $L_{\mathfrak{P}}^{\otimes n}$, and this reduction is compatible with the reduction of the Mumford coordinates modulo ${\mathfrak{P}}$. To link this with the reduction of the coordinates of $\psi$, one just has to extend the number field $K$ of definition of $A$ so that all 8-torsion points of $A$ are defined over $K$ (in particular, the reduction of $A$ modulo ${\mathfrak{P}}$ is semistable), and consider a symmetric theta structure on $L^{\otimes 8}$. The associated Mumford coordinates then reduce modulo ${\mathfrak{P}}$, and their vanishing is governed by whether the $8$-torsion points belong to $\Theta_{A_{\mathfrak{P}},L_{\mathfrak{P}}}$, by \eqref{eqliencoordonneesMumfordetdivtheta}. The number of vanishing coordinates is then entirely determined by Propositions \ref{propdivthetaproduitCE} and \ref{propnombrepointsdivthetajacobienne}, which proves Proposition \ref{propalgthetafoncetreduchorsde2} (not forgetting the six ever-implicit odd characteristics).
\end{proof}
\subsection{Evaluating the theta functions at archimedean places}
\label{subsecarchimedeanplaces}
We denote by ${\mathcal H}_2$ the Siegel half-space of degree 2, and by ${\mathcal F}_2$ the usual fundamental domain of this half-space for the action of $\operatorname{Sp}_4({\mathbb{Z}})$ (see \cite{Klingen}, section I.2 for details). For $\tau \in {\mathcal H}_2$, we denote by $y_4$ the imaginary part of the lower-right coefficient of $\tau$.
\begin{prop}
\label{proparchimedeanbound}
For every $\tau \in {\mathcal H}_2$ and a fixed real parameter $t \geq \sqrt{3}/2$, one has:
$(a)$ Amongst the ten even characteristics $m$ in $E$, at most six can satisfy
\[
|\Theta_{m/2} (\tau)| < 0.42 \max_{m' \in E} |\Theta_{m'/2} (\tau)|.
\]
$(b)$ If the representative of the orbit of $\tau$ in the fundamental domain ${\mathcal F}_2$ satisfies $y_4 \leq t$, at most one of the ten even characteristics $m$ of $E$ can satisfy
\[
|\Theta_{m/2} (\tau)| < 1.22 e^{- \pi t} \max_{m' \in E} |\Theta_{m'/2} (\tau)|.
\]
\end{prop}
\begin{proof}
First, we can assume that $\tau \in {\mathcal F}_2$ as the inequalities $(a)$ and $(b)$ are invariant by the action of $\operatorname{Sp}_4({\mathbb{Z}})$, given the complete transformation formula of these theta functions (\cite{MumfordTata}, section II.5). Now, using the Fourier expansions of the ten theta constants (mentioned in the proof of Definition-Proposition \ref{defipropdivthetaA2ncomplexes}) and isolating their respective dominant terms (such as in \cite{Klingen}, proof of Proposition IV.2), we obtain explicit estimates. More precisely, Proposition 7.7 of \cite{Strengthesis} states that, for every $\tau = \begin{pmatrix} \tau_1 & \tau_2 \\ \tau_2 & \tau_4 \end{pmatrix} \in {\mathcal B}_2$ (which is a domain containing ${\mathcal F}_2$), one has
\begin{eqnarray*}
\left| \Theta_{m/2}(\tau) - 1 \right| & < & 0.405, \quad {\scriptstyle m \in \{(0000),(0001),(0010),(0011) \}}. \\
\left| \frac{ \Theta_{m/2}(\tau)}{2 e^{ i \pi \tau_1/2}} - 1 \right| & < & 0.348, \quad {\scriptstyle m \in \{(0100),(0110) \}}. \\
\left| \frac{ \Theta_{m/2}(\tau)}{2 e^{ i \pi \tau_4/2}} - 1 \right| & < & 0.348, \quad {\scriptstyle m \in \{(1000),(1001) \}}. \\
\left| \frac{ \Theta_{m/2}(\tau)}{(\varepsilon_m + e^{2 i \pi \tau_2})e^{ i \pi (\tau_1 + \tau_4 - 2 \tau_2)/2}} - 1 \right| & < & 0.438,\quad {\scriptstyle m \in \{(1100),(1111) \}},
\end{eqnarray*}
with $\varepsilon_m = 1$ if $m = (1100)$ and $-1$ if $m=(1111)$.
Under the assumption that $y_4 \leq t$ (which implies the same bound for $\Im \tau_1$ and $2 \Im \tau_2$), we obtain
\[
\begin{array}{rcccl}
0.595 & < & \left| \Theta_{m/2}(\tau) \right| & < & 1.405, \quad {\scriptstyle m \in \{(0000),(0001),(0010),(0011) \}}. \\
1.304 e^{ - \pi t/2} & < & \left| \Theta_{m/2}(\tau) \right| & < & 0.692, \quad {\scriptstyle m \in \{(0100),(0110),(1000),(1001)\}}. \\
1.05 e^{ - \pi t} & < & \left| \Theta_{m/2}(\tau) \right| & < & 0.855, \quad {\scriptstyle m = (1100)}. \\
& & \left| \Theta_{m/2}(\tau) \right| & < & 0.855, \quad {\scriptstyle m = (1111)}.
\end{array}
\]
Thus, we get $(a)$ with $0.595/1.405 > 0.42$, and $(b)$ with $1.05 e^{- \pi t}/ 0.855> 1.22 e^{- \pi t}$.
\end{proof}
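As a quick numerical illustration (this is not part of the proof), the following Python sketch evaluates the ten even theta constants by a truncated Fourier sum at one sample point and counts how many of them exceed $0.42$ times the maximum, as in part $(a)$. The normalisation below is the standard one for theta constants (which may differ from ours by a reparametrisation), and the sample point and truncation order are ad-hoc choices.
\begin{verbatim}
import numpy as np
from itertools import product

# The ten even characteristics m = (m1,m2,m3,m4): those with m1*m3+m2*m4 even.
EVEN = [(0,0,0,0),(0,0,0,1),(0,0,1,0),(0,0,1,1),(0,1,0,0),
        (0,1,1,0),(1,0,0,0),(1,0,0,1),(1,1,0,0),(1,1,1,1)]

def theta(m, tau, N=8):
    # Theta constant with characteristic m/2 at tau (truncated Fourier sum).
    a, b = np.array(m[:2]) / 2.0, np.array(m[2:]) / 2.0
    s = 0j
    for n1, n2 in product(range(-N, N + 1), repeat=2):
        n = np.array([n1, n2]) + a
        s += np.exp(1j * np.pi * (n @ tau @ n) + 2j * np.pi * (n @ b))
    return s

tau = np.array([[1.0j, 0.3j], [0.3j, 1.2j]])  # a reduced-looking sample point
vals = np.array([abs(theta(m, tau)) for m in EVEN])
print((vals >= 0.42 * vals.max()).sum())      # at least 4 coordinates stay large
\end{verbatim}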
\subsection{Computations with Igusa invariants for the case of places above 2}
\label{subsecplacesabove2}
In this case, as emphasized before, it is not possible to use Proposition \ref{propalgthetafoncetreduchorsde2}, as the algebraic theory of theta functions does not apply there.
We substitute for it in the following way.
\begin{defi}[Auxiliary polynomials]
\hspace*{\fill}
For every $i \in \{1, \cdots, 10\}$, let $\Sigma_i$ be the $i$-th symmetric polynomial in the ten modular forms $\Theta_{m/2}^8$, $m \in E$ (notation \eqref{eqevenchartheta}).
This is a modular form of weight $4i$ for the whole modular group $\operatorname{Sp}_4({\mathbb{Z}})$.
\end{defi}
Indeed, each $\Theta_{m/2}^8$ is a modular form of weight 4 for the congruence subgroup $\Gamma_2(2)$, and they are permuted by the modular action of $\Gamma_2(1)$ (\cite{MumfordTata}, section II.5). The important point is that the $\Sigma_i$ are then polynomials in the four Igusa modular forms $\psi_4,\psi_6,\chi_{10}$ and $\chi_{12}$ (\cite{Igusa67bis}, p.848 and 849). We can now explain the principle of this paragraph: these four modular forms are linked explicitly with the Igusa invariants (for a given jacobian of a hyperelliptic curve $C$ over a number field $K$), and the semi-stable reduction of the jacobian at some place $v|2$ is determined by the integrality (or not) of some quotients of these invariants, hence of rational fractions of the modular forms. Now, with the explicit expressions of the $\Sigma_i$ in terms of $\psi_4,\psi_6,\chi_{10}$ and $\chi_{12}$, we can bound these $\Sigma_i$ by one of the Igusa invariants, and as every $\Theta_{m/2}^8$ is a root of the polynomial
\[
P(X) = X^{10} - \Sigma_1 X^9 + \Sigma_2 X^8 - \Sigma_3 X^7 + \Sigma_4 X^6 - \Sigma_5 X^5 + \Sigma_6 X^4 - \Sigma_7 X^3 + \Sigma_8 X^2 - \Sigma_9 X + \Sigma_{10},
\]
we can infer an explicit upper bound on the $\Theta_{m/2}^8/\lambda$, with a well-chosen normalising factor $\lambda$ such that these quotients belong to $K$. Actually, we will even give an approximate shape of the Newton polygon of the polynomial $\lambda^{10} P(X/\lambda)$, implying that its slopes (except maybe the first one) are bounded above and below, thus giving us a lower bound on each of the $|\Theta_{m/2}|_v/\max_{m' \in E} |\Theta_{m'/2}|_v$, except maybe for one $m$. The explicit result is the following.
\begin{prop}
\label{propbornesfoncthetaaudessus2}
Let $K$ be a number field, $(A,L)$ a principally polarised jacobian of dimension 2 over $K$ and $\tau \in {\mathcal H}_2$ such that $(A_\tau,L_\tau) \cong (A,L)$.
Let ${\mathfrak{P}}$ be a prime ideal of ${\mathcal O}_K$ above $2$ such that $A$ has potentially good reduction at ${\mathfrak{P}}$; the reduction (a principally polarised abelian surface) is denoted by $(A_{\mathfrak{P}},L_{\mathfrak{P}})$. By abuse of notation, we forget the normalising factor ensuring that the coordinates $\Theta_{m/2} (\tau)^8$ belong to $K$.
$(a)$ If $(A_{\mathfrak{P}},L_{\mathfrak{P}})$ is the jacobian of a smooth hyperelliptic curve, all the $m \in E$ satisfy
\[
\frac{\left| \Theta_{m/2} (\tau) ^8\right|_{\mathfrak{P}}}{\max_{m' \in E} \left| \Theta_{m'/2} (\tau)^8 \right|_{\mathfrak{P}} } \geq |2|_{\mathfrak{P}}^{12}.
\]
$(b)$ If $(A_{\mathfrak{P}},L_{\mathfrak{P}})$ is a product of elliptic curves, all the $m \in E$ except at most one satisfy
\[
\frac{\left| \Theta_{m/2} (\tau)^8 \right|_{\mathfrak{P}}}{\max_{m' \in E} \left| \Theta_{m'/2} (\tau)^8\right|_{\mathfrak{P}} } \geq |2|_{\mathfrak{P}}^{21}.
\]
\end{prop}
\begin{proof}
The most technical part is computing the $\Sigma_i$ as polynomials in the four Igusa modular forms. To do this, we worked with Sage in the formal algebra generated by some sums of $\Theta_{m/2}^4$ with explicit relations (namely, $y_0, \cdots, y_4$ in the notation of \cite{Igusa64bis}, pp.~396--397). Setting aside some timeouts probably due to the computer's hibernation mode, the total computation time on a laptop was about twelve hours (including verification of the results). The details of the algorithms and constructions are available in a Sage worksheet \footnote{This worksheet can be downloaded at \url{http://perso.ens-lyon.fr/samuel.le_fourn/contenu/fichiers_publis/Igusainvariants.ipynb}} (in Jupyter format). An approach based on Fourier expansions might be more efficient, but as there is no clear closed formula for the involved modular forms, we favoured computations in this formal algebra. For easier reading, we slightly modified the Igusa modular forms into $h_4,h_6,h_{10},h_{12}$ defined as
\begin{equation}
\label{eqdefmodifiedIgusamodularforms}
\left\{
\begin{array}{rcccl}
h_4 & = & 2 \cdot \psi_4 & = & {\displaystyle \frac{1}{2} \sum_{m \in E} \Theta_{m/2}^8}\\
h_6 & = & 2^2 \cdot \psi_6 & = & {\displaystyle \sum_{\scriptscriptstyle \substack{\{m_1,m_2,m_3\} \subset E \\ \textrm{syzygous}}} \pm (\Theta_{m_1/2} \Theta_{m_2/2} \Theta_{m_3/2})^4} \\
h_{10} & = & 2^{15} \cdot \chi_{10} & = & {\displaystyle 2 \prod_{m \in E} \Theta_{m/2}^2} \\
h_{12} & = & 2^{16} \cdot 3 \cdot \chi_{12} & = & {\displaystyle \frac{1}{2} \sum_{\scriptscriptstyle \substack{C \subset E\\ C \textrm{ Göpel}}} \prod_{m \in E \backslash C} \Theta_{m/2}^4 }
\end{array}
\right.
\end{equation}
(see \cite{Igusa67bis}, p.~848 for details on these definitions, notably syzygous triples and Göpel quadruples). The third expression is not explicitly a polynomial in $y_0, \cdots, y_4$, but such an expression exists, given on p.~397 of \cite{Igusa64bis}. We also made extensive use of Section I.7.1 of \cite{Strengthesis}, both for understanding and for the computations.
Now, the computations in Sage gave us the following formulas (the first and the last being trivial given \eqref{eqdefmodifiedIgusamodularforms}, they were not computed by the algorithm):
\begin{align}
\Sigma_1 & = 2 h_4 \label{eqSigma1} \\
\Sigma_2 & = \frac{3}{2} h_4^2 \label{eqSigma2} \\
\Sigma_3 & = \frac{29}{2 \cdot 3^3} h_4^3 - \frac{1}{2 \cdot 3^3} h_6^2 + \frac{1}{2 \cdot 3} h_{12} \label{eqSigma3}\\
\Sigma_4 & = \frac{43}{2^4 \cdot 3^3} h_4^4 - \frac{1}{2 \cdot 3^3} h_4 h_6^2 + \frac{23}{2 \cdot 3} h_4 h_{12} + \frac{2}{3} h_6 h_{10} \label{eqSigma4} \\
\Sigma_5 & = \frac{1}{2^2 \cdot 3^3} h_4^5 - \frac{1}{2^3 \cdot 3^3} h_4^2 h_6^2 + \frac{25}{2^3 \cdot 3} h_4^2 h_{12} - \frac{1}{2 \cdot 3} h_4 h_6 h_{10} + \frac{123}{2^2} h_{10}^2 \label{eqSigma5}\\
\Sigma_6 & = \frac{1}{2^2 \cdot 3^6} h_4^6 - \frac{1}{2^2 \cdot 3^6} h_4^3 h_6^2 + \frac{7}{2 \cdot 3^3} h_4^3 h_{12} - \frac{1}{2^2 \cdot 3} h_4^2 h_6 h_{10} \label{eqSigma6} \\
& + \frac{47}{2 \cdot 3} h_4 h_{10}^2 + \frac{1}{2^4 \cdot 3^6} h_6^4 - \frac{5}{2^3 \cdot 3^3} h_6^2 h_{12} + \frac{43}{2^4 \cdot 3} h_{12}^2 \nonumber \\
\Sigma_7 & = \frac{1}{2 \cdot 3^4} h_4^4 h_{12} - \frac{1}{2 \cdot 3^4} h_4^3 h_6 h_{10} + \frac{41}{2^3 3^2} h_4^2 h_{10}^2 - \frac{1}{2^2 \cdot 3^4} h_4 h_6^2 h_{12} \label{eqSigma7} \\ & + \frac{11}{2^2 \cdot 3^2} h_4 h_{12}^2 + \frac{1}{2^2 \cdot 3^4} h_6^3 h_{10} - \frac{19}{2^2 \cdot 3^2} h_6 h_{10} h_{12} \nonumber\\
\Sigma_8 & = \frac{1}{2^2 \cdot 3^3} h_4^3 h_{10}^2 + \frac{1}{2^2 \cdot 3^2} h_4^2 h_{12}^2 - \frac{1}{2 \cdot 3^2} h_4 h_6 h_{10} h_{12} + \frac{5}{2^3 \cdot 3^3} h_6^2 h_{10}^2 - \frac{11}{2^3} h_{10}^2 h_{12} \label{eqSigma8} \\
\Sigma_9 & = \frac{-5}{2^2 \cdot 3^2} h_4 h_{10}^2 h_{12} + \frac{7}{2^2 \cdot 3^3} h_6 h_{10}^3 + \frac{1}{3^3} h_{12}^3 \label{eqSigma9} \\
\Sigma_{10} & = \frac{1}{2^4} h_{10}^4. \label{eqSigma10}
\end{align}
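These identities can be spot-checked numerically. The hedged sketch below evaluates standard genus-2 theta constants by a truncated Fourier sum and verifies $\Sigma_1 = 2 h_4$ (trivial from \eqref{eqdefmodifiedIgusamodularforms}) and $\Sigma_2 = \frac{3}{2} h_4^2$ at one sample point; since these are identities of modular forms, homogeneous in the theta constants, they are insensitive to the precise normalisation convention for $\Theta_{m/2}$.
\begin{verbatim}
import numpy as np
from itertools import product

EVEN = [(0,0,0,0),(0,0,0,1),(0,0,1,0),(0,0,1,1),(0,1,0,0),
        (0,1,1,0),(1,0,0,0),(1,0,0,1),(1,1,0,0),(1,1,1,1)]

def theta(m, tau, N=8):
    # Standard theta constant with characteristic m/2, truncated Fourier sum.
    a, b = np.array(m[:2]) / 2.0, np.array(m[2:]) / 2.0
    s = 0j
    for n1, n2 in product(range(-N, N + 1), repeat=2):
        n = np.array([n1, n2]) + a
        s += np.exp(1j * np.pi * (n @ tau @ n) + 2j * np.pi * (n @ b))
    return s

tau = np.array([[1.0j, 0.3j], [0.3j, 1.2j]])
t8 = np.array([theta(m, tau) ** 8 for m in EVEN])
h4 = t8.sum() / 2                               # h_4 = (1/2) sum Theta^8
sigma1 = t8.sum()                               # e_1 of the Theta^8
sigma2 = (t8.sum() ** 2 - (t8 ** 2).sum()) / 2  # e_2 via Newton's identity
print(abs(sigma1 - 2 * h4), abs(sigma2 - 1.5 * h4 ** 2))  # both ~ 0
\end{verbatim}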
\begin{rem}
The denominators are always products of powers of 2 and 3. This was predicted by \cite{Ichikawa09}, as all Fourier expansions of $\Theta_{m/2}$ (therefore of the $\Sigma_i$) have integral coefficients. Surprisingly, the result of \cite{Ichikawa09} would actually be false for a ${\mathbb{Z}}[1/3]$-algebra instead of a ${\mathbb{Z}}[1/6]$-algebra, as the expression of $\Sigma_3$ (converted as a polynomial in $\psi_4,\psi_6, \chi_{12}$) shows, but this does not provide a counterexample for a ${\mathbb{Z}}[1/2]$-algebra.
\end{rem}
Now, let $C$ be a hyperelliptic curve of genus 2 over a number field $K$ and ${\mathfrak{P}}$ a prime ideal of ${\mathcal O}_K$ above 2. We will denote by $|\cdot|$ the norm associated to ${\mathfrak{P}}$ to lighten the notation. Let $A$ be the jacobian of $C$ and $J_2,J_4,J_6,J_8,J_{10}$ the homogeneous Igusa invariants of the curve $C$, defined as in (\cite{Igusa60}, pp. 621-622) up to a choice of hyperelliptic equation for $C$. We fix $\tau \in {\mathcal H}_2$ such that $A_\tau$ is isomorphic to $A$, which will be implicit in the following (i.e. $h_4$ denotes $h_4(\tau)$ for example). By (\cite{Igusa67bis}, p.848) applied with our normalisation, there is a hyperelliptic equation for $C$ (which we fix) such that
\begin{align}
J_2 & = \frac{1}{2} \frac{h_{12}}{h_{10}} \\
J_4 & = \frac{1}{2^5 \cdot 3} \left( \frac{h_{12}^2}{h_{10}^2} - 2 h_4 \right) \\
J_6 & = \frac{1}{2^7 \cdot 3^3} \left( \frac{h_{12}^3}{h_{10}^3} - 6 \frac{h_4 h_{12}}{h_{10}} + 4 h_6 \right) \\
J_8 & = \frac{1}{2^{12} \cdot 3^3} \left( \frac{h_{12}^4}{h_{10}^4} - 12 \frac{h_4 h_{12}^2}{h_{10}^2} + 16 \frac{h_6 h_{12}}{h_{10}} - 12 h_4^2 \right) \\
J_{10} & = \frac{1}{2^{13}} h_{10}.
\end{align}
Let us now figure out the Newton polygons allowing us to bound our theta constants.
$(a)$ If $A$ has potentially good reduction at ${\mathfrak{P}}$, and this reduction is also a jacobian, then by Proposition 3 of \cite{Igusa60}, the quotients $J_2^5/J_{10}, J_4^5 /J_{10}^2, J_6^5 / J_{10}^3$ and $J_{8}^5 / J_{10}^4$ are all integral at ${\mathfrak{P}}$. Translating this into quotients of modular forms, this gives
\begin{eqnarray*}
\left| \frac{J_2^5}{J_{10}} \right| & = & |2|^8 \left| \frac{h_{12}^5}{h_{10}^6} \right| \leq 1 \\
\left|\frac{J_4^5}{J_{10}^2} \right| & = & |2|^3 \left| \frac{h_{12}^2}{h_{10}^{12/5}} - 2 \frac{h_4}{h_{10}^{2/5}}\right|^{5} \leq 1 \\
\left| \frac{J_6^5}{J_{10}^3} \right| & = & |2|^4 \left| \frac{h_{12}^3} {h_{10}^{18/5}} - 6 \frac{h_4 h_{12}}{h_{10}^{8/5}} + 4 \frac{h_6}{h_{10}^{3/5}} \right|^5 \leq 1 \\
\left| \frac{J_8^5}{J_{10}^4} \right| & = & |2|^{-8} \left| \frac{h_{12}^4}{h_{10}^{24/5}} - 12 \frac{h_4 h_{12}^2}{h_{10}^{14/5}} + 16 \frac{h_6 h_{12}}{h_{10}^{9/5}} - 12 \frac{h_4^2}{h_{10}^{4/5}} \right|^5 \leq 1.
\end{eqnarray*}
By successive bounds on the first three lines, we obtain
\[
\left| \frac{h_4}{h_{10}^{2/5}} \right| \leq |2|^{-21/5}, \quad \left| \frac{h_6}{h_{10}^{3/5}} \right| \leq |2|^{-34/5}, \quad
\left| \frac{h_{12}}{h_{10}^{6/5}} \right| \leq |2|^{-8/5}.
\]
Using the expressions of the $\Sigma_i$ (\eqref{eqSigma1} to \eqref{eqSigma10}), we compute that for every $i \in \{1, \cdots, 10\}$, one has $\left| \Sigma_i / h_{10}^{2 i /5} \right| \leq |2|^{\lambda_i}$ with the following values of $\lambda_i$:
\[
\begin{array}{c|cccccccccc}
\hline
i & 10 & 9 & 8 & 7 & 6 & 5 & 4 & 3 & 2 & 1 \\
\lambda_i & - \frac{20}{5} & - \frac{44}{5}& - \frac{83}{5}& - \frac{112}{5}& - \frac{156}{5}& - \frac{125}{5}& - \frac{104}{5}& - \frac{73}{5}& - \frac{47}{5}& - \frac{16}{5} \\
\hline
\end{array}
\]
and for $i=10$, it is an equality. Therefore, the highest slope of the Newton polygon is at most $26/5 \cdot v_{\mathfrak{P}}(2)$, whereas the lowest one is at least $-34/5 \cdot v_{\mathfrak{P}}(2)$, which gives part $(a)$ of Proposition \ref{propbornesfoncthetaaudessus2} by the theory of Newton polygons.
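The Newton-polygon bookkeeping behind this step can be made concrete with a short sketch: feeding in the exponents $\lambda_i$ of the table (with $\Sigma_0 = 1$ and an equality at $i=10$), it computes the extreme admissible slopes, in units of $v_{\mathfrak{P}}(2)$ and up to the sign convention chosen for Newton polygons; their spread recovers the exponent $12$ of part $(a)$.
\begin{verbatim}
# lam[i]: |Sigma_i / h10^(2i/5)| <= |2|^lam[i]; Sigma_0 = 1, equality at i = 10.
lam = {1: -16/5, 2: -47/5, 3: -73/5, 4: -104/5, 5: -125/5,
       6: -156/5, 7: -112/5, 8: -83/5, 9: -44/5, 10: -20/5}

# Steepest first slope out of (0, 0), steepest last slope into (10, lam[10]).
first = min(lam[i] / i for i in lam)                               # -26/5
last = max((lam[10] - lam[i]) / (10 - i) for i in lam if i != 10)  # 34/5
print(first, last, last - first)  # spread 12 = exponent of |2|^12 in part (a)
\end{verbatim}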
$(b)$ If $A$ has potentially good reduction at ${\mathfrak{P}}$ and the semistable reduction is a product of elliptic curves, defining
\begin{eqnarray}
I_4 & = & J_2^2 - 24 J_4 = \frac{h_4}{2} \label{eqdefI4} \\
I_{12} & = & - 8 J_4^3 + 9 J_2 J_4J_6 - 27 J_6^2 - J_2^2 J_8 = \frac{1}{2^{10} \cdot 3^3} (2 h_4^3 - h_6^2), \label{eqdefI12} \\
P_{48} & = & 2^{12} \cdot 3^3 h_{10}^4 J_8 = h_{12}^4 - 12 h_4 h_{12}^2 h_{10}^2 + 16 h_6 h_{12} h_{10}^3 - 12 h_4^2 h_{10}^4 \label{eqdefP48}
\end{eqnarray}
(which as modular forms are of respective weights $4,12$ and $48$), by Theorem 1 (parts $(V_*)$ and $(V)$) of \cite{Liu93}, we obtain in the same fashion that
\begin{equation}
\label{eqbornesenfoncP481}
\left| \frac{h_4}{P_{48}^{1/12}} \right| \leq |2|^{-13/3}, \left| \frac{h_6}{P_{48}^{1/8}}\right| \leq |2|^{-3}, \quad \left| \frac{h_{10}}{P_{48}^{5/24}}\right| \leq |2|^{-4/3}.
\end{equation}
Using the Newton polygon for the polynomial of \eqref{eqdefP48} defining $P_{48}$, one deduces quickly that
\begin{equation}
\label{eqbornesenfoncP482}
\left| \frac{h_{12}}{P_{48}^{1/4}} \right| \leq |2|^{-7/2}.
\end{equation}
As before, with the explicit expression of the $\Sigma_i$, one obtains that the $|\Sigma_i / P_{48}^{i/12}|$ are bounded by $|2|^{\lambda_i}$ with the following values of $\lambda$ :
\begin{equation}
\label{eqvalSigmairedproduitCE1}
\begin{array}{c|cccccccccc}
\hline
i & 10 & 9 & 8 & 7 & 6 & 5 & 4 & 3 & 2 & 1 \\
\lambda_i & -\frac{28}{3} & -\frac{71}{6} & -\frac{53}{3} & -\frac{55}{3} & -\frac{84}{3} & -\frac{71}{3} & -\frac{64}{3} & -14 & -\frac{29}{3} & -\frac{10}{3} \\
\hline
\end{array}
\end{equation}
This implies directly that the highest slope of the Newton polygon is at most $16/3 \cdot v_{\mathfrak{P}}(2)$. Now, for the lowest slope, there is no immediate bound, and this was expected: in this situation, $\Sigma_{10} = 2^{-4} h_{10}^4$ can be relatively very small compared to $P_{48}^{5/6}$.
As $P_{48}$ is in the ideal generated by $h_{10},h_{12}$ (in other words, is cuspidal) and dominates all the modular forms $h_4,h_6,h_{10},h_{12}$, one of $h_{10}$ and $h_{12}$ has to be relatively large compared to $P_{48}$. In practice, we get (with \eqref{eqbornesenfoncP481}, \eqref{eqbornesenfoncP482} and \eqref{eqdefP48})
\[
\left| \frac{h_{12}}{P_{48}^{1/4}} \right| \geq 1 \quad \textrm{or} \quad \left| \frac{h_{10}}{P_{48}^{5/24}} \right| \geq |2|^{13/6}.
\]
Now, if $h_{10}$ is relatively very small (for example, $\left|h_{10}/P_{48}^{5/24} \right| \leq |2|^{19/6} \left|h_{12}/P_{48}^{1/4} \right|$), we immediately get $\left| h_{12}/P_{48}^{1/4} \right| = 1$ and $\left| \Sigma_9 / P_{48}^{3/4} \right| = 1$. Computing again with these estimates for $h_{10}$ and $h_{12}$, we obtain that the $\left|\Sigma_i / P_{48}^{i/12} \right|$ are bounded by $|2|^{\lambda_i}$ with the following slightly improved values of $\lambda$,
\[
\begin{array}{c|ccccccccc}
\hline
i & 9 & 8 & 7 & 6 & 5 & 4 & 3 & 2 & 1 \\
\lambda_i & 0 & -\frac{32}{3} & -\frac{51}{3} & -\frac{84}{3} & -\frac{71}{3} & -\frac{64}{3} & -14 & -\frac{29}{3} & -\frac{10}{3} \\
\hline
\end{array}
\]
The value at $i=9$ is exact, hence the second-lowest slope is at least $-\frac{32}{3} \cdot v_{\mathfrak{P}}(2)$.
If $h_{10}$ is not that small, we have a bound on $v_{\mathfrak{P}}(\Sigma_{10}/P_{48}^{5/6})$, hence the Newton polygon itself is bounded (and looks like in the first situation). In practice, one finds that the lowest slope is at least $-47/3 \cdot v_{\mathfrak{P}}(2)$, hence all other slopes are at least this value, and this concludes the proof of Proposition \ref{propbornesfoncthetaaudessus2} $(b)$.
\end{proof}
\begin{rem}
In residue characteristic $\neq 2,3$, Theorem 1 of \cite{Liu93} and its precise computations pp.~4 and 5 give the following exact shapes of Newton polygons (notice the different normalisation factors).
\begin{figure}[H]
\centering
\begin{tikzpicture}[scale=0.5]
\draw[->, thin] (-1,0) -- (12,0);
\draw (-1/2,3) node {$v_{\mathfrak{P}}$};
\draw (12,-1/2) node {$\Sigma_{10-i}/h_{10}^{2(10-i)/5}$};
\draw[->, thin] (0,-1) -- (0,4);
\draw (0,0) node {$\bullet$};
\draw (0,0) node[above right] {$(0,0)$};
\draw (10,0) node {$\bullet$};
\draw (10,0) node[above right] {$(10,0)$};
\draw[thick] (0,0) -- (10,0);
\end{tikzpicture}
\caption{When the reduction of $A$ is a jacobian}
\end{figure}
\begin{figure}[H]
\centering
\begin{tikzpicture}[scale=0.5]
\draw[->, thin] (-1,0) -- (11,0);
\draw (11,-1/2) node {$\Sigma_{10-i}/h_{12}^{(10-i)/3}$};
\draw[->, thin] (0,-1) -- (0,5);
\draw (-1/2,4) node {$v_{\mathfrak{P}}$};
\draw (0,3) node {$\bullet$};
\draw (1,0) node {$\bullet$};
\draw (10,0) node {$\bullet$};
\draw[thick] (0,3) -- (1,0);
\draw[thick] (1,0) -- (10,0);
\end{tikzpicture}
\caption{When the reduction of $A$ is a product of elliptic curves}
\end{figure}
In particular, when $A$ reduces to a jacobian, the theta coordinates all have the same ${\mathfrak{P}}$-adic norm, and when $A$ reduces to a product of elliptic curves, exactly one of them has smaller norm: in other words, we reproved Proposition \ref{propalgthetafoncetreduchorsde2}, and the Newton polygons have a very characteristic shape.
The idea behind the computations above is that in cases $(a)$ and $(b)$ (with other normalisation factors), the Newton polygons have a shape close to these, so estimates can be made. It would be interesting to determine the exact shape of the Newton polygons, to perhaps obtain sharper results.
\end{rem}
\subsection{Wrapping up the estimates and end of the proof}
\label{subsecfinalresultRungeCEexplicite}
We can now prove the explicit refined version of Theorem \ref{thmtubularRungeproduitCE}, namely Theorem \ref{thmproduitCEexplicite}.
\begin{proof}[Proof of Theorem \ref{thmproduitCEexplicite}]
In case $(a)$, one can avoid the tubular assumption for the (unique) archimedean place of $K$: indeed, amongst the ten theta coordinates, there remain four which are large enough with no further assumption. As $|s_P|<4$, there remains one theta coordinate which is never too small (at any place). In practice, normalising the projective point $\psi(P)$ by this coordinate, one obtains with Propositions \ref{proparchimedeanbound} $(a)$ (archimedean place), \ref{propalgthetafoncetreduchorsde2} (finite places not above 2) and \ref{propbornesfoncthetaaudessus2} (finite places above 2)
\[
h(\psi(P)) \leq - 4 \log(0.42) + \frac{1}{[K:{\mathbb{Q}}]} \sum_{v|2} \frac{21}{2}\, n_v \log 2 \leq 10.75
\]
after approximation.
In case $(b)$, one has to use the tubular neighbourhood implicitly given by the parameter $t$, namely Proposition \ref{proparchimedeanbound} $(b)$ for archimedean places, again with Propositions \ref{propalgthetafoncetreduchorsde2} and \ref{propbornesfoncthetaaudessus2} for the finite places, hence we get
\[
h(\psi(P)) \leq 4 \log(e^{\pi t}/1.33) + \frac{1}{[K:{\mathbb{Q}}]} \sum_{v|2} \frac{21}{2}\, n_v \log 2 \leq 4 \pi t + 6.14
\]
after approximation.
Finally, we deduce from there the bounds on the stable Faltings height by Corollary 2.2 of \cite{Pazuki12b} (in its notation, $h_\Theta(A,L) = h(\psi(P))/4$).
\end{proof}
It would be interesting to give an analogous result for Theorem \ref{thmtubularRungegeneral}, and the estimates for archimedean and finite places not above 2 should not pose any particular problem. For finite places above 2, the method outlined above can only be applied if, taking the symmetric polynomials $\Sigma_1, \cdots, \Sigma_{f(n)}$ in well-chosen powers $\Theta_{\widetilde{a}/n,\widetilde{b}/n} (\tau)$ for $\widetilde{a},\widetilde{b} \in {\mathbb{Z}}^g$, we can figure out by other arguments the largest rank $k_0$ for which $\Sigma_{k_0}$ is cuspidal but not in the ideal generated by $h_{10}$. Doing so, we could roughly recover the pictured shape of the Newton polygon when $h_{10}$ is relatively very small (because then $\Sigma_k$ is relatively very small for $k>k_0$ by construction). Notice that for this process, one needs some way to theoretically bound the denominators appearing in the expressions of the $\Sigma_i$ in $h_4,h_6,h_{10},h_{12}$, but if this works, the method can again be applied.
\bibliographystyle{alphaSLF}
\section*{Appendix}
This appendix accompanies the main text. We first provide more background on finite set statistics. Further, we add the details of our derivations for Deep Set Network that were omitted due to space constraints. To do this, here we augment Sections $3$ and $4$ of the main text.
Finally, we provide more discussions and results on all object counting, multi-label classification and pedestrian detection applications.
\section{Background on Finite Set Statistics}
\label{sec:background0}
Finite Set Statistics provide powerful and practical mathematical tools for dealing with
random finite sets, based on the notion of integration and density that is consistent with the point process theory~\cite{mahler2007statistical}. In this section, we review some basic mathematical background about this subject of statistics.
In the conventional statistics theory, a continuous random variable $y$ is a variable that can take an infinite number of possible values. A continuous random vector can be defined by stacking several continuous random variables into a fixed length vector, $Y=\left(y_1,\cdots,y_m\right)$. The mathematical function describing the possible values of a continuous random vector, and their associated joint probabilities, is known as a probability density function (PDF) $p(Y)$ such that
$\int p(Y)dY = 1.$
A random finite set (RFS) $\mathcal{Y}$ is a finite-set valued random variable $\mathcal{Y}=\left\{y_1,\cdots,y_m\right\}\subset \mathbb{Y}$. The main difference between an RFS and a random vector is that for the former, the number of constituent variables, $m$, is random and the variables themselves are random and unordered, while the latter is of a fixed size with a predefined order.
A statistical function describing a finite-set variable $\mathcal{Y}$ is a
combinatorial probability density function $p(\mathcal{Y})$, which consists of a discrete probability distribution, the so-called cardinality distribution, and a family of joint probability densities on the values of the constituent variables for each cardinality.
Similar to the definition of a PDF for a random variable, the PDF of an RFS must sum to unity over all possible cardinality values and all possible element values and their permutations, \ie
\begin{equation}\label{eq: RFS pdf0}
\int p(\mathcal{Y})\mu(d\mathcal{Y}) \triangleq \sum_{m=0}^{\infty}\frac{1}{m!U^m}\int p(\{y_1,\cdots,y_m\}_{||}) dy_1\cdots dy_m = 1,
\end{equation}
where $\mu$ is the dominating
measure and $U$ is the unit of hypervolume
in $\mathbb{Y}$~\cite{vo2016model}.
The PDF of an $m$-dimensional random vector can be defined in terms of an RFS as:
\begin{equation}\label{eq: pdf rfs vs vect0}
\begin{aligned}
p(y_1,\cdots,y_m) \triangleq \frac{1}{m!} p(\{y_1,\cdots,y_m\}_{||}).
\end{aligned}
\end{equation}
The denominator $m!=\prod_{k=1}^m k$ appears because the probability density for a set $\{y_1,\cdots,y_m\}_{||}$ must be equally distributed among all the $m!$ possible permutations of the vector~\cite{mahler2007statistical}.
The cardinality distribution $p(m)$ over the number of
elements in the random finite set $\mathcal{Y}$ is obtained by
\begin{equation}\label{eq: Cardinality distribution}
p(m) = \int_{|\mathcal{Y}|=m} p(\mathcal{Y})\mu(d\mathcal{Y}) \triangleq \frac{1}{m!U^m} \int p(\{y_1,\cdots,y_m\}_{||}) dy_1\cdots dy_m.
\end{equation}
Similar to conventional statistics for random variables, an expectation can be defined for an RFS, using the set integral introduced above.
The first statistical moment, or the expected value, of an RFS is known as intensity density or probability hypothesis density (PHD) and is calculated by definition as
\begin{equation}\label{eq: intensity density}
v(y) \triangleq \int\delta_{\mathcal{Y}}(y) p(\mathcal{Y})\mu(d\mathcal{Y}),
\end{equation}
where $\delta_{\mathcal{Y}}(y) = \sum_{x\in \mathcal{Y}}\delta_x(y)$ and $\delta_x(y)$ denotes the Dirac delta function
concentrated at $x$. The PHD function $v(y)$ is interpreted as the instantaneous expected number of the variables that exist
at that point $y$. Moreover, the integral of the PHD over a region gives the expected number of elements in that region and the peaks of the PHD
indicate highest local concentrations of the expected number of elements.
Given an RFS distribution $p(\mathcal{Y})$, samples can be drawn from this distribution as shown in Algorithm~\ref{table:RFS}.
\input{sampling_rfs}
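For concreteness, here is a minimal Python sketch of this sampling procedure in the \textit{i.i.d.}-cluster special case used later in the paper: draw the cardinality $m$ from the cardinality distribution, then draw $m$ unordered elements independently from the single-element density. The Poisson cardinality and the Gaussian element density below are illustrative placeholders.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_rfs(lam=4.0, dim=2):
    m = rng.poisson(lam)   # cardinality m ~ p(m), here Poisson as a placeholder
    # m i.i.d. elements; a set of tuples stresses that the sample is unordered
    return {tuple(rng.normal(size=dim).round(6)) for _ in range(m)}

print(sample_rfs())
\end{verbatim}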
\section{Deep Set Network}
\label{sec:deep-set-net0}
Let us begin by defining a training set $\mathcal{D} = \{\mathcal{Y}_{i},\mathbf{x}_{i}\}$,
where each training sample $i=1,\ldots,n$ is a pair consisting of an input feature $\mathbf{x}_{i}\in\mathbb{R}^{l}$ and an output (or label) set
$\mathcal{Y}_{i} = \{y_{1},y_{2},\ldots,y_{m_i}\}, y_{k}\in\mathbb{R}^{d}, m_i\in\mathbb{N}^0 $. In the following we will drop the instance index $i$ for better readability. Note that $m:=|\mathcal{Y}|$ denotes the cardinality of set $\mathcal{Y}$.
The probability of a set $\mathcal{Y}$ with an unknown cardinality is defined as:
\begin{equation}
\label{eq:set_density0}
\begin{aligned}
p(\mathcal{Y}|\mathbf{x},\boldsymbol\theta,\mathbf{w}) = & p(m|\mathbf{x},\mathbf{w})\times U^m\times p(\{y_{1},y_{2},\cdots,y_{m}\}_{||}|\mathbf{x},\boldsymbol\theta)\\
= & p(m|\mathbf{x},\mathbf{w})\times m!\times U^m\times p(y_{1},y_{2},\cdots,y_{m}|\mathbf{x},\boldsymbol\theta),
\end{aligned}
\end{equation}
where $p(m|\cdot,\cdot)$ and $ p(y_{1},y_{2},\cdots,y_{m}|\cdot,\cdot)$ are respectively a cardinality distribution and a
symmetric joint probability density distribution of the elements. $U$ is the unit of hypervolume
in $\mathbb{R}^{d}$, which makes the joint distribution unitless~\cite{vo2016model}. $\boldsymbol\theta$ denotes the parameters that estimate the joint distribution of set element values for a fixed cardinality,
while $\mathbf{w}$ represents the collection of parameters which estimate the cardinality distribution of the set elements.
The above formulation represents the probability density of a set which is very general and completely independent from the choices of both cardinality and spatial distributions. It is thus straightforward to transfer it to many applications that require the output to be a set. However, to make the problem amenable to mathematical derivation and implementation, we adopt two assumptions: \emph{i)} the outputs (or labels) in the set are independent
and identically distributed (\textit{i.i.d.}\xspace) and \emph{ii)} their cardinality follows
a Poisson distribution with parameter $\lambda$. Thus, we can write the distribution as
\begin{equation}
\begin{aligned}
p(\mathcal{Y}|\mathbf{x},\boldsymbol\theta,\mathbf{w}) = \int p(m|\lambda)p(\lambda|\mathbf{x},\mathbf{w}) d\lambda\times
m!\times U^m\times\left(\prod_{k=1}^{m}p(y_{k}|\mathbf{x},\boldsymbol\theta)\right).
\end{aligned}
\end{equation}
\subsection{Posterior distribution}
\label{sec:posterior}
To learn the parameters $\boldsymbol\theta$ and $\mathbf{w}$, it is valid to assume that the training samples are independent from each other and the distribution over the input data $p(\mathbf{x})$ is independent from the output and the parameters.
Therefore, the posterior distribution over the parameters can be derived as
\begin{equation}
\begin{aligned}
p(\boldsymbol\theta,\mathbf{w}|\mathcal{D}) &= \frac{1}{Z} p(\mathcal{D}|\boldsymbol\theta,\mathbf{w})p(\boldsymbol\theta)p(\mathbf{w})\\
&= \frac{1}{Z} p(\{\mathcal{Y}_{i},\mathbf{x}_{i}\}_{\forall i}|\boldsymbol\theta,\mathbf{w})p(\boldsymbol\theta)p(\mathbf{w})\\
&= \frac{1}{Z}\prod_{i=1}^{n} \bigg[p(\mathcal{Y}_{i}|\mathbf{x}_{i},\boldsymbol\theta,\mathbf{w}) p(\mathbf{x}_{i})\bigg]p(\boldsymbol\theta)p(\mathbf{w})\\
&= \frac{1}{Z}\prod_{i=1}^{n}\left[\int p(m_{i}|\lambda)p(\lambda|\mathbf{x}_{i},\mathbf{w})d\lambda\times
m_{i}!\times U^{m_i}\times\left(\prod_{k=1}^{m_{i}}p(y_{k}|\mathbf{x}_{i},\boldsymbol\theta)\right) p(\mathbf{x}_{i})\right]p(\boldsymbol\theta)p(\mathbf{w}),
\end{aligned}
\label{eq:posterior}
\end{equation}
where $Z$ is a normaliser defined as
\begin{equation}
Z = \int \int\prod_{i=1}^{n} \left[\int p(m_{i}|\lambda)p(\lambda|\mathbf{x}_{i},\mathbf{w})d\lambda\times
m_{i}!\times U^{m_i}\times\left(\prod_{k=1}^{m_{i}}p(y_{k}|\mathbf{x}_{i},\boldsymbol\theta)\right) p(\mathbf{x}_{i})\right]p(\boldsymbol\theta)p(\mathbf{w})\quad d\theta d\mathbf{w}.
\end{equation}
The probability
$p(\mathbf{x}_{i})$ can be eliminated as it appears in both the numerator and the denominator. Therefore,
\begin{equation}
p(\boldsymbol\theta,\mathbf{w}|\mathcal{D}) = \frac{1}{\tilde{Z}}\prod_{i=1}^{n}\left[\int p(m_{i}|\lambda)p(\lambda|\mathbf{x}_{i},\mathbf{w})d\lambda\times
m_{i}!\times U^{m_i}\times\left(\prod_{k=1}^{m_{i}}p(y_{k}|\mathbf{x}_{i},\boldsymbol\theta)\right)\right]p(\boldsymbol\theta)p(\mathbf{w}),
\label{eq:posterior_m}
\end{equation}
where
\begin{equation}
\tilde{Z} = \int \int\prod_{i=1}^{n} \left[\int p(m_{i}|\lambda)p(\lambda|\mathbf{x}_{i},\mathbf{w})d\lambda\times
m_{i}!\times U^{m_i}\times\left(\prod_{k=1}^{m_{i}}p(y_{k}|\mathbf{x}_{i},\boldsymbol\theta)\right)\right]p(\boldsymbol\theta)p(\mathbf{w})\quad d\theta d\mathbf{w}.
\end{equation}
A closed form solution for the integral in Eq.\xspace~\eqref{eq:posterior_m} can be obtained by using conjugate priors:
\begin{eqnarray*}
m & \sim & \mathcal{P}(m;\lambda)\\
\lambda & \sim & \mathcal{G}(\lambda;\alpha(\mathbf{x},\mathbf{w}),\beta(\mathbf{x},\mathbf{w}))\\
&&\alpha(\mathbf{x},\mathbf{w}),\beta(\mathbf{x},\mathbf{w})>0\quad\forall\mathbf{x},\mathbf{w}\\
\boldsymbol\theta & \sim & \mathcal{N}(\boldsymbol\theta;0,\sigma_{1}^{2}\mathbf{I})\\
\mathbf{w} & \sim & \mathcal{N}(\mathbf{w};0,\sigma_{2}^{2}\mathbf{I}),
\end{eqnarray*}
where $\mathcal{P}(\cdot,\lambda)$, $\mathcal{G}(\cdot;\alpha,\beta)$, and $\mathcal{N}(\cdot;0,\sigma^{2}\mathbf{I})$ represent respectively a Poisson distribution with parameters $\lambda$, a Gamma distribution with parameters $(\alpha,\beta)$ and a zero mean normal distribution with covariance equal to $\sigma^{2}\mathbf{I}$.
We assume that the cardinality follows a Poisson distribution whose mean, $\lambda$, follows a Gamma distribution, with parameters which can be estimated from the input data $\mathbf{x}$.
Note that the cardinality distribution in Eq.\xspace~\eqref{eq:set_density0} can be replaced by any other discrete distribution. For example, it is a valid assumption to model the number of objects in natural images by a Poisson distribution~\cite{chan2009bayesian}. Thus, we could directly predict $\lambda$ to model this distribution by formulating the cardinality as $p(m|\mathbf{x},\mathbf{w}) = \mathcal{P}(m;\lambda(\mathbf{x},\mathbf{w}))$. However, this would limit the model's expressive power; because two visually entirely different images with the same number of objects would be mapped to the same $\lambda$. Instead, to allow for uncertainty of the mean, we model it with another distribution, which we choose to be Gamma for mathematical convenience.
Consequently, the integrals in $p(\boldsymbol\theta,\mathbf{w}|\mathcal{D})$ are simplified
and form a negative binomial distribution,
\begin{equation}
\text{NB}\left(m;a,b\right) = \frac{\Gamma(m+a)}{\Gamma(m+1)\Gamma(a)}\cdot(1-b)^{a}b^{m},
\end{equation}
where $\Gamma$ is the Gamma function. Finally, the full posterior distribution can be written as
\begin{equation}
\begin{aligned}
p(\boldsymbol\theta,\mathbf{w}|\mathcal{D}) =\frac{1}{\tilde{Z}}\prod_{i=1}^{n}\bigg[\text{NB}\left(m_{i};\alpha(\mathbf{x}_{i},\mathbf{w}),\frac{1}{1+\beta(\mathbf{x}_{i},\mathbf{w})}\right)\times m_{i}!\times U^{m_{i}}\times\left(\prod_{k=1}^{m_{i}}p(y_{k}|\mathbf{x}_{i},\boldsymbol\theta)\right)\bigg]p(\boldsymbol\theta)p(\mathbf{w}).
\label{eq:full-posterior}
\end{aligned}
\end{equation}
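The Gamma--Poisson marginalisation behind this negative binomial can be checked numerically; the following hedged sketch compares a Monte-Carlo mixture (Poisson counts with Gamma-distributed rates, in the rate parametrisation) against $\text{NB}\left(m;\alpha,\frac{1}{1+\beta}\right)$ as stated above. All parameter values are arbitrary.
\begin{verbatim}
import numpy as np
from scipy.stats import nbinom

rng = np.random.default_rng(1)
a, beta = 3.0, 0.5
# lambda ~ Gamma(shape=a, rate=beta), i.e. scale = 1/beta in numpy.
lam = rng.gamma(shape=a, scale=1.0 / beta, size=200_000)
m = rng.poisson(lam)

for k in range(5):
    # scipy's nbinom(n, p) uses n = a and p = beta/(1+beta) = 1 - b,
    # matching (1-b)^a b^m with b = 1/(1+beta).
    print(k, round((m == k).mean(), 4),
          round(nbinom.pmf(k, a, beta / (1.0 + beta)), 4))
\end{verbatim}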
\subsection{Learning}
\label{sec:learning}
For simplicity, we use a point estimate for the posterior $p(\boldsymbol\theta,\mathbf{w}|\mathcal{D})$,
\ie $p(\boldsymbol\theta,\mathbf{w}|\mathcal{D}) = \delta(\boldsymbol\theta=\boldsymbol\theta^{*},\mathbf{w}=\mathbf{w}^{*}|\mathcal{D})$, where $(\boldsymbol\theta^{*},\mathbf{w}^{*})$ are computed using the following MAP estimator:
\begin{equation}
(\boldsymbol\theta^{*},\mathbf{w}^{*}) = \arg\max_{\boldsymbol\theta,\mathbf{w}}\quad \log\left(p\left(\boldsymbol\theta,\mathbf{w}|\mathcal{D}\right)\right).
\label{eq:map}
\end{equation}
Since the solution to the above problem is independent from the normalisation constant $\tilde{Z}$, we have
\begin{equation}
\begin{aligned}
(\boldsymbol\theta^{*},\mathbf{w}^{*})= \arg\max_{\boldsymbol\theta,\mathbf{w}}&\quad\log\left(p(\boldsymbol\theta)\right)+\sum_{i=1}^{n}\left[\log\left(m_{i}!\right)+m_i\log U+\sum_{k=1}^{m_{i}}\log\left(p(y_{k}|\mathbf{x}_{i},\boldsymbol\theta)\right)\right.\\
& \quad+\left.\log\left(NB\left(m_{i};\alpha(\mathbf{x}_{i},\mathbf{w}),\frac{1}{1+\beta(\mathbf{x}_{i},\mathbf{w})}\right)\right)\right]+\log\left(p(\mathbf{w})\right)\\
= & \arg\max_{\boldsymbol\theta,\mathbf{w}}\quad f_{1}(\boldsymbol\theta)+f_{2}(\mathbf{w}).
\end{aligned}
\label{eq:map_complete}
\end{equation}
Therefore, the optimisation problem in Eq.\xspace~\eqref{eq:map_complete} can be decomposed \wrt the parameters
$\boldsymbol\theta$ and $\mathbf{w}$, and we can learn them independently in two separate problems
\begin{equation}
\begin{aligned}
\boldsymbol\theta^{*} = & \arg\max_{\boldsymbol\theta}\quad f_{1}(\boldsymbol\theta)\\
= & \arg\max_{\boldsymbol\theta}\quad\gamma_{1}\|\boldsymbol\theta\|^2+\sum_{i=1}^{n}\left[\log\left(m_{i}!\right)+m_i\log U+\sum_{k=1}^{m_{i}}\log\left(p(y_{k}|\mathbf{x}_{i},\boldsymbol\theta)\right)\right]\\
\equiv&\arg\max_{\boldsymbol\theta}\quad\gamma_{1}\|\boldsymbol\theta\|^2+\sum_{i=1}^{n}\sum_{k=1}^{m_{i}}\log\left(p(y_{k}|\mathbf{x}_{i},\boldsymbol\theta)\right)
\end{aligned}
\label{eq:CNN_Eq0}
\end{equation}
and
\begin{equation}
\begin{aligned}
\mathbf{w}^{*} = \arg\max_{\mathbf{w}}\quad& f_{2}(\mathbf{w})\\
= \arg\max_{\mathbf{w}}\quad&\sum_{i=1}^{n}\Big[\log\left(\frac{\Gamma(m_{i}+\alpha(\mathbf{x}_{i},\mathbf{w}))}{\Gamma(m_{i}+1)\Gamma(\alpha(\mathbf{x}_{i},\mathbf{w}))}\right)\\
& + \displaystyle{ \log\left(\frac{\beta(\mathbf{x}_{i},\mathbf{w})^{\alpha(\mathbf{x}_{i},\mathbf{w})}}{\left(1+\beta(\mathbf{x}_{i},\mathbf{w})\right)^{\alpha(\mathbf{x}_{i},\mathbf{w})+m_{i}}}\right)}\Big]+\gamma_2\|\mathbf{w}\|^2,
\label{eq:Cardinal_Eq}
\end{aligned}
\end{equation}
where $\gamma_1$ and $\gamma_2$ are the regularisation parameters, proportional to the predefined covariance parameters $\sigma_1$ and $\sigma_2$. These parameters are also known as weight decay parameters and are commonly used in training neural networks.
The learned parameters $\boldsymbol\theta^{*}$ in Eq.\xspace~\eqref{eq:CNN_Eq0} are used to map an input feature vector $\mathbf{x}$ into an output vector $Y$. For example, in image classification, $\boldsymbol\theta^*$ is used to predict the distribution $Y$ over all categories, given the input image $\mathbf{x}$. Note that $\boldsymbol\theta^*$ can generally be learned using a number of existing machine learning techniques. In this paper we rely on deep CNNs to perform this task.
To learn the highly complex function between the input feature $\mathbf{x}$ and the parameters $(\alpha,\beta)$, which are used for estimating the output cardinality distribution, we train a second deep neural network.
Using neural networks to predict a discrete value may seem counterintuitive, because these methods at their core rely on the backpropagation algorithm, which assumes a differentiable loss. Note that we achieve this by describing the discrete distribution by continuous parameters $\alpha, \beta$ (Negative binomial $\text{NB}(\cdot,\alpha,\frac{1}{1+\beta})$), and can then easily draw discrete samples from that distribution. More formally, to estimate $\mathbf{w}^{*}$, we compute the partial derivatives of the objective function in Eq.\xspace~\eqref{eq:Cardinal_Eq} \wrt $\alpha (\cdot,\cdot)$ and $\beta (\cdot,\cdot)$ and use standard backpropagation to learn the parameters of the deep neural network.
\begin{equation}
\frac{\partial f_{2}(\mathbf{w})}{\partial\mathbf{w}} = \frac{\partial f_{2}(\mathbf{w})}{\partial\alpha(\mathbf{x},\mathbf{w})}\cdot\frac{\partial\alpha(\mathbf{x},\mathbf{w})}{\partial\mathbf{w}}+\frac{\partial f_{2}(\mathbf{w})}{\partial\beta(\mathbf{x},\mathbf{w})}\cdot\frac{\partial\beta(\mathbf{x},\mathbf{w})}{\partial\mathbf{w}}+2\gamma_{2}\mathbf{w},
\end{equation}
where
\begin{equation}
\frac{\partial f_{2}(\mathbf{w})}{\partial\alpha(\mathbf{x},\mathbf{w})} = \sum_{i=1}^{n}\bigg[\Psi\Big(m_{i}+\alpha(\mathbf{x}_{i},\mathbf{w})\Big)-\Psi\Big(\alpha(\mathbf{x}_{i},\mathbf{w})\Big)+\log\Big(\frac{\beta(\mathbf{x}_{i},\mathbf{w})}{1+\beta(\mathbf{x}_{i},\mathbf{w})}\Big)\bigg],
\end{equation}
and
\begin{equation}
\frac{\partial f_{2}(\mathbf{w})}{\partial\beta(\mathbf{x},\mathbf{w})} = \sum_{i=1}^{n}\bigg[\frac{\alpha(\mathbf{x}_{i},\mathbf{w})-m_{i}\,\beta(\mathbf{x}_{i},\mathbf{w})}{\beta(\mathbf{x}_{i},\mathbf{w})\Big(1+\beta(\mathbf{x}_{i},\mathbf{w})\Big)}\bigg],
\end{equation}
where $\Psi(\cdot)$ is the digamma function defined as
\begin{equation}
\Psi(\alpha)=\frac{d}{d\alpha} \log \left(\Gamma(\alpha)\right)=\frac{{\Gamma'(\alpha)}}{\Gamma(\alpha)}.
\end{equation}
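For completeness, the per-sample term of $f_{2}$ and the two partial derivatives above can be implemented and verified in a few lines; the sketch below uses arbitrary sample values, omits the regulariser, and checks the analytic gradients against central finite differences.
\begin{verbatim}
import numpy as np
from scipy.special import gammaln, digamma

def f2_term(m, alpha, beta):
    # log NB(m; alpha, 1/(1+beta)) for one sample (regulariser omitted)
    return (gammaln(m + alpha) - gammaln(m + 1) - gammaln(alpha)
            + alpha * np.log(beta) - (alpha + m) * np.log(1.0 + beta))

def grads(m, alpha, beta):
    d_alpha = digamma(m + alpha) - digamma(alpha) + np.log(beta / (1.0 + beta))
    d_beta = (alpha - m * beta) / (beta * (1.0 + beta))
    return d_alpha, d_beta

m, alpha, beta, eps = 7, 2.5, 0.8, 1e-6
da, db = grads(m, alpha, beta)
print(da, (f2_term(m, alpha + eps, beta)
           - f2_term(m, alpha - eps, beta)) / (2 * eps))  # should match
print(db, (f2_term(m, alpha, beta + eps)
           - f2_term(m, alpha, beta - eps)) / (2 * eps))  # should match
\end{verbatim}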
\subsection{Inference}
\label{sec:inference}
Having the learned parameters of the network $(\mathbf{w}^{*},\boldsymbol\theta^{*})$, for a test feature $\mathbf{x}^{+}$, we use a MAP estimate to generate a set output as
\begin{equation}
\mathcal{Y}^{*}
= \arg\max_{\mathcal{Y}} p(\mathcal{Y}|\mathcal{D},\mathbf{x}^{+}),
\end{equation}
where
\begin{eqnarray*}
p(\mathcal{Y}|\mathcal{D},\mathbf{x}^{+}) & = & \int p(\mathcal{Y}|\mathbf{x}^{+},\boldsymbol\theta,\mathbf{w})p(\boldsymbol\theta,\mathbf{w}|\mathcal{D})d\boldsymbol\theta d\mathbf{w}
\end{eqnarray*}
and $p(\boldsymbol\theta,\mathbf{w}|\mathcal{D}) = \delta(\boldsymbol\theta=\boldsymbol\theta^{*},\mathbf{w}=\mathbf{w}^{*}|\mathcal{D})$.
Since the unit of hypervolume $U$ is unknown in most practical applications, to calculate the mode of the set distribution $p(\mathcal{Y}|\mathcal{D},\mathbf{x}^{+})$, we use the sequential inference as explained in~\cite{mahler2007statistical}. To this end, we first calculate the mode $m^*$ of the cardinality distribution
\begin{equation}
m^{*}
= \arg\max_{m}\quad p(m|\mathbf{w}^*,\mathbf{x}^{+}),
\end{equation}
where
\begin{equation}
p(m|\mathbf{w}^*,\mathbf{x}^{+})=\text{NB}\left(m;\alpha(\mathbf{w}^*,\mathbf{x}^{+}),\frac{1}{1+\beta(\mathbf{w}^*,\mathbf{x}^{+})}\right).
\end{equation}
Then, we calculate the mode of the joint distribution for the given cardinality $m^{*}$ as
\begin{equation}
\mathcal{Y}^{*}
= \arg\max_{\mathcal{Y}_{m^{*}}}\quad p(\{y_1,\cdots,y_{m^{*}}\}_{||}|\boldsymbol\theta^*,\mathbf{x}^{+}).
\end{equation}
To estimate the most likely set $\mathcal{Y}^{*}$ with cardinality $m^{*}$, we use the first CNN with the parameters $\boldsymbol\theta^*$, which predicts $p(y_1,\cdots,y_{M}|\mathbf{x}^{+},\boldsymbol\theta^*)$, where $M$ is the maximum cardinality of the set, \ie $\{y_1,\cdots,y_{m^{*}}\}\subseteq\{y_1,\cdots,y_{M}\}$ for all $m^{*}$.
Since the samples are \textit{i.i.d.}\xspace, the joint probability is maximised when the probability of each element in the set is maximised. Therefore, the most likely set $\mathcal{Y}^{*}$ with cardinality $m^{*}$ is obtained by ranking the probabilities of the set elements $y_1,\cdots,y_{M}$ output by the first CNN and choosing the $m^{*}$ elements with the highest probability values.
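A compact sketch of this two-step inference is given below: for $\alpha > 1$ the mode of $\text{NB}\left(m;\alpha,\frac{1}{1+\beta}\right)$ is $\lfloor(\alpha-1)/\beta\rfloor$ (which follows from the monotonicity of the successive pmf ratios; it is $0$ otherwise), after which the $m^{*}$ highest-scoring elements are kept. The score vector stands in for the per-element probabilities predicted by the first CNN.
\begin{verbatim}
import numpy as np

def nb_mode(alpha, beta):
    # mode of NB(m; alpha, 1/(1+beta)): largest m with p(m) >= p(m-1)
    return int(np.floor((alpha - 1.0) / beta)) if alpha > 1.0 else 0

def map_set(scores, alpha, beta):
    m_star = nb_mode(alpha, beta)
    top = np.argsort(scores)[::-1][:m_star]   # indices of the m* best elements
    return set(top.tolist())

print(map_set(np.array([0.9, 0.2, 0.75, 0.4, 0.05]), alpha=4.0, beta=1.2))
\end{verbatim}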
Note that the assumptions listed in Sec.\xspace~\ref{sec:deep-set-net0} are necessary to make both learning and inference computationally tractable and amenable to an elegant mathematical formulation.
A major advantage of this approach is that we can use any state-of-the-art classifier/detector as the first CNN ($\boldsymbol\theta^*$) to further improve its performance.
Modifying any of the assumptions, \eg non-\textit{i.i.d.}\xspace set elements, leads to serious mathematical complexities~\cite{mahler2007statistical}, and are left for future work.
\section{Further Experimental Results}
Here, we provide additional arguments, evaluation plots and qualitative results that could not be included in the main paper.
\subsection{Object counting by regression}
\begin{table}[tb]
\caption{Loss comparison for cardinality estimation.}
\label{tab:new-experiments}
\vspace{-1.5em}
\begin{center}
\begin{tabular}{l|cc|cc}
& \multicolumn{2}{c}{Mean card. error} & \multicolumn{2}{c}{F1 score}\\
\!\!\!\! Loss & MOT16 & MS COCO & MOT16 & MS COCO \\
\hline
\!\!\!\! Regression &$2.05$& $0.83$ & $60.16$ &$68.4$ \\
\!\!\!\! Negative Binomial & $\textbf{1.94}$ & $\textbf{0.74}$ & $\textbf{61.86}$ & $\textbf{69.4}$ \\
\end{tabular}
\end{center}
\end{table}
Regressing for cardinality may seem an obvious choice, but is not trivial to derive mathematically and cannot be easily justified in our framework because it
\emph{a)} cannot be accommodated in a Bayesian set formulation to model uncertainty and
\emph{b)} does not yield a discrete distribution.
Nonetheless, we have conducted the experiment by replacing our loss with the regression loss while using the exact same architecture and setup as in Sec.\xspace 5.2 of the main text. Tab.~\ref{tab:new-experiments} shows the comparison results between our cardinality loss and regression loss on two datasets from two reported tasks of multi-class image classification (MS-COCO) and pedestrian detection (MOT16). As expected, directly regressing for cardinality yields slightly worse results both in terms of the cardinality prediction and \wrt the F1 score.
For completeness, Tab.\xspace~\ref{table:Card_err} also reports the mean absolute error and standard deviation for cardinality estimation using our loss on four datasets.
\input{Card_err}
\subsection{Pedestrian detection}
Here, we first discuss the challenges that we confronted to use our set formulation for this application. Then we provide some information about the datasets and their split used for this experiment. Finally, we show more quantitative and qualitative results on this experiment.
\myparagraph{Non-maximum suppression.}
In the main text, we argued that non-maximum suppression (NMS), as a heuristic step, makes the detection problem less straightforward than what is expressed in our set formulation
in Eq.\xspace~\eqref{eq:set_density0}.
In fact, a major nuisance in detection is not the NMS algorithm itself as a greedy solver, but rather its hand-crafted objective.
This process is traditionally formulated as maximising the joint distribution over pairs of samples, or equivalently as a quadratic binary program (QBP)
\begin{equation}Y^{*}
= \arg\max_{Y} Y^TQY, \end{equation}
where
$Y\in\mathbb{B}^M$
is a binary vector, indicating which of the $M$ boxes to keep or to suppress.
The diagonal values of $Q$ are proportional to the detection scores while the pairwise (exclusion) terms in $Q$ are manually designed, \eg to correspond to the overlap ratios.
The aforementioned QBP is NP-hard and cannot be solved globally in general. NMS is one greedy and efficient approach to solve the problem locally.
To enforce $m^*$, one could include a constraint into the QBP like
\begin{equation}
\begin{aligned}
Y^{*} = &\arg\max_{Y}Y^TQY,\\ & s.t. \textbf{1}^TY = m^*.
\end{aligned}
\end{equation}
However, this may lead to an infeasible problem for a fixed $Q$ with a predefined value of the overlap-ratio threshold $T_O$. Hence, $Q$ should be designed such that the above problem has a feasible solution. Learning $Q$ is perhaps a more elegant approach, but is not part of this paper. For the current setup, one solution is instead to find a threshold that makes the above problem feasible. Therefore, we start from the default value of $T_O$ and adjust it step-wise until the number of boxes reaches $m^*$. If the number of final boxes is larger than $m^*$, we pick the $m^*$ boxes with the highest scores. To apply a solver, we experimented with the global QBP formulation using Gurobi for a small problem, but found NMS with an adaptive threshold to be the most efficient and effective approach.
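The following is a minimal sketch of this adaptive scheme (box values, thresholds and the step size are illustrative): run greedy NMS, relax the overlap threshold $T_O$ step-wise while fewer than $m^{*}$ boxes survive, and keep the $m^{*}$ highest-scoring survivors if more remain.
\begin{verbatim}
import numpy as np

def iou(a, b):
    # intersection-over-union of two (x1, y1, x2, y2) boxes
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def greedy_nms(boxes, scores, t_o):
    order, keep = np.argsort(scores)[::-1], []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= t_o for j in keep):
            keep.append(i)
    return keep

def nms_with_cardinality(boxes, scores, m_star, t_o=0.5, step=0.05):
    keep = greedy_nms(boxes, scores, t_o)
    while len(keep) < m_star and t_o < 1.0:   # relax threshold until feasible
        t_o += step
        keep = greedy_nms(boxes, scores, t_o)
    keep.sort(key=lambda i: -scores[i])
    return keep[:m_star]                      # m* highest-scoring survivors

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], float)
print(nms_with_cardinality(boxes, np.array([0.9, 0.8, 0.7]), m_star=3))
\end{verbatim}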
\myparagraph{Caltech Pedestrians~\cite{Dollar:2012:PAMI}} is a de-facto standard benchmark for pedestrian detection. The dataset contains sequences captured from a vehicle driving through regular traffic in an urban environment and provides bounding box annotations of nearly $350,000$ pedestrians. The annotations also include detailed occlusion labels. The number of pedestrians per image varies between $0$ and $14$. However, more than $55\%$ of the images contain no people at all and around $30\%$ of the data includes one or two persons.
We use the MS-CNN~\cite{Cai:2016:ECCV} network model and its parameters learned on the Caltech training set as $\boldsymbol\theta^*$ in Eq.\xspace~\eqref{eq:CNN_Eq0}. To learn the cardinality, we use $4250$ images provided as a training set, splitting it into training and validation ($80\%-20\%$), reaching a mean absolute error of $0.54$ (\cf~Tab.\xspace~\ref{table:Card_err}).
\myparagraph{MOTChallenge 2016.}
This benchmark is primarily targeted at multi-object tracking and is not yet commonly used for evaluating pedestrian detection. However, the variation in the number of pedestrians across the frames is relatively large (between $0$ and $32$) and is also distributed more uniformly, which makes correct cardinality estimation more important.
Since the labels for the test set are not available, we use the provided training set of this benchmark consisting of $5316$ images from $7$ different sequences, and divide it into training, validation and test set with split ratios $60\%$, $15\%$ and $25\%$, respectively.
We only learn the cardinality network $\mathbf{w}^*$ on the training set, and we use the MS-CNN network model and its parameters learned on the KITTI dataset~\cite{Geiger:2012:CVPR} as $\boldsymbol\theta^*$ in Eq.\xspace~\eqref{eq:CNN_Eq0}.
\myparagraph{Additional Results.} ROC curves on two detection datasets are shown in Fig.~\ref{fig:detection-plots}.
Qualitative results of pedestrian detection are shown in Figure~\ref{fig:Results3}.
\begin{figure}[ht]
\centering
\includegraphics[width=.395\linewidth]{figs_appendix/MOT16.pdf}
\includegraphics[width=.40\linewidth]{figs_appendix/UsaTestRocAll.pdf}
\caption{ROC curves on MOT16 and Caltech Pedestrians (experiment ``all''). The overall performance of a detector is measured by the log-average miss rate as proposed by Doll\'ar~\etal~\cite{Dollar:2012:PAMI}.}
\label{fig:detection-plots}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=1\linewidth]{figs_appendix/dets.pdf}
\caption{More examples illustrating results of pedestrian detection generated using the state-of-the-art detector MS-CNN~\cite{Cai:2016:ECCV} (in blue, left) and our MS-CNN + DeepSetNet (right).
To generate the MS-CNN results, we display the top $m^*$ boxes after applying non-maximum suppression. Arrows indicate typical failures introduced by a suboptimal NMS threshold, which are corrected when considering the predicted number of persons for each image.
}
\label{fig:Results3}
\end{figure*}
\subsection{Multi-class image classification.}
Figure~\ref{fig:Results10} shows more results for successful image tagging. Figure~\ref{fig:Results20} points to some interesting failures and erroneous predictions.
\begin{figure*}[t]
\centering
\includegraphics[width=.95\linewidth]{figs_appendix/success_cases.pdf}
\caption{Further examples showing a perfect prediction \wrt both the number of tags and the labels themselves using our Deep Set Network.}
\label{fig:Results10}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=.95\linewidth]{figs_appendix/failure_cases.pdf}
\caption{Additional examples illustrating interesting failure cases. {\textcolor{blue}{False negatives}} and {\textcolor{red}{false positives}} are highlighted in blue and red, respectively. Note that in most examples, the mismatch between the ground truth and our prediction is due to the ambiguity or artifacts in the annotations. Two such examples are shown in the top row, where a train (window) and the surfboard are not annotated, probably because these are hardly visible in the image. Nevertheless, our network can predict the objects.
The two bottom rows show real failure cases of our method. Note, however, that these include extremely challenging examples, where even for a human, it is fairly difficult to spot a traffic light in the aerial image or the person and the chair in the image on the bottom right.
}
\label{fig:Results20}
\end{figure*}
\section{Random Vectors vs. Random Finite Sets}
\label{sec:background}
In statistics, a continuous random variable $y$ is a variable that can take an infinite number of possible values. A continuous random vector can be defined by stacking several continuous random variables into a fixed length vector, $Y=\left(y_1,\cdots,y_m\right)$. The mathematical function describing the possible values of a continuous random vector and their associated joint probabilities is known as a probability density function (PDF) $p(Y)$ such that
$\int p(Y)dY = 1.$
In contrast, a random finite set (RFS) $\mathcal{Y}$ is a finite-set valued random variable $\mathcal{Y}=\left\{y_1,\cdots,y_m\right\}$. The main difference between an RFS and a random vector is that for the former, the number of constituent variables, $m$, is random and the variables themselves are random and unordered. Throughout the paper, we use $\mathcal{Y}=\left\{y_1,\cdots,y_m\right\}$ for a set with unknown cardinality, $\mathcal{Y}_m=\left\{y_1,\cdots,y_m\right\}_{||}$ for a set with known cardinality and $Y=\left(y_1,\cdots,y_m\right)$ for a vector with known dimension.
A statistical function describing a finite-set variable $\mathcal{Y}$ is a
combinatorial probability density function $p(\mathcal{Y})$ which consists of a discrete probability distribution, the so-called cardinality distribution, and a family of joint probability densities on both the number and the values of the constituent variables.
Similar to the definition of a PDF for a random variable, the PDF of an RFS must sum to unity over all possible cardinality values and all possible element values and their permutations~\cite{mahler2007statistical}.
The PDF of an $m$-dimensional random vector can be defined in terms of an RFS as
\begin{equation}\label{eq: pdf rfs vs vect}
\begin{aligned}
p(y_1,\cdots,y_m) \triangleq \frac{1}{m!} p(\{y_1,\cdots,y_m\}_{||}).
\end{aligned}
\end{equation}
The normalisation factor $m!=\prod_{k=1}^m k$ appears because the probability density for a set $\{y_1,\cdots,y_m\}_{||}$ must be equally distributed among all the $m!$ possible permutations of the vector~\cite{mahler2007statistical}.
Conventional machine learning approaches, such as Bayesian learning and convolutional neural networks, have been proposed to learn the optimal parameters $\boldsymbol\theta^*$ of the distribution $p(Y|\mathbf{x},\boldsymbol\theta^*)$ which maps the input vector $\mathbf{x}$ to the \emph{output vector} $Y$.
In this paper, we instead propose an approach that can learn a pair $(\boldsymbol\theta^*,\mathbf{w}^*)$ of parameter vectors for a set distribution that allow one to map the input vector $\mathbf{x}$ into the \emph{output set} $\mathcal{Y}$, i.e. $p(\mathcal{Y}|\mathbf{x}, \boldsymbol\theta^*,\mathbf{w}^*)$. The additional parameters $\mathbf{w}^*$ define a PDF over the set cardinality, as we will explain in the next section.
\section{Conclusion}
\label{sec:conclusion}
We proposed a deep learning approach for predicting sets. To achieve this goal, we derived a loss for learning a discrete distribution over the set cardinality.
This allowed us to use standard backpropagation for training a deep network for set prediction.
We have demonstrated the effectiveness of this approach on crowd counting, pedestrian detection and multi-class image classification, achieving competitive or superior results in all three applications. As our network is trained independently, it can be trivially applied to any existing classifier or detector, to further improve performance.
Note that this decoupling is a direct consequence of our underlying mathematical derivation due to the \textit{i.i.d.}\xspace~assumptions, which renders our approach very general and applicable to a wide range of models.
In future, we plan to extend our method to multi-class cardinality estimation and investigate models that do not make \textit{i.i.d.}\xspace assumptions. Another potential avenue could be to exploit the Bayesian nature of the model to include uncertainty as opposed to relying on the MAP estimation alone.
\section{Introduction}
\label{sec:introduction}
Deep neural networks have shown state-of-the-art performance on many computer vision problems, including semantic segmentation~\cite{Papandreou:2015:ICCV}, visual tracking~\cite{Nam:2016:CVPR}, image captioning~\cite{Johnson:2016:CVPR}, scene classification~\cite{Krizhevsky:2012:NIPS}, and object detection~\cite{Liu:2016:ECCV}.
However, traditional convolutional architectures require a problem to be formulated in a certain way: in particular, they are designed to predict a \emph{vector} (or a matrix, or a tensor in a more general sense), that is either of a fixed length or whose size depends on the input (\cf fully convolutional architectures).
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{figs/method}
\caption{{\bf Left:} Traditional CNNs learn parameters $\boldsymbol\theta^*$ to predict a fixed \emph{vector} $Y$. {\bf Right:} In contrast, we propose to train a separate CNN to learn a parameter vector $\mathbf{w}^*$, which is used to predict the set cardinality of a particular output.}
\vspace{-1em}
\label{fig:method}
\end{figure}
For example, consider the task of scene classification where the goal is to predict the label (or category) of a given image.
Modern approaches typically address this by a series of convolutional layers, followed by a number of fully connected layers, which are finally mapped to predict a fixed-sized vector~\cite{Krizhevsky:2012:NIPS, Simonyan:2014:VGG, Szegedy:2014:Inception}.
The length of the predicted vector corresponds to the number of candidate categories, \eg 1,000 for the ImageNet challenge~\cite{Russakovsky:2015:ILSVRC}. Each element is a score or probability of a particular category, and the final prediction is a probability distribution over all categories.
This strategy is perfectly admissible if one expects to find exactly one or at least the same number of categories across all images. However, natural images typically show multiple entities (\eg table, pizza, person, \etc), and what is perhaps more important, this number differs from image to image.
During evaluation, this property is not taken into account.
The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) only counts an error if the ``true'' label is not among the top-5 candidates.
Another strategy to account for multiple classes is to fix the number to a certain value for all test instances, and report precision and recall by counting false positive and false negative predictions, as was done in~\cite{gong2013deep,Wang_2016_CVPR}.
Arguably, both methods are suboptimal because in a real-world scenario, where the correct labelling is unknown, the prediction should in fact not only rank all labels according to their likelihood of being present, but also report \emph{how many} objects (or labels) are actually present in one particular image.
Deciding how many objects are present in an image is a crucial part of human perception and scene understanding but is missing from our current evaluation of automated image understanding methods.
As a second example, let us consider pedestrian detection.
The parallel to scene classification that we motivated above is that, once again, in real scenarios, the number of people in a particular scene is not known beforehand.
The most common approach is to assign a confidence score to a number of region candidates~\cite{Dalal:2005:CVPR, Felzenszwalb:2010:PAMI, Girshick:2015:ICCV, Ren:2015:NIPS}, which are typically selected heuristically by thresholding and non-maxima suppression.
We argue that it is important not to simply discard the information about the actual number of objects at test time, but to exploit it while selecting the subset of region proposals.
The examples above motivate and underline the importance of \emph{set prediction} in certain applications.
It is important to note that, in contrast to vectors, a set is a collection of elements which is \emph{invariant under permutation} and the size of a set is \emph{not fixed} in advance.
To this end, we use a principled definition of a set as the combination of a cardinality distribution and a family of joint distributions, one for each cardinality value.
In summary, our main contributions are as follows:
\emph{(i)}
Starting from the mathematical definition of a set distribution, we derive a loss that enables us to employ existing machine learning methodology to learn this distribution from data.
\emph{(ii)}
We integrate our loss into a deep learning framework to exploit the power of a multi-layer architecture.
\emph{(iii)}
We present state-of-the-art results for multi-label image classification and pedestrian detection on standard datasets and competitive results on the task of object counting.
\section*{Appendix}
This appendix accompanies the main text. We first provide more background on finite set statistics. We then present the details of our derivations for the Deep Set Network that were omitted from the main text due to space constraints, augmenting Sections $3$ and $4$.
Finally, we provide further discussion and results for the object counting, multi-label classification and pedestrian detection applications.
\section{Background on Finite Set Statistics}
\label{sec:background0}
Finite Set Statistics provides powerful and practical mathematical tools for dealing with
random finite sets, based on a notion of integration and density that is consistent with point process theory~\cite{mahler2007statistical}. In this section, we review some basic mathematical background on this subject.
In conventional statistics, a continuous random variable $y$ is a variable that can take an infinite number of possible values. A continuous random vector can be defined by stacking several continuous random variables into a fixed-length vector, $Y=\left(y_1,\cdots,y_m\right)$. The mathematical function describing the possible values of a continuous random vector, and their associated joint probabilities, is known as a probability density function (PDF) $p(Y)$ such that
$\int p(Y)dY = 1.$
A random finite set (RFS) $\mathcal{Y}$ is a finite-set valued random variable $\mathcal{Y}=\left\{y_1,\cdots,y_m\right\}\subset \mathbb{Y}$. The main difference between an RFS and a random vector is that for the former, the number of constituent variables, $m$, is random and the variables themselves are random and unordered, while the latter is of a fixed size with a predefined order.
A statistical function describing a finite-set variable $\mathcal{Y}$ is a
combinatorial probability density function $p(\mathcal{Y})$, which consists of a discrete probability distribution, the so-called cardinality distribution, and a family of joint probability densities on the values of the constituent variables for each cardinality.
Similar to the definition of a PDF for a random variable, the PDF of an RFS must sum to unity over all possible cardinality values and all possible element values and their permutations, \ie
\begin{equation}\label{eq: RFS pdf0}
\int p(\mathcal{Y})\mu(d\mathcal{Y}) \triangleq \sum_{m=0}^{\infty}\frac{1}{m!U^m}\int p(\{y_1,\cdots,y_m\}_{||}) dy_1\cdots dy_m = 1,
\end{equation}
where $\mu$ is the dominating
measure and $U$ is the unit of hypervolume
in $\mathbb{Y}$~\cite{vo2016model}.
The PDF of an $m$-dimensional random vector can be defined in terms of an RFS as:
\begin{equation}\label{eq: pdf rfs vs vect0}
\begin{aligned}
p(y_1,\cdots,y_m) \triangleq \frac{1}{m!} p(\{y_1,\cdots,y_m\}_{||}).
\end{aligned}
\end{equation}
The denominator $m!=\prod_{k=1}^m k$ appears because the probability density for a set $\{y_1,\cdots,y_m\}_{||}$ must be equally distributed among all the $m!$ possible permutations of the vector~\cite{mahler2007statistical}.
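For instance, for $m=2$, the set $\{y_1,y_2\}_{||}$ corresponds to the two equally likely orderings $(y_1,y_2)$ and $(y_2,y_1)$, so $p(y_1,y_2)=\frac{1}{2}\,p(\{y_1,y_2\}_{||})$.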
The cardinality distribution $p(m)$ over the number of
elements in the random finite set $\mathcal{Y}$ is obtained by
\begin{equation}\label{eq: Cardinality distribution}
p(m) = \int_{|\mathcal{Y}|=m} p(\mathcal{Y})\mu(d\mathcal{Y}) \triangleq \frac{1}{m!U^m} \int p(\{y_1,\cdots,y_m\}_{||}) dy_1\cdots dy_m.
\end{equation}
Similar to moments of random vectors in conventional statistics, statistical moments can also be defined for an RFS.
The first statistical moment, or the expected value, of an RFS is known as intensity density or probability hypothesis density (PHD) and is calculated by definition as
\begin{equation}\label{eq: intensity density}
v(y) \triangleq \int\delta_{\mathcal{Y}}(y) p(\mathcal{Y})\mu(d\mathcal{Y}),
\end{equation}
where $\delta_{\mathcal{Y}}(y) = \sum_{x\in \mathcal{Y}}\delta_x(y)$ and $\delta_x(y)$ denotes the Dirac delta function
concentrated at $x$. The PHD function $v(y)$ is interpreted as the instantaneous expected density of elements at the point $y$. Moreover, the integral of the PHD over a region gives the expected number of elements in that region, and the peaks of the PHD
indicate the highest local concentrations of the expected number of elements.
Given an RFS distribution $p(\mathcal{Y})$, samples can be drawn from it as shown in Algorithm~\ref{table:RFS}.
\input{sampling_rfs}
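As a concrete illustration, the following is a minimal Python sketch of this sampling procedure under the \textit{i.i.d.}\xspace-elements and Poisson-cardinality assumptions adopted in the next section (function names and the element sampler are ours):
\begin{verbatim}
import numpy as np

def sample_rfs(lam, sample_element, rng=None):
    """Draw one realisation of a random finite set: first sample
    the cardinality m from p(m), then sample m unordered elements.
    lam: rate of a Poisson cardinality distribution (any discrete
    p(m) would do); sample_element: draws one element, i.i.d. here."""
    rng = rng or np.random.default_rng()
    m = rng.poisson(lam)                  # cardinality m ~ p(m)
    # A list is returned for convenience; its order carries no meaning.
    return [sample_element(rng) for _ in range(m)]

# Example: sets of 2-D points uniform in the unit square.
realisation = sample_rfs(3.0, lambda r: r.uniform(0.0, 1.0, size=2))
\end{verbatim}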
\section{Deep Set Network}
\label{sec:deep-set-net0}
Let us begin by defining a training set $\mathcal{D} = \{\mathcal{Y}_{i},\mathbf{x}_{i}\}$,
where each training sample $i=1,\ldots,n$ is a pair consisting of an input feature $\mathbf{x}_{i}\in\mathbb{R}^{l}$ and an output (or label) set
$\mathcal{Y}_{i} = \{y_{1},y_{2},\ldots,y_{m_i}\}, y_{k}\in\mathbb{R}^{d}, m_i\in\mathbb{N}^0 $. In the following we will drop the instance index $i$ for better readability. Note that $m:=|\mathcal{Y}|$ denotes the cardinality of set $\mathcal{Y}$.
The probability of a set $\mathcal{Y}$ with an unknown cardinality is defined as:
\begin{equation}
\label{eq:set_density0}
\begin{aligned}
p(\mathcal{Y}|\mathbf{x},\boldsymbol\theta,\mathbf{w}) = & p(m|\mathbf{x},\mathbf{w})\times U^m\times p(\{y_{1},y_{2},\cdots,y_{m}\}_{||}|\mathbf{x},\boldsymbol\theta)\\
= & p(m|\mathbf{x},\mathbf{w})\times m!\times U^m\times p(y_{1},y_{2},\cdots,y_{m}|\mathbf{x},\boldsymbol\theta),
\end{aligned}
\end{equation}
where $p(m|\cdot,\cdot)$ and $ p(y_{1},y_{2},\cdots,y_{m}|\cdot,\cdot)$ are respectively a cardinality distribution and a
symmetric joint probability density distribution of the elements. $U$ is the unit of hypervolume
in $\mathbb{R}^{d}$, which makes the joint distribution unitless~\cite{vo2016model}. $\boldsymbol\theta$ denotes the parameters that estimate the joint distribution of set element values for a fixed cardinality,
while $\mathbf{w}$ represents the collection of parameters which estimate the cardinality distribution of the set elements.
The above formulation represents the probability density of a set; it is very general and completely independent of the choice of both the cardinality and the spatial distributions. It is thus straightforward to transfer it to many applications that require the output to be a set. However, to make the problem amenable to mathematical derivation and implementation, we adopt two assumptions: \emph{i)} the outputs (or labels) in the set are independent
and identically distributed (\textit{i.i.d.}\xspace) and \emph{ii)} their cardinality follows
a Poisson distribution with parameter $\lambda$. Thus, we can write the distribution as
\begin{equation}
\begin{aligned}
p(\mathcal{Y}|\mathbf{x},\boldsymbol\theta,\mathbf{w}) = \int p(m|\lambda)p(\lambda|\mathbf{x},\mathbf{w}) d\lambda\times
m!\times U^m\times\left(\prod_{k=1}^{m}p(y_{k}|\mathbf{x},\boldsymbol\theta)\right).
\end{aligned}
\end{equation}
\subsection{Posterior distribution}
\label{sec:posterior}
To learn the parameters $\boldsymbol\theta$ and $\mathbf{w}$, it is valid to assume that the training samples are independent of each other and that the distribution $p(\mathbf{x})$ over the input data is independent of the output and the parameters.
Therefore, the posterior distribution over the parameters can be derived as
\begin{equation}
\begin{aligned}
p(\boldsymbol\theta,\mathbf{w}|\mathcal{D}) &= \frac{1}{Z} p(\mathcal{D}|\boldsymbol\theta,\mathbf{w})p(\boldsymbol\theta)p(\mathbf{w})\\
&= \frac{1}{Z} p(\{\mathcal{Y}_{i},\mathbf{x}_{i}\}_{\forall i}|\boldsymbol\theta,\mathbf{w})p(\boldsymbol\theta)p(\mathbf{w})\\
&= \frac{1}{Z}\prod_{i=1}^{n} \bigg[p(\mathcal{Y}_{i}|\mathbf{x}_{i},\boldsymbol\theta,\mathbf{w}) p(\mathbf{x}_{i})\bigg]p(\boldsymbol\theta)p(\mathbf{w})\\
&= \frac{1}{Z}\prod_{i=1}^{n}\left[\int p(m_{i}|\lambda)p(\lambda|\mathbf{x}_{i},\mathbf{w})d\lambda\times
m_{i}!\times U^{m_i}\times\left(\prod_{k=1}^{m_{i}}p(y_{k}|\mathbf{x}_{i},\boldsymbol\theta)\right) p(\mathbf{x}_{i})\right]p(\boldsymbol\theta)p(\mathbf{w}),
\end{aligned}
\label{eq:posterior}
\end{equation}
where $Z$ is a normaliser defined as
\begin{equation}
Z = \int \int\prod_{i=1}^{n} \left[\int p(m_{i}|\lambda)p(\lambda|\mathbf{x}_{i},\mathbf{w})d\lambda\times
m_{i}!\times U^{m_i}\times\left(\prod_{k=1}^{m_{i}}p(y_{k}|\mathbf{x}_{i},\boldsymbol\theta)\right) p(\mathbf{x}_{i})\right]p(\boldsymbol\theta)p(\mathbf{w})\quad d\theta d\mathbf{w}.
\end{equation}
The probability
$p(\mathbf{x}_{i})$ can be eliminated as it appears in both the numerator and the denominator. Therefore,
\begin{equation}
p(\boldsymbol\theta,\mathbf{w}|\mathcal{D}) = \frac{1}{\tilde{Z}}\prod_{i=1}^{n}\left[\int p(m_{i}|\lambda)p(\lambda|\mathbf{x}_{i},\mathbf{w})d\lambda\times
m_{i}!\times U^{m_i}\times\left(\prod_{k=1}^{m_{i}}p(y_{k}|\mathbf{x}_{i},\boldsymbol\theta)\right)\right]p(\boldsymbol\theta)p(\mathbf{w}),
\label{eq:posterior_m}
\end{equation}
where
\begin{equation}
\tilde{Z} = \int \int\prod_{i=1}^{n} \left[\int p(m_{i}|\lambda)p(\lambda|\mathbf{x}_{i},\mathbf{w})d\lambda\times
m_{i}!\times U^{m_i}\times\left(\prod_{k=1}^{m_{i}}p(y_{k}|\mathbf{x}_{i},\boldsymbol\theta)\right)\right]p(\boldsymbol\theta)p(\mathbf{w})\quad d\theta d\mathbf{w}.
\end{equation}
A closed form solution for the integral in Eq.\xspace~\eqref{eq:posterior_m} can be obtained by using conjugate priors:
\begin{eqnarray*}
m & \sim & \mathcal{P}(m;\lambda)\\
\lambda & \sim & \mathcal{G}(\lambda;\alpha(\mathbf{x},\mathbf{w}),\beta(\mathbf{x},\mathbf{w}))\\
&&\alpha(\mathbf{x},\mathbf{w}),\beta(\mathbf{x},\mathbf{w})>0\quad\forall\mathbf{x},\mathbf{w}\\
\boldsymbol\theta & \sim & \mathcal{N}(\boldsymbol\theta;0,\sigma_{1}^{2}\mathbf{I})\\
\mathbf{w} & \sim & \mathcal{N}(\mathbf{w};0,\sigma_{2}^{2}\mathbf{I}),
\end{eqnarray*}
where $\mathcal{P}(\cdot;\lambda)$, $\mathcal{G}(\cdot;\alpha,\beta)$, and $\mathcal{N}(\cdot;0,\sigma^{2}\mathbf{I})$ represent respectively a Poisson distribution with parameter $\lambda$, a Gamma distribution with parameters $(\alpha,\beta)$ and a zero-mean normal distribution with covariance $\sigma^{2}\mathbf{I}$.
We assume that the cardinality follows a Poisson distribution whose mean, $\lambda$, follows a Gamma distribution, with parameters which can be estimated from the input data $\mathbf{x}$.
Note that the cardinality distribution in Eq.\xspace~\eqref{eq:set_density0} can be replaced by any other discrete distribution. For example, it is a valid assumption to model the number of objects in natural images by a Poisson distribution~\cite{chan2009bayesian}. Thus, we could directly predict $\lambda$ to model this distribution by formulating the cardinality as $p(m|\mathbf{x},\mathbf{w}) = \mathcal{P}(m;\lambda(\mathbf{x},\mathbf{w}))$. However, this would limit the model's expressive power, because two visually entirely different images with the same number of objects would be mapped to the same $\lambda$. Instead, to allow for uncertainty of the mean, we model it with another distribution, which we choose to be Gamma for mathematical convenience.
Consequently, the integrals in $p(\boldsymbol\theta,\mathbf{w}|\mathcal{D})$ are simplified
and form a negative binomial distribution,
\begin{equation}
\text{NB}\left(m;a,b\right) = \frac{\Gamma(m+a)}{\Gamma(m+1)\Gamma(a)}\cdot(1-b)^{a}b^{m},
\end{equation}
where $\Gamma$ is the Gamma function. Finally, the full posterior distribution can be written as
\begin{equation}
\begin{aligned}
p(\boldsymbol\theta,\mathbf{w}|\mathcal{D}) =\frac{1}{\tilde{Z}}\prod_{i=1}^{n}\bigg[\text{NB}\left(m_{i};\alpha(\mathbf{x}_{i},\mathbf{w}),\frac{1}{1+\beta(\mathbf{x}_{i},\mathbf{w})}\right)\times m_{i}!\times U^{m_{i}}\times\left(\prod_{k=1}^{m_{i}}p(y_{k}|\mathbf{x}_{i},\boldsymbol\theta)\right)\bigg]p(\boldsymbol\theta)p(\mathbf{w}).
\label{eq:full-posterior}
\end{aligned}
\end{equation}
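For completeness, the marginalisation over $\lambda$ behind this simplification is the standard Poisson--Gamma identity (writing $\alpha,\beta$ for $\alpha(\mathbf{x}_{i},\mathbf{w}),\beta(\mathbf{x}_{i},\mathbf{w})$):
\begin{equation*}
\begin{aligned}
\int_{0}^{\infty} \mathcal{P}(m;\lambda)\,\mathcal{G}(\lambda;\alpha,\beta)\,d\lambda
&= \int_{0}^{\infty} \frac{\lambda^{m}e^{-\lambda}}{m!}\cdot\frac{\beta^{\alpha}\lambda^{\alpha-1}e^{-\beta\lambda}}{\Gamma(\alpha)}\,d\lambda\\
&= \frac{\beta^{\alpha}}{\Gamma(m+1)\,\Gamma(\alpha)}\int_{0}^{\infty}\lambda^{m+\alpha-1}e^{-(1+\beta)\lambda}\,d\lambda\\
&= \frac{\Gamma(m+\alpha)}{\Gamma(m+1)\,\Gamma(\alpha)}\left(\frac{\beta}{1+\beta}\right)^{\alpha}\left(\frac{1}{1+\beta}\right)^{m}
= \text{NB}\left(m;\alpha,\frac{1}{1+\beta}\right).
\end{aligned}
\end{equation*}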
\subsection{Learning}
\label{sec:learning}
For simplicity, we use a point estimate for the posterior $p(\boldsymbol\theta,\mathbf{w}|\mathcal{D})$,
\ie $p(\boldsymbol\theta,\mathbf{w}|\mathcal{D}) = \delta(\boldsymbol\theta=\boldsymbol\theta^{*},\mathbf{w}=\mathbf{w}^{*}|\mathcal{D})$, where $(\boldsymbol\theta^{*},\mathbf{w}^{*})$ are computed using the following MAP estimator:
\begin{equation}
(\boldsymbol\theta^{*},\mathbf{w}^{*}) = \arg\max_{\boldsymbol\theta,\mathbf{w}}\quad \log\left(p\left(\boldsymbol\theta,\mathbf{w}|\mathcal{D}\right)\right).
\label{eq:map}
\end{equation}
Since the solution to the above problem is independent from the normalisation constant $\tilde{Z}$, we have
\begin{equation}
\begin{aligned}
(\boldsymbol\theta^{*},\mathbf{w}^{*})= \arg\max_{\boldsymbol\theta,\mathbf{w}}&\quad\log\left(p(\boldsymbol\theta)\right)+\sum_{i=1}^{n}\left[\log\left(m_{i}!\right)+m_i\log U+\sum_{k=1}^{m_{i}}\log\left(p(y_{k}|\mathbf{x}_{i},\boldsymbol\theta)\right)\right.\\
& \quad+\left.\log\left(\text{NB}\left(m_{i};\alpha(\mathbf{x}_{i},\mathbf{w}),\frac{1}{1+\beta(\mathbf{x}_{i},\mathbf{w})}\right)\right)\right]+\log\left(p(\mathbf{w})\right)\\
= & \arg\max_{\boldsymbol\theta,\mathbf{w}}\quad f_{1}(\boldsymbol\theta)+f_{2}(\mathbf{w}).
\end{aligned}
\label{eq:map_complete}
\end{equation}
Therefore, the optimisation problem in Eq.~\eqref{eq:map_complete} can be decomposed \wrt the parameters
$\boldsymbol\theta$ and $\mathbf{w}$, and we can learn them independently in two separate problems
\begin{equation}
\begin{aligned}
\boldsymbol\theta^{*} = & \arg\max_{\boldsymbol\theta}\quad f_{1}(\boldsymbol\theta)\\
= & \arg\max_{\boldsymbol\theta}\quad-\gamma_{1}\|\boldsymbol\theta\|+\sum_{i=1}^{n}\left[\log\left(m_{i}!\right)+m_i\log U+\sum_{k=1}^{m_{i}}\log\left(p(y_{k}|\mathbf{x}_{i},\boldsymbol\theta)\right)\right]\\
\equiv&\arg\max_{\boldsymbol\theta}\quad-\gamma_{1}\|\boldsymbol\theta\|+\sum_{i=1}^{n}\sum_{k=1}^{m_{i}}\log\left(p(y_{k}|\mathbf{x}_{i},\boldsymbol\theta)\right)
\end{aligned}
\label{eq:CNN_Eq0}
\end{equation}
and
\begin{equation}
\begin{aligned}
\mathbf{w}^{*} = \arg\max_{\mathbf{w}}\quad& f_{2}(\mathbf{w})\\
= \arg\max_{\mathbf{w}}\quad&\sum_{i=1}^{n}\Big[\log\left(\frac{\Gamma(m_{i}+\alpha(\mathbf{x}_{i},\mathbf{w}))}{\Gamma(m_{i}+1)\Gamma(\alpha(\mathbf{x}_{i},\mathbf{w}))}\right)\\
& + \displaystyle{ \log\left(\frac{\beta(\mathbf{x}_{i},\mathbf{w})^{\alpha(\mathbf{x}_{i},\mathbf{w})}}{\left(1+\beta(\mathbf{x}_{i},\mathbf{w})\right)^{\alpha(\mathbf{x}_{i},\mathbf{w})+m_{i}}}\right)}\Big]-\gamma_2\|\mathbf{w}\|,
\label{eq:Cardinal_Eq}
\end{aligned}
\end{equation}
where $\gamma_1$ and $\gamma_2$ are the regularisation parameters, proportional to the predefined covariance parameters $\sigma_1$ and $\sigma_2$. These parameters are also known as weight decay parameters and are commonly used in training neural networks.
The learned parameters $\boldsymbol\theta^{*}$ in Eq.\xspace~\eqref{eq:CNN_Eq0} are used to map an input feature vector $\mathbf{x}$ into an output vector $Y$. For example, in image classification, $\boldsymbol\theta^*$ is used to predict the distribution $Y$ over all categories, given the input image $\mathbf{x}$. Note that $\boldsymbol\theta^*$ can generally be learned using a number of existing machine learning techniques. In this paper we rely on deep CNNs to perform this task.
To learn the highly complex function between the input feature $\mathbf{x}$ and the parameters $(\alpha,\beta)$, which are used for estimating the output cardinality distribution, we train a second deep neural network.
Using neural networks to predict a discrete value may seem counterintuitive, because these methods at their core rely on the backpropagation algorithm, which assumes a differentiable loss. Note that we achieve this by describing the discrete distribution by continuous parameters $\alpha, \beta$ (Negative binomial $\text{NB}(\cdot,\alpha,\frac{1}{1+\beta})$), and can then easily draw discrete samples from that distribution. More formally, to estimate $\mathbf{w}^{*}$, we compute the partial derivatives of the objective function in Eq.\xspace~\eqref{eq:Cardinal_Eq} \wrt $\alpha (\cdot,\cdot)$ and $\beta (\cdot,\cdot)$ and use standard backpropagation to learn the parameters of the deep neural network.
\begin{equation}
\frac{\partial f_{2}(\mathbf{w})}{\partial\mathbf{w}} = \frac{\partial f_{2}(\mathbf{w})}{\partial\alpha(\mathbf{x},\mathbf{w})}.\frac{\partial\alpha(\mathbf{x},\mathbf{w})}{\partial\mathbf{w}}+\frac{\partial f_{2}(\mathbf{w})}{\partial\beta(\mathbf{x},\mathbf{w})}.\frac{\partial\beta(\mathbf{x},\mathbf{w})}{\partial\mathbf{w}}+2\gamma_{2}\mathbf{w},
\end{equation}
where
\begin{equation}
\frac{\partial f_{2}(\mathbf{w})}{\partial\alpha(\mathbf{x},\mathbf{w})} = \sum_{i=1}^{n}\bigg[\Psi\Big(m_{i}+\alpha(\mathbf{x}_{i},\mathbf{w})\Big)-\Psi\Big(\alpha(\mathbf{x}_{i},\mathbf{w})\Big)+\log\Big(\frac{\beta(\mathbf{x}_{i},\mathbf{w})}{1+\beta(\mathbf{x}_{i},\mathbf{w})}\Big)\bigg],
\end{equation}
and
\begin{equation}
\frac{\partial f_{2}(\mathbf{w})}{\partial\beta(\mathbf{x},\mathbf{w})} = \sum_{i=1}^{n}\bigg[\frac{\alpha(\mathbf{x}_{i},\mathbf{w})-m_{i}.\beta(\mathbf{x}_{i},\mathbf{w})}{\beta(\mathbf{x}_{i},\mathbf{w}).\Big(1+\beta(\mathbf{x}_{i},\mathbf{w})\Big)}\bigg],
\end{equation}
where $\Psi(\cdot)$ is the digamma function defined as
\begin{equation}
\Psi(\alpha)=\frac{d}{d\alpha} \log \left(\Gamma(\alpha)\right)=\frac{{\Gamma'(\alpha)}}{\Gamma(\alpha)}.
\end{equation}
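To make these expressions concrete, the following is a minimal NumPy/SciPy sketch (function names are ours) of the negative binomial log-likelihood term of $f_{2}$ and its analytic gradients; checking it against automatic differentiation is a useful sanity test:
\begin{verbatim}
import numpy as np
from scipy.special import gammaln, digamma

def f2_loglik(alpha, beta, m):
    """Sum over samples of log NB(m; alpha, 1/(1+beta)), using
    log NB = lnG(m+a) - lnG(m+1) - lnG(a)
             + a*log(b/(1+b)) - m*log(1+b)."""
    return np.sum(gammaln(m + alpha) - gammaln(m + 1)
                  - gammaln(alpha)
                  + alpha * np.log(beta / (1.0 + beta))
                  - m * np.log(1.0 + beta))

def f2_grads(alpha, beta, m):
    """Per-sample partial derivatives; summing them reproduces
    the digamma and ratio expressions given above."""
    d_alpha = (digamma(m + alpha) - digamma(alpha)
               + np.log(beta / (1.0 + beta)))
    d_beta = (alpha - m * beta) / (beta * (1.0 + beta))
    return d_alpha, d_beta
\end{verbatim}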
\subsection{Inference}
\label{sec:inference}
Given the learned network parameters $(\mathbf{w}^{*},\boldsymbol\theta^{*})$ and a test feature $\mathbf{x}^{+}$, we use a MAP estimate to generate a set output as
\begin{equation}
\mathcal{Y}^{*}
= \arg\max_{\mathcal{Y}} p(\mathcal{Y}|\mathcal{D},\mathbf{x}^{+}),
\end{equation}
where
\begin{eqnarray*}
p(\mathcal{Y}|\mathcal{D},\mathbf{x}^{+}) & = & \int p(\mathcal{Y}|\mathbf{x}^{+},\boldsymbol\theta,\mathbf{w})p(\boldsymbol\theta,\mathbf{w}|\mathcal{D})d\boldsymbol\theta d\mathbf{w}
\end{eqnarray*}
and $p(\boldsymbol\theta,\mathbf{w}|\mathcal{D}) = \delta(\boldsymbol\theta=\boldsymbol\theta^{*},\mathbf{w}=\mathbf{w}^{*}|\mathcal{D})$.
Since the unit of hypervolume $U$ is unknown in most practical applications, to calculate the mode of the set distribution $p(\mathcal{Y}|\mathcal{D},\mathbf{x}^{+})$, we use sequential inference as explained in~\cite{mahler2007statistical}. To this end, we first calculate the mode $m^*$ of the cardinality distribution
\begin{equation}
m^{*}
= \arg\max_{m}\quad p(m|\mathbf{w}^*,\mathbf{x}^{+}),
\end{equation}
where
\begin{equation}
p(m|\mathbf{w}^*,\mathbf{x}^{+})=\text{NB}\left(m;\alpha(\mathbf{w}^*,\mathbf{x}^{+}),\frac{1}{1+\beta(\mathbf{w}^*,\mathbf{x}^{+})}\right).
\end{equation}
Then, we calculate the mode of the joint distribution for the given cardinality $m^{*}$ as
\begin{equation}
\mathcal{Y}^{*}
= \arg\max_{\mathcal{Y}_{m^{*}}}\quad p(\{y_1,\cdots,y_{m^{*}}\}_{||}|\boldsymbol\theta^*,\mathbf{x}^{+}).
\end{equation}
To estimate the most likely set $\mathcal{Y}^{*}$ with cardinality $m^{*}$, we use the first CNN with the parameters $\boldsymbol\theta^*$, which predicts $p(y_1,\cdots,y_{M}|\mathbf{x}^{+},\boldsymbol\theta^*)$, where $M$ is the maximum cardinality of the set, \ie $\{y_1,\cdots,y_{m^{*}}\}\subseteq\{y_1,\cdots,y_{M}\},\quad\forall m^{*}$.
Since the samples are \textit{i.i.d.}\xspace, the joint probability is maximised when the probability of each element in the set is maximised. Therefore, the most likely set $\mathcal{Y}^{*}$ with cardinality $m^{*}$ is obtained by sorting the probabilities of the set elements $y_1,\cdots,y_{M}$ output by the first CNN and choosing the $m^{*}$ elements with the highest probability values.
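This sequential inference step can be sketched in a few lines of Python (names are ours; per-element scores are assumed to come from the first CNN, and the closed-form negative binomial mode is a standard identity):
\begin{verbatim}
import numpy as np

def set_inference(scores, alpha, beta):
    """First compute the cardinality mode m*, then return the
    indices of the m* highest-scoring elements. The mode of
    NB(m; alpha, 1/(1+beta)) is floor((alpha-1)/beta) for
    alpha > 1, and 0 otherwise."""
    m_star = int((alpha - 1.0) // beta) if alpha > 1.0 else 0
    # i.i.d. elements: the most likely set of size m* consists
    # of the m* individually most likely elements.
    return np.argsort(scores)[::-1][:m_star]
\end{verbatim}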
Note that the assumptions listed in Sec.\xspace~\ref{sec:deep-set-net0} are necessary to make both learning and inference computationally tractable and amenable to an elegant mathematical formulation.
A major advantage of this approach is that we can use any state-of-the-art classifier/detector as the first CNN ($\boldsymbol\theta^*$) to further improve its performance.
Modifying any of the assumptions, \eg non-\textit{i.i.d.}\xspace set elements, leads to serious mathematical complexities~\cite{mahler2007statistical}, and is left for future work.
\section{Further Experimental Results}
Here, we provide additional arguments, evaluation plots and qualitative results that could not be included in the main paper.
\subsection{Object counting by regression}
\begin{table}[tb]
\caption{Loss comparison for cardinality estimation.}
\label{tab:new-experiments}
\vspace{-1.5em}
\begin{center}
\begin{tabular}{l|cc|cc}
& \multicolumn{2}{c|}{Mean card. error} & \multicolumn{2}{c}{F1 score}\\
\!\!\!\! Loss & MOT16 & MS COCO & MOT16 & MS COCO \\
\hline
\!\!\!\! Regression &$2.05$& $0.83$ & $60.16$ &$68.4$ \\
\!\!\!\! Negative Binomial & $\textbf{1.94}$ & $\textbf{0.74}$ & $\textbf{61.86}$ & $\textbf{69.4}$ \\
\end{tabular}
\end{center}
\end{table}
Regressing for cardinality may seem an obvious choice, but is not trivial to derive mathematically and cannot be easily justified in our framework because it
\emph{a)} cannot be accommodated in a Bayesian set formulation to model uncertainty and
\emph{b)} does not yield a discrete distribution.
Nonetheless, we have conducted this experiment by replacing our loss with the regression loss, using the exact same architecture and setup as in Sec.\xspace 5.2 of the main text. Tab.\xspace~\ref{tab:new-experiments} compares our cardinality loss with the regression loss on two datasets from the reported tasks of multi-label image classification (MS COCO) and pedestrian detection (MOT16). As expected, directly regressing for cardinality yields slightly worse results, both in terms of the cardinality prediction and \wrt the F1 score.
For completeness, Tab.\xspace~\ref{table:Card_err} also reports the mean absolute error and standard deviation for cardinality estimation using our loss on four datasets.
\input{Card_err}
\subsection{Pedestrian detection}
Here, we first discuss the challenges we confronted in applying our set formulation to this task. We then provide details about the datasets and the splits used for this experiment. Finally, we show additional quantitative and qualitative results.
\myparagraph{Non-maximum suppression.}
In the main text, we argued that non-maximum suppression (NMS), as a heuristic step, makes the detection problem less straightforward than what is expressed by our set formulation
in Eq.\xspace~\eqref{eq:set_density0}.
In fact, a major nuisance in detection is not the NMS algorithm itself as a greedy solver, but rather its hand-crafted objective.
This process is traditionally formulated as maximising the joint distribution over pairs of samples, or equivalently as a quadratic binary program (QBP)
\begin{equation}Y^{*}
= \arg\max_{Y} Y^TQY, \end{equation}
where
$Y\in\mathbb{B}^M$
is a binary vector, indicating which of the $M$ boxes to keep or to suppress.
The diagonal values of $Q$ are proportional to the detection scores while the pairwise (exclusion) terms in $Q$ are manually designed, \eg to correspond to the overlap ratios.
The aforementioned QBP is NP-hard and cannot be solved globally in general. NMS is one greedy and efficient approach to solve the problem locally.
To enforce $m^*$, one could include a constraint into the QBP like
\begin{equation}
\begin{aligned}
Y^{*} = &\arg\max_{Y}Y^TQY,\\ & \text{s.t.}\quad \mathbf{1}^TY = m^*.
\end{aligned}
\end{equation}
However, this may lead to an infeasible problem for a fixed $Q$ with a predefined overlap-ratio threshold $T_O$. Hence, $Q$ should be designed such that the above problem has a feasible solution. Learning $Q$ would perhaps be a more elegant approach, but is not part of this paper. For the current setup, one solution is instead to find a threshold that makes the above problem feasible. Therefore, we start from the default value of $T_O$ and adjust it step-wise until the number of boxes reaches $m^*$. If the number of final boxes is larger than $m^*$, we pick the $m^*$ boxes with the highest scores. We also experimented with solving the global QBP formulation with Gurobi for a small problem, but found NMS with an adaptive threshold to be the most efficient and effective approach.
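Concretely, the adaptive-threshold procedure described above can be sketched as follows (a minimal NumPy sketch; the box format $(x_1,y_1,x_2,y_2)$ and default values are illustrative):
\begin{verbatim}
import numpy as np

def nms(boxes, scores, thresh):
    """Greedy NMS: keep the best remaining box, drop boxes whose
    IoU with it exceeds thresh. Returns indices of kept boxes."""
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = (np.clip(xx2 - xx1, 0, None)
                 * np.clip(yy2 - yy1, 0, None))
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= thresh]
    return keep

def cardinality_aware_nms(boxes, scores, m_star,
                          t_default=0.5, step=0.05):
    """Raise T_O step-wise (suppressing less) until at least m*
    boxes survive; if more survive, keep the m* best."""
    t, keep = t_default, nms(boxes, scores, t_default)
    while len(keep) < m_star and t < 1.0:
        t += step
        keep = nms(boxes, scores, t)
    return sorted(keep, key=lambda i: -scores[i])[:m_star]
\end{verbatim}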
\myparagraph{Caltech Pedestrians~\cite{Dollar:2012:PAMI}} is a de-facto standard benchmark for pedestrian detection. The dataset contains sequences captured from a vehicle driving through regular traffic in an urban environment and provides bounding box annotations of nearly $350,000$ pedestrians. The annotations also include detailed occlusion labels. The number of pedestrians per image varies between $0$ and $14$. However, more than $55\%$ of the images contain no people at all and around $30\%$ of the data includes one or two persons.
We use the MS-CNN~\cite{Cai:2016:ECCV} network model and its parameters learned on the Caltech training set as $\boldsymbol\theta^*$ in Eq.\xspace~\eqref{eq:CNN_Eq0}. To learn the cardinality, we use $4250$ images provided as a training set, splitting it into training and validation ($80\%-20\%$), reaching a mean absolute error of $0.54$ (\cf~Tab.\xspace~\ref{table:Card_err}).
\myparagraph{MOTChallenge 2016.}
This benchmark is primarily targeted at multi-object tracking and is not yet commonly used for evaluating pedestrian detection. However, the variation in the number of pedestrians across the frames is relatively large (between $0$ and $32$) and is also distributed more uniformly, which makes correct cardinality estimation more important.
Since the labels for the test set are not available, we use the provided training set of this benchmark consisting of $5316$ images from $7$ different sequences, and divide it into training, validation and test set with split ratios $60\%$, $15\%$ and $25\%$, respectively.
We only learn the cardinality network $\mathbf{w}^*$ on the training set, and we use the MS-CNN network model and its parameters learned on the KITTI dataset~\cite{Geiger:2012:CVPR} as $\boldsymbol\theta^*$ in Eq.\xspace~\eqref{eq:CNN_Eq0}.
\myparagraph{Additional Results.} ROC curves on two detection datasets are shown in Fig.~\ref{fig:detection-plots}.
Qualitative results of pedestrian detection are shown in Figure~\ref{fig:Results3}.
\begin{figure}[ht]
\centering
\includegraphics[width=.395\linewidth]{figs_appendix/MOT16.pdf}
\includegraphics[width=.40\linewidth]{figs_appendix/UsaTestRocAll.pdf}
\caption{ROC curves on MOT16 and Caltech Pedestrians (experiment ``all''). The overall performance of a detector is measured by the log-average miss rate as proposed by Doll\'ar~\etal~\cite{Dollar:2012:PAMI}.}
\label{fig:detection-plots}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=1\linewidth]{figs_appendix/dets.pdf}
\caption{More examples illustrating results of pedestrian detection generated using the state-of-the-art detector MS-CNN~\cite{Cai:2016:ECCV} (in blue, left) and our MS-CNN + DeepSetNet (right).
To generate the MS-CNN results, we display the top $m^*$ boxes after applying non-maximum suppression. Arrows indicate typical failures introduced by a suboptimal NMS threshold, which are corrected when considering the predicted number of persons for each image.
}
\label{fig:Results3}
\end{figure*}
\subsection{Multi-label image classification}
Figure~\ref{fig:Results10} shows more results for successful image tagging. Figure~\ref{fig:Results20} points to some interesting failures and erroneous predictions.
\begin{figure*}[t]
\centering
\includegraphics[width=.95\linewidth]{figs_appendix/success_cases.pdf}
\caption{Further examples showing a perfect prediction \wrt both the number of tags and the labels themselves using our Deep Set Network.}
\label{fig:Results10}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=.95\linewidth]{figs_appendix/failure_cases.pdf}
\caption{Additional examples illustrating interesting failure cases. {\textcolor{blue}{False negatives}} and {\textcolor{red}{false positives}} are highlighted in blue and red, respectively. Note that in most examples, the mismatch between the ground truth and our prediction is due to the ambiguity or artifacts in the annotations. Two such examples are shown in the top row, where a train (window) and the surfboard are not annotated, probably because these are hardly visible in the image. Nevertheless, our network can predict the objects.
The two bottom rows show real failure cases of our method. Note, however, that these include extremely challenging examples, where even for a human, it is fairly difficult to spot a traffic light in the aerial image or the person and the chair in the image on the bottom right.
}
\label{fig:Results20}
\end{figure*}
\section{Random Vectors vs. Random Finite Sets}
\label{sec:background}
In statistics, a continuous random variable $y$ is a variable that can take an infinite number of possible values. A continuous random vector can be defined by stacking several continuous random variables into a fixed length vector, $Y=\left(y_1,\cdots,y_m\right)$. The mathematical function describing the possible values of a continuous random vector and their associated joint probabilities is known as a probability density function (PDF) $p(Y)$ such that
$\int p(Y)dY = 1.$
In contrast, a random finite set (RFS) $\mathcal{Y}$ is a finite-set valued random variable $\mathcal{Y}=\left\{y_1,\cdots,y_m\right\}$. The main difference between an RFS and a random vector is that for the former, the number of constituent variables, $m$, is random and the variables themselves are random and unordered. Throughout the paper, we use $\mathcal{Y}=\left\{y_1,\cdots,y_m\right\}$ for a set with unknown cardinality, $\mathcal{Y}_m=\left\{y_1,\cdots,y_m\right\}_{||}$ for a set with known cardinality and $Y=\left(y_1,\cdots,y_m\right)$ for a vector with known dimension.
A statistical function describing a finite-set variable $\mathcal{Y}$ is a
combinatorial probability density function $p(\mathcal{Y})$ which consists of a discrete probability distribution, the so-called cardinality distribution, and a family of joint probability densities on both the number and the values of the constituent variables.
Similar to the definition of a PDF for a random variable, the PDF of an RFS must sum to unity over all possible cardinality values and all possible element values and their permutations~\cite{mahler2007statistical}.
The PDF of an $m$-dimensional random vector can be defined in terms of an RFS as
\begin{equation}\label{eq: pdf rfs vs vect}
\begin{aligned}
p(y_1,\cdots,y_m) \triangleq \frac{1}{m!} p(\{y_1,\cdots,y_m\}_{||}).
\end{aligned}
\end{equation}
The normalisation factor $m!=\prod_{k=1}^m k$ appears because the probability density for a set $\{y_1,\cdots,y_m\}_{||}$ must be equally distributed among all the $m!$ possible permutations of the vector~\cite{mahler2007statistical}.
Conventional machine learning approaches, such as Bayesian learning and convolutional neural networks, have been proposed to learn the optimal parameters $\boldsymbol\theta^*$ of the distribution $p(Y|\mathbf{x},\boldsymbol\theta^*)$ which maps the input vector $\mathbf{x}$ to the \emph{output vector} $Y$.
In this paper, we instead propose an approach that learns a pair $(\boldsymbol\theta^*,\mathbf{w}^*)$ of parameter vectors of a set distribution $p(\mathcal{Y}|\mathbf{x}, \boldsymbol\theta^*,\mathbf{w}^*)$, which allows one to map the input vector $\mathbf{x}$ to the \emph{output set} $\mathcal{Y}$. The additional parameters $\mathbf{w}^*$ define a PDF over the set cardinality, as we explain in the next section.
\section{Conclusion}
\label{sec:conclusion}
We proposed a deep learning approach for predicting sets. To achieve this goal, we derived a loss for learning a discrete distribution over the set cardinality.
This allowed us to use standard backpropagation for training a deep network for set prediction.
We have demonstrated the effectiveness of this approach on crowd counting, pedestrian detection and multi-class image classification, achieving competitive or superior results in all three applications. As our network is trained independently, it can be trivially applied to any existing classifier or detector, to further improve performance.
Note that this decoupling is a direct consequence of our underlying mathematical derivation due to the \textit{i.i.d.}\xspace~assumptions, which renders our approach very general and applicable to a wide range of models.
In future work, we plan to extend our method to multi-class cardinality estimation and to investigate models that do not make \textit{i.i.d.}\xspace assumptions. Another potential avenue is to exploit the Bayesian nature of the model to include uncertainty, as opposed to relying on the MAP estimate alone.
\section{Introduction}
\label{sec:introduction}
Deep neural networks have shown state-of-the-art performance on many computer vision problems, including semantic segmentation~\cite{Papandreou:2015:ICCV}, visual tracking~\cite{Nam:2016:CVPR}, image captioning~\cite{Johnson:2016:CVPR}, scene classification~\cite{Krizhevsky:2012:NIPS}, and object detection~\cite{Liu:2016:ECCV}.
However, traditional convolutional architectures require a problem to be formulated in a certain way: in particular, they are designed to predict a \emph{vector} (or a matrix, or a tensor in a more general sense) that is either of a fixed length or whose size depends on the input (\cf fully convolutional architectures).
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{figs/method}
\caption{{\bf Left:} Traditional CNNs learn parameters $\boldsymbol\theta^*$ to predict a fixed \emph{vector} $Y$. {\bf Right:} In contrast, we propose to train a separate CNN to learn a parameter vector $\mathbf{w}^*$, which is used to predict the set cardinality of a particular output.}
\vspace{-1em}
\label{fig:method}
\end{figure}
For example, consider the task of scene classification where the goal is to predict the label (or category) of a given image.
Modern approaches typically address this by a series of convolutional layers, followed by a number of fully connected layers, which are finally mapped to predict a fixed-sized vector~\cite{Krizhevsky:2012:NIPS, Simonyan:2014:VGG, Szegedy:2014:Inception}.
The length of the predicted vector corresponds to the number of candidate categories, \eg 1,000 for the ImageNet challenge~\cite{Russakovsky:2015:ILSVRC}. Each element is a score or probability of a particular category, and the final prediction is a probability distribution over all categories.
This strategy is perfectly admissible if one expects to find exactly one or at least the same number of categories across all images. However, natural images typically show multiple entities (\eg table, pizza, person, \etc), and what is perhaps more important, this number differs from image to image.
During evaluation, this property is not taken into account.
The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) only counts an error if the ``true'' label is not among the top-5 candidates.
Another strategy to account for multiple classes is to fix the number to a certain value for all test instances, and report precision and recall by counting false positive and false negative predictions, as was done in~\cite{gong2013deep,Wang_2016_CVPR}.
Arguably, both methods are suboptimal because in a real-world scenario, where the correct labelling is unknown, the prediction should in fact not only rank all labels according to their likelihood of being present, but also report \emph{how many} objects (or labels) are actually present in one particular image.
Deciding how many objects are present in an image is a crucial part of human perception and scene understanding but is missing from our current evaluation of automated image understanding methods.
As a second example, let us consider pedestrian detection.
The parallel to scene classification that we motivated above is that, once again, in real scenarios, the number of people in a particular scene is not known beforehand.
The most common approach is to assign a confidence score to a number of region candidates~\cite{Dalal:2005:CVPR, Felzenszwalb:2010:PAMI, Girshick:2015:ICCV, Ren:2015:NIPS}, which are typically selected heuristically by thresholding and non-maximum suppression.
We argue that it is important not to simply discard the information about the actual number of objects at test time, but to exploit it while selecting the subset of region proposals.
The examples above motivate and underline the importance of \emph{set prediction} in certain applications.
It is important to note that, in contrast to vectors, a set is a collection of elements which is \emph{invariant under permutation} and the size of a set is \emph{not fixed} in advance.
To this end, we use a principled definition of a set as the combination of a cardinality distribution and a family of joint distributions, one for each cardinality value.
In summary, our main contributions are as follows:
\emph{(i)}
Starting from the mathematical definition of a set distribution, we derive a loss that enables us to employ existing machine learning methodology to learn this distribution from data.
\emph{(ii)}
We integrate our loss into a deep learning framework to exploit the power of a multi-layer architecture.
\emph{(iii)}
We present state-of-the-art results for multi-label image classification and pedestrian detection on standard datasets and competitive results on the task of object counting.
\section{Deep Set Network}
\label{sec:deep-set-net}
Let us begin by defining a training set $\mathcal{D} = \{\mathcal{Y}_{i},\mathbf{x}_{i}\}$,
where each training sample $i=1,\ldots,n$ is a pair consisting of an input feature $\mathbf{x}_{i}\in\mathbb{R}^{l}$ and an output (or label) set
$\mathcal{Y}_{i} = \{y_{1},y_{2},\ldots,y_{m_i}\}, y_{k}\in\mathbb{R}^{d} $, $m_i\in\mathbb{N}^0$. In the following we will drop the instance index $i$ for better readability. Note that $m:=|\mathcal{Y}|$ denotes the cardinality of set $\mathcal{Y}$.
The probability of a set $\mathcal{Y}$ is defined as
\begin{equation}
\begin{aligned}
p(\mathcal{Y}|\mathbf{x},\boldsymbol\theta,\mathbf{w}) = & p(m|\mathbf{x},\mathbf{w})\times\\
& m!\times U^m\times p(y_{1},y_{2},\cdots,y_{m}|\mathbf{x},\boldsymbol\theta),
\label{eq:set_density}
\end{aligned}
\end{equation}
where $p(m|\cdot,\cdot)$ and $ p(y_{1},y_{2},\cdots,y_{m}|\cdot,\cdot)$ are respectively a cardinality distribution and a
symmetric joint probability density distribution of the elements. $U$ is the unit of hypervolume
in $\mathbb{R}^{d}$, which makes the joint distribution unitless~\cite{mahler2007statistical,vo2016model}. $\boldsymbol\theta$ denotes the parameters that estimate the joint distribution of set element values for a fixed cardinality,\footnote{This is also known as \emph{spatial distribution of points} in point process statistics.}
while $\mathbf{w}$ represents the collection of parameters which estimate the cardinality distribution of the set elements.
The above formulation represents the probability density of a set; it is very general and completely independent of the choice of both the cardinality and the spatial distributions. It is thus straightforward to transfer it to many applications that require the output to be a set. However, to make the problem amenable to mathematical derivation and implementation, we adopt two assumptions: \emph{i)} the outputs (or labels) in the set are independent
and identically distributed (\textit{i.i.d.}\xspace) and \emph{ii)} their cardinality follows
a Poisson distribution with parameter $\lambda$. Thus, we can write the distribution as
\begin{equation}
\begin{aligned}
p(\mathcal{Y}|\mathbf{x},\boldsymbol\theta,\mathbf{w}) = \int p(m|\lambda)&p(\lambda|\mathbf{x},\mathbf{w}) d\lambda\times\\
m!\times U^m\times&\left(\prod_{k=1}^{m}p(y_{k}|\mathbf{x},\boldsymbol\theta)\right).
\end{aligned}
\end{equation}
\subsection{Posterior distribution}
\label{sec:posterior}
To learn the parameters $\boldsymbol\theta$ and $\mathbf{w}$, we first define the posterior distribution over them as
\begin{equation}
\begin{aligned}
p(\boldsymbol\theta,\mathbf{w}|\mathcal{D}) &\propto p(\mathcal{D}|\boldsymbol\theta,\mathbf{w})p(\boldsymbol\theta)p(\mathbf{w})\\
&\propto\prod_{i=1}^{n}\left[\int p(m_{i}|\lambda)p(\lambda|\mathbf{x}_{i},\mathbf{w})d\lambda\times m_{i}! \right.\\ &\left.U^{m_{i}}\left(\prod_{k=1}^{m_{i}}p(y_{k}|\mathbf{x}_{i},\boldsymbol\theta)\right)p(\mathbf{x}_{i})\right]p(\boldsymbol\theta)p(\mathbf{w}).
\end{aligned}
\label{eq:posterior0}
\end{equation}
A closed form solution for the integral in Eq.\xspace~\eqref{eq:posterior0} can be obtained by using conjugate priors:
\begin{eqnarray*}
m & \sim & \mathcal{P}(m;\lambda)\\
\lambda & \sim & \mathcal{G}(\lambda;\alpha(\mathbf{x},\mathbf{w}),\beta(\mathbf{x},\mathbf{w}))\\
&&\alpha(\mathbf{x},\mathbf{w}),\beta(\mathbf{x},\mathbf{w})>0\quad\forall\mathbf{x},\mathbf{w}\\
\boldsymbol\theta & \sim & \mathcal{N}(\boldsymbol\theta;0,\sigma_{1}^{2}\mathbf{I})\\
\mathbf{w} & \sim & \mathcal{N}(\mathbf{w};0,\sigma_{2}^{2}\mathbf{I}),
\end{eqnarray*}
where $\mathcal{P}(\cdot;\lambda)$, $\mathcal{G}(\cdot;\alpha,\beta)$, and $\mathcal{N}(\cdot;0,\sigma^{2}\mathbf{I})$ represent respectively a Poisson distribution with parameter $\lambda$, a Gamma distribution with parameters $(\alpha,\beta)$ and a zero-mean normal distribution with covariance $\sigma^{2}\mathbf{I}$.
We assume that the cardinality follows a Poisson distribution whose mean, $\lambda$, follows a Gamma distribution, with parameters which can be estimated from the input data $\mathbf{x}$.
Note that the cardinality distribution in Eq.\xspace~\eqref{eq:set_density} can be replaced by any other discrete distribution. For example, it is a valid assumption to model the number of objects in natural images by a Poisson distribution~\cite{chan2009bayesian}. Thus, we could directly predict $\lambda$ to model this distribution by formulating the cardinality as $p(m|\mathbf{x},\mathbf{w}) = \mathcal{P}(m;\lambda(\mathbf{x},\mathbf{w}))$. However, this would limit the model's expressive power because two visually entirely different images with the same number of objects would be mapped to the same $\lambda$. Instead, to allow for uncertainty of the mean, we model it with another distribution, which we choose to be Gamma for mathematical convenience.
Consequently, the integrals in $p(\boldsymbol\theta,\mathbf{w}|\mathcal{D})$ are simplified
and form a negative binomial distribution,
\begin{equation}
\text{NB}\left(m;a,b\right) = \frac{\Gamma(m+a)}{\Gamma(m+1)\Gamma(a)}\cdot(1-b)^{a}b^{m},
\end{equation}
where $\Gamma$ is the Gamma function. Finally, the full posterior distribution can be written as
\begin{equation}
\begin{aligned}
p(\boldsymbol\theta,\mathbf{w}|\mathcal{D}) & \propto\prod_{i=1}^{n}\bigg[\text{NB}\left(m_{i};\alpha(\mathbf{x}_{i},\mathbf{w}),\frac{1}{1+\beta(\mathbf{x}_{i},\mathbf{w})}\right)\\
&\!\!\!\!\!\!\!\!\times m_{i}!\times U^{m_{i}}\times\left(\prod_{k=1}^{m_{i}}p(y_{k}|\mathbf{x}_{i},\boldsymbol\theta)\right)\bigg]p(\boldsymbol\theta)p(\mathbf{w}).
\label{eq:full-posterior0}
\end{aligned}
\end{equation}
\subsection{Learning}
\label{sec:learning}
For simplicity, we use a point estimate for the posterior $p(\boldsymbol\theta,\mathbf{w}|\mathcal{D})$,
\ie $p(\boldsymbol\theta,\mathbf{w}|\mathcal{D}) = \delta(\boldsymbol\theta=\boldsymbol\theta^{*},\mathbf{w}=\mathbf{w}^{*}|\mathcal{D})$, where $(\boldsymbol\theta^{*},\mathbf{w}^{*})$ are computed using the following MAP estimator:
\begin{equation}
(\boldsymbol\theta^{*},\mathbf{w}^{*}) = \arg\max_{\boldsymbol\theta,\mathbf{w}}\quad \log\left(p\left(\boldsymbol\theta,\mathbf{w}|\mathcal{D}\right)\right).
\label{eq:map0}
\end{equation}
The optimisation problem in Eq.\xspace~\eqref{eq:map0} can be decomposed \wrt the parameters
$\boldsymbol\theta$ and $\mathbf{w}$. Therefore, we can learn them independently as
\vspace{-.5em}
\begin{equation}
\boldsymbol\theta^{*} = \arg\max_{\boldsymbol\theta}\quad-\gamma_{1}\|\boldsymbol\theta\|+\sum_{i=1}^{n}\sum_{k=1}^{m_{i}}\log\left(p(y_{k}|\mathbf{x}_{i},\boldsymbol\theta)\right)
\label{eq:CNN_Eq}
\end{equation}
and
\begin{equation}
\begin{aligned}
\mathbf{w}^{*}
= & \arg\max_{\mathbf{w}}\quad\sum_{i=1}^{n}\Big[\log\left(\frac{\Gamma(m_{i}+\alpha(\mathbf{x}_{i},\mathbf{w}))}{\Gamma(m_{i}+1)\Gamma(\alpha(\mathbf{x}_{i},\mathbf{w}))}\right)\\
+ & \displaystyle{ \log\left(\frac{\beta(\mathbf{x}_{i},\mathbf{w})^{\alpha(\mathbf{x}_{i},\mathbf{w})}}{\left(1+\beta(\mathbf{x}_{i},\mathbf{w})\right)^{\alpha(\mathbf{x}_{i},\mathbf{w})+m_{i}}}\right)}\Big]-\gamma_2\|\mathbf{w}\|,
\label{eq:Cardinal_Eq0}
\end{aligned}
\end{equation}
where $\gamma_1$ and $\gamma_2$ are the regularisation parameters, proportional to the predefined covariance parameters $\sigma_1$ and $\sigma_2$. These parameters are also known as weight decay parameters and are commonly used in training neural networks.
The learned parameters $\boldsymbol\theta^{*}$ in Eq.\xspace~\eqref{eq:CNN_Eq} are used to map an input feature vector $\mathbf{x}$ into an output vector $Y$. For example, in image classification, $\boldsymbol\theta^*$ is used to predict the distribution $Y$ over all categories, given the input image $\mathbf{x}$. Note that $\boldsymbol\theta^*$ can generally be learned using a number of existing machine learning techniques. In this paper we rely on deep CNNs to perform this task.
To learn the highly complex function between the input feature $\mathbf{x}$ and the parameters $(\alpha,\beta)$, which are used for estimating the output cardinality distribution, we train a second deep neural network.
Using neural networks to predict a discrete value may seem counterintuitive, because these methods at their core rely on the backpropagation algorithm, which assumes a differentiable loss. Note that we achieve this by describing the discrete distribution by continuous parameters $\alpha, \beta$ (Negative binomial $\text{NB}(\cdot,\alpha,\frac{1}{1+\beta})$), and can then easily draw discrete samples from that distribution. More formally, to estimate $\mathbf{w}^{*}$, we compute the partial derivatives of the objective function in Eq.\xspace~\eqref{eq:Cardinal_Eq0} \wrt $\alpha (\cdot,\cdot)$ and $\beta (\cdot,\cdot)$ and use standard backpropagation to learn the parameters of the deep neural network.
We refer the reader to the supplementary material for the complete derivation of the partial derivatives, a more detailed derivation of the posterior in Eqs.\xspace~\eqref{eq:posterior0}-\eqref{eq:full-posterior0} and the proof for decomposition of the MAP estimation in Eq.\xspace~\eqref{eq:map0}.
\subsection{Inference}
\label{sec:inference}
Given the learned network parameters $(\mathbf{w}^{*},\boldsymbol\theta^{*})$ and a test feature $\mathbf{x}^{+}$, we use a MAP estimate to generate a set output as
$
\mathcal{Y}^{*}
= \arg\max_{\mathcal{Y}} p(\mathcal{Y}|\mathcal{D},\mathbf{x}^{+}),
$
where
\begin{eqnarray*}
p(\mathcal{Y}|\mathcal{D},\mathbf{x}^{+}) & = & \int p(\mathcal{Y}|\mathbf{x}^{+},\boldsymbol\theta,\mathbf{w})p(\boldsymbol\theta,\mathbf{w}|\mathcal{D})d\boldsymbol\theta d\mathbf{w}
\end{eqnarray*}
and $p(\boldsymbol\theta,\mathbf{w}|\mathcal{D}) = \delta(\boldsymbol\theta=\boldsymbol\theta^{*},\mathbf{w}=\mathbf{w}^{*}|\mathcal{D})$.
Since the unit of hypervolume $U$ is unknown in most practical applications, to calculate the mode of the set distribution $p(\mathcal{Y}|\mathcal{D},\mathbf{x}^{+})$, we use sequential inference as explained in~\cite{mahler2007statistical}. To this end, we first calculate the mode $m^*$ of the cardinality distribution
$
m^{*}
= \arg\max_{m} p(m|\mathbf{x}^{+},\mathbf{w}^*),
$
where $p(m|\mathbf{x}^{+},\mathbf{w}^*)=\text{NB}\left(m;\alpha(\mathbf{x}^{+},\mathbf{w}^*),\frac{1}{1+\beta(\mathbf{x}^{+},\mathbf{w}^*)}\right)$.
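For the negative binomial, this mode is available in closed form via a standard identity: $m^{*}=\left\lfloor\frac{\alpha(\mathbf{x}^{+},\mathbf{w}^*)-1}{\beta(\mathbf{x}^{+},\mathbf{w}^*)}\right\rfloor$ whenever $\alpha(\mathbf{x}^{+},\mathbf{w}^*)>1$, and $m^{*}=0$ otherwise.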
Then, we calculate the mode of the joint distribution for the given cardinality $m^{*}$ as
\begin{equation}
\mathcal{Y}^{*}
= \arg\max_{\mathcal{Y}_{m^{*}}}\quad p(\{y_1,\cdots,y_{m^{*}}\}_{||}|\mathbf{x}^{+},\boldsymbol\theta^*).
\end{equation}
To estimate the most likely set $\mathcal{Y}^{*}$ with cardinality $m^{*}$, we use the first CNN with the parameters $\boldsymbol\theta^*$ which predicts $p(y_1,\cdots,y_{M}|\mathbf{x}^{+},\boldsymbol\theta^*)$, where $M$ is the maximum cardinality of the set, \ie $\{y_1,\cdots,y_{m^{*}}\}\subseteq\{y_1,\cdots,y_{M}\},\quad\forall m^{*}$.
Since the samples are \textit{i.i.d.}\xspace, the joint probability is maximised when the probability of each element in the set is maximised. Therefore, the most likely set $\mathcal{Y}^{*}$ with cardinality $m^{*}$ is obtained by sorting the probabilities of the set elements $y_1,\cdots,y_{M}$ output by the first CNN and choosing the $m^{*}$ elements with the highest probability values.
Note that the assumptions in Sec.\xspace~\ref{sec:deep-set-net} are necessary to make both learning and inference computationally tractable and amenable to an elegant mathematical formulation.
A major advantage of this approach is that we can use any state-of-the-art classifier/detector as the first CNN ($\boldsymbol\theta^*$) to further improve its performance.
Modifying any of the assumptions, \eg non-\textit{i.i.d.}\xspace set elements, leads to serious mathematical complexities~\cite{mahler2007statistical}, and is left for future work.
\section{Related Work}
\label{sec:relwork}
The sudden success of deep learning in multiple applications, including voice recognition~\cite{Graves:2013:ICASSP}, machine translation~\cite{Sutskever:2014:NIPS} and image classification~\cite{Krizhevsky:2012:NIPS}, has sparked the deployment of such methods throughout numerous research areas.
Deep convolutional (CNN) and recurrent (RNN) neural networks now outperform traditional approaches in tasks like semantic segmentation~\cite{Chen:2014:ICLR, Lin:2017:CVPR}, image captioning~\cite{Johnson:2016:CVPR} or object detection~\cite{Liu:2016:ECCV}.
Here, we briefly review some of the recent approaches to image classification and object detection and point out their limitations.
Image or scene classification is a fundamental task of understanding photographs.
The goal here is to predict a scene label for a given image.
Early datasets, such as \mbox{Caltech-101}~\cite{FeiFei:2006:PAMI}, mostly contained one single object and could easily be described by one category. Consequently, a large body of literature focused on single-class prediction~\cite{Krizhevsky:2012:NIPS, Sermanet:2013:OverFeat, Zeiler:2014:ECCV, Murthy:2016:CVPR}.
However, real-world photographs typically contain a collection of multiple objects and should therefore be captioned with multiple tags.
Surprisingly, there exists rather little work on multi-class image classification that makes use of deep architectures. Gong~\etal~\cite{Gong:2013:arxiv} combine deep CNNs with a top-$k$ approximate ranking loss to predict multiple labels. Wei~\etal~\cite{Wei:2014:arxiv} propose a Hypotheses-Pooling architecture that is specifically designed to handle multi-label output. While both methods open a promising direction, their underlying architectures largely ignore the correlation between multiple labels.
To address this limitation, recently, Wang~\etal~\cite{Wang_2016_CVPR} combined CNNs and RNNs to predict a number of classes in a sequential manner.
RNNs, however, are not suitable for set prediction mainly for two reasons.
First, the output represents a sequence rather than a set and is thus highly dependent on the prediction order, as was shown recently by Vinyals~\etal~\cite{Vinyals:2015:arxiv}.
Second, the final prediction may not result in a feasible solution (\eg it may contain the same element multiple times), such that post-processing or heuristics such as beam search must be employed~\cite{Vinyals:2015:NIPS, Wang_2016_CVPR}.
Here we show that our approach not only guarantees to always predict a valid set, but also outperforms previous methods.
Pedestrian detection can also be viewed as a classification problem. Traditional approaches follow the sliding-window paradigm~\cite{Viola:2004:IJCV, Dalal:2005:CVPR, Walk:2010:CVPR, Felzenszwalb:2010:PAMI, Benenson:2012:CVPR}, where each possible (or rather plausible) image region is scored independently to contain a person or not.
More recent methods like Fast R-CNN~\cite{Girshick:2015:ICCV} or the single-shot multi-box detector (SSD)~\cite{Liu:2016:ECCV} learn the relevant image features rather than manually engineering them, but retain the sliding window approach.
All the above approaches require some form of post-processing to suppress spurious detection responses that originate from the same person. This is typically addressed by non-maximum suppression (NMS), a greedy optimisation strategy with a fixed overlap threshold.
Recently, several alternatives have been proposed to replace the greedy NMS procedure, including sequential head detection with LSTMs~\cite{Hochreiter:1997:LSTM}, a global optimisation approach to NMS~\cite{Pham:2016:CVPR,Lee:2016:ECCV}, or even learning NMS end-to-end using CNNs~\cite{Hosang:2017:CVPR}. None of the above methods, however, explicitly consider the number of objects while selecting the final set of boxes. Contrary to existing pedestrian detection approaches, we incorporate the cardinality into the NMS algorithm itself and show an improvement over the state of the art on two benchmarks. Note that the idea of considering the number of pedestrians can be applied in combination with any of the aforementioned detection techniques to further improve their performances.
It is important to bear in mind that, unlike many existing approaches that learn to count~\cite{arteta2014interactive,chan2009bayesian,fiaschi2012learning,idrees2013multi,lempitsky2010learning, pham2015count, zhang2015cross,zhang2016single}, our main goal is not object counting. Rather, we derive a formulation for set prediction using deep learning. Estimating the cardinality of objects, and thereby their count, is a byproduct of our approach. To demonstrate the effectiveness of our formulation, we also conduct experiments on the task of object counting, outperforming many recent methods on the widely used UCSD dataset.
\section{Experimental Results}
\label{sec:results}
Our proposed method is best suited for applications that expect the solution to be in the form of a set, \ie permutation invariant and of unknown cardinality. To that end, we first perform an
experiment on multi-label image classification in Sec.\xspace~\ref{sec:multi-label}. In addition, we explore
our cardinality estimation loss on the object counting problem in Sec.\xspace~\ref{sec:crowd-counting}, and then
show in Sec.\xspace~\ref{sec:detection} how incorporating cardinality into a state-of-the-art pedestrian detector and formulating detection as a set problem can boost its performance.
\subsection{Multi-Label Image Classification}
\label{sec:multi-label}
\begin{figure*}[t]
\centering
\includegraphics[width=1\linewidth]{figs/success_cases}
\caption{Qualitative results of our multi-class image labelling approach. For each image, the ground truth tags and our predictions are denoted below. Note that we show the exact output of our network, without any heuristics or post-processing.}
\label{fig:Results1}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=1\linewidth]{figs/failure_cases}
\caption{Interesting failure cases of our method. The ``spurious'' TV class predicted on the left is an artifact in annotation because in many examples, computer monitors are actually labelled as TV. In other cases, our network can correctly reason about the number of objects or concepts in the scene, but is constrained by a fixed list of categories defined in the dataset.}
\label{fig:Results2}
\end{figure*}
As opposed to the more common and more studied problem of (single-label) image classification, the task here is rather to label a photograph with an arbitrary, a-priori unknown number of tags. We perform experiments on two standard benchmarks, the PASCAL VOC 2007 dataset~\cite{Everingham:2007:PASCAL-VOC} and the Microsoft Common Objects in Context (MS COCO) dataset~\cite{Lin:2014:COCO}.
\myparagraph{Implementation details.}
In this experiment, similar to~\cite{Wang_2016_CVPR}, we build on the $16$-layer VGG network~\cite{Simonyan:2014:VGG}, pretrained on the 2012 ImageNet dataset. We adapt VGG for our purpose by modifying the last fully connected prediction layer to predict $20$ classes for PASCAL VOC, and $80$ classes for MS COCO. We then fine-tune the entire network for each of these datasets using two commonly used losses for multi-label classification, \textit{softmax} and \textit{binary cross-entropy (BCE)}\footnote{The \textit{Weighted Approximate Ranking} (WARP) objective is another commonly used loss for multi-label classification. However, it does not perform as well as \textit{softmax} and \textit{binary cross-entropy} on the datasets used here~\cite{Wang_2016_CVPR}.}~\cite{gong2013deep,Wang_2016_CVPR}. To learn both classifiers, we set the weight decay to $5\cdot 10^{-4}$, with a momentum of $0.9$ and a dropout rate of $0.5$. The learning rate is adjusted to gradually decrease after each epoch, starting from $0.01$ for \textit{softmax} and from $0.001$ for \textit{binary cross-entropy}. The learned parameters of these classifiers correspond to $\boldsymbol\theta^*$ in our proposed deep set network (\cf~Eq.\xspace~\eqref{eq:CNN_Eq} and Fig.\xspace~\ref{fig:method}).
To learn the cardinality distribution, we use the same VGG-16 network as above and modify the final fully connected layer to predict $2$ values followed by two weighted sigmoid activation functions for $\alpha$ and $\beta$. It is important to note, that the outputs must be positive to describe a valid Gamma distribution. We therefore also append two weighted sigmoid transfer functions
with weights $\alpha_{M}, \beta_{M}$
to ensure that the values predicted for $\alpha$ and $\beta$ are in a valid range. Our model is not sensitive to these parameters and we set their values to be large enough ($\alpha_{M}=160$ and $\beta_{M}=20$) to guarantee that the mode of the distribution can accommodate the largest cardinality existing in the dataset. We then fine-tune the network on cardinality distribution using the objective loss defined in Eq.\xspace~\eqref{eq:Cardinal_Eq0}. To train the cardinality CNN, we set a constant learning rate $0.001$, weight decay $5\cdot10^{-12}$, momentum rate $0.9$ and dropout $0.5$.
\myparagraph{Evaluation protocol.}
To evaluate the performance of the classifiers and our deep set network, we employ the commonly used evaluation metrics for multi-label image classification~\cite{gong2013deep,Wang_2016_CVPR}: \textit{precision} and \textit{recall} of the generated labels per-class (C-P and C-R) and overall (O-P and O-R). Precision is defined as the ratio of
correctly predicted labels and total predicted labels, while
recall is the ratio of correctly predicted labels and ground-truth labels.
In case no predictions (or ground truth) labels exist, \ie the denominator becomes zero, precision (or recall) is defined as $100\%$. To generate the predicted labels for a particular image, we perform a forward pass of the CNN and choose top-$k$ labels according to their scores similar to~\cite{gong2013deep,Wang_2016_CVPR}.
Since the classifier always predicts a fixed-sized prediction for all categories, we sweep $k$ from $0$ to the maximum number of classes to generate a precision/recall curve, which is common practice in multi-label image classification.
However, for our proposed DeepSet Network, the number of labels per instance is predicted from the cardinality network. Therefore, prediction/recall is not dependent on value $k$ and one single precision/recall value can be computed.
To calculate the per-class and overall precision/recall, their average values over all classes and all examples are computed, respectively. In addition, we also report the F1 score (the harmonic mean of precision and recall) averaged over all classes (C-F1) and all instances and classes (O-F1).
\myparagraph{PASCAL VOC 2007.}
The Pascal Visual Object Classes (VOC)~\cite{Everingham:2007:PASCAL-VOC} benchmark is one of the most widely used datasets for detection and classification. It consists of $9963$ images with a 50/50 split for training and test, where objects from $20$ pre-defined categories have been annotated by bounding boxes. Each image may contain between $1$ and $7$ unique objects.
We compare our results with a state-of-the-art classifier as described above. The resulting precision/recall plots are shown in Fig.\xspace~\ref{fig:curves-mlic}(a) together with our proposed approach using the estimated cardinality. Note that by enforcing the correct cardinality for each image, we are able to clearly outperform the baseline \wrt both measures. Note also that our prediction (+) can nearly replicate the oracle ($\ast$), where the ground truth cardinality is known.
The mean absolute cardinality error of our prediction on PASCAL VOC is $0.32 \pm0.52$.
\begin{figure}[t]
\centering
\includegraphics[width=.49\linewidth]{figs/Overal_ROC_Curve_VOC_mod.pdf}
\hfill
\includegraphics[width=.48\linewidth]{figs/Overal_ROC_Curve_Coco.pdf}
\caption{Experimental results on multi-label image classification. The baselines (solid curves) represent state-of-the-art classifiers, fine-tuned for each dataset, using two different loss functions. The methods are evaluated by choosing the top-$k$ predictions across the entire dataset, for different $k$. Our approach predicts $k$ and is thus evaluated only on one single point (+). It outperforms both classifiers significantly in terms of precision and recall and comes very close to the performance when the true cardinality is known ($*$).}
\label{fig:curves-mlic}
\vspace{-1em}
\end{figure}
\myparagraph{Microsoft COCO.}
Another popular benchmark for image captioning, recognition, and segmentation is the recent Microsoft Common Objects in Context (MS-COCO)~\cite{Lin:2014:COCO}. The dataset consists of
$123$ thousand images, each labelled with per instance
segmentation masks of $80$ classes. The number of unique objects for each image can vary between $0$ and $18$. Around $700$ images in the training dataset do not contain any of the $80$ classes and there are only a handful of images that have more than $10$ tags. The majority of the images contain between one and three labels. We use $82783$
images as training and validation split ($90$\% - $10\%$), and the remaining $40504$ images as test data. We predict the cardinality of objects in the scene with a mean absolute error of $0.74$ and a standard deviation of $0.86$.
\input{tables/allcoco-multilabel}
Fig.\xspace~\ref{fig:curves-mlic}(b) shows a significant improvement of precision and recall and consequently the F1 score using our deep set network compared to the softmax and binary cross-entropy classifiers for all ranking values $k$. We also outperform the state-of-the art multi-label classifier CNN-RNN~\cite{Wang_2016_CVPR}, for the reported value of $k=3$. Our results, listed in Tab.\xspace~\ref{table:allcoco-multilabel}, show around $7$ percentage points improvement for the F1 score on top of the baseline classifiers and about $3$ percentage points improvement compared to the state of the art on this dataset.
Examples of perfect label prediction using our proposed approach are shown in Fig.\xspace~\ref{fig:Results1}. The deep set network can properly recognise images with no labels at all, as well as images with many tags.
We also investigated failure cases where either the cardinality CNN or the classifier fails to make a correct prediction. We showcase some of these cases in Fig~\ref{fig:Results2}. We argue here that some of the failure cases are simply due to a missed ground truth annotation, such as the left-most example, but some are actually semantically correct \wrt the cardinality prediction, but are penalized during evaluation because a particular object category is not available in the dataset. This is best illustrated in the second example in Fig.\xspace~\ref{fig:Results2}. Here, our network correctly predicts the number of objects in the scene, which is two, however, the can does not belong to any of the 80 categories in the dataset and is thus not annotated. Similar situations also appear in other images further to the right.
\subsection{Object Counting}
\label{sec:crowd-counting}
To show the robustness of our cardinality loss, we first evaluate our cardinality estimation on the common crowd counting application. To this end, we apply our approach on the widely used UCSD dataset~\cite{chan2008privacy} and compare our results to four state-of-the art approaches~\cite{arteta2014interactive,onoro2016towards,pham2015count,zhang2015cross}.
The USCD dataset includes
a 2000-frames long video sequence, captured by a fixed outdoor surveillance camera. In addition to the video, the region of interest (ROI), the perspective map of the scene and the location annotations of all pedestrians in each frame are also provided.
\myparagraph{Implementation details.} We build our cardinality network structure on top of the well-known AlexNet~\cite{Krizhevsky:2012:NIPS} architecture. However, we replace the first convolutional layer with a single channel filter to accept grayscale images as input, and the last fully connected layer with $2$ layers outputs, similar to the case above (\cf~Sec.\xspace~\ref{sec:multi-label}). To estimate the counts, we calculate the mode of the negative binomial distribution.
As input, we use a grayscale image constructed by superimposing all region proposals and their scores generated by an off-the-shelf pedestrian detector (before non-maximum suppression). We use the multi-scale deep CNN approach (MS-CNN)~\cite{Cai:2016:ECCV} trained on the KITTI dataset~\cite{Geiger:2012:CVPR} for our purpose. We found, that this input provides a stronger signal than the raw RGB images, yielding better results.
Note that we process the input images with a pedestrian detector, however, we do not use any location or perspective information that is available for this dataset. During learning, we only rely on the object count for each image region.
\input{tables/Crowd_err}
We follow exactly the same data split used in~\cite{onoro2016towards} by conducting four different and separate experiments on maximal, downscale, upscale and minimal subsets in UCSD dataset.
In order to train our network, similar to~\cite{onoro2016towards} we use data augmentation in each experiment by extracting $800$ random patches from each training image and their corresponding
ground truth counts. We also randomly flip each patch during training. To ensure that we can count all pedestrians from the entire image at test time, we choose the patch sizes to be exactly half of the image size ($79\times119$ pixels) and then perform inference on the resulting $4$ non-overlapping regions. The weights are initialised randomly and the network is trained for $100$ epochs.
All hyperparameters are set as in Sec.\xspace~\ref{sec:multi-label}.
\myparagraph{Results.} Tab.\xspace~\ref{table:Crowd_err} shows the mean absolute error between the predicted and the ground truth counts. We show competitive or superior performance in each experiment except for the `minimal' subset. The main reason is that the training set size is too small (only $10$ images) in this particular split and even data augmentation cannot generalize the cardinality model for the test images. Moreover, unlike other methods, we do not utilize any location information but only provide the object count as ground truth.
Considering the overall performance, our approach outperforms state-of-the-art counting approaches that do not use the perspective map (Hydra 2s and 3s) and performs favourably compared to many existing methods that exploit localisation and perspective information.
\myparagraph{Discussion.}
One obvious alternative for our proposed cardinality loss may seem to directly regress for $m$. This alternative, however, has two main drawbacks. First, it cannot be formulated within a Bayesian set framework to model uncertainty, and second, the regression loss does not yield a discrete distribution and hence does not fit the underlying mathematical foundation of this paper. Nevertheless, we have run the same experiments as above using a standard regression loss but did not reach the same performance.
Using the regression loss we achieve a mean cardinality error (MCE) of $0.83$ on MS COCO, while our loss yields an MCE of $0.74$. This is also reflected in the O-F1 score which drops from $69.4$ to $68.4$ when directly regressing for $m$.
\subsection{Pedestrian Detection}
\label{sec:detection}
\begin{figure}[t]
\centering
\begin{subfigure}[t]{.31\linewidth}
\centering
\includegraphics[width=1\linewidth]{figs/MOT16-RegProps-crp.png}
\caption{Proposals}
\end{subfigure}
\hfill
\begin{subfigure}[t]{.31\linewidth}
\centering
\includegraphics[width=1\linewidth]{figs/MOT16-MS-CNN-crp.png}
\caption{MS-CNN~\cite{Cai:2016:ECCV}}
\end{subfigure}
\hfill
\begin{subfigure}[t]{.31\linewidth}
\centering
\includegraphics[width=1\linewidth]{figs/MOT16-ours-crp.png}
\caption{Our result}
\end{subfigure}
\vspace{-.5em}
\small
\caption{Example pedestrian detection result of our approach. To select relevant detection candidates from an overcomplete set of proposals~\ep{a}, state-of-the-art methods rely on non-maximum suppression (NMS) with a fixed setting~\ep{b}. We show that a better result can be achieved by adjusting the NMS threshold adaptively, depending on the number of instances in each image (3 in this case)~\ep{c}.}
\vspace{-1.5em}
\label{fig:nms}
\end{figure}
In this section, we cast the task of pedestrian detection as a set prediction problem and demonstrate that incorporating cardinality prediction (person count) can be beneficial to improve performance.
To this end, we perform experiments on two widely used datasets, Caltech Pedestrians~\cite{Dollar:2012:PAMI} and MOT16 from the MOTChallenge benchmark~\cite{Milan:2016:MOT16}. Recalling Eqs.\xspace~\eqref{eq:CNN_Eq} and~\eqref{eq:Cardinal_Eq0}, we need two networks with parameters $\mathbf{w}^*$ and $\boldsymbol\theta^*$ for cardinality estimation and detection proposals, respectively. For the cardinality network, we use the exact same architecture and setup as in Sec.\xspace~\ref{sec:crowd-counting} and train it on the training sets of these datasets.
Note that it is not our intention to engineer a completely novel pedestrian detector here. Rather, for $\boldsymbol\theta^*$, we take an off-the-shelf state-of-the-art system (MS-CNN)~\cite{Cai:2016:ECCV} and show how it can be further improved by taking the cardinality prediction into account.
To generate the final detection outputs, most detectors often rely on non-maximum suppression (NMS), which greedily picks the boxes with highest scores and suppresses any boxes that overlap more than a pre-defined threshold $T_O$. This heuristic makes the solution more ad-hoc than what is expressed in our set formulation in Eq.\xspace~\eqref{eq:set_density}. However, we are still able to improve the detector performance by adjusting this threshold for each frame separately. To obtain the final detection output, we use the prediction on the number of people $(m^*)$ in the scene to choose an adaptive NMS threshold for each image. In particular, we start from the default value of $T_O$, and increase it gradually until the number of boxes reaches $m^*$. In the case if the number of final boxes is larger than $m^*$, we pick $m^*$ boxes with the highest scores, which corresponds to the MAP set prediction as discussed in Sec.\xspace~\ref{sec:inference}. To ensure a fair comparison, we also determine the best (global) value for $T_O=0.4$ for the MS-CNN baseline. Fig.\xspace~\ref{fig:nms} demonstrates an example of the adjusted NMS threshold when considering the number of pedestrians in the image.
To quantify the detection performance, we adapt the same evaluation metrics and follow the protocols used on the Caltech detection benchmark~\cite{Dollar:2012:PAMI}. The evaluation metrics used here are log-average
miss rate (MR) over false positive per image. Additionally, we compute the F1 score (the harmonic mean of precision and recall). The F1 score is computed from \emph{all} detections predicted from our DeepSet network and is compared with the \emph{highest} F1 score along the MS-CNN precision-recall curve. To calculate MR, we concatenate all boxes resulted from our adaptive NMS approach and change the threshold over all scores from our predicted sets.
Quantitative detection results are shown in Tab.\xspace~\ref{table:F1 score}. Note that we do not retrain the detector, but are still able to improve its performance by predicting the number of pedestrians in each frame in these two dataset.
\input{tables/F1score}
\section{Deep Set Network}
\label{sec:deep-set-net}
Let us begin by defining a training set $\mathcal{D} = \{\mathcal{Y}_{i},\mathbf{x}_{i}\}$,
where each training sample $i=1,\ldots,n$ is a pair consisting of an input feature $\mathbf{x}_{i}\in\mathbb{R}^{l}$ and an output (or label) set
$\mathcal{Y}_{i} = \{y_{1},y_{2},\ldots,y_{m_i}\}, y_{k}\in\mathbb{R}^{d} $, $m_i\in\mathbb{N}_{0}$. In the following we will drop the instance index $i$ for better readability. Note that $m:=|\mathcal{Y}|$ denotes the cardinality of set $\mathcal{Y}$.
The probability of a set $\mathcal{Y}$ is defined as
\begin{equation}
\begin{aligned}
p(\mathcal{Y}|\mathbf{x},\boldsymbol\theta,\mathbf{w}) = & p(m|\mathbf{x},\mathbf{w})\times\\
& m!\times U^m\times p(y_{1},y_{2},\cdots,y_{m}|\mathbf{x},\boldsymbol\theta),
\label{eq:set_density}
\end{aligned}
\end{equation}
where $p(m|\cdot,\cdot)$ and $ p(y_{1},y_{2},\cdots,y_{m}|\cdot,\cdot)$ are respectively a cardinality distribution and a
symmetric joint probability density distribution of the elements. $U$ is the unit of hypervolume
in $\mathbb{R}^{d}$, which makes the joint distribution unitless~\cite{mahler2007statistical,vo2016model}. $\boldsymbol\theta$ denotes the parameters that estimate the joint distribution of set element values for a fixed cardinality,\footnote{This is also known as \emph{spatial distribution of points} in point process statistics.}
while $\mathbf{w}$ represents the collection of parameters which estimate the cardinality distribution of the set elements.
The above formulation represents the probability density of a set which is very general and completely independent from the choices of both cardinality and spatial distributions. It is thus straightforward to transfer it to many applications that require the output to be a set. However, to make the problem amenable to mathematical derivation and implementation, we adopt two assumptions: \emph{i)} the outputs (or labels) in the set are independent
and identically distributed (\textit{i.i.d.}\xspace) and \emph{ii)} their cardinality follows
a Poisson distribution with parameter $\lambda$. Thus, we can write the distribution as
\begin{equation}
\begin{aligned}
p(\mathcal{Y}|\mathbf{x},\boldsymbol\theta,\mathbf{w}) = \int p(m|\lambda)&p(\lambda|\mathbf{x},\mathbf{w}) d\lambda\times\\
m!\times U^m\times&\left(\prod_{k=1}^{m}p(y_{k}|\mathbf{x},\boldsymbol\theta)\right).
\end{aligned}
\end{equation}
\subsection{Posterior distribution}
\label{sec:posterior}
To learn the parameters $\boldsymbol\theta$ and $\mathbf{w}$, we first define the posterior distribution over them as
\begin{equation}
\begin{aligned}
p(\boldsymbol\theta,\mathbf{w}|\mathcal{D}) &\propto p(\mathcal{D}|\boldsymbol\theta,\mathbf{w})p(\boldsymbol\theta)p(\mathbf{w})\\
&\propto\prod_{i=1}^{n}\left[\int p(m_{i}|\lambda)p(\lambda|\mathbf{x}_{i},\mathbf{w})d\lambda\times m_{i}! \right.\\ &\left.U^{m_{i}}\left(\prod_{k=1}^{m_{i}}p(y_{k}|\mathbf{x}_{i},\boldsymbol\theta)\right)\right]p(\mathbf{x}_{i})p(\boldsymbol\theta)p(\mathbf{w}).
\end{aligned}
\label{eq:posterior0}
\end{equation}
A closed form solution for the integral in Eq.\xspace~\eqref{eq:posterior0} can be obtained by using conjugate priors:
\begin{eqnarray*}
m & \sim & \mathcal{P}(m;\lambda)\\
\lambda & \sim & \mathcal{G}(\lambda;\alpha(\mathbf{x},\mathbf{w}),\beta(\mathbf{x},\mathbf{w}))\\
&&\alpha(\mathbf{x},\mathbf{w}),\beta(\mathbf{x},\mathbf{w})>0\quad\forall\mathbf{x},\mathbf{w}\\
\boldsymbol\theta & \sim & \mathcal{N}(\boldsymbol\theta;0,\sigma_{1}^{2}\mathbf{I})\\
\mathbf{w} & \sim & \mathcal{N}(\mathbf{w};0,\sigma_{2}^{2}\mathbf{I}),
\end{eqnarray*}
where $\mathcal{P}(\cdot;\lambda)$, $\mathcal{G}(\cdot;\alpha,\beta)$, and $\mathcal{N}(\cdot;0,\sigma^{2}\mathbf{I})$ represent respectively a Poisson distribution with parameter $\lambda$, a Gamma distribution with parameters $(\alpha,\beta)$ and a zero-mean normal distribution with covariance $\sigma^{2}\mathbf{I}$.
We assume that the cardinality follows a Poisson distribution whose mean, $\lambda$, follows a Gamma distribution, with parameters which can be estimated from the input data $\mathbf{x}$.
Note that the cardinality distribution in Eq.\xspace~\eqref{eq:set_density} can be replaced by any other discrete distribution. For example, it is a valid assumption to model the number of objects in natural images by a Poisson distribution~\cite{chan2009bayesian}. Thus, we could directly predict $\lambda$ to model this distribution by formulating the cardinality as $p(m|\mathbf{x},\mathbf{w}) = \mathcal{P}(m;\lambda(\mathbf{x},\mathbf{w}))$. However, this would limit the model's expressive power because two visually entirely different images with the same number of objects would be mapped to the same $\lambda$. Instead, to allow for uncertainty of the mean, we model it with another distribution, which we choose to be Gamma for mathematical convenience.
Consequently, the integrals in $p(\boldsymbol\theta,\mathbf{w}|\mathcal{D})$ are simplified
and form a negative binomial distribution,
\begin{equation}
\text{NB}\left(m;a,b\right) = \frac{\Gamma(m+a)}{\Gamma(m+1)\Gamma(a)}\cdot(1-b)^{a}b^{m},
\end{equation}
where $\Gamma$ is the Gamma function. Finally, the full posterior distribution can be written as
\begin{equation}
\begin{aligned}
p(\boldsymbol\theta,\mathbf{w}|\mathcal{D}) & \propto\prod_{i=1}^{n}\bigg[\text{NB}\left(m_{i};\alpha(\mathbf{x}_{i},\mathbf{w}),\frac{1}{1+\beta(\mathbf{x}_{i},\mathbf{w})}\right)\\
&\!\!\!\!\!\!\!\!\times m_{i}!\times U^{m_{i}}\times\left(\prod_{k=1}^{m_{i}}p(y_{k}|\mathbf{x}_{i},\boldsymbol\theta)\right)\bigg]p(\boldsymbol\theta)p(\mathbf{w}).
\label{eq:full-posterior0}
\end{aligned}
\end{equation}
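As a quick numerical sanity check (not part of the original derivation; Python with NumPy/SciPy is assumed here, and all names are ours), one can verify that sampling $\lambda$ from the Gamma prior and then $m$ from a Poisson with mean $\lambda$ reproduces the negative binomial above:
\begin{verbatim}
import numpy as np
from scipy.special import gammaln

def nb_pmf(m, a, b):
    # NB(m; a, b) = Gamma(m+a) / (Gamma(m+1) Gamma(a)) * (1-b)^a * b^m
    return np.exp(gammaln(m + a) - gammaln(m + 1) - gammaln(a)
                  + a * np.log1p(-b) + m * np.log(b))

rng = np.random.default_rng(0)
alpha, beta = 8.0, 2.0                 # example Gamma shape and rate
lam = rng.gamma(alpha, 1.0 / beta, size=1_000_000)
m_samples = rng.poisson(lam)           # marginal samples of m

b = 1.0 / (1.0 + beta)                 # NB parameter 1/(1+beta)
for m in range(5):
    print(m, np.mean(m_samples == m), nb_pmf(m, alpha, b))
\end{verbatim}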
\subsection{Learning}
\label{sec:learning}
For simplicity, we use a point estimate for the posterior $p(\boldsymbol\theta,\mathbf{w}|\mathcal{D})$,
\ie $p(\boldsymbol\theta,\mathbf{w}|\mathcal{D}) = \delta(\boldsymbol\theta=\boldsymbol\theta^{*},\mathbf{w}=\mathbf{w}^{*}|\mathcal{D})$, where $(\boldsymbol\theta^{*},\mathbf{w}^{*})$ are computed using the following MAP estimator:
\begin{equation}
(\boldsymbol\theta^{*},\mathbf{w}^{*}) = \arg\max_{\boldsymbol\theta,\mathbf{w}}\quad \log\left(p\left(\boldsymbol\theta,\mathbf{w}|\mathcal{D}\right)\right).
\label{eq:map0}
\end{equation}
The optimisation problem in Eq.\xspace~\eqref{eq:map0} can be decomposed \wrt the parameters
$\boldsymbol\theta$ and $\mathbf{w}$. Therefore, we can learn them independently as
\vspace{-.5em}
\begin{equation}
\boldsymbol\theta^{*} = \arg\max_{\boldsymbol\theta}\quad-\gamma_{1}\|\boldsymbol\theta\|+\sum_{i=1}^{n}\sum_{k=1}^{m_{i}}\log\left(p(y_{k}|\mathbf{x}_{i},\boldsymbol\theta)\right)
\label{eq:CNN_Eq}
\end{equation}
and
\begin{equation}
\begin{aligned}
\mathbf{w}^{*}
= & \arg\max_{\mathbf{w}}\quad\sum_{i=1}^{n}\Big[\log\left(\frac{\Gamma(m_{i}+\alpha(\mathbf{x}_{i},\mathbf{w}))}{\Gamma(m_{i}+1)\Gamma(\alpha(\mathbf{x}_{i},\mathbf{w}))}\right)\\
+ & \displaystyle{ \log\left(\frac{\beta(\mathbf{x}_{i},\mathbf{w})^{\alpha(\mathbf{x}_{i},\mathbf{w})}}{(1+\beta(\mathbf{x}_{i},\mathbf{w}))^{\alpha(\mathbf{x}_{i},\mathbf{w})+m_{i}}}\right)}\Big]-\gamma_2\|\mathbf{w}\|,
\label{eq:Cardinal_Eq0}
\end{aligned}
\end{equation}
where $\gamma_1$ and $\gamma_2$ are the regularisation parameters, inversely proportional to the predefined prior variances $\sigma_1^{2}$ and $\sigma_2^{2}$. These parameters are also known as weight decay parameters and are commonly used in training neural networks.
The learned parameters $\boldsymbol\theta^{*}$ in Eq.\xspace~\eqref{eq:CNN_Eq} are used to map an input feature vector $\mathbf{x}$ into an output vector $Y$. For example, in image classification, $\boldsymbol\theta^*$ is used to predict the distribution $Y$ over all categories, given the input image $\mathbf{x}$. Note that $\boldsymbol\theta^*$ can generally be learned using a number of existing machine learning techniques. In this paper we rely on deep CNNs to perform this task.
To learn the highly complex function between the input feature $\mathbf{x}$ and the parameters $(\alpha,\beta)$, which are used for estimating the output cardinality distribution, we train a second deep neural network.
Using neural networks to predict a discrete value may seem counterintuitive, because these methods at their core rely on the backpropagation algorithm, which assumes a differentiable loss. Note that we achieve this by describing the discrete distribution through the continuous parameters $\alpha, \beta$ (the negative binomial $\text{NB}(\cdot;\alpha,\frac{1}{1+\beta})$), from which discrete samples can then easily be drawn. More formally, to estimate $\mathbf{w}^{*}$, we compute the partial derivatives of the objective function in Eq.\xspace~\eqref{eq:Cardinal_Eq0} \wrt $\alpha (\cdot,\cdot)$ and $\beta (\cdot,\cdot)$ and use standard backpropagation to learn the parameters of the deep neural network.
We refer the reader to the supplementary material for the complete derivation of the partial derivatives, a more detailed derivation of the posterior in Eqs.\xspace~\eqref{eq:posterior0}-\eqref{eq:full-posterior0} and the proof for decomposition of the MAP estimation in Eq.\xspace~\eqref{eq:map0}.
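For illustration, the per-sample term of Eq.\xspace~\eqref{eq:Cardinal_Eq0} can also be implemented directly and differentiated automatically. The following sketch uses PyTorch (the framework is our assumption; the weight decay term is left to the optimiser, and autograd stands in for the hand-derived partial derivatives):
\begin{verbatim}
import torch

def cardinality_nll(alpha, beta, m):
    # Negative log of NB(m; alpha, 1/(1+beta)) for observed counts m:
    #   lgamma(m+alpha) - lgamma(m+1) - lgamma(alpha)
    #   + alpha*log(beta) - (alpha+m)*log(1+beta)
    m = m.float()
    log_p = (torch.lgamma(m + alpha) - torch.lgamma(m + 1.0)
             - torch.lgamma(alpha) + alpha * torch.log(beta)
             - (alpha + m) * torch.log1p(beta))
    return -log_p.mean()

# Toy usage; in practice alpha and beta are the (positive) CNN outputs.
alpha = torch.tensor([5.0, 2.0], requires_grad=True)
beta = torch.tensor([1.5, 0.5], requires_grad=True)
loss = cardinality_nll(alpha, beta, torch.tensor([3, 1]))
loss.backward()   # gradients w.r.t. alpha and beta via autograd
\end{verbatim}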
\subsection{Inference}
\label{sec:inference}
Having the learned parameters of the network $(\mathbf{w}^{*},\boldsymbol\theta^{*})$, for a test feature $\mathbf{x}^{+}$, we use a MAP estimate to generate a set output as
$
\mathcal{Y}^{*}
= \arg\max_{\mathcal{Y}} p(\mathcal{Y}|\mathcal{D},\mathbf{x}^{+}),
$
where
\begin{eqnarray*}
p(\mathcal{Y}|\mathcal{D},\mathbf{x}^{+}) & = & \int p(\mathcal{Y}|\mathbf{x}^{+},\boldsymbol\theta,\mathbf{w})p(\boldsymbol\theta,\mathbf{w}|\mathcal{D})d\boldsymbol\theta d\mathbf{w}
\end{eqnarray*}
and $p(\boldsymbol\theta,\mathbf{w}|\mathcal{D}) = \delta(\boldsymbol\theta=\boldsymbol\theta^{*},\mathbf{w}=\mathbf{w}^{*}|\mathcal{D})$.
Since the unit of hypervolume $U$ is unknown in most practical applications, to calculate the mode of the set distribution $p(\mathcal{Y}|\mathcal{D},\mathbf{x}^{+})$, we use sequential inference as explained in~\cite{mahler2007statistical}. To this end, we first calculate the mode $m^*$ of the cardinality distribution
$
m^{*}
= \arg\max_{m} p(m|\mathbf{x}^{+},\mathbf{w}^*),
$
where $p(m|\mathbf{x}^{+},\mathbf{w}^*)=\text{NB}\left(m;\alpha(\mathbf{x}^{+},\mathbf{w}^*),\frac{1}{1+\beta(\mathbf{x}^{+},\mathbf{w}^*)}\right)$.
Then, we calculate the mode of the joint distribution for the given cardinality $m^{*}$ as
\begin{equation}
\mathcal{Y}^{*}
= \arg\max_{\mathcal{Y}_{m^{*}}}\quad p(\{y_1,\cdots,y_{m^{*}}\}_{||}|\mathbf{x}^{+},\boldsymbol\theta^*).
\end{equation}
To estimate the most likely set $\mathcal{Y}^{*}$ with cardinality $m^{*}$, we use the first CNN with the parameters $\boldsymbol\theta^*$ which predicts $p(y_1,\cdots,y_{M}|\mathbf{x}^{+},\boldsymbol\theta^*)$, where $M$ is the maximum cardinality of the set, \ie $\{y_1,\cdots,y_{m^{*}}\}\subseteq\{y_1,\cdots,y_{M}\},\quad\forall m^{*}$.
Since the samples are \textit{i.i.d.}\xspace, the joint probability is maximised when the probability of each element in the set is maximised. Therefore, the most likely set $\mathcal{Y}^{*}$ with cardinality $m^{*}$ is obtained by sorting the probabilities of the set elements $y_1,\cdots,y_{M}$ output by the first CNN and choosing the $m^{*}$ elements with the highest probability values.
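As a minimal illustration of this two-step inference (assuming NumPy/SciPy; function and variable names are ours), the cardinality mode can be found by scanning the negative binomial pmf, after which the $m^{*}$ highest-scoring elements are kept:
\begin{verbatim}
import numpy as np
from scipy.special import gammaln

def nb_log_pmf(m, alpha, beta):
    return (gammaln(m + alpha) - gammaln(m + 1) - gammaln(alpha)
            + alpha * np.log(beta) - (alpha + m) * np.log1p(beta))

def map_set(scores, alpha, beta):
    # Step 1: mode of the cardinality distribution.
    M = len(scores)
    m_star = int(np.argmax(nb_log_pmf(np.arange(M + 1), alpha, beta)))
    # Step 2: the m* elements with the highest probabilities.
    return np.argsort(scores)[::-1][:m_star]

scores = np.array([0.9, 0.1, 0.75, 0.3, 0.05])  # hypothetical CNN outputs
print(map_set(scores, alpha=6.0, beta=2.0))     # -> [0 2] (m* = 2 here)
\end{verbatim}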
Note that the assumptions in Sec.\xspace~\ref{sec:deep-set-net} are necessary to make both learning and inference computationally tractable and amenable to an elegant mathematical formulation.
A major advantage of this approach is that we can use any state-of-the-art classifier/detector as the first CNN ($\boldsymbol\theta^*$) to further improve its performance.
Modifying any of the assumptions, \eg non-\textit{i.i.d.}\xspace set elements, leads to serious mathematical complexities~\cite{mahler2007statistical}, and is left for future work.
\section{Related Work}
\label{sec:relwork}
The sudden success of deep learning in multiple applications, including voice recognition~\cite{Graves:2013:ICASSP}, machine translation~\cite{Sutskever:2014:NIPS} and image classification~\cite{Krizhevsky:2012:NIPS}, has sparked the deployment of deep learning methods throughout numerous research areas.
Deep convolutional (CNN) and recurrent (RNN) neural networks now outperform traditional approaches in tasks like semantic segmentation~\cite{Chen:2014:ICLR, Lin:2017:CVPR}, image captioning~\cite{Johnson:2016:CVPR} or object detection~\cite{Liu:2016:ECCV}.
Here, we will briefly review some of the recent approaches to image classification and object detections and point out their limitations.
Image or scene classification is a fundamental task of understanding photographs.
The goal here is to predict a scene label for a given image.
Early datasets, such as \mbox{Caltech-101}~\cite{FeiFei:2006:PAMI}, mostly contained one single object and could easily be described by one category. Consequently, a large body of literature focused on single-class prediction~\cite{Krizhevsky:2012:NIPS, Sermanet:2013:OverFeat, Zeiler:2014:ECCV, Murthy:2016:CVPR}.
However, real-world photographs typically contain a collection of multiple objects and should therefore be captioned with multiple tags.
Surprisingly, there exists rather little work on multi-class image classification that makes use of deep architectures. Gong~\etal~\cite{Gong:2013:arxiv} combine deep CNNs with a top-$k$ approximate ranking loss to predict multiple labels. Wei~\etal~\cite{Wei:2014:arxiv} propose a Hypotheses-Pooling architecture that is specifically designed to handle multi-label output. While both methods open a promising direction, their underlying architectures largely ignore the correlation between multiple labels.
To address this limitation, recently, Wang~\etal~\cite{Wang_2016_CVPR} combined CNNs and RNNs to predict a number of classes in a sequential manner.
RNNs, however, are not suitable for set prediction mainly for two reasons.
First, the output represents a sequence rather than a set and is thus highly dependent on the prediction order, as was shown recently by Vinyals~\etal~\cite{Vinyals:2015:arxiv}.
Second, the final prediction may not result in a feasible solution (\eg it may contain the same element multiple times), such that post-processing or heuristics such as beam search must be employed~\cite{Vinyals:2015:NIPS, Wang_2016_CVPR}.
Here we show that our approach not only guarantees to always predict a valid set, but also outperforms previous methods.
Pedestrian detection can also be viewed as a classification problem. Traditional approaches follow the sliding-window paradigm~\cite{Viola:2004:IJCV, Dalal:2005:CVPR, Walk:2010:CVPR, Felzenszwalb:2010:PAMI, Benenson:2012:CVPR}, where each possible (or rather plausible) image region is scored independently to contain a person or not.
More recent methods like Fast R-CNN~\cite{Girshick:2015:ICCV} or the single-shot multi-box detector (SSD)~\cite{Liu:2016:ECCV} learn the relevant image features rather than manually engineering them, but retain the sliding window approach.
All the above approaches require some form of post-processing to suppress spurious detection responses that originate from the same person. This is typically addressed by non-maximum suppression (NMS), a greedy optimisation strategy with a fixed overlap threshold.
Recently, several alternatives have been proposed to replace the greedy NMS procedure, including sequential head detection with LSTMs~\cite{Hochreiter:1997:LSTM}, a global optimisation approach to NMS~\cite{Pham:2016:CVPR,Lee:2016:ECCV}, or even learning NMS end-to-end using CNNs~\cite{Hosang:2017:CVPR}. None of the above methods, however, explicitly consider the number of objects while selecting the final set of boxes. Contrary to existing pedestrian detection approaches, we incorporate the cardinality into the NMS algorithm itself and show an improvement over the state of the art on two benchmarks. Note that the idea of considering the number of pedestrians can be applied in combination with any of the aforementioned detection techniques to further improve their performance.
It is important to bear in mind that unlike many existing approaches that learn to count~\cite{arteta2014interactive,chan2009bayesian,fiaschi2012learning,idrees2013multi,lempitsky2010learning, pham2015count, zhang2015cross,zhang2016single}, our main goal is not object counting. Rather, we derive a formulation for set prediction using deep learning. Estimating the cardinality of objects and thereby their count is a byproduct of our approach. To demonstrate the effectiveness of our formulation, we also conduct experiments on the task of object counting, outperforming many recent methods on the widely used UCSD dataset.
\section{Experimental Results}
\label{sec:results}
Our proposed method is best suited for applications that expect the solution to be in the form of a set, \ie permutation invariant and of an unknown cardinality. To that end, we perform an
experiment on multi-label image classification in Sec.\xspace~\ref{sec:multi-label}. In addition, we explore
our cardinality estimation loss on the object counting problem in Sec.\xspace~\ref{sec:crowd-counting} and then
show in Sec.\xspace~\ref{sec:detection} how incorporating cardinality into a state-of-the-art pedestrian detector and formulating it as a set problem can boost its performance.
\subsection{Multi-Label Image Classification}
\label{sec:multi-label}
\begin{figure*}[t]
\centering
\includegraphics[width=1\linewidth]{figs/success_cases}
\caption{Qualitative results of our multi-class image labelling approach. For each image, the ground truth tags and our predictions are denoted below. Note that we show the exact output of our network, without any heuristics or post-processing.}
\label{fig:Results1}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=1\linewidth]{figs/failure_cases}
\caption{Interesting failure cases of our method. The ``spurious'' TV class predicted on the left is an artifact in annotation because in many examples, computer monitors are actually labelled as TV. In other cases, our network can correctly reason about the number of objects or concepts in the scene, but is constrained by a fixed list of categories defined in the dataset.}
\label{fig:Results2}
\end{figure*}
As opposed to the more common and more studied problem of (single-label) image classification, the task here is rather to label a photograph with an arbitrary, a-priori unknown number of tags. We perform experiments on two standard benchmarks, the PASCAL VOC 2007 dataset~\cite{Everingham:2007:PASCAL-VOC} and the Microsoft Common Objects in Context (MS COCO) dataset~\cite{Lin:2014:COCO}.
\myparagraph{Implementation details.}
In this experiment, similar to~\cite{Wang_2016_CVPR}, we build on the $16$-layer VGG network~\cite{Simonyan:2014:VGG}, pretrained on the 2012 ImageNet dataset. We adapt VGG for our purpose by modifying the last fully connected prediction layer to predict $20$ classes for PASCAL VOC, and $80$ classes for MS COCO. We then fine-tune the entire network for each of these datasets using two commonly used losses for multi-label classification, \textit{softmax} and \textit{binary cross-entropy (BCE)}\footnote{\textit{Weighted Approximate Ranking} (WARP) objective is another commonly used loss for multi-label classification. However, it does not perform as well as \textit{softmax} and \textit{binary cross-entropy} for the used datasets~\cite{Wang_2016_CVPR}.}~\cite{gong2013deep,Wang_2016_CVPR}. To learn both classifiers, we set the weight decay to $5\cdot 10^{-4}$, with a momentum of $0.9$ and a dropout rate of $0.5$. The learning rate is adjusted to gradually decrease after each epoch, starting from $0.01$ for \textit{softmax} and from $0.001$ for \textit{binary cross-entropy}. The learned parameters of these classifiers correspond to $\boldsymbol\theta^*$ for our proposed deep set network (\cf~Eq.\xspace~\eqref{eq:CNN_Eq} and Fig.\xspace~\ref{fig:method}).
To learn the cardinality distribution, we use the same VGG-16 network as above and modify the final fully connected layer to predict the $2$ values $\alpha$ and $\beta$. It is important to note that both outputs must be positive to describe a valid Gamma distribution. We therefore append two weighted sigmoid transfer functions with weights $\alpha_{M}, \beta_{M}$ to ensure that the values predicted for $\alpha$ and $\beta$ lie in a valid range. Our model is not sensitive to these parameters and we set their values large enough ($\alpha_{M}=160$ and $\beta_{M}=20$) to guarantee that the mode of the distribution can accommodate the largest cardinality occurring in the dataset. We then fine-tune the network on the cardinality distribution using the loss defined in Eq.\xspace~\eqref{eq:Cardinal_Eq0}. To train the cardinality CNN, we set a constant learning rate of $0.001$, weight decay of $5\cdot10^{-12}$, a momentum rate of $0.9$ and a dropout rate of $0.5$.
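For concreteness, the cardinality head can be sketched as follows (a minimal PyTorch snippet; the framework and the $4096$-dimensional VGG-16 feature size are assumptions on our part):
\begin{verbatim}
import torch
import torch.nn as nn

class CardinalityHead(nn.Module):
    """Replaces VGG-16's last fully connected layer: two raw outputs,
    squashed by weighted sigmoids so that 0 < alpha < alpha_M and
    0 < beta < beta_M, as required for a valid Gamma distribution."""

    def __init__(self, in_features=4096, alpha_max=160.0, beta_max=20.0):
        super().__init__()
        self.fc = nn.Linear(in_features, 2)
        self.alpha_max, self.beta_max = alpha_max, beta_max

    def forward(self, features):
        raw = self.fc(features)
        alpha = self.alpha_max * torch.sigmoid(raw[:, 0])
        beta = self.beta_max * torch.sigmoid(raw[:, 1])
        return alpha, beta
\end{verbatim}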
\myparagraph{Evaluation protocol.}
To evaluate the performance of the classifiers and our deep set network, we employ the commonly used evaluation metrics for multi-label image classification~\cite{gong2013deep,Wang_2016_CVPR}: \textit{precision} and \textit{recall} of the generated labels per class (C-P and C-R) and overall (O-P and O-R). Precision is defined as the ratio of correctly predicted labels to the total number of predicted labels, while recall is the ratio of correctly predicted labels to the number of ground-truth labels.
If no predicted (or ground-truth) labels exist, \ie the denominator becomes zero, precision (or recall) is defined as $100\%$. To generate the predicted labels for a particular image, we perform a forward pass of the CNN and choose the top-$k$ labels according to their scores, similar to~\cite{gong2013deep,Wang_2016_CVPR}.
Since the classifier always predicts a fixed-sized prediction for all categories, we sweep $k$ from $0$ to the maximum number of classes to generate a precision/recall curve, which is common practice in multi-label image classification.
However, for our proposed DeepSet network, the number of labels per instance is predicted by the cardinality network. Therefore, precision/recall does not depend on the value of $k$, and a single precision/recall value can be computed.
To calculate the per-class and overall precision/recall, their average values over all classes and all examples are computed, respectively. In addition, we also report the F1 score (the harmonic mean of precision and recall) averaged over all classes (C-F1) and all instances and classes (O-F1).
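A compact sketch of these metrics (our own helper, assuming boolean indicator matrices of shape images $\times$ classes, following the $100\%$ convention above for empty per-class denominators, and taking C-F1 as the harmonic mean of the averaged C-P and C-R):
\begin{verbatim}
import numpy as np

def safe_ratio(num, den):
    # empty denominator => metric defined as 100% (i.e. 1.0)
    return np.where(den > 0, num / np.maximum(den, 1), 1.0)

def multilabel_metrics(pred, gt):
    # pred, gt: boolean arrays of shape (num_images, num_classes)
    tp = (pred & gt).sum(axis=0)
    cp = safe_ratio(tp, pred.sum(axis=0)).mean()   # C-P
    cr = safe_ratio(tp, gt.sum(axis=0)).mean()     # C-R
    op = (pred & gt).sum() / max(pred.sum(), 1)    # O-P
    o_r = (pred & gt).sum() / max(gt.sum(), 1)     # O-R
    return dict(CP=cp, CR=cr, OP=op, OR=o_r,
                CF1=2 * cp * cr / (cp + cr),
                OF1=2 * op * o_r / (op + o_r))
\end{verbatim}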
\myparagraph{PASCAL VOC 2007.}
The Pascal Visual Object Classes (VOC)~\cite{Everingham:2007:PASCAL-VOC} benchmark is one of the most widely used datasets for detection and classification. It consists of $9963$ images with a 50/50 split for training and test, where objects from $20$ pre-defined categories have been annotated by bounding boxes. Each image may contain between $1$ and $7$ unique objects.
We compare our results with a state-of-the-art classifier as described above. The resulting precision/recall plots are shown in Fig.\xspace~\ref{fig:curves-mlic}(a) together with our proposed approach using the estimated cardinality. Note that by enforcing the correct cardinality for each image, we are able to clearly outperform the baseline \wrt both measures. Note also that our prediction (+) can nearly replicate the oracle ($\ast$), where the ground truth cardinality is known.
The mean absolute cardinality error of our prediction on PASCAL VOC is $0.32 \pm0.52$.
\begin{figure}[t]
\centering
\includegraphics[width=.49\linewidth]{figs/Overal_ROC_Curve_VOC_mod.pdf}
\hfill
\includegraphics[width=.48\linewidth]{figs/Overal_ROC_Curve_Coco.pdf}
\caption{Experimental results on multi-label image classification. The baselines (solid curves) represent state-of-the-art classifiers, fine-tuned for each dataset, using two different loss functions. The methods are evaluated by choosing the top-$k$ predictions across the entire dataset, for different $k$. Our approach predicts $k$ and is thus evaluated only on one single point (+). It outperforms both classifiers significantly in terms of precision and recall and comes very close to the performance when the true cardinality is known ($*$).}
\label{fig:curves-mlic}
\vspace{-1em}
\end{figure}
\myparagraph{Microsoft COCO.}
Another popular benchmark for image captioning, recognition, and segmentation is the recent Microsoft Common Objects in Context (MS-COCO)~\cite{Lin:2014:COCO}. The dataset consists of
$123$ thousand images, each labelled with per-instance
segmentation masks of $80$ classes. The number of unique objects for each image can vary between $0$ and $18$. Around $700$ images in the training dataset do not contain any of the $80$ classes and there are only a handful of images that have more than $10$ tags. The majority of the images contain between one and three labels. We use $82783$
images as training and validation split ($90$\% - $10\%$), and the remaining $40504$ images as test data. We predict the cardinality of objects in the scene with a mean absolute error of $0.74$ and a standard deviation of $0.86$.
\input{tables/allcoco-multilabel}
Fig.\xspace~\ref{fig:curves-mlic}(b) shows a significant improvement in precision and recall, and consequently the F1 score, using our deep set network compared to the softmax and binary cross-entropy classifiers for all ranking values $k$. We also outperform the state-of-the-art multi-label classifier CNN-RNN~\cite{Wang_2016_CVPR} for the reported value of $k=3$. Our results, listed in Tab.\xspace~\ref{table:allcoco-multilabel}, show around $7$ percentage points improvement for the F1 score on top of the baseline classifiers and about $3$ percentage points improvement compared to the state of the art on this dataset.
Examples of perfect label prediction using our proposed approach are shown in Fig.\xspace~\ref{fig:Results1}. The deep set network can properly recognise images with no labels at all, as well as images with many tags.
We also investigated failure cases where either the cardinality CNN or the classifier fails to make a correct prediction. We showcase some of these cases in Fig.\xspace~\ref{fig:Results2}. We argue here that some of the failure cases are simply due to a missed ground truth annotation, such as the left-most example, while some are actually semantically correct \wrt the cardinality prediction but are penalised during evaluation because a particular object category is not available in the dataset. This is best illustrated in the second example in Fig.\xspace~\ref{fig:Results2}. Here, our network correctly predicts the number of objects in the scene, which is two; however, the can does not belong to any of the 80 categories in the dataset and is thus not annotated. Similar situations also appear in other images further to the right.
\subsection{Object Counting}
\label{sec:crowd-counting}
To show the robustness of our cardinality loss, we first evaluate our cardinality estimation on the common crowd counting application. To this end, we apply our approach on the widely used UCSD dataset~\cite{chan2008privacy} and compare our results to four state-of-the-art approaches~\cite{arteta2014interactive,onoro2016towards,pham2015count,zhang2015cross}.
The UCSD dataset includes a $2000$-frame video sequence captured by a fixed outdoor surveillance camera. In addition to the video, the region of interest (ROI), the perspective map of the scene and the location annotations of all pedestrians in each frame are also provided.
\myparagraph{Implementation details.} We build our cardinality network on top of the well-known AlexNet~\cite{Krizhevsky:2012:NIPS} architecture. However, we replace the first convolutional layer with a single-channel filter to accept grayscale images as input, and the last fully connected layer with a two-output layer, similar to the case above (\cf~Sec.\xspace~\ref{sec:multi-label}). To estimate the counts, we calculate the mode of the negative binomial distribution.
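The mode itself admits a closed form; a one-line helper (relying on the standard negative binomial mode identity, stated here without proof):
\begin{verbatim}
import math

def nb_mode(alpha, beta):
    # mode of NB(m; alpha, 1/(1+beta)): floor((alpha-1)/beta) if alpha > 1
    return math.floor((alpha - 1.0) / beta) if alpha > 1.0 else 0

print(nb_mode(8.0, 2.0))   # -> 3
\end{verbatim}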
As input, we use a grayscale image constructed by superimposing all region proposals and their scores generated by an off-the-shelf pedestrian detector (before non-maximum suppression). We use the multi-scale deep CNN approach (MS-CNN)~\cite{Cai:2016:ECCV} trained on the KITTI dataset~\cite{Geiger:2012:CVPR} for our purpose. We found that this input provides a stronger signal than the raw RGB images, yielding better results.
Note that we process the input images with a pedestrian detector; however, we do not use any location or perspective information that is available for this dataset. During learning, we only rely on the object count for each image region.
\input{tables/Crowd_err}
We follow exactly the same data split as~\cite{onoro2016towards}, conducting four separate experiments on the maximal, downscale, upscale and minimal subsets of the UCSD dataset.
In order to train our network, similar to~\cite{onoro2016towards}, we use data augmentation in each experiment by extracting $800$ random patches from each training image and their corresponding
ground truth counts. We also randomly flip each patch during training. To ensure that we can count all pedestrians from the entire image at test time, we choose the patch sizes to be exactly half of the image size ($79\times119$ pixels) and then perform inference on the resulting $4$ non-overlapping regions. The weights are initialised randomly and the network is trained for $100$ epochs.
All hyperparameters are set as in Sec.\xspace~\ref{sec:multi-label}.
\myparagraph{Results.} Tab.\xspace~\ref{table:Crowd_err} shows the mean absolute error between the predicted and the ground truth counts. We show competitive or superior performance in each experiment except for the `minimal' subset. The main reason is that the training set is too small (only $10$ images) in this particular split, and even data augmentation cannot generalise the cardinality model to the test images. Moreover, unlike other methods, we do not utilise any location information but only provide the object count as ground truth.
Considering the overall performance, our approach outperforms state-of-the-art counting approaches that do not use the perspective map (Hydra 2s and 3s) and performs favourably compared to many existing methods that exploit localisation and perspective information.
\myparagraph{Discussion.}
One obvious alternative to our proposed cardinality loss is to directly regress for $m$. This alternative, however, has two main drawbacks. First, it cannot be formulated within a Bayesian set framework to model uncertainty, and second, the regression loss does not yield a discrete distribution and hence does not fit the underlying mathematical foundation of this paper. Nevertheless, we ran the same experiments as above using a standard regression loss but did not reach the same performance.
Using the regression loss we achieve a mean cardinality error (MCE) of $0.83$ on MS COCO, while our loss yields an MCE of $0.74$. This is also reflected in the O-F1 score which drops from $69.4$ to $68.4$ when directly regressing for $m$.
\subsection{Pedestrian Detection}
\label{sec:detection}
\begin{figure}[t]
\centering
\begin{subfigure}[t]{.31\linewidth}
\centering
\includegraphics[width=1\linewidth]{figs/MOT16-RegProps-crp.png}
\caption{Proposals}
\end{subfigure}
\hfill
\begin{subfigure}[t]{.31\linewidth}
\centering
\includegraphics[width=1\linewidth]{figs/MOT16-MS-CNN-crp.png}
\caption{MS-CNN~\cite{Cai:2016:ECCV}}
\end{subfigure}
\hfill
\begin{subfigure}[t]{.31\linewidth}
\centering
\includegraphics[width=1\linewidth]{figs/MOT16-ours-crp.png}
\caption{Our result}
\end{subfigure}
\vspace{-.5em}
\small
\caption{Example pedestrian detection result of our approach. To select relevant detection candidates from an overcomplete set of proposals~\ep{a}, state-of-the-art methods rely on non-maximum suppression (NMS) with a fixed setting~\ep{b}. We show that a better result can be achieved by adjusting the NMS threshold adaptively, depending on the number of instances in each image (3 in this case)~\ep{c}.}
\vspace{-1.5em}
\label{fig:nms}
\end{figure}
In this section, we cast the task of pedestrian detection as a set prediction problem and demonstrate that incorporating cardinality prediction (person count) can be beneficial to improve performance.
To this end, we perform experiments on two widely used datasets, Caltech Pedestrians~\cite{Dollar:2012:PAMI} and MOT16 from the MOTChallenge benchmark~\cite{Milan:2016:MOT16}. Recalling Eqs.\xspace~\eqref{eq:CNN_Eq} and~\eqref{eq:Cardinal_Eq0}, we need two networks with parameters $\mathbf{w}^*$ and $\boldsymbol\theta^*$ for cardinality estimation and detection proposals, respectively. For the cardinality network, we use the exact same architecture and setup as in Sec.\xspace~\ref{sec:crowd-counting} and train it on the training sets of these datasets.
Note that it is not our intention to engineer a completely novel pedestrian detector here. Rather, for $\boldsymbol\theta^*$, we take an off-the-shelf state-of-the-art system (MS-CNN)~\cite{Cai:2016:ECCV} and show how it can be further improved by taking the cardinality prediction into account.
To generate the final detection outputs, most detectors rely on non-maximum suppression (NMS), which greedily picks the boxes with the highest scores and suppresses any boxes that overlap more than a pre-defined threshold $T_O$. This heuristic makes the solution more ad-hoc than what is expressed in our set formulation in Eq.\xspace~\eqref{eq:set_density}. However, we are still able to improve the detector performance by adjusting this threshold for each frame separately. To obtain the final detection output, we use the prediction on the number of people $(m^*)$ in the scene to choose an adaptive NMS threshold for each image. In particular, we start from the default value of $T_O$ and increase it gradually until the number of boxes reaches $m^*$. If the number of final boxes is larger than $m^*$, we pick the $m^*$ boxes with the highest scores, which corresponds to the MAP set prediction as discussed in Sec.\xspace~\ref{sec:inference}. To ensure a fair comparison, we also determine the best (global) value of $T_O=0.4$ for the MS-CNN baseline. Fig.\xspace~\ref{fig:nms} demonstrates an example of the adjusted NMS threshold when considering the number of pedestrians in the image.
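The adaptive procedure can be summarised in a few lines of Python (a sketch; \texttt{nms} stands for any standard NMS routine returning kept indices, and the step size and cap are our choices):
\begin{verbatim}
import numpy as np

def adaptive_nms(boxes, scores, m_star, nms,
                 t0=0.4, step=0.05, t_max=0.95):
    # Raise the overlap threshold from the default t0 until at least
    # m_star boxes survive, then keep the m_star highest-scoring ones
    # (the MAP set prediction).
    t = t0
    keep = nms(boxes, scores, t)
    while len(keep) < m_star and t < t_max:
        t += step
        keep = nms(boxes, scores, t)
    return sorted(keep, key=lambda i: scores[i], reverse=True)[:m_star]
\end{verbatim}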
To quantify the detection performance, we adopt the same evaluation metrics and follow the protocols used on the Caltech detection benchmark~\cite{Dollar:2012:PAMI}. The evaluation metric used here is the log-average miss rate (MR) over false positives per image. Additionally, we compute the F1 score (the harmonic mean of precision and recall). The F1 score is computed from \emph{all} detections predicted by our DeepSet network and is compared with the \emph{highest} F1 score along the MS-CNN precision-recall curve. To calculate MR, we concatenate all boxes resulting from our adaptive NMS approach and vary the threshold over all scores from our predicted sets.
Quantitative detection results are shown in Tab.\xspace~\ref{table:F1 score}. Note that we do not retrain the detector, but are still able to improve its performance by predicting the number of pedestrians in each frame in these two datasets.
\input{tables/F1score}
\section{Introduction}
A (proper vertex) {\em $k$-colouring} of a finite undirected graph $G$ is a function $f:V(G) \rightarrow \{1,2,\ldots,k\}$ such that for every edge $e = uv$ of $G$, $f(u) \neq f(v)$ (we will denote by $[k] = \{1,2,\ldots,k\}$ the set of {\em colours}).
There are variants of vertex colourings that have been of interest. In a {\em list colouring}, for each vertex $v$ there is a finite list (that is, set) $L(v)$ of colours available for use, and then one wishes to properly colour the vertices such that the colour of $v$ is from $L(v)$. If $|L(v)|=k$ for every vertex $v$, then a list colouring is called a {\em $k$-list colouring}. There is a vast literature on list colourings (see, for example, \cite{alon} and \cite{chartrand}, Section 9.2).
We are going to consider a complementary problem, namely colouring the vertices of a graph $G$ where each vertex $v$ has a {\em forbidden} finite set of colours, $r(v) \subset {\mathbb N}$ (we allow $r(v)$ to be equal to the empty set); we call the function $r$ a {\em restraint} on the graph. A restraint $r$ is called an {\em $m$-restraint} if $|r(u)| \leq m$ for every $u\in V(G)$, and $r$ is called a {\em standard $m$-restraint} if $|r(u)| = m$ for every $u\in V(G)$. If $m = 1$ (that is, we forbid at most one colour at each vertex) we omit $m$ from the notation and use the word {\em simple} when discussing such restraints.
A $k$-colouring $c$ of $G$ is {\em permitted} by restraint $r$ (or $c$ is a colouring {\em with respect to $r$}) if for all vertices of $v$ of $G$, $c(v) \not\in r(v)$.
Restrained colourings arise in a natural way as a graph is sequentially coloured, since the colours already assigned to vertices induce a set of forbidden colours on their uncoloured neighbours. Restrained colourings can also arise in scheduling problems where certain time slots are unavailable for certain nodes (c.f.\ \cite{kubale}). Moreover, restraints are of use in the construction of critical graphs (with respect to colourings) \cite{toft}; a $k$-chromatic graph $G = (V,E)$ is said to be {\em $k$-amenable} iff every non-constant simple restraint $r:V \rightarrow \{1,2,\ldots,k\}$ permits a $k$-colouring \cite{amenable,roberts}. Finally, observe that if each vertex $v$ of a graph $G$ has a list of available colours $L(v)$, and, without loss,
\[ L = \bigcup_{v \in V(G)} L(v) \subseteq [k] \]
then setting $r(v) = \{1,2,\ldots,k\} - L(v)$ we see that $G$ is list colourable with respect to the lists $L(v)$ iff $G$ has a $k$-colouring permitted by $r$.
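This correspondence is straightforward to realise in code; a tiny Python helper (names are ours) that turns colour lists into the equivalent restraint:
\begin{verbatim}
def lists_to_restraint(L, k):
    # L maps each vertex to its set of available colours within [k];
    # the equivalent restraint forbids exactly the complementary colours.
    return {v: set(range(1, k + 1)) - set(avail) for v, avail in L.items()}

print(lists_to_restraint({'u': {1, 2}, 'v': {2, 3}}, k=3))
# {'u': {3}, 'v': {1}}
\end{verbatim}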
The well-known {\em chromatic polynomial} $\pi(G,x)$ (see, for example, \cite{chrompolybook}) counts the number of $x$-colourings of $G$. Given a restraint $r$ on a graph $G$, we define the {\em restrained chromatic polynomial} of $G$ with respect to $r$, $\pi_{r}(G,x)$, to be the number of $x$-colourings permitted by the restraint $r$. Note that this function extends the definition of the chromatic polynomial: if $r(v) = \emptyset$ for all vertices $v$, then $\pi_r(G,x) = \pi(G,x)$.
Using standard techniques (i.e.\ the deletion/contraction formula), we can show that the restrained chromatic polynomial $\pi_{r}(G,x)$ is a polynomial function of $x$ for $x$ sufficiently large. Like the chromatic polynomial, the restrained chromatic polynomial of a graph $G$ of order $n$ is monic of degree $n$ with integer coefficients that alternate in sign; unlike the chromatic polynomial, however, the constant term need not be $0$ (one can show that the constant term for any restraint $r$ on $\overline{K_{n}}$ is $(-1)^{n}\prod_{v \in V(G)} |r(v)|$). Also, note that if $r$ is a constant standard $m$-restraint, say $r(v) = S$ for all $v \in V$, then $\pi_{r}(G,x) = \pi(G,x-m)$ for $x$ at least as large as $\mbox{max}(S)$.
Observe that if $r'$ arises from $r$ by a permutation of colours, then $\pi_r(G,x)=\pi_{r'}(G,x)$ for all $x$ sufficiently large. Thus if $\displaystyle{k = \sum_{v \in V(G)} |r(v)|}$ then we can assume (as we shall do for the rest of this paper) that each $r(v) \subseteq [k]$, and so there are only finitely many restrained chromatic polynomials on a given graph $G$. Hence beyond some point (past the roots of all of the differences of such polynomials), the pointwise order of these polynomials no longer changes, so one of them exceeds (or is less than) all of the rest.
As an example, consider the cycle $C_{3}$. There are essentially three different kinds of standard simple restraints on $C_{3}$, namely $r_{1}= [\{1\}, \{1\}, \{1\}]$, $r_{2} = [\{1\}, \{2\}, \{1\}]$ and $r_{3}= [\{1\}, \{2\}, \{3\}]$ (if the vertices of $G$ are ordered as $v_1,v_2,\dots,v_n$, then we usually write $r$ in the form $[r(v_1),r(v_2),\dots,r(v_n)]$). For $x\geq 3$, the restrained chromatic polynomials with respect to these restraints can be calculated as
\begin{eqnarray*}
\pi_{r_1}(C_3,x) & = & (x-1)(x-2)(x-3),\\
\pi_{r_2}(C_3,x) & = & (x-2)(x^2-4x+5), \mbox{ and} \\
\pi_{r_3}(C_3,x) & = & 2(x-2)^2+(x-2)(x-3)+(x-3)^3.
\end{eqnarray*}
where $\pi_{r_1}(C_3,x)<\pi_{r_2}(C_3,x)<\pi_{r_3}(C_3,x)$ holds for $x>3.$
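These three polynomials are easy to confirm by brute force for small $x$; the following Python sketch counts the colourings of $C_3$ permitted by each restraint and checks them against the formulas above:
\begin{verbatim}
from itertools import product

def count_permitted(edges, restraint, x):
    n = len(restraint)
    total = 0
    for c in product(range(1, x + 1), repeat=n):
        if any(c[v] in restraint[v] for v in range(n)):
            continue                      # a forbidden colour is used
        if any(c[u] == c[v] for u, v in edges):
            continue                      # not a proper colouring
        total += 1
    return total

edges = [(0, 1), (1, 2), (0, 2)]          # the cycle C_3
checks = [([{1}, {1}, {1}], lambda x: (x-1)*(x-2)*(x-3)),
          ([{1}, {2}, {1}], lambda x: (x-2)*(x*x-4*x+5)),
          ([{1}, {2}, {3}], lambda x: 2*(x-2)**2+(x-2)*(x-3)+(x-3)**3)]
for r, poly in checks:
    assert all(count_permitted(edges, r, x) == poly(x)
               for x in range(3, 9))
\end{verbatim}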
Our focus in this paper is on the following: given a graph $G$ and $x$ large enough, what standard simple restraints permit the largest/smallest number of $x$-colourings?
In the next section, we give a complete answer to the minimization part of this question, and then turn our attention to the more difficult maximization problem; in the case of complete graphs and trees, we describe the standard simple restraints which permit the largest number of colourings.
\section{Standard Restraints permitting the extremal number of colourings}
The standard $m$-restraints that permit the smallest number of colourings are easy to describe, and are, in fact, the same for all graphs. In \cite{carsten} (see also \cite{donner}) it was proved that if a graph $G$ of order $n$ has a list of at least $k$ available colours at every vertex, then the number of list colourings is at least $\pi(G,k)$ for any natural number $k\geq n^{10}$.
As we already pointed out, given a standard $m$-restraint $r$ on a graph $G$ and a natural number $x\geq mn$, we can view an $x$-colouring permitted by $r$ as a list colouring where each vertex $v$ has a list $L(v)=[x]-r(v)$ of $x-m$ available colours. Therefore, for a standard $m$-restraint $r$ on a graph $G$, $\pi_r(G,x) \geq \pi(G,x-m)$ for any natural number $x\geq n^{10}+mn$. But $\pi(G,x-m)$ is precisely $\pi_{r_{const}^m}(G,x)$, the number of colourings permitted by the {\em constant} standard $m$-restraint in which $\{1,2,\ldots,m\}$ is restrained at each vertex. In particular, for any graph $G$, the constant standard $m$-restraints always permit the smallest number of colourings (provided the number of colours is large enough).
The more difficult question is which standard $m$-restraints permit the largest number of colourings; this appears hard even for standard simple restraints, so we will focus on the simple case. As we shall see, the extremal simple restraints differ from graph to graph. We investigate the extremal problem for two important families of graphs: complete graphs and trees.
\subsection{Complete graphs}
First, we prove that for complete graphs, the standard simple restraints that allow for the largest number of colourings are obtained when all vertices have different restrained colours.
\begin{theorem}\label{completethm}
Let $r: \{ v_{1}, v_{2}, \ldots, v_{n} \} \longrightarrow [n]$ be any standard simple restraint on $K_{n}$. Then for all $x \geq n$, $ \pi_{r}(K_{n}, x) \le \pi_{r'}(K_{n}, x)$, where $r'(v_{i})=i$ for all $i \le n$.
\end{theorem}
\begin{proof}
We show that if two vertices of a complete graph have the same forbidden colour, then we can improve the situation, colouring-wise, by reassigning the restraint at one of these vertices to a colour not forbidden elsewhere.
Let $r_{1}: \{ v_{1}, v_{2}, \ldots, v_{n} \} \longrightarrow [n]$ be a standard simple restraint on $K_{n}$ with $r_{1}(v_{i}) = r_{1}(v_{j})= t$ for some $i \neq j$, and suppose there is an element $l \in [n]$ such that $l \notin r_{1}(V(K_{n}))$. Then setting
\[ r'_{1}(v_{s}) = \left\{ \begin{array}{ll}
r_{1}(v_{s}) & \mbox{ if } s \neq j\\
l & \mbox{ if } s = j
\end{array} \right.
\]
we will show that $\pi_{r_{1}}(K_{n}, x) \le \pi_{r'_{1}}(K_{n}, x)$ for $x \ge n$.
Let $c$ be a proper $x$-colouring of $K_{n}$ permitted by $r_{1}$. We produce for each such $c$ another proper $x$-colouring $c'$ of $K_{n}$ permitted by $r'_{1}$, in a 1--1 fashion.
We take cases based on $c$.
\begin{itemize}
\item case 1: $c(v_{j}) \neq l$. The proper $x$-colouring $c$ is also permitted by $r'_{1}$, so take $c'=c$.
\item case 2: $c(v_{j}) = l$ and $t$ is not used by $c$ on the rest of $K_{n}$. Let $c'$ be the proper $x$-colouring of $K_{n}$ with $c'(v_{u})= c(v_{u})$ if $u \neq j$ and $c'(v_{j})= t$. This gives us a proper $x$-colouring $c'$ permitted by $r'_{1}$.
\item case 3: $c(v_{j}) = l$ and $t$ is used somewhere on the rest of $K_{n}$ by $c$, say at vertex $v_{k}$. Let $c'$ be the proper $x$-colouring of $K_{n}$ with $c'(v_{u}) = c(v_{u})$ if $u \notin \{j, k\}$, $c'(v_{j})= t$ and $c'(v_{k})=l$. This gives us a proper $x$-colouring $c'$ permitted by $r'_{1}$.
\end{itemize}
No colouring from one case is a colouring in another case and different colourings $c$ give rise to different colourings $c'$ within each case. Therefore, we have $\pi_{r_{1}}(K_{n}, x) \le \pi_{r'_{1}}(K_{n}, x)$ for $x \ge n$.
If $r$ is not 1--1, we start with $r_{1} = r$ and repeat the argument until we arrive at a simple restraint $r^{\ast}$ that is 1--1 on $V(K_{n})$ with $\pi_{r}(K_{n}, x) \le \pi_{r^{\ast}}(K_{n}, x)$ for $x \ge n$. Clearly $r^{\ast}$ arises from $r'$ by a permutation of colours, so $\pi_{r}(K_{n}, x) \le \pi_{r^{\ast}}(K_{n}, x) = \pi_{r'}(K_{n}, x)$ for $x \ge n$ and we are done.
\end{proof}
\subsection{Trees}
We now consider extremal simple restraints for trees, but first we need some notation.
Suppose $G$ is a connected bipartite graph with bipartition $(A,B)$. Then a standard simple restraint is called an \textit{alternating restraint}, denoted $r_{alt}$, if $r_{alt}$ is constant on each of $A$ and $B$ individually but $r_{alt}(A)\neq r_{alt}(B)$. We show that for trees the alternating restraints permit the largest number of colourings.
We will also need some further notation and a lemma. If $r$ is a restraint on $G$ and $H$ is an induced subgraph of $G$ then $r|_H$, the {\em restriction of $r$ to $H$}, denotes the restraint function induced by $r$ on the vertex set of $H$ (if $A$ is a vertex subset of $G$ then $G_A$ is the subgraph induced by $A$).
\begin{lemma}\label{tree3}
Let $T$ be a tree on $n$ vertices and $r:V(T)\rightarrow [n]$ be a $2$-restraint such that there is at most one vertex $w$ of $T$ with $|r(w)| = 2$. Then for any $k \geq \operatorname{max}\{3,n\}$, $\pi_r(T,k) > 0$.
\end{lemma}
\begin{proof} The proof is by induction on $n$. For $n= 1$ the claim is trivial, so we assume that $n \geq 2$. As $T$ has at least two leaves and at most one vertex $w$ of $T$ has $|r(w)| = 2$, we may choose a leaf $u$ of $T$ with $|r(u)|\leq 1$; let $v$ be the stem of $u$. By induction we can colour $T-u$ with respect to $r|_{T-u}$. As $k \geq 3$, there is a colour different from the (at most one) colour in $r(u)$ and the colour assigned to $v$, so we can extend the colouring to one permitted by $r$ on all of $T$.
\end{proof}
\begin{theorem}\label{treemaxmin}Let $T$ be a tree on $n$ vertices and let $r:V(T)\rightarrow [n]$ be a standard simple restraint that is not an alternating restraint. Then for $k \geq n$, $$\pi_r(T,k) < \pi_{r_{alt}}(T,k).$$
\end{theorem}
\begin{proof}
We proceed by induction on $n$. We leave it to the reader to check the basis step $n=2$. Suppose that $n\geq 3$, let $u$ be a leaf of $T$ and let $v$ be the neighbour of $u$. Also, let $v_1,v_2,\dots, v_m$ be the vertices of the set $N(v)-\{u\}$. Let $T'=T-u$ and $T''=T-\{u,v\}$, and let $T^i$ be the connected component of $T''$ which contains the vertex $v_i$. Given a simple restraint $r$ on $T$, we consider two cases:
\begin{itemize}
\item case 1: $r(u)=r(v).$\newline
Once all the vertices of $T'$ are coloured with respect to $r|_{T'}$, there are $k-2$ choices for $u$, since $u$ can receive neither the colour $r(u)$ nor the colour assigned to $v$, which is different from $r(u)$. Thus,
\begin{equation}\label{treecase1}
\pi_r(T,k)=(k-2)\pi_{r|_{T'}}(T',k).
\end{equation}
\item case 2: $r(u)\neq r(v).$\newline
In this case we define $x_{n-1}^r$ (respectively $y_{n-1}^r$) to be the number of $k$-colourings of $T'$ permitted by $r|_{T'}$ where $v$ gets (respectively does not get) the colour $r(u)$. Now it can be verified that $\pi_{r|_{T'}}(T',k)=x_{n-1}^r+y_{n-1}^r$ and $\pi_r(T,k)=(k-1)x_{n-1}^r+(k-2)y_{n-1}^r$. In other words,
\begin{equation}\label{treecase2}
\pi_r(T,k)=(k-2)\pi_{r|_{T'}}(T',k)+x_{n-1}^r.
\end{equation}
Also let us define a restraint function $r_i:V(T^i)\rightarrow {\mathbb N} $ on each component $T^i$ for $i=1,\dots m$ as follows:
\begin{enumerate}
\item If $r(v_i)=r(u)$ then $r_i(w):=r(w)$ for each $w\in V(T^i)$
\item If $r(v_i)\neq r(u)$ then \[r_i(w) := \left\{ \begin{array}{ll}
\{r(v_i),r(u)\} & \mbox{ if $w=v_i$}\\
r(w) & \mbox{ if $w\neq v_i$ }
\end{array} \right. \]
\textit{for each } $w\in V(T^i).$
\end{enumerate}
Now, $\displaystyle{x_{n-1}^r=\prod_{i=1}^m \pi_{r_i}(T^i,k)}$ which is strictly larger than $0$ by Lemma~\ref{tree3}.
\end{itemize}
By comparing Equations (\ref{treecase1}) and (\ref{treecase2}), it is clear that $\pi_r(T,k)$ will be larger in case 2. Since $r|_{T^i}(w)\subseteq r_i(w)$ for every vertex $w$ of $T^i$, $\pi_{r_i}(T^i,k)$ is maximized when $r(v_i)=r(u)$, that is, when $r_i$ and $r|_{T^i}$ coincide for each $i=1,\ldots, m$. Moreover, by the induction hypothesis, $\pi_{r|_{T^i}}(T^i,k)$ is maximized when $r|_{T^i}$ is alternating on $T^i$ for each $i=1,\ldots, m$, and $\pi_{r|_{T'}}(T',k)$ is maximized when $r|_{T'}$ is alternating on $T'$. Hence, $\pi_r(T,k)$ attains its maximum value exactly when $r$ is alternating on $T$, and this value is strictly larger than all the others. The result follows.
\end{proof}
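As a sanity check of Theorem~\ref{treemaxmin} (ours, not part of the original argument), one can enumerate all standard simple restraints on a small tree and confirm that the alternating ones permit the most colourings:
\begin{verbatim}
# Brute-force check on the path P_4 with x = 6 colours (x >= n = 4).
from itertools import product

n, x = 4, 6
edges = [(0, 1), (1, 2), (2, 3)]  # the path v_1 - v_2 - v_3 - v_4

def count(restraint):
    """Number of proper x-colourings c with c(v_i) != restraint[i]."""
    total = 0
    for c in product(range(1, x + 1), repeat=n):
        if any(c[u] == c[v] for u, v in edges):
            continue
        if any(c[i] == restraint[i] for i in range(n)):
            continue
        total += 1
    return total

counts = {r: count(r) for r in product(range(1, n + 1), repeat=n)}
best = max(counts, key=counts.get)
# Bipartition of P_4 is {v_1, v_3} and {v_2, v_4}; alternating restraints
# such as (1, 2, 1, 2) should attain the maximum.
print(best, counts[best], counts[(1, 2, 1, 2)])
\end{verbatim}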
\section{Concluding remarks and open problems}
It is worth noting that for complete graphs and trees the simple restraints which maximize the restrained chromatic polynomials are all minimal colourings, that is, proper colourings with the smallest possible number of colours. One might wonder whether this always holds, but unfortunately it does not. Consider the graph $G$ in Figure~\ref{twotriangles}, which has chromatic number $3$. It is easy to see that there is essentially only one standard simple restraint, namely $r_2=[1,2,3,1,2,3]$, which is a proper colouring of the graph with three colours. If $r_1=[1,2,3,1,2,4]$, then a direct computation shows that $$\pi_{r_1}(G,x)-\pi_{r_2}(G,x)=(x-3)^2>0$$ for all $x$ large enough. It follows that the simple restraint which maximizes the restrained chromatic polynomial of $G$ cannot be a minimal colouring of the graph.
\begin{figure} [ht]
\begin{center}
\includegraphics[width=4in]{twotriangles.pdf}
\caption{Graph whose standard simple restraint permitting the largest number of colourings is not a minimal colouring.}
\label{twotriangles}
\end{center}
\end{figure}
We believe, however, that for bipartite graphs the simple restraint which maximizes the restrained chromatic polynomial is a minimal colouring of the graph. More specifically, we propose the following:
\begin{conjecture}\label{bipartiteconjecture}
Let $r: \{ v_{1}, v_{2}, \ldots, v_{n} \} \longrightarrow [n]$ be any standard simple restraint on a bipartite graph $G$ and $x$ large enough. Then,
$\pi_{r}(G, x) \le \pi_{r_{alt}}(G, x)$.
\end{conjecture}
We have verified that the conjecture above holds for all bipartite graphs of order at most $6$. Indeed, among all graphs of order at most $6$, there are only two graphs for which the standard simple restraint maximizing the restrained chromatic polynomial is not a minimal colouring of the graph. We therefore suggest the following problem:
\begin{problem} Is it true that for almost all graphs the standard simple restraint which maximizes the restrained chromatic polynomial is a minimal colouring of the graph?
\end{problem}
\vskip0.4in
\noindent {\bf \large Acknowledgments} \\
The authors would like to thank the referee for their helpful and insightful comments.
This research was partially supported by a grant from NSERC.
\section{Conclusion}
\vspace{-3mm}
In this paper, we introduced a novel mechanism, NRML, which incorporates another aspect of human learning into current meta-learning paradigms. In particular, inspired by how distinct parts of the brain are highly specialized for different types of tasks, we exploit the scaling factor in the BN layer associated with each convolutional layer to select the neurons that are activated by certain tasks in the training and validation stages of meta-learning. We found that NRML outperforms the state-of-the-art MAML algorithm on the Omniglot and MiniImageNet datasets. We note that NRML can be applied to all existing meta/few-shot learning baselines.
\clearpage
\section{Experiments} \label{exp-main}
\vspace{-3mm}
In this section, we describe the few-shot learning benchmarks, datasets and baselines used in our evaluation, as well as the implementation details. Our code is available at~\url{https://github.com/DameDollaForThree/NRML}.
\vspace{-3mm}
\subsection{Datasets}
\vspace{-2mm}
Two few-shot learning datasets, namely Omniglot and MiniImageNet, are used to evaluate the proposed NRML algorithm along with the standard MAML algorithm \cite{finn2017model} as the baseline.
{\bf{Omniglot}}~\cite{lake2011one} contains 1623 different classes. Each class corresponds to a character from one of 50 different alphabets. There are $20$ images associated with each character, each drawn by a different subject via Amazon's Mechanical Turk. We divide the dataset into 1200 characters for training, 100 characters for validation, and 323 characters for testing. These characters are chosen randomly; however, by fixing the random seed for this selection, we ensure that the train, validation and test classes are disjoint sets.
{\bf{Mini-ImageNet}} is a dataset proposed for few-shot learning evaluation, constructed from ImageNet images. In particular, Mini-ImageNet consists of 100 classes, with 600 images of size $64 \times 64$ in each class. We adopt the standard partitioning of Mini-ImageNet, which divides the 100 classes into 64 classes for training, 16 for validation, and 20 for testing. These partitions are consistent across all of our experiments.
\vspace{-3mm}
\subsection{Neural Network Architectures}
Our models generally follow the same architecture as discussed in~\cite{finn2017model}. The model we employ for Omniglot comprises 4 blocks, each of which starts with a convolutional layer with $3\times3$ kernels and 64 channels, followed by a ReLU non-linearity and batch normalization. Strided convolutions are used rather than max-pooling to reduce the dimension. A single fully-connected layer is then placed after the last block as the classifier.
For Mini-ImageNet, the network has 4 blocks as well, which includes a convolutional layer with $3\times3$ kernels and 32 channels, followed by a ReLU non-linearity, batch normalization, and $2\times2$ max-pooling (the stride of convolutional layers is 1). The last layer is a fully connected layer with $N$ output nodes.
Though we use the specific architectures discussed above in our evaluation, we note that our NRML algorithm can be applied to any existing CNN architecture, including VGG, ResNet, and EfficientNet.
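For concreteness, a minimal PyTorch sketch of the Mini-ImageNet backbone described above is given below (our own transcription under the stated layer ordering; the authors' code is available at the repository linked above).
\begin{verbatim}
# A minimal sketch of the 4-block Mini-ImageNet backbone: 3x3 conv with
# 32 channels, ReLU, BatchNorm, 2x2 max-pooling, then one linear classifier.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, stride=1),
        nn.ReLU(),
        nn.BatchNorm2d(out_ch),  # bn.weight is the scaling factor gamma
        nn.MaxPool2d(2),
    )

class MiniImageNetBackbone(nn.Module):  # name is ours, for illustration
    def __init__(self, n_way, in_ch=3, hidden=32):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(in_ch, hidden), conv_block(hidden, hidden),
            conv_block(hidden, hidden), conv_block(hidden, hidden),
        )
        # 64x64 input is halved by each of the 4 poolings: 64 -> 4
        self.classifier = nn.Linear(hidden * 4 * 4, n_way)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

logits = MiniImageNetBackbone(n_way=5)(torch.randn(2, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 5])
\end{verbatim}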
\vspace{-2mm}
\subsection{Results}
In our experiments, the $(N, K)$ combinations of $(5,1)$, $(5,5)$, $(20,1)$, and $(20,5)$ were adopted for meta-training, where $N$ and $K$ stand for $N$-way $K$-shot learning. Then, $K^{val}=5$ and $K^{val}=15$ were used for validation. All experiments were repeated 3 times and we report the average over the 3 runs. The evaluation results are summarized in Tables \ref{tab:omni1}, \ref{tab:omni2}, \ref{tab:mini1}, and \ref{tab:mini2}.
\begin{table}[ht]
\caption{Accuracy results on the Omniglot dataset over $N$-way, $K^{tr}$-shot downstream
tasks with $K^{val}=15$ for each task. $\pm$ indicates the standard deviation.
The top supervised result is reported in bold. The baseline results are from~\cite{finn2017model}.}
\centering
\resizebox{0.85\columnwidth}{!}{
\begin{tabular}{llllll}
\toprule
$(N, K^{tr})$ & (5,1) &(5,5) &(20,1)& (20,5) \\
\midrule
Baseline & $94.17 \pm 1.68$ & $98.47 \pm 0.14$& $85.22 \pm 0.84$ & $93.99 \pm 0.50$ \\
Ours & $\bf{95.51 \pm 0.32}$ & $\bf{98.68 \pm 0.04}$& $\bf{86.72 \pm 0.14}$ & $\bf{94.82 \pm 0.26}$\\
\midrule
\end{tabular}
}
\label{tab:omni1}
\vspace{-5.5mm}
\end{table}
\begin{table}[ht]
\caption{Accuracy results on the Omniglot dataset over $N$-way, $K^{tr}$-shot downstream
tasks with $K^{val}=5$ for each task. $\pm$ indicates the standard deviation.
The top supervised result is reported in bold. The baseline results are from~\cite{finn2017model}.}
\centering
\resizebox{.85\columnwidth}{!}{
\begin{tabular}{llllll}
\toprule
$(N, K^{tr})$ = & (5,1) &(5,5) &(20,1)& (20,5) \\
\midrule
Baseline & $94.86 \pm 0.12$ & $98.47 \pm 0.40$& $86.00 \pm 0.94$ & $93.98 \pm 0.10$ \\
Ours & $\bf{96.00 \pm 0.00}$ & $\bf{98.71 \pm 0.05}$& $\bf{86.98 \pm 0.29}$ & $\bf{94.35 \pm 0.20}$ \\
\midrule
\end{tabular}
}
\label{tab:omni2}
\vspace{-5mm}
\end{table}
\begin{table}[h!]
\caption{Accuracy results on the Mini-ImageNet dataset over $N$-way, $K^{tr}$-shot downstream
tasks with $K^{val}=15$ for each task. $\pm$ indicates the standard deviation.
The top supervised result is reported in bold. The baseline results are from~\cite{finn2017model}.}
\centering
\resizebox{.85\columnwidth}{!}{
\begin{tabular}{llllll}
\toprule
$(N, K^{tr})$ = & (5,1) &(5,5) &(20,1)& (20,5) \\
\midrule
Baseline & $46.97 \pm 0.02$ & $62.37 \pm 0.47$& $17.88 \pm 0.41$ & $30.23 \pm 0.30$ \\
Ours & $\bf{48.03 \pm 0.19}$ & $\bf{63.05 \pm 0.54}$& $\bf{18.57 \pm 0.18}$ & $\bf{30.74 \pm 0.25}$ \\
\midrule
\end{tabular}
}
\label{tab:mini1}
\vspace{-5mm}
\end{table}
\begin{table}[h!]
\caption{Accuracy results on the Mini-ImageNet dataset over $N$-way, $K^{tr}$-shot downstream
tasks with $K^{val}=5$ for each task. $\pm$ indicates the standard deviation.
The top supervised result is reported in bold. The baseline results are from~\cite{finn2017model}.}
\centering
\resizebox{.85\columnwidth}{!}{
\begin{tabular}{llllll}
\toprule
$(N, K^{tr})$ = & (5,1) &(5,5) &(20,1)& (20,5) \\
\midrule
Baseline & $46.74 \pm 0.47$ & $59.41 \pm 0.32$& $17.55 \pm 0.20$ & $30.03 \pm 0.59$ \\
Ours & $\bf{48.03 \pm 0.65}$ & $\bf{60.22 \pm 0.32}$& $\bf{18.03 \pm 0.21}$ & $\bf{30.72 \pm 0.48}$ \\
\midrule
\end{tabular}
}
\label{tab:mini2}
\vspace{-3mm}
\end{table}
{\bf{Omniglot}} As can be seen from Tables \ref{tab:omni1} and \ref{tab:omni2}, our NRML approach consistently outperforms the MAML baseline in all 8 cases. In particular, with $(N, K^{tr}, K^{val}) = (20, 1, 15)$, NRML achieves its largest advantage over MAML, 1.5\%.
{\bf{Mini-ImageNet}} A similar conclusion can be drawn from Tables \ref{tab:mini1} and \ref{tab:mini2}: NRML consistently achieves better results than MAML on the Mini-ImageNet dataset. $(N, K^{tr}, K^{val}) = (5, 1, 5)$ yields the largest gap between our approach and the baseline, about 1.3\%.
It can be observed that NRML tends to achieve a higher improvement over MAML with relatively lower $K^{tr}$ (i.e., $K^{tr}=1$ vs. $K^{tr}=5$). This is potentially because fewer samples introduce higher uncertainty and more noise compared with higher $K^{tr}$, while our routing algorithm reduces the effect of such uncertainty and improves the generalization of the network.
An intriguing feature of our NRML approach is that we have a clear picture of how much of each layer in the network is taken up by the meta-tasks during training. We also have a good indication of how many of the neurons selected for previously learned tasks are being re-selected (reused). Throughout training, we observed that better performance is obtained when the number of selected neurons decreases with depth, and vice versa. This observation is consistent with Fig. 3 of~\cite{CLNP-2019} and is expected, since neurons in deeper layers become more task-specific. In our experiments on Omniglot and MiniImageNet, the percentages of selected neurons per layer are around $p=[\text{1st, 2nd, 3rd, 4th}]=[70\%, 60\%, 30\%, 20\%]$. We also observed that the first layer grows no new neurons after the early tasks are fed to the neural network, which implies that the neurons fired during the training of the early tasks, together with their corresponding features, appear sufficient for the training of subsequent tasks. This can be explained by the fact that the features learned by neurons in lower layers are more general, and thus more transferable, than the features of higher layers, which are known to be task-specific.
\section{Introduction}
Few-shot classification, or learning a classifier that generalizes to unseen classes from a limited number of labeled examples, has attracted remarkable attention~\cite{vinyals2016matching, koch2015siamese, chen2019closer}. Meta-learning algorithms can learn to quickly adapt to unseen tasks by extracting transferable knowledge from few examples~\cite{mishra2017simple, finn2017model, snell2017prototypical}. Broadly speaking, meta-learning algorithms can be divided into two main approaches. The first, ``learning to compare'' (non-parametric), tends to learn an appropriate embedding function so that prediction is based on the distance of a new example to the labeled examples~\cite{snell2017prototypical, matchingnet2016, Ren2018, Liu2018}. The second is ``learning to optimize'' (optimization-based), which tends to develop a learning algorithm that can learn a new episode efficiently via only a few steps of parameter updating~\cite{finn2017model, sachin2016, Mishra2017, reptile2018, rusu2018}. Non-parametric few-shot learning methods have the advantage that the learned embedding space can be used in the target task without explicitly designing the architecture for the desired number of classes; however, they do not adapt the network weights to the target task. On the other hand, optimization-based algorithms have the power to adapt to an unseen task with gradient descent and can better exploit the information provided for training on a new unseen task~\cite{Vahidian-ICLR-2021}. Compared with metric-based algorithms, which are more suitable for non-parametric learners, optimization-based algorithms are simpler but also more general, and have thus been applied to a variety of applications. In this paper, we elaborate on the ``learning to optimize'' framework, although the other framework can also be incorporated into our model. Our proposed method makes optimization-based methods much more efficient in terms of memory usage, since they do not need to keep the whole training path in memory and only update selected neurons (filters in CNNs).
In this paper, we rely on recent advances in the study of the human brain and memory, which is often referred to as an information-processing system. It plays a pivotal role in human intelligence and has inspired many well-known machine learning models. It is broadly recognized in neuroscience that different parts of the brain are highly specialized for distinct types of tasks~\cite{brain}. This contributes not only to high efficiency in producing a response but also to the surprising efficacy of the brain in learning novel tasks.
Episodic memory in the brain, a form of long-term memory, is the collection of past human experiences, which can be retrieved and exploited by the brain when tackling problems that have never occurred before. Different memories activate different neurons in the brain, guiding us to perform well on what we have not done before. Inspired by these observations in neuroscience, we propose to emulate this learning process within existing meta/few-shot learning algorithms, which strive to reduce the gap between human learning and machine learning models. More specifically, we describe Neural Routing in Meta Learning (NRML), a novel method that learns each specific task by involving only a small portion of the model in a meta-learning algorithm. The small portion of neurons is selected by leveraging the scaling factors $\gamma$ in the batch normalization (BN) layers associated with each convolutional layer as indicators of the importance of the corresponding neurons. This means that only a small fraction of neurons needs to be updated at each back-propagation step, which is desirable for training a large model, eases the difficulty of learning, and achieves better performance.
\section{Method}
In this section, we provide the required preliminaries and describe how our approach, inspired by human learning, improves the performance of current meta-learning algorithms by selectively using only parts of the model, depending on the input tasks, during meta-training and validation.
\input{Prelim}
\subsection{Neuron Selection in Meta-Tasks}
While machine learning systems have surpassed humans at many tasks, they generally need far more data to achieve similar performance. The human brain can recognize new object categories from a few example images. It is not entirely fair to compare humans to algorithms learning from scratch, because humans enter a task with a huge amount of prior information, encoded in their brains and DNA; rather than learning from scratch, they recombine a set of skills encoded in their neurons. To emulate human learning, we exploit the meta-learning paradigm in a novel way, encoding specific knowledge in the distinct neurons of the neural network that specialize in it. In NRML, we describe a simple yet effective mechanism: an input-dependent dynamic neuron selection in convolutional neural networks (CNNs) within a few-shot learning paradigm, as shown in Fig.~\ref{fig:model}. We adopt a custom meta-learning network architecture which consists of four stacked convolutional blocks with a BN layer after each convolutional layer, with neuron-wise scaling/shifting parameters. Owing to the fact that each neuron extracts a certain pattern related to the task at hand~\cite{Like-What-You-Like}, we make use of the selectivity knowledge of neurons during training and validation.
\begin{figure}[htbp]
\centering
\includegraphics[width=1\columnwidth,height=6cm]{model.pdf}
\caption{The overview of NRML.}
\label{fig:model}
\vspace{-5mm}
\end{figure}
Herein, we can directly leverage the $\gamma$ parameters in the BN layers as the scaling factors we need for neuron selection. In effect, our approach associates a scaling factor $\gamma$ with each neuron (channel), which is multiplied with the output of that neuron; this facilitates its implementation without introducing any modification to existing CNN architectures. To begin, we consider a model represented by a parametrized function $f_\theta$ with parameters $\theta$. We sample a batch of tasks as described in Section~\ref{prelim}. Each task is fed to the network in the inner loop and we train the associated scaling factors, backpropagating the gradient only to these scaling factors $\gamma$. The neurons in each layer are then sorted by the values of their scaling factors. Finally, we update the neurons whose corresponding $\gamma$ is among the top $p\%$; the remaining neurons are not updated for that particular task. When adapting to a task $\mathcal{T}_i$, the selected neurons' parameters $\theta$ become $\theta'_i=\theta -\eta \nabla_\theta {\mathcal{L}}_{{\mathcal{T}}_i}(f_\theta)$ with $\eta$ being the step size. A hedged sketch of this selection step is given below.
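The following sketch (ours, not the authors' released implementation) assumes PyTorch modules and that the per-task loss has already been backpropagated; the top-$p\%$ indices of $|\gamma|$ pick the filters to update.
\begin{verbatim}
# Select filters by the BN scaling factor and update only those filters.
import torch

def selected_filter_indices(bn, p):
    """Indices of channels whose |gamma| is in the top p fraction."""
    gamma = bn.weight.detach().abs()   # bn.weight is the BN scaling factor
    k = max(1, int(p * gamma.numel()))
    return torch.topk(gamma, k).indices

def masked_inner_update(conv, bn, lr, p):
    """SGD step on the selected filters of `conv`; the rest stay frozen."""
    idx = selected_filter_indices(bn, p)
    with torch.no_grad():
        if conv.weight.grad is not None:
            conv.weight[idx] -= lr * conv.weight.grad[idx]
        if conv.bias is not None and conv.bias.grad is not None:
            conv.bias[idx] -= lr * conv.bias.grad[idx]
\end{verbatim}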
After feeding each task in each batch in the inner loop, we feed the validation data and the loss across all tasks within each batch is accumulated to train the model parameters in the outer loop by optimizing for the performance of $f_{\theta'_{i}}$ with respect to $\theta$ across tasks. The expected meta-objective is defined as
\begin{equation}
{\rm{min}}_{\theta} \sum_{\mathcal{T}_i} {\mathcal{L}}_{{\mathcal{T}}_i}(f_{\theta'_i}) = \sum_{\mathcal{T}_i} {\mathcal{L}}_{{\mathcal{T}}_i}(f_{\theta -\eta \nabla_\theta {\mathcal{L}}_{{\mathcal{T}}_i}(f_\theta)})
\label{eq1}
\end{equation}
Then, the same process as in the inner loop is used to select the neurons that are strongly fired. In particular, the gradient of the loss in Eq.~\ref{eq1} is backpropagated to the scaling factors of the BN layers, the top $p\%$ of them are selected, and the corresponding neuron parameters are updated as
\begin{equation}
\theta \leftarrow {\theta-\alpha \nabla_\theta \sum_{\mathcal{T}_i} {\mathcal{L}}_{{\mathcal{T}}_i}(f_{\theta'_i}) }
\end{equation}
\noindent where $\alpha$ is the meta step size. Our goal is based on the simple intuition that if a neuron is activated by a certain task, then this neuron is able to extract properties that relate to that task. Such encoded knowledge in each neuron is valuable for the network, since it provides an explanation for the final prediction of the model. As a result, we propose to strengthen the weights of the selected neurons. We also tried selecting neurons based on the absolute value of the activation of each neuron for each task; however, we found that the scaling factor of BN better captures what kind of inputs fire each neuron.
Once meta-learning is complete, we evaluate our meta-learned model on a set of tasks $\{{\mathcal{T}'}_{i}\}$ which share neither instances nor classes with the tasks $\{{\mathcal{T}}_1,...,{\mathcal{T}}_n\}$, in order to evaluate its capability to adapt to new unseen tasks. The pseudo-code of the NRML algorithm is described in Algorithm~\ref{alg:1}.
\SetKwComment{Comment}{/* }{ */}
\begin{algorithm}
\caption{The NRML Algorithm.}\label{alg:1}
\SetKwInOut{Require}{require}
\Require{Labeled dataset
$\mathcal{L}=\{x_i, y_i\}$ }
\Require{$N$: class-count, $N_\mathit{MB}$: meta-batch size}
\Require{$\alpha ,\eta$: step size hyperparameters}
randomly initialize $\theta$ \;
\While{not done}{
\For {i in $1, \ldots, N_\mathit{MB}$} {
Sample $N$ classes from the dataset and $K$ shots from each class for task ${\cal{T}}_i$ for meta-training: $D_i=\{(x_1,y_1),...,(x_N,y_N)\}$\;
\For {\textbf{each} ${\cal{T}}_i$}{
Feed ${\cal{T}}_i$ to the neural net\;
Evaluate ${\nabla _{\theta }}{{\cal{L}}_{{{\cal{T}}_i}}}\left( {{f_{\theta }}} \right)$\;
Backprop the loss and update only the parameters of the batch norm layer, ${BN}_{\gamma, \beta}(x_i)$\;
Store the gradient of the rest of parameters\;
Select the filters with the largest scaling factor values (top $p\%$), $\Omega\leftarrow$ index of the selected filters\;
Update the parameters of the filters indexed in $\Omega$ by the stored gradient and freeze the rest\;
Generate a validation set for ${{{\mathcal{T}}_i}}$: ${D_i}'=\left\{ \left ( {x_1}', y_1 \right ),..., \left ( {x_N}', y_N \right ) \right\}$ for meta-validation\;
}
Update $\theta =\theta -\alpha \sum\nolimits_{{{\cal{T}}_i}} {\nabla _{\theta }}{\mathcal{L}}_{\mathcal{T}_i}\left ( f_{{\theta}'_i} \right )$ using meta-validation data~${D_i}'$\;
}
}
\textbf{return} $\theta$
\end{algorithm}
\section{Preliminaries}\label{prelim}
\noindent \textbf{Batch normalization (BN)}. Let $x_{in}$ and $x_{out}$ be the input and output of a BN layer, and let $\mathcal{B}$ denote the current minibatch; the BN layer performs the following transformation: $\hat x =\frac{x_{in}-\mu_{\mathcal{B}}}{\sqrt{\sigma_{\mathcal{B}}^2+\epsilon}}$; $x_{out}=\gamma \hat x+\beta$, where $\mu_{\mathcal{B}}$ and $\sigma_{\mathcal{B}}^2$ are the mean and variance of the input activations over $\mathcal{B}$, and $\gamma$ and $\beta$ are trainable scale and shift parameters.
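As a concrete illustration (ours), the transformation can be written in a few lines of numpy:
\begin{verbatim}
# One-minibatch sketch of the BN transformation with per-feature gamma, beta.
import numpy as np

def batch_norm(x_in, gamma, beta, eps=1e-5):
    mu = x_in.mean(axis=0)     # mean over the minibatch B
    var = x_in.var(axis=0)     # variance over the minibatch B
    x_hat = (x_in - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.random.randn(128, 32)   # minibatch of 128 examples, 32 features
out = batch_norm(x, gamma=np.ones(32), beta=np.zeros(32))
print(out.mean(axis=0).round(6), out.std(axis=0).round(3))
\end{verbatim}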
\noindent \textbf{Generating meta-tasks}. Concretely, we are dealing with $N$-way, $(K^{(tr)}, K^{(val)})$-shot supervised learning tasks $\mathcal{T}_{1}, \mathcal{T}_{2},..., \mathcal{T}_{n}$ drawn from an underlying joint distribution ${\cal{P^T_{X,Y}}}(x_i, y_i)$. Each task consists of two disjoint sets $D_{\mathcal{T}}^{(tr)}$ and $D_{\mathcal{T}}^{(val)}$.
$D_{\mathcal{T}}^{(tr)}$ contains $N \times K^{(tr)}$ data points $(x_i, y_i)$, $i \in \{1, \ldots, N \times K^{(tr)}\}$, such that there are exactly $K^{(tr)}$ samples for each categorical label $y_i \in \{1, \ldots, N\}$. $D_{\mathcal{T}}^{(val)}$ contains another $N \times K^{(val)}$ data points, separate from the ones in $D_{\mathcal{T}}^{(tr)}$, with exactly $K^{(val)}$ samples for each class as well. Our goal is to learn a classifier $f_{\theta}(x)$ to predict $y$ given $x$; in other words, $f_{\theta}(x) = {\rm{arg\,max}}_{y}{{\cal{P^T_{Y|X}}}(y|x)}$. We do not have access to the underlying distribution ${\cal{P^T_{X,Y}}}(x, y)$; rather, we have access to a few samples from the task training set, $D_{\mathcal{T}}^{(tr)}$.
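A sketch of this episode-generation procedure (ours; all names are illustrative) is given below:
\begin{verbatim}
# Sample an N-way (K_tr, K_val)-shot task from a pool of labelled examples.
import random
from collections import defaultdict

def sample_task(examples, n_way, k_tr, k_val):
    """examples: list of (x, y) pairs; returns disjoint D_tr and D_val."""
    by_class = defaultdict(list)
    for x, y in examples:
        by_class[y].append(x)
    classes = random.sample([c for c, xs in by_class.items()
                             if len(xs) >= k_tr + k_val], n_way)
    d_tr, d_val = [], []
    for label, c in enumerate(classes):   # relabel classes as 0..N-1
        xs = random.sample(by_class[c], k_tr + k_val)
        d_tr += [(x, label) for x in xs[:k_tr]]
        d_val += [(x, label) for x in xs[k_tr:]]
    return d_tr, d_val
\end{verbatim}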
\section{Related Work}\label{related-work}
The approach that we propose in this paper addresses meta-learning for classification, which aims to obtain transferable knowledge from a few examples~\cite{hsu2018unsupervised, khodadadeh2019unsupervised, AAL2019} -- and, in our case, from a few neurons. In the remainder of this section, we describe prior work on meta/few-shot learning and on sub-network routing, as these are the topics most related to this work.
{\bf{Meta/Few-Shot Learning.}} Few-shot classification aims at learning a model that can be efficiently adapted to unseen classes from few samples. Early methods~\cite{vinyals2016matching, sachin2016, finn2017model, Mishra2017, rusu2018, reptile2018, Ren2018, AAL2019, rajeswaran2019meta} pose the few-shot classification problem in a learning-to-learn paradigm: they train a deep network over a distribution of related tasks constructed from the support set, and transfer this experience to enhance performance on novel, unseen classes. Concretely,~\cite{vinyals2016matching} learns a feature encoder that is conditioned on the training set in meta-training and, thanks to its non-parametric classifier, does not necessitate any further training during meta-test. The authors in~\cite{sachin2016} take the idea of learning a feature encoder in meta-training further by learning an update rule, via an LSTM, for updating the classifier at meta-test time. \cite{finn2017model} poses the problem as a meta-learning formulation and learns the parameters of a deep network in meta-training such that a neural network initialized with the learned parameters can be quickly adapted to unseen tasks. We refer to~\cite{survey-meta-2022-Timothy,survey-meta-2020-wang} for a comprehensive review of early work.
{\bf{Routing on Neural Networks.}} Routing in deep neural networks refers to activating only some of the modules of a network during training and inference~\cite{Neural-Routing-2021, Vahidian-federated-2021, sara-sabour-capsule}. Recent research has promoted it for CNNs in order to accelerate network inference. In AIG~\cite{Andreas-routing-2020}, BlockDrop~\cite{BlockDrop}, and SkipNet~\cite{SkipNet}, a subset of the needed blocks is learned in order to process a given task. Since deep layers' features may not be required for classification, SACT~\cite{SACT-2016}, Inside Cascaded CNN~\cite{CNN-2017}, and Dynamic Routing~\cite{dynamic-routing-2017, PathNet-2017} propose input-dependent early stopping at inference time. The Routing Convolutional Network (RCN)~\cite{anytime-routing-2021} introduces a routing network aimed at reducing the model's computational cost, at the price of some loss in accuracy. Another closely related line of work is Mixture of Experts (MoE)~\cite{MOE-hinton-2017}, where several sub-networks are combined in an ensemble using weights determined by a gating module. A further stream of work resembling routing on neural networks is dynamic network configuration~\cite{Bengio-2015-routing, Zhourong-2019-routing}, in which the neurons, layers, or other components of the model are dynamically selected for each input task; in these methods, one small sub-module is added at each position of the model to be configured.
Some common weaknesses of these routing approaches are the following: i) they require an extra module (the gater network) that needs to be trained jointly with the backbone in an end-to-end fashion through back-propagation, which consumes additional memory and storage; ii) since they require some parallel modules alongside the main network, their inference efficiency is lower than that of the baseline; iii) the model training is unstable, and they experience accuracy drops while requiring orders of magnitude more labeled training data.
Our proposed approach, NRML, takes advantage of meta/few-shot learning along with the selectivity knowledge of neurons to find an implicit routing on the neural network. The neurons are selected based on their sensitivity (how strongly they are fired) to each task in the meta-training and validation stages. NRML is simple to implement and can be added to any few-shot learning algorithm. Further, it does not require any gater network to be trained along with the main network, and it does not require extra memory. More importantly, it improves the accuracy of meta-learning baselines on downstream unseen tasks.
\section{Introduction}
One of the widely used approaches to determine the melting point of materials via atomistic computer simulation is the so-called Z-method~\cite{Belonoshko2006,Belonoshko2007,Bouchet2009,
Li2011,Belonoshko2012,Stutzmann2015,GonzalezCataldo2016,Anzellini2019,Errandonea2020,Mausbach2020,Baty2021}, which is based on the empirical observation
that the superheated solid at the limit of superheating temperature $T_{LS}$ has the same internal energy as the liquid at the melting temperature $T_m$. This observation has
potential implications for understanding the atomistic melting mechanism from the superheated state~\cite{Davis2011,OlguinArias2019} but has been left mostly unexplored, disconnected
from thermodynamical models of solids.
In the Z-method, the isochoric curve $T(E)$ is computed from simulations at different total energies $E_1, E_2, \ldots$ and the minimum temperature of the liquid branch of this
isochoric curve is identified with the melting temperature $T_m$. This key assumption still lacks a proper explanation in terms of microcanonical thermodynamics of finite systems.
In this work, we study the properties of a recently proposed model~\cite{Montecinos2021} for the configurational density of states (CDOS) of systems with piecewise constant
heat capacity by calculating its canonical and microcanonical caloric curves in terms of special functions. The model presents a first-order phase transition with
metastable regions where the microcanonical curve $T(E)$ shows a so-called \emph{van der Waals loop}~\cite{Umirzakov1999}. The inflection points found in this loop can
be associated with $T_{LS}$ and $T_m$ in the Z-method description of the superheated solid.
This paper is organized as follows. In Section \ref{sec:cdos} we briefly review the definition of the CDOS and the formalism used to compute thermodynamic properties from it.
In Section \ref{sec:model} we revisit the model in Ref.~\cite{Montecinos2021} and provide some interpretation of its parameters. Sections ~\ref{sec:canon} and ~\ref{sec:microcanon}
show the computation of the caloric curves of the solid model in the canonical and microcanonical ensemble, respectively, and we present some concluding remarks in
Section ~\ref{sec:conclusions}.
\section{Configurational density of states and thermodynamics}
\label{sec:cdos}
\noindent
For a classical system with Hamiltonian
\begin{equation}
H(\bm{r}_1, \ldots, \bm{r}_N, \bm{p}_1, \ldots, \bm{p}_N) = \sum_{i=1}^N \frac{{\bm{p}_i}^2}{2 m_i} + \Phi(\bm{r}_1, \ldots, \bm{r}_N),
\end{equation}
we will define the configurational density of states (CDOS) as the multidimensional integral
\begin{equation}
\label{eq:cdos}
\mathcal{D}(\phi) \mathrel{\mathop:}= \int d\bm{r}_1\ldots d\bm{r}_N\: \delta(\Phi(\bm{r}_1, \ldots, \bm{r}_N)-\phi),
\end{equation}
where $\Phi(\bm{r}_1, \ldots, \bm{r}_N)$ is the potential energy describing the interaction between particles. Using the definition of $\mathcal{D}(\phi)$
in (\ref{eq:cdos}), it is possible to rewrite any configurational integral of the form
\begin{equation}
I = \int d\bm{r}_1\ldots d\bm{r}_N\:G(\Phi(\bm{r}_1, \ldots, \bm{r}_N))
\end{equation}
as a one-dimensional integral over $\phi$. In fact, taking $I$ and introducing a factor of 1 as an integral over a Dirac delta function, we have
\begin{equation}
\label{eq:cdos_trick}
\begin{split}
I & = \int d\bm{r}_1\ldots d\bm{r}_N\:G(\Phi(\bm{r}_1, \ldots, \bm{r}_N)) \\
& = \int d\bm{r}_1\ldots d\bm{r}_N\left[\int_{-\infty}^\infty d\phi\delta(\phi-\Phi(\bm{r}_1,\ldots,\bm{r}_N))\right]\:G(\Phi(\bm{r}_1, \ldots, \bm{r}_N)) \\
& = \int_{0}^\infty d\phi\left\{\int d\bm{r}_1\ldots d\bm{r}_N\delta(\phi-\Phi(\bm{r}_1,\ldots,\bm{r}_N))\right\}\:G(\phi) \\
& = \int_{0}^\infty d\phi \mathcal{D}(\phi)G(\phi).
\end{split}
\end{equation}
where we have assumed that $\Phi$ has its global minimum at $\Phi = 0$. One such integral of particular importance is the canonical partition function
$Z(\beta)$, defined as
\begin{equation}
Z(\beta) \mathrel{\mathop:}= \int d\bm{\Gamma}\exp\big(-\beta H(\bm \Gamma)\big),
\end{equation}
where $\bm{\Gamma} = (\bm{r}_1,\ldots, \bm{r}_N, \bm{p}_1, \ldots, \bm{p}_N)$, and which can be computed, following \eqref{eq:cdos_trick}, as
\begin{equation}
\label{eq:zeta}
\begin{split}
Z(\beta) = & \int d\bm{p}_1\ldots d\bm{p}_N \exp\Big(-\beta\textstyle\sum_{i=1}^N \frac{\bm{p}^2_i}{2 m_i}\Big)
\times \int d\bm{r}_1\ldots d\bm{r}_N\:\exp\Big(-\beta\Phi(\bm{r}_1, \ldots, \bm{r}_N)\Big) \\
= &\: Z_0(N)\:\beta^{-3N/2}\int_{0}^{\infty} d\phi \mathcal{D}(\phi)\exp(-\beta \phi) \\
= &\: Z_0(N)\:\beta^{-3N/2}Z_c(\beta),
\end{split}
\end{equation}
with $Z_0(N) \mathrel{\mathop:}= \prod_{i=1}^N(\sqrt{2\pi m_i})^3$ a constant only dependent on $N$ and the masses of the particles, and where
\begin{equation}
Z_c(\beta) \mathrel{\mathop:}= \int_0^\infty d\phi \mathcal{D}(\phi)\exp(-\beta \phi)
\end{equation}
is the configurational partition function. Similarly, the full density of states,
\begin{equation}
\Omega(E) \mathrel{\mathop:}= \int d\bm{\Gamma}\delta(E-H(\bm \Gamma))
\end{equation}
can also be computed from the CDOS, by a convolution with the density of states $\Omega_K$ of the ideal gas~\cite{Kardar2007}, that is,
\begin{equation}
\label{eq:convol}
\Omega(E) = \int d\phi\mathcal{D}(\phi)\Omega_K(E-\phi),
\end{equation}
where
\begin{equation}
\label{eq:kdos}
\Omega_K(E) \mathrel{\mathop:}= \int d\bm{p}_1\ldots d\bm{p}_N\delta\Big(\textstyle\sum_{i=1}^N \frac{\bm{p}^2_i}{2 m_i} - E\Big) = \Omega_0(N) \Theta(E)E^{\frac{3N}{2}-1}.
\end{equation}
\noindent
In order to obtain the result in (\ref{eq:convol}), we replace the integral over the momenta in $\Omega(E)$ by using
(\ref{eq:kdos}),
\begin{equation}
\begin{split}
\Omega(E) \mathrel{\mathop:}= & \int d\bm{r}_1\ldots d\bm{r}_N\:\left[\int d\bm{p}_1\ldots d\bm{p}_N \:\delta\Big(\textstyle\sum_{i=1}^N \frac{\bm{p}^2_i}{2 m_i} + \Phi(\bm{r}_1, \ldots, \bm{r}_N)-E\Big)\right] \\
= & \int d\bm{r}_1\ldots d\bm{r}_N\:\Omega_K(E-\Phi(\bm{r}_1, \ldots, \bm{r}_N))\\
= & \int_{0}^\infty d\phi\mathcal{D}(\phi)\Omega_K(E-\phi),
\end{split}
\end{equation}
where in the last equality we have used (\ref{eq:cdos_trick}). Finally we have
\begin{equation}
\Omega(E) = \Omega_0(N)\int_0^E d\phi\mathcal{D}(\phi)(E-\phi)^{\frac{3N}{2}-1} = \Omega_0(N)\eta(E)
\end{equation}
where $\Omega_0(N) \mathrel{\mathop:}= Z_0(N)/\Gamma\big(3N/2\big)$ is a function only of the size $N$ of the system, and
\begin{equation}
\eta(E) \mathrel{\mathop:}= \int_0^E d\phi \mathcal{D}(\phi)\big(E-\phi\big)^{\frac{3N}{2}-1}.
\end{equation}
Now we have all the elements needed for the computation of the caloric curves and the transition energy (or temperature) for a given model of CDOS. In the canonical
ensemble, the probability of observing a value $\phi$ of potential energy at inverse temperature $\beta$ is given by
\begin{equation}
\label{eq:phi_prob_canon}
P(\phi|\beta) = \frac{1}{Z_c(\beta)}\exp(-\beta\phi)\mathcal{D}(\phi)
\end{equation}
and, using this, we can determine the caloric curve (internal energy as a function of inverse temperature) as
\begin{equation}
\big<H\big>_\beta = -\frac{\partial}{\partial \beta}\ln Z(\beta) = \frac{3N}{2\beta} + \big<\phi\big>_\beta,
\end{equation}
where
\begin{equation}
\label{eq:phi_beta}
\big<\phi\big>_\beta = \frac{1}{Z_c(\beta)}\int_0^\infty d\phi \mathcal{D}(\phi)\exp(-\beta \phi)\phi = -\frac{\partial}{\partial \beta}\ln Z_c(\beta).
\end{equation}
On the other hand, in the microcanonical ensemble the probability of having potential energy $\phi$ at total energy $E$ is given by~\cite{Pearson1985, Ray1991, Carignano2002, Davis2011a}
\begin{equation}
\label{eq:phi_prob_micro}
P(\phi|E) = \frac{1}{\eta(E)}(E-\phi)^{\frac{3N}{2}-1}\mathcal{D}(\phi).
\end{equation}
\noindent
The microcanonical caloric curve (inverse temperature as a function of internal energy) is given by
\begin{equation}
\label{eq:beta}
\beta(E) \mathrel{\mathop:}= \frac{\partial}{\partial E}\ln \Omega(E),
\end{equation}
which can also be rewritten as an expectation as follows. Replacing $\Omega(E)$ in terms of $\eta(E)$ in (\ref{eq:beta}) and using (\ref{eq:phi_prob_micro}), we
can write
\begin{equation}
\begin{split}
\beta(E) & = \frac{1}{\eta(E)}\frac{\partial}{\partial E}\int_0^E d\phi\mathcal{D}(\phi)(E-\phi)^{\frac{3N}{2}-1} \\
& = \frac{1}{\eta(E)}\int_0^E d\phi \mathcal{D}(\phi)\big(E-\phi)^{\frac{3N}{2}-1}\left[\frac{3N-2}{2(E-\phi)}\right] = \big<\hat{\beta}_K\big>_E.
\end{split}
\end{equation}
where $\hat{\beta}_K$ is the kinetic inverse temperature estimator
\begin{equation}
\hat{\beta}_K(\phi; E) \mathrel{\mathop:}= \frac{3N-2}{2(E-\phi)}.
\end{equation}
\section{Model for the CDOS}
\label{sec:model}
In the following sections we will use, for the configurational density of states, the model presented in Ref.~\cite{Montecinos2021}, which is defined piecewise as
\begin{equation}
\mathcal{D}(\phi) = \begin{cases}
d_S(\phi-\phi_S)^{\alpha_S} \qquad\text{for}\;\phi < \phi_c, \\[15pt]
d_L(\phi-\phi_L)^{\alpha_L} \qquad\text{for}\;\phi \geq \phi_c.
\end{cases}
\end{equation}
This model represents two segments, one for the solid phase which, by definition, will have potential energies $\phi < \phi_c$, and one for the liquid phase where
$\phi \geq \phi_c$. That is, our main assumption is that the potential energy landscape is effectively divided into solid and liquid states by a surface
\[\Phi(\bm{r}_1, \ldots, \bm{r}_N) = \phi_c\] in configurational space. By imposing continuity of the CDOS at $\phi=\phi_c$, we must have
\begin{equation}
d_S(\phi_c-\phi_S)^{\alpha_S} = d_L(\phi_c-\phi_L)^{\alpha_L}.
\end{equation}
Here $\phi_S$ represents the potential energy minimum of the ideal solid, which can be set to zero without loss of generality provided that all energies are measured with respect to this value. We can also set $d_S = 1$ and express the potential energy in units of $\phi_c$; then we have
\begin{equation}
\label{eq:model_CDOS}
\mathcal{D}(\phi) = \begin{cases}
\phi^{\alpha_S} \qquad\text{for}\;\phi < 1, \\[15pt]
\displaystyle\Big(\frac{\phi-\gamma}{1-\gamma}\Big)^{\alpha_L} \qquad\text{for}\;\phi \geq 1,
\end{cases}
\end{equation}
where we have defined the dimensionless parameter \[\gamma \mathrel{\mathop:}= \frac{\phi_L}{\phi_c}.\] In this way, the model for the CDOS has only three free parameters, namely
$\alpha_S$, $\alpha_L$ and $\gamma$.
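For numerical exploration, the dimensionless model CDOS in (\ref{eq:model_CDOS}) can be transcribed directly; the following Python sketch (ours, with illustrative parameter values) evaluates it piecewise.
\begin{verbatim}
import numpy as np

def cdos(phi, alpha_s, alpha_l, gamma):
    """Dimensionless model CDOS; phi is measured in units of phi_c."""
    phi = np.atleast_1d(np.asarray(phi, dtype=float))
    out = np.empty_like(phi)
    solid = phi < 1.0
    out[solid] = phi[solid]**alpha_s
    out[~solid] = ((phi[~solid] - gamma) / (1.0 - gamma))**alpha_l
    return out

# Continuity at phi = 1 holds by construction: both branches equal 1 there.
print(cdos([0.5, 1.0, 1.5], alpha_s=3.0, alpha_l=4.5, gamma=0.2))
\end{verbatim}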
\section{The caloric curve in the canonical ensemble}
\label{sec:canon}
The configurational partition function $Z_c(\beta)$ associated to the model for the CDOS in \eqref{eq:model_CDOS} can be obtained by piecewise integration as
\begin{equation}
\begin{split}
Z_c(\beta) & = \int_0^\infty d\phi \mathcal{D}(\phi)\exp(-\beta \phi) \\
& = \int_0^1 d\phi \phi^{\alpha_S}\exp(-\beta \phi) + \frac{1}{(1-\gamma)^{\alpha_L}}\int_1^\infty d\phi (\phi-\gamma)^{\alpha_L}\exp(-\beta \phi) \\
& = \beta^{-(\alpha_S+1)}\int_0^{\beta} du u^{\alpha_S}\exp(-u) + \frac{\beta^{-(\alpha_L+1)}}{(1-\gamma)^{\alpha_L}}\int_{\beta}^{\infty} du (u-\beta\gamma)^{\alpha_L}\exp(-u)\\
& = \beta^{-(\alpha_S+1)}\int_0^{\beta} du u^{\alpha_S}\exp(-u) + \frac{\beta^{-(\alpha_L+1)}}{(1-\gamma)^{\alpha_L}}\exp(-\beta\gamma)\int_{\beta(1-\gamma)}^{\infty} dw w^{\alpha_L}\exp(-w).
\end{split}
\end{equation}
\noindent
Finally we obtain
\begin{equation}
\label{eq:Zc}
Z_c(\beta) = \beta^{-(\alpha_S+1)}G_S(0 \rightarrow \beta) + \frac{\beta^{-(\alpha_L+1)}}{(1-\gamma)^{\alpha_L}}\exp(-\beta\gamma)G_L(\beta(1-\gamma) \rightarrow \infty)
\end{equation}
where we have defined, for convenience, the auxiliary functions
\begin{equation}
G_\nu(a \rightarrow b) \mathrel{\mathop:}= \int_a^b dt \exp(-t)t^{\alpha_\nu} = \Gamma(\alpha_\nu+1; b) - \Gamma(\alpha_\nu+1; a)
\end{equation}
for $\nu = S, L$, where $\Gamma(k; x) \mathrel{\mathop:}= \int_0^x dt\exp(-t)t^{k-1}$ is the lower incomplete Gamma function.
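In terms of standard special-function libraries, a short numerical sketch of (\ref{eq:Zc}) might read as follows (ours; suitable for moderate $\alpha_\nu$ only, since $\Gamma(\alpha_\nu+1)$ overflows for large $N$ -- a log-space version is used in a later sketch).
\begin{verbatim}
import numpy as np
from scipy.special import gammainc, gammaincc, gamma as Gamma

def Z_c(beta, alpha_s, alpha_l, g):
    """Configurational partition function of the model CDOS.

    gammainc/gammaincc are the regularized lower/upper incomplete gamma
    functions, so the lower incomplete Gamma(k; x) = gammainc(k, x)*Gamma(k).
    """
    G_S = gammainc(alpha_s + 1, beta) * Gamma(alpha_s + 1)
    G_L = gammaincc(alpha_l + 1, beta * (1 - g)) * Gamma(alpha_l + 1)
    solid = beta**(-(alpha_s + 1)) * G_S
    liquid = (beta**(-(alpha_l + 1)) * np.exp(-beta * g) * G_L
              / (1 - g)**alpha_l)
    return solid + liquid

print(Z_c(beta=2.0, alpha_s=3.0, alpha_l=4.5, g=0.2))
\end{verbatim}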
\subsection{Low and high-temperature limits}
\vspace{5pt}
By taking the limit $\beta \rightarrow \infty$ of (\ref{eq:Zc}) we see that the second term vanishes, and also $G_S(0 \rightarrow \infty) = \Gamma(\alpha_S+1)$ so
for low temperatures we can approximate
\begin{equation}
\label{eq:Zc_approx}
Z_c(\beta) \approx \beta^{-(\alpha_S+1)}\Gamma(\alpha_S+1).
\end{equation}
\noindent
By replacing (\ref{eq:Zc_approx}) in (\ref{eq:phi_beta}) we have
\begin{equation}
\big<\phi\big>_\beta = -\frac{\partial}{\partial \beta}\ln Z_c(\beta) \approx \frac{\alpha_S+1}{\beta}
\end{equation}
and then the solid branch of the canonical caloric curve is given by a straight line,
\begin{equation}
\label{eq:solid_branch}
E_S(T) \mathrel{\mathop:}= \big<H\big>_{T,S} = \frac{3N}{2}k_B T + \big<\Phi\big>_T = \big(\textstyle{\frac{3N}{2}}+\alpha_S+1\big)k_B T
\end{equation}
as expected. Here we can verify that $E_S(T) \rightarrow 0$ as $T \rightarrow 0$, because we have fixed $\phi_S = 0$, and moreover, we learn that the value of the
specific heat for the solid phase is
\begin{equation}
C_S = \frac{d}{dT}E_S(T) = \big(\textstyle{\frac{3N}{2}}+\alpha_S+1\big)k_B.
\end{equation}
\noindent
On the other hand, for high temperatures (i.e. in the limit $\beta \rightarrow 0$) we can approximate
\begin{equation}
\label{eq:Zc_approx2}
Z_c(\beta) \approx \left[\frac{\beta^{-(\alpha_L+1)}\exp(-\beta\gamma)}{(1-\gamma)^{\alpha_L}}\right]\Gamma(\alpha_L+1),
\end{equation}
obtaining from (\ref{eq:phi_beta}) that
\begin{equation}
\big<\phi\big>_\beta = -\frac{\partial}{\partial \beta}\ln Z_c(\beta) \approx \frac{\alpha_L+1}{\beta}+\gamma
\end{equation}
hence the liquid branch is also a straight line, given by
\begin{equation}
E_L(T) \mathrel{\mathop:}= \big<H\big>_{T,L} = \frac{3N}{2}k_B T + \big<\Phi\big>_T \approx \gamma + \big(\textstyle{\frac{3N}{2}}+\alpha_L+1\big)k_B T.
\end{equation}
This allows us to interpret $\gamma$ as the extrapolation of the liquid branch towards $T=0$, that is, $\phi_L$ in units of $\phi_c$ is the potential energy of a perfectly frozen
liquid at $T=0$,
\begin{equation}
E_{L}(T=0) = \gamma = \frac{\phi_L}{\phi_c}.
\end{equation}
\noindent
Moreover, we obtain that the specific heat of the liquid phase is
\begin{equation}
C_L = \frac{d}{dT}E_L(T) = \big(\textstyle{\frac{3N}{2}}+\alpha_L+1\big)k_B.
\end{equation}
In order for the energy to be extensive in both branches as $N \rightarrow \infty$, the parameters $\alpha_\nu$ must be proportional to $N$, and it follows that, up to a term of order unity which we neglect from now on,
\begin{equation}
\frac{C_\nu}{k_B} = \alpha_\nu + \frac{3N}{2}
\end{equation}
with $\nu = S, L$. Using these definitions we can write $E_S$ and $E_L$ in terms of $\beta$ more compactly, as
\begin{subequations}
\begin{align}
E_S(\beta) & = \frac{C_S}{\beta}, \\
E_L(\beta) & = \gamma + \frac{C_L}{\beta}.
\end{align}
\end{subequations}
\subsection{Melting temperature}
On account of the assumption that all solid states have $\phi < \phi_c$ and all liquid states have $\phi \geq \phi_c$, we will define the probabilities $\pS{\beta}$ of being
in the solid phase, and $\pL{\beta}$ of being in the liquid phase as
\begin{align}
\pS{\beta} \mathrel{\mathop:}= P(\phi < 1|\beta) & = \frac{1}{Z_c(\beta)}\int_0^1 d\phi \exp(-\beta\phi)\phi^{\alpha_S} = \frac{\beta^{-(\alpha_S+1)} G_S(0\rightarrow \beta)}{Z_c(\beta)}, \\
\pL{\beta} \mathrel{\mathop:}= P(\phi \geq 1|\beta) & = \frac{1}{Z_c(\beta)}\int_1^\infty d\phi \exp(-\beta \phi)\frac{(\phi-\gamma)^{\alpha_L}}{(1-\gamma)^{\alpha_L}}
= \frac{\beta^{-(\alpha_L+1)}\exp(-\beta\gamma)}{Z_c(\beta)(1-\gamma)^{\alpha_L}}G_L(\beta(1-\gamma) \rightarrow \infty)
\end{align}
respectively, such that \[\pS{\beta} + \pL{\beta} = 1\] for all values of $\beta$. The probability of solid is shown as a function of $T$ in Fig.~\ref{fig:probcanon}.
The melting temperature $T_m$ is such that both probabilities are equal~\cite{Davis2016}, that is,
\begin{equation}
\pS{\beta_m} = \pL{\beta_m} = \frac{1}{2},
\end{equation}
with $\beta_m = 1/(k_B T_m)$, which is then the solution of the transcendental equation
\begin{equation}
\label{eq:Tm_solution}
(\beta_m)^{\alpha_L-\alpha_S}G_S(0\rightarrow \beta_m) = \frac{\exp(-\beta_m\gamma)}{(1-\gamma)^{\alpha_L}}G_L(\beta_m(1-\gamma) \rightarrow \infty).
\end{equation}
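Numerically, $\beta_m$ can be obtained with a bracketing root finder applied to the difference of the logarithms of the two terms of $Z_c(\beta)$ in (\ref{eq:Zc}); the following sketch (ours, with illustrative parameters) works in log-space to avoid overflow for large $\alpha_\nu$.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq
from scipy.special import gammainc, gammaincc, gammaln

def log_terms(beta, alpha_s, alpha_l, g):
    """Logs of the solid and liquid terms of Z_c(beta)."""
    log_solid = (-(alpha_s + 1) * np.log(beta)
                 + np.log(gammainc(alpha_s + 1, beta)) + gammaln(alpha_s + 1))
    log_liquid = (-(alpha_l + 1) * np.log(beta) - beta * g
                  - alpha_l * np.log(1 - g)
                  + np.log(gammaincc(alpha_l + 1, beta * (1 - g)))
                  + gammaln(alpha_l + 1))
    return log_solid, log_liquid

def beta_m(alpha_s, alpha_l, g):
    """Inverse melting temperature: the root where p_S = p_L = 1/2."""
    def f(beta):
        s, l = log_terms(beta, alpha_s, alpha_l, g)
        return s - l
    return brentq(f, 0.5, 200.0)  # bracket chosen for these parameters

print(beta_m(alpha_s=30.0, alpha_l=45.0, g=0.2))
\end{verbatim}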
\noindent
Using $\pS{\beta}$ and $\pL{\beta}$ we can write the canonical caloric curve for any $\beta$ as the sum of three contributions, namely
\begin{equation}
\label{eq:canon_energy_full}
\big<H\big>_\beta = \frac{3N}{2\beta} -\frac{\partial}{\partial \beta}\ln Z_c(\beta)
= E_S(\beta)\pS{\beta} + E_L(\beta)\big(1-\pS{\beta}\big) + E_{\text{res}}(\beta),
\end{equation}
where the first and second terms account for the solid and liquid branches, respectively, and $E_\text{res}$ is equal to
\begin{equation}
E_{\text{res}}(\beta) \mathrel{\mathop:}= -\frac{\gamma\beta^{-1}\exp(-\beta)}{Z_c(\beta)}.
\end{equation}
By comparing (\ref{eq:canon_energy_full}) with the low and high temperature limits, namely $E_S$ and $E_L$, and noting that $\pS{\beta} \rightarrow 1$ for low temperatures
and $\pS{\beta} \rightarrow 0$ for high temperatures, we see that $E_{\text{res}}$ must vanish in both limits, as can be verified by using (\ref{eq:Zc_approx}) and
(\ref{eq:Zc_approx2}). Therefore, this term is only relevant near the transition region. If we replace (\ref{eq:Tm_solution}) into the configurational partition
function in (\ref{eq:Zc}) we obtain
\begin{equation}
Z_c(\beta_m) = 2(\beta_m)^{-(\alpha_S+1)}G_S(0\rightarrow \beta_m)
\end{equation}
and we can determine the melting energy $E^*$ as
\begin{equation}
\label{eq:energy_star}
E^* = \frac{3N}{2\beta_m} + \frac{1}{2}\left[N\gamma+\frac{\alpha_S+\alpha_L+2}{\beta_m}\right] -\frac{\gamma\exp(-\beta_m)}{2G_S(0 \rightarrow \beta_m)}(\beta_m)^{\alpha_S}.
\end{equation}
\noindent
Because $\alpha_S \propto N$, in the thermodynamic limit we can approximate
\begin{equation}
\begin{split}
G_S(0 \rightarrow \beta_m) & = \int_0^{\beta_m} dt \exp(-t)t^{\alpha_S} \\
& = \int_0^{\beta_m} dt \exp(-t + \alpha_S\ln t) \\
& \approx \int_0^{\beta_m} dt\exp(\alpha_S\ln t) = \frac{\beta_m^{\alpha_S+1}}{\alpha_S+1},
\end{split}
\end{equation}
and then we have, for the melting energy per particle $\varepsilon^* \mathrel{\mathop:}= E^*/N$ in the original energy units, that
\begin{equation}
\label{eq:energy_star_lim}
\lim_{N\rightarrow \infty} \varepsilon^* = \varphi_S + \frac{\gamma}{2}+\frac{1}{2}\left(\frac{c_S+c_L}{k_B}-\gamma a_S\exp\Big(-\frac{\phi_c}{k_B T_m}\Big)\right)k_B T_m,
\end{equation}
with $\varphi_S \mathrel{\mathop:}= \phi_S/N$, $c_\nu \mathrel{\mathop:}= C_\nu/N$ and $a_\nu \mathrel{\mathop:}= \alpha_\nu/N$. This becomes a linear relation between $E^*$ and $T_m$ for $\gamma \approx 0$, because
\begin{equation}
\lim_{\gamma \rightarrow 0} \varepsilon^* = \varphi_S + \left(\frac{c_S+c_L}{2}\right)T_m,
\end{equation}
and also for $k_B T_m \ll \phi_c$. Fig. \ref{fig:ratio} shows the dimensionless intensive quantity \[\zeta \mathrel{\mathop:}= \frac{\varepsilon^*-\varphi_S}{k_B T_m}\] as a function of
$\gamma$, using the approximation in \eqref{eq:energy_star_lim} and the exact value in \eqref{eq:energy_star}.
\section{The caloric curve in the microcanonical ensemble}
\label{sec:microcanon}
Just as we used the configurational partition function in Section ~\ref{sec:canon} to compute the canonical caloric curve, we can use the full density of states
$\Omega(E)$ to obtain the microcanonical caloric curve, given in terms of the CDOS as
\begin{equation}
\label{eq:etacomp}
\begin{split}
\eta(E) & = \int_0^E d\phi \mathcal{D}(\phi)\big(E-\phi\big)^{\frac{3N}{2}-1} = E^{\frac{3N}{2}-1}\int_0^E d\phi \mathcal{D}(\phi)\Big(1-\frac{\phi}{E}\Big)^{\frac{3N}{2}-1} \\
& = E^{\frac{3N}{2}-1}\left[ \int_0^{\min(E, 1)}\hspace{-7pt} d\phi \Big(1-\frac{\phi}{E}\Big)^{\frac{3N}{2}-1}\phi^{\alpha_S}
+ \frac{\Theta(E-1)}{(1-\gamma)^{\alpha_L}}\int_1^E d\phi \Big(1-\frac{\phi}{E}\Big)^{\frac{3N}{2}-1}(\phi-\gamma)^{\alpha_L}\right] \\
& = E^{C_S}B_S\Big(\textstyle 0 \rightarrow \min(1,\frac{1}{E})\Big) + \displaystyle\frac{\Theta(E-1)}{(1-\gamma)^{\alpha_L}}(E-\gamma)^{C_L}B_L\Big(\lambda(E) \rightarrow 1\Big)
\end{split}
\end{equation}
where we have defined the auxiliary functions
\begin{equation}
\lambda(E) \mathrel{\mathop:}= \frac{1-\gamma}{E-\gamma}
\end{equation}
and
\begin{equation}
B_\nu(a \rightarrow b) \mathrel{\mathop:}= \int_a^b dt\;t^{\alpha_\nu+1}(1-t)^{\frac{3N}{2}-1}
\end{equation}
for convenience of notation, where $\nu=S,L$ and $a, b \in [0, 1]$. Replacing in \eqref{eq:etacomp} we obtain $\eta(E)$ as a piecewise function,
\begin{equation}
\label{eq:eta}
\eta(E) = \begin{cases}
E^{C_S}B(\alpha_S+1, \textstyle\frac{3N}{2}) \;\text{for}\; E \leq 1\\[15pt]
\displaystyle E^{C_S}B_S(0 \rightarrow \textstyle \frac{1}{E})\displaystyle+ \frac{(E-\gamma)^{C_L}}{(1-\gamma)^{\alpha_L}}B_L(\lambda(E) \rightarrow 1)\;\text{for}\;E > 1.
\end{cases}
\end{equation}
\noindent
We can see that the microcanonical inverse temperature for the branch with $E \leq 1$ is simply given by
\begin{equation}
\beta_\text{low}(E) = \frac{\partial}{\partial E}\ln (E^{C_S}) = \frac{C_S}{E},
\end{equation}
that is, $E$ as a function of $T$ is a straight line with slope $C_S$, in agreement with the low temperature approximation $E_S(T)$ of the canonical caloric curve in
(\ref{eq:solid_branch}). This means that both the non-monotonic behavior, i.e.\ the \emph{van der Waals} loop, and the microcanonical melting energy $E_m$ must occur above $E = 1$.
Just as we did for the canonical ensemble, we will define the probabilities of solid and liquid, $\pS{E}$ and $\pL{E}$ respectively, at an energy $E \geq 1$, according to
\begin{align}
\pS{E} & \mathrel{\mathop:}= \frac{1}{\eta(E)}\int_0^1 d\phi\mathcal{D}(\phi)(E-\phi)^{\frac{3N}{2}-1} = \frac{E^{C_S}B_S(0 \rightarrow 1/E)}{\eta(E)}, \\
\pL{E} & \mathrel{\mathop:}= \frac{1}{\eta(E)}\int_1^E d\phi\mathcal{D}(\phi)(E-\phi)^{\frac{3N}{2}-1} = \frac{(E-\gamma)^{C_L}}{(1-\gamma)^{\alpha_L}}\frac{B_L(\lambda(E)\rightarrow 1)}{\eta(E)}.
\end{align}
The probability of solid is shown, as a function of energy, in Fig.~\ref{fig:probmicro}. We will define the microcanonical melting energy $E_m$ as the value of $E$
such that $\pS{E_m} = \pL{E_m}$, which is then the solution of the transcendental equation
\begin{equation}
(E_m)^{C_S}(1-\gamma)^{\alpha_L} B_S(0 \rightarrow 1/E_m) = (E_m-\gamma)^{C_L}B_L(\lambda(E_m)\rightarrow 1).
\end{equation}
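A numerical solution of this equation is straightforward, since each $B_\nu(a \rightarrow b)$ is an incomplete Beta integral. The Python sketch below locates $E_m$ by bracketing a sign change of the log-difference of the two sides; the values of $N$, $\gamma$, $\alpha_S$ and $\alpha_L$ are illustrative assumptions, and we take $C_\nu = \alpha_\nu + 3N/2$ as an assumption consistent with the saddle point $\alpha_\nu/C_\nu$ appearing below.
\begin{verbatim}
# Sketch: solve the transcendental equation for E_m with assumed
# parameters; C_nu = alpha_nu + 3N/2 is an assumption.
import numpy as np
from scipy.optimize import brentq
from scipy.special import betainc, betaln

N, gamma = 20, 0.05
alpha_S, alpha_L = 1.4 * N, 1.5 * N
C_S, C_L = alpha_S + 1.5 * N, alpha_L + 1.5 * N

def logB(alpha, a, b):
    # log B_nu(a->b), using B_nu(a->b) = B(alpha+2, 3N/2) * [I_b - I_a]
    x, y = alpha + 2, 1.5 * N
    return betaln(x, y) + np.log(betainc(x, y, b) - betainc(x, y, a))

def f(E):  # log(LHS) - log(RHS) of the transcendental equation
    lam = (1 - gamma) / (E - gamma)
    lhs = (C_S * np.log(E) + alpha_L * np.log(1 - gamma)
           + logB(alpha_S, 0.0, 1.0 / E))
    rhs = C_L * np.log(E - gamma) + logB(alpha_L, lam, 1.0)
    return lhs - rhs

Es = np.linspace(1.001, 6.0, 2000)
with np.errstate(divide="ignore", invalid="ignore"):
    vals = np.array([f(E) for E in Es])
ok = np.isfinite(vals)
idx = [j for j in range(len(Es) - 1)
       if ok[j] and ok[j + 1] and vals[j] * vals[j + 1] < 0]
print("E_m =", brentq(f, Es[idx[0]], Es[idx[0] + 1]) if idx
      else "no bracket found; widen the grid")
\end{verbatim}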
\noindent
Using the derivatives
\begin{align}
\frac{\partial}{\partial E}B_S(0 \rightarrow 1/E) & =
\frac{\partial}{\partial E}\int_0^1 dt\: t^{\alpha_S+1}(1-t)^{\frac{3N}{2}-1}\Theta(1/E-t)
= -E^{-(C_S+2)}(E-1)^{\frac{3N}{2}-1} \\
\frac{\partial}{\partial E}B_L(\lambda(E) \rightarrow 1) & =
\frac{\partial}{\partial E}\int_0^1 dt\:t^{\alpha_L+1}(1-t)^{\frac{3N}{2}-1}\Theta(t-\lambda(E))
= \frac{1}{1-\gamma}\left[\lambda(E)^{\alpha_L+3}-\lambda(E)^{C_L+2}\right],
\end{align}
we can write
\begin{equation}
\eta'(E) = \eta(E)\left(\frac{C_S}{E}\pS{E} + \frac{C_L}{E-\gamma}\pL{E}\right) + \lambda(E)^2\left[(E-\gamma)^{\frac{3N}{2}}
-(1-\gamma)^{\frac{3N}{2}}\right] - E^{-2}(E-1)^{\frac{3N}{2}-1}
\end{equation}
and then
\begin{equation}
\beta_\text{high}(E) = \left(\frac{C_S}{E}\right)\pS{E} + \left(\frac{C_L}{E-\gamma}\right)\big(1-\pS{E}\big) + \beta_{\text{res}}(E)
\end{equation}
where
\begin{equation}
\beta_{\text{res}}(E) \mathrel{\mathop:}= \frac{1}{\eta(E)}\left(\lambda(E)^2\left[(E-\gamma)^{\frac{3N}{2}}-(1-\gamma)^{\frac{3N}{2}}\right] -\frac{1}{E^2}(E-1)^{\frac{3N}{2}-1}\right)
\end{equation}
from which we see that the (inverse) temperature is also continuous at $E = 1$. Similarly to $E_{\text{res}}(\beta)$ in the canonical ensemble, we can also see that
$\beta_{\text{res}}(E)$ vanishes for both $E \rightarrow 1$ and $E \rightarrow \infty$. By using the Stirling approximation on the representation of the Beta function
written in terms of $\Gamma$-functions,
\begin{equation}
B(x, y) = \frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)} \approx \sqrt{2\pi}\:\left[\frac{x^{x-1/2}y^{y-1/2}}{(x+y)^{x+y-1/2}}\right]
\end{equation}
for large $x$ and $y$ we have
\begin{equation}
B_\nu(a \rightarrow b) \approx
\begin{cases}
K_\nu \;\text{if}\; \alpha_\nu/C_\nu \in [a, b]\\[10pt]
0\;\text{otherwise},
\end{cases}
\end{equation}
where
\begin{equation}
K_\nu \mathrel{\mathop:}= \sqrt{2\pi}\:\left[\frac{(\alpha_\nu+2)^{\alpha_\nu+3/2}{\Big(\tfrac{3N}{2}\Big)}^{3N/2-1/2}}{(C_\nu+1)^{C_\nu-1/2}}\right].
\end{equation}
\noindent
The probability of solid phase $\pS{E}$ is then approximated by
\begin{equation}
\pS{E} \approx \frac{E^{C_S}(1-\gamma)^{\alpha_L}K_S \mathrm{Q}[E \leq c_S/a_S]}{E^{C_S}(1-\gamma)^{\alpha_L}K_S \mathrm{Q}[E \leq c_S/a_S]
+ (E-\gamma)^{C_L} K_L \mathrm{Q}[E \geq \gamma+(c_L/a_L)(1-\gamma)]}
\end{equation}
where $\mathrm{Q}(A)$ is the indicator function~\cite{Grimmett2014} of the proposition $A$, defined as
\begin{equation}
\mathrm{Q}(A) = \begin{cases}
1\;\;\text{if}\;A\;\text{is true}, \\
0\;\;\text{otherwise}.
\end{cases}
\end{equation}
\noindent
Therefore, in this approximation the transition energy $E_m$, such that $\pS{E_m} = 1/2$, must be the solution of
\begin{equation}
E_m^{C_S}(1-\gamma)^{\alpha_L} K_S = (E_m-\gamma)^{C_L} K_L
\end{equation}
provided that no indicator function vanishes, that is, it must hold that
\begin{equation}
\gamma + \frac{c_L}{a_L}(1-\gamma) \leq E_m \leq \frac{c_S}{a_S}.
\end{equation}
\noindent
These inequalities impose a lower limit on $\gamma$, namely
\begin{equation}
\gamma \geq \frac{a_L}{a_S}-1,
\end{equation}
which, however, is only relevant if $a_L > a_S$, as it prevents $\gamma$ from reaching zero. In order to determine the solutions $E^*$ corresponding to the maximum
and minimum of microcanonical temperature, we impose
\begin{equation}
\frac{\partial}{\partial E}\beta(E)\Big|_{E=E^*} = \frac{\partial^2}{\partial E^2}\ln \eta(E)\Big|_{E=E^*} = \frac{\eta''(E^*)}{\eta(E^*)}-\left(\frac{\eta'(E^*)}{\eta(E^*)}\right)^2 = 0,
\end{equation}
therefore
\begin{equation}
\label{eq:diamonds}
\eta''(E^*)\cdot \eta(E^*) = \eta'(E^*)^2.
\end{equation}
The exact conditions under which \eqref{eq:diamonds} has exactly two solutions remain to be explored. Nevertheless, we have verified this fact numerically, and in
Fig.~\ref{fig:isochores} the green diamonds show the numerical solutions of (\ref{eq:diamonds}) using the expression (\ref{eq:eta}) for $\eta(E)$ in the case $E > 1$, together
with the canonical and microcanonical caloric curves. The ensemble inequivalence expected in small systems~\cite{Dunkel2006} is clearly seen, and a rather remarkable agreement
of the microcanonical curves with the usual shape of the Z curves in the literature is found. The degree of superheating increases with $\gamma$, and the van der Waals loop
typically found in microcanonical curves of small systems~\cite{Doye1995, Schmidt2001, Behringer2005b, Behringer2006, Eryurek2007, Eryurek2008, Carignano2010} gradually
becomes sharper. However, the lower inflection point in the microcanonical curve does not coincide with the value of $T_m$ obtained from the canonical curve, suggesting that the
model for the CDOS could be improved by adding additional parameters.
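As a complement to the figure, the two solutions $E^*$ can also be located from \eqref{eq:eta} alone; the sketch below differentiates $\ln\eta(E)$ by finite differences on a grid and reports the sign changes of $\partial\beta/\partial E$. It reuses the assumed parameters and the assumption $C_\nu = \alpha_\nu + 3N/2$ of the earlier sketch; with these toy values the van der Waals loop can be shallow, so the list of candidates may be empty.
\begin{verbatim}
# Sketch: extrema of the microcanonical temperature from (eq:eta),
# i.e. sign changes of d(beta)/dE with beta = d ln(eta)/dE, computed
# by finite differences.  Parameters are illustrative assumptions.
import numpy as np
from scipy.special import betainc, betaln

N, gamma = 20, 0.05
alpha_S, alpha_L = 1.4 * N, 1.5 * N
C_S, C_L = alpha_S + 1.5 * N, alpha_L + 1.5 * N

def logB(alpha, a, b):
    x, y = alpha + 2, 1.5 * N
    return betaln(x, y) + np.log(betainc(x, y, b) - betainc(x, y, a))

def log_eta(E):  # log of (eq:eta) for E > 1, via log-sum-exp
    lam = (1 - gamma) / (E - gamma)
    t1 = C_S * np.log(E) + logB(alpha_S, 0.0, 1.0 / E)
    t2 = (C_L * np.log(E - gamma) - alpha_L * np.log(1 - gamma)
          + logB(alpha_L, lam, 1.0))
    return np.logaddexp(t1, t2)

E = np.linspace(1.01, 6.0, 5000)
with np.errstate(divide="ignore", invalid="ignore"):
    ln_eta = np.array([log_eta(e) for e in E])
beta = np.gradient(ln_eta, E)   # microcanonical inverse temperature
curv = np.gradient(beta, E)
print("E* candidates:", E[1:][np.sign(curv[:-1]) != np.sign(curv[1:])])
\end{verbatim}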
\section{Concluding remarks}
\label{sec:conclusions}
We have presented a study of the microcanonical and canonical melting curves for a simple solid, based on the recently proposed model in Ref.~\cite{Montecinos2021}. Our
results show that this model is sufficient to reproduce the existence of superheating and recover the so-called Z curves of microcanonical melting, on which the Z-method
by Belonoshko \emph{et al.}~\cite{Belonoshko2006} is based. This is a first step toward an explanation of the foundations of the Z-method in terms of ensemble inequivalence
for small systems.
\newpage
\section{Conclusions}
\label{section:conclusions}
We introduced $\phi\text{-RTN}${}---a grammar-backed LM with low disk space cost that integrates with FST decoders and improves coverage for long tail entities in VA ASR scenarios. $\phi\text{-RTN}${} avoids the explicit expansion of non-terminals at creation time and stores a single sub-graph per non-terminal, leading to a low disk footprint. %
Despite limitations (end of \S\ref{section:results}), we show that $\phi\text{-RTN}${} is complementary to LMs trained on the expanded grammar---allowing us to reap the benefits of both models---and improves WER on long tail entity queries by 10\%.
Future work includes the expansion of $\phi\text{-RTN}${} to non-English languages where grammatical agreements and morphology may require specialized solutions.
$\;$ \textbf{Acknowledgements} We thank Amr Mousa, Barry Theobald, Man-Hung Siu, and the anonymous reviewers for their comments and feedback.
\section{Experimental setup}
\label{section:experiments}
\subsection{Entity-centric Query Grammar}
\label{section:experiments:statistics}
We focus on media player-type queries, where users instruct a VA to interact with audio content (e.g., songs, playlists). %
Our media player query grammar consists of a weighted list of \numprint{293{}} templates (\numprint{77{}} unique tokens) that reference an entity slot and a weighted list of \numprint{2608460{}} entities (\numprint{230321{}} unique tokens) that can fill the slot.
The templates were derived from high-frequency use-cases in a representative sample of query logs by domain experts with prior probabilities proportional to their presence in user requests. %
As shown in \S\ref{section:methodology:entity_query_grammars}, a small number of templates can model a significant proportion of user queries. The list of media entities was extracted from the catalog of a popular media service, with probabilities based on interactions.
\subsection{Evaluation Sets}
\label{section:experiments:evaluation}
For evaluation, we partition the set of expanded synthetic queries according to the rank percentiles of their joint probabilities (i.e., $\Prob{\text{template}} \cdot \Prob{\text{entity}}$): head{} (top-10\%), torso{} (between top-10\% and the median), and tail{} (bottom-50\%). Subsequently we sample 10k{} queries from each stratum for %
\begin{enumerate*}[label=(\arabic*)]
\item evaluating the trade-off between perplexity and query LM size across the compared approaches (see below), and
\item word error rate (WER) evaluation where we linearly interpolate the query LMs with our main ASR LM, with weight coefficients $0.05$ and $0.95$, resp., on audio generated from the randomly sampled queries with a Neural Text-To-Speech system described in \cite{Sivanand2021ondeviceTTS} to measure the effectiveness of our approach. %
%
The 95/5\% weight distribution was chosen to allow the query LM to influence recognition while not dominating it. Within experiments not included in this paper, we verified that a 5\% weight is sufficient for any model to maximize its impact without negatively impacting out-of-domain utterances.
\end{enumerate*}
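The stratified construction above can be sketched in a few lines of Python; the Pareto-distributed joint probabilities are synthetic stand-ins, since the actual grammar scores are not reproduced here.
\begin{verbatim}
# Sketch with synthetic stand-in data: stratify queries by rank
# percentile of P(template) * P(entity), then sample 10k per stratum.
import numpy as np

rng = np.random.default_rng(0)
joint = rng.pareto(1.2, size=2_000_000)  # stand-in joint probabilities
order = np.argsort(-joint)               # decreasing probability
n = len(order)
strata = {"head":  order[: n // 10],
          "torso": order[n // 10 : n // 2],
          "tail":  order[n // 2 :]}
samples = {k: rng.choice(v, size=10_000, replace=False)
           for k, v in strata.items()}
print({k: len(v) for k, v in samples.items()})
\end{verbatim}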
In addition, we also evaluate WER using the same setup on a uniform random sample of VA queries to ensure there is no regression.
For (hyper-)parameter optimization, we build a development set by first sampling 10k{} queries from each stratum---that do not overlap with the test set---and taking the union.
When interpolating multiple query LMs with the main ASR LM, the $0.05$ weight is divided across the query LMs by maximizing likelihood on the dev. set using L-BFGS-B \cite{Zhu1997lbfgsb}.
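A minimal version of this constrained weight search is sketched below. The per-sentence probabilities are random stand-ins, and the softmax reparameterization is our illustrative choice (not necessarily the production setup) to keep the query-LM weights positive and summing to $0.05$.
\begin{verbatim}
# Sketch: split a total query-LM weight of 0.05 across K query LMs
# by maximizing dev-set log-likelihood with L-BFGS-B.  Probabilities
# below are synthetic stand-ins for per-sentence LM scores.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_dev, K = 1000, 2
main_p = rng.uniform(1e-6, 1e-3, n_dev)        # main ASR LM
query_p = rng.uniform(1e-7, 1e-2, (K, n_dev))  # query LMs

def neg_loglik(theta):
    w = 0.05 * np.exp(theta) / np.exp(theta).sum()  # softmax * 0.05
    return -np.log(0.95 * main_p + w @ query_p).sum()

res = minimize(neg_loglik, np.zeros(K), method="L-BFGS-B")
print("query-LM weights:", 0.05 * np.exp(res.x) / np.exp(res.x).sum())
\end{verbatim}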
\subsection{Approaches under comparison}
\label{section:experiments:fst_representations}
\label{section:experiments:system}
We compare the following approaches that provide deterministic decoding graphs, where we sweep over hyperparameters that influence model size, for the task of recognition of spoken entity-centric VA queries:
\begin{enumerate*}[label=(\arabic*)]
\item We construct $\phi\text{-RTN}${} models directly from the grammar of templates and entities ($n = 2, 3, 4$).
\item We train back-off n-gram LMs ($n = 2, 3, 4$) using SRILM with Witten-Bell smoothing, and subsequently perform entropy pruning \cite{Stolcke1998pruning} (with threshold $\theta \in \{ 4^{-i} \mid 4 \leq i < 20\} \cup \{0\}$) to reduce the model size, on the full set of expanded synthetic queries (\S\ref{section:introduction}, \S\ref{section:methodology:entity_query_grammars}). Witten-Bell smoothing was chosen since it puts less reliance on the absolute counts than other smoothing methods (e.g., Good-Turing) and is therefore more suitable for synthetic data. The grammar-generated phrases are weighted according to pseudo-counts that were obtained by rescaling their probabilities such that the lowest pseudo-count equals \numprint{1} (see the sketch after this list). To avoid the filtering of n-grams with low pseudo-counts, we disabled the n-gram discounting cutoffs (i.e., $\text{gt}n\text{min} = 1 \, \forall \, n$).
After n-gram model generation, we convert the obtained ARPA model to OpenFST \cite{Allauzen2007openfst} models using Kaldi's \compacttexttt{arpa2fst}{} tool \cite{Povey2011kaldi}. To evaluate the space requirement of the model, we compare multiple FST formats: %
\begin{enumerate*}[label=(\alph*)]
\item free-form formats, such as the mutable \compacttexttt{vector}{} and the immutable space-optimized \compacttexttt{compact}{} acceptor formats,
\item the specialized \compacttexttt{ngram}{} format \cite{Sorensen2011unary} that uses the LOUDS encoding to efficiently represent n-gram back-off models.
\end{enumerate*}
For the free-form formats (\compacttexttt{vector}{}, \compacttexttt{compact}{}), we apply FST minimization after \compacttexttt{arpa2fst}{} in order to further reduce FST size.
For generation of \compacttexttt{ngram}{} FSTs, we skip the removal of redundant states during \compacttexttt{arpa2fst}{}, such that the intermediate output FST retains the necessary information about back-off, and finally use the \compacttexttt{fstconvert} utility to obtain a \compacttexttt{ngram}{} FST.
Note that all FSTs generated from the same n-gram model will be qualitatively identical and only differ in disk space requirements.
\end{enumerate*}
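The pseudo-count rescaling referenced in item~(2) amounts to the following one-liner (a sketch of our reading, with a toy input):
\begin{verbatim}
# Sketch: rescale grammar-phrase probabilities into pseudo-counts
# such that the smallest pseudo-count equals 1.
def pseudo_counts(probs):
    scale = 1.0 / min(probs)
    return [p * scale for p in probs]

print(pseudo_counts([0.5, 0.3, 0.2]))  # -> [2.5, 1.5, 1.0]
\end{verbatim}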
$\;$ \textbf{ASR system.} We used a CNN acoustic model \cite{Huang2020sndcnn}, a 4-gram LM with Good-Turing smoothing in the 1st pass, and the same LM---in addition to the query LMs---interpolated with a Neural LM in the 2nd pass using a dynamic weighting scheme.
To build the 4-gram LM, component 4-gram models built from data sources ($>$ 10B manually/auto. transcribed anonymized VA requests) were combined \cite{Pusateri2019interpolation} on held out manually transcribed VA data \cite{Bacchiani2003unsupervised}.
\section{Introduction}
\label{section:introduction}
Virtual assistants (VAs) are becoming increasingly popular \cite{Juniper2019popularity} to help users accomplish a variety of tasks \cite{Maarek2019alexa}. VA queries can be informational \cite{Broder2002taxonomy}, such as \emph{"how old is Joe Biden"}, or navigational/transactional (e.g., \emph{"show me Dune"}), and are often centered around a named entity. Spoken entity recognition is a difficult problem \cite{VanGysel2020entity,Gondala2021error,Saebi2021discriminative,Hayashi2020latent,Velikovich20128semanticlattice}, since the space of named entities is large, dynamic, and suffers from a long tail. In many use-cases, entity-oriented queries follow templates where the same template is applicable to entities of the same class \cite{Brown1992classlm,Shaik2012hierarchical}. Consider the previous example query \emph{"how old is Joe Biden"}, where segment \emph{"how old is"} references a specific entity attribute and \emph{"Joe Biden"} is an entity that belongs to the class of "famous people". We can now generalize this example into a probabilistic grammar that consists of the sole template \emph{"how old is X"} and $X$ is a non-terminal that expands to a weighted list of entity names. An additional benefit of this approach is that entities can be extracted from external knowledge sources \cite{VanGysel2020entity} that correlate with user queries and allow the VA to accurately recognize emerging entities that are not yet prevalent in training data.
In this paper we investigate the use of entity-centric probabilistic grammars as language models (LMs) within Automated Speech Recognition (ASR) to improve the recognition of entity-oriented queries that are common in VA usage. We are particularly interested in increasing the ASR system's coverage of entities, while minimizing model size---i.e., we wish to reduce the word error-rate of all entity-oriented queries represented by the grammar. Hence, methods are trained using the full grammar and evaluated on their ability to cover that same grammar.
Using grammars for ASR is not a new idea. Stolcke et al. \cite{Stolcke1994ngramscfg} presented an algorithm to estimate n-gram probabilities directly from rule-based grammars. %
In \cite{Jurafsky1995scfg}, the authors use grammars to encode semantic rules within limited-vocabulary ASR using a dynamic ad-hoc system. %
More recently, Gandhe et al. \cite{Gandhe2018lmadaptation} estimate n-gram LMs directly from grammars to improve ASR for new application intents in VAs.
Our goal is to integrate the probabilistic grammar within the finite-state transducer (FST) framework due to its ability to efficiently integrate acoustic and language knowledge sources \cite{Aubert2000overview}, where LM and acoustic components are composed dynamically at runtime \cite{Dolfing2001incremental}. The dynamic interpolation of LMs is necessary when different LM components are updated at different cadences. Reducing the size of model updates, by only updating some components of an ASR system, is essential when ASR occurs on-device and bandwidth/disk space is costly. For VAs, the entity query distribution changes faster than the general voice query distribution, and hence, requires more frequent updates. In addition, in order to guarantee efficient decoding, the approaches we consider are limited to those that guarantee determinism (i.e., unlike a dynamically generated class-based LM).
Our research questions (RQs) are:
\begin{enumerate*}[label=(\textbf{\footnotesize RQ\arabic*})]%
\item Can we improve the long tail entity coverage of LMs while operating under model size constraints?
\item Do size-constrained improvements in entity coverage translate into ASR quality improvements?
\end{enumerate*}
We contribute:
\begin{enumerate*}[label=(\arabic*)]%
\item analysis on the prevalence of entity-oriented queries within VA usage,
\item a modeling approach---$\phi\text{-RTN}${} (\emph{``phi RTN''})---that provides a deterministic approximation to a prob. grammar compatible with the FST framework and complementary to n-gram models, and
\item their experimental evaluation.
\end{enumerate*}
\section{Methodology}
\label{section:methodology}
\newcommand{\Slot}[1]{\${}#1}%
\newcommand{\FST}[1]{F_{#1}}
\newcommand{\Token}[1]{{w{}}_{#1}}
\begin{figure}[t]%
\hfill%
\renewcommand\thesubfigure{\Alph{subfigure}}%
\begin{subfigure}[b]{0.45\columnwidth}%
\centering\footnotesize%
\renewcommand{\tabcolsep}{5pt}%
\resizebox{\textwidth}{!}{%
\renewcommand{\arraystretch}{0.65}%
\begin{tabular}{p{5pt}lc}
\toprule
\textbf{\#} & \textbf{template} & \textbf{prob.} \\
\midrule
(1) & play \Slot{entity}{} & $0.4$ \\
(2) & \Slot{entity}{} & $0.2$ \\
(3) & hey VA \Slot{entity}{} & $0.1$ \\
(4) & hey VA play \Slot{entity}{} & $0.1$ \\
(5) & VA play \Slot{entity}{} & $0.1$ \\
(6) & show me \Slot{entity}{} & $0.1$ \\
\bottomrule
\end{tabular}}%
\caption{Template queries.\label{fig:grammar:templates}}%
\end{subfigure}%
\hfill%
\begin{subfigure}[b]{0.45\columnwidth}%
\centering\footnotesize%
\renewcommand{\tabcolsep}{6.1985pt}%
\resizebox{\textwidth}{!}{%
\renewcommand{\arraystretch}{0.65}%
\begin{tabular}{p{5pt}lc}
\toprule
\textbf{\#} & \textbf{\Slot{entity}{}} & \textbf{prob.} \\
\midrule
(a) & hip hop rap & $2.7 \cdot 10^{-3}$ \\
(b) & Adele & $8.0 \cdot 10^{-5}$ \\
(c) & Drake & $7.9 \cdot 10^{-5}$ \\
(d) & NBA YoungBoy & $7.4 \cdot 10^{-5}$ \\
(e) & The Beatles & $6.3 \cdot 10^{-5}$ \\
\multicolumn{2}{c}{$\cdots$} \\
(f) & play on Canada & $9.6 \cdot 10^{-9}$ \\
\bottomrule
\end{tabular}}%
\caption{Non-terminal \Slot{entity}{}.\label{fig:grammar:entities}}%
\end{subfigure}%
\hfill%
\caption{Grammar of entity-centric media player queries.\label{fig:grammar}}
\end{figure}
We build an entity-centric LM of spoken VA queries that is complementary to a general LM trained on transcribed VA queries. Our goal is to improve the speech recognition of tail entities, while taking into account resource constraints (i.e., model size). ASR LMs assign a non-zero probability $\CondProb{\Token{k}}{\Token{1}, \ldots, \Token{k - 1}}$ to word $\Token{k}$ preceded by left-context $\Token{1}, \ldots, \Token{k - 1}$, with words $\Token{i}$ members of a fixed-size vocabulary $V{}$. During ASR decoding, the LM probability is combined with acoustic information to differentiate between competing hypotheses.
We will now proceed as follows. %
First, we describe how entity-oriented queries can be described as probabilistic grammars (\S\ref{section:methodology:entity_query_grammars}). Next, we provide background on non-deterministic graph grammar representations and their shortcomings (\S\ref{section:methodology:rtn}). Finally, we describe our approach, $\phi\text{-RTN}${}, that provides an approximation to FST-compatible graph-based grammars (\S\ref{section:methodology:dartn}).
\subsection{Entity-centric Query Grammars}
\label{section:methodology:entity_query_grammars}
An analysis of one month of randomly sampled, anonymized U.S. English query logs from a popular VA shows that over \textbf{\numprint{15}\%} of media player queries that instructed the VA to play a song, album or artist follow the \emph{"play \Slot{entity}{}"} template.
Hence, a significant portion of media player queries can be represented using a probabilistic grammar. Estimating a grammar from entity-centric queries extracted from usage logs---that will subsequently be used for LM estimation---has the advantage that it allows us to keep the query templates static, while updating the weighted entity list using external knowledge sources that correlate with usage. In turn, this allows the VA to recognize emerging entities that are not yet included in transcribed training data.
Denote $T{}$ and $E{}$ as the sets of query templates and entities, resp., an example grammar $G{}$---consisting of templates $T{}$ with slots for entities $E{}$---that we wish to support is depicted in Fig.~\ref{fig:grammar}. The templates in Fig.~\ref{fig:grammar:templates} were extracted from usage logs and subsequently inspected by domain experts, whereas the entity feed in Fig.~\ref{fig:grammar:entities} was extracted from an external knowledge source. We are particularly interested in improving LM coverage for tail entity queries. We define queries whose joint probability falls below the median as tail queries.
\subsection{Recursive Transitive Networks}
\label{section:methodology:rtn}
Encoding the grammar depicted in Fig.~\ref{fig:grammar} as a static FST by expanding every occurrence of \emph{\Slot{entity}{}} in the templates (Fig.~\ref{fig:grammar:templates}) leads to excessively large models---proportional to the cross-product, $\SetSize{T{}} \cdot \SetSize{E{}}$. Recursive Transitive Networks (RTNs) \cite{Woods1970rtn} offer an alternative representation to encode grammars---and more generally, class-based LMs \cite{Brown1992classlm}---that do not store the cross-product. Instead, an RTN consists of a family of FSTs---each associated with a non-terminal---where non-terminal labels on arcs within the constituent FSTs are recursively replaced by the FST they reference. Non-terminal symbol $S{}$ indicates the root non-terminal that belongs to the constituent FST that is explored first. Grammar $G{}$ (Fig.~\ref{fig:grammar}) can be represented using a RTN with a single level of recursion. %
Unfortunately, RTNs, when used as decoding graphs in a FST decoder, are generally non-deterministic \cite[\S4.5]{Allauzen2012pdt}. An FST is non-deterministic if at a given state there are multiple outgoing arcs that share the same input symbol. Non-determinism occurs within RTNs since there exist multiple paths through the graph that result in the same token sequence. Within the grammar depicted in Fig.~\ref{fig:grammar}, this would be the case within the state that corresponds to context \emph{"hey VA"}---i.e., the wake word contributed by templates $(3)$ and $(4)$---since the next token \emph{"play"} can either match template $(4)$ or entity $(f)$.
Non-determinism is detrimental to efficient ASR decoding as multiple paths can lead to the same hypothesis, and therefore should be avoided.
\subsection{Deterministic Approximate RTNs}
\label{section:methodology:dartn}
\newcommand{\ObservedTokens}[1]{\Token{1}, \ldots, \Token{#1}}
\newcommand{\NextToken}[1]{\Token{#1 + 1}}
A key characteristic of entity-oriented VA queries is that carrier phrases (i.e., the templates excluding the non-terminal) are often short and use a limited vocabulary of frequent tokens ($< 100$ unique; see \S\ref{section:experiments:statistics}), whereas the entity names typically consist of many infrequent tokens ($> 10^4$ unique; see \S\ref{section:experiments:statistics}). We can use this observation to relax the harsh conditions necessary for determinism when using RTNs for modeling entity-oriented queries by imposing a precedence of regular symbols over non-terminal symbols. More specifically, when we are at state $s{}$ in our RTN after observing $i$ tokens $\ObservedTokens{i}$ and need to transition to the next state $\State{}^\prime{}$ after observing the next token $\NextToken{i}$, we will first attempt to match a regular symbol on any arc leaving state $s{}$, and only explore entering a non-terminal FST, or exiting the current non-terminal FST, if a match cannot be found. Within the FST framework, this behavior can be implemented using $\phi$-transitions \cite{OpenFST2020matchers}, i.e., transitions followed when the requested symbol cannot be matched at the current state, that are typically used to implement back-off n-gram models \cite{Katz1987backoff} using FSTs.
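This matching discipline can be illustrated with a toy fragment of Python (our own encoding, not the OpenFST API): arcs live in a per-state dictionary, a reserved symbol stands for $\phi$, and the $\phi$-arc is taken only when the query symbol has no explicit arc.
\begin{verbatim}
# Toy sketch of phi-transition matching; weights are -log probs.
PHI = "<phi>"  # reserved failure symbol (illustrative encoding)

def step(fst, state, symbol):
    """Follow `symbol`, taking phi-arcs only when matching fails."""
    weight = 0.0
    while True:
        arcs = fst[state]             # symbol -> (next state, weight)
        if symbol in arcs:
            nxt, w = arcs[symbol]
            return nxt, weight + w
        if PHI not in arcs:
            return None, None         # dead end: symbol unmatchable
        state, w = arcs[PHI]          # back off / enter the sub-FST
        weight += w

# "play" matches a template arc before the phi-arc into the entity
# sub-graph is even considered; "Adele" falls through phi.
fst = {"hey_va":       {"play": ("hey_va_play", 0.7),
                        PHI:    ("entity_start", 1.2)},
       "entity_start": {"Adele": ("ent_1", 2.3)},
       "hey_va_play":  {}, "ent_1": {}}
print(step(fst, "hey_va", "play"))   # ('hey_va_play', 0.7)
print(step(fst, "hey_va", "Adele"))  # ('ent_1', 3.5)
\end{verbatim}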
We will now proceed by describing the topology of our model, named $\phi\text{-RTN}${}, as a two-level RTN, and explain how our model is made deterministic---approximating the actual RTN---by following precedence rules using $\phi$-transitions. In addition, we discuss how we ensure that our model is correctly normalized. Later, at the end of \S\ref{section:results}, we discuss limitations/drawbacks of our approach---and means to alleviate them.
\noindent \textbf{Topology.} %
\newcommand{\TemplateFST}{\FST{T{}}}%
\newcommand{\EntityFST}{\FST{E{}}}%
\newcommand{\UnigramState}{s{}_{\Prob{w{}}}}%
\newcommand{\SmoothingAlpha}{\alpha}%
\newcommand{\SmoothingAlphaComplement}{\overline{\SmoothingAlpha{}}}%
\newcommand{\NumStates}[1]{\Apply{\text{NumStates}}{#1}}%
\newcommand{\OutgoingSymbols}[1]{\Apply{\text{OutgoingSymbols}}{#1}}%
We employ a two-level RTN, where root non-terminal $S{}$ is associated with a rigid grammar FST $\TemplateFST{}$ estimated on templates (e.g., Fig.~\ref{fig:grammar:templates}). The transition probabilities at each state are multiplied with discounting factor $\SmoothingAlphaComplement{} = 1 - \SmoothingAlpha{}$ ($0 < \SmoothingAlpha{} < 1$) to allow for out-of-domain utterances. The leftover mass, $\SmoothingAlpha{}$, is distributed over all unseen tokens at the state through the use of a back-off arc that leads to a self-loop unigram state $\UnigramState{}$ that encodes a unigram probability distribution $\Prob{w{}}, \forall\, w{} \in V{}$. Template FST $\TemplateFST{}$ references non-terminal $\Slot{entity}{}$, associated with entity FST $\EntityFST{}$. FST $\EntityFST{}$ is modeled as a regular, non-backoff $n$-gram model over entity names (e.g., Fig.~\ref{fig:grammar:entities}), with probabilities scaled by $\SmoothingAlphaComplement{}$ to allow for out-of-domain entities. The left-over mass at each state in $\EntityFST{}$ will be used for transitioning out of $\EntityFST{}$, back to $\TemplateFST{}$, and possibly $\UnigramState{}$ in the case of out-of-domain entities (see below).
\noindent \textbf{Determinism.} %
We address the RTN determinization issue discussed in \S\ref{section:methodology:rtn} by prioritizing regular symbols over non-terminal entry/exit. More specifically, within $\TemplateFST{}$, when observing non-terminal $\Slot{entity}{}$ after context $\ObservedTokens{i}$, with corresponding state $s{}$, the corresponding sub-FST $\EntityFST{}$ can be reached by following the $\phi$-transition from state $s{}$. Note that in the absence of $\Slot{entity}{}$, the $\phi$-transition leads to the unigram state $\UnigramState{}$. When entering $\EntityFST{}$, we remember at which state we need to return to in $\TemplateFST{}$. Within $\EntityFST{}$, $\phi$-transitions are used to exit sub-FST $\EntityFST{}$ and return to template FST $\TemplateFST{}$. Final states in $\EntityFST{}$ are removed, and instead, at each state $s{}$ in $\EntityFST{}$, the FST state's final probability mass is combined with the leftover mass $\SmoothingAlpha{}$ and distributed over regular symbols not observed at $s{}$.
\noindent \textbf{Normalization.} %
To ensure that the resulting model is properly normalized, i.e. $\sum_{w{} \in V} \CondProb{w{}}{s{}} = 1$ for all states $s{}$, we follow a strategy similar to back-off n-gram models \cite{Katz1987backoff}. More precisely, we assign a weight to the $\phi$-transition leaving state $s{}$ such that the probability mass assigned to unobserved events at $s{}$, when following $\phi$-transitions recursively, are scaled to fit into the leftover mass at state $s{}$.
For $\phi$-transitions leading from $\TemplateFST{}$ to $\EntityFST{}$, the weights can be computed statically when the model is constructed---since we know the start state in $\EntityFST{}$ the $\phi$-transition leads to, and the state from $\EntityFST{}$ that leads back to $\TemplateFST{}$ after following $\phi$ once more.
However, computation of the weights for $\phi$-transitions that exit from $\EntityFST{}$ to $\TemplateFST{}$ (excl. start state in $\EntityFST{}$) is more difficult---due to the dependency between the state in $\EntityFST{}$ from which we are exiting and the state in $\TemplateFST{}$ we are returning to. If we were to compute these $\phi$-transition weights statically, we would need to store $\BigOh{\NumStates{\TemplateFST{}} \cdot \NumStates{\EntityFST{}}}$ weights. To avoid the prohibitively expensive storage costs, we compute the weights of $\phi$-transitions leading from $\EntityFST{}$ back to $\TemplateFST{}$ partially at runtime. Partial computation of the weights is achieved by pre-computing the marginal unigram probability, $\sum_{w{} \in \OutgoingSymbols{s{}}} {\Prob{w{}}}$, for all explicit symbols defined at each state $s{}$ in $\EntityFST{}$ when the model is constructed. At runtime, since we know the leftover mass at state $s{}$ in $\EntityFST{}$ equals $\SmoothingAlpha{} + \CondProb{\text{final}}{s{}}$, we thus only need to consider explicit events defined at state $\State{}^\prime{}$ in $\TemplateFST{}$ that the $\phi$-transition leaving $\EntityFST{}$ leads to, in order to update the partial weight. Since in our setting, $\TemplateFST{}$ is sparse, the computation involved in this operation is negligible.
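For intuition, the back-off-style renormalization behind these $\phi$-weights can be sketched as follows. This is our simplified reading in the spirit of Katz back-off; it omits the precomputed unigram marginals that the full $\phi\text{-RTN}${} computation folds in.
\begin{verbatim}
# Katz-style sketch (simplified): the phi-arc weight scales the
# fallback state's distribution so the total mass at s sums to one.
def phi_arc_weight(leftover, fallback_dist, defined_at_s):
    # leftover:      alpha + P(final | s), reserved at entity state s
    # fallback_dist: symbol -> prob at the template state reached
    # defined_at_s:  symbols with explicit arcs at s (not re-counted)
    covered = sum(p for w, p in fallback_dist.items()
                  if w in defined_at_s)
    return leftover / (1.0 - covered)

print(phi_arc_weight(0.1, {"play": 0.4, "show": 0.1}, {"play"}))
\end{verbatim}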
Note that $\phi\text{-RTN}${} does not support templates with two or more consecutive non-terminals since that would lead to significantly more computation.
\section{Results \& Discussion}
\label{section:results}
\newcommand{\RQRef}[1]{\textbf{\footnotesize RQ#1}}
We answer the research questions asked in \S\ref{section:introduction} as follows.
\begin{figure*}[ht]
\centering
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{resources/dev-tradeoff_plots/3.pdf}
\caption{Head (top-\numprint{10}\%)}
\label{fig:tradeoff:head}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{resources/dev-tradeoff_plots/2.pdf}
\caption{Torso (between top-\numprint{10}\% and \numprint{50}\%)}
\label{fig:tradeoff:torso}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{resources/dev-tradeoff_plots/1.pdf}
\caption{Tail (bottom-\numprint{50}\%)}
\label{fig:tradeoff:tail}
\end{subfigure}
\caption{Model size vs. perplexity trade-off on dev. sets (\S\ref{section:experiments:evaluation}) (closer to bottom-left is better) as we sweep over hyper-params (\S\ref{section:experiments:fst_representations}).\label{fig:tradeoff}}
\end{figure*}
\noindent%
\RQRef{1}: %
Fig.~\ref{fig:tradeoff} shows the trade-off between perplexity and model size for the various approaches (\S\ref{section:experiments:fst_representations}) used to encode back-off LMs (\compacttexttt{vector}{}, \compacttexttt{compact}{}, \compacttexttt{ngram}{}), and our method, $\phi\text{-RTN}${} (\compacttexttt{phi-rtn}{}) on head, torso and tail dev. sets (\S\ref{section:experiments:evaluation}). When considering the back-off LMs for each dev. set, regardless of FST format, the trade-off curves all follow similar shapes and only differ by a near-constant offset on the x-axis. %
For the back-off LMs, the perplexity/size curve oscillates since we sweep over both n-gram order and pruning thresholds---combined within a single curve per approach.
On the head of the distribution (Fig.~\ref{fig:tradeoff:head}), back-off LMs quickly reach low perplexity at small model sizes ($\smallsim2^{7}\text{MB}$). However, when we consider the torso/tail (Fig.~\ref{fig:tradeoff:torso},~\ref{fig:tradeoff:tail}), the curve becomes less sharp and only converges to the minimum when using large models ($\smallsim2^{10}\text{MB}$) with nearly no pruning. %
Across all back-off LM FST formats, the \compacttexttt{ngram}{} format is most economical, followed by \compacttexttt{compact}{} and then \compacttexttt{vector}{}.
Our method, \compacttexttt{phi-rtn}{}, provides approximately the same coverage as \compacttexttt{ngram}{} on the head set (Fig.~\ref{fig:tradeoff:head}) at $\smallsim2^{6}\text{MB}$. However, on the tail set (Fig.~\ref{fig:tradeoff:tail}), \compacttexttt{phi-rtn}{} obtains about an order of magnitude lower perplexity, and consequently, better coverage, at a model size of $\smallsim2^{6}\text{MB}$ compared to the back-off LMs.
The lower perplexity of back-off LMs on the head set, and their higher perplexity on the tail, is due to the following two effects: %
\begin{enumerate*}[label=(\alph*)]
\item back-off smoothing methods rely on absolute counts to evaluate the reliability of empirical prob. estimates, and hence, are generally biased towards high-prob. events, and
\item entropy pruning favors the removal of low-prob. events.
\end{enumerate*}
Even in the case that we were to modify the smoothing/pruning methods to be less favorable w.r.t. the head of the distribution, the size of back-off LMs is proportional to the number of explicitly-defined probs., and hence, the curves in Fig.~\ref{fig:tradeoff:tail} would not change significantly.
We answer our first RQ as follows: while back-off LMs are unsuitable to improve long tail entity coverage when operating under model size constraints, $\phi\text{-RTN}${} provides significantly better coverage. Since the worst-case model size of $\phi\text{-RTN}${} is asymptotically bounded by the number of unique entities, we can choose the entity set to be included according to the target model size.
\begin{table}[ht]
\caption{Overview of methods under comparison for WER evaluation, how they were selected, and their hyper-parameters.\label{tbl:asr_methods}}%
\centering%
\footnotesize%
\renewcommand{\arraystretch}{0.65}%
\setlength\tabcolsep{3pt}%
\begin{tabular}{lp{3.5cm}l}%
\toprule%
\textbf{Name} & \textbf{Selection criterion} & \textbf{Hyper-params} \\
\midrule%
\textbf{\scriptsize {\footnotesize \PhiRTN{}}{}} & Based on Fig.~\ref{fig:tradeoff} & $n = 3$ \\
\textbf{\scriptsize {\footnotesize \NGramMethodName{}$^{(<)}$}{}} & $\smallsym{\mathrel}{\sim}{}\frac{1}{2}\times$ size \textbf{{\footnotesize \PhiRTN{}}{}} & $n = 3$, $\theta = 6\text{e-}{8}$ \\
\textbf{\scriptsize {\footnotesize \NGramMethodName{}$^{(\smallsim)}$}{}} & $\smallsym{\mathrel}{\sim}{}$ size \textbf{{\footnotesize \PhiRTN{}}{}} & $n = 4$, $\theta = 1\text{e-}{8}$ \\
\textbf{\scriptsize {\footnotesize \NGramMethodName{}$^{(>)}$}{}} & $\smallsym{\mathrel}{\sim}{}2\times$ size \textbf{{\footnotesize \PhiRTN{}}{}} & $n = 4$, $\theta = 4\text{e-}{9}$ \\
\bottomrule%
\end{tabular}
\end{table}
\vspace*{-1.5em}%
\begin{table}[ht]
\caption{Error-rates using different models (Table~\ref{tbl:asr_methods}) of media player queries, interpolated with the main LM at runtime. Letters between brackets denote sub-sections.\label{tbl:asr_accuracy}}%
\centering%
\newcommand{\SubTableLabel}[2]{\speciallabel{%
#2}{tbl:asr_accuracy:#1}\textbf{[#2]}}%
\newcommand{\MethodBaselineTitle}{\textbf{Main LM only}}%
\newcommand{\MethodSmallNGramTitle}{$\;\;\;$\textbf{\scriptsize {\footnotesize \NGramMethodName{}$^{(<)}$}{}}}%
\newcommand{\MethodMediumNGramTitle}{$\;\;\;$\textbf{\scriptsize {\footnotesize \NGramMethodName{}$^{(\smallsim)}$}{}}}%
\newcommand{\MethodLargeNGramTitle}{$\;\;\;$\textbf{\scriptsize {\footnotesize \NGramMethodName{}$^{(>)}$}}}%
\newcommand{\MethodEnumTitle}{$\;\;\;$\textbf{\scriptsize {\footnotesize \PhiRTN{}}{}}}%
\newcommand{\MethodEnumSmallNGramCombinationTitle}{$\;\;\;$\textbf{\scriptsize {\footnotesize \PhiRTN{}}{} + {\footnotesize \NGramMethodName{}$^{(<)}$}{}}}%
\newcommand{\MethodEnumMediumNGramCombinationTitle}{$\;\;\;$\textbf{\scriptsize {\footnotesize \PhiRTN{}}{} + {\footnotesize \NGramMethodName{}$^{(\smallsim)}$}{}}}%
\newcommand{\MethodEnumLargeNGramCombinationTitle}{$\;\;\;$\textbf{\scriptsize {\footnotesize \PhiRTN{}}{} + {\footnotesize \NGramMethodName{}$^{(>)}$}{}}}%
\newcommand{\SectionEnumOnlyTitle}{\textit{+ $\phi\text{-RTN}${} only} \SubTableLabel{enum}{2a}}%
\newcommand{\SectionSmallBudgetTitle}{\textit{+ small N-Gram/$\phi\text{-RTN}${} combinations} \SubTableLabel{small}{2b}}%
\newcommand{\SectionMediumBudgetTitle}{\textit{+ medium N-Gram/$\phi\text{-RTN}${} combinations} \SubTableLabel{medium}{2c}}%
\newcommand{\SectionLargeBudgetTitle}{\textit{+ large N-Gram/$\phi\text{-RTN}${} combinations} \SubTableLabel{large}{2d}}%
\renewcommand{\arraystretch}{0.55}%
\setlength\tabcolsep{3pt}%
\footnotesize%
\resizebox{0.95\columnwidth}{!}{%
\input{resources/table.tex}%
}%
\end{table}
\noindent%
\RQRef{2}: %
We now investigate whether the increase in tail coverage provided by $\phi\text{-RTN}${} translates into improved ASR performance. Following the observations made from Fig.~\ref{fig:tradeoff} on the dev. sets, we choose the \compacttexttt{phi-rtn}{} model with $n = 3$ as it provides a good coverage/size trade-off. We choose \compacttexttt{ngram}{} models relative to the size of the \compacttexttt{phi-rtn}{} model by taking the \compacttexttt{ngram}{} model closest to the target size (e.g., half size of \compacttexttt{phi-rtn}{}) and picking the \compacttexttt{ngram}{} within an $\epsilon = 5\text{MB}$ window with the lowest perplexity on the dev. set (\S\ref{section:experiments:evaluation}).
Table~\ref{tbl:asr_methods} shows the selected models and selection criteria/hyperparameters. In Table~\ref{tbl:asr_accuracy}, we evaluate the $\phi\text{-RTN}${} and back-off LMs interpolated individually with the main ASR LM, and combinations thereof (\S\ref{section:experiments:evaluation}). Including \compacttexttt{phi-rtn}{} results in a significant WER improvement across all test sets, when compared to the main LM alone. This is unsurprising, since no media player-specific data was added to the main LM---except the queries occurring in transcribed data (end of \S\ref{section:experiments:system})---as the models are complementary (\S\ref{section:introduction}). When comparing \compacttexttt{phi-rtn}{} and same-sized {\footnotesize \NGramMethodName{}$^{(\smallsim)}$}{} (Table~\ref{tbl:asr_accuracy:medium}), we note that while \compacttexttt{phi-rtn}{} performs significantly better on the tail, {\footnotesize \NGramMethodName{}$^{(\smallsim)}$}{} wins on the head. Can a combination of both models result in overall better performance on all test sets? Comparing {\footnotesize \NGramMethodName{}$^{(>)}$}{} (Table~\ref{tbl:asr_accuracy:large}) with \compacttexttt{phi-rtn}{} + {\footnotesize \NGramMethodName{}$^{(\smallsim)}$}{} (Table~\ref{tbl:asr_accuracy:medium}), the \numprint{11}\% smaller combination performs similarly on the head test set, but achieves a 10\% relative WER reduction on the tail set.
Finally, we did not observe a regression on the uniform sample of VA queries (\S\ref{section:experiments:statistics}) in terms of WER and end-to-end latency.
We answer our 2nd RQ: improvements in model coverage translate to lower WER. In addition, the sum is greater than the parts: %
combining both methods yields an ASR quality increase at less space than would be obtained when using larger back-off LMs.
\noindent \textbf{Discussion.} %
Since $\phi\text{-RTN}${} stores templates and entities as two separate sub-graphs, and $\SetSize{E{}} \gg \SetSize{T{}}$, the worst-case model size of $\phi\text{-RTN}${} is asymptotically bounded by the number of unique entities.
The space-saving of $\phi\text{-RTN}${} over back-off LMs (\S\ref{section:experiments:fst_representations}) occurs since back-off LMs need to explicitly represent the relation between template contexts and entities (i.e., explicit probabilities).
$\phi\text{-RTN}${} is suitable when the language used in the templates is sufficiently limited. Within our experiments, the $\phi\text{-RTN}${} model provides coverage for 99\% of queries represented by the grammar (\S\ref{section:experiments:statistics}). However, in other applications where template language may be more complex, $\phi\text{-RTN}${} may not be the best fit.
Even though $\phi\text{-RTN}${} has limitations, its inclusion in VA ASR systems allows the recognition of long tail entities that would otherwise not be recognized at all.
Finally, collisions between the template/entity vocabulary that would lead to reduced coverage can be detected at training time and addressed by, e.g., rewriting the template/entity lists or including the affected query in the training data of a complementary n-gram LM.
\section{Introduction}
Let $R$ be a finite ring equipped with a weight $w$. Two linear codes
$C, D \le {_RR^n}$ are \emph{isometrically equivalent} if there is an
isometry between them, i.e., an $R$-linear bijection $\varphi: C
\longrightarrow D$ that satisfies $w(\varphi(c))=w(c)$ for all $c\in
C$. We say that $\varphi$ \emph{preserves} the weight $w$.
MacWilliams in her doctoral dissertation \cite{macw63} and later
Bogart, Goldberg, and Gordon~\cite{bogagoldgord78} proved that, in the
case where $R$ is a finite field and $w$ is the Hamming weight, every
isometry is the restriction of a {\em monomial transformation} $\Phi$
of the ambient space $_RR^n$. A monomial transformation of $_RR^n$ is
simply a left linear mapping $\Phi: R^n\longrightarrow R^n$ the matrix
representation of which is a product of a permutation matrix and an
invertible diagonal matrix. Said another way, every Hamming isometry
over a finite field extends to a monomial transformation. This result
is often called the \emph{MacWilliams Extension Theorem} or the
\emph{MacWilliams Equivalence Theorem}.
With increased interest in linear codes over finite rings there arose
the natural question: could the Extension Theorem be proved in the
context of ring-linear coding theory? This question appeared
complicated, as two different weights were pertinent: the traditional
Hamming weight $w_{\rm H}$ and also a new weight $w_{\rm hom}$ called
the \emph{homogeneous} weight by its discoverers Constantinescu
and Heise~\cite{consheis97}.
In~\cite{wood99} Wood proved the MacWilliams Extension Theorem
for all linear codes over finite Frobenius rings equipped with the
Hamming weight. In the commutative case he showed in the same paper
that the Frobenius property was not only sufficient but also
necessary. In the non-commutative case, the necessity of the
Frobenius property was proved in \cite{wood08a}.
Inspired by the paper of Constantinescu, Heise, and
Honold~\cite{consheishono96} which used combinatorial methods to prove
the Extension Theorem for homogeneous weights on $\mathbb Z_m$, Greferath
and Schmidt~\cite{grefschm00} showed that the Extension Theorem is
true for linear codes over finite Frobenius rings when using the
homogeneous weight. Moreover, they showed that for all finite rings
every Hamming isometry between two linear codes is a homogeneous
isometry and vice versa.
The situation can be viewed as follows: for $R$ a finite ring, and
either the Hamming weight or the homogeneous weight, the Extension
Theorem holds for all linear codes in $R^n$ if and only if the ring is
Frobenius. This is a special case of more general results by
Greferath, Nechaev, and Wisbauer~\cite{grefnechwisb04} who proved that
if the codes are submodules of a quasi-Frobenius bi-module $_RA_R$
over any finite ring $R$, then the Extension Theorem holds for the
Hamming and homogeneous weights. The converse of this was proved by
Wood in \cite{wood09}. \medskip
Having understood all requirements on the algebraic side of the
problem, we now focus on the metrical aspect. This paper aims to
further develop a characterization of all weights on a finite
(Frobenius) ring, for which the corresponding isometries satisfy the
Extension Theorem.
In our discussion we will assume that the weights in question are
bi-invariant, which means that $w(ux) = w(x) = w(xu)$ for all $x\in R$
and $u\in R^\times$. Our main results do not apply to weights with
smaller symmetry groups such as the Lee or Euclidean weight (on
$R=\ensuremath{\field{Z}}_m$, except for $m\in\{2,3,4,6\}$), despite their importance for
ring-linear coding theory.
The goal of this paper is to give a necessary and sufficient condition
that a bi-invariant weight $w$ must satisfy in order for the Extension
Theorem to hold for isometries preserving $w$. We are not able to
characterize all such weights when the underlying ring is an arbitrary
Frobenius ring, but we do achieve a complete result for
\emph{principal ideal rings}. These are rings in which each left or
right ideal is principal, and they form a large subclass of the finite
Frobenius rings.
The present work is a continuation and generalization of earlier work
on this topic \cite{wood97, wood99a, grefhono05, grefhono06, wood09,
grefmcfazumb13}. As in \cite{grefhono06, grefmcfazumb13} the
M\"obius function on the partially ordered set of (principal, right)
ideals is crucial for the statement and proof of our main
characterization result; however, in contrast to these works we do not
need the values of the M\"obius function explicitly, but use its
defining properties instead to achieve a more general result. Our
restriction to principal ideal rings stems from our method of proof,
which requires the annihilator of a principal ideal to be principal.
The main result was proved for the case of finite chain rings in
\cite[Theorem~3.2]{grefhono05} (and in a more general form in
\cite[Theorem~16]{wood97}), in the case $\ensuremath{\field{Z}}_m$ in
\cite[Theorem~8]{grefhono06}, for direct products of finite chain
rings in \cite[Theorem~22]{grefmcfazumb13}, and for matrix rings over
finite fields in \cite[Theorem~9.5]{wood09} (see
Example~\ref{ex:examples} below). The main result gives a concrete
manifestation of \cite[Proposition~12]{wood97} and
\cite[Theorem~3.1]{wood99a}. Further to \cite{grefmcfazumb13} we
prove that our condition on the weight is not only sufficient, but
also necessary for the Extension Theorem, using an argument similar to
that in \cite{grefhono06, wood08a}. \medskip
Here is a short summary of the contents of the paper. In
Section~\ref{sec:prelims} we review the terminology of Frobenius
rings, M\"obius functions, and orthogonality matrices needed for the
statements and proofs of our main results. In addition, we prove a
result (Corollary~\ref{cor_smult}) that says that a right-invariant
weight $w$ on $R$ satisfies the Extension Property if the Hamming
weight $w_{\rm H}$ is a correlation multiple of $w$.
In Section~\ref{sec:orthog-matrices} we show that the Extension
Property holds for a bi-invariant weight if and only if its
orthogonality matrix is invertible. The main results are stated in
Section~\ref{sec:bi-inv-wts}. By an appropriate unimodular change of
basis, the orthogonality matrix can be put into triangular form, with
a simple expression for the diagonal entries (Theorem~\ref{thm:WQ}).
The Main Result (Theorem~\ref{maintheorem}) then says that the
Extension Property holds if and only if all the diagonal entries of
the orthogonality matrix are nonzero. A proof of Theorem~\ref{thm:WQ}
is given in Section~\ref{sec:proof}. \medskip
This paper is written in memory of our friend, teacher, and colleague
Werner Heise who, sadly, passed away in February 2013 after a long
illness. Werner has been very influential in ring-linear coding theory
through his discovery of the homogeneous weight on ${\mathbb Z}_m$
(``Heise weight'') and subsequent contributions.
\section{Notation and Background}%
\label{sec:prelims}
In all that follows, rings $R$ will be finite, associative and possess
an identity $1$. The group of invertible elements (units) will be
denoted by $R^\times$ or $\ensuremath{U}$. Any module $_RM$ will be unital,
meaning $1m=m$ for all $m\in M$.
\subsection*{Frobenius Rings}
We describe properties of Frobenius rings needed in this paper, as in
\cite{honold01}.
The character group of the additive group of a ring $R$ is defined as
$\widehat{R}:={\rm Hom}_{\mathbb Z}(R,{\mathbb C}^\times)$.
This group has the structure of an $R,R$-bimodule by defining
$\chi^r(x):=\chi(rx)$ and $^r\chi(x):=\chi(xr)$ for all $r,x\in R$,
and for all $\chi\in \widehat{R}$.
The \emph{left socle} ${\rm soc}(_RR)$ is defined as the sum of all
minimal left ideals of $R$. It is a two-sided ideal. A similar
definition leads to the \emph{right socle} ${\rm soc}(R_R)$ which is
also two-sided, but will not necessarily coincide with its left
counterpart.
A finite ring $R$ is
\emph{Frobenius} if one of the following four
equivalent statements holds:
\begin{itemize}\itemsep=1mm
\item $_RR \cong {_R\widehat{R}}$.
\item $R_R \cong {\widehat{R}_R}$.
\item ${\rm soc}(_RR)$ is left principal.
\item ${\rm soc}(R_R)$ is right principal.
\end{itemize}
For a finite Frobenius ring the left and right socles coincide.
Crucial for later use is the fact that finite Frobenius rings are
quasi-Frobenius and hence possess a perfect duality. This means the
following: Let $L(_RR)$ denote the lattice of all left ideals of $R$,
and let $L(R_R)$ denote the lattice of all right ideals of $R$. There
is a mapping $\perp: L(_RR) \longrightarrow L(R_R),\; I \mapsto
I^\perp$ where $I^\perp:= \{x\in R \mid Ix=0\}$ is the right
annihilator of $I$ in $R$. This mapping is an order anti-isomorphism
between the two lattices. The inverse mapping associates to every
right ideal its left annihilator.
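For example, in $R = \ensuremath{\field{Z}}_{12}$ the right annihilator of the ideal $d\ensuremath{\field{Z}}_{12}$ (for $d$ a divisor of $12$) is $(12/d)\ensuremath{\field{Z}}_{12}$; the following Python sketch verifies this instance of the order anti-isomorphism by brute force.
\begin{verbatim}
# Sketch: I -> I^perp on Z_12 reverses inclusion;
# Ann(d Z_12) = (12/d) Z_12 for each divisor d of 12.
m = 12

def ideal(d):
    return {(d * k) % m for k in range(m)}

def annihilator(I):
    return {x for x in range(m) if all((a * x) % m == 0 for a in I)}

for d in (1, 2, 3, 4, 6, 12):
    assert annihilator(ideal(d)) == ideal(m // d)
print("verified on Z_12")
\end{verbatim}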
\subsection*{Principal Ideal Rings}
A ring $R$ is \emph{left principal} if every left ideal is left
principal, similarly a ring is \emph{right principal} if every right
ideal is right principal. If a ring is both left principal and right
principal it is a {\em principal ideal ring}. Nechaev in
\cite{nechaev73} proved that ``a finite ring with identity in which
every two-sided ideal is left principal is a principal ideal ring.''
Hence every finite left principal ideal ring is a principal ideal
ring. Further, as argued in \cite{nechaev73}, the finite principal
ideal rings are precisely the finite direct sums of matrix rings over
finite chain rings. They form a subclass of the class of finite
Frobenius rings (since, for example, their one-sided socles are
principal).
\subsection*{M\"obius Function}
The reader who is interested in a more detailed survey of the
following is referred to \cite[Chapter~IV]{aigner}, \cite{rota64}, or
\cite[Chapter~3.6]{stanley}.
For a finite partially-ordered set (poset) $P$, we have the incidence
algebra
\[ {\mathbb A}(P) \;:=\; \{\,f: P\times P \longrightarrow {\mathbb Q}
\mid\, x \not\le y \;\; \mbox{implies} \;\; f(x,y)=0 \,\} \:. \]
Addition and scalar multiplication in ${\mathbb A}(P)$ are defined
point-wise; multiplication is convolution:
\[ (f*g)(a,b) = \sum_{a \le c \le b} f(a,c) \, g(c,b) \:. \]
The invertible elements are exactly the functions $f\in {\mathbb
A}(P)$ satisfying $f(x,x) \ne 0$ for all $x\in P$. In particular,
the characteristic function of the partial order of $P$ given by
\[ \zeta: P\times P \longrightarrow {\mathbb Q} \:,
\quad (x,y) \mapsto \left\{\begin{array}{lcl}
1 & : & x\le y\\
0 & : & \mbox{otherwise}
\end{array}\right. \]
is an invertible element of ${\mathbb A}(P)$. Its inverse is the {\em
M\"obius function\/} $\mu: P\times P \longrightarrow {\mathbb Q}$
implicitly defined by $\mu(x,x) = 1$ and \[ \sum_{x\le t \le y}
\mu(x,t) \;=\; 0 \] if $x<y$, and $\mu(x,y) = 0$ if $x \not\le y$.
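As a concrete instance, the defining recursion computes $\mu$ directly on any finite poset; the sketch below does so for the divisor lattice of $12$, recovering the familiar number-theoretic values $\mu(1,d)$.
\begin{verbatim}
# Sketch: Moebius function from its defining recursion,
# mu(x,y) = -sum_{x <= t < y} mu(x,t), on the divisors of 12.
from functools import lru_cache

P = [d for d in range(1, 13) if 12 % d == 0]   # 1, 2, 3, 4, 6, 12
leq = lambda x, y: y % x == 0                  # divisibility order

@lru_cache(maxsize=None)
def mu(x, y):
    if x == y:
        return 1
    if not leq(x, y):
        return 0
    return -sum(mu(x, t) for t in P
                if leq(x, t) and leq(t, y) and t != y)

print([mu(1, d) for d in P])   # [1, -1, -1, 0, 1, 0]
\end{verbatim}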
\subsection*{Weights and Code Isometries}
Let $R$ be any finite ring. By a \emph{weight} $w$ we mean any
$\ensuremath{\field{Q}}$-valued function $w: R \longrightarrow {\mathbb Q}$ on $R$,
without presuming any particular properties. As usual we extend $w$
additively to a weight on $R^n$ by setting
\[ w: R^n \longrightarrow {\mathbb Q} \:,\quad
x \mapsto \sum_{i=1}^n w(x_i) \:. \]
The \emph{left} and \emph{right symmetry groups} of $w$ are defined by
\[ G_{\mathrm{lt}}(w) := \{ u \in U: w(ux) = w(x), x \in R\} \:, \quad
G_{\mathrm{rt}}(w) := \{ v \in U: w(xv) = w(x), x \in R\} \:. \]
A weight $w$ is called \emph{left} (resp.\ \emph{right})
\emph{invariant} if $G_{\mathrm{lt}}(w) = U$ (resp.\ $G_{\mathrm{rt}}(w) = U$).
A (left) \emph{linear code} of length $n$ over $R$ is a submodule $C$
of $ {}_{R}R^{n}$. A {\em $w$-isometry\/} is a linear map $\varphi : C
\longrightarrow {}_{R}R^{n}$ with $w(\varphi(x)) = w(x)$ for all $x \in C$, i.e.,
a mapping that preserves the weight $w$.
A \emph{monomial transformation} is a bijective (left) $R$-linear
mapping $\Phi : R^{n} \longrightarrow R^{n}$ such that there is a
permutation $\pi\in S_n$ and units $u_1, \ldots, u_n\in \ensuremath{U}$ so that \[
\Phi(x_1, \ldots, x_n) \;=\; ( x_{\pi(1)} u_1 , \dots , x_{\pi(n)}
u_n ) \] for every $(x_1 , \dots , x_n ) \in R^{n}$. In other words,
the matrix that represents $\Phi$ with respect to the standard basis
of $_RR^n$ decomposes as a product of a permutation matrix and an
invertible diagonal matrix. A \emph{$G_{\mathrm{rt}}(w)$-monomial
transformation} is one where the units $u_i$ belong to the right
symmetry group $G_{\mathrm{rt}}(w)$. A $G_{\mathrm{rt}}(w)$-monomial transformation is a
$w$-isometry of $R^n$.
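A toy computation over $\ensuremath{\field{Z}}_6$ (where $\ensuremath{U} = \{1, 5\}$) illustrates that a monomial transformation preserves the Hamming weight:
\begin{verbatim}
# Sketch: a monomial transformation on Z_6^4 permutes coordinates
# and scales by units; the Hamming weight is unchanged.
m = 6

def monomial(x, perm, units):
    # Phi(x)_i = x_{perm(i)} * u_i  (mod m)
    return tuple((x[perm[i]] * units[i]) % m for i in range(len(x)))

def hamming(x):
    return sum(1 for c in x if c != 0)

x = (2, 0, 5, 3)
y = monomial(x, perm=(2, 0, 3, 1), units=(5, 1, 5, 1))
print(x, "->", y, "| weights:", hamming(x), hamming(y))  # 3 and 3
\end{verbatim}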
We say that a finite ring $R$ and a weight $w$ on $R$ satisfy the
\emph{Extension Property} if the following holds: For every positive
length $n$ and for every linear code $C\le {_RR^n}$, every injective
$w$-isometry $\varphi : C \longrightarrow {_RR}^{n}$ is the
restriction of a $G_{\mathrm{rt}}(w)$-monomial transformation of $_RR^{n}$. That
is, every injective $w$-isometry $\varphi$ extends to a monomial
transformation that is itself a $w$-isometry of $R^n$. \medskip
Let $w: R \longrightarrow {\mathbb Q}$ be a weight and let $f: R
\longrightarrow {\mathbb Q}$ be any function. We define a new weight
$wf$ as
\[ wf: R \longrightarrow {\mathbb Q} \:, \quad
x \mapsto \sum_{r\in R} w(rx)\,f(r) \:. \]
By the operation of \emph{right correlation} $(w,f)\mapsto wf$, the
vector space $V := \ensuremath{\field{Q}}^R$ of all weights on $R$ becomes a right module
$V_\ensuremath{A}$ over $\ensuremath{A} = \ensuremath{\field{Q}}[(R,\cdot)]$, the rational semigroup algebra of
the multiplicative semigroup $(R,\cdot)$ of the ring
(see~\cite{grefmcfazumb13}). For $r\in R$ denote by $e_r$ the weight
where $e_r(r) = 1$ and $e_r(s) = 0$ for $s\ne r$. Then $we_r$ is
simply given by $(we_r)(x) = w(rx)$.
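This action is easy to verify computationally. The following sketch
(ours, for illustration; the function name \texttt{correlate} is an
assumption) implements the right correlation on $R = {\mathbb Z}_4$
and confirms that $we_r$ is the map $x \mapsto w(rx)$:
\begin{verbatim}
w_lee = {0: 0, 1: 1, 2: 2, 3: 1}   # Lee weight on Z_4

def correlate(w, f, m=4):
    # right correlation: (wf)(x) = sum_r w(r*x) f(r) over Z_m
    return {x: sum(w[(r * x) % m] * f.get(r, 0) for r in range(m))
            for x in range(m)}

print(correlate(w_lee, {1: 1}))  # w e_1 = w: {0: 0, 1: 1, 2: 2, 3: 1}
print(correlate(w_lee, {3: 1}))  # (w e_3)(x) = w(3x): {0: 0, 1: 1, 2: 2, 3: 1}
\end{verbatim}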
Denote the natural additive extension of $wf$ to $R^n$ by $wf$ also.
\begin{lemma}\label{lem_smult}
Let $C \le {_RR^n}$ be a linear code and let $\varphi: C
\longrightarrow R^n$ be a $w$-isometry. Then $\varphi$ is also a
$wf$-isometry for any function $f: R \longrightarrow {\mathbb Q}$.
\end{lemma}
\begin{proof}
For all $x\in C$ we compute
\begin{align*}
(wf) (\varphi(x))
& \;=\; \sum_{r\in R} w(r\varphi(x)) \, f(r) \;=\;
\sum_{r\in R} w(\varphi(rx)) \, f(r) \\
& \;=\; \sum_{r\in R} w(rx) \, f(r)
\;=\; (wf)(x) \:. \qedhere
\end{align*}
\end{proof}
For a weight $w$ consider the $\ensuremath{\field{Q}}$-linear map $\tilde w: \ensuremath{A} \to V$,
$f\mapsto wf$. By Lemma~\ref{lem_smult}, if $\varphi$ is a
$w$-isometry then $\varphi$ is a $w'$-isometry for all $w'
\in\im\tilde w$. Note that $\im\tilde w = w\ensuremath{A} \le V_\ensuremath{A}$.
\subsection*{Weights on Frobenius Rings}
Now let $R$ be a finite Frobenius ring. We describe two approaches
that ultimately lead to the same criterion for a weight $w$ to satisfy
the Extension Property.
\subsubsection*{Approach~1} From earlier work~\cite{wood99} we know
that the Hamming weight $w_{\rm H}$ satisfies the Extension Property.
Combining this fact with Lemma~\ref{lem_smult}, we immediately obtain
the following result.
\begin{cor}\label{cor_smult}
Let $R$ be a finite Frobenius ring and let $w$ be a weight on $R$
such that $G_{\mathrm{rt}}(w) = U$ and $wf = w_{\rm H}$ for some function
$f:R\to\ensuremath{\field{Q}}$. Then $w$ satisfies the Extension Property.
\end{cor}
In other words, if $w$ is right-invariant and $w_{\rm H}\in\im\tilde
w$ then $w$ satisfies the Extension Property.
How can we make sure that $w_{\rm H}\in\im\tilde w$? One idea is to
show that the $\ensuremath{\field{Q}}$-linear map $\tilde w$ is bijective: Using the
natural basis $(e_r)_{r\in R}$ for $V$ and the property $(we_r)(s) =
w(rs)$ it is easy to see that $\tilde w$ is described by the transpose
of the matrix $(w(rs))_{r,s\in R}$. However, if the weight function
$w$ is left- or right-invariant {\em or} satisfies $w(0) = 0$ then
this matrix is not invertible. Therefore we work with a ``reduced''
version of the map $\tilde w$.
As before, let $V := \ensuremath{\field{Q}}^R$ be the vector space of all weights on $R$,
and let $V_0^\ensuremath{U}$ be the subspace of all weights $w$ satisfying $w(0) =
0$ that are right-invariant. Similarly, we define the subspace $^\ensuremath{U}
V_0$ of all weights $w$ with $w(0) = 0$ that are left-invariant. The
corresponding invariant subspaces of $\ensuremath{A} = \ensuremath{\field{Q}}[(R,\cdot)]$ are
$\ensuremath{A}_0^\ensuremath{U}$ and $^\ensuremath{U} \ensuremath{A}_0$, where $\ensuremath{A}_0 := \ensuremath{A} / \ensuremath{\field{Q}} e_0$.
If $w$ is a weight in $V_0^\ensuremath{U}$ then $wf\in V_0^\ensuremath{U}$ for {\em any}
function $f:R\to\ensuremath{\field{Q}}$, i.e., $\im\tilde w \le V_0^\ensuremath{U}$. In this case we
could examine the bijectivity of the $\ensuremath{\field{Q}}$-linear map $\tilde w:
\ensuremath{A}_0^\ensuremath{U} \to V_0^\ensuremath{U}$ (the restriction of the above map $\tilde w$).
But this map does not have a nice matrix representation; setting
$e_{s\ensuremath{U}} = \sum_{r\in s\ensuremath{U}}e_r$ and letting $(e_{s\ensuremath{U}})_{s\ensuremath{U}\ne 0}$ be
the natural basis for $\ensuremath{A}_0^\ensuremath{U}$ and for $V_0^\ensuremath{U}$, the entries of the
matrix turn out to be sums of several values $w(rus)$.
However, if we work with the restriction $\tilde w: {}^\ensuremath{U} \ensuremath{A}_0 \to
V_0^\ensuremath{U}$ instead and if the weight $w$ is bi-invariant (i.e., both
left- and right-invariant), then, with respect to the natural bases,
this $\ensuremath{\field{Q}}$-linear map does have a nice matrix description, namely the
orthogonality matrix. This will be explained below. If this map
$\tilde w$ is invertible, then $w$ satisfies the Extension Property by
Corollary~\ref{cor_smult}.
Note: Since $\im\tilde w$ is a submodule of $V_\ensuremath{A}$ it follows that
$w_{\rm H}\in\im\tilde w$ if and only if $\im\tilde w_{\rm H} \le \im\tilde w$.
Actually, $\im\tilde w_{\rm H} = V_0^\ensuremath{U}$ (see
Proposition~\ref{prop_om-reverse} below), so that $w_{\rm
H}\in\im\tilde w$ if and only if $V_0^\ensuremath{U} \subseteq \im\tilde w$.
This is why it is a sensible approach to investigate the
surjectivity/bijectivity of the map $\tilde w$.
\subsubsection*{Approach~2} The same orthogonality matrix that appears
in Approach~1 also appears in \cite{wood97}. By
\cite[Proposition~12]{wood97} (also, \cite[Theorem~3.1]{wood99a} and
\cite[Section~9.2]{wood09}), the invertibility of the orthogonality
matrix of $w$ implies that a $w$-isometry preserves the so-called
\emph{symmetrized weight composition} associated with $G_{\mathrm{rt}}(w)$.
Then, \cite[Theorem~10]{wood97} shows that any injective linear
homomorphism that preserves the symmetrized weight composition
associated with $G_{\mathrm{rt}}(w)$ extends to a $G_{\mathrm{rt}}(w)$-monomial
transformation. Thus, if the orthogonality matrix is invertible, any
$w$-isometry extends to a $G_{\mathrm{rt}}(w)$-monomial transformation, and hence
$w$ satisfies the Extension Property.
\subsection*{Orthogonality Matrices}
Let $R$ be a finite Frobenius ring. There is a one-to-one
correspondence between left (resp., right) principal ideals and left
(resp., right) $\ensuremath{U}$-orbits. Each $\ensuremath{U}$-orbit is identified with the
principal ideal of which its elements are the generators
(\cite[Proposition~5.1]{wood99}, based on work of Bass). Define for
$r, s\in R\setminus \{0\}$ the functions $\varepsilon_{R r} (x) = \size{\ensuremath{U}
r}^{-1} $ if $x \in \ensuremath{U} r$, i.e., if $Rr = Rx$, and zero otherwise;
similarly, let $e_{sR} (x) = e_{s\ensuremath{U}}(x) = 1$ if $xR = sR$ and zero
otherwise. Then $(\varepsilon_{R r})$ and $(e_{sR})$ are bases for $^\ensuremath{U} \ensuremath{A}_0$
and $V_0^\ensuremath{U}$, as $R r$ and $sR$ vary over all left and right nonzero
principal ideals of $R$, respectively.
For a bi-invariant weight $w$, define the \emph{orthogonality matrix}
of $w$ by $W_0 = \big(w(rs) \big){}_{Rr\ne 0,\,sR\ne 0}$. That is,
the entry in the $Rr, sR$-position is the value of the weight $w$ on
the product $rs \in R$. The value $w(rs)$ is well-defined, because
$w$ is bi-invariant. Note that $W_0$ is square; this follows from
work of Greferath \cite{gref02} that shows the equality of the number
of left and right principal ideals in a finite Frobenius ring.
\begin{prop}\label{prop_matrix}
Suppose $w$ is bi-invariant with $w(0)=0$. Then
\[ w \, \varepsilon_{R r} = \sum_{sR\ne 0} w(rs) \, e_{s R} \]
for nonzero $R r$, where the sum extends over all the nonzero right
principal ideals $s R$. In particular, the matrix representing the
$\ensuremath{\field{Q}}$-linear map $\tilde w: {}^\ensuremath{U} \ensuremath{A}_0 \to V_0^\ensuremath{U}$, $f\mapsto wf$,
with respect to the bases $(\varepsilon_{R r})$ and $(e_{sR})$, is the
transpose of the matrix $W_0$.
\end{prop}
\begin{proof}
Since $w\in V_0^\ensuremath{U}$ we have $w \, \varepsilon_{R r} \in V_0^\ensuremath{U}$, and therefore
\[ w \, \varepsilon_{R r} = \sum_{sR\ne 0} (w \, \varepsilon_{R r})(s) \, e_{s R} \:. \]
Calculating, using that $w\in {}^\ensuremath{U} V_0$, we get:
\[ (w \, \varepsilon_{R r})(s) = \sum_{t \in R} w(ts) \, \varepsilon_{R r}(t)
= \sum_{t \in Ur} \size{\ensuremath{U} r}^{-1} w(ts)
= w(rs) \:. \qedhere \]
\end{proof}
In the algebraic viewpoint of \cite{grefmcfazumb13}, $V_0^{\ensuremath{U}}$ is a
right module over $^\ensuremath{U} \!\ensuremath{A}_0$. Then, $W_0$ is invertible if and only
if $w$ is a generator for $V_0^{\ensuremath{U}}$.
If $R$ is a finite field and $w = w_{\rm H}$, the Hamming weight on
$R$, then $W_0$ is exactly the orthogonality matrix considered by
Bogart, Goldberg, and Gordon~\cite[Section~2]{bogagoldgord78}. More
general versions of the matrix $W_0$ have been utilized in
\cite{wood97, wood99a, wood09}.
\begin{example}
For $R={\mathbb Z}_4$ the Lee weight $w_{\rm Lee}$ assigns $0
\mapsto 0$, $1\mapsto 1$, $2\mapsto 2$ and $3\mapsto 1$. It is a
bi-invariant weight function, as is the Hamming weight $w_{\rm H}$
on $R$. Based on the natural ordering of the (nonzero) principal
ideals of $R$ as $2R < R$ the orthogonality matrix for $w_{\rm Lee}$
is
\[ W_0^{\rm Lee} \, = \, \left[ \begin {array}{cc}
0 & 2 \\
2 & 1
\end{array}\right], \]
whereas the orthogonality matrix for $w_{\rm H}$ is given by
\[ W_0^{\rm H} \, = \, \left[ \begin {array}{cc}
0 & 1\\
1 & 1
\end{array}\right]. \]
Both of these matrices are invertible over ${\mathbb Q}$ as observed
in \cite{gref02}, where it was shown that the Extension Property is
satisfied.
\end{example}
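The matrices of this example are small enough to check mechanically.
The sketch below (an added illustration) builds $W_0$ for a weight on
${\mathbb Z}_4$ from the generators $2$ and $1$ of the nonzero
principal ideals and confirms that both determinants are nonzero:
\begin{verbatim}
import numpy as np

gens = [2, 1]   # nonzero principal ideals of Z_4, ordered 2R < R
w_lee = {0: 0, 1: 1, 2: 2, 3: 1}
w_ham = {0: 0, 1: 1, 2: 1, 3: 1}

def W0(w, m=4):
    # orthogonality matrix: entry in position (Rr, sR) is w(rs)
    return np.array([[w[(r * s) % m] for s in gens] for r in gens])

print(W0(w_lee))                 # [[0 2], [2 1]]
print(np.linalg.det(W0(w_lee)))  # -4.0, nonzero
print(np.linalg.det(W0(w_ham)))  # -1.0, nonzero
\end{verbatim}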
\section{Orthogonality Matrices and the Extension Theorem}%
\label{sec:orthog-matrices}
In the present section we will show that invertibility of the
orthogonality matrix of a bi-invariant weight is necessary and
sufficient for that weight to satisfy the Extension Property. We
split this result into two statements.
\begin{prop}
Let $R$ be a finite Frobenius ring and let $w$ be a bi-invariant
weight on $R$. If the orthogonality matrix $W_0$ of $w$ is
invertible, then $w$ satisfies the Extension Property.
\end{prop}
\begin{proof}
Approach~1: by Proposition~\ref{prop_matrix} the matrix $W_0$
describes the $\ensuremath{\field{Q}}$-linear map $\tilde w: {}^\ensuremath{U} \ensuremath{A}_0 \to V_0^\ensuremath{U}$,
$f\mapsto wf$. Hence if $W_0$ is invertible the map $\tilde w$ is
bijective, and in particular $w_{\rm H}\in\im\tilde w$. Thus by
Corollary~\ref{cor_smult} the weight $w$ satisfies the Extension
Property.
Approach~2: apply \cite[Proposition~12]{wood97} or
\cite[Theorem~3.1]{wood99a}.
\end{proof}
We remark that in the foregoing discussion, $\ensuremath{\field{Q}}$ could be replaced
throughout by any field $K$ containing $\ensuremath{\field{Q}}$, for example $K = \ensuremath{\field{C}}$.
\begin{prop}\label{prop_om-reverse}
Let $R$ be a finite Frobenius ring, and let $w$ be a bi-invariant
rational weight on $R$ that satisfies the Extension Property. Then
the orthogonality matrix $W_0$ of $w$ is invertible.
\end{prop}
\begin{proof}
The proof mimics that of \cite[Theorem~4.1]{wood08a} and
\cite[Proposition~7]{grefhono06}. Assume $W_0$ singular for the
sake of contradiction. Then there exists a nonzero rational vector
$v = (v_{cR})_{cR\ne 0}$ such that $W_0 v = 0$. Without loss of
generality, we may assume that $v$ has integer entries. We proceed
to build two linear codes $C_+, C_-$ over $R$. Each of the codes
will have only one generator. The generator for $C_{\pm}$ is a
vector $g_{\pm}$ with the following property: for each ideal $cR \le
R_R$ with $v_{cR} > 0$ (for $g_+$), resp., $v_{cR} < 0$ (for $g_-$),
the vector $g_{\pm}$ contains $\abs{v_{cR}}$ entries equal to
$c$. To make these two generators annihilator-free, we append to
both a trailing $1\in R$. The typical codeword in $C_{\pm}$ is hence
of the form $a g_{\pm}$ for suitable $a \in R$. We compare $w(a
g_+)$ and $w(a g_-)$ for every $a \in R$ by calculating the
difference $D( a ) = w(a g_+) - w(a g_-)$. By our construction of
the generators $g_{\pm}$, we have \[ D( a ) \;=\; \sum_{cR\ne 0}
w(ac) \, v_{cR} \;=\; (W_0 v)_{Ra} \;=\; 0 \:, \] for all $a\in R$.
Thus $a g_+ \mapsto a g_-$ defines an injective $w$-isometry from
$C_+$ to $C_-$. The codes, however, are not monomially equivalent,
because their entries come from different right $\ensuremath{U}$-orbits; in
particular, this isometry cannot be the restriction of a
$G_{\mathrm{rt}}(w)$-monomial transformation, contradicting the
Extension Property.
\end{proof}
We summarize our findings in the following theorem.
\begin{theorem}
A rational bi-invariant weight function on a finite Frobenius ring
satisfies the Extension Property if and only if its orthogonality
matrix is invertible.
\end{theorem}
The ultimate goal is to give necessary and sufficient conditions on a
bi-invariant weight $w$ on a finite Frobenius ring $R$ so that its
orthogonality matrix $W_0$ is invertible. We are able to derive such
a result for finite principal ideal rings.
\subsection*{Extended Orthogonality Matrices}
Let $R$ be a finite Frobenius ring and let $w$ be a bi-invariant
weight function with $w(0) = 0$. The orthogonality matrix for the
weight $w$ was defined as $W_0 = \big( w(rs) \big){}_{Rr\ne 0,\,sR\ne
0}$. Now define the {\em extended} orthogonality matrix for $w$ as
$W = \big( w(rs) \big){}_{Rr,\,sR}$. In order to examine the
invertibility of $W_0$ we obtain a formula for $\det W$, the
determinant of the matrix $W$. (Note that $\det W$ is well-defined up
to multiplication by $\pm1 $, the sign depending on the particular
orderings of the rows and columns of $W$.) First we relate $\det W$
to $\det W_0$, viewing $w(0)$ as an indeterminate.
\begin{prop}\label{prop_W-W0}
The determinant $\det W_0$ is obtained from $\det W$ by dividing
$\det W$ by $w(0)$ and then setting $w(0) = 0$.
\end{prop}
\begin{proof}
We treat $w(0)$ as an indeterminate $w_0$. Up to a sign change in
$\det(W)$, we may assume that the rows and columns of $W$ are
arranged so that the first row is indexed by $R0$ and the first
column is indexed by $0R$. Then $W$ has the form
\[ W = \left[ \begin{array}{c|c}
w_0 & w_0 \,\cdots\, w_0 \\ \hline
w_0 & \\
\vdots & W' \\
w_0 &
\end{array} \right] . \]
By subtracting the first row from every other row, we find that
$\det W = w_0 \det(W'-w_0J)$, where $J$ is the all-one matrix.
Finally the matrix $W_0$ equals the matrix $W'-w_0J$ evaluated at
$w_0 = 0$, so that $\det W_0 = \det(W' - w_0J)|_{w_0 = 0}$.
\end{proof}
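Proposition~\ref{prop_W-W0} can be sanity-checked symbolically on $R
= {\mathbb Z}_4$, treating $w(0)$ as an indeterminate $w_0$; the
following sketch (an added illustration relying on \texttt{sympy})
recovers $\det W_0 = -4$ for the Lee weight of the earlier example:
\begin{verbatim}
import sympy as sp

w0 = sp.Symbol('w_0')
w = {0: w0, 1: 1, 2: 2, 3: 1}   # Lee weight with w(0) indeterminate
gens = [0, 2, 1]                # principal ideals 0R < 2R < R of Z_4
W = sp.Matrix([[w[(r * s) % 4] for s in gens] for r in gens])

detW = sp.expand(W.det())
print(detW)                              # -w_0**3 + 4*w_0**2 - 4*w_0
print(sp.cancel(detW / w0).subs(w0, 0))  # -4 = det(W_0)
\end{verbatim}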
Note that the extended orthogonality matrix $W$ is not invertible for
weights $w$ satisfying $w(0) = 0$.
\section{Bi-invariant Weights with Invertible Orthogonality\\
Matrix on Principal Ideal Rings}%
\label{sec:bi-inv-wts}
Let $R$ be a finite principal ideal ring, and let $w$ be a
bi-invariant weight on $R$. Assume $W$ is the extended orthogonality
matrix of $w$. We are interested in the determinant of $W$ and look
for a way to evaluate this determinant.
We will define an invertible matrix $(Q_{cR,Rx})_{cR,\, Rx}$ with
determinant $\pm 1$ and multiply $W$ by $Q$ from the right to arrive
at $WQ$; then $\det(WQ) = \pm\,\det(W)$. The most significant
advantage of considering $WQ$, rather than $W$, is that $WQ$ will be a
lower triangular matrix for which we can easily calculate the
determinant.
Define for any finite ring the matrix $Q$ by
\[ Q_{cR, Rx} \;:=\; \mu( (Rx)^\perp, cR ) \: , \]
for $cR\le R_R$ and $Rx\le {_RR}$, where $\mu$ is the M\"obius
function of the lattice $L^*$ of all right ideals of $R$.
\begin{lemma}\label{lemQinvertible}
For a finite principal ideal ring $R$, the matrix $Q$ is an
invertible matrix with determinant $\pm1$.
\end{lemma}
\begin{proof}
We claim that the inverse of $Q$ is given by $T_{Ra, bR} :=
\zeta(bR, (Ra)^\perp)$, where $\zeta$ is the indicator function of
the poset $L^*$, meaning
\[ \zeta(xR, yR) \;=\; \left\{\begin{array}{ccl}
1 & : & xR \le yR \:, \\
0 & : & \mbox{otherwise} \:.
\end{array}\right.\]
We compute the product $TQ$,
\[ (TQ)_{Ra,Rx} \;=\; \sum_{cR} { \zeta(cR, (Ra)^\perp)
\, \mu ( (Rx)^\perp , cR)} \:. \]
By the definition of $\zeta$ and the fact that
$\mu((Rx)^\perp,cR) =0$ unless $(Rx)^\perp \le cR$, the expression
above simplifies to
\[ (TQ)_{Ra,Rx} \;= \sum_{ (Rx)^\perp \le cR \le (Ra)^\perp} {\mu
((Rx)^\perp , cR)} \:, \] which is $1$ for $(Rx)^\perp =
(Ra)^\perp$ and $0$ otherwise by the definition of the M\"obius
function.
The matrix $T$ is upper triangular with $1$s on the main diagonal.
Thus $\det T$ and hence $\det Q$ equal $\pm 1$. (The $\pm 1$ allows
for different orders of rows and columns.)
\end{proof}
\begin{example}
Let $R := \ensuremath{\field{F}}_q[x,y] / \langle x^2, y^2 \rangle$, which is a
commutative local Frobenius ring. (When $q=2^k$, $R$ is isomorphic
to the group algebra over $\ensuremath{\field{F}}_{2^k}$ of the Klein $4$-group.) Here,
$(Rxy)^\perp = xR + yR$ is not principal and thus the above proof
does not apply; in fact, the matrix $Q$ turns out to be singular in
this case.
On the other hand, the Frobenius ring $R$ is not a counter-example
to the main result below. In fact, $\det(W_0) = \pm q \,
w(xy)^{q+3}$ satisfies the formula in
\eqref{eq:det-W0-factorization} below (up to a nonzero
multiplicative constant), so that the main result still holds over
$R$.
\end{example}
We are now ready to state the main theorems. The proof of the next
result is contained in the final section.
\begin{theorem}\label{thm:WQ}
If $R$ is a finite principal ideal ring, then the matrix $WQ$ is
lower triangular. The diagonal entry at position $(Ra, Ra)$ is
$\sum\limits_{dR \le aR} w( d ) \, \mu(0, dR)$.
\end{theorem}
We conclude that the determinant of $WQ$ and hence that of $W$ is
given by
\[ \det(W) \;=\; \pm\, \det(WQ) \;=\;
\pm\prod_{aR} \, \sum_{dR\le aR} w(d)\, \mu(0,dR) \:. \]
Applying Proposition~\ref{prop_W-W0} we find the determinant of $W_0$
to be
\begin{equation}
\label{eq:det-W0-factorization}
\det(W_0) \;=\; \pm\prod_{aR\ne 0} \, \sum_{0\ne dR\le
aR} w(d)\, \mu(0,dR) \:,
\end{equation}
as in $\det(W)$ the term $aR = 0$ provides a factor of $w(0)$ which
gets divided away, and in each remaining term the contribution from
$dR =0R$ is $w(0)$ which is set equal to $0$.
This yields our main result: a characterization of all bi-invariant
weights on a principal ideal ring that satisfy the Extension Property.
\begin{theorem}[Main Result]\label{maintheorem}
Let $R$ be a finite principal ideal ring and let $\mu$ be the
M\"obius function of the lattice $L^*$ of all right ideals of $R$.
Then a bi-invariant rational weight $w$ on $R$ satisfies the
Extension Property if and only if
\[ \sum_{0\ne dR\le aR} w(d)\, \mu(0,dR) \;\ne\, 0\quad \mbox{for all
$aR \ne 0$} \:. \]
\end{theorem}
The condition in Theorem~\ref{maintheorem} needs to be checked only
for nonzero right ideals $aR\leq\soc(R_R)$, since we have
$\mu(0,dR)=0$ if $dR\not\leq\soc(R_R)$ (see
\cite[Proposition~2]{st:homippi}, for example) and since every right
ideal contained in $\soc(R_R)$ is principal. As a consequence, the
Extension Property of $w$ depends only on the values of $w$ on the
socle of $R$.
\begin{example}
For a chain ring $R$, the main result simply says that a
bi-invariant weight function $w$ satisfies the Extension Property if
and only if it does not vanish on the socle of $R$ (compare with
\cite{grefhono05} and \cite[Theorem~9.4]{wood09}). For $R =
{\mathbb Z}_4$, it states that a bi-invariant weight will satisfy
the Extension Property if and only if $w(2) \ne 0$.
\end{example}
\begin{example}
Let $R := {\mathbb Z}_m$. The nonzero ideals in $\soc(\ensuremath{\field{Z}}_m)$ are of
the form $a\ensuremath{\field{Z}}_m$ with $a\mid m$ and $m/a>1$ square-free. The
M\"obius function of such an ideal is
$\mu(0,a\ensuremath{\field{Z}}_m)=\mu(m/a)=(-1)^r$, where $\mu(\cdot)$ denotes the
one-variable M\"obius function of elementary number theory and $r$
is the number of different prime divisors of $m/a$. According to
Theorem~\ref{maintheorem}, an invariant weight $w$ on $\ensuremath{\field{Z}}_m$ has the
Extension Property if and only if
\[ \sum_{s\mid\frac{m}{a}} w(sa) \, \mu
\Big( \frac{m}{sa} \Big) \;=\; (-1)^r \,
\sum_{s\mid\frac{m}{a}} w(sa) \, \mu(s) \;\ne\; 0\]
for all (positive) divisors $a$ of $m$ such that $m/a$ is
square-free and $>1$. We thus recover the main theorem of
\cite{grefhono06}.
\end{example}
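This criterion for ${\mathbb Z}_m$ is easily automated. The sketch
below (our illustration; it relies on \texttt{sympy}'s
\texttt{divisors} and \texttt{mobius}) tests the condition for all
relevant divisors of $m$:
\begin{verbatim}
from sympy import divisors, mobius

def has_extension_property(w, m):
    # for every divisor a of m with m/a > 1 square-free, require
    # sum_{s | m/a} w(s*a) * mu(m/(s*a)) != 0
    for a in divisors(m):
        n = m // a
        if n > 1 and mobius(n) != 0:   # n is square-free iff mu(n) != 0
            total = sum(w[(s * a) % m] * mobius(n // s) for s in divisors(n))
            if total == 0:
                return False
    return True

w_lee4 = {0: 0, 1: 1, 2: 2, 3: 1}
print(has_extension_property(w_lee4, 4))                    # True: w(2) != 0
print(has_extension_property({0: 0, 1: 1, 2: 0, 3: 1}, 4))  # False: w(2) = 0
\end{verbatim}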
\begin{example}\label{ex:examples}
Let $R := {\rm Mat}_n(\ensuremath{\field{F}}_q)$, $n\geq2$, the ring of
$n\times n$ matrices over the finite field $\ensuremath{\field{F}}_q$ with $q$ elements,
so that $\ensuremath{U} = {\rm GL}_n(\ensuremath{\field{F}}_q)$. The ring $R$ is a finite principal
ideal ring that is not a direct product of chain rings. For each
matrix $A\in R$, the left $\ensuremath{U}$-orbit $\ensuremath{U} A$ can be identified with
the row space of $A$, and similarly, the right $\ensuremath{U}$-orbits
correspond to the column spaces.
Let $w$ be a bi-invariant weight on $R$. Its value $w(A)$ depends
only on the rank of the matrix $A$, and therefore we can write
$w([{\rm rank}\,A]) := w(A)$. Now for $n=2$, the main result says
that $w$ satisfies the Extension Property if and only if $w([1])\ne
0$ and $q\,w([2]) \ne (q+1)\,w([1])$. For $n=3$, $w$ satisfies the
Extension Property if and only if $w([1])\ne 0$, $\,q\,w([2]) \ne
(q+1)\,w([1])$, and $q^3\,w([3]) + (q^2+q+1)\,w([1]) \ne
(q^2+q+1)\,q\,w([2])$.
It was shown in \cite[Theorem~9.5]{wood09} that the relevant
non-vanishing sums are
\begin{equation}\label{eq:wmatrixterm}
\sum_{i=1}^{s}{(-1)^i q^{(\upontop{i}{2})}
\left[\upontop{s}{i}\right]_q w([i]) } \:,
\end{equation}
where $[\upontop{s}{i}]_q$ is the $q$-nomial (Cauchy binomial)
defined as
\[ \left[\upontop{k}{l}\right]_q \,:=\;
\frac{(1 - q^k)(1-q^{k-1}) \dots (1-q^{k-l+1})}
{(1-q^l)(1-q^{l-1}) \dots (1-q)} \:. \]
The {\em rank metric} $w([k]) := k$ satisfies these conditions.
First we state the Cauchy binomial theorem:
\begin{equation*}\label{eq:cauchythm}
\prod_{i=0}^{k-1}{(1 + xq^i)} \;=\;
\sum_{j=0}^{k}{\left[\upontop{k}{j}\right]_q q^{(\upontop{j}{2})} x^j} \:.
\end{equation*}
Now we write the term in \eqref{eq:wmatrixterm} for the rank metric,
changing the sign and including $i=0$ trivially in the sum. This can
then be seen as the evaluation of a derivative.
\[ \sum_{i=0}^{s}{i(-1)^{i-1} \ q^{(\upontop{i}{2})}
\left[\upontop{s}{i}\right]_q } \;=\;
\left. \frac{d}{dx}\sum_{i=0}^{s}{x^i q^{(\upontop{i}{2})}
\left[\upontop{s}{i}\right]_q }\right|_{x=-1} \:. \]
Applying the Cauchy binomial theorem and evaluating the derivative
with the product rule yields:
\[ \left. \frac{d }{dx} \prod_{i=0}^{s-1}{(1+xq^i)} \right|_{x=-1}
\;=\; \sum_{j=0}^{s-1} q^j \mathop{\prod_{i=0}^{s-1}}_{i\ne j}{(1-q^i)}
\;=\; \prod_{i=1}^{s-1}{(1-q^i)} \:, \]
since every summand other than $j=0$ contains the vanishing factor
$1-q^0$. The expression on the right is nonzero provided $q \ne \pm
1$, independent of $s$. Hence the rank metric satisfies the
Extension Property for all $q$ and $n$.
\end{example}
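As a numerical cross-check of this example (added for illustration
only), the alternating sums can be evaluated directly and compared
with the closed form $\prod_{i=1}^{s-1}(1-q^i)$ derived above:
\begin{verbatim}
from functools import reduce

def gauss_binom(k, l, q):
    # q-nomial [k choose l]_q, evaluated at an integer q
    num = reduce(lambda p, i: p * (1 - q**(k - i)), range(l), 1)
    den = reduce(lambda p, i: p * (1 - q**(l - i)), range(l), 1)
    return num // den

def rank_sum(s, q):
    # the sum of Eq. (eq:wmatrixterm) for w([i]) = i, with the sign reversed
    return sum(i * (-1)**(i - 1) * q**(i * (i - 1) // 2) * gauss_binom(s, i, q)
               for i in range(1, s + 1))

for q in (2, 3, 4):
    for s in range(1, 6):
        closed = reduce(lambda p, i: p * (1 - q**i), range(1, s), 1)
        assert rank_sum(s, q) == closed != 0
print("rank-metric sums are nonzero for all tested q and s")
\end{verbatim}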
\begin{example}
More generally, let $R = {\rm Mat}_n(S)$ be a matrix ring over a
finite chain ring $S$. Then $\soc(R_R) = \soc({}_RR) = {\rm
Mat}_{n\times n}(\soc S) \cong {\rm Mat}_{n\times n}(\ensuremath{\field{F}}_q)$ as a
(bi-)module over the residue class field $S/\rad S\cong\ensuremath{\field{F}}_q$. Hence
the previous example applies and characterizes all bi-invariant
weights $w\colon R\to\ensuremath{\field{Q}}$ having the Extension Property.
\end{example}
\begin{example}
Any finite semisimple ring is a direct product of matrix rings over
finite fields and therefore a principal ideal ring. Hence, the main
result also applies to this case.
\end{example}
\section{A Proof of Theorem~\ref{thm:WQ}}%
\label{sec:proof}
We perform the matrix multiplication and see that the entry of $WQ$ in
position $(Ra,Rb)$ is given by the expression
\[ (WQ)_{Ra,Rb} \;=\; \sum_{cR} W_{Ra,cR} \, Q_{cR,Rb} \;=\;
\sum_{cR} w(ac)\, \mu((Rb)^\perp ,cR) \:. \]
According to the definition of the M\"obius function, $\mu((Rb)^\perp,
cR)$ can be nonzero only when $(Rb)^\perp \le cR$ (or: when $cR \in
[(Rb)^\perp, R]$, using interval notation on the lattice $L^*$ of all
right (necessarily principal) ideals of $R$). With this in mind we
write
\begin{equation}\label{main}
(WQ)_{Ra,Rb} \;= \sum_{cR \in [(Rb)^\perp, R]} w(ac) \, \mu((Rb)^\perp ,cR) \:.
\end{equation}
\subsection*{Diagonal Entries}
The diagonal terms of $WQ$ are given by
\[ (WQ)_{Ra,Ra} \;= \sum_{ cR \in [(Ra)^\perp,R]} w(ac)\, \mu((Ra)^\perp , cR) \:. \]
For an element $a\in R$ consider the left multiplication operator
$L_a: R \longrightarrow aR,\; t\mapsto at$. The mapping $L_a$ is a
(right) $R$-linear mapping with kernel $(Ra)^\perp$, and the
isomorphism theorem yields an induced order isomorphism of intervals
\[ \nu_a:[(Ra)^\perp , R] \longrightarrow [0,aR] \:, \quad
J \mapsto aJ \:. \]
It follows that if $J_1, J_2\in [(Ra)^\perp,R]$, then $\mu(J_1,J_2) =
\mu ( \nu_a(J_1), \nu_a(J_2) ) = \mu (aJ_1, aJ_2)$.
The diagonal term simplifies to
\begin{align*}
(WQ)_{Ra,Ra} &\;=\;
\sum_{cR \in [(Ra)^\perp,R]} w(ac)\, \mu((Ra)^\perp , cR) \\
&\;=\; \sum_{acR \in [0,a R]} w(ac)\, \mu(0, acR) \\
&\;=\; \sum_{dR \in [0,a R]} w(d)\, \mu(0, dR) \:,
\end{align*}
where we have applied the above interval isomorphism with $J_1 =
(Ra)^\perp$ and $J_2 = cR$, followed by the relabeling $acR=dR$.
Finally, observe that the formula $(WQ)_{Ra,Ra} = \sum_{dR \in [0,a
R]} w(d)\, \mu(0, dR)$ does not depend on the choice of generator
$a$ for the left ideal $Ra$. Indeed, any other generator has the form
$ua$, where $u$ is a unit of $R$. Left multiplication by $u$ induces
an order isomorphism of intervals $\nu_u: [0,aR] \longrightarrow
[0,uaR]$, so that $\mu(0,dR) = \mu(0,udR)$ for all $dR \in [0,aR]$.
Since $w$ is left-invariant, we have $w(ud) = w(d)$, and the right
side of the formula is well-defined.
\subsection*{Lower Triangularity}
Now let us return to the general form of the matrix $WQ$ given in
\eqref{main}. We would like to prove that $WQ$ is lower triangular,
i.e., that $Rb \nleq Ra$ will imply that $(WQ)_{Ra,Rb}=0$. To that
end, assume
\begin{equation} \label{eq:lower-tri}
Rb \nleq Ra \:.
\end{equation}
As above, the left multiplication operator $L_a$ induces a mapping
$\lambda_a : [0,R] \rightarrow [0,aR]$, which in turn induces a partition on
$[0,R]$ in a natural way. We first rewrite the general expression for
$(WQ)_{Ra,Rb}$ taking into account this partition.
\[ (WQ)_{Ra,Rb} \;= \sum_{dR \in [0, aR]} w(d)
\mathop{\sum_{cR \in [(Rb)^\perp,R] }}_{\lambda_a(cR)=dR} \mu((Rb)^\perp ,cR) \:. \]
Our goal is to examine the inner sum and show that it vanishes for
every $dR$ in question. In other words, we will show that \[
\mathop{\sum_{cR \in [(Rb)^\perp,R] }}_{\lambda_a(cR)=dR}
\mu((Rb)^\perp ,cR) \;=\; 0 \:,\quad \mbox{for all $dR \le aR$} \:. \]
We do this by induction on $dR$ in the partially ordered set $[0,aR]$.
Accordingly, we assume the existence of some $dR\in [0,aR]$ which is
minimal with respect to the property that
\[ \mathop{\sum_{cR \in [(Rb)^\perp,R] }}_{\lambda_a(cR)=dR} \mu((Rb)^\perp ,cR)
\;\ne\; 0 \:. \]
Consider the right ideal $K := L_a^{-1}(dR) = \sum\limits_{acR \le dR}
cR$. For this ideal we have $(Ra)^\perp \le K$, and moreover, $cR \le
K$ is equivalent to $acR \le dR$. For this reason
\[ \mathop{\sum_{cR \in [(Rb)^\perp,R] }}_{acR\le dR} \mu((Rb)^\perp ,cR)
\;=\; \sum_{ cR \in [(Rb)^\perp , K]} \mu((Rb)^\perp ,cR) \:. \]
By properties of $\mu$, the latter expression is nonzero if and only
if $K=(Rb)^\perp$. This would however imply $(Rb)^\perp \ge
(Ra)^\perp$ (because $(Ra)^\perp \le K$) and hence $Rb\le Ra$,
contrary to assumption \eqref{eq:lower-tri}. Hence, we conclude
\[ 0 \;= \mathop{\sum_{cR \in [(Rb)^\perp,R]}}_{acR\le dR} \mu((Rb)^\perp ,cR)
\;= \mathop{\sum_{cR \in [(Rb)^\perp,R]}}_{acR=dR} \mu((Rb)^\perp ,cR) +
\mathop{\sum_{cR \in [(Rb)^\perp,R]}}_{acR<dR} \mu((Rb)^\perp ,cR) \:. \]
In this equation the minimality property of $dR$ implies that the last
term vanishes. This finally forces
\[ \mathop{\sum_{cR \in [(Rb)^\perp,R]}}_{acR=dR} \mu((Rb)^\perp ,cR) \;=\; 0 \:, \]
contradicting the minimality property of $dR$. Lower triangularity
follows and this finishes the proof of Theorem~\ref{thm:WQ}.\qed
Note that this proof heavily relies on the hypothesis that $R$ is a
finite principal ideal ring. For a general finite Frobenius ring the
architecture of a proof will need to be vastly restructured.
Nonetheless, we conjecture that the main result, as stated, holds over
any finite Frobenius ring.
\bibliographystyle{amsplain}
\section{Introduction}
Magnetic null points (locations where all three components of the magnetic field are zero) have been associated with sites of magnetic reconnection ever since the notion of magnetic reconnection was first conceived \citep{Parker57,Sweet58}. It is now known that, in three dimensions (3D), they are not the only possible location for reconnection \citep[e.g.,][]{Schindler88,Hesse88,Hornig03,Priest03}, which may also take place in magnetic separators \citep{Longcope96,Longcope01,Haynes07,Parnell08,Parnell10a,Parnell10b} and quasi-separatrix layers \citep{Priest95,Demoulin96,Demoulin97,Aulanier06,Restante09,Wilmot09}. However, 3D null points still remain key sites for current accumulation and energy release. Using SOHO MDI (Michelson Doppler Imager) magnetogram data, \citet{Longcope09} estimate that there are around 20,000 nulls above 1.5 Mm in the solar atmosphere during solar minimum. Moreover, due to the rapid fall off in complexity of the magnetic field above the photosphere, more nulls are likely to occur below 1.5 Mm than are above.
Over the years 3D magnetic null points have been studied in detail on a number of different levels. The first set of papers considers the basic structure of nulls \citep[e.g.][]{Cowley73, Fukao73} with a comprehensive account of the linear structure of all possible 3D nulls provided in \citet{Parnell96}. The general geometry of 3D magnetic null points involves two important features. The field lines that extend into/out of the null itself lie either along a pair of lines, known as {\it spines}, or on a surface, known locally as the {\it fan}. The field lines in the fan are all directed either away from the null, forming what is known as a positive (or B-type) null, or into the null, creating a negative (A-type) null. In the case of a positive null the spines are directed into the null and for a negative null they are directed away.
A second set considers the nature of possible reconnection scenarios at 3D nulls driven by plasma flows of varying types, which may depend on the behaviour of the flow pattern relative to the spine and fan. These perturbations may be caused by a local driver \citep{Rickard96,Bulanov02,Pontin04,Pontin05a,Pontin07a,Pontin07b,Pontin07c,Priest09,Masson09,Wyper10,Alhachami10,Pontin11} or by an indirect perturbation \citep{Pariat09,Pariat10,Edmondson09,Masson09,Masson12} far away from the null. In particular, flow patterns that twist the field about the spine, known as torsional reconnection regimes, dissipate currents that lie parallel to the spine and result in the slippage of the field lines about the spine. Flow patterns across the fan or spine, known as fan-spine reconnection regimes, are associated with currents perpendicular to the spine and cause a shearing of the spine and/or fan leading to reconnection at the null itself.
In general terms, reconnection studies consider potential 3D nulls that are driven by some external force initiating reconnection. However, an arguably more realistic scenario is that flows in the system generate currents resulting in the collapse of the null in a particular way, since non-potential nulls are generically unstable \citep{Parnell97}, and leading to the accumulation of current at, or in the vicinity of, the null. Papers that look at the accumulation of currents and their associated equilibria form a third set of papers \citep[e.g. ][]{Pontin05b,Fuentes12c}. This paper belongs to this set.
Current accumulations due to spiral nulls in which the field twists about the spine were studied in \cite{Fuentes12c}, the previous paper to this one in the series on `Dynamical relaxation of coronal magnetic fields'. We looked at the formation of a static non-force-free equilibrium due to a torsional-type disturbance at a 3D null point. In that paper, current accumulations were found due to the viscous non-resistive MHD relaxation of a non-potential null with a spine-aligned current density. In the present paper, we use the same approach, but here we study current accumulations due to a shearing-type disturbance to a 3D null, in which the fan and spine fold towards one another. Here we consider a non-potential null with a fan-aligned current density. The current accumulations formed must be different to those found in \cite{Fuentes12c} since the reconnection in the two cases is completely different.
\citet{Pontin05b} has considered the case of such a perturbation, associated with a component of the current that is strictly perpendicular to the spine. They find a collapse of the fan and spine towards each other and the formation of a current singularity at the location of the null in a non-force-free equilibrium. In their paper, they consider the nature of this singularity and find that it scales with the numerical resolution in an equivalent manner to the 2D singularities studied by \citet{Craig05}. Also, they find that increasing the pressure does not stop the formation of the singularity, but does significantly reduce its strength. Both of these studies used a frictional Lagrangian code with a fictitious damping term, $-\kappa{\bf v}$, added to the momentum equation to enable the field to relax to an equilibrium. This model has an inherent limitation, since the damping term is not physical. Indeed, a polytropic model with $p\propto \rho^\gamma$ is used for the pressure, which imposes adiabaticity on the relaxation process.
\citet{Fuentes11} considered very similar, but subtly different, experiments to \citet{Craig05} involving the collapse of non-potential 2D X-points embedded within a plasma. One key difference is that \citet{Fuentes11} were able to follow the dynamical evolution of the system and the energetics, by using an MHD relaxation driven by viscous forces, which naturally has an associated heating term in the energy equation. Thus, the system does not evolve adiabatically. While \citet{Craig05} focus on the scaling laws for the formation of the singularity, \citet{Fuentes11} focus on the actual non-force-free equilibrium obtained due to the presence of a plasma pressure and the nature of the singularity as it grows in time. \citet{Fuentes11} show that the field left after the viscous relaxation has finished is in a ``quasi-equilibrium'' state where the central current layer, which stretches out a short way along the separatrices, slowly changes its shape becoming thinner and shorter. Indeed, the system is found to have entered an asymptotic regime and is heading towards an infinite time singularity.
The approach of \citet{Fuentes11} is quite different to the frictional, adiabatic relaxation of \citet{Craig05}. Similarly, the study we carry out here uses a different relaxation to the frictional, adiabatic approach of \citet{Pontin05b} because it involves the viscous, non-resistive, non-adiabatic relaxation of 3D nulls. Furthermore, the model we consider here differs from \citet{Fuentes12c} because here the collapse of the null and current accumulation is associated with a current that is purely perpendicular to the spine as opposed to parallel to the spine. Additionally, here we also consider the MHD relaxation associated with a more general current that has components both perpendicular and parallel to the spine, unlike either of the previous papers.
More specifically, the aim of this paper is to investigate the nature of current accumulations at 3D radial magnetic nulls following a collapse due to the presence of an initial homogeneous current density whose component perpendicular to the spine is non-zero. This may be thought of as the result of a shearing-fan disturbance. The characteristics of any infinite time singularities will be determined, if they arise. We evaluate the effects of the plasma pressure in the evolution, while both the initial disturbance and the background plasma pressure are changed systematically. This work is a continuation of the work carried out in \citet{Fuentes10,Fuentes11} and \citet{Fuentes12c} on non-resistive MHD relaxation of magnetic fields embedded in non-zero beta plasmas.
The paper is structured as follows. In Sec. \ref{sec3}, we present the equations that define the initial configuration and the details of the numerical experiments. The results for 3D tilted nulls that are initially radial and have a fan-aligned current are presented in Sec. \ref{sec4}, while, in Sec. \ref{sec5}, the more general results for a radial null with components of current that are both parallel and perpendicular to the spine can be found. Finally, we conclude with a general overview of the problem in Sec. \ref{sec7}.
\section{Initial magnetic field configurations and numerical scheme} \label{sec3}
The initial magnetic field is chosen to be that of a positive linear 3D null point located at the origin and associated with a constant current density. Like the initial field in \citet{Fuentes12c}, the spine of the null is chosen to be along the $z$-axis; however, unlike in \citet{Fuentes12c}, the initial magnetic field here has a component of current perpendicular to the spine (along the $x$-axis and parallel to the fan), rather than parallel to the spine. In the first set of experiments, the current is purely perpendicular to the spine, and in the second set it is a general current with components both parallel and perpendicular to the spine.
The current density is chosen to be constant and can be written in terms of the component parallel to the fan, $j_f$, and parallel to the spine, $j_{sp}$, as
\begin{equation}
{\bf j}=\frac{1}{\mu}(j_f, 0, j_{sp})\;.\nonumber
\end{equation}
Following \citet{Parnell96} the magnetic field, ${\bf B}$, is
\begin{equation}
\label{M_general}
{\bf B} = \left(x-\frac{j_{sp}}{2}y,\frac{j_{sp}}{2}x+by,j_fy-(b+1)z\right)\;,
\end{equation}
where $b>0$ in order to ensure that the spine lies along the $z$-axis and that the null is positive.
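As a quick consistency check (not part of the simulation setup; the
short script below is an added illustration using \texttt{sympy}), one
can verify that this field is solenoidal for any $b$ and carries
exactly the prescribed current density:
\begin{verbatim}
import sympy as sp

x, y, z, b, jf, jsp = sp.symbols('x y z b j_f j_sp')
B = sp.Matrix([x - jsp/2*y, jsp/2*x + b*y, jf*y - (b + 1)*z])

div_B = sp.diff(B[0], x) + sp.diff(B[1], y) + sp.diff(B[2], z)
curl_B = sp.Matrix([sp.diff(B[2], y) - sp.diff(B[1], z),
                    sp.diff(B[0], z) - sp.diff(B[2], x),
                    sp.diff(B[1], x) - sp.diff(B[0], y)])

print(sp.simplify(div_B))   # 0: the field is divergence-free for any b
print(list(curl_B))         # [j_f, 0, j_sp], i.e. mu*j as prescribed
\end{verbatim}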
\begin{figure*}[t]
\begin{minipage}[b]{1.0\linewidth}
\begin{minipage}[b]{0.50\linewidth}
\centering
\includegraphics[scale=0.32]{figure01a.eps}
\end{minipage}
\begin{minipage}[b]{0.50\linewidth}
\centering
\includegraphics[scale=0.30]{figure01b.eps}
\end{minipage}
\caption{Magnetic configuration for the initial non-equilibrium state with homogeneous fan-aligned current, for the case with $j_{f}=1$ and $p_0=1$, showing (a) the 3D configuration with field lines above and below the fan in purple and orange, respectively. The fan plane is outlined in dashed black, and the spine is represented in green, with projections onto the $xz$-plane and $yz$-plane (dashed green lines). In (b), the spine (blue) and the section of the fan plane (pink) are plotted in the $x=0$ plane, and the grey arrows show the direction of the initial Lorentz forces.}
\label{fig:fan_initial}
\end{minipage}
\vspace{0.3cm}
\begin{minipage}[b]{1.0\linewidth}
\begin{minipage}[b]{0.50\linewidth}
\centering
\includegraphics[scale=0.32]{figure02a.eps}
\end{minipage}
\begin{minipage}[b]{0.50\linewidth}
\centering
\includegraphics[scale=0.30]{figure02b.eps}
\end{minipage}
\caption{Magnetic configuration for the final equilibrium state for the radial tilted null shown in Fig. \ref{fig:fan_initial}, showing (a) the 3D configuration with field lines above and below the fan in purple and orange, respectively. The fan plane is outlined in dashed black and the spine is represented in green, with projections onto the $xz$-plane and $yz$-plane in dashed green lines. In (b), the spine (blue) and the section of the fan plane (pink) are plotted in the $x=0$ plane.}
\label{fig:fan_final}
\end{minipage}
\end{figure*}
As in \citet{Fuentes12c}, for the numerical experiments we used Lare3D, a staggered Lagrangian-remap code that solves the full MHD equations with user-controlled viscosity \citep[see][]{Arber01}. The resistivity is set to zero for our experiments, and the numerical domain is a 3D uniform grid of $512^3$ points. The boundary conditions are closed, the magnetic field lines are line-tied, and the MHD equations are normalised following a standard procedure. For further details on the boundary conditions and normalisation, see Sec. 3 of \citet{Fuentes12c}.
Thus, the normalised ideal MHD equations are as follows:
\begin{eqnarray}
\frac{\partial \rho}{\partial t}+\mbox{\boldmath$\nabla$}\cdot(\rho{\bf v}) &=& 0\;,\label{n_mass}\\
\rho\frac{\partial{\bf v}}{\partial t}+\rho({\bf v}\cdot\mbox{\boldmath$\nabla$}){\bf v} &=& -\mbox{\boldmath$\nabla$} p + (\mbox{\boldmath$\nabla$}\times{\bf B})\times{\bf B} + {\bf F}_{\nu}\;,\label{n_motion}\\
\frac{\partial p}{\partial t}+{\bf v}\cdot\mbox{\boldmath$\nabla$} p &=& -\gamma p \mbox{\boldmath$\nabla$}\cdot{\bf v}+H_{\nu}\;,\label{n_energy}\\
\frac{\partial{\bf B}}{\partial t} &=& \mbox{\boldmath$\nabla$}\times({\bf v}\times{\bf B})\;,\label{n_induction}
\end{eqnarray}
where ${\bf F}_{\nu}$ and $H_{\nu}$ are the viscous force and viscous heating terms, and the internal energy, $\epsilon$, is given by the ideal gas law, $p=\rho\epsilon(\gamma-1)$, with $\gamma=5/3$.
\section{Fan-parallel current density} \label{sec4}
\subsection{Initial state}
We first look at the relaxation of initial configurations of a magnetic null point with constant current density everywhere in the direction parallel to the fan surface, but perpendicular to the spine (i.e., along the $x$-axis), of the form $(j_{f},0,0)$. For simplicity, we assume $b=1$ in Eq. (\ref{M_general}), so that the field lines have no preferred direction in the fan plane, but spread out uniformly within it.
\begin{figure*}[t]
\begin{minipage}[b]{1.0\linewidth}
\centering
\includegraphics[scale=0.60]{figure03.eps}
\caption{Contour plots of the different forces acting in the final equilibrium state in the $x=0$ plane, for the same experiment as in Fig. \ref{fig:fan_initial}. Showing, from left to right, the magnitude of the pressure force ($-|\mbox{\boldmath$\nabla$} p|$), of the Lorentz force and of the total force. Values are normalised to the maximum force of the initial state. It can be observed that the pressure and Lorentz forces balance each other out, creating a clearly non-force-free equilibrium.}
\label{fig:fan_forces}
\end{minipage}
\vspace{0.3cm}
\begin{minipage}[b]{1.0\linewidth}
\begin{minipage}[b]{0.49\linewidth}
\centering
\includegraphics[scale=0.35]{figure04a.eps}
\end{minipage}
\hspace{0.02\linewidth}
\begin{minipage}[b]{0.49\linewidth}
\centering
\includegraphics[scale=0.32]{figure04b.eps}
\end{minipage}
\caption{Contour plot (left) and corresponding surface (right) of the magnitude of the electric current density in the final equilibrium state, for the same experiment as in Fig. \ref{fig:fan_initial}, in a cross section in the $x=0$ plane. }
\label{fig:fan_current}
\end{minipage}
\end{figure*}
Our initial magnetic field is then given by
\begin{equation}
(B_x, B_y, B_z)=(x, y, j_f\,y-2z)\;.
\end{equation}
Initially, the spine line lies along the $z$-axis, and the fan plane is not perpendicular to the spine (unlike in \citet{Fuentes12c}). Instead, it is tilted about the $x$-axis (the direction of the current density), and lies in the plane
\begin{equation}
z=\frac{j_f}{3}y\;,
\end{equation}
\citep{Parnell96}. We ran various experiments with different initial plasma pressures and current densities. Figure \ref{fig:fan_initial} shows the magnetic configuration of the initial state, for $j_{f}=1$ and $p_0=1$.
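The spine and fan quoted above follow from the eigenvectors of the
Jacobian of ${\bf B}$; the short sketch below (an added illustration,
unrelated to the Lare3D runs) makes this explicit:
\begin{verbatim}
import sympy as sp

jf = sp.Symbol('j_f')
M = sp.Matrix([[1, 0, 0],    # Jacobian of B = (x, y, j_f*y - 2z)
               [0, 1, 0],
               [0, jf, -2]])

for eigval, mult, vecs in M.eigenvects():
    print(eigval, [list(v) for v in vecs])
# eigenvalue -2 (spine): direction (0, 0, 1), the z-axis
# eigenvalue +1 (fan, multiplicity 2): spanned by (1, 0, 0) and a
# multiple of (0, 1, j_f/3), i.e. the plane z = (j_f/3) y
\end{verbatim}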
Initially, for such a non-potential null, the Lorentz force, ${\bf j}\times{\bf B}$, has non-zero $y$ and $z$ components. The forces acting in the initial state are shown in Fig. \ref{fig:fan_initial}b: they push the spine to the right above the fan and to the left below it, and push the fan up for positive values of $y$ and down for negative values of $y$. Therefore, the spine and fan will collapse into each other in the direction of the original tilt of the fan.
\subsection{Final equilibrium state}
We initially concentrate on the case shown in Fig. \ref{fig:fan_initial}, with $j_f=1$ and $p_0=1$. The numerical simulations finish at about 300 fast magnetosonic travel times (defined as the time for a fast magnetosonic wave to travel from the null to one of the boundaries). The magnetic field configuration in the final state is shown in Fig. \ref{fig:fan_final}. In comparison to the initial state (Fig. \ref{fig:fan_initial}), the relaxation collapses the spine towards the fan in the region near the null, but the fan plane is only slightly perturbed. The total energy is conserved throughout the relaxation to within an error of 0.02\%. This conservation of energy demonstrates that numerical diffusion does not play a significant role in the relaxation. Less than 2\% of the initial magnetic energy is converted to internal energy via viscous damping during the relaxation.
The final state is a non-force-free equilibrium where the non-zero Lorentz forces are globally balanced by non-zero plasma pressure forces. In Fig. \ref{fig:fan_forces}, we show the different forces in the vertical $x=0$ plane. In this experiment, the system has reached a non-force-free equilibrium with the strongest pressure force and Lorentz force concentrated around the spine line and about the fan plane, and the field is close to force-free everywhere else. Weak residual total forces remain near the fan away from the null. These forces are the consequence of the line-tying of the field (of the fan plane in particular) at the boundaries, and they would take much longer to disappear completely. Nonetheless, they do not influence the results of the present study.
\begin{figure}[t]
\centering
\includegraphics[scale=0.32]{figure05.eps}
\caption{Plots of the magnitude of the current density for three different cuts, along the spine line (black), along the tilt-axis ($x$-axis) of the fan (dashed blue) and along the $y$-axis on the fan surface (dashed orange).}
\label{fig:fan_along}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=0.35]{figure06.eps}
\caption{Contour plot of the equilibrium plasma pressure for the same experiment as in Fig. \ref{fig:fan_initial}, for a vertical cross section in the $x=0$ plane.}
\label{fig:fan_pressure}
\end{figure}
The final distribution of the magnitude of the current density in the $x=0$ plane is shown in Fig. \ref{fig:fan_current}. Not surprisingly, the current accumulations appear at the same locations as the strongest pressure forces and Lorentz forces shown in Fig. \ref{fig:fan_forces}, namely, on the fan plane and along the spine. The current density is approximately zero away from the fan and the spine. It shows a weak accumulation along the spine line, but is about one order of magnitude stronger on the fan plane. At the location of the null we find a pronounced peak of the current density. The whole picture is reminiscent of the 2D X-point collapse \citep{Fuentes11}, in which the current accumulates along the four separatrices and the system evolves towards an infinite time singularity at the location of the null (but contrasts with \citet{Fuentes12c} where the current accumulates in the vicinity of the spines). \citet{Parnell97} proved that the magnetic field locally about a non-potential 3D null point, i.e. the linear field about a non-potential null, produces a Lorentz force that cannot be balanced by a plasma pressure force, and so there are no static equilibrium models of linear non-potential or force-free nulls. Therefore, even if the system is in global equilibrium in the final state, the collapse of the fan and spine towards each other does not allow a potential configuration at the null, and locally, the pressure forces cannot balance the Lorentz forces there, which gives rise to the formation of a singularity. Unfortunately, the analysis of forces about the null used in \citet{Fuentes11} to evaluate the formation of the singularity cannot be repeated here, because at this grid resolution the small local region in which these forces act spans only a few grid points. The formation of a continuously growing singularity at the location of the null is evaluated in Sec. \ref{sec:sing}.
The final current layer around the location of the null can be better appreciated in Fig. \ref{fig:fan_along}. Here, we show three different cuts of the current density, along the line of the spine, along the tilt axis of the fan surface ($x$-axis), and along the $y$-axis of the fan surface. We see that the current density on the fan plane is enhanced to over ten times the initial background current density, and it is higher in a thin layer along the $x$-axis, which is the initial tilt axis of the fan. The amplitude of the current layer is greatest at the location of the null, corresponding to the pronounced peak observed in Fig. \ref{fig:fan_current}. The distribution of current density observed in Fig. \ref{fig:fan_along} agrees qualitatively with the results of \citet{Pontin05b}, who considered the collapse of the same initial field using a different numerical method. The comparison shows that the way the current density accumulates about the null is not an artefact of the numerical method, but a physical result.
In Fig. \ref{fig:fan_pressure}, we show the final distribution of plasma pressure in the $x=0$ plane. The plasma pressure is enhanced inside the regions where the spine and fan have collapsed towards one another, but decreased in the regions outside. The highest residual gradients in pressure occur at the locations of the fan and spine, and the gradient of plasma pressure from positive to negative $y$, far from the null point (or far from the spine line in general), is not as sharp as it is in the $x=0$ plane (the plane shown in the figure). Once again, this result brings out the fact that the collapse of a 3D null point resembles the 2D collapse in several ways.
\subsection{Singular current} \label{sec:sing}
Finally, for this setup, we want to evaluate the formation of the singularity by looking at the time evolution of the current at the location of the null, for experiments with different initial plasma pressures and different initial currents (inclinations of the fan plane). In particular, we want to see whether a current singularity forms and, if it does, whether it forms in a finite or an infinite time.
\citet{Klapper97} rigorously demonstrated that, for 2D ideal incompressible plasmas, a singularity of the current density will take an infinite amount of time to develop, unless driven by a pressure singularity occurring outside the neighbourhood of the null point. \citet{Fuentes11} showed numerically that this is also the case for the 2D compressible collapse of magnetic X-points, and here, we aim to extend these results for 3D nulls with fan-aligned current.
\begin{figure}[t]
\includegraphics[scale=0.32]{figure07.eps}
\caption{Time evolution of the peak current for five experiments with different initial plasma pressures and an initial current of $j_f=1$. When a good asymptotic regime is achieved, a fit of $j_{max}=C+Dt$ is overplotted (red dashed line).}
\label{fig:fan_peak}
\vspace{0.3cm}
\includegraphics[scale=0.32]{figure08.eps}
\caption{The same as Fig.~\ref{fig:fan_peak}, but for four experiments with different initial current densities and an initial plasma pressure of $p_0=1$. }
\label{fig:fan_peak2}
\end{figure}
The ideal MHD relaxation of a 3D null point with an initial homogeneous current density parallel to the fan plane, causing (or caused by) a tilt of the fan, results in the collapse of the fan and the spine towards each other. The current density accumulates weakly along the spine and on the fan plane, forming a ridge of current along the tilt axis of the fan (the axis of the initial current) which spikes at the null itself. The continuous concentration of current at the location of the null is a natural consequence of the results of \citet{Parnell97}, according to which the Lorentz forces cannot be balanced by plasma pressure forces about the 3D null. These forces slowly feed current into the location of the null, forming an infinite time singularity. Figures \ref{fig:fan_peak} and \ref{fig:fan_peak2} show the time evolution of this peak current for varying initial plasma pressures and current densities, respectively. In Fig. \ref{fig:fan_peak}, the current is set to $j_f=1$ and the pressure varies from $p_0=0.5$ to $p_0=1.5$. After the rapid viscous relaxation, the system enters a regime in which the maximum current follows a linear function of the form $j_{max}=C+Dt$, where the growth rate of the singularity, $D$, decreases as the initial plasma pressure, $p_0$, increases, as was found in the compressible 2D null collapse experiments of \citet{Fuentes11}. For the experiments with the lowest initial plasma pressures, a fit of this form cannot be made, as the numerical diffusion limit is reached before a well-defined linear regime has been achieved.
Similar linear behaviour is found in Fig. \ref{fig:fan_peak2}, where the pressure is set to the constant value $p_0=1$, and the current density is varied from $j_f=0.25$ to $j_f=1$. When the initial current of the system is decreased, the current in the singularity naturally decreases too, as does the rate at which the singularity forms (Fig.~\ref{fig:fan_peak2}). A higher initial current density produces a bigger collapse of the fan and spine towards each other, hence an increase in the magnitude and growth rate of the singularity. Indeed, when the initial current is particularly weak, the collapse is practically negligible, as is the rate of growth of the singularity.
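For reference, fits of the form $j_{max}=C+Dt$ can be obtained by
restricting a least-squares fit to the late-time asymptotic regime.
The sketch below illustrates the procedure on synthetic placeholder
data (the arrays are stand-ins of the right qualitative shape, not
simulation output):
\begin{verbatim}
import numpy as np

# t and j_max would come from the simulation diagnostics
t = np.linspace(0.0, 300.0, 601)
j_max = 12.0 + 0.1 * t + 0.5 * np.exp(-t / 20.0)

late = t > 150.0                            # keep only the asymptotic regime
D, C = np.polyfit(t[late], j_max[late], 1)  # j_max ~ C + D*t
print(f"growth rate D = {D:.3f}, intercept C = {C:.2f}")
\end{verbatim}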
\begin{figure}[t]
\centering
\includegraphics[scale=0.32]{figure09.eps}
\caption{3D magnetic configuration of the initial non-equilibrium state with generic current density, for the case with $j_{sp}=j_f=1$ and $p_0=1$, showing the same features as in Fig. \ref{fig:fan_initial}.}
\label{fig:generic_initial}
\vspace{0.3cm}
\end{figure}
\section{Generic current density} \label{sec5}
In the previous section, we considered the MHD relaxation of an initial radial null with a component of current in the fan plane, perpendicular to the spine. In such an initial field the fan plane is tilted, with the current aligned with the axis of tilt. In this section, we consider the numerical equilibrium resulting from the MHD relaxation of an initial radial null that has a generic current, with non-zero components along both the spine and the fan. To our knowledge, the collapse of these types of nulls has not been considered before. Here, the magnetic field of the initial null consists of a tilted fan, as in the previous case, but this time with twisted field lines around the spine. \citet{Fuentes12c} considered the dynamical relaxation of a null with only a component of current parallel to the spine, which gives rise to twisted field about the spine, but maintains the fan and spine at right angles.
\subsection{Initial state}
A 3D null configuration of a radial null with a generic current density of the form ${\bf j}=\frac{1}{\mu}(j_f, 0, j_{sp})$ is considered. From Eq. (\ref{M_general}), the magnetic field of the initial state is given by
\begin{equation}
(B_x, B_y, B_z)=(x-\frac{j_{sp}}{2}y,\; \frac{j_{sp}}{2}x+y,\; j_f\,y-2z)\;.
\end{equation}
Here again we have assumed $b=1$ for simplicity. For the case studied here we have chosen the background plasma pressure to be $p_0=1$ and, for the current density, $j_{sp}=j_f=1$. Thus, in the initial state, the current density vector lies in the $xz$-plane, at $45^{\circ}$ to both the $x$ and the $z$ axis, and its magnitude is $|j|=\sqrt{2}$ everywhere. Initially, the spine line lies along the $z$-axis, the fan plane tilts about the $x$-axis following the function $z=\frac{j_f}{3}y$, and the magnetic field lines show a homogeneous twist about the spine (see Fig. \ref{fig:generic_initial}). The Lorentz forces that act in the system initially are a combination of two contributions. The first is the one shown in Fig. \ref{fig:fan_initial}b, which will drive the collapse of the fan and spine towards one another. The second is a magnetic tension force, as described in \citet{Fuentes12c}, which will remove the twist from the fan surface.
\subsection{Final equilibrium state}
\begin{figure}[t]
\centering
\includegraphics[scale=0.32]{figure10.eps}
\caption{3D magnetic configuration for the final equilibrium with generic current density, for the same experiment as in Fig. \ref{fig:generic_initial}, showing the same features as in Fig. \ref{fig:fan_initial}.}
\label{fig:generic_final}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=0.35]{figure11.eps}
\caption{Contour plot of the magnitude of the electric current density in the final equilibrium state, for the same experiment as in Fig. \ref{fig:generic_initial}, in a cross section in the $x=0$ plane.}
\label{fig:gen_current}
\end{figure}
The results of the ideal relaxation from this initial configuration shown in Fig. \ref{fig:generic_initial} combine the features from the relaxation of spiral nulls \citep{Fuentes12c} and those of tilted nulls, from Sec.~\ref{sec4}. The magnetic field lines in the final equilibrium state are shown in Fig. \ref{fig:generic_final}. The relaxation drives the collapse of the fan and the spine towards each other at the same time as it concentrates the initial twist of the field lines about the spine line, above and below the fan surface. This can be more clearly seen in Fig. \ref{fig:gen_current}, where we plot the magnitude of the current density for the final equilibrium state, in the $x=0$ plane. It concentrates in three different regions: (i) on the fan surface, due to the spine-fan collapse; (ii) around the spine above the fan; and (iii) around the spine below the fan. Clearly, accumulation (i) is the same as what is seen in Fig. \ref{fig:fan_current} and is therefore associated with the component of current perpendicular to the spine, whereas the last two accumulations result from a concentration of the initial twist of the field lines about the spine and resemble the hourglass configuration obtained in \citet{Fuentes12c}. They are thus associated with the component of current parallel to the spine.
We now evaluate the consequences of the two deformations (twist and tilt) on the formation and growth rate of the singularity at the null. In Fig. \ref{fig:fanspine_peak} we show the time evolution of the peak current density (the current density at the null) for five different experiments with a fixed background plasma pressure ($p_0=1$) and varying both $j_f$ and $j_{sp}$ independently. When $j_{sp}$ is varied and $j_f$ is fixed, the growth rate of the singularity does not vary significantly, as the torsional disturbance leads to a current accumulation about the spines, not at the null. However, a decrease in $j_f$ for fixed $j_{sp}$ slows down the formation of the singularity, as discussed below.
\begin{figure}[t]
\includegraphics[scale=0.32]{figure12.eps}
\caption{Time evolution of the peak current for five experiments with different initial current densities involving $j_f$ and $j_{sp}$, and an initial plasma pressure of $p_0=1$. When a good asymptotic regime is achieved, a fit of $j_{max}=C+Dt$ is overplotted (red dashed).}
\label{fig:fanspine_peak}
\end{figure}
When the initial current density has the form ${\bf j}=(j_f,0,j_{sp})$ with $j_{sp}=j_f=1$, the contributions of the current density initially parallel and perpendicular to the spine are the same as in the two independent cases studied before: the case $j_f=1$ and $j_{sp}=0$ studied previously in this paper, and the case $j_f=0$ and $j_{sp}=1$ studied in \citet{Fuentes12c}. The magnitude and growth rate of the singularity for $j_f=1$ and $j_{sp}=1$ ($D=0.09$) is to be compared with the results from Sec. \ref{sec:sing} for the case $j_f=1$ and $j_{sp}=0$ ($D=0.1$). The addition of a spine-aligned component of the current density slows down the formation of the singularity.
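The growth rates $D$ quoted here and in Fig. \ref{fig:fanspine_peak} are obtained from linear fits of the form $j_{max}=C+Dt$ over the asymptotic regime. A minimal sketch of such a fit (on synthetic data standing in for the measured peak current; the cut-off time is a hypothetical choice) is:
\begin{verbatim}
# Sketch of the linear fit j_max = C + D*t used to estimate the
# growth rate D of the singularity (synthetic data for illustration).
import numpy as np

t = np.linspace(0.0, 300.0, 601)                 # hypothetical times
j_max = 2.0 + 0.09*t + 0.01*np.random.randn(t.size)

late = t > 100.0                                 # asymptotic regime only
D, C = np.polyfit(t[late], j_max[late], 1)       # slope D, offset C
print(f"growth rate D = {D:.3f}, offset C = {C:.3f}")
\end{verbatim}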
\begin{figure}
\centering
\includegraphics[scale=0.1]{figure13.eps}
\caption{Plots of the $yz$-plane at $x=0$ showing the spine and the cut along the fan plane, for (a) the experiments with fan parallel current density only and (b) the experiments with generic current density. In both plots, the non-zero components of the initial current density are varied systematically.}
\label{fig:yzcuts}
\end{figure}
The decrease in the growth rate of the singularity may be explained as follows. The fan-aligned component of the current density, $j_f$, curves the spine line making it longer within the boundaries of the domain. This way, the relaxation of the twist component of the magnetic field (identified with the spine-aligned component of the current density) requires more current to be allocated along and around the spine than in the absence of $j_f$ (when the spine line is straight), thereby decreasing the amount of current that is available for the formation of the singularity.
The curvature of the spine line may be appreciated in Fig. \ref{fig:yzcuts}. Here we show plots of the $yz$-plane at $x=0$, indicating the spines and the cuts along the fan plane for different experiments. Figure \ref{fig:yzcuts}a corresponds to the experiments of Sec.~\ref{sec4} with fan-aligned current density. Clearly, as $j_f$ increases, the spines become more curved and get longer. Figure \ref{fig:yzcuts}b corresponds to the experiments discussed in the present section with generic current density. We note that changes in $j_{sp}$, for fixed $j_f$, do not involve a change in the shape of the spine line or in the fan plane, i.e. they all lie under the solid lines of the $j_{sp}=j_f=1$ case. On the contrary, increasing values of $j_f$, for fixed $j_{sp}$, lead to larger deformations of the spine and the fan, hence of the current layer about the null. This is due to the greater initial inclination of the fan plane that, upon the ideal MHD relaxation of the system, is transferred into a greater folding of the spine towards the fan plane.
Overall, when currents perpendicular and parallel to the spine are combined, the resulting non-force-free equilibrium combines the results from a strictly tilted null with those from a strictly spiral null. The spiraling of the field lines along the spine does not concentrate any current at the location of the null \citep{Fuentes12c}, so it is not a direct contribution to the formation of the singularity. Nonetheless, the addition of a current density parallel to the spine is found to slow down the formation of the singularity itself.
\section{Summary and conclusions} \label{sec7}
The 3D relaxation of tilted magnetic null points with an initial non-zero component of the current density perpendicular to the spine has been investigated under non-resistive conditions. In all cases the system relaxes to a non-force-free equilibrium in which all velocities have been damped out by viscous forces. In the final equilibrium states, the forces due to the plasma pressure and the magnetic field, which are of about the same strength as the initial non-equilibrium Lorentz forces, balance each other out, save at the location of the null, where an infinite time singularity occurs, similar to the 2D X-point collapse studied by \citet{Fuentes11}.
During the relaxation, for the case with an initial radial null and constant current density purely parallel to the fan (i.e., a tilted fan on which field lines expand radially), the field evolves by warping the fan and the spine so that, locally about the null, they collapse towards each other, concentrating the current density on the fan surface, but leaving it weak along the spine. The amplitude of the resulting current density on the fan is largest along the $x$-axis, the initial tilt axis of the fan along which the initial and final current densities are directed, and it shows a sharp peak at the location of the null.
After the viscous relaxation has finished, the system has entered an asymptotic regime in which the current at the location of the null increases linearly with time. This behaviour is identified with the slow formation of an infinite time singularity whose growth rate depends directly on the values of the initial plasma pressure and current density (tilt of the fan). A decrease in the plasma pressure at a fixed current density, or an increase in the electric current density at a fixed plasma pressure, produces an increase in the growth rate of the singularity being formed at the location of the null.
Overall, the structure of the equilibrium is very similar to the 2D X-point collapse studied in \citet{Fuentes11}; in particular, (i) the current density accumulations along the fan and spine are equivalent to the accumulations found along the separatrices in the 2D X-point case, (ii) the collapse and deformation of these features towards each other locally about the null is observed in both cases, as is (iii) the formation of an infinite time singularity at the null. The results also agree with past studies of compressible 3D nulls with current purely perpendicular to the spine that collapse to form current singularities \citep{Pontin05b}. However, we have been able to study the exact distribution of the current in the final equilibrium state in more detail. Furthermore, our numerical code allows us to determine the full dynamical evolution of the system, and therefore we are able to get a numerical growth rate for the singularity, which directly depends on the initial plasma pressure.
In \citet{Fuentes12c} where the initial magnetic field was twisted about the spine owing to current being parallel to the spine (along the $z$-axis), (i) the spine and fan remained perpendicular and did not collapse, (ii) the current (which remained parallel to the spine) accumulated in a pair of cone shape features about the spine, and (iii) no current singularity was found at the null in the radial null case. Thus the current accumulations are totally different in these two scenarios, as expected.
The effects of a more generic current with components along both the fan and spine have not been considered before the present paper. In the cases where the initial current density has both components parallel and perpendicular to the spine, the initial field shows a tilted fan on which field lines expand in spirals around the null. The resulting non-force-free equilibrium combines the two contributions, from the tilted fan and the twisted field lines, creating a current density that concentrates on the fan plane, building a singularity at the location of the null, and along the spine line, concentrating the twist of the field lines there. In principle, the results are a simple combination of the twisted and tilted cases, and so the current density should be evenly distributed between the two contributions, even though the magnitude and growth rate of the singularity are decreased. This is because the spine is now longer than in the $j_f=0$ and $j_{sp}=1$ case (because it is bent), so the relaxation of the twist requires slightly more current, thereby slightly decreasing the amount of current available for the singularity at the null. The addition of a spine-aligned component of the current density slows down the formation of the singularity.
In every case considered here, we set $b=1$ in the magnetic field. This means that the two eigenvalues that are associated with the fan plane have the same magnitude, and thus, the fan plane has no major and minor axes, so the field lines have no preferred direction in the fan plane \citep{Parnell96}. The presence of major or minor axes in a fan plane may well have an effect on the equilibria obtained following the collapse of such a null with a non-zero component of current.
Magnetic reconnection is an important process of energy release in scenarios like the solar corona and the Earth's magnetotail. In the second case, it provides a mechanism for particle acceleration and for the global aurora. In the first case, it is the mechanism for particle acceleration, solar flares, and coronal mass ejections, and is highly likely to provide an important source of energy for the high temperatures observed in the corona. In the present paper, we have described a valid initial equilibrium field for the study of spontaneous magnetic reconnection, initiated by microinstabilities at a 3D magnetic null. Such a study is a natural continuation of the work carried out by \citet{Fuentes12a} and \citet{Fuentes12b} on spontaneous reconnection and the onset of impulsive bursty reconnection at non-force-free current layers.
\section*{Acknowledgements}
The authors would like to thank the referee for many useful comments that helped us to clarify the details of the experiments carried out in this paper. JFF gratefully acknowledges funding from the St Andrews Rolling Grant (ST/H001964/1). This work was partly supported by SOLAIRE European training network. Computations were carried out on the UKMHD consortium cluster funded by the STFC and SRIF.
\bibliographystyle{aa}
\chapter[SDN-IoT based Smart City Framework]{An SDN-IoT-based Framework for Future Smart Cities: Addressing Perspective}
\chapterinitial{I}n \emph{this Chapter}, a software-defined network (SDN)-based framework for future smart cities has been proposed and discussed. It also comprises a distributed addressing scheme to facilitate the allocation of addresses to devices in the smart city dynamically. The framework is dynamic: modules can be added or omitted by a centralized controlling unit without disturbing the other components of the framework, and other modules may be updated accordingly. In the proposed addressing scheme, a new Internet of Things (IoT) device will receive an IP address from one of its existing neighboring peer devices. This allows devices in the city to act as proxies and generate a set of unique IP addresses from their own IP addresses, which can then be assigned to new (joining) devices; hence, reducing addressing overhead and latency, as well as avoiding the need to send broadcast messages during the address allocation process. Thus, it achieves considerable bandwidth and energy savings for the IoT devices.
\section{Introduction}\label{Intro}
It has been estimated that approximately 65\% of the world's population will eventually live in cities by the year 2040 \cite{b1}. There has been a trend of making cities smarter, for example by leveraging existing and emerging technologies such as the Internet of Things (IoT). The latter can be broadly defined as a (heterogeneous) network of physical Internet-connected devices, such as smart vehicles, smart home appliances, and other devices with embedded software or hardware (e.g., sensors), that can be used to connect, sense / collect, and disseminate / exchange large volumes of data. This also allows us to offer advanced services that can be used to improve the quality of service delivery and of life.
The increasing trend of smart cities is partly due to the lowering of technological and cost barriers in deploying communication networks (e.g. wireless and 5G) in a broad range of settings, such as residential and commercial buildings, utility networks, transportation networks, and those in the critical infrastructure sectors \cite{b2,b3}. In such settings, it is clear that data plays a key role, for example in informing decision- and strategy-making. Such data can be collected by the broad range of IoT devices and networks, and can be compiled and analyzed to achieve improved service delivery in healthcare, manufacturing, utility, supply chain and many other services. However, there exist a number of challenges in dealing with such data, due to the volume, variety, velocity, and veracity (also commonly referred to as the four V's of big data). For example, the management and performance optimization of IoT-based smart cities and the programmability of things can be extremely complex, and the inter-connectivity can introduce security implications. Therefore, how to ensure that the underpinning communication infrastructure in the smart city is scalable, reliable, secure and efficient can be challenging, both operationally and research-wise.
Emerging software-defined networking (SDN) decouples the control plane from the data plane, enabling the control plane to become directly programmable and the underlying infrastructure to be abstracted for the applications and the network services. The SDN controller, also called the network operating system (NOS), is logically centralized and responsible for controlling, managing, and dynamically configuring the devices at the data plane of the network. It is effective in making dynamic decisions about routing, quality of service (QoS) and load balancing. It is easy to add new network functionalities through application programs due to the programmability of the SDN controller. Moreover, SDN enhances the network performance by providing security and network virtualization features. The SDN controller is capable of monitoring all the nodes and their traffic, and of eliminating an attacker node from the network on the fly by writing effective flow rules on the switches at the data plane \cite{chapter-crc}.
\noindent \textbf{Motivation:}
Each device in the infrastructure should have a unique address by which it can be identified. This unique address enables unicast communication and routing between devices in the infrastructure. However, as more IoT devices are introduced in the smart city, the demand for these unique addresses increases rapidly. Manual configuration of IoT devices is in most cases inapplicable and error-prone due to the large size of the network. Further, centralized Dynamic Host Configuration Protocol (DHCP) \cite{b9} is not a suitable solution as the server has to maintain configuration information of all the nodes in the network.
\begin{figure}[!ht]
\centering
\includegraphics[height=0.48\textwidth]{Chapters/Chap1/figures/DAD.png}
\caption{Duplicate Address Detection (DAD) mechanism}
\label{dad}
\end{figure}
The Duplicate Address Detection (DAD) mechanism \cite{b10} can be used to resolve address conflicts in the smart city. In DAD, a joining node chooses a tentative IP address randomly and verifies whether this address is available for use. In order to verify the uniqueness of the address, the joining node floods a Duplicate Address Probe (DAP) message throughout the smart city and starts a timer to receive an Address Conflict Notice (ACN) message from the network. If no ACN message is received, the joining node concludes that the tentative address is free to use and configures itself with the address permanently. It has to run the DAD process again in case it receives an ACN message from the network. The addressing overhead of the DAD mechanism is very high as it needs to flood a message throughout the network. Further, the broadcast storm problem \cite{b11} can arise in DAD. Figure \ref{dad} shows the DAD mechanism where a new node tries to join the network.
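The control flow of DAD can be summarized by the following Python-style sketch (an illustration only: flooding and the timer are abstracted into the function argument \texttt{flood\_dap}, which is assumed to return \texttt{True} iff an ACN arrives before the timer expires):
\begin{verbatim}
import random

def dad_configure(flood_dap, timeout, address_space=2**16):
    """Sketch of DAD: pick a random tentative address, flood a DAP
    message and wait for an Address Conflict Notice (ACN);
    retry the whole process whenever a conflict is reported."""
    while True:
        tentative = random.randrange(address_space)
        acn_received = flood_dap(tentative, timeout)
        if not acn_received:
            return tentative   # address is free: configure permanently
        # else: ACN received, run the DAD process again
\end{verbatim}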
\textbf{Contribution:} It can be seen from the above discussion that there is a need to design a distributed addressing scheme to efficiently handle the ever-increasing addressing requirements in SDN-IoT-based smart city networks. Further, the addressing scheme should assign unique IP addresses to the devices of the network for correct routing and unicast communication. Furthermore, the scheme needs to be scalable and should not degrade in performance with respect to addressing overhead when the network size is very large, as in a smart city. This Chapter has two significant contributions:
\begin{itemize}
\item firstly, an SDN-based IoT framework for a smart city architecture,
\item and secondly, a distributed addressing scheme to efficiently assign a unique IPv6 address to each device in the proposed smart city framework.
\end{itemize}
With this Chapter, readers can gain a more thorough understanding of the architectures of SDN, IoT, and SDN-IoT-based smart cities. It further proposes an IPv6 addressing mechanism to allocate a unique address to each IoT device in an SDN-IoT-based smart city.
\noindent \textbf{Chapter Organization:} The rest of the Chapter is organized as follows: Section \ref{background} presents a background of software-defined networking (SDN), Internet of Things (IoT) and IPv6 addressing. Section \ref{rel} discusses state-of-the-art literature on SDN-IoT based networks and also address allocation techniques in various wireless networks. The proposed framework for SDN-IoT-based smart city with an addressing scheme is presented in Section \ref{proto}. Finally, Section \ref{conclu} concludes the Chapter.
\section{Background}\label{background}
In this section, we give an overview of basic preliminary concepts of Software-defined Networking (SDN), Internet of Things (IoT) and IPv6 addressing.
\begin{figure}[ht]
\centering
\includegraphics[height=0.5\textwidth]{Chapters/Chap1/figures/SDN-Arch.png}
\caption{A layering architecture of SDN}
\label{SDN-arch}
\end{figure}
\subsection{An Overview of SDN}
This Section presents an overview of SDN architecture and its working principles. It also presents the need of SDN and how SDN is different as compared to the traditional networking. Figure \ref{SDN-arch} presents the major elements, planes (layers) and interfaces between layers of SDN architecture. It has three planes: data plane, control plane and application plane.
\begin{figure}[!ht]
\centering
\includegraphics[height=0.5\textwidth]{Chapters/Chap1/figures/wp.png}
\caption{Working Principles of SDN}
\label{working-principle}
\end{figure}
\textbf{Data Plane:} The first plane in the SDN architecture is the data plane (also known as the infrastructure plane), which consists of hosts and traffic forwarding devices. These traffic forwarding devices are known as OpenFlow (OF) switches. These switches are dumb: they are able to forward the data from a source host to a destination host only after receiving the instructions (flow rules) from the SDN control layer.
\textbf{Control Plane:} The second plane in the SDN architecture is the control plane, which may comprise a single SDN controller or a set of SDN controllers. The SDN controller (also called the network operating system (NOS)) is a programmable logical entity (software programs). It is logically centralized; hence it can track the network topology (a global view of the network) and the statistics of the network traffic periodically. Further, the SDN controller is responsible for controlling, managing, and dynamically configuring the devices at the data plane of the network. It efficiently provides routing, quality of service (QoS) and security, and also balances the load in the network.
\textbf{Application Plane:} The third and final plane in the SDN architecture is the application plane. This plane runs application programs and uses an application programming interface (API) to control the network resources through the SDN controller. These application programs periodically collect information from the SDN controller and provide services (e.g., routing, quality of service (QoS) and load balancing). This plane also provides a programming interface to the network administrator for developing applications according to the requirements of the network. For instance, an application can be built to monitor all the devices and their traffic periodically in order to detect misbehaving devices in the network.
The northbound application programming interface (API) defines the connection between the application plane and the control plane, whereas the southbound API defines the connection between the control plane and the data plane. The OpenFlow (OF) protocol has been widely used as the southbound API. The SDN controller uses the OpenFlow protocol to send the flow rules to the OpenFlow switches in the data plane. The OpenFlow protocol uses the secure sockets layer (SSL) and TCP to provide secure communication and reliable delivery of data between the controller and the OF switches respectively.
\begin{figure}[!ht]
\centering
\includegraphics[height=0.5\textwidth]{Chapters/Chap1/figures/LLDP.png}
\caption{Topology detection using LLDP}
\label{LLDP}
\end{figure}
The working principle of SDN is presented in Figure \ref{working-principle}. A device H1 (source) sends the packets of a flow to another device H2 (destination) through OF switches S3-S2-S1 in an SDN-based network \cite{chapter-crc}. Here, the SDN controller detects the topology of the network using the link layer discovery protocol (LLDP), as shown in Figure \ref{LLDP}. Thus, it knows the global topology of the network and is responsible for the routing between the devices.
\begin{figure}[!ht]
\centering
\includegraphics[height=0.5\textwidth]{Chapters/Chap1/figures/IoT-basic.png}
\caption{Internet of things (IoT)}
\label{IoT-basic}
\end{figure}
\subsection{An Overview of IoT and Smart Cities}
\noindent \textbf{The Internet of Things (IoT)}: An IoT is a heterogeneous network of physical objects (things) that are embedded with electronics, sensors, software, actuators, RFID tags, and other technologies for connecting and communicating a large amount of data with other devices and networks over the Internet to offer a new class of services at anytime, anywhere and for anyone. It can form a large network by combining wired networks and different types of wireless networks such as wireless sensor networks (WSNs), ZigBee, WiFi, mobile ad hoc networks (MANETs), and RFID. IoT can be applied to make the physical infrastructures more smart, secure and reliable, and fully automated systems. These physical infrastructures include buildings (homes, schools, offices, factories, etc.), utility networks (gas, electricity, water, etc.), healthcare systems, transportation vehicles (cars, rails, planes, etc.), transportation networks (roads, railways, airports, harbors, etc.), and information technology networks, etc. IoT collects, stores, and exchanges a large volume of heterogeneous data from various types of networks and provides critical services in smart homes and buildings, healthcare systems, transportation networks, utility networks, industrial control and monitoring systems, and so on \cite{chapter-crc,b12c,b12a,b12b}.
\begin{figure}[!ht]
\centering
\includegraphics[height=0.5\textwidth]{Chapters/Chap1/figures/IoT-layers.png}
\caption{The three-layered architecture of IoT}
\label{IoT-layers}
\end{figure}
Figure \ref{IoT-layers} shows the layering architecture of IoT. It comprises three main layers: the sensing layer, the network layer and the application layer. The sensing layer, also known as the perception layer, consists of physical objects and sensing devices. This layer is responsible for sensing and collecting the data from the physical objects. The network layer bridges the sensing layer and the application layer. It carries the data collected from the physical objects through sensors; the transmission network can be wireless or wired. Thus, the network layer is responsible for connecting the smart things, network devices and networks to each other and also for transmitting the data from the physical objects to the gateway of the network. The application layer is responsible for providing services to the users based on their demands and applications. The applications of IoT can be smart homes and buildings, smart grids, smart health, smart cities, etc.
\begin{figure}[!ht]
\centering
\includegraphics[height=0.5\textwidth]{Chapters/Chap1/figures/SmartCity.png}
\caption{Overview of Smart city components}
\label{overview}
\end{figure}
\par \noindent \textbf{Smart City}: A smart city is an urban area that uses different types of IoT devices to collect, process and analyze the data for monitoring and managing traffic and transportation systems, utilities, power grids, waste management, water supply networks, schools, libraries, hospitals, security and surveillance systems, and other community services. It helps city officials to interact directly with both community and city infrastructure and also to monitor and manage the city resources efficiently and smartly. The main components of a smart city are depicted in Figure \ref{overview}.
\subsection{An Overview of IPv6 Addressing}
\begin{figure}[ht]
\centering
\includegraphics[height=0.5\textwidth]{Chapters/Chap1/figures/IPv4.png}
\caption{IP version 4 (IPv4) Header Format}
\label{IPv4}
\end{figure}
Internet protocol version 4 (IPv4) is the most widely deployed IP used to connect devices to the Internet. IPv4 addresses are 32 bits long and can be used to address a total of $2^{32}$ devices (over 4 billion devices) uniquely. However, with the growth of the Internet and IoT, it can be expected that the pool of IPv4 addresses will eventually run out, as each device that connects to the Internet and IoT requires an IP address. A new IP addressing system, Internet Protocol version 6 (IPv6), is being deployed to fulfill the need for more IP addresses. IPv6 addresses are 128 bits long and can be used to address a total of $3.4 \times 10^{38}$ devices uniquely. Further, IPv6 supports auto-configuration and provides better quality of service (QoS), mobility and security compared to IPv4. Figure \ref{IPv4} and Figure \ref{IPv6} present the headers of IP version 4 and IP version 6 respectively.
\begin{figure}[ht]
\centering
\includegraphics[height=0.5\textwidth]{Chapters/Chap1/figures/IPv6.png}
\caption{IP version 6 (IPv6) Header Format}
\label{IPv6}
\end{figure}
\noindent \textbf{IPv6 Address Representation:} An IPv6 address is represented as eight groups of four hexadecimal digits, where each group represents 16 bits. These groups are separated by colons (:). An example of an IPv6 address is:
\begin{center} 2031:0000:130f:0000:0000:09c0:876a:130b \end{center}
Leading zeroes in a group are optional and can be omitted. One or more consecutive groups containing zeros can be replaced by double colons (::), but only once per address. Therefore, the example address can be written as:
\begin{center} 2031:0:130f::9c0:876a:130b \end{center}
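These representation rules are implemented by Python's standard \texttt{ipaddress} module, which can be used to double-check the example above:
\begin{verbatim}
import ipaddress

addr = ipaddress.ip_address('2031:0000:130f:0000:0000:09c0:876a:130b')
print(addr.compressed)  # 2031:0:130f::9c0:876a:130b
print(addr.exploded)    # 2031:0000:130f:0000:0000:09c0:876a:130b
\end{verbatim}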
\noindent \textbf{IPv6 Header Format:} The header format of IPv6 is shown in Figure \ref{IPv6}. Here, the fields of IPv6 header have been discussed briefly:
\noindent \textit{Version:} This field indicates the version of the Internet Protocol; for IPv6 it contains the bit sequence 0110.
\noindent \textit{Traffic class:} This field indicates the class or priority of IPv6 traffic and is similar to the Type of Service field in the IPv4 header. The router discards the lowest-priority packets if congestion occurs in the network.
\noindent \textit{Flow label:} The source node uses the flow label field to label the packets belonging to the same flow in order to request special handling (for example, quality of service or real-time service) by intermediate IPv6 routers. It also specifies the lifetime of the flow.
\noindent \textit{Payload length:} This field indicates the total size of the payload, including extension headers (if any) and upper-layer data.
\noindent \textit{Next header:} This field is used to indicate the type of extension header (if any) immediately following the IPv6 header. It also specifies the upper-layer protocol (UDP, TCP) in some cases.
\noindent \textit{Hop limit:} This field is the same as the time-to-live (TTL) field in the IPv4 header. It specifies the maximum number of routers an IPv6 packet can traverse. The value of the hop limit is decremented by one by each router that forwards the packet, and the router discards the packet if the value of the hop limit reaches 0. This field prevents the packet from circulating indefinitely in the network.
\noindent \textit{Source address:} This field specifies the IPv6 address of the original source of the packet.
\noindent \textit{Destination address:} This field indicates the IPv6 address of the final destination. In order to correctly route the packet, the intermediate routers use the destination address of the packet.
\noindent \textit{Extension header:} Extension headers have been introduced to allow the incorporation and usage of several options whenever needed. The IPv6 main header is 40 bytes long. The Next Header field of the IPv6 main header points to the first extension header, the first extension header points to the second extension header, and so on.
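As a small self-contained illustration of the fixed 40-byte layout described above (not part of the proposed scheme), the main header can be unpacked in Python as follows:
\begin{verbatim}
import struct

def parse_ipv6_header(packet: bytes) -> dict:
    """Unpack the fixed 40-byte IPv6 main header."""
    vtf, payload_len, next_hdr, hop_limit = struct.unpack('!IHBB',
                                                          packet[:8])
    return {'version':        vtf >> 28,          # 4 bits (0110 = 6)
            'traffic_class': (vtf >> 20) & 0xFF,  # 8 bits
            'flow_label':     vtf & 0xFFFFF,      # 20 bits
            'payload_length': payload_len,
            'next_header':    next_hdr,
            'hop_limit':      hop_limit,
            'source':         packet[8:24],       # 128-bit address
            'destination':    packet[24:40]}      # 128-bit address
\end{verbatim}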
\section{Related Works}\label{rel}
A number of different approaches have been explored in the literature, including the use of software-defined networking (SDN). For example, there have been attempts to integrate SDN and IoT technologies into the heterogeneous communication infrastructure in smart cities \cite{b4,b5,b6,b7}, by say utilizing SDN to manage and determine the correctness of network operations at run-time. This is because we can leverage the globalized view and the programmability features available in the SDN controller to control, configure, monitor and detect faults and mitigate abnormal operation(s) in the underpinning infrastructure; hence, allowing us to achieve efficiency and reliability.
\par Mavani et al. have carried out several works on secure addressing and privacy-preserving methodologies for the IoT and mobile environment paradigm \cite{b13,b14,b15}. In IoT, billions of devices can be addressed using the IPv6 addressing scheme. Attackers can spoof addresses from unsecured wireless communication channels and advertise them as a legitimate device, and malicious users can track the activity of these devices by spoofing IPv6 addresses. To mitigate this type of attack by hiding the IPv6 address from the attacker, they proposed a secure privacy-preserving method \cite{b13}, which changes the IPv6 address of each device periodically and pseudorandomly in order to hide its identity. They analyzed the method using the Cooja simulator to show that it does not inflict much overhead for the random changing of addresses and reconfiguration. In \cite{b14, b15}, they investigated the use of secure addressing and privacy mechanisms for IPv6 over Low-Power Wireless Personal Area Networks (6LoWPAN) and designed a method to provide resilience against address spoofing and better reconfiguration time after attack disruption. They showed the efficacy of their proposal by time complexity analysis and simulation with benchmark data; overall the method does not impose much overhead to provide resilience against address spoofing.
\par Brilli et al. proposed a secure, privacy-aware two-layer addressing scheme for 6LoWPAN wireless networks in order to improve security and privacy and to reduce the chance of spoofing by hiding the traceability of the user \cite{b16}. With minimal overhead and using standard 6LoWPAN messages, security and privacy are ensured in an energy-constrained environment.
Wang et al. proposed a long-thin and tree-based topology in addressing-based routing optimization for vehicular scenarios (AROV) \cite{b17} to provide unique addresses to sensor nodes in 6LoWPAN wireless sensor networks, using the concept of a super node that serves as address initiator for all of its neighbor nodes. They have shown that it mitigates address failure and also improves routing performance by reducing latency. The authors also proposed location-aware addressing for 6LoWPAN wireless sensor networks \cite{b18}. In this addressing scheme a node can obtain a globally unique address without using duplicate address detection. The address initialization is done zone-wise, where zones are independent of one another; this parallel address initialization therefore takes less time. Wang et al. further proposed a stateful address configuration mechanism for wireless body area networks \cite{b19}. The uniqueness of the address is maintained without duplicate address detection, and automatic reclamation of unused or released addresses is done without any extra overhead. Using simulation, they have shown the efficacy of the mechanism in reducing the address configuration delay and cost.
For heterogeneous wireless networks, a dynamic Internet Protocol (IP) address assignment architecture \cite{b20} has been proposed by Khair et al. The addressing mechanism introduces security and service reliability with a reduced OPEX. However, this scheme does not perform well in heterogeneous heavy-traffic scenarios as it incurs significant overhead.
Li et al. presented an address configuration algorithm for network merging in ad hoc network scenarios \cite{b21}. By restricting new address generation to only the duplicate addresses that arise during merging, their scheme significantly improves network performance.
In \cite{b22}, an IP-based vehicular content-centric networking framework has been proposed by Wang et al., employing the unicast address-centric paradigm to achieve content acquisition and avoiding broadcast-centric communication. Using unicast communication, they have shown that the framework substantially reduces the content acquisition cost and achieves a better content acquisition success rate.
In \cite{b24}, El-Shekeil et al. investigated several conflict scenarios of using private IP addresses for enterprise networks. They formulated the problem of minimizing the routing table sizes, which is NP-hard, and devised effective heuristics to solve it. To prove the efficacy of their approach, they provided empirical results which showed a significant reduction in the number of subnet entries and the routing table sizes.
A Mobile Ad-hoc Network (MANET) is a collection of mobile nodes forming a dynamic self-configured network. It has no fixed, pre-established infrastructure and no centralized administration or base stations. MANETs can be integrated with IoT to implement smart cities. Therefore, IP addressing is a very important and challenging issue for a MANET, as it is an infrastructure-less and highly dynamic network. In light of this, Ghosh et al. proposed IPv6-based and IPv4-based secure distributed dynamic address allocation protocols \cite{b24b, b24c, b25, b25a, b25b, b26}. In these protocols, a new node gets an IP address from its neighbors acting as proxies, and itself becomes a proxy once it receives an IP address from the network. Further, these protocols can handle network events such as network partitioning and merging without using complex duplicate address detection mechanisms.
Akhtar et al. proposed a congestion avoidance algorithm \cite{b26a} for IoT-MANETs which uses bandwidth as the main component to find the optimal route. Their bandwidth-aware routing scheme (BARS) avoids congestion by monitoring the residual bandwidth capacity of network paths and using feedback about the residual bandwidth, significantly improving network parameters such as latency, end-to-end delay and packet delivery ratio for both static and dynamic network topologies. A secure SDN-based framework for content-centric applications has been devised by Ghosh et al.: in \cite{b27}, a secure multi-path routing protocol is designed which significantly improves network performance. This work is feasible to incorporate into futuristic smart cities. Ghosh et al. also proposed an SDN-based secure framework for smart energy delivery systems \cite{b28} for smart cities, which addresses a number of fault injection and controller failure scenarios as well. In \cite{b29}, Alnumay et al. designed and developed a trust-based system for securing IoT applications using a predictive ARMA/GARCH(1,1) model, which significantly improves network functionality in smart city scenarios.
\section{The Proposed SDN-IoT-based Smart City Framework}\label{proto}
Here, we propose our SDN-IoT based smart city framework, which is configured, controlled, and managed by a global control center as shown in Figure \ref{Smartcity}. The proposed framework supports heterogeneous networks and contains different types of networks including ZigBee, mobile ad-hoc networks (MANETs), sensor networks and Bluetooth.
\begin{figure}[!ht]
\centering
\includegraphics[height=0.5\textwidth]{Chapters/Chap1/figures/Smart_City.png}
\caption{An SDN-IoT-based smart city framework}
\label{Smartcity}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[height=0.6\textwidth]{Chapters/Chap1/figures/Frame.png}
\caption{An SDN-IoT-based layered smart city framework}
\label{arch}
\end{figure}
We also present an SDN-IoT-based layered smart city framework in Figure \ref{arch}. Our proposed architecture has three layers, described as follows. The first layer is the infrastructure layer, which consists of the IoT devices sublayer and the forwarding devices sublayer. The IoT devices sublayer contains different types of wireless devices (e.g. ZigBee, sensors, and Bluetooth) to create different types of IoT application domains. These wireless devices collect a large volume of data from the networks and send it to the global smart city control center for further processing. The IoT devices sublayer also contains actuators to receive control commands from the global control center and execute them. The forwarding devices sublayer consists of OpenFlow (OF) gateways, which facilitate the forwarding of control and data packets to the global control center. The control layer contains a global SDN controller and a number of local SDN controllers. The global SDN controller is mainly responsible for controlling and monitoring communications between the global control center and the IoT application domains and between one application domain and the others, while a local SDN controller controls and monitors the communication between devices inside an application domain. The application layer provides IoT services (e.g. smart homes, smart grids, and smart transportation) using the SDN controllers. It further provides network services such as routing, security and quality of service (QoS) in the city.
\subsection{The Proposed Addressing Scheme}
Here, we discuss our proposed IPv6 addressing scheme that is designed to provide unique addresses to IoT devices in the infrastructure. Using the proposed addressing scheme, unique IP addresses can be generated from the IP address of an existing device in the city (network), which can then be provided to new / joining IoT devices. In other words, without the need to broadcast any message over the entire city, any new / joining IoT device can acquire an IP address from its peers / neighboring devices. This concept is adopted from \cite{b8}.
Here, we discuss the algorithm given in Function
ip-generation that generates unique IPv6 addresses
for new IoT devices joining the network. As discussed, an IPv6 address comprises eight (8) groups of four (4) hexadecimal (\textsl{HEX})
digits, which are separated by colons (for
example, 2031:0000:130f:0000:0000:09c0:876a:130b). The IPv6 address is logically divided into two parts: a 64-bit network prefix and a 64-bit interface identifier. For ease of presentation, we express the address in 16-byte dotted decimal (\textsl{DEC}) format: ($b_{15}.b_{14}.b_{13}.b_{12}.b_{11}.b_{10}.b_9.b_8.b_7.b_6.b_5.b_4.b_3.b_2.b_1.b_0$)$_{DEC}$ wherein $b_{15}.b_{14}.b_{13}.b_{12}.b_{11}.b_{10}.b_9.b_8$ and $b_7.b_6.b_5.b_4.b_3.b_2.b_1.b_0$ are the network prefix (which is fixed for a network domain) and the device identifier respectively.
\begin{function}
\scriptsize{
getmyip $\leftarrow (b_{15}.b_{14}.b_{13}.b_{12}.b_{11}.b_{10}.b_9.b_8.b_7.b_6.b_5.b_4.b_3.b_2.b_1.b_0)_{DEC}$\;
Set $static \ count \leftarrow 0$, $count1 \leftarrow 1$,$j \leftarrow 0$, $i \leftarrow 0$\;
$count \leftarrow (count + 1)$;
$j \leftarrow count$\;
\par \If {$b_7 == 0 \ and \ b_0 == 1$} { $\rhd$ local SDN controller\
\If {$j \leq 255$} {
$IP_N \leftarrow b_{15}.b_{14}.b_{13}.b_{12}.b_{11}.b_{10}.b_9.b_8.j.b_6.b_5.b_4.b_3.b_2.b_1.b_0$\;
}
\Else {
$count1 \leftarrow (count1 + 1)$;
$i \leftarrow count1$\;
\If {$i \leq 255$} {
$IP_N \leftarrow b_{15}.b_{14}.b_{13}.b_{12}.b_{11}.b_{10}.b_9.b_8.b_7.b_6.b_5.b_4.b_3.b_2.b_1.i$\;
}
}
}
\Else { $\rhd$ Other IoT devices acting as proxies\\
\If {$j \leq 255$} {
\uIf {$b_7 == 0 \ and \ b_0 \neq 1$} {
$IP_N \leftarrow b_{15}.b_{14}.b_{13}.b_{12}.b_{11}.b_{10}.b_9.b_8.j.b_6.b_5.b_4.b_3.b_2.b_1.b_0$\;
}
\uElseIf {$b_7 \neq 0 \ and \ b_6 == 0$} {
$IP_N \leftarrow b_{15}.b_{14}.b_{13}.b_{12}.b_{11}.b_{10}.b_9.b_8.b_7.j.b_5.b_4.b_3.b_2.b_1.b_0$\;
}
\uElseIf {$b_6 \neq 0 \ and \ b_5 == 0$} {
$IP_N \leftarrow b_{15}.b_{14}.b_{13}.b_{12}.b_{11}.b_{10}.b_9.b_8.b_7.b_6.j.b_4.b_3.b_2.b_1.b_0$\;
}
\uElseIf {$b_5 \neq 0 \ and \ b_4 == 0$} {
$IP_N \leftarrow b_{15}.b_{14}.b_{13}.b_{12}.b_{11}.b_{10}.b_9.b_8.b_7.b_6.b_5.j.b_3.b_2.b_1.b_0$\;
}
\uElseIf {$b_4 \neq 0 \ and \ b_3 == 0$} {
$IP_N \leftarrow b_{15}.b_{14}.b_{13}.b_{12}.b_{11}.b_{10}.b_9.b_8.b_7.b_6.b_5.b_4.j.b_2.b_1.b_0$\;
}
\uElseIf {$b_3 \neq 0 \ and \ b_2 == 0$} {
$IP_N \leftarrow b_{15}.b_{14}.b_{13}.b_{12}.b_{11}.b_{10}.b_9.b_8.b_7.b_6.b_5.b_4.b_3.j.b_1.b_0$\;
}
\ElseIf {$b_2 \neq 0 \ and \ b_1 == 0$} {
\uIf {$b_2 == 255 \ and \ b_0 == 255 \ and \ j == 255$} {
$b_0 = 254$\;
$IP_N \leftarrow b_{15}.b_{14}.b_{13}.b_{12}.b_{11}.b_{10}.b_9.b_8.b_7.b_6.b_5.b_4.b_3.b_2.j.b_0$\;
}
\Else {
$IP_N \leftarrow b_{15}.b_{14}.b_{13}.b_{12}.b_{11}.b_{10}.b_9.b_8.b_7.b_6.b_5.b_4.b_3.b_2.j.b_0$\;
}
}
}
}
\Return $(IP_N)_{HEX}$\;
}\caption{ip-generation()}\label{GENERATEUNIQUEIP}
\end{function}
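To make the branch structure of Function ip-generation easier to read, the following plain-Python transcription (names are ours; counter exhaustion, the controller's second range $b_0=2,\dots,255$ and the all-255 special case are omitted for brevity) writes the proxy's counter into the leftmost zero byte among $b_7,\dots,b_1$ of the proxy's own identifier:
\begin{verbatim}
def next_child_id(proxy_id, counter):
    """Sketch of ip-generation: derive a new 8-byte device identifier
    (b7..b0 in dotted decimal) by writing the proxy's counter into the
    leftmost zero byte among b7..b1 of the proxy's own identifier."""
    assert 1 <= counter <= 255
    child = list(proxy_id)
    for pos in range(7):            # scan b7, b6, ..., b1
        if child[pos] == 0:
            child[pos] = counter
            return tuple(child)
    raise ValueError("no free byte: request addresses from the parent")

ctrl = (0, 0, 0, 0, 0, 0, 0, 1)     # local SDN controller
print(next_child_id(ctrl, 1))       # (1, 0, 0, 0, 0, 0, 0, 1)
print(next_child_id((1, 0, 0, 0, 0, 0, 0, 1), 1))
                                    # (1, 1, 0, 0, 0, 0, 0, 1)
\end{verbatim}
Because the byte position a proxy writes into is determined by its own identifier, the ranges generated by different proxies are disjoint by construction, which is why no duplicate address detection is required.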
We assume that the global SDN controller runs an addressing application to configure all the local SDN controllers in the many different IoT application domains. Each local SDN controller also runs the proposed addressing application to configure any SDN and IoT devices in its domain. We further assume that a local SDN controller is configured with an IP address, say CEDF:0CB8:8BA3:8A2E::0001, by the global SDN controller. In our context, CEDF:0CB8:8BA3:8A2E is the network domain and 0000:0000:0000:0001 is the identifier of the local SDN controller. The local SDN controller can assign the network prefix CEDF:0CB8:8BA3:8A2E and the device identifiers ranging from 1.0.0.0.0.0.0.1 to 255.0.0.0.0.0.0.1 and from 0.0.0.0.0.0.0.2 to 0.0.0.0.0.0.0.255 to IoT devices in the domain.
In our example, the IoT device that has host identifier 0.0.0.0.0.0.0.2 and a proxy with host identifier 0.0.0.0.0.0.0.255 can allocate addresses from 0.0.0.0.0.0.1.2 to 0.0.0.0.0.0.255.2 and addresses from 0.0.0.0.0.0.1.255 to 0.0.0.0.0.0.255.255 in the dotted decimal format (\textsl{DEC}), respectively. Therefore, one can easily see that a node with host identifier 0.255.255.255.255.255.255.255 can assign addresses in the range between 1.255.255.255.255.255.255.255 and 255.255.255.255.255.255.255.254, with a network prefix of CEDF:0CB8:8BA3:8A2E.
\begin{figure}[!ht]
\centering
\includegraphics[height=0.6\textwidth, width=0.7\textwidth]{Chapters/Chap1/figures/address-allocate.png}
\caption{Address allocation tree in the SDN-IoT based smart city: A simplified example}
\label{Alloctree}
\end{figure}
Figure \ref{Alloctree} describes a simple example of how a peer or neighboring IoT device can allocate unique addresses (i.e. act as a \emph{proxy}), where the last byte ($b_0$) of an IP address is presented within the circle and the remaining bytes ($b_7, b_6, b_5, b_4, b_3, b_2, b_1$) outside the circle. In the event that a proxy (i.e. an IoT device) does not have an available IP address for nodes that have just joined the infrastructure, this proxy will need to request new IP address(es) from its parent device. Similarly, in the unlikely event that the parent device does not have any available IP address for allocation, a similar request will be made to the parent of this particular parent device. This allows the network to scale easily. Thus, in our proposed addressing scheme, addresses can be uniquely allocated from $b_{15}.b_{14}.b_{13}.b_{12}.b_{11}.b_{10}.b_9.b_8.0.0.0.0.0.0.0.1$ to $b_{15}.b_{14}.b_{13}.b_{12}.b_{11}.b_{10}.b_9.b_8.255.255.255.255.255.255.255.254$ in the network.
We also remark that in our proposed addressing scheme, the \emph{allocation status} is maintained by the individual device. Such a status records the last assigned address (i.e. the \emph{count} value), to prevent proxy devices from generating the same IP address twice. This allows us to avoid the need for a complex duplicate address detection mechanism during the process of address resolution. Further, a new device obtains an IP address from its neighbor; therefore, the proposed scheme has minimal addressing overhead and latency.
\begin{table}
\center
\caption{Comparison of Address Allocation Approaches in Smart Cities}
\label{comparison1}
\begin{tabular}{@{}ccccccc@{}}
\hline
{Scheme} &{IP} &{Uniqueness} &{Addressing} &{Addressing} &{Scalability} &{Complexity}\\
{} & {Family} & & {Latency} &{Overhead} & & \\
\hline
{DHCP} & IPv4, IPv6 &{Yes} &{$\textsl{O(2*t*d)}$} &\textsl{O($n^{2}$)} &{Low} &{Low} \\
\hline
{DAD} & IPv4, IPv6 &{No} &{$\textsl{O(2*t*d)}$} &\textsl{O($n^{2}$)} &{Low} &{Medium} \\
\hline
{Proposed} &IPv6 &{Yes} &{$\textsl{O(2*t)}$} &\textsl{O($2*l/n$)} &{High} &{Low} \\
\hline
\end{tabular}
\end{table}
\subsection{Performance Evaluation} Table \ref{comparison1} compares the proposed address allocation scheme with the traditional DHCP and DAD schemes. Here, $n$ is the total number of IoT devices, $l$ the average number of links between devices, $d$ the network diameter and $t$ the average 1-hop latency. We consider the following parameters to analyze the performance of our proposed addressing scheme alongside the DHCP and DAD schemes:
\noindent \textit{Uniqueness:} The most important metric of an address allocation scheme is the guarantee of uniqueness of the allocated addresses. A unique address is needed to identify each device unambiguously and to enable unicast communication and routing in a smart city. DAD does not guarantee the uniqueness of the allocated address, whereas the proposed scheme and DHCP provide unique address allocation to each IoT device.
\noindent \textit{Addressing Latency:} This parameter is the time difference between the point when a new device sends the request for an address and the point when it receives the address from the network. In DHCP, the new device needs to discover the DHCP server, so an address request message is flooded in the whole network; the DHCP server sends the address to the new device in response. Therefore, the addressing latency of DHCP is O($2*t*d$). In DAD, the new device floods an address request message in the whole network and sets a timer based on the diameter of the network for receiving the address reply message; it configures itself when the timer expires. Thus, the addressing latency of DAD is also O($2*t*d$). In our proposed addressing scheme, by contrast, the new device acquires an address from a neighbour, so the addressing latency is O($2*t$).
\noindent \textit{Addressing Overhead:} The addressing overhead of an addressing protocol refers to the average number of messages required for an address allocation to a new device. In DHCP, the new device floods a message throughout the smart city to discover the DHCP server; therefore, the addressing overhead of DHCP is O($n^{2}$). In DAD, the new device randomly picks a temporary address and floods a message in the whole smart city network; therefore, the addressing overhead of DAD is also considered to be O($n^{2}$). In our proposed scheme, the new device obtains an address from one of its neighbours, thus the addressing overhead is O($2*l/n$).
\noindent \textit{Scalability:} The scalability of an addressing scheme is considered high if the scheme does not degrade much in performance with respect to addressing latency and overhead even when the size of the network is large. The addressing overhead and the addressing latency of the DHCP and DAD schemes are O($n^{2}$) and O($2*t*d$) respectively; therefore, these schemes have low scalability. The proposed addressing scheme, by contrast, is highly scalable, as its addressing overhead and latency are O($2*l/n$) and O($2*t$) respectively.
\noindent \textit{Complexity:} The addressing scheme should use network resources (e.g., energy and memory of IoT devices, network bandwidth) as sparingly as possible at the time of address allocation. The complexity of the DAD scheme is considered medium as it generates an address from a random number and assigns it to a new device. The proposed addressing scheme has low complexity as it does not need to maintain address blocks or complex functions to generate addresses. In the proposed scheme, the existing devices (already configured with addresses) in the network act as proxies and are capable of generating addresses for new devices. This reduces the complexity and memory requirement of the proposed scheme even further.
\section{Conclusion}\label{conclu}
In this Chapter, we proposed an SDN-IoT-based smart city framework, and a distributed IPv6-based address allocation scheme. In the latter, each device in the city acts as a proxy and is capable of assigning IP addresses to new devices dynamically. We explained how the proposed approach achieves bandwidth and energy savings in IoT devices, as well as having low addressing overhead and latency, since new devices obtain their addresses from their neighbors.
\section{Introduction: motivations and overview of the new results}
\label{sec:intro}
TDA is now expanding towards machine learning and statistics due to the stability of persistence, which was proved in a very general form by Chazal et al. \cite{chazal2016structure}.
The key idea of TDA is to view a given cloud of points across all scales $s$, e.g. by blurring given points to balls of a variable radius $s$.
The resulting evolution of topological shapes is summarized by a persistence diagram.
\medskip
\begin{exa}
\label{exa:5-point_line}
Fig.~\ref{fig:5-point_line} illustrates the key concepts (before formal definitions) for the point set $A = \{0,4,6,9,10\}$ in the real line $\mathbb{R}$.
Imagine that we gradually blur original data points by growing balls of the same radius $s$ around the given points.
The balls of the closest points $9,10$ start overlapping at the scale $s=0.5$ when these points merge into one cluster $\{9,10\}$.
This merger is shown by blue arcs joining at the node at $s=0.5$ in the single-linkage dendrogram, see the bottom left picture in Fig.~\ref{fig:5-point_line} and more details in Definition~\ref{dfn:sl_clustering}.
\begin{figure}[H]
\centering
\begin{tikzpicture}[scale = 1.1]
\draw[->] (-1,0) -- (11,0) node[right]{} ;
\foreach \x/\xtext in {0, 4, 6, 9, 10}
\draw[shift={(\x,0)}] (0pt,2pt) -- (0pt,-2pt) node[below] {$\xtext$};
\filldraw (0,0) circle (2pt);
\filldraw (4,0) circle (2pt);
\filldraw (6,0) circle (2pt);
\filldraw (9,0) circle (2pt);
\filldraw (10,0) circle (2pt);
\end{tikzpicture}
\begin{tikzpicture}[scale = 0.52][sloped]
\draw[style=help lines,step = 1] (-1,0) grid (10.4,4.4);
\draw [->] (-1,0) -- (-1,5) node[above] {scale $s$};
\foreach \i in {0,0.5,...,2}{ \node at (-1.5,2*\i) {\i}; }
\node (a) at (0,-0.3) {0};
\node (b) at (4,-0.3) {4};
\node (c) at (6,-0.3) {6};
\node (d) at (9,-0.3) {9};
\node (e) at (10,-0.3) {10};
\node (x) at (5,5) {};
\node (de) at (9.5,1){};
\node (bc) at (5.0,2){};
\node (bcde) at (8.0,3){};
\node (all) at (5.0,4){};
\draw [line width=0.5mm, blue ] (a) |- (all.center);
\draw [line width=0.5mm, blue ] (b) |- (bc.center);
\draw [line width=0.5mm, blue ] (c) |- (bc.center);
\draw [line width=0.5mm, blue ] (d) |- (de.center);
\draw [line width=0.5mm, blue ] (e) |- (de.center);
\draw [line width=0.5mm, blue ] (de.center) |- (bcde.center);
\draw [line width=0.5mm, blue ] (bc.center) |- (bcde.center);
\draw [line width=0.5mm, blue ] (bcde.center) |- (all.center);
\draw [line width=0.5mm, blue ] [->] (all.center) -> (x.center);
\end{tikzpicture}
\hspace*{1mm}
\begin{tikzpicture}[scale = 1.0]
\draw[style=help lines,step = 0.5] (0,0) grid (0.5,2.4);
\draw[->] (-0.2,0) -- (0.8,0) node[right] {birth};
\draw[->] (0,-0.2) -- (0,2.4) node[above] {};
\draw[-] (0,0) -- (1,1) node[right]{};
\foreach \x/\xtext in {0.5/0.5}
\draw[shift={(\x,0)}] (0pt,2pt) -- (0pt,-2pt) node[below] {$\xtext$};
\foreach \y/\ytext in {0.5/0.5, 1/1, 1.5/1.5, 2.0/2}
\draw[shift={(0,\y)}] (2pt,0pt) -- (-2pt,0pt) node[left] {$\ytext$};
\filldraw [fill=blue] (0,0.5) circle (2pt);
\filldraw [fill=blue] (0.0,1) circle (2pt);
\filldraw [fill=blue] (0,1.5) circle (2pt);
\filldraw [fill=blue] (0,2) circle (2pt);
\filldraw [fill=blue] (0,2.6) circle (2pt);
\end{tikzpicture}
\hspace*{1mm}
\begin{tikzpicture}[scale = 1.0]
\draw[style=help lines,step = 0.5] (0,0) grid (2.4,2.4);
\draw[->] (-0.2,0) -- (2.4,0) node[right] {birth};
\draw[->] (0,-0.2) -- (0,2.4) node[above] {death};
\draw[-] (0,0) -- (2.4,2.4) node[right]{};
\foreach \x/\xtext in {0.5/0.5, 1/1, 1.5/1.5, 2.0/2}
\draw[shift={(\x,0)}] (0pt,2pt) -- (0pt,-2pt) node[below] {$\xtext$};
\foreach \y/\ytext in {0.5/0.5, 1/1, 1.5/1.5, 2.0/2}
\draw[shift={(0,\y)}] (2pt,0pt) -- (-2pt,0pt) node[left] {$\ytext$};
\filldraw [fill=red] (0,0.5) circle (2pt);
\filldraw [fill = red] (0.0,1) circle (2pt);
\filldraw [fill=blue] (0.0,2) circle (2pt);
\filldraw [fill=blue] (0.5,1.5) circle (2pt);
\filldraw [fill=blue] (1.0,1.5) circle (2pt);
\filldraw [fill=blue] (1.5, 2.0) circle (2pt);
\filldraw [fill=blue] (2, 2.6) circle (2pt);
\end{tikzpicture}
\caption{\textbf{Top}: the 5-point cloud $A = \{0,4,6,9,10\}\subset\mathbb{R}$.
\textbf{Bottom} from left to right: single-linkage dendrogram $\Delta_{SL}(A)$ from Definition~\ref{dfn:sl_clustering}, the 0D persistence diagram $\mathrm{PD}$ from Definition~\ref{dfn:persistence_diagram} and the new mergegram $\mathrm{MG}$ from Definition~\ref{dfn:mergegram}, where the red color shows dots of multiplicity 2.}
\label{fig:5-point_line}
\end{figure}
The persistence diagram $\mathrm{PD}$ in the bottom middle picture of Fig.~\ref{fig:5-point_line} represents this merger by the dot $(0,0.5)$ meaning that a singleton cluster of (say) point $9$ was born at the scale $s=0$ and then died later at $s=0.5$ (by merging into another cluster of point 10), see details in Definition~\ref{dfn:sl_clustering}.
When the two clusters $\{4,6\}$ and $\{9,10\}$ merge at $s=1.5$, this event is encoded in the persistence diagram by the single dot $(0,1.5)$, meaning that one cluster, inherited from (say) point 10, was born at $s=0$ and died at $s=1.5$.
\medskip
For the same merger, the new mergegram in the bottom right picture of Fig.~\ref{fig:5-point_line} associates the following two dots.
The dot $(0.5,1.5)$ means that the cluster $\{9,10\}$, which merges at the current scale $s=1.5$, was formed earlier at the smaller scale $s=0.5$.
The dot $(1,1.5)$ means that the other cluster $\{4,6\}$, which also merges at $s=1.5$, was formed at $s=1$.
\medskip
Every arc in the single-linkage dendrogram between nodes at scales $b$ and $d$ contributes one dot $(b,d)$ to the mergegram, e.g. both singleton sets $\{9\}$, $\{10\}$ merging at $s=0.5$ contribute two dots $(0,0.5)$ or one dot of multiplicity 2 shown in red, see Fig.~\ref{fig:5-point_line}.
\end{exa}
Example~\ref{exa:5-point_line} shows that the mergegram $\mathrm{MG}$ retains more geometric information about a set $A$ than the persistence diagram $\mathrm{PD}$.
It turns out that this new intermediate object (larger than $\mathrm{PD}$ and smaller than a full dendrogram) enjoys the stability of persistence, which makes $\mathrm{MG}$ useful for analysing noisy data in all cases when distance-based 0D persistence is used.
\medskip
Here is a summary of the new contributions to Topological Data Analysis.
\smallskip
\noindent
$\bullet$
Definition~\ref{dfn:mergegram} introduces the concept of a mergegram for any dendrogram of clustering.
\smallskip
\noindent
$\bullet$
Theorem~\ref{thm:0D_persistence_mergegram} and Example~\ref{exa:mergegram_stronger} justify that the mergegram of a single-linkage dendrogram is strictly stronger than the 0D persistence of a distance-based filtration of sublevel sets.
\smallskip
\noindent
$\bullet$
Theorem~\ref{thm:stability_mergegram} proves that the mergegram of any single-linkage dendrogram is stable in the bottleneck distance under perturbations of a finite set in the Hausdorff distance.
\smallskip
\noindent
$\bullet$
Theorem~\ref{thm:complexity} shows that the mergegram can be computed in near-linear time.
\section{Related work on hierarchical clustering and deep neural networks}
\label{sec:review}
The aim of clustering is to split a given set of points into clusters such that points within one cluster are more similar to each other than points from different clusters.
\medskip
A clustering problem can be made exact by specifying a distance between given points and restrictions on outputs, e.g. a number of clusters or a cost function to minimize.
\medskip
Hierarchical clustering algorithms output a hierarchy of clusters, i.e. a dendrogram visualising mergers of clusters, as explained later in Definition~\ref{dfn:dendrogram}.
Here we introduce only the simplest single-linkage clustering, which plays a central role in this paper.
\begin{dfn}[single-linkage clustering]
\label{dfn:sl_clustering}
Let $A$ be a finite set in a metric space $X$ with a distance $d:X\times X\to[0,+\infty)$.
Given a distance threshold, which will be called a scale $s$, any points $a,b\in A$ should belong to one \emph{SL cluster} if and only if there is a finite sequence $a=a_1,\dots,a_m=b\in A$ such that any two successive points have a distance at most $2s$, i.e. $d(a_i,a_{i+1})\leq 2s$ for $i=1,\dots,m-1$, so that the closed balls of radius $s$ around successive points overlap.
Let $\Delta_{SL}(A;s)$ denote the collection of SL clusters at the scale $s$.
For $s=0$, any point $a\in A$ forms a singleton cluster $\{a\}$.
Representing each cluster from $\Delta_{SL}(A;s)$ over all $s\geq 0$ by one point, we get the \emph{single-linkage dendrogram} $\Delta_{SL}(A)$ visualizing how clusters merge, see the bottom left picture in Fig.~\ref{fig:5-point_line}.
\hfill $\blacksquare$
\end{dfn}
Another way to visualize SL clusters is to build a Minimum Spanning Tree below.
\begin{dfn}[Minimum Spanning Tree $\mathrm{MST}(A)$]
\label{dfn:mst}
The \emph{Minimum Spanning Tree} $\mathrm{MST}(A)$ of a finite set $A$ in a metric space $X$ with a distance $d$ is a tree (a connected graph without cycles) that has the vertex set $A$ and the minimum total length of edges.
We assume that the length of any edge between vertices $a,b\in A$ is measured as $d(a,b)$.
\hfill $\blacksquare$
\end{dfn}
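\medskip
The two definitions above are easy to make computational.
Below is a minimal Python sketch (ours, assuming NumPy and SciPy; all function names are hypothetical): SL clusters at a scale $s$ are the connected components of the graph joining points at distance at most $2s$, and the sorted half-lengths of the MST edges are the merge scales of $\Delta_{SL}(A)$.
\begin{verbatim}
# A sketch (not the authors' code) of SL clusters and MST via SciPy.
import numpy as np
from scipy.sparse.csgraph import connected_components, minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def sl_clusters(points, s):
    # SL clusters at scale s = components of the graph with d(a,b) <= 2s
    d = squareform(pdist(points))
    adjacency = (d <= 2 * s) & (d > 0)
    _, labels = connected_components(adjacency, directed=False)
    return labels      # labels[i] is the cluster index of the i-th point

def merge_scales(points):
    # sorted half-lengths of the MST edges = merge scales of the dendrogram
    mst = minimum_spanning_tree(squareform(pdist(points)))
    return np.sort(mst.data) / 2

A = np.array([[0.], [4.], [6.], [9.], [10.]])  # the cloud of the 1st example
print(sl_clusters(A, 0.5))   # points 9 and 10 get one label, others differ
print(merge_scales(A))       # [0.5 1.  1.5 2. ]
\end{verbatim}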
A review of the relevant past work on persistence diagrams is postponed to section~\ref{sec:persistence_modules}, which introduces more auxiliary notions.
A persistence diagram consists of dots $(b,d)\in\mathbb{R}^2$ whose birth/death coordinates represent a life interval $[b,d)$ of a homology class, e.g. a connected component in a Vietoris-Rips filtration, see the bottom middle picture in Fig.~\ref{fig:5-point_line}.
\medskip
Persistence diagrams are isometry invariants that are stable under noise in the sense that a topological space and its noisy point sample have close persistence diagrams.
This stability under noise allows us to classify continuous shapes by using only their discrete samples.
\medskip
Imagine that several rigid shapes are sparsely represented by a few salient points, e.g. corners or local maxima of a distance function.
Translations and rotations of these point clouds do not change the underlying shapes.
Hence clouds should be classified modulo isometries that preserve distances between points.
An important problem is to recognize a shape, e.g. within a given set of representatives, from its sparse point sample with noise.
This paper solves the problem by computing isometry invariants, namely the new mergegram, the 0D persistence and the pair-set of distances to two nearest neighbors for each point.
\medskip
Since all dots in a persistence diagram are unordered, our experimental section~\ref{sec:experiments} uses a neural network whose output is invariant under permutations of the input points by construction.
PersLay \cite{carriere2019perslay} is a collection of permutation-invariant neural network layers, i.e. functions on sets of points in $\mathbb{R}^n$ that give the same output regardless of the order in which the points are inserted.
\medskip
PersLay extends the neural network layers introduced in Deep Sets \cite{zaheer2017deep}.
PersLay introduces new layers specially designed to handle persistence diagrams, as well as a new form of representing such layers.
Each layer is a combination of a coefficient layer $\omega:\mathbb{R}^n \rightarrow \mathbb{R}$, a point transformation $\phi:\mathbb{R}^n \rightarrow \mathbb{R}^q$ and a permutation-invariant operation $\text{op}$ that produces the final output
$$\text{PersLay}(\text{diagram}) = \text{op}(\{\omega(p)\phi(p) \mid p \in \text{diagram}\}), \text{ where the diagram is any set of points in } \mathbb{R}^n.$$
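To illustrate the formula above, here is a minimal NumPy sketch (ours, not code from \cite{carriere2019perslay}) of one such layer; the concrete choices of $\omega$ (the persistence $d-b$ of a dot), $\phi$ (Gaussian evaluations at fixed centers) and $\text{op}$ (coordinatewise maximum) are illustrative assumptions, not the exact layers of the cited paper.
\begin{verbatim}
# A sketch of one permutation-invariant layer in plain NumPy.
import numpy as np

rng = np.random.default_rng(0)
centers = rng.uniform(0, 3, size=(16, 2))  # q = 16 fixed (learnable) centers

def omega(p):                 # coefficient layer: R^2 -> R
    return p[1] - p[0]        # weight a dot by its persistence death - birth

def phi(p):                   # point transformation: R^2 -> R^q
    return np.exp(-np.sum((centers - p) ** 2, axis=1))

def perslay(diagram):         # op = coordinatewise maximum over the set
    return np.max([omega(p) * phi(p) for p in diagram], axis=0)

D = np.array([[0, 0.5], [0, 1.0], [0.5, 1.5], [1.0, 1.5], [2.0, 3.0]])
assert np.allclose(perslay(D), perslay(D[::-1]))  # order-independent output
\end{verbatim}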
\section{The merge module and mergegram of a dendrogram}
\label{sec:mergegram}
The section introduces a merge module (a family of vector spaces with consistent linear maps) and a mergegram (a diagram of points in $\mathbb{R}^2$ representing a merge module).
\begin{dfn}[partition set $\mathbb{P}(A)$]
\label{dfn:partition}
For any set $A$, a \emph{partition} of $A$ is a finite collection of non-empty disjoint subsets $A_1,\dots,A_k\subset A$ whose union is $A$.
The \emph{single-block} partition of $A$ consists of the set $A$ itself.
The \emph{partition set} $\mathbb{P}(A)$ consists of all partitions of $A$.
\hfill $\blacksquare$
\end{dfn}
If $A=\{1,2,3\}$, then $(\{1,2\},\{3\})$ is a partition of $A$, but
$(\{1\},\{2\})$ and $(\{1,2\},\{1,3\})$ are not.
In this case the partition set $\mathbb{P}(A)$ consists of 5 partitions
$$(\{1\},\{2\},\{3\}),\quad
(\{1,2\},\{3\}),\quad
(\{1,3\},\{2\}),\quad
(\{2,3\},\{1\}),\quad
(\{1,2,3\}).$$
Definition~\ref{dfn:dendrogram} below extends the concept of a dendrogram from \cite[section~3.1]{carlsson2010characterization} to arbitrary (possibly, infinite) sets $A$.
Since every partition of $A$ is finite by Definition~\ref{dfn:partition}, we do not need to require separately that an initial partition of $A$ is finite.
Non-singleton sets are now allowed.
\begin{dfn}[dendrogram of merge sets]
\label{dfn:dendrogram}
A \emph{dendrogram} over any set $A$ is a function $\Delta:[0,\infty)\to\mathbb{P}(A)$ of a scale $s\geq 0$ satisfying the following conditions.
\smallskip
\noindent
(\ref{dfn:dendrogram}a)
There exists a scale $r\geq 0$ such that $\Delta(A;s)$ is the single-block partition for all $s\geq r$.
\smallskip
\noindent
(\ref{dfn:dendrogram}b)
If $s\leq t$, then $\Delta(A;s)$ \emph{refines} $\Delta(A;t)$, i.e. any set from $\Delta(A;s)$ is a subset of some set from $\Delta(A;t)$.
These inclusions induce the natural map $\Delta_s^t:\Delta(A;s)\to\Delta(A;t)$.
\smallskip
\noindent
(\ref{dfn:dendrogram}c)
There are finitely many \emph{merge scales} $s_i$ such that $$s_0 = 0 \text{ and } s_{i+1} = \sup\{s \mid \text{the map } \Delta_{s_i}^{s'} \text{ is the identity for all } s' \in [s_i,s)\},\quad i=0,\dots,m-1.$$
\noindent
Since $\Delta(A;s_{i})\to\Delta(A;s_{i+1})$ is not an identity map, there is a subset $B\in\Delta(A;s_{i+1})$ whose preimage consists of at least two subsets from $\Delta(A;s_{i})$.
This subset $B\subset A$ is called a \emph{merge} set and its \emph{birth} scale is $s_{i+1}$.
All sets of $\Delta(A;0)$ are merge sets with the birth scale 0.
The life of $B$ is the interval $\mathrm{life}(B)=[s_{i+1},t)$ from its birth scale to its \emph{death} scale $t=\sup\{s \mid \Delta_{s_{i+1}}^s(B)=B\}$.
\hfill $\blacksquare$
\end{dfn}
Dendrograms are usually represented as trees whose nodes correspond to all sets from the partitions $\Delta(A;s_i)$ at merge scales.
Edges of such a tree connect any set $B\in\Delta(A;s_{i+1})$ with its preimages under the map $\Delta(A;s_{i})\to\Delta(A;s_{i+1})$.
Fig.~\ref{fig:3-point_dendrogram} shows the dendrogram on $A=\{1,2,3\}$.
\medskip
\begin{figure}
\parbox{100mm}{
\begin{tabular}{lccccc}
partition $\Delta(A;2)$ at scale $s_2=2$ & & & $\{1,2,3\}$ & & \\
map $\Delta_1^2:\Delta(A;1)\to\Delta(A;2)$ & & & $\uparrow$ & $\nwarrow$ & \\
partition $\Delta(A;1)$ at scale $s_1=1$ & & & \{1, 2\} & & \{3\} \\
map $\Delta_0^1:\Delta(A;0)\to\Delta(A;1)$ & & $\nearrow$ & $\uparrow$ & & $\uparrow$ \\
partition $\Delta(A;0)$ at scale $s_0=0$ & $\{1\}$ & & $\{2\}$ & & \{3\}
\end{tabular}}
\parbox{35mm}{
\begin{tikzpicture}[scale = 0.9]
\draw[style=help lines,step = 1] (0,0) grid (2.4,2.4);
\draw[->] (-0.2,0) -- (2.4,0) node[right] {birth};
\draw[->] (0,-0.2) -- (0,2.4) node[above] {death};
\draw[-] (0,0) -- (2.4,2.4) node[right]{};
\foreach \x/\xtext in {1/1, 2/2}
\draw[shift={(\x,0)}] (0pt,2pt) -- (0pt,-2pt) node[below] {$\xtext$};
\foreach \y/\ytext in {1/1, 2/2}
\draw[shift={(0,\y)}] (2pt,0pt) -- (-2pt,0pt) node[left] {$\ytext$};
\filldraw [fill=red] (0,1) circle (2pt);
\filldraw [fill = red] (0,1) circle (2pt);
\filldraw [fill = blue] (0,2) circle (2pt);
\filldraw [fill = blue] (1,2) circle (2pt);
\filldraw [fill = blue] (2,2.7) circle (2pt);
\end{tikzpicture}}
\caption{The dendrogram $\Delta$ on $A=\{1,2,3\}$ and its mergegram $\mathrm{MG}(\Delta)$ from Definition~\ref{dfn:mergegram}.}
\label{fig:3-point_dendrogram}
\end{figure}
In the dendrogram above, the partition $\Delta(A;1)$ consists of $\{1,2\}$ and $\{3\}$.
The maps $\Delta_s^t$ induced by inclusions respect compositions in the sense that $\Delta_s^t\circ\Delta_r^s=\Delta_r^t$ for any $r\leq s\leq t$, e.g. $\Delta_0^1(\{1\})=\{1,2\}=\Delta_0^1(\{2\})$ and $\Delta_0^1(\{3\})=\{3\}$, i.e. $\Delta_0^1$ is a well-defined map from the partition $\Delta(A;0)$ into 3 singleton sets to $\Delta(A;1)$, but is not the identity.
\medskip
At the scale $s_0=0$ the merge sets $\{1\},\{2\}$ have $\mathrm{life}=[0,1)$, while the merge set $\{3\}$ has $\mathrm{life}=[0,2)$.
At the scale $s_1=1$ the only merge set $\{1,2\}$ has $\mathrm{life}=[1,2)$.
At the scale $s_2=2$ the only merge set $\{1,2,3\}$ has $\mathrm{life}=[2,+\infty)$.
The notation $\Delta$ is motivated as the first (Greek) letter in the word dendrogram and by a $\Delta$-shape of a typical tree above.
\medskip
Condition~(\ref{dfn:dendrogram}a) means that
the partition of $A$ becomes trivial (a single block) for all large scales $s$.
Condition~(\ref{dfn:dendrogram}b) says that when the scale $s$ is increasing, sets from a partition $\Delta(A;s)$ can only merge with each other, but cannot split.
Condition~(\ref{dfn:dendrogram}c) implies that there are only finitely many mergers, when two or more subsets of $A$ merge into a larger merge set.
\medskip
\begin{lem}[single-linkage dendrogram]
\label{lem:sl_clustering}
Given a metric space $(X,d)$ and a finite set $A\subset X$, the single-linkage dendrogram $\Delta_{SL}(A)$ from Definition~\ref{dfn:sl_clustering} satisfies Definition~\ref{dfn:dendrogram}.
\end{lem}
\begin{proof}
Since $A$ is finite, there are only finitely many inter-point distances within $A$, which implies conditions (\ref{dfn:dendrogram}a,c).
Let $f:X\to\mathbb{R}$ be the distance from a point $p\in X$ to (the closest point of) $A$.
Condition (\ref{dfn:dendrogram}b) follows from the inclusions $f^{-1}[0,s) \subseteq f^{-1}[0,t)$ for $s\leq t$.
\end{proof}
A \emph{mergegram} represents lives of merge sets by dots with two coordinates (birth,death).
\begin{dfn}[mergegram $\mathrm{MG}(\Delta)$]
\label{dfn:mergegram}
The \emph{mergegram} of a dendrogram $\Delta$ from Definition~\ref{dfn:dendrogram} has the dot (birth,death) in $\mathbb{R}^2$ for each merge set $A$ of $\Delta$ with $\mathrm{life}(A)$=[birth,death).
If any life interval appears $k$ times, the dot (birth,death) has the multiplicity $k$ in $\mathrm{MG}(\Delta)$.
\hfill $\blacksquare$
\end{dfn}
For simplicity, this paper considers vector spaces with coefficients (of linear combinations of vectors) only in $\mathbb{Z}_2=\{0,1\}$, which can be replaced by any field.
\begin{dfn}[merge module $M(\Delta)$]
\label{dfn:merge_module}
For any dendrogram $\Delta$ on a set $X$ from Definition~\ref{dfn:dendrogram},
the \emph{merge module} $M(\Delta)$ consists of the vector spaces $M_s(\Delta)$, $s\in\mathbb{R}$, and linear maps $m_s^t:M_s(\Delta)\to M_t(\Delta)$, $s\leq t$.
For any $s\in\mathbb{R}$ and $A\in\Delta(s)$, the space $M_s(\Delta)$ has a generator, i.e. a basis vector, $[A]\in M_s(\Delta)$.
For $s<t$ and any set $A\in\Delta(s)$,
if the image of $A$ under $\Delta_s^t$ coincides with $A\subset X$, i.e. $\Delta_s^t(A)=A$, then $m_s^t([A])=[A]$, else $m_s^t([A])=0$.
\hfill $\blacksquare$
\end{dfn}
\begin{figure}[h]
\begin{tabular}{lccccccccc}
scale $s_3=+\infty$ & 0 & & & & & 0 \\
map $m_2^{+\infty}$ & $\uparrow$ & & & & & $\uparrow$\\
scale $s_2=2$ & $\mathbb{Z}_2$ & & & 0 & 0 & [\{1,2,3\}]\\
map $m_1^2$ & $\uparrow$ & & & $\uparrow$ & $\uparrow$\\
scale $s_1=1$ & $\mathbb{Z}_2\oplus\mathbb{Z}_2$ & 0 & 0 & [\{3\}] & [\{1,2\}] \\
map $m_0^1$ & $\uparrow$ & $\uparrow$ & $\uparrow$ & $\uparrow$ \\
scale $s_0=0$ & $\mathbb{Z}_2\oplus\mathbb{Z}_2\oplus\mathbb{Z}_2$ & [\{1\}] & [\{2\}] & [\{3\}] &
\end{tabular}
\caption{The merge module $M(\Delta)$ of the dendrogram $\Delta$ on the set $X=\{1,2,3\}$ in Fig.~\ref{fig:3-point_dendrogram}.}
\label{fig:3-point_module}
\end{figure}
\begin{exa}
\label{exa:5-point_set}
Fig.~\ref{fig:5-point_set} shows the metric space $X=\{a,b,c,p,q\}$ with distances defined by the shortest path metric induced by the specified edge-lengths, see the distance matrix.
\begin{figure}[H]
\parbox{80mm}{
\begin{tikzpicture}[scale = 0.75][sloped]
\node (x) at (5,3) {x};
\node (a) at (1,1) {a};
\draw (a) -- node[above]{5} ++ (x);
\node (b) at (3.5,4.0) {b};
\draw (b) -- node[above]{1} ++ (x);
\node (c) at (7,1) {c};
\draw (c) -- node[below]{2} ++ (x);
\node (y) at (8,3) {y};
\draw (x) -- node[above]{2} ++ (y);
\node (d) at (10,5){p};
\node (e) at (10,1){q};
\draw (y) -- node[below]{2} ++ (d);
\draw (y) -- node[below]{2} ++ (e);
\end{tikzpicture}}
\parbox{40mm}{
\begin{tabular}{c|ccccc}
& a & b & c & p & q \\
\hline
a & 0 & 6 & 7 & 9 & 9 \\
b & 6 & 0 & 3 & 5 & 5 \\
c & 7 & 3 & 0 & 6 & 6 \\
p & 9 & 5 & 6 & 0 & 4 \\
q & 9 & 5 & 6 & 4 & 0
\end{tabular}}
\caption{The set $X=\{a,b,c,p,q\}$ has the distance matrix defined by the shortest path metric.}
\label{fig:5-point_set}
\end{figure}
\begin{figure}[H]
\begin{tikzpicture}[scale = 0.6][sloped]
\draw[style=help lines,step = 1] (-1,0) grid (10.4,6.3);
\foreach \i in {0,0.5,...,3.0} { \node at (-1.4,2*\i) {\i}; }
\node (a) at (0,-0.3) {a};
\node (b) at (4,-0.3) {b};
\node (c) at (6,-0.3) {c};
\node (d) at (8,-0.3) {p};
\node (e) at (10,-0.3) {q};
\node (x) at (5,6.75) {};
\node (de) at (9,4){};
\node (bc) at (5.0,3){};
\node (bcde) at (7.0,5){};
\node (all) at (5.0,6){};
\draw [line width=0.5mm, blue ] (a) |- (all.center);
\draw [line width=0.5mm, blue ] (b) |- (bc.center);
\draw [line width=0.5mm, blue ] (c) |- (bc.center);
\draw [line width=0.5mm, blue ] (d) |- (de.center);
\draw [line width=0.5mm, blue ] (e) |- (de.center);
\draw [line width=0.5mm, blue ] (de.center) |- (bcde.center);
\draw [line width=0.5mm, blue ] (bc.center) |- (bcde.center);
\draw [line width=0.5mm, blue ] (bcde.center) |- (all.center);
\draw [line width=0.5mm, blue] [->] (all.center) -> (x.center);
\end{tikzpicture}
\hspace*{1cm}
\begin{tikzpicture}[scale = 1.1]
\draw[style=help lines,step = 0.5] (0,0) grid (3.4,3.4);
\draw[->] (-0.2,0) -- (3.4,0) node[right] {birth};
\draw[->] (0,-0.2) -- (0,3.4) node[above] {death};
\draw[-] (0,0) -- (3.4,3.4) node[right]{};
\foreach \x/\xtext in {0.5/0.5, 1/1, 1.5/1.5, 2.0/2, 2.5/2.5, 3.0/3}
\draw[shift={(\x,0)}] (0pt,2pt) -- (0pt,-2pt) node[below] {$\xtext$};
\foreach \y/\ytext in {0.5/0.5, 1/1, 1.5/1.5, 2.0/2, 2.5/2.5, 3.0/3}
\draw[shift={(0,\y)}] (2pt,0pt) -- (-2pt,0pt) node[left] {$\ytext$};
\filldraw[fill=red] (0,1.5) circle (2pt);
\filldraw [fill = red] (0.0,2.0) circle (2pt);
\filldraw [fill = blue] (0.0,3) circle (2pt);
\filldraw [fill = blue] (1.5,2.5) circle (2pt);
\filldraw [fill = blue] (2.0,2.5) circle (2pt);
\filldraw [fill = blue] (2.5, 3.0) circle (2pt);
\filldraw [fill = blue] (3, 3.7) circle (2pt);
\end{tikzpicture}
\caption{\textbf{Left}: the dendrogram $\Delta$ for the single-linkage clustering of the 5-point set $X=\{a,b,c,p,q\}$ in Fig.~\ref{fig:5-point_set}.
\textbf{Right}: the mergegram $\mathrm{MG}(\Delta)$, red dots have multiplicity 2.}
\label{fig:5-point_set_mergegram}
\end{figure}
The dendrogram $\Delta$ in the first picture of Fig.~\ref{fig:5-point_set_mergegram} generates the mergegram as follows:
\begin{itemize}
\item
each of the singleton sets $\{b\}$ and $\{c\}$ has the dot (0,1.5), so this dot has multiplicity 2;
\item
each of the singleton sets $\{p\}$ and $\{q\}$ has the dot (0,2), so this dot has multiplicity 2;
\item
the singleton set $\{a\}$ has the dot $(0,3)$;
the merge set $\{b,c\}$ has the dot (1.5,2.5);
\item
the merge set $\{p,q\}$ has the dot (2,2.5);
the merge set $\{b,c,p,q\}$ has the dot (2.5,3);
\item
the merge set $\{a,b,c,p,q\}$ has the dot $(3,+\infty)$.
\end{itemize}
\end{exa}
\section{Background on persistence modules and diagrams}
\label{sec:persistence_modules}
This section introduces the key concepts from the thorough review by Chazal et al. \cite{chazal2016structure}.
As will become clear soon, the merge module of any dendrogram belongs to a wider class below.
\begin{dfn}[persistence module $\mathbb{V}$]
\label{dfn:persistence_module}
A \emph{persistence module} $\mathbb{V}$ over the real numbers $\mathbb{R}$ is a family of vector spaces $V_t$, $t\in \mathbb{R}$ with linear maps $v^t_s:V_s \rightarrow V_t$, $s\leq t$ such that $v^t_t$ is the identity map on $V_t$ and the composition is respected: $v^t_s \circ v^s_r = v^t_r$ for any $r \leq s \leq t$.
\hfill $\blacksquare$
\end{dfn}
The set of real numbers can be considered as a category $\mathbb{R}$ in the following sense.
The objects of $\mathbb{R}$ are all real numbers.
Any two real numbers such that $a\leq b$ define a single morphism $a\to b$.
The composition of morphisms $a\to b$ and $b \to c$ is the morphism $a \to c$.
In this language, a persistence module is a functor from $\mathbb{R}$ to the category of vector spaces.
\medskip
A basic example of $\mathbb{V}$ is an interval module.
An interval $J$ between points $p<q$ in the line $\mathbb{R}$ can be one of the following types: closed $[p,q]$, open $(p,q)$ and half-open or half-closed $[p,q)$ and $(p,q]$.
It is convenient to encode types of endpoints by $\pm$ superscripts as follows:
$$[p^-,q^+]:=[p,q],\quad
[p^+,q^-]:=(p,q),\quad
[p^+,q^+]:=(p,q],\quad
[p^-,q^-]:=[p,q).$$
The endpoints $p,q$ can also take the infinite values $\pm\infty$, but without superscripts.
\begin{exa}[interval module $\mathbb{I}(J)$]
\label{exa:interval_module}
For any interval $J\subset\mathbb{R}$, the \emph{interval module} $\mathbb{I}(J)$ is the persistence module defined by the following vector spaces $I_s$ and linear maps $i_s^t:I_s\to I_t$
$$I_s=\left\{ \begin{array}{ll}
\mathbb{Z}_2, & \mbox{ for } s\in J, \\
0, & \mbox{ otherwise };
\end{array} \right.\qquad
i_s^t=\left\{ \begin{array}{ll}
\mathrm{id}, & \mbox{ for } s,t\in J, \\
0, & \mbox{ otherwise }
\end{array} \right.\mbox{ for any }s\leq t.$$
\end{exa}
\medskip
The direct sum $\mathbb{W}=\mathbb{U}\oplus\mathbb{V}$ of persistence modules $\mathbb{U},\mathbb{V}$ is defined as the persistence module with the vector spaces $W_s=U_s\oplus V_s$ and linear maps $w_s^t=u_s^t\oplus v_s^t$.
\medskip
We illustrate the abstract concepts above using geometric constructions of Topological Data Analysis.
Let $f:X\to\mathbb{R}$ be a continuous function on a topological space.
Its \emph{sublevel} sets $X_s^f=f^{-1}((-\infty,s])$ form nested subspaces $X_s^f\subset X_t^f$ for any $s\leq t$.
The inclusions of the sublevel sets respect compositions similarly to a dendrogram $\Delta$ in Definition~\ref{dfn:dendrogram}.
\medskip
On a metric space $X$ with a distance function $d:X\times X\to[0,+\infty)$, a typical example of a function $f:X\to\mathbb{R}$ is the distance to a finite set of points $A\subset X$.
More specifically, for any point $p\in X$, let $f(p)$ be the distance from $p$ to (a closest point of) $A$.
For any $r\geq 0$, the preimage $X_r^f=f^{-1}((-\infty,r])=\{q\in X \mid d(q,A)\leq r\}$ is the union of closed balls that have the radius $r$ and centers at all points $p\in A$.
For example, $X_0^f=f^{-1}((-\infty,0])=A$ and $X_{+\infty}^f=f^{-1}(\mathbb{R})=X$.
\medskip
If we consider any continuous function $f:X\to\mathbb{R}$, we have the inclusion $X_s^f\subset X_r^f$ for any $s\leq r$.
Hence all sublevel sets $X_s^f$ form a nested sequence of subspaces within $X$.
The above construction of a \emph{filtration} $\{X_s^f\}$ can be considered as a functor from $\mathbb{R}$ to the category of topological spaces.
Below we discuss the most practically used case of dimension 0.
\begin{exa}[persistent homology]
\label{exa:persistent_homology}
For any topological space $X$, the 0-dimensional \emph{homology} $H_0(X)$ is the vector space (with coefficients $\mathbb{Z}_2$) generated by all connected components of $X$.
Let $\{X_s\}$ be any \emph{filtration} of nested spaces, e.g. sublevel sets $X_s^f$ based on a continuous function $f:X\to\mathbb{R}$.
The inclusions $X_s\subset X_r$ for $s\leq r$ induce the linear maps between homology groups $H_0(X_s)\to H_0(X_r)$ and define the \emph{persistent homology} $\{H_0(X_s)\}$, which satisfies the conditions of a persistence module from Definition~\ref{dfn:persistence_module}.
\hfill $\blacksquare$
\end{exa}
\medskip
If $X$ is a finite set of $m$ points, then $H_0(X)$ is the direct sum $\mathbb{Z}_2^m$ of $m$ copies of $\mathbb{Z}_2$.
\medskip
The persistence modules that can be decomposed as direct sums of interval modules can be described in a very simple combinatorial way by persistence diagrams of dots in $\mathbb{R}^2$.
\begin{dfn}[persistence diagram $\mathrm{PD}(\mathbb{V})$]
\label{dfn:persistence_diagram}
Let a persistence module $\mathbb{V}$ be decomposed as a direct sum of interval modules from Example~\ref{exa:interval_module} : $\mathbb{V}\cong\bigoplus\limits_{l \in L}\mathbb{I}(p^{*}_l,q^{*}_l)$, where $*$ is $+$ or $-$.
The \emph{persistence diagram} $\mathrm{PD}(\mathbb{V})$ is the multiset
$\mathrm{PD}(\mathbb{V}) = \{(p_l,q_l) \mid l \in L \} \setminus \{p=q\}\subset\mathbb{R}^2$.
\hfill $\blacksquare$
\end{dfn}
\medskip
The 0-dimensional persistent homology of a space $X$ with a continuous function $f:X\to\mathbb{R}$ will be denoted by $\mathrm{PD}\{H_0(X_s^f)\}$.
Lemma~\ref{lem:merge_module_decomposition} will prove that the merge module $M(\Delta)$ of any dendrogram $\Delta$ is also decomposable into interval modules.
Hence the mergegram $\mathrm{MG}(\Delta)$ from Definition~\ref{dfn:mergegram} can be interpreted as the persistence diagram of the merge module $M(\Delta)$.
\section{The mergegram is stronger than the 0-dimensional persistence}
\label{sec:mergegram_stronger}
Let $f:X\to\mathbb{R}$ be the distance function to a finite subset $A$ of a metric space $(X,d)$.
The persistent homology $\{H_k(X_s^f)\}$ in any dimension $k$ is invariant under isometries of $X$.
\medskip
Moreover, the persistence diagrams of very different shapes, e.g. topological spaces and their discrete samples, can be easily compared by the bottleneck distance in Definition~\ref{dfn:bottleneck_distance}.
\medskip
Practical applications of persistence are justified by Stability Theorem~\ref{thm:stability_persistence} saying that the persistence diagram continuously changes under perturbations of a given filtration or an initial point set.
A similar stability of mergegrams will be proved in Theorem~\ref{thm:stability_mergegram}.
\medskip
This section shows that the mergegram $\mathrm{MG}(\Delta_{SL}(A))$ has more isometry information about the subset $A\subset X$ than the 0-dimensional persistent homology $\{H_0(X_s^f)\}$.
\medskip
Theorem~\ref{thm:0D_persistence_mergegram} shows how to obtain the 0D persistence $\mathrm{PD}\{H_0(X_s^f)\}$ from $\mathrm{MG}(\Delta_{SL}(A))$, where $f:X\to\mathbb{R}$ is the distance to a finite subset $A\subset X$.
Example~\ref{exa:mergegram_stronger} builds two 4-point sets in $\mathbb{R}$ whose persistence diagrams are identical, but their mergegrams are different.
\medskip
We start from folklore Claims~\ref{claim:0D_persistence_SL}-\ref{claim:0D_persistence_MST}, which interpret the 0D persistence $\mathrm{PD}\{H_0(X_s^f)\}$ using the classical concepts of the single-linkage dendrogram and Minimum Spanning Tree.
\begin{myclaim}[0D persistence from $\Delta_{SL}$]
\label{claim:0D_persistence_SL}
For a finite set $A$ in a metric space $(X,d)$, let $f:X\to\mathbb{R}$ be the distance to $A$.
In the single-linkage dendrogram $\Delta_{SL}(A)$, let $0<s_1<\dots<s_m<s_{m+1}=+\infty$ be all distinct merge scales.
If $k\geq 2$ subsets of $A$ merge into a larger subset of $A$ at a scale $s_i$, the multiplicity of $s_i$ is $\mu_i=k-1$.
Then the persistence diagram $\mathrm{PD}\{H_0(X_s^f)\}$ consists of the dots $(0,s_i)$ with multiplicities $\mu_i$, $i=1,\dots,m+1$.
\hfill $\blacksquare$
\end{myclaim}
\begin{myclaim}[0D persistence from MST]
\label{claim:0D_persistence_MST}
For a set $A$ of $n$ points in a metric space $(X,d)$, let $f:X\to\mathbb{R}$ be the distance to $A$.
Let a Minimum Spanning Tree $\mathrm{MST}(A)$ have edge-lengths $l_1\leq\dots\leq l_{n-1}$.
The persistence diagram $\mathrm{PD}\{H_0(X_s^f)\}$ consists of the $n-1$ dots $(0,0.5l_i)$ counted with multiplicities if some edge-lengths are equal, plus the infinite dot $(0,+\infty)$.
\hfill $\blacksquare$
\end{myclaim}
\begin{thm}[0D persistence from a mergegram]
\label{thm:0D_persistence_mergegram}
For a finite set $A$ in a metric space $(X,d)$, let $f:X\to\mathbb{R}$ be the distance to $A$.
Let the mergegram $\mathrm{MG}(\Delta_{SL}(A))$ be a multiset $\{(b_i,d_i)\}_{i=1}^k$, where some dots can be repeated.
Then the persistence diagram $\mathrm{PD}\{H_0(X_s^f)\}$ is the difference of the multisets $\{(0,d_i)\}_{i=1}^{k}-\{(0,b_i)\}_{i=1}^{k}$ containing each dot $(0,s)$ exactly $\#d-\#b$ times, where $\#d$ is the number of deaths $d_i=s$ and $\#b$ is the number of births $b_i=s$.
All trivial dots $(0,0)$ are ignored; alternatively, we take $\{(0,d_i)\}_{i=1}^{k}$ only with $d_i>0$.
\hfill $\blacksquare$
\end{thm}
\begin{proof}
In the language of Claim~\ref{claim:0D_persistence_SL}, suppose that at a scale $s>0$ of multiplicity $\mu$ exactly $\mu+1$ subsets merge into a set $B\in\Delta_{SL}(A;s)$.
By Claim~\ref{claim:0D_persistence_SL} this merger contributes $\mu$ dots $(0,s)$ to the persistence diagram $\mathrm{PD}\{H_0(X_s^f)\}$.
By Definition~\ref{dfn:mergegram} the same merger contributes $\mu+1$ dots of the form $(b_i,s)$, $i=1,\dots,\mu+1$, corresponding to the $\mu+1$ sets that merge into $B$ at the scale $s$.
Moreover, the set $B$ itself will merge later into a larger set, which creates one extra dot $(s,d)\in\mathrm{MG}(\Delta_{SL}(A))$.
The exceptional case $B=A$ corresponds to $d=+\infty$.
\smallskip
Removing one dot $(0,s)$ from the $\mu+1$ dots counted above,
as happens in the difference $\{(0,d_i)\}_{i=1}^{k}-\{(0,b_i)\}_{i=1}^{k}$ of multisets, we get exactly $\mu$ dots $(0,s)\in\mathrm{PD}\{H_0(X_s^f)\}$.
The required formula has been proved for contributions of any merge set $B\subset A$.
\end{proof}
In Example~\ref{exa:5-point_line} the mergegram in the last picture of Fig.~\ref{fig:5-point_line} is the multiset of 9 dots:
$$\mathrm{MG}(\Delta_{SL}(A))=\{(0,0.5),(0,0.5),(0,1),(0,1),(0.5,1.5),(1,1.5),(0,2),(1.5,2),(2,+\infty)\}.$$
Taking the difference of multisets and ignoring trivial dots $(0,0)$, we get \\
$\mathrm{PD}\{H_0(X_s^f)\}=\{(0,0.5),(0,0.5),(0,1),(0,1),(0,1.5),(0,1.5),(0,2),(0,2),(0,+\infty)\}-$ \\
$-\{(0,0.5),(0,1),(0,2)\}=\{(0,0.5),(0,1),(0,1.5),(0,2),(0,+\infty)\}
\mbox{ as in Fig.~\ref{fig:5-point_line}}.$
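\medskip
The multiset difference above is a one-line computation; here is a minimal Python check (ours) of Theorem~\ref{thm:0D_persistence_mergegram} on this example, using counters as multisets.
\begin{verbatim}
# A sketch checking the theorem on the example above via multiset counters.
from collections import Counter

MG = [(0, 0.5), (0, 0.5), (0, 1), (0, 1), (0.5, 1.5),
      (1, 1.5), (0, 2), (1.5, 2), (2, float('inf'))]
deaths = Counter((0, d) for b, d in MG)
births = Counter((0, b) for b, d in MG if b > 0)  # b = 0 gives trivial (0,0)
PD = deaths - births       # Counter difference keeps positive counts only
print(sorted(PD.elements()))
# [(0, 0.5), (0, 1), (0, 1.5), (0, 2), (0, inf)]
\end{verbatim}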
\begin{exa}[the mergegram is stronger than 0D persistence]
\label{exa:mergegram_stronger}
Fig.~\ref{fig:4-point_set1} and~\ref{fig:4-point_set2} show the dendrograms, identical 0D persistence diagrams and different mergegrams for the sets $A=\{0,1,3,7\}$ and $B=\{0,1,5,7\}$ in $\mathbb{R}$.
This example together with Theorem~\ref{thm:0D_persistence_mergegram} justifies that the new mergegram is strictly stronger than 0D persistence as an isometry invariant.
\begin{figure}[H]
\centering
\begin{tikzpicture}[scale = 0.5][sloped]
\draw [->] (-1,0) -- (-1,5) node[above] {scale $s$};
\draw[style=help lines,step = 1] (-1,0) grid (6.4,4.4);
\foreach \i in {0,0.5,...,2} { \node at (-1.5,2*\i) {\i}; }
\node (a) at (0,-0.4) {0};
\node (b) at (2,-0.4) {1};
\node (c) at (4,-0.4) {3};
\node (d) at (6,-0.4) {7};
\node (x) at (4.625,5) {};
\node (y) at (4.625,4) {};
\node (ab) at (1,1) {};
\node (abc) at (2.5,2){};
\node (abce) at (3,4){};
\draw [line width=0.5mm, blue] (a) |- (ab.center);
\draw [line width=0.5mm, blue] (b) |- (ab.center);
\draw [line width=0.5mm, blue] (ab.center) |- (abc.center);
\draw [line width=0.5mm, blue] (c) |- (abc.center);
\draw [line width=0.5mm, blue] (d) |- (abce.center);
\draw [line width=0.5mm, blue] (abc.center) |- (abce.center);
\draw [line width=0.5mm, blue] [->] (y.center) -> (x.center);
\end{tikzpicture}
\hspace*{2mm}
\begin{tikzpicture}[scale = 1.0]
\draw[style=help lines,step = 0.5] (0,0) grid (2.4,2.4);
\draw[->] (-0.2,0) -- (2.4,0) node[right] {birth};
\draw[->] (0,-0.2) -- (0,2.4) node[above] {};
\draw[-] (0,0) -- (2.4,2.4) node[right]{};
\foreach \x/\xtext in {0.5/0.5, 1/1, 1.5/1.5, 2.0/2}
\draw[shift={(\x,0)}] (0pt,2pt) -- (0pt,-2pt) node[below] {$\xtext$};
\foreach \y/\ytext in {0.5/0.5, 1/1, 1.5/1.5, 2.0/2}
\draw[shift={(0,\y)}] (2pt,0pt) -- (-2pt,0pt) node[left] {$\ytext$};
\filldraw [fill=blue] (0,0.5) circle (2pt);
\filldraw [fill=blue] (0,1) circle (2pt);
\filldraw [fill=blue] (0,2) circle (2pt);
\filldraw [fill=blue] (0,2.6) circle (2pt);
\end{tikzpicture}
\hspace*{2mm}
\begin{tikzpicture}[scale = 1.0]
\draw[style=help lines,step = 0.5] (0,0) grid (2.4,2.4);
\draw[->] (-0.2,0) -- (2.4,0) node[right] {birth};
\draw[->] (0,-0.2) -- (0,2.4) node[above] {death};
\draw[-] (0,0) -- (2.4,2.4) node[right]{};
\foreach \x/\xtext in {0.5/0.5, 1/1, 1.5/1.5, 2.0/2}
\draw[shift={(\x,0)}] (0pt,2pt) -- (0pt,-2pt) node[below] {$\xtext$};
\foreach \y/\ytext in {0.5/0.5, 1/1, 1.5/1.5, 2.0/2}
\draw[shift={(0,\y)}] (2pt,0pt) -- (-2pt,0pt) node[left] {$\ytext$};
\filldraw [fill=red] (0,0.5) circle (2pt);
\filldraw [fill=blue] (0.0,1) circle (2pt);
\filldraw [fill=blue] (0.0,2) circle (2pt);
\filldraw [fill=blue] (0.5,1.0) circle (2pt);
\filldraw [fill=blue] (1.0,2.0) circle (2pt);
\filldraw [fill=blue] (2,2.6) circle (2pt);
\end{tikzpicture}
\caption{\textbf{Left}: single-linkage dendrogram $\Delta_{SL}(A)$ for $A=\{0,1,3,7\}\subset\mathbb{R}$ (the horizontal axes of the dendrograms are distorted).
\textbf{Middle}: the 0D persistence diagram for the sublevel filtration of the distance to $A$.
\textbf{Right}: mergegram $\mathrm{MG}(\Delta_{SL}(A))$. }
\label{fig:4-point_set1}
\end{figure}
\begin{figure}[H]
\begin{tikzpicture}[scale = 0.5][sloped]
\draw [->] (-1,0) -- (-1,5) node[above] {scale $s$};
\draw[style=help lines,step = 1] (-1,0) grid (6.4,4.4);
\foreach \i in {0,0.5,...,2} { \node at (-1.5,2*\i) {\i}; }
\node (a) at (0,-0.4) {0};
\node (b) at (2,-0.4) {1};
\node (c) at (4,-0.4) {5};
\node (d) at (6,-0.4) {7};
\node (ab) at (1,1) {};
\node (cd) at (5,2) {};
\node (abcd) at (3,4){};
\node (x) at (3,5) {};
\node (y) at (3,4) {};
\draw [line width=0.5mm, blue] (a) |- (ab.center);
\draw [line width=0.5mm, blue] (b) |- (ab.center);
\draw [line width=0.5mm, blue] (c) |- (cd.center);
\draw [line width=0.5mm, blue] (d) |- (cd.center);
\draw [line width=0.5mm, blue] (ab.center) |- (abcd.center);
\draw [line width=0.5mm, blue] (cd.center) |- (abcd.center);
\draw [line width=0.5mm, blue] [->] (y.center) -> (x.center);
\end{tikzpicture}
\hspace*{2mm}
\begin{tikzpicture}[scale = 1.0]
\draw[style=help lines,step = 0.5] (0,0) grid (2.4,2.4);
\draw[->] (-0.2,0) -- (2.4,0) node[right] {birth};
\draw[->] (0,-0.2) -- (0,2.4) node[above] {};
\draw[-] (0,0) -- (2.4,2.4) node[right]{};
\foreach \x/\xtext in {0.5/0.5, 1/1, 1.5/1.5, 2.0/2}
\draw[shift={(\x,0)}] (0pt,2pt) -- (0pt,-2pt) node[below] {$\xtext$};
\foreach \y/\ytext in {0.5/0.5, 1/1, 1.5/1.5, 2.0/2}
\draw[shift={(0,\y)}] (2pt,0pt) -- (-2pt,0pt) node[left] {$\ytext$};
\filldraw [fill=blue] (0,0.5) circle (2pt);
\filldraw [fill=blue] (0,1) circle (2pt);
\filldraw [fill=blue] (0,2) circle (2pt);
\filldraw [fill=blue] (0,2.6) circle (2pt);
\end{tikzpicture}
\hspace*{2mm}
\begin{tikzpicture}[scale = 1.0]
\draw[style=help lines,step = 0.5] (0,0) grid (2.4,2.4);
\draw[->] (-0.2,0) -- (2.4,0) node[right] {birth};
\draw[->] (0,-0.2) -- (0,2.4) node[above] {death};
\draw[-] (0,0) -- (2.4,2.4) node[right]{};
\foreach \x/\xtext in {0.5/0.5, 1/1, 1.5/1.5, 2.0/2}
\draw[shift={(\x,0)}] (0pt,2pt) -- (0pt,-2pt) node[below] {$\xtext$};
\foreach \y/\ytext in {0.5/0.5, 1/1, 1.5/1.5, 2.0/2}
\draw[shift={(0,\y)}] (2pt,0pt) -- (-2pt,0pt) node[left] {$\ytext$};
\filldraw[fill=red] (0,0.5) circle (2pt);
\filldraw[fill = red] (0.0,1) circle (2pt);
\filldraw [fill=blue] (0.5,2.0) circle (2pt);
\filldraw [fill=blue] (1.0,2.0) circle (2pt);
\filldraw [fill=blue] (2,2.6) circle (2pt);
\end{tikzpicture}
\caption{\textbf{Left}: single-linkage dendrogram $\Delta_{SL}(B)$ for $B=\{0,1,5,7\}\subset\mathbb{R}$.
\textbf{Middle}: the 0D persistence diagram for the sublevel filtration of the distance to $B$.
\textbf{Right}: mergegram $\mathrm{MG}(\Delta_{SL}(B))$. }
\label{fig:4-point_set2}
\end{figure}
\end{exa}
\section{Distances and stability of persistence modules}
\label{sec:stability_persistence}
Definition~\ref{dfn:homo_modules} introduces homomorphisms between persistence modules, which are needed to state the stability of persistence diagrams $\mathrm{PD}\{H_0(X_s^f)\}$ under perturbations of a function $f:X\to\mathbb{R}$.
This result will imply a similar stability for the mergegram $\mathrm{MG}(\Delta_{SL}(A))$ of the single-linkage clustering dendrogram $\Delta_{SL}(A)$ of a set $A$ within a metric space $X$.
\begin{dfn}[a homomorphism of a degree $\delta$ between persistence modules]
\label{dfn:homo_modules}
Let $\mathbb{U}$ and $\mathbb{V}$ be persistence modules over $\mathbb{R}$.
A \emph{homomorphism} $\mathbb{U}\to\mathbb{V}$ of \emph{degree} $\delta\in\mathbb{R}$ is a collection of linear maps $\phi_t:U_t \rightarrow V_{t+\delta}$, $t \in \mathbb{R}$, such that the following diagram commutes for all $s \leq t$.
\begin{figure}[H]
\centering
\begin{tikzpicture}[scale=1.0]
\matrix (m) [matrix of math nodes,row sep=3em,column sep=4em,minimum width=2em]
{
U_s & U_t \\
V_{s+\delta} & V_{t+\delta} \\};
\path[-stealth]
(m-1-1) edge node [left] {$\phi_s$} (m-2-1)
edge [-] node [above] {$u^t_s$} (m-1-2)
(m-2-1.east|-m-2-2) edge node [above] {$v^{t+\delta}_{s+\delta}$}
node [above] {} (m-2-2)
(m-1-2) edge node [right] {$\phi_t$} (m-2-2);
\end{tikzpicture}
\end{figure}
Let $\text{Hom}^\delta(\mathbb{U},\mathbb{V})$ be the set of all homomorphisms $\mathbb{U}\rightarrow \mathbb{V}$ of degree $\delta$.
Persistence modules $\mathbb{U},\mathbb{V}$ are \emph{isomorphic} if they have inverse homomorphisms $\mathbb{U}\to\mathbb{V}$ and $\mathbb{V}\to\mathbb{U}$ of degree $\delta=0$.
\hfill $\blacksquare$
\end{dfn}
For a persistence module $\mathbb{V}$ with maps $v_s^t:V_s\to V_t$, the simplest example of a homomorphism of degree $\delta\geq 0$
is $1_{\mathbb{V}}^{\delta}:\mathbb{V}\to\mathbb{V}$ defined by the maps $v_s^{s+\delta}$, $s\in\mathbb{R}$.
So the maps $v_s^t$ defining the structure of $\mathbb{V}$ shift all vector spaces $V_s$ by the scale difference $\delta=t-s$.
\medskip
The concept of interleaved modules below is an algebraic generalization of a geometric perturbation of a set $X$ in terms of (the homology of) its sublevel sets $X_s$.
\begin{dfn}[interleaving distance ID]
\label{dfn:interleaving_distance}
Persistence modules $\mathbb{U}$ and $\mathbb{V}$ are $\delta$-interleaved if there are homomorphisms $\phi\in \text{Hom}^\delta(\mathbb{U},\mathbb{V})$ and $\psi \in \text{Hom}^\delta(\mathbb{V},\mathbb{U})$ such that $\phi\circ\psi = 1_{\mathbb{V}}^{2\delta} \text{ and } \psi\circ\phi = 1_{\mathbb{U}}^{2\delta}$.
The \emph{interleaving distance} is
$\mathrm{ID}(\mathbb{U},\mathbb{V})=\inf\{\delta\geq 0 \mid \mathbb{U} \text{ and } \mathbb{V} \text{ are } \delta\text{-interleaved} \}$.
\hfill $\blacksquare$
\end{dfn}
If $f,g:X\to\mathbb{R}$ are continuous functions such that $||f-g||_{\infty}\leq\delta$ in the $L_{\infty}$-distance, the persistence modules $H_k\{f^{-1}(-\infty,s]\}$, $H_k\{g^{-1}(-\infty,s]\}$ are $\delta$-interleaved for any $k$ \cite{cohen2007stability}.
The last conclusion was extended to persistence diagrams in terms of the bottleneck distance below.
\begin{dfn}[bottleneck distance BD]
\label{dfn:bottleneck_distance}
Let multisets $C,D$ contain finitely many points $(p,q)\in\mathbb{R}^2$, $p<q$, of finite multiplicity and all diagonal points $(p,p)\in\mathbb{R}^2$ of infinite multiplicity.
For $\delta\geq 0$, a $\delta$-matching is a bijection $h:C\to D$ such that $|h(a)-a|_{\infty}\leq\delta$ in the $L_{\infty}$-distance on the plane for any point $a\in C$.
The \emph{bottleneck} distance between persistence modules $\mathbb{U},\mathbb{V}$ is $\mathrm{BD}(\mathbb{U},\mathbb{V}) = \text{inf}\{ \delta \mid \text{ there is a }\delta\text{-matching between } \mathrm{PD}(\mathbb{U}) \text{ and } \mathrm{PD}(\mathbb{V})\}$.
\hfill $\blacksquare$
\end{dfn}
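For small diagrams the bottleneck distance can be computed by brute force; here is a minimal Python sketch (ours; dots with infinite coordinates are assumed absent, and practical implementations replace the enumeration of matchings by a binary search with bipartite matching).
Each diagram is augmented by the diagonal projections of the other diagram, and two diagonal copies are matched at zero cost, which models the infinite multiplicity of diagonal points.
\begin{verbatim}
# A brute-force sketch of BD for small diagrams of finite dots.
from itertools import permutations

def linf(p, q):
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

def diag(p):                              # nearest diagonal point to p
    m = (p[0] + p[1]) / 2
    return (m, m)

def bottleneck(C, D):
    # augment each diagram with diagonal projections of the other diagram;
    # the flag True marks a diagonal copy, two copies match at zero cost
    C_aug = [(p, False) for p in C] + [(diag(q), True) for q in D]
    D_aug = [(q, False) for q in D] + [(diag(p), True) for p in C]
    best = float('inf')
    for perm in permutations(range(len(D_aug))):
        cost = max(0.0 if C_aug[i][1] and D_aug[j][1]
                   else linf(C_aug[i][0], D_aug[j][0])
                   for i, j in enumerate(perm))
        best = min(best, cost)
    return best

print(bottleneck([(0, 0.5), (0, 1)], [(0, 0.6), (0, 1)]))  # 0.1
\end{verbatim}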
The original stability of persistence for filtrations of sublevel sets was extended as Theorem~\ref{thm:stability_persistence} to $q$-tame persistence modules.
Intuitively, a persistence module $\mathbb{V}$ is $q$-tame if any non-diagonal square in the persistence diagram $\mathrm{PD}(\mathbb{V})$ contains only finitely many points, see \cite[section~2.8]{chazal2016structure}.
Any finitely decomposable persistence module is $q$-tame.
\begin{thm}[stability of persistence modules]\cite[isometry theorem~4.11]{chazal2016structure}
\label{thm:stability_persistence}
Let $\mathbb{U}$ and $\mathbb{V}$ be q-tame persistence modules. Then $\mathrm{ID}(\mathbb{U},\mathbb{V}) = \mathrm{BD}(\mathrm{PD}(\mathbb{U}),\mathrm{PD}(\mathbb{V}))$,
where $\mathrm{ID}$ is the interleaving distance and $\mathrm{BD}$ is the bottleneck distance between their persistence diagrams.
\hfill $\blacksquare$
\end{thm}
\section{Stability of the mergegram for any single-linkage dendrogram}
\label{sec:stability}
In a dendrogram $\Delta$ from Definition~\ref{dfn:dendrogram}, any merge set $A$ of $\Delta$ has a life interval $\mathrm{life}(A)=[b,d)$ from its birth scale $b$ to its death scale $d$.
Lemmas~\ref{lem:merge_module_decomposition} and~\ref{lem:merge_modules_interleaved} are proved in appendices.
\begin{lem}[merge module decomposition]
\label{lem:merge_module_decomposition}
For any dendrogram $\Delta$ in the sense of Definition~\ref{dfn:dendrogram}, the merge module $M(\Delta)\cong\bigoplus\limits_{A}\mathbb{I}(\mathrm{life}(A))$ decomposes over all merge sets $A$.
\hfill $\blacksquare$
\end{lem}
Lemma~\ref{lem:merge_module_decomposition} will allow us to use the stability of persistence in Theorem~\ref{thm:stability_persistence} for merge modules and also Lemma~\ref{lem:merge_modules_interleaved}.
Stability of the mergegram $\mathrm{MG}(\Delta_{SL}(A))$ will be proved under perturbations of $A$ in the Hausdorff distance defined below.
\begin{dfn}[Hausdorff distance HD]
\label{dfn:Hausdorff_distance}
For any subsets $A,B$ of a metric space $(X,d)$, the \emph{Hausdorff distance} $\mathrm{HD}(A,B)$ is $\max\{\sup\limits_{a\in A}\inf\limits_{b\in B} d(a,b), \sup\limits_{b\in B}\inf\limits_{a\in A} d(a,b)\}$.
\hfill $\blacksquare$
\end{dfn}
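For finite point clouds in $\mathbb{R}^m$ the Hausdorff distance is a short computation; below is a minimal NumPy sketch (ours, with a hypothetical function name).
\begin{verbatim}
# Hausdorff distance between finite clouds given as (n, m) NumPy arrays.
import numpy as np

def hausdorff(A, B):
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return max(d.min(axis=1).max(),  # sup over A of the distance to B
               d.min(axis=0).max())  # sup over B of the distance to A
\end{verbatim}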
\begin{lem}[merge modules interleaved]
\label{lem:merge_modules_interleaved}
If any subsets $A,B$ of a metric space $(X,d)$ have $\mathrm{HD}(A,B)=\delta$, then the merge modules $M(\Delta_{SL}(A))$ and $M(\Delta_{SL}(B))$ are $\delta$-interleaved.
\hfill $\blacksquare$
\end{lem}
\begin{thm}[stability of a mergegram]
\label{thm:stability_mergegram}
Any finite subsets $A,B$ of a metric space $(X,d)$ have mergegrams satisfying $\mathrm{BD}(\mathrm{MG}(\Delta_{SL}(A)),\mathrm{MG}(\Delta_{SL}(B)))\leq \mathrm{HD}(A,B)$.
Hence any small perturbation of $A$ in the Hausdorff distance yields a similarly small perturbation in the bottleneck distance for its mergegram $\mathrm{MG}(\Delta_{SL}(A))$ of the single-linkage clustering dendrogram $\Delta_{SL}(A)$.
\end{thm}
\begin{proof}
The given subsets $A,B$ with $\mathrm{HD}(A,B)=\delta$ have $\delta$-interleaved merge modules by Lemma~\ref{lem:merge_modules_interleaved}, i.e. $\mathrm{ID}(M(\Delta_{SL}(A)),M(\Delta_{SL}(B)))\leq\delta$.
Since any merge module $M(\Delta)$ is finitely decomposable by Lemma~\ref{lem:merge_module_decomposition}, hence $q$-tame, the corresponding mergegram $\mathrm{MG}(\Delta)$ satisfies Theorem~\ref{thm:stability_persistence}, i.e.
$\mathrm{BD}(\mathrm{MG}(\Delta_{SL}(A)),\mathrm{MG}(\Delta_{SL}(B)))\leq\delta$ as required.
\end{proof}
Theorem~\ref{thm:stability_mergegram} is confirmed by the following experiment on cloud perturbations in Fig.~\ref{fig:BD-vs-noise_bound}.
\begin{figure}[h]
\begin{tikzpicture}[scale=0.85]
\begin{axis}[xlabel = noise bound, ylabel = bottleneck distance,grid]
\addplot table [x=a, y=b, col sep=comma] {TableAvg.csv};
\end{axis}
\end{tikzpicture}
\begin{tikzpicture}[scale=0.85]
\begin{axis}[xlabel = noise bound,grid]
\addplot table [x=a, y=b, col sep=comma] {TableMax.csv};
\end{axis}
\end{tikzpicture}
\caption{The bottleneck distances (average on the left, maximum on the right) between mergegrams of sampled point clouds and their perturbations.
Both graphs are below the line $y=2x$. }
\label{fig:BD-vs-noise_bound}
\end{figure}
\begin{enumerate}
\item We uniformly generate $N=100$ black points in the cube $[0,100]^3\subset\mathbb{R}^3$.
\item Then we generate a random number of red points such that the $\epsilon$-ball of every black point randomly contains 1, 2 or 3 red points, for a noise bound $\epsilon\in[0.1,10]$ taken with a step size 0.1 (see the sketch after this list).
\item We compute the bottleneck distance between the mergegrams of the black and red points.
\item We repeat the experiment $K=100$ times and plot the average and maximum in Fig.~\ref{fig:BD-vs-noise_bound}.
\end{enumerate}
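Here is a minimal Python sketch (ours) of steps 1-2; we interpret the $\epsilon$-ball sampling as uniform over the ball, which is one possible reading of the setup above.
\begin{verbatim}
# A sketch of steps 1-2: black points and their red perturbations.
import numpy as np
rng = np.random.default_rng(0)

black = rng.uniform(0, 100, size=(100, 3))     # step 1
eps = 5.0                                      # one value of the noise bound
red = []
for p in black:                                # step 2
    for _ in range(rng.integers(1, 4)):        # 1, 2 or 3 red points
        v = rng.normal(size=3)                 # random direction
        v *= eps * rng.uniform() ** (1 / 3) / np.linalg.norm(v)
        red.append(p + v)                      # uniform in the eps-ball of p
red = np.array(red)
\end{verbatim}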
\section{Experiments on a classification of point sets and conclusions}
\label{sec:experiments}
Algorithm~\ref{alg:mergegram} computes the mergegram of the SL dendrogram for any finite set $A\subset\mathbb{R}^m$.
\begin{alg}
\begin{algorithmic}
\STATE
\STATE \textbf{Input} : a finite point cloud $A\subset\mathbb{R}^m$
\STATE Compute $\mathrm{MST}(A)$ and sort all edges of $\mathrm{MST}(A)$ in increasing order of length
\STATE Initialize a Union-Find structure $U$ over $A$: set every point of $A$ to be its own component.
\STATE Initialize the function $\text{prev: Components}[U] \rightarrow \mathbb{R}$ by setting $\text{prev}(t) = 0$ for all $t$
\STATE Initialize the vector Output that will consist of pairs in $\mathbb{R} \times \mathbb{R}$
\FOR{Edge $e = (a,b)$ in the set of edges (increasing order)}
\STATE Find components $c_1$ and $c_2$ of $a$ and $b$ respectively in Union-Find $U$
\STATE Add pairs (prev$[c_1]$,length($e$)), (prev$[c_2]$,length($e$)) $\in \mathbb{R}^2$ to Output
\STATE Merge components $c_1$ and $c_2$ in Union-Find $U$ and denote the component by $t$
\STATE Set prev$[t]$ = length($e$)
\ENDFOR
\STATE Add the pair (prev$[t]$, $+\infty$) of the final component $t$ to Output
\STATE \textbf{return} Output
\end{algorithmic}
\label{alg:mergegram}
\end{alg}
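For completeness, here is a minimal Python implementation of Algorithm~\ref{alg:mergegram} (ours, assuming SciPy for the MST).
The edge lengths are halved to match the ball-radius convention of the figures, and the final dot of the whole cloud with death $+\infty$ is appended as in Definition~\ref{dfn:mergegram}.
\begin{verbatim}
# A sketch of Algorithm 1 with a simple union-find (path halving).
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mergegram(points):
    mst = minimum_spanning_tree(squareform(pdist(points))).tocoo()
    edges = sorted(zip(mst.data, mst.row, mst.col))  # increasing lengths
    parent = list(range(len(points)))
    prev = [0.0] * len(points)    # birth scale of each current component

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    output = []
    for length, a, b in edges:
        s = length / 2            # balls of radius s overlap at d(a,b) = 2s
        c1, c2 = find(a), find(b)
        output += [(prev[c1], s), (prev[c2], s)]
        parent[c2] = c1           # merge the two components
        prev[c1] = s
    output.append((prev[find(0)], float('inf')))     # the whole cloud
    return output

A = np.array([[0.], [4.], [6.], [9.], [10.]])
print(sorted(mergegram(A)))  # reproduces the mergegram of the 1st example
\end{verbatim}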
Let $\alpha(n)$ be the inverse Ackermann function.
Other constants below are defined in \cite{march2010fast}.
\begin{thm}[a fast mergegram computation]
\label{thm:complexity}
For any cloud $A\subset\mathbb{R}^m$ of $n$ points, the mergegram $\mathrm{MG}(\Delta_{SL}(A))$ can be computed in time $O(\max\{c^6,c_p^2c^2_l \}c^{10}n\log n\,\alpha(n))$.
\end{thm}
\begin{proof}
A Minimum Spanning Tree $\mathrm{MST}(A)$ needs $O(\max\{c^6,c_p^2c^2_l \}c^{10}n\log n\,\alpha(n))$ time by \cite[Theorem~5.1]{march2010fast}.
The rest of Algorithm~\ref{alg:mergegram} is dominated by $O(n\alpha(n))$ Union-Find operations.
Hence the full algorithm has the same computational complexity as the MST.
\end{proof}
The experiments summarized in Fig.~\ref{fig:100-point_clouds} show that the mergegram curve in blue outperforms other isometry invariants on the isometry classification by the state-of-the-art PersLay.
We generated 10 classes of 100-point clouds within the unit ball in $\mathbb{R}^m$ for $m=2,3,4,5$.
For each class, we made 100 copies of each cloud and perturbed every point by a uniform random shift in a cube of size $2\epsilon$, where $\epsilon$ is called a \emph{noise bound}.
For each of 100 perturbed clouds, we added 25 points such that every new point is $\epsilon$-close to an original point.
Within each of 10 classes all 100 clouds were randomly rotated within the unit ball around the origin, see Fig.~\ref{fig:clouds}.
For each of the resulting 1000 clouds, we computed the mergegram, 0D persistence diagram and the diagram of pairs of distances to two nearest neighbors for every point.
\begin{figure}[h!]
\includegraphics[height=44mm]{images/cloud0.png}
\includegraphics[height=44mm]{images/cloud0+noise+extra.png}
\includegraphics[height=44mm]{images/cloud0+noise+extra+rotation.png}
\caption{\textbf{Left}: an initial random cloud with 100 blue points.
\textbf{Middle}: all blue points are perturbed, 25 extra orange points are added.
\textbf{Right}: a cloud is rotated through a random angle.
Can we recognize that the initial and final clouds are in the same isometry class modulo small noise?}
\label{fig:clouds}
\end{figure}
The machine learning part has used the obtained diagrams as the input data for PersLay \cite{carriere2019perslay}.
Each dataset was split into learning and test subsets in ratio 4:1.
The learning loop iterated over mini-batches of 128 elements, going through the full dataset for a given number of epochs.
The success rate was measured on the test subset.
\medskip
The original PersLay module was rewritten in TensorFlow v2, and an RTX 2080 graphics card was used to run the experiments.
The technical concepts of PersLay are explained in \cite{carriere2019perslay}:
\begin{itemize}
\item Adam(Epochs = 300, Learning rate = 0.01)
\item Coefficients = Linear coefficients
\item Functional layer = [PeL(dim=50), PeL(dim=50, operationalLayer=PermutationMaxLayer)].
\item Operation layer = TopK(50)
\end{itemize}
The PersLay training has used the following invariants compared in Fig.~\ref{fig:100-point_clouds}:
\begin{itemize}
\item cloud : the initial cloud $A$ of points corresponds to the baseline curve in black;
\item PD0: the 0D persistence diagram $\mathrm{PD}$ for distance-based filtrations of sublevel sets in red;
\item NN(2), the brown curve: for each point $a\in A$, the distances to its two nearest neighbors;
\item the mergegram $\mathrm{MG}(\Delta_{SL}(A))$ of the SL dendrogram has the blue curve above others.
\end{itemize}
\begin{figure}[t]
\begin{tikzpicture}[scale=0.85]
\begin{axis}[xlabel = 2 dimensions; noise bound, ylabel = success rate,grid]
\addplot table [x=e, y=m, col sep=comma] {100Points25NoiseResults/dim2.csv};
\addlegendentry{mergegram}
\addplot table [x=e, y=h, col sep=comma] {100Points25NoiseResults/dim2.csv};
\addlegendentry{PD0}
\addplot table [x=e, y=n, col sep=comma] {100Points25NoiseResults/dim2.csv};
\addlegendentry{NN(2)}
\addplot table [x=e, y=c, col sep=comma] {100Points25NoiseResults/dim2.csv};
\addlegendentry{cloud}
\end{axis}
\end{tikzpicture}
\begin{tikzpicture}[scale=0.85]
\begin{axis}[xlabel = 3 dimensions; noise bound, grid]
\addplot table [x=e, y=m, col sep=comma] {100Points25NoiseResults/dim3.csv};
\addplot table [x=e, y=h, col sep=comma] {100Points25NoiseResults/dim3.csv};
\addplot table [x=e, y=n, col sep=comma] {100Points25NoiseResults/dim3.csv};
\addplot table [x=e, y=c, col sep=comma] {100Points25NoiseResults/dim3.csv};
\end{axis}
\end{tikzpicture}
\medskip
\begin{tikzpicture}[scale=0.85]
\begin{axis}[xlabel = 4 dimensions; noise bound, ylabel = success rate,grid]
\addplot table [x=e, y=m, col sep=comma] {100Points25NoiseResults/dim4.csv};
\addplot table [x=e, y=h, col sep=comma] {100Points25NoiseResults/dim4.csv};
\addplot table [x=e, y=n, col sep=comma] {100Points25NoiseResults/dim4.csv};
\addplot table [x=e, y=c, col sep=comma] {100Points25NoiseResults/dim4.csv};
\end{axis}
\end{tikzpicture}
\begin{tikzpicture}[scale=0.85]
\begin{axis}[xlabel = 5 dimensions; noise bound,grid]
\addplot table [x=e, y=m, col sep=comma] {100Points25NoiseResults/dim5.csv};
\addplot table [x=e, y=h, col sep=comma] {100Points25NoiseResults/dim5.csv};
\addplot table [x=e, y=n, col sep=comma] {100Points25NoiseResults/dim5.csv};
\addplot table [x=e, y=c, col sep=comma] {100Points25NoiseResults/dim5Cor.csv};
\end{axis}
\end{tikzpicture}
\caption{Success rates of PersLay in identifying isometry classes of 100-point clouds uniformly sampled in a unit ball, averaged over 5 different clouds and 5 cross-validations with 20/80 splits.
}
\label{fig:100-point_clouds}
\end{figure}
Fig.~\ref{fig:100-point_clouds} shows that the new mergegram has outperformed all other invariants on the isometry classification problem.
The 0D persistence turned out to be weaker than the pairs of distances to two neighbors.
The topological persistence has found applications in data skeletonization with theoretical guarantees \cite{kurlin2015homologically,kalisnik2019higher}.
We are planning to extend the experiments in section~\ref{sec:experiments} for classifying rigid shapes by combining the new mergegram with the 1D persistence, which has the fast $O(n\log n)$ time for any 2D cloud of $n$ points \cite{kurlin2014fast, kurlin2014auto}.
\medskip
In conclusion, the paper has extended the 0D persistence to a stronger isometry invariant, which has kept the celebrated stability under noise important for applications to noisy data.
The initial C++ code for the mergegram is at https://github.com/YuryUoL/Mergegram and will be updated.
We thank all the reviewers for their valuable time and helpful suggestions.
\bibliographystyle{plainurl}
\section{Introduction}
\label{intro}
The presence of an external inhomogeneous potential or defect in metallic systems modulates the electronic charge density around the imperfection due to the scattering of electrons near the Fermi level.
These charge density oscillations are known as the Friedel oscillations (FO) and are mostly visible at low temperatures \cite{Friedel52,Friedel58,alloul_friedel_2012}.
FO occur in real materials due to the presence of ad-atoms, interstitial defects, or surface irregularities after cleavage.
The studies of FO can be significant for a wide range of systems as we discussed in \cite{1742-6596-592-1-012059,byczuk2019t,chatterjee2019real}.
To mention a few, FO have been observed experimentally in quantum corrals, on metal surfaces like Cu(111), and on semiconductor surfaces like GaAs(111) around point defects, using Scanning Tunneling Microscopy (STM) at around 4-5 K \cite{Crommie93,Eigler90,Kanisawa01,Binnig82}.
FO have been seen to produce an asymmetry in the quantum transport at the interface of mono-layer and bi-layer graphene which can be used as an application in novel quantum switching devices \cite{clark2014energy}.
It has been demonstrated that the FO, due to the topological defects in the carbon nanotubes, is important for understanding properties like selective dot growth, magnetic interaction through carbon nanotubes and optical spectroscopy of interface states using the tight-binding model \cite{chico}.
Kolesnychenko \textit{et al.} have observed, using the STM, anisotropic FO while studying the surface electronic structure of transition metal Cr(001) produced by the cleavage of a single crystal having surface areas, where impurity concentrations slightly exceeded the bulk concentration due to the existence of a high dopant zone in the crystal \cite{kolesnychenko2005surface}. \\
In order to theoretically understand the FO in Cr or other transition metals, which belong to the class of correlated electronic systems, it is important to consider the Coulomb interaction between the electrons.
FO have been studied in one-dimensional (1d) interacting fermionic chain using several theories, e.g., the bosonization method or the density-matrix renormalization group \cite{egger1995friedel,schuster2004local}, or the Bethe-Ansatz local-density approximation \cite{vieira2008friedel}.
The Fermi liquid theory for two-dimensional (2d) and three-dimensional (3d) systems \cite{simion2005friedel} was applied to investigate FO.
The FO induced by non-magnetic impurities in the 2d Hubbard model in the presence of interactions have been studied using the slave-boson technique, which involves a static renormalization of the impurity potential \cite{ziegler1998friedel}.
FO seen around the Kondo screening cloud in the presence of magnetic impurities using the t-matrix formalism have been reported in \cite{affleck_friedel_2008}. \\
The dynamical mean-field theory (DMFT) is considered an advanced and suitable technique to capture the effects of correlations, particularly around the Mott metal-to-insulator transition, which is significant for describing compounds with partially filled d and f shells, e.g., transition metals and their oxides \cite{vollhardt2012Dynamical,vollhardt2010dynamical,vollhardt_investigation_1993,kotliar2004strongly,byczuk2008dynamical,titvinidze_dynamical_2012,0953-8984-9-35-010}.
A real space extension of DMFT (R-DMFT) is needed to treat the strongly correlated inhomogeneous systems \cite{Helmes08,Snook08,suarez2020two}. \\
In our previous work we have investigated the behavior of FO in models of correlated lattice systems in the metallic and Mott insulating phase in the presence of a single impurity potential using the R-DMFT \cite{1742-6596-592-1-012059,chatterjee2019real}.
We have reported that the oscillations get damped with increasing interaction, and disappear at the Mott transition and beyond it in the Mott insulating phase. \\
In reality, a single-site impurity potential may be too idealized a concept when we think of inhomogeneities on the surfaces of real materials.
There is usually more than one defect or contaminant.
One even encounters extended inhomogeneities and interface effects in multi-layered nano-structures, as discussed, for example, in Ref.~\cite{freericks2006transport}.
Grosu \textit{et al.} studied the problem of FO in a 1d noninteracting electron gas in the presence of two impurities, modeled by a double delta function separated by a finite distance, using linear response theory \cite{grosu2008friedel}.
They showed that the presence of the second impurity significantly changes the density oscillations (shifting the positions of the maxima and minima) depending on the distance between the impurities. The study of two-impurity scattering has been further extended to a 1d interacting fermionic system using the bosonization method in \cite{liu2003two}.
The scattering and quasiparticle interference from two and multiple magnetic impurities adsorbed on 2d and 3d interacting hosts have been probed using the t-matrix formalism and the numerical renormalization group \cite{derry2015quasiparticle,mitchell2015multiple}. However, in these studies the interference effects are discussed for the local density of states in the presence of interactions, and not for the particle density oscillations. \\
We thus see that studies of FO in the presence of two or multiple impurities have mostly been conducted for 1d interacting systems, while an attempt to understand real materials requires models in higher dimensions.
Moreover, the behavior of FO in the Mott insulating regime for models with many imperfections has not been addressed.
The current state of knowledge also lacks a quantitative treatment of the screening and interference effects in such systems in the presence of interactions.\\
A proper description of FO in real materials with strong electronic correlations demands realistic modeling combining the density functional theory within the local density approximation and DMFT (LDA+DMFT) \cite{schuler2016many,kolorenvc2015electronic} and its extension in real space. Such techniques are computationally non-trivial in the presence of inhomogeneities, particularly if it is not just an ad-atom but an impurity atom embedded in the host, e.g. a Cr atom in a Pb surface \cite{choi2017mapping}. The translational invariance of the lattice is broken in such a case.\\
Motivated by this state of the art, we extend our simple one-band Hubbard model to various types of impurity potentials, going beyond the single impurity case and treating the correlations using an approximate self-energy based on DMFT, as discussed in the next sections.
We investigate both non-interacting and interacting two-dimensional finite lattice systems.\\
In this paper we address the following questions:
(a) How do the FO change due to the interference effects when we introduce the second impurity?
(b) How does the interference change when we vary the distance between the impurities and switch on the interaction?
(c) How does the picture change if we generalize the two impurities to multiple impurities scattered over the lattice or the extended inhomogeneity?
(d) Do we see any interference effect or FO for any of these models of impurity potential in the Mott insulating phase?\\
Our studies show that the interaction reduces the interference pattern in the FO and the screening effects in the system, but it does not alter the positions of the interference maxima and minima.\\
Tight-binding models of interacting lattice electrons in two and three dimensions are not tractable on an analytical level. Therefore, to acquire a complete description of the FO in the presence of multiple impurities, we solve analytically and numerically a model of non-interacting particles moving in continuous space.
We show how the presence of a few impurities builds interference patterns. In the limit of diluted impurities we derive analytical formulae for the FO in two and three spatial dimensions, which are a straightforward generalization of the Friedel result \cite{Friedel52,Friedel58,alloul_friedel_2012}. \\
The paper is organized as follows.
We start in Sec.~II with an analytical and numerical derivation of FO in a non-interacting system in continuous space.
Next, we discuss in Sec.~III our lattice models and methods used to solve them.
We introduce there physical quantities describing our systems.
Afterwards, in Sec.~IV we present our numerical results for: (a) two impurities, (b) multiple impurities, (c) an extended inhomogeneity, and (d) a chain of impurities or a domain wall.
Finally, in Sec.~V we summarize our results and provide an outlook.\\
\section{Multiple impurities in non-interacting systems}
Before we discuss our results on FO in interacting lattice fermions with multiple impurities we present the corresponding theory for non-interacting particles in continuous space. \\
\subsection{Exact formal solution}
Such a system is described by the Green's function \cite{Economou}
\begin{equation}
G({\bf r}, {\bf r}'; \omega) = \langle {\bf r} | \frac{1}{\hbar \omega +i 0^+-\hat{H}} | {\bf r}' \rangle,
\end{equation}
where $\hbar \omega$ is the real energy ($\hbar$ is the reduced Planck constant) and the one-particle Hamiltonian is
\begin{equation}
\hat{H}= -\frac{\hbar^2}{2m} \nabla ^2 + V({\bf r}) ,
\end{equation}
which contains a potential
\begin{equation}
V({\bf r})=\sum_{i=1}^{N_{\rm imp}} V_i({\bf r}- {\bf l}_i)
\label{potential}
\end{equation}
originating from $N_{\rm imp}$ independent impurities located at positions ${\bf l}_i$.
The vector $\bf r$ is the position variable and $m$ is the particle mass.
In this section all $d$-dimensional vectors are denoted in boldface.
It is known that the Green's function in the presence of an external potential $V({\bf r})$ obeys an integral equation \cite{Economou}
\begin{eqnarray}
G({\bf r}, {\bf r}'; \omega) = G_0({\bf r}- {\bf r}'; \omega) \nonumber \\
+ \int d{\bf r}'' G_0({\bf r}- {\bf r}''; \omega) V({\bf r}'') G({\bf r}'', {\bf r}'; \omega),
\end{eqnarray}
where $G_0({\bf r}- {\bf r}'; \omega)$ is the free-particle Green's function in continuous space.
It takes the form \cite{Economou}
\begin{equation}
G_0({\bf r}- {\bf r}'; \omega) = \left\{
\begin{array}{ccc}
-\frac{2m}{\hbar^2}\frac{i}{4} H_0^+(k|{\bf r}-{\bf r}'|) & {\rm for} & d=2 \\
& & \\
-\frac{2m}{\hbar^2} \frac{e^{ik |{\bf r}-{\bf r}'|}}{4\pi |{\bf r}-{\bf r}'|}& {\rm for} & d=3,
\end{array}
\right.
\label{green-d}
\end{equation}
where $k=\sqrt{2m \omega}/\hbar$ and $H_0^+$ is the zeroth-order Hankel function of the first kind.
We note that the Green's functions are singular when $|{\bf r}-{\bf r}'|\rightarrow 0$. \\
Typically, the impurity potentials $V_i({\bf r})$ in Eq.~(\ref{potential}) are short range.
Therefore, we model them by a zero-range potential represented by a Dirac-delta function with the strength proportional to the effective scattering strength $a^s_i$.
However, such an extremely localized potential must be properly regularized giving rise to the Fermi pseudopotential \cite{Fermi36} with a general form
\begin{equation}
V({\bf r})=\sum_{i=1}^{N_{\rm imp}} a_i^s \delta({\bf r}- {\bf l}_i) \hat{{\cal R}}_d,
\end{equation}
where $\hat{{\cal R}}_d$ is a regularization operator depending on system dimensionality \cite{Wodkiewicz91,Donner05,Le19}.
E.g., in three dimensions $\hat{{\cal R}}_d=4\pi (\partial/\partial r) r$ \cite{Wodkiewicz91,Donner05,Le19}.
The pseudopotential mimics the fact that the wave function is strongly suppressed inside the impurity potential.
On the other hand, the regularization $\hat{{\cal R}}_d$ allows one to avoid the singular behavior of the Green's functions when $|{\bf r}-{\bf r}'|\rightarrow 0$.
One should also note that in dimensions higher than one a pure Dirac-delta potential is invisible to particles, i.e., they are not scattered at all.
The use of the Fermi pseudopotential thus allows us to deal with a zero-range potential which nevertheless scatters.
In the following we write
\begin{equation}
V({\bf r})=\sum_{i=1}^{N_{\rm imp}} V_i \delta({\bf r}- {\bf l}_i) ,
\end{equation}
where the $V_i$ parameters, as well as the Green's function $G_0({\bf 0}; \omega)$, are understood to be adequately renormalized quantities \cite{Wodkiewicz91,Donner05,Le19}. \\
For a system with local impurity potentials the integral equation reads
\begin{eqnarray}
G({\bf r}, {\bf r}'; \omega) = G_0({\bf r}- {\bf r}'; \omega) \nonumber \\
+ \sum_{i=1}^{N_{\rm imp}} V_i G_0({\bf r}- {\bf l}_i; \omega) G({\bf l}_i, {\bf r}'; \omega).
\label{int1}
\end{eqnarray}
Evaluating both sides at ${\bf r}={\bf l}_i$ we obtain a set of linear equations determining $G({\bf l}_i, {\bf r}'; \omega)$, i.e.
\begin{eqnarray}
G({\bf l}_i, {\bf r}'; \omega) = G_0({\bf l}_i- {\bf r}'; \omega) \nonumber \\
+ \sum_{j=1}^{N_{\rm imp}} V_j G_0({\bf l}_i- {\bf l}_j; \omega) G({\bf l}_j, {\bf r}'; \omega).
\label{int2}
\end{eqnarray}
This set of equations can be written in a matrix form
\begin{equation}
\sum_{j=1}^{N_{\rm imp}} M_{ij}(\omega) \; G({\bf l}_j, {\bf r}'; \omega) = G_0({\bf l}_i- {\bf r}'; \omega),
\end{equation}
where the $M$ matrix is
\begin{equation}
M_{ij}(\omega) =[\delta_{ij}- V_j G_0({\bf l}_i- {\bf l}_j; \omega)].
\end{equation}
The diagonal elements of this matrix $ M_{ii}(\omega) =[1- V_i G_0({\bf 0}; \omega)]$ are not singular due to a regularization procedure \cite{Wodkiewicz91,Donner05,Le19}.
Then, in the absence of bound states, we invert this matrix, either analytically or numerically for each $\omega$, and find the solution of Eq.~(\ref{int2})
\begin{equation}
G({\bf l}_i, {\bf r}'; \omega) = \sum_{j=1}^{N_{\rm imp}} M_{ij}^{-1}(\omega) G_0({\bf l}_j- {\bf r}'; \omega).
\end{equation}
Finally, by using Eq.~(\ref{int1}) we determine the exact Green's function
\begin{eqnarray}
G({\bf r}, {\bf r}'; \omega) = G_0({\bf r}- {\bf r}'; \omega) \nonumber \\
+ \sum_{i,j=1}^{N_{\rm imp}} G_0({\bf r}- {\bf l}_i; \omega) T_{ij}(\omega) G_0({\bf l}_j- {\bf r}'; \omega) ,
\end{eqnarray}
where
\begin{equation}
T_{ij}(\omega) = V_i M_{ij}^{-1}(\omega)
\end{equation}
is the t-matrix.
The local density of states (LDOS) is provided by the diagonal part of the Green's function
\begin{equation}
\rho({\bf r};\omega) = -\frac{1}{\pi} {\rm Im} \;G({\bf r}, {\bf r}; \omega) ,
\end{equation}
and the local density of non-interacting fermions at $T=0$ with the Fermi energy $E_F$ is given by
\begin{equation}
n({\bf r}) = \int_0^{E_F}d\omega \; \rho({\bf r};\omega) = -\frac{1}{\pi} \int_0^{E_F}d\omega \; {\rm Im} \;G({\bf r}, {\bf r}; \omega) .
\end{equation}
It is clear that in the multiple impurity case, the terms $G_0({\bf r}- {\bf l}_i; \omega) G_0({\bf l}_j- {\bf r}; \omega)$ are responsible for an oscillatory behavior with respect to ${\bf r}$ of the LDOS and the charge density.
Note that in this Sec.~II we neglect for simplicity the spin of particles, which otherwise would lead to a trivial factor of two in the corresponding equations. \\
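The formal solution above is straightforward to evaluate numerically. The following minimal Python sketch (our own illustration, not the code used for the figures below) assembles the matrix $M_{ij}(\omega)$ for an arbitrary set of impurities in three dimensions; it assumes units with $2m/\hbar^2=1$ and adopts the regularized diagonal element $G_0({\bf 0};\omega)=-ik/4\pi$, i.e., the value left after the Fermi pseudopotential removes the $1/4\pi r$ divergence of Eq.~(\ref{green-d}). The density change then follows by integrating the Green's function correction over $\omega$ up to $E_F$.
\begin{verbatim}
import numpy as np

def g0_3d(r, k):
    # free 3d Green's function, units 2m/hbar^2 = 1 (valid for r > 0)
    return -np.exp(1j * k * r) / (4.0 * np.pi * r)

def g0_reg(k):
    # regularized G0(0; w): the 1/(4 pi r) divergence is removed by the
    # Fermi pseudopotential, leaving the -i k/(4 pi) part (our assumption)
    return -1j * k / (4.0 * np.pi)

def delta_n(r_pts, l_imp, V, E_F, n_w=400):
    # density change Delta n(r) from N_imp impurities at positions l_imp
    l_imp, V = np.asarray(l_imp, float), np.asarray(V, float)
    n_imp = len(V)
    ws = np.linspace(1e-6, E_F, n_w)
    drho = np.zeros((n_w, len(r_pts)))
    for iw, w in enumerate(ws):
        k = np.sqrt(w)                      # hbar w = k^2 in these units
        M = np.eye(n_imp, dtype=complex)    # M_ij = d_ij - V_j G0(l_i-l_j)
        for i in range(n_imp):
            for j in range(n_imp):
                g = g0_reg(k) if i == j else \
                    g0_3d(np.linalg.norm(l_imp[i] - l_imp[j]), k)
                M[i, j] -= V[j] * g
        T = np.diag(V) @ np.linalg.inv(M)   # t-matrix T_ij = V_i (M^-1)_ij
        for ir, r in enumerate(r_pts):
            g = np.array([g0_3d(np.linalg.norm(np.asarray(r) - l), k)
                          for l in l_imp])
            drho[iw, ir] = -np.imag(g @ T @ g) / np.pi
    return np.trapz(drho, ws, axis=0)       # integrate LDOS change to E_F
\end{verbatim}
For $N_{\rm imp}=1$ the matrix inversion is trivial and the sketch reproduces the single-impurity t-matrix discussed in the next subsection. \\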
\subsection{Explicit analytical and numerical solutions in special cases}
For a small number of scattering centers one can invert the matrix $M$ analytically.
\subsubsection{A single impurity}
\begin{figure} [ht!]
\centering
\includegraphics[width=0.5\textwidth]{fig1.pdf}
\caption{ The real and imaginary parts of the t-matrix for a single impurity with $V_1=1$ in three dimensions. }
\label{single-t}
\end{figure}
For example, for $N_{\rm imp}=1$ we find that $M_{11}(\omega)=1-V_1 G_0({\bf 0};\omega)$ and therefore
\begin{eqnarray}
G({\bf r}, {\bf r}'; \omega) = G_0({\bf r}- {\bf r}'; \omega) \nonumber \\
+ G_0({\bf r}- {\bf l}_1; \omega) T_{11}(\omega) G_0({\bf l}_1- {\bf r}'; \omega) ,
\end{eqnarray}
where
\begin{equation}
T_{11}(\omega) = \frac{V_1}{1-V_1 G_0({\bf 0};\omega)} .
\label{exact-t-one}
\end{equation}
In Fig.~\ref{single-t} we present the real and imaginary parts of the t-matrix for the single impurity with $V_1=1$ in three dimensions.
Here and throughout Sec.~II~B we use units with $2m/\hbar^2=1$.
The Fermi energy $E_F$ is fixed by demanding that the uniform particle density equals one. \\
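As an illustrative aside (using the spinless convention noted at the end of Sec.~II~A), the uniform density of a 3d gas is $n=k_F^3/6\pi^2$, so the condition $n=1$ corresponds to $k_F=(6\pi^2)^{1/3}\approx 3.90$ and $E_F=k_F^2\approx 15.2$ in these units. \\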
\begin{figure} [ht!]
\centering
\includegraphics[width=0.5\textwidth]{fig2.pdf}
\caption{ Friedel oscillations due to a single impurity with $V_1=1$ in three dimensions. The orange curve represents the exact result determined using the t-matrix (\ref{exact-t-one}). The blue curve shows the behavior of the envelope, which asymptotically decays with distance as $1/r^3$ (the constant is $0.15$). The green curve represents the oscillatory behavior derived from the asymptotic Friedel formula (\ref{Friedel-one}). }
\label{Friedel-single}
\end{figure}
The change in the local density of particles is determined by
\begin{eqnarray}
\Delta n({\bf r}) =
-\frac{1}{\pi} \int_0^{E_F}d\omega \; {\rm Im} \; G_0({\bf r}; \omega) T_{11}(\omega) G_0(- {\bf r}; \omega) . \nonumber \\
\end{eqnarray}
In Fig.~\ref{Friedel-single} we present the exact result for the change of the particle density due to the potential located at the origin ${\bf l}_1=0$.
The stepwise behavior at large $r$ is due to the finite numerical accuracy in determining the integrals.
It can easily be verified that at large distances the amplitude of the oscillations decays as $1/r^3$, as described by the original Friedel formula in three dimensions, namely
\begin{equation}
\Delta n({\bf r}) =I^s \frac{\cos(2k_F r+\phi_s)}{r^3},
\label{Friedel-one}
\end{equation}
where $k_F=\sqrt{2mE_F}/\hbar$ is the length of the Fermi vector, $r=|{\bf r}|$, $\phi_s$ is a phase shift, and $I^s$ is a constant proportional to the effective scattering length \cite{Garcia71}.
In deriving Eq.~(\ref{Friedel-one}) the $\omega$-dependence of the t-matrix is neglected and the t-matrix is approximated by its value in the $k\rightarrow 0$ limit.
In Fig.~\ref{Friedel-single} we also plot the density change determined from the asymptotic Friedel formula (\ref{Friedel-one}) and see that at distances $r\gtrsim 7$ it gives practically the same results as the exact calculation. \\
\subsubsection{Two impurities}
For $N_{\rm imp}=2$ we find that
\begin{eqnarray}
G({\bf r}, {\bf r}'; \omega) = G_0({\bf r}- {\bf r}'; \omega) \nonumber \\
+ G_0({\bf r}- {\bf l}_1; \omega) T_{11}(\omega) G_0({\bf l}_1- {\bf r}'; \omega) \nonumber \\
+ G_0({\bf r}- {\bf l}_1; \omega) T_{12}(\omega) G_0({\bf l}_2- {\bf r}'; \omega) \nonumber \\
+ G_0({\bf r}- {\bf l}_2; \omega) T_{21}(\omega) G_0({\bf l}_1- {\bf r}'; \omega) \nonumber \\
+ G_0({\bf r}- {\bf l}_2; \omega) T_{22}(\omega) G_0({\bf l}_2- {\bf r}'; \omega)
,
\end{eqnarray}
\begin{widetext}
where
\begin{equation}
T_{11}(\omega)= \frac{V_1}{[1-V_1G_0({\bf 0};\omega) ][1-V_2 G_0({\bf 0};\omega)]- V_1V_2 G_0({\bf l}_1-{\bf l}_2;\omega) G_0({\bf l}_2-{\bf l}_1;\omega)}
[1-V_2 G_0({\bf 0};\omega)] ,
\label{two-t-matrix}
\end{equation}
\begin{equation}
T_{12}(\omega)= \frac{V_1}{ [1-V_1G_0({\bf 0};\omega) ][1-V_2 G_0({\bf 0};\omega)]- V_1V_2 G_0({\bf l}_1-{\bf l}_2;\omega) G_0({\bf l}_2-{\bf l}_1;\omega)}
V_2 G_0( {\bf l}_1-{\bf l}_2 ;\omega),
\end{equation}
\begin{equation}
T_{21}(\omega)= \frac{V_2}{ [1-V_1G_0({\bf 0};\omega) ][1-V_2 G_0({\bf 0};\omega)]- V_1V_2 G_0({\bf l}_1-{\bf l}_2;\omega) G_0({\bf l}_2-{\bf l}_1;\omega)}
V_1 G_0( {\bf l}_2-{\bf l}_1;\omega) ,
\end{equation}
and
\begin{equation}
T_{22}(\omega)= \frac{V_2}{ [1-V_1G_0({\bf 0};\omega) ][1-V_2 G_0({\bf 0};\omega)]- V_1V_2 G_0({\bf l}_1-{\bf l}_2;\omega) G_0({\bf l}_2-{\bf l}_1;\omega)}
[1-V_1 G_0({\bf 0};\omega)] .
\label{two-tt-matrix}
\end{equation}
\end{widetext}
Due to the multiple scattering between the two impurities the t-matrix is not a sum of the t-matrices of two independent impurities located at different points. Each matrix element is a function only of the energy $\omega$ and of the distance between the impurities, $\Delta l= |{\bf l}_1-{\bf l}_2|$. \\
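As a quick consistency check (our own sketch, reusing the helper functions \texttt{g0\_3d} and \texttt{g0\_reg} from the sketch in Sec.~II~A), one can verify numerically that the closed-form elements (\ref{two-t-matrix})-(\ref{two-tt-matrix}) coincide with a direct inversion of the $2\times 2$ matrix $M$:
\begin{verbatim}
import numpy as np
# V1, V2: impurity strengths; dl: distance |l_1 - l_2|; k: momentum
V1, V2, dl, k = 1.0, 1.0, 4.0, 1.0
g00, g12 = g0_reg(k), g0_3d(dl, k)
den = (1 - V1 * g00) * (1 - V2 * g00) - V1 * V2 * g12 * g12
T_closed = np.array([[V1 * (1 - V2 * g00), V1 * V2 * g12],
                     [V2 * V1 * g12,       V2 * (1 - V1 * g00)]]) / den
M = np.array([[1 - V1 * g00, -V2 * g12],
              [-V1 * g12,    1 - V2 * g00]])
T_direct = np.diag([V1, V2]) @ np.linalg.inv(M)
assert np.allclose(T_closed, T_direct)
\end{verbatim}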
In Fig.~\ref{two-t} we plot the real and imaginary parts of the off-diagonal t-matrix elements for different distances between the impurities in three dimensions.
For comparison we also present the ratios of real and imaginary parts between off-diagonal and diagonal matrix elements.
Because of the inter-impurity scattering and the interference of quantum waves, the t-matrix oscillates and changes sign, in contrast to the single impurity case in Fig.~\ref{single-t}.
Moreover, we can see that with increasing inter-impurity distance the relative values of the off-diagonal elements with respect to the diagonal elements decrease. \\
\begin{figure} [ht!]
\centering
\includegraphics[width=0.4\textwidth]{fig3a.pdf}
\includegraphics[width=0.4\textwidth]{fig3b.pdf}
\caption{ Real (upper panel) and imaginary (lower panel) parts of the off-diagonal t-matrix elements, and their ratios with respect to the diagonal elements, for different distances between impurities with $V_1=V_2=1$ in three dimensions. }
\label{two-t}
\end{figure}
In Fig.~\ref{Friedel-two} we present exact results for FO in the case of two impurities in three dimensions at different distances $\Delta l$.
Strong interference oscillations are visible between the impurities.
Outside them the oscillation amplitudes decay as $1/r^3$, as in the single impurity case.
We also plot in Fig.~\ref{Friedel-two} the results obtained when the off-diagonal t-matrix elements are neglected.
The differences are very small and mostly visible in the space between the impurities.
Outside them the results depicted by the blue (exact) and orange (approximate) lines are almost the same. \\
\begin{figure} [ht!]
\centering
\includegraphics[width=0.5\textwidth]{fig4.pdf}
\caption{ Friedel oscillations due to two impurities with $V_1=V_2=1$ at different distances $\Delta l$ in three dimensions. The blue curves represent the exact result determined using the t-matrix, Eqs.~(\ref{two-t-matrix})-(\ref{two-tt-matrix}). The orange curves show the behavior of the Friedel oscillations when the off-diagonal elements of the t-matrix are neglected. The green curves represent the oscillatory behavior derived from the generalized Friedel formula (\ref{general-friedel}). }
\label{Friedel-two}
\end{figure}
For $N_{\rm imp}=3$ or $4$ it is still possible to invert the matrix $M$ analytically and obtain exact analytical expressions for the Green's functions. This allows one to investigate FO in analytical detail.
However, the final equations become more and more cumbersome, cf. \cite{Donner05}. \\
\subsection{Approximation in diluted impurity limit and generalized Friedel formula}
An important simplification occurs when the impurities are far away from each other, i.e., when $|{\bf l}_i-{\bf l}_j|$ is large compared to other relevant distances, e.g., the Fermi wavelength. In this diluted limit we can neglect the off-diagonal elements of the matrix $M$ because the Green's function decays as $G_0( {\bf l}_i-{\bf l}_j;\omega ) \sim 1/|{\bf l}_i-{\bf l}_j|^{(d-1)/2}$, cf. Fig.~\ref{two-t}. In this limit
\begin{equation}
M_{ij}^{-1} (\omega)= \frac{\delta_{ij}}{1-V_i G_0({\bf 0};\omega)}
\end{equation}
and
\begin{equation}
T_{ij}(\omega) = \delta_{ij} \frac{V_i}{1-V_i G_0({\bf 0};\omega)} .
\end{equation}
The t-matrix is diagonal and each matrix element takes into account only multiple scattering on the corresponding single impurity.
Inter-impurity scattering effects are neglected in this limit.
The Green's function is now given by
\begin{eqnarray}
G({\bf r}, {\bf r}'; \omega) = G_0({\bf r}- {\bf r}'; \omega) \nonumber \\
+ \sum_{i=1}^{N_{\rm imp}} G_0({\bf r}- {\bf l}_i; \omega) T_{ii}(\omega) G_0({\bf l}_i- {\bf r}'; \omega) ,
\end{eqnarray}
and the change in the LDOS due to the multiple impurities is
\begin{equation}
\Delta \rho({\bf r};\omega)= - \frac{1}{\pi} \sum_{i=1}^{N_{\rm imp}} {\rm Im}\; G_0({\bf r}- {\bf l}_i; \omega) T_{ii}(\omega) G_0({\bf l}_i- {\bf r}; \omega) .
\end{equation}
Finally, the change in the local density is determined from
\begin{eqnarray}
\Delta n({\bf r}) = \nonumber \\
-\frac{1}{\pi} \sum_{i=1}^{N_{\rm imp}} \int_0^{E_F}d\omega \; {\rm Im} \; G_0({\bf r}- {\bf l}_i; \omega) T_{ii}(\omega) G_0({\bf l}_i- {\bf r}; \omega) . \nonumber \\
\end{eqnarray}
In the limit of diluted impurities the FO pattern is a sum of the FO patterns coming from each scattering center independently. \\
Using the explicit form of the Green's function in three dimensions, Eq.~(\ref{green-d}), and approximating the t-matrix by the effective scattering length
\begin{equation}
b^s_i=\lim_{k\rightarrow 0} f^+(k),
\end{equation}
where $\hbar \omega=\hbar^2k^2/2m$ and $f^+(k)$ is the scattering amplitude due to the $i$-th impurity, obtained directly from the t-matrix $T_{ii}(\omega)$,
we find that the change in the particle density is expressed by
\begin{eqnarray}
\Delta n ({\bf r}) = \nonumber \\
\sum_{i=1}^{N_{\rm imp}} I_i ^s\frac{ (2k_F|{\bf r}-{\bf l}_i| ) \cos (2k_F|{\bf r}-{\bf l}_i| )- \sin (2k_F|{\bf r}-{\bf l}_i|) }{ (2k_F|{\bf r}-{\bf l}_i|)^4 },
\nonumber \\
\label{FO3d}
\end{eqnarray}
where $I_i^s\sim b_i^s$ are constants depending on the impurity potential strengths $V_i$ \cite{Garcia71}.
In the asymptotic limit $|{\bf r}-{\bf l}_i| \gg 1/k_F$ we obtain a generalization of the Friedel result to the multiple impurity case
\begin{equation}
\Delta n ({\bf r}) = \sum_{i=1}^{N_{\rm imp}} I_i ^s\frac{ \cos (2k_F|{\bf r}-{\bf l}_i| )}{ (2k_F|{\bf r}-{\bf l}_i|)^3 }.
\label{general-friedel}
\end{equation}
In the dilute limit one thus expects that the FO pattern is a sum of the FO patterns yielded by each independent impurity.
The oscillatory behavior decays as the inverse cube of the distance from the impurities.
In Fig.~\ref{Friedel-two} we plot the results for FO based on the generalized formula (\ref{general-friedel}) in the case of two impurities in three dimensions and compare them with the exact results.
The $\omega$-dependence of the diagonal t-matrix elements seems to be relevant in the space between the impurities.
However, outside the impurities Eq.~(\ref{general-friedel}) describes the FO of the two impurities very well. \\
In a similar way one can obtain the generalization of the Friedel result in two dimensions
\begin{equation}
\Delta n ({\bf r}) = \sum_{i=1}^{N_{\rm imp}} I_i ^s\frac{ \sin (2k_F|{\bf r}-{\bf l}_i| +\phi_i)}{ (2k_F|{\bf r}-{\bf l}_i|)^2 },
\label{FO2d}
\end{equation}
where $\phi_i$ are certain phase shifts.
The last result requires the asymptotic form of the Green's function in two dimensions, which follows from the asymptotic expansion of the Hankel function $H_{\nu}^+(z)\approx \sqrt{2/\pi z } \exp[i(z-(2\nu+1)\pi/4)]$.
The oscillatory behavior decays as the inverse square of the distance from the impurities.\\
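The interference patterns implied by Eq.~(\ref{FO2d}) are easy to visualize. The short sketch below (our own illustration; grid size, impurity positions, and strengths chosen to mimic Fig.~\ref{free}) superimposes the independent-impurity contributions on a 2d grid:
\begin{verbatim}
import numpy as np

def friedel_2d(nx, ny, imps, k_F=1.0):
    # superposition of 2d FO, Eq. (FO2d); imps = [(lx, ly, I_i, phi_i), ...]
    X, Y = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    dn = np.zeros((nx, ny))
    for lx, ly, I, phi in imps:
        s = 2.0 * k_F * np.hypot(X - lx, Y - ly)
        s = np.where(s < 1e-9, 1e-9, s)   # regularize the impurity site
        dn += I * np.sin(s + phi) / s**2
    return dn

# two impurities close to each other, cf. the upper right panel
# of the interference figure below
pattern = friedel_2d(32, 32, [(16, 16, 24, 0.0), (16, 20, 24, 0.0)])
\end{verbatim}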
\begin{figure*}
\centering
\begin{minipage}{0.4\textwidth}
\centering
\includegraphics[width=0.8\textwidth]{fig5a.pdf}
\end{minipage}
\begin{minipage}{0.4\textwidth}
\centering
\includegraphics[width=0.8\textwidth]{fig5b.pdf}
\end{minipage}\hfill
\begin{minipage}{0.4\textwidth}
\centering
\includegraphics[width=0.8\textwidth]{fig5c.pdf}
\end{minipage}
\begin{minipage}{0.4\textwidth}
\centering
\includegraphics[width=0.8\textwidth]{fig5d.pdf}
\end{minipage}
\begin{minipage}{0.4\textwidth}
\centering
\includegraphics[width=0.8\textwidth]{fig5e.pdf}
\end{minipage}
\begin{minipage}{0.4\textwidth}
\centering
\includegraphics[width=0.8\textwidth]{fig5f.pdf}
\end{minipage}
\caption{Friedel oscillations and interference patterns in a two-dimensional system of non-interacting fermions. Upper left panel: A single impurity with $I_1=24$ placed at ${\bf l} _1=(16,16)$. Upper right panel: Two impurities close to each other with $I_1=I_2=24$ placed at ${\bf l}_1=(16,16)$ and ${\bf l}_2=(16,20)$.
Middle left panel: Two impurities far away from each other with $I_1=I_2=24$ placed at ${\bf l}_1=(16,16)$ and ${\bf l}_2=(16,27)$.
Middle right panel: Two impurities placed diagonally with $I_1=I_2=24$ placed at ${\bf l}_1=(10,10)$ and ${\bf l}_2=(16,16)$.
Lower left panel: Three impurities with different strengths $I_1=10$ at ${\bf l}_1=(16,16)$, $I_2=10$ at ${\bf l}_2=(10,15)$, and $I_3=5$ at ${\bf l}_3=(20,25)$.
Lower right panel: Multiple impurities with $I_i=5$ at positions ${\bf l}_1=(3,3)$, ${\bf l}_2=(20,5)$, ${\bf l}_3=(5,20)$, ${\bf l}_4=(25,22)$, and ${\bf l}_5=(17,28)$. Here, the uniform density of particles is set by $k_F=1$, which leads to a characteristic wave-length scale $\lambda_F=2\pi/k_F$.
}
\label{free}
\end{figure*}
Finally, we note that a superposition of FO from independent impurities gives rise to interference patterns. A few cases, determined from Eq.~(\ref{FO2d}) for multiple impurities in two dimensions, are shown in Fig.~\ref{free}. We selected the phase shifts $\phi_i=0$ for each impurity and $k_F=1$, which sets a characteristic wave-length scale $\lambda_F=2\pi/k_F$. Hence, the uniform density of particles is different from that in Sec.~II~B. The presented patterns resemble those seen in various STM experiments on metallic surfaces with defects or impurities. \\
\section{Model and method}
\label{Model}
In order to demonstrate the role of strong correlations between the electrons we consider a lattice model with an on-site inter-particle interaction.
Such a system is described by the Hubbard model.
With changing inter-particle interaction strength, the system described by this model evolves between the Fermi liquid regime and the Mott insulator regime.
In the rest of the paper we present FO interference patterns in these different regimes.
The very first difference with respect to the continuous case presented earlier is the lack of rotational symmetry.
In addition, the continuous case cannot describe the Mott insulating phase and the metal-insulator transition, which are hallmarks of strong correlations.
\subsection{Model}
We consider the one-band Hubbard model \cite{hubbard_electron_1963, gutzwiller1963effect, kanamori1963electron} with an external inhomogeneous potential
\begin{equation}
H = \sum_{ij, \sigma} t_{ij}\ \hat{a}_{i\sigma}^\dag\ \hat{a}_{j\sigma} +\sum_{i\sigma} V_{i\sigma}\ \hat{a}_{i\sigma}^\dag\ \hat{a}_{i\sigma} + U \sum_{i} \hat{n}_{i\downarrow}\hat{n}_{i\uparrow} ,
\label{hubbard}
\end{equation}
where $\hat{a}_{i\sigma}$ ($\hat{a}_{i\sigma}^{\dag}$) is the annihilation (creation) fermionic operator with spin $\sigma$ on the $i^{th}$ lattice site, $t_{ij}$ is the hopping matrix element between the $i^{th}$ and $j^{th}$ sites with $t_{ii}=0$. The second term describes the external (inhomogeneous) potential given by $V_{i\sigma}$ which is assumed to be real. The third term models the local part of the electronic interaction between two fermions with opposite spins located on the same lattice site.\\
We consider a two-dimensional square lattice with the number of lattice sites $N_{L}=31^2$ (the size of the lattice is $31\times31$) and the following models of the external (inhomogeneous), spin-independent potential (a short sketch encoding these potentials is given after the list):
\begin{itemize}
\item Two single-site impurities placed either along the diagonal direction of the lattice or along a vertical direction of the lattice, for different relative distances. Mathematically, $V_{i}=V_{01}\delta_{i i_{01}}+V_{02}\delta_{i i_{02}}$.
\item A more general case where several impurities are randomly scattered over various lattice sites. This aims to model a contaminated surface with various dopant zones or interstitial defects. Mathematically, $V_{i}=V_{01}\delta_{i i_{01}}+V_{02}\delta_{i i_{02}}+V_{03}\delta_{i i_{03}}+\ldots$.
\item A step-like potential or extended inhomogeneity across the lattice, aimed to describe inhomogeneous surface irregularities after a cleavage. Mathematically, $V_{i\sigma}= V_{0}\Theta (X_i-X_0)$, where $X_0$ is the horizontal lattice coordinate at which the step-like potential begins (i.e., where the Heaviside function $\Theta(x)$ becomes non-zero).
\item A chain of impurities placed along the diagonal and vertical directions of the lattice, aimed to model a domain wall or an interface. In freshly cleaved samples such domain walls can be found connecting large lattice inhomogeneities and can be observed experimentally using the STM.
\end{itemize}
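As mentioned above, a minimal sketch (our own illustration; 0-based array indices represent lattice coordinates in units of $a$, potential strengths are in units of $t$, and the parameter values are those used in Sec.~IV) encoding these four types of potential could read:
\begin{verbatim}
import numpy as np
N = 31
V = {}
# (i) two single-site impurities along the vertical direction
V["two"] = np.zeros((N, N))
V["two"][15, 15] = V["two"][15, 22] = 24.0
# (ii) several impurities randomly scattered over the lattice
V["multi"] = np.zeros((N, N))
for site in [(3, 3), (20, 5), (5, 20), (25, 22), (17, 28)]:
    V["multi"][site] = 10.0
# (iii) a step-like extended inhomogeneity: V0 for X_i <= 15a (Sec. IV C)
V["step"] = np.zeros((N, N))
V["step"][:16, :] = 3.0
# (iv) a chain of impurities along the diagonal (a domain wall)
V["chain"] = np.zeros((N, N))
V["chain"][np.arange(N), np.arange(N)] = 24.0
\end{verbatim}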
\subsection{Method}
\begin{figure*}
\centering
\begin{minipage}{0.4\textwidth}
\centering
\includegraphics[width=0.8\textwidth]{fig6a.pdf}
\end{minipage}
\begin{minipage}{0.4\textwidth}
\centering
\includegraphics[width=0.8\textwidth]{fig6b.pdf}
\end{minipage}\hfill
\begin{minipage}{0.4\textwidth}
\centering
\includegraphics[width=0.8\textwidth]{fig6c.pdf}
\end{minipage}
\begin{minipage}{0.4\textwidth}
\centering
\includegraphics[width=0.8\textwidth]{fig6d.pdf}
\end{minipage}
\caption{Interference effect on FO due to the scattering from two impurities of equal magnitude, i.e. $V_{01}=V_{02}=24t$ placed at
$\vec{R}_{01}=(15a,15a)$ and $\vec{R}_{02}=(15a,22a)$, respectively, along the vertical direction of the ($31\times31$) square lattice. We show $U=0t$, $2t$ (upper panel, left to right) and $5t$, $12t$ (lower panel, left to right).
The insets (on the right) show oscillations along the vertical line passing through the impurities. The color scale spans the range between the highest and lowest values of the density in the system.
The color scale changes for different $U$ since the minimal value of the density increases with it.}
\label{2dtwoimpvert}
\end{figure*}
\begin{figure} [ht!]
\centering
\includegraphics[width=0.3\textwidth]{fig7a.pdf}
\qquad
\includegraphics[width=0.3\textwidth]{fig7b.pdf}
\caption{FO in the square lattice in the presence of a single impurity $V_{0}=24t$ at the center ($\vec{R}_{0}=(15a, 15a)$). All other model parameters and the plotting style are the same as in Fig.~\ref{2dtwoimpvert}. We show $U=0t$, $2t$ (upper panel, left to right) and $5t$, $12t$ (lower panel, left to right). We only show the vertical line passing through the site containing the impurity, for comparison with Fig.~\ref{2dtwoimpvert}. A complete density profile for the single impurity case is available in \cite{chatterjee2019real}.}
\label{denmapsimp}
\end{figure}
All single-particle properties of the system are obtained from the retarded Green's function, computed by inverting the matrix Dyson equation \cite{rickayzen1980green} in the lattice position space
\begin{equation}
\mathbf{G}(z)= [(z+\mu)\mathbf{1}-\mathbf{t}-\mathbf{V}-\Sigma(z)\mathbf{1}]^{-1},
\label{greenfunction}
\end{equation}
where $z= \omega+ i0^+$ is the energy approaching the real axis from above and $\mu$ is the chemical potential.
$\Sigma(z)$ is the site-independent, homogeneous part of the self-energy, which approximates the effect of correlations and is calculated using the DMFT for the same parameters of the corresponding homogeneous Hubbard Hamiltonian.
Hereafter, all matrices are written in boldface notation.
The non-diagonal matrix $\bf t$ contains the hopping amplitudes $t_{ij}$ and the diagonal matrix ${\bf V}$ reflects the on-site inhomogeneous potential $V_i$. The identity matrix is written as $\bf 1$. \\
In the DMFT the self-energy is diagonal in the lattice site indices and accounts for all local dynamic correlation effects. In the case of homogeneous lattice systems, all lattice sites are equivalent. The self-energy is computed by mapping a lattice site onto an effective single impurity Anderson model (SIAM) and solving it using standard techniques like continuous-time quantum Monte Carlo, exact diagonalization, the numerical renormalization group (NRG) method, etc. However, in the presence of external impurities, the translational invariance of the lattice is broken and the lattice sites are non-equivalent. Hence, it is then essential to solve the SIAM separately at each lattice site, and the local self-energy becomes site dependent. In other words, in an inhomogeneous system the self-energy has a homogeneous part due to the electron-electron interactions and an inhomogeneous part due to the interplay of the interaction and the external impurities. Owing to the site-dependent part of the self-energy, the effect of the impurity potential on the system is not static but effectively dynamic \cite{byczuk2019t}. Ideally, in order to get the full picture of a correlated inhomogeneous system we should consider both the homogeneous and inhomogeneous parts of the self-energy, solving the full real-space DMFT (R-DMFT) equations self-consistently.\\
Unfortunately, solving the full R-DMFT equations is computationally expensive, especially for higher-dimensional systems with a large number of lattice sites. Hence, as a first approximation we omit the inhomogeneous, site-dependent part of the self-energy in our present studies to obtain initial insights into the behavior of the system with correlations. We call this approximation the homogeneous self-energy approximation (HSEA) \cite{chatterjee2019real}. The homogeneous part of the self-energy is computed by solving the DMFT self-consistency equations for the infinite homogeneous system at zero temperature using the NRG method. The open-source code ``NRG Ljubljana'' \cite{vzitkonrg, costi1990new} is used for this purpose. The computed self-energy (within the HSEA) is then inserted into the real-space Dyson equation (\ref{greenfunction}) containing the impurity potential and used to obtain the one-particle Green's function. We note that although the self-energy is computed for a homogeneous system, the Green's function is still obtained by inverting the Dyson equation containing the impurity potential in real space, and thus the inhomogeneity of the system is taken into account. \\
A detailed mathematical formalism of the R-DMFT and HSEA is presented in \cite{chatterjee2019real} wherein we also show that the results from the two methods agree well for a single impurity potential. It might be an interesting future project to compare the results from R-DMFT and HSEA for our extended models of the impurity potentials. \\
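To make the workflow concrete, the following schematic Python sketch (our own illustration, not the production code) inverts the real-space Dyson equation (\ref{greenfunction}) on a square lattice with periodic boundary conditions and integrates the resulting local spectral function up to the Fermi level. Within the HSEA the self-energy enters only as a given scalar function of energy, which we treat here as an input; in the non-interacting limit it is simply zero, while for $U>0$ one would insert the DMFT self-energy of the homogeneous system obtained, e.g., with the ``NRG Ljubljana'' code cited above.
\begin{verbatim}
import numpy as np

def hopping_pbc(N, t=1.0):
    # nearest-neighbor hopping matrix of an N x N square lattice (PBC)
    H = np.zeros((N * N, N * N))
    for x in range(N):
        for y in range(N):
            i = x * N + y
            for dx, dy in ((1, 0), (0, 1)):
                j = ((x + dx) % N) * N + (y + dy) % N
                H[i, j] = H[j, i] = t
    return H

def local_density(Vdiag, sigma, mu, ws, N=31, eta=0.02):
    # n_i per spin from Eq. (greenfunction); sigma(w): HSEA self-energy;
    # ws: energy grid ending at the Fermi level (E_F = 0, mu included)
    H = hopping_pbc(N)
    A = np.zeros((len(ws), N * N))
    for iw, w in enumerate(ws):
        z = (w + mu + 1j * eta - sigma(w)) * np.eye(N * N)
        G = np.linalg.inv(z - H - np.diag(Vdiag))
        A[iw] = -np.imag(np.diag(G)) / np.pi   # local spectral function
    return np.trapz(A, ws, axis=0)

# non-interacting example, reusing V["two"] from the sketch in Sec. III A
n_i = local_density(V["two"].ravel(), lambda w: 0.0, 0.0,
                    np.linspace(-6.0, 0.0, 400))
\end{verbatim}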
\subsection{Physical quantities}
Once we know the Green's function of the system from Eqs.~(\ref{hubbard}) and (\ref{greenfunction}) we obtain the local spectral function as
\begin{equation}
A_{i \sigma}(\omega)=-\frac{1}{\pi} \text{Im} \ G_{ii\sigma}(\omega).
\label{spectral}
\end{equation}
Using (\ref{spectral}), we compute the spin-resolved local density of particles at zero temperature as
\begin{equation} \label{local_occupationhom}
\bar{n}_{i\sigma}= \int_{-\infty}^{E_F} A_{i\sigma}(\omega)f(\omega) \,d\omega .
\end{equation}
Here $f(\omega)$ is the Fermi distribution function, which reduces to a step function at zero temperature. We consider spin-rotationally invariant systems, i.e., $\bar{n}_{i\uparrow}=\bar{n}_{i\downarrow}$, and the average number of particles per site is given by
\begin{equation}
\bar{n}=\frac{1}{N_{L}}\sum_{i=1}^{N_L} \bar{n}_{i},
\end{equation}
where $\bar{n}_{i}=\bar{n}_{i\uparrow}+\bar{n}_{i\downarrow}$.
Eq.~(\ref{local_occupationhom}) is the most relevant for our studies of FO. \\
We further quantify the screening and interference effects in the system by the screening charge defined by
\begin{equation}
Z=\sum\limits_{i\sigma}\left(\bar{n}_{i\sigma}-\frac{\bar{n}_{\rm hom}}{2}\right),
\label{screen1}
\end{equation}
where the summation runs over all the lattice sites and $\bar{n}_{\rm hom}$ corresponds to the density of particles of the reference homogeneous system (i.e., with $V_{i}=0$).
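In code, Eq.~(\ref{screen1}) is a one-liner on top of the per-spin densities returned by the sketch in Sec.~III~B (our own illustration; the spin symmetry supplies the factor of two):
\begin{verbatim}
import numpy as np

def screening_charge(n_per_spin, n_hom):
    # Z = sum_{i,sigma} (n_{i sigma} - n_hom / 2), Eq. (screen1)
    return 2.0 * np.sum(n_per_spin - n_hom / 2.0)
\end{verbatim}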
\section{Numerical Results}
\label{results}
We choose the chemical potential $\mu= U/2$ such that the homogeneous system is at half-filling, i.e. $\bar{n}=1$. In all cases the hopping amplitude $t_{ij}=t$ is non-zero only between nearest neighbors. We set $t=1$ to define the energy units and the lattice constant $a=1$ to define the length units in our numerical calculations. The band-width $W$ of the system is given by $W=2zt$, where $z$ is the coordination number. The system is subjected to periodic boundary conditions with a finite number of lattice sites $N_{L}$. We perform our simulations at zero temperature ($T=0$).\\
The strength of electronic correlation is parametrized by tuning the parameter $U$.
We study the 2d homogeneous system for different $U$ values and find that the Mott transition occurs at $U_{c}\approx11.5t$.
Hence, we choose $U=0t$, $2t$, $5t$, and $12t$ to represent a non-interacting lattice gas, a weakly interacting metallic phase, an intermediate interacting metallic phase, and a Mott insulating phase of the inhomogeneous system, respectively. \\
\subsection {Two impurities}
\begin{figure*}
\centering
\begin{minipage}{0.4\textwidth}
\includegraphics[width=0.8\textwidth]{fig8a.pdf}
\end{minipage}
\begin{minipage}{0.4\textwidth}
\includegraphics[width=0.8\textwidth]{fig8b.pdf}
\end{minipage}
\begin{minipage}{0.4\textwidth}
\includegraphics[width=0.8\textwidth]{fig8c.pdf}
\end{minipage}
\begin{minipage}{0.4\textwidth}
\includegraphics[width=0.8\textwidth]{fig8d.pdf}
\end{minipage}
\begin{minipage}{0.4\textwidth}
\includegraphics[width=0.8\textwidth]{fig8e.pdf}
\end{minipage}
\begin{minipage}{0.4\textwidth}
\includegraphics[width=0.8\textwidth]{fig8f.pdf}
\end{minipage}
\begin{minipage}{0.4\textwidth}
\includegraphics[width=0.8\textwidth]{fig8g.pdf}
\end{minipage}
\begin{minipage}{0.4\textwidth}
\includegraphics[width=0.8\textwidth]{fig8h.pdf}
\end{minipage}
\caption{The variation in the interference in the FO for different relative distances between the impurities. The first impurity $V_{01}$ is kept fixed at $\vec{R}_{01}=(15a, 15a)$ and the position of the second impurity is shifted along the vertical line and placed at $\vec{R}_{02}=(15a, 19a)$, $\vec{R}_{02}=(15a, 20a)$, $\vec{R}_{02}=(15a, 21a)$, $\vec{R}_{02}=(15a,27a)$ (from top to bottom). We show the interaction $U=2t$ (left column), and $U=5t$ (right column). All other model parameters, and the plotting style are the same as in Fig.~\ref{2dtwoimpvert}.}
\label{2dtwoimpvertdist}
\end{figure*}
We start with the case where two impurities are present in the system.
In Fig.~\ref{2dtwoimpvert} we show the interference patterns in FO due to two impurities of equal magnitude $V_{01}=V_{02}=24t$ placed at the lattice sites $(15a, 15a)$ and $(15a, 22a)$ along a vertical direction of the square lattice, for the non-interacting system and the interacting systems with $U=2t$, $5t$, and $12t$.
We further compare it with the case where only a single impurity is present in the system in Fig.~\ref{denmapsimp}.
On comparing Fig.~\ref{2dtwoimpvert} and Fig.~\ref{denmapsimp}, the interference effect induced by the second impurity is clearly visible.
We see that within the HSEA the interaction does not change the positions of the interference maxima and minima but reduces their heights and intensities, as seen in the $U=2t$ and $5t$ cases.
This is analogous to the damping of FO with the interactions in the single impurity case.
The behavior of the non-interacting system and the weakly-interacting one with $U=2t$ is very similar.
No interference effects or FO are visible in the Mott insulating phase at $U=12t$. \\
In order to investigate how the interference maxima and minima change as we vary the relative distance between the impurities, in Fig.~\ref{2dtwoimpvertdist} we fix the position of the first impurity at $(15a, 15a)$ and vary the position of the second impurity to: $(15a,19a)$, $(15a,20a)$, $(15a, 21a)$, and $(15a,27a)$.
We show only the cases $U=2t$ and $5t$, since the behavior of the non-interacting system is very similar to the $U=2t$ one and no interference effects are seen in the system with $U=12t$.
We see the occurrence of a minimum for $(15a,19a)$, and a maximum for $(15a, 20a)$.
Beyond a certain cross-over distance, e.g., for $(15a,27a)$ the interference effects are negligible and the inhomogeneities behave independently as in the diluted impurity regime.
Again, on comparing the cases $U=2t$ and $U=5t$ we see that the interaction does not change the position of the maximum and minimum but reduces their height.
In other words, we conclude that the interaction reduces the interference effects.\\
We now place two impurities of equal magnitude $V_{01}=V_{02}=24t$ along the diagonal direction of the lattice at the sites $(10a, 10a)$ and $(15a, 15a)$ and show the cases $U=2t$ and $U=5t$ in Fig.~\ref{2dtwoimpdiagsame}. Comparing Fig.~\ref{2dtwoimpdiagsame} and Fig.~\ref{2dtwoimpvertdist}, we see that the interference patterns are qualitatively different when the impurities are placed along the diagonal. In particular, for $U=5t$ alternate regions of high and low density along the diagonal of the lattice are clearly visible. The interference pattern in the interstitial region between the two impurities is the most dominant. No FO or scattering interference effects are visible for $U=12t$.\\
\begin{figure} [ht!]
\centering
\includegraphics[width=0.25\textwidth]{fig9a.pdf}
\qquad
\includegraphics[width=0.25\textwidth]{fig9b.pdf}
\caption{Interference effect on FO due to the scattering from two impurities of equal magnitude, i.e., $V_{01}=V_{02}=24t$,
placed at $\vec{R}_{01}=(9a,9a)$ and $\vec{R}_{02}=(15a,15a)$, respectively, along the diagonal direction of the square lattice.
We show the interactions $U=2t$ (upper panel) and $U=5t$ (lower panel). The model, all other parameters, and
the plotting style are the same as in Fig.~\ref{2dtwoimpvert}.}
\label{2dtwoimpdiagsame}
\end{figure}
In order to obtain a quantitative description of the interference effects as a function of the relative distance between the impurities, we study the dependence of the screening charge, defined by Eq.~(\ref{screen1}), on the position of the second impurity, placed either along the vertical or along the diagonal direction, as shown in Fig.~\ref{2dtwoimpscreen} (top and bottom panels, respectively).
When the two impurities are placed along the vertical line, an oscillatory behavior is seen in the screening charge, i.e., maxima (minima) appear when the impurities are separated by an odd (even) number of lattice sites both for the non-interacting and interacting systems.
In the Mott phase ($U=12t$) the screening charge does not change with distance and stays at its constant residual value, again confirming the absence of any FO or interference effects in this regime.\\
The oscillatory behavior of the screening charge with distance is absent when the impurities are placed along the diagonal direction, as seen in Fig.~\ref{2dtwoimpscreen} (bottom panel).
Along the diagonal direction the Manhattan distance between the two sites is always even and, hence, there are always interference minima in the FO.
If one compares the evolution of $Z$ for even lattice spacings in the upper panel with that in the lower panel, the two almost match perfectly. The screening charge reaches the same constant residual value for $U=12t$ as in the case when the impurities are placed along the vertical direction.
In both cases, at any given distance the screening charge is reduced with increasing interaction, in agreement with the case when only a single impurity is present in the system \cite{chatterjee2019real}. \\
\begin{figure} [ht!]
\centering
\includegraphics[width=0.4\textwidth]{fig10b.pdf}
\qquad
\includegraphics[width=0.4\textwidth]{fig10a.pdf}
\caption{Variation of the screening charge, defined by Eq.~(\ref{screen1}), with the relative (Manhattan) distance between the impurities ($V_{01}= V_{02}= 24t$). The impurities are placed along the vertical (diagonal) direction in the upper (lower) panel. Different values of the parameter $U$ correspond to different interaction strengths; we show $U=0t$, $2t$, $5t$, and $12t$. The legend for the plot in the upper panel (not shown) is the same as that of the lower panel. }
\label{2dtwoimpscreen}
\end{figure}
\subsection {Multiple impurities}
\begin{figure} [ht!]
\centering
\includegraphics[width=0.2\textwidth]{fig11a.pdf}
\qquad
\includegraphics[width=0.2\textwidth]{fig11b.pdf}
\qquad
\includegraphics[width=0.2\textwidth]{fig11c.pdf}
\qquad
\includegraphics[width=0.2\textwidth]{fig11d.pdf}
\caption{Interference effect and FO in the presence of five impurities each of magnitude $V_{1}=V_{2}=V_{3}=V_{4}=V_{5}=10t$
randomly scattered over the lattice sites located at: $\vec{R}= (3a, 3a)$, $(20a, 5a)$, $(5a, 20a)$,
$(25a, 22a)$, and $(17a, 28a)$ of the square lattice. We show the non-interacting system and the system with the interactions $U=2t$ (upper panel, left to right), and $U=5t$, $U=12t$ (lower panel, left to right). All other model parameters and the plotting style are the same as in Fig.~\ref{2dtwoimpvert}.
}
\label{2dmultimp}
\end{figure}
We now move to a more complex case where we extend our studies to several impurities randomly scattered over the surface. This aims to model a contaminated surface of a transition metal in the presence of dopants/defects, e.g. the Cr(001) surface in \cite{kolesnychenko2005surface}. We use the square lattice also to predict the behavior on the surfaces of 3d systems, exploiting the fact that lower dimensions can mimic higher dimensions in the DMFT approximation due to the momentum independence of the self-energy. This feature has also been exploited in \cite{chatterjee2019real}.\\
In Fig.~\ref{2dmultimp} we consider five impurities, each of magnitude $V_{0}=10t$, randomly scattered over the lattice at the sites
$(3a, 3a)$, $(20a, 5a)$, $(5a, 20a)$, $(25a, 22a)$, and $(17a, 28a)$, for the non-interacting system and systems with $U=2t$, $U=5t$, and $U=12t$.
We see oscillations around the impurities together with a complex, checkerboard-like interference pattern in the interstitial spaces between the impurities for the non-interacting system and the systems with $U=2t$ and $5t$.
The interference effects get localized around the impurities with increasing interaction (cf. $U=5t$).
No FO are observed in the Mott insulator ($U=12t$).
This is in agreement with our previous studies where a single impurity or two impurities are present in the system.
Thus we conclude that, at least within the HSEA, the absence of FO and of any interference effects due to scattering in the Mott insulating phase is rather universal, irrespective of the model of the inhomogeneous potential.\\
\subsection {Extended inhomogeneity}
We apply a step potential across the square lattice ($32\times32$), i.e. for all the lattice sites with x-coordinates $X_i\leqslant 15a$ the potential is $V_{0}=3t$, while in the rest of the system it is $V_{0}=0$.
This potential models an extended inhomogeneity, which could correspond to the surface irregularities developed in materials during the process of cleavage.
In Fig. \ref{2dextinhom} we show local densities in the non-interacting system (upper panel) and in the interacting system with $U=12t$ (middle panel).
The step-like potential divides the lattice into two half-planes with different average occupations ($\bar{n}_{i}$).
FO are visible for $U=0$, but the periods of the oscillations differ in the two half-planes, as illustrated in the inset, where we show the FO on a cut perpendicular to the potential edge.
The different oscillation periods originate from the different uniform densities of particles in the two halves of the system.
Any signature of FO is absent in the Mott phase ($U=12t$), cf. the middle panel of Fig.~\ref{2dextinhom}.
In Fig.~\ref{2dextinhom} (bottom panel) the influence of the interactions in this system is studied by taking cuts perpendicular to the step of the potential.
We see that the system becomes more homogeneous and the screening charge decreases with increasing interaction. \\
\begin{figure} [ht!]
\centering
\includegraphics[width=0.3\textwidth]{fig12a}
\qquad
\includegraphics[width=0.3\textwidth]{fig12b}
\qquad
\includegraphics[width=0.3\textwidth]{fig12c}
\caption{Interference effect on FO due to the scattering from an extended inhomogeneity:
a step potential applied across the square lattice
($32\times32$), such that $V_{0}=3t$ for all the lattice sites with $X_i\leqslant 15a$ and $V_{0}=0$ otherwise. We show the non-interacting case (upper panel) and the Mott phase ($U=12t$) (middle panel). The right insets show oscillations on a cut (dotted line in the main panels) perpendicular to the step of the potential. In the bottom panel we compare a similar cut for the different interactions $U=0t$, $2t$, $5t$, and $12t$.}
\label{2dextinhom}
\end{figure}
\subsection {A chain of impurities}
\begin{figure} [ht!]
\centering
\includegraphics[width=0.3\textwidth]{fig13a.pdf}
\qquad
\includegraphics[width=0.3\textwidth]{fig13b.pdf}
\qquad
\includegraphics[width=0.3\textwidth]{fig13c.pdf}
\qquad
\includegraphics[width=0.3\textwidth]{fig13d.pdf}
\caption{Interference effect on FO due to the scattering from a chain of impurity atoms of equal magnitude
$V_{0}=24t$ along the diagonal direction of the square lattice.
The interaction parameter is $U=0t$, $2t$, $5t$, and $12t$ from the top to the bottom panel, respectively.
The model, all other
parameters, and the plotting style are the same as in Fig.~\ref{2dtwoimpvert}.
}
\label{2ddomainwall}
\end{figure}
Finally, we study the case where a chain of impurities, each of magnitude $V_{0} = 24t$, is placed along the diagonal or a vertical direction of the square ($31\times31$) lattice.
This aims to model a domain wall or an interface.
We investigate the behavior of FO for these two orientations of the chain with the different interactions.
First, in Fig.~\ref{2ddomainwall} we present the case of the diagonally oriented chain for the non-interacting system and for systems with $U=2t$, $5t$, and $12t$.
The insets show a projection of the densities along a zigzag line oriented perpendicularly to the diagonal direction.
FO are visible around the chain, and the behavior is similar for the non-interacting system and $U=2t$, as in the previous cases.
The FO get localized around the chain with increasing interaction (cf. $U=5t$).
At the ends of the cut (corners of the lattice) we observe an increase of the oscillations, which is due to the imposed periodic boundary conditions.
In the case of a Mott insulator with $U=12t$ no FO are visible.
The chain creates an interface and effectively forms two subsystems separated in space. \\
In Fig.~\ref{2ddomainwallvert} we show the behavior of the interacting system when the chain of impurities is oriented along the vertical direction.
The inset shows a horizontal cut perpendicular to the chain.
In contrast to the previous case, we do not see any FO but just a density minimum corresponding to the repulsive potential, both for the non-interacting and the interacting systems.
We only show the cases $U=2t$ and $U=12t$, since the behavior of the system does not change much with the interactions.
The origin of this different behavior, as compared to the diagonally oriented chain, lies in the geometrical orientation of the impurities with respect to each other.
If the chain is vertically oriented, each impurity site has two neighboring sites occupied by the impurities.
On the other hand, if the chain is diagonally oriented, each impurity site is completely surrounded by nearest neighboring sites without impurities.
Hence, in the latter case the distance between the impurity sites and the sites on a perpendicular cut, measured in the Manhattan metric, is always even, in contrast to the former case, where it is always odd.
This difference makes the interference pattern between FO created by each impurity from the chain always constructive in the diagonal case, while it is destructive in the vertical case.\\
In Fig.~\ref{1ddomain} we compare the FO from the diagonally oriented chain (blue line) and from the vertically oriented chain (green line) of impurities, presented above, with the FO of a one-dimensional lattice with $N_{L}=32$ sites and a single impurity $V_{0}=12t$ placed at the center (red line).
We also show the FO from a single impurity potential in the square lattice (black line).
We consider the non-interacting systems in all these cases.
The comparison shows that none of the 2d systems can be simplified to an assembly of 1d chains with a single impurity potential.
While the vertically oriented chain shows no oscillations, the decay of the FO due to the diagonally oriented chain does not exactly match that of the 1d chain.
Moreover, the FO from both the vertically and diagonally oriented chains are quite different from those along a cut of a 2d lattice with a single impurity potential at the center.
Hence, the substantial role of the geometrical orientation of the chain of impurities in the interference effects prevents one from simplifying this system to an assembly of equivalent 1d chains with single impurity potentials. \\
\begin{figure} [ht!]
\centering
\includegraphics[width=0.3\textwidth]{fig14b.pdf}
\qquad
\includegraphics[width=0.3\textwidth]{fig14d.pdf}
\caption{Interference effect on FO due to the scattering from a chain of impurity atoms of equal magnitude
$V_{0}=24t$ along the vertical line of the square lattice.
The interaction parameter is $U=2t$, and $12t$ from the top to the bottom panel, respectively.
The model, all other
parameters, and the plotting style are the same as in Fig.~\ref{2dtwoimpvert}.
}
\label{2ddomainwallvert}
\end{figure}
\begin{figure} [ht!]
\centering
\includegraphics[width=0.4\textwidth]{fig15.pdf}
\caption{Comparison of FO along: a 1d lattice chain with $N_{L}=32$ sites and a single impurity potential $V_{0}=12t$ placed at the center (1d); the perpendicular cut for a chain of impurities $V_{0}=24t$ along the diagonal chain (D) and the vertical chain (V); and a single impurity potential $V_{0}=24t$ placed at the center of the square lattice (S). We show the non-interacting case $U=0t$.}
\label{1ddomain}
\end{figure}
\section{Conclusions}
\label{summary}
We have studied the interference effects in FO due to the scattering from two impurities, multiple impurities, and extended inhomogeneities in non-interacting and interacting fermion systems.
Comparing with the FO in the presence of a single impurity, we see that in two-impurity systems the additional impurity induces interference effects on the FO. The interference maxima and minima change with the relative position of the impurities up to a certain cross-over distance, beyond which the impurities behave independently.
The interaction does not change the positions of the maxima and minima but reduces their intensity and consequently the interference effects. The screening charge shows an oscillatory behavior with the even and odd lattice spacing between the impurities along a vertical column. A more complex pattern is seen in the presence of multiple impurities, but the FO still localize around the impurities with increasing interaction. In the case of extended inhomogeneities the system becomes more homogeneous with increasing interaction. In the case of a chain of impurities in the square lattice, FO are present for a diagonal chain but absent for a vertical chain, due to the constructive and destructive interference of the FO in these two geometries, respectively. In all the models of the impurity potential, no FO or interference effects are seen in the Mott insulating phase.\\
For completeness we also presented a theory of FO for non-interacting fermions in free space with localized impurities. The exact analytical formulation was derived and a generalization of the Friedel formula was obtained within the independent impurities approximation. A few numerical results for two- and three-dimensional systems were given. \\
We have used a homogeneous self-energy approximation based on DMFT for our studies, in which the inhomogeneous part of the self-energy due to the contribution from the impurities is neglected. Probing the additional effects on the interference in FO when the full self-energy is computed by solving the R-DMFT equations self-consistently, or when spatial correlation effects beyond the single-site DMFT are included, would give a more complete picture on top of our studies. Still, in order to establish a connection with real materials, the model is rather simplistic. One needs to further extend such studies using the material-specific DMFT (LDA+DMFT), solving the multi-band Hubbard model. Our work can be a good starting point to motivate such realistic modeling and future experiments. \\
\begin{acknowledgments}
This work was supported by the Foundation for Polish Science (FNP) through the TEAM/2010-6/2 project, co-financed by the EU European Regional Development Fund. We thank D. Vollhardt for motivating discussions. BC acknowledges discussions with J. Koloren{\v{c}} and V. Pokorn{\'y}. Computing facilities provided by the Czech National Grid Infrastructure MetaCentrum (project CESNET LM2015042) are gratefully acknowledged.
\end{acknowledgments}
\section{Introduction}
One of the important developments in the study of the famous many-particle Calogero-Sutherland systems \cite{C,Su} (see \cite{OP,Poly-rev} for reviews) is their generalization to supersymmetric cases.
Most of the research in this direction has been devoted to the supersymmetrization of the rational Calogero systems
(see, for example, the papers \cite{FrM,BHKVas,BGK,BGL,Wyl,GLP-2007,FIL08,KL-10,FI,FILS,KLS-18,KLS-18b,Feig,KLS-19,KL-20}
and the review \cite{superc}).
Supersymmetric generalizations of the hyperbolic and trigonometric Calogero-Sutherland systems have been studied in a very limited
number of works
(see, for example, the papers \cite{SSuth,BrinkTurW,BorManSas,IoffeNee,DeLaMa,Serg,SergVes,Feig,KLS-19,KL-20} and references therein).
In a recent paper \cite{FIL19}, $\mathcal{N}{=}\, 2$ and $\mathcal{N}{=}\, 4$ supersymmetric generalizations
of the hyperbolic Calogero-Sutherland system were proposed using the gauging procedure \cite{DI-06-1,FIL08}
(see also the matrix description of the Calogero models in \cite{Poly-gauge,Gorsky,Poly-rev}).
In the paper \cite{Fed20} the $\mathcal{N}{=}\, 2$ hyperbolic Calogero-Sutherland model of \cite{FIL19} was considered.
In the present paper, the Hamiltonian analysis of the $\mathcal{N}{=}\, 4$ many-particle system obtained in \cite{FIL19} is carried out in detail.
At the component level, the $\mathcal{N}{=}\, 4$ matrix model obtained in \cite{FIL19} is described by
the positive definite Hermitian $c$-number $(n{\times}n)$--matrix field
$X(t):=\|X_a{}^b(t)\|$, $({X_a{}^b})^* =X_b{}^a$, $\det X \neq 0$,
and the Hermitian $c$-number $(n{\times}n)$--matrix gauge field
$A(t):=\|A_a{}^b(t)\|$, $({A_a{}^b})^* =A_b{}^a$ ($a,b=1,\ldots ,n$).
In contrast to the $\mathcal{N}{=}\,2$ case \cite{Fed20}, the $\mathcal{N}{=}\, 4$ model
uses the complex odd $(n{\times}n)$--matrix field
$\Psi^i(t):=\|\Psi^i{}_a{}^b(t)\|$, $ \bar\Psi_i(t):=\|\bar\Psi_i{}_a{}^b(t)\|$,
$({\Psi^i{}_a{}^b})^* =\bar\Psi_i{}_b{}^a$,
and the complex $c$-number $\mathrm{U}(n)$-spinor field
$Z^k(t):=\|Z^k_a(t)\|$, $\bar Z_k(t):=\|\bar Z_k^a(t)\|$, $\bar Z_k^a = ({Z^k_a})^*$,
which have additional $\mathrm{SU}(2)$-spinor indices $i,k=1,2$.
This $\mathcal{N}{=}\, 4$ $n$-particle system is described by the on-shell component action
$
{\displaystyle S_{\rm matrix} = \int \mathrm{d}t \, L_{\rm matrix} }
$
with the Lagrangian (system II in \cite{FIL19})
\begin{eqnarray}
\label{N2Cal-com}
L_{\rm matrix} & {=} & \frac12\ {\rm Tr}\Big( \,X^{-1}\nabla\! X \,X^{-1}\nabla\! X+ 2c\, A\Big)
\ + \ \frac{i}{2}\, \Big(\bar Z_k \nabla\! Z^k - \nabla\! \bar Z_k Z^k\Big)
\\ [5pt]
&&
+ \ \frac{i}{2}\ {\rm Tr} \Big( X^{-1}\bar\Psi_k X^{-1}\nabla \Psi^k - X^{-1}\nabla \bar\Psi_k X^{-1}\Psi^k \Big)
\nonumber\\ [5pt]
&&
+ \ \frac{1}{8}\, {\rm Tr} \Big( \{X^{-1}\Psi^i, X^{-1}\bar\Psi_i\}\, \{X^{-1}\Psi^{k} , X^{-1}\bar\Psi_{k}\} \Big)
\,,
\nonumber
\end{eqnarray}
where the quantity $c$ is a real constant and the
covariant derivatives are defined by
$\nabla\! X = \dot X +i\, [A, X]$ and $\nabla \Psi^k = \dot \Psi^k +i\, [A,\Psi^k]$,
$\nabla\! Z^k = \dot Z^k + iAZ^k$ and c.c.
Despite the external similarity of the Lagrangian \p{N2Cal-com} with the $\mathcal{N}{=}\,2$ supersymmetric Lagrangian \cite{Fed20}, the $\mathrm{SU}(2)$-spinor character of the Grassmann matrix quantities $\Psi^i$ and semi-dynamical even variables $Z^i$ leads to the distinctive properties of the $\mathcal{N}{=}\,4$ system under consideration.
First, the use of the $\mathrm{SU}(2)$-spinors $Z^i$ leads to an $\mathcal{N}{=}\,4$ matrix system that is a supersymmetric generalization of the $\mathrm{U}(2)$-spin hyperbolic Calogero-Sutherland system, rather than of the spinless hyperbolic Calogero-Sutherland system as in the $\mathcal{N}{=}\,2$ case \cite{Fed20}.
Second, due to the $\mathrm{SU}(2)$-spinor nature of the Grassmann matrix quantities $\Psi^i$, the $\mathcal{N}{=}\,4$ supercharges contain
additional terms in the odd variables that were absent in the $\mathcal{N}{=}\,2$ case \cite{Fed20}.
This paper examines the $\mathcal{N}{=}\,4$ case in detail in order to identify these and other distinctive properties of this $\mathcal{N}{=}\,4$ system.
The plan of the paper is as follows.
In Section~2, the Hamiltonian formulation of the matrix system \p{N2Cal-com} is presented.
Partial gauge fixing eliminates purely gauge bosonic off-diagonal matrix fields and
yields a classically-equivalent system, whose bosonic limit is exactly
the multi-particle $\mathrm{U}(2)$-spin hyperbolic Calogero-Sutherland system.
Using the Noether procedure in Section~3 allows one to find the full set of $\mathcal{N}{=}\,4$ supersymmetry generators.
The superalgebra of these generators closes, with respect to the Dirac brackets, up to the first class constraints.
Section~4 is devoted to the construction of the Lax representation for the equation of motion
of the $\mathcal{N}{=}\,4$ supersymmetric $\mathrm{U}(2)$-spin hyperbolic Calogero-Sutherland system.
Section~5 presents the reduction of the considered $\mathrm{U}(2)$-spin system
that yields the $\mathcal{N}{=}\,4$ supersymmetric spinless hyperbolic Calogero-Sutherland system.
Section~6 contains summary and outlook.
\setcounter{equation}{0}
\section{Hamiltonian formulation}
Here we present the Hamiltonization of the matrix system (\ref{N2Cal-com}) with the $\mathrm{U}(n)$ gauge symmetry
and its partial gauge-fixing.
\subsection{Hamiltonian formulation of the matrix system}
The system with the Lagrangian (\ref{N2Cal-com}) is described by pairs of the phase variables
$(X_a{}^b, P_c{}^d)$, $(Z^i_a, \mathcal{P}_k^b)$, $(\bar Z_i^a, \bar\mathcal{P}^k_b)$,
$(\Psi^i{}_a{}^b, \Pi_k{}_c{}^d)$, $(\bar\Psi_i{}_a{}^b, \bar\Pi^k{}_c{}^d)$ with
the nonvanishing canonical Poisson brackets
\begin{equation}\label{PB-X}
\{X_a{}^b, P_c{}^d \}_{\scriptstyle{\mathrm{P}}} = \delta_a^d \delta_c^b \,,\qquad
\{Z^i_a, \mathcal{P}_k^b \}_{\scriptstyle{\mathrm{P}}} = \delta_a^b \delta^i_k \,,\quad
\{\bar Z_i^a, \bar\mathcal{P}^k_b \}_{\scriptstyle{\mathrm{P}}} = \delta_b^a \delta_i^k\,,
\end{equation}
\begin{equation}\label{PB-Psi}
\{\Psi^i{}_a{}^b, \Pi_k{}_c{}^d \}_{\scriptstyle{\mathrm{P}}} = \delta_a^d \delta_c^b \delta^i_k \,, \quad
\{\bar\Psi_i{}_a{}^b, \bar\Pi^k{}_c{}^d \}_{\scriptstyle{\mathrm{P}}} = \delta_a^d \delta_c^b \delta_i^k \,.
\end{equation}
The phase variables are subject to the primary constraints
\begin{equation}\label{const-Z}
G_k^a := \mathcal{P}_k^a - \frac{i}2 \, \bar Z_k^a \approx 0\,, \qquad
\bar G^k_a := \bar\mathcal{P}^k_a + \frac{i}2 \, Z^k_a \approx 0 \,,
\end{equation}
\begin{equation}\label{const-Psi}
\Upsilon_k{}_a{}^b := \Pi_k{}_a{}^b - \frac{i}2 \,(X^{-1}\bar\Psi_k X^{-1})_a{}^b\approx 0 \,, \qquad
\bar \Upsilon^k{}_a{}^b := \bar\Pi^k{}_a{}^b - \frac{i}2 \,(X^{-1} \Psi^k X^{-1})_a{}^b\approx 0 \,.
\end{equation}
Besides, the matrix momentum of $X_a{}^b$ has the form
\begin{equation}\label{P-X}
P_a{}^b = (X^{-1}\nabla X X^{-1})_a{}^b
\end{equation}
and the momenta of the coordinates $A_a{}^b$ are zero.
The canonical Hamiltonian of the system has the form
\begin{equation}\label{t-Ham}
H_{\rm matrix}=\ P_b{}^a \dot X_a{}^b + \mathcal{P}_k^a \dot Z^k_a + \bar\mathcal{P}^k_a \dot{\bar Z}_k^a +
\Pi_k{}_b{}^a \dot\Psi^k{}_a{}^b + \bar\Pi^k{}_b{}^a\dot{\bar\Psi}_k{}_a{}^b - L_{\rm matrix}\ =\ H+ {\rm Tr}\big(A F \big)\,,
\end{equation}
where the first term
has the following form
\begin{equation}
\label{Ham-matrix1}
H \ = \
\frac12\,{\rm Tr}\Big(XPXP\Big)\ - \ \frac{1}{8}\, {\rm Tr} \Big( \{X^{-1}\Psi^i, X^{-1}\bar\Psi_i\}\, \{X^{-1}\Psi^{k} , X^{-1}\bar\Psi_{k}\} \Big)
\end{equation}
and the second term ${\rm Tr}\big(A F \big)$ uses the quantities
\begin{equation}\label{F-constr}
F_a{}^b := i[P,X]_a{}^b + Z^k_a\bar Z_k^b-\frac12\,\{X^{-1}\Psi^k, X^{-1}\bar\Psi_k \}_a{}^b
-\frac12\,\{\Psi^k X^{-1}, \bar\Psi_k X^{-1} \}_a{}^b - c\,\delta_a{}^b\,.
\end{equation}
Vanishing momenta of $A_a{}^b$ indicate that
quantities \p{F-constr} are the secondary constraints
\begin{equation}\label{F-constr1}
F_a{}^b \approx 0\,.
\end{equation}
The variables $A_a{}^b$ in the Hamiltonian \p{t-Ham} play the role of the Lagrange multipliers for these constraints.
The constraints \p{const-Z} and \p{const-Psi} possess the following nonzero Poisson brackets:
\begin{equation}\label{PB-const-2}
\{ G_i^a , \bar G^k_b \}_{\scriptstyle{\mathrm{P}}} =-i\delta^a_b \delta^k_i\,,\qquad
\{ \Upsilon_i{}_a{}^b , \bar \Upsilon^k{}_c{}^d \}_{\scriptstyle{\mathrm{P}}} =-iX^{-1}_{\ \ a}{}^d X^{-1}_{\ \ c}{}^b \delta^k_i
\end{equation}
and are the second class constraints.
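As a quick check of the second relation in \p{PB-const-2} (a verification, using only \p{PB-Psi} and the graded symmetry of the brackets of odd quantities), one finds
\[
\{ \Upsilon_i{}_a{}^b , \bar \Upsilon^k{}_c{}^d \}_{\scriptstyle{\mathrm{P}}}
= -\frac{i}2\,\{ \Pi_i{}_a{}^b , (X^{-1}\Psi^k X^{-1})_c{}^d \}_{\scriptstyle{\mathrm{P}}}
-\frac{i}2\,\{ (X^{-1}\bar\Psi_i X^{-1})_a{}^b , \bar\Pi^k{}_c{}^d \}_{\scriptstyle{\mathrm{P}}}
= -i\,X^{-1}_{\ \ a}{}^d X^{-1}_{\ \ c}{}^b\,\delta^k_i\,,
\]
each cross term contributing one half of the result. Since this bracket matrix is invertible ($\det X \neq 0$), no first class combinations can be formed from \p{const-Z}, \p{const-Psi}.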
Using the Dirac brackets for the constraints \p{const-Z}, \p{const-Psi}
\begin{eqnarray}\label{DB-const-2}
\{ A,B\}_{\scriptstyle{\mathrm{D}}} &=&
\{ A,B\}_{\scriptstyle{\mathrm{P}}}
\ +\,i \{A , G_k^a \}_{\scriptstyle{\mathrm{P}}} \{ \bar G^k_a , B \}_{\scriptstyle{\mathrm{P}}} -i \{ A , \bar G^k_a \}_{\scriptstyle{\mathrm{P}}}\{ G_k^a , B \}_{\scriptstyle{\mathrm{P}}} \\ [6pt]
&&
-i \{A , \Upsilon_k{}_a{}^b \}_{\scriptstyle{\mathrm{P}}}X_b{}^c X_d{}^a\{ \bar \Upsilon^k{}_c{}^d, B \}_{\scriptstyle{\mathrm{P}}} -i
\{ A , \bar \Upsilon^k{}_a{}^b \}_{\scriptstyle{\mathrm{P}}}X_b{}^c X_d{}^a\{ \Upsilon_k{}_c{}^d , B \}_{\scriptstyle{\mathrm{P}}}
\,, \nonumber
\end{eqnarray}
we eliminate the momenta $\mathcal{P}_k^a$, $\bar\mathcal{P}^k_a$, $\Pi_k{}_a{}^b$, $\bar\Pi^k{}_a{}^b$.
The nonvanishing Dirac brackets of residual phase variables take the form
\begin{equation}\label{DB-X}
\{X_a{}^b, P_c{}^d \}_{\scriptstyle{\mathrm{D}}} = \delta_a^d \delta_c^b \,,
\end{equation}
\begin{equation}\label{DB-P}
\begin{array}{rcl}
\{P_a{}^b, P_c{}^d \}_{\scriptstyle{\mathrm{D}}} &= & -\frac{i}4\, [X^{-1}(\Psi^k X^{-1}\bar\Psi_k + \bar\Psi_k X^{-1}\Psi^k)X^{-1}]_a{}^d X^{-1}_{\ \ c}{}^b \\ [5pt]
&& +\, \frac{i}4\, X^{-1}_{\ \ a}{}^d [X^{-1}(\Psi^k X^{-1}\bar\Psi_k + \bar\Psi_k X^{-1}\Psi^k)X^{-1}]_c{}^b \,,
\end{array}
\end{equation}
\begin{equation}\label{DB-Z}
\{Z^i_a, \bar Z_k^b \}_{\scriptstyle{\mathrm{D}}} = -i\delta_a^b \delta_k^i\,,
\qquad
\{\Psi^i{}_a{}^b, \bar\Psi_k{}_c{}^d \}_{\scriptstyle{\mathrm{D}}} = -iX_a{}^d X_c{}^b \delta_k^i \,,
\end{equation}
\begin{equation}\label{DB-PPs}
\begin{array}{rcl}
\{\Psi^k{}_a{}^b, P_c{}^d \}_{\scriptstyle{\mathrm{D}}} &=& \frac{1}2\, \delta_a^d (X^{-1}\Psi^k )_c{}^b + \frac{1}2\, \delta_c^b (\Psi^k X^{-1})_a{}^d\,,\\ [5pt]
\{\bar\Psi_k{}_a{}^b, P_c{}^d \}_{\scriptstyle{\mathrm{D}}} &=& \frac{1}2\, \delta_a^d (X^{-1}\bar\Psi_k )_c{}^b + \frac{1}2\, \delta_c^b (\bar\Psi_k X^{-1})_a{}^d\,.
\end{array}
\end{equation}
The residual constraints $F_a{}^b=(F_b{}^a)^*$, defined in \p{F-constr1}, form the $u(n)$ algebra with respect to the Dirac brackets \p{DB-const-2}:
\begin{equation}\label{DB-FF}
\{F_a{}^b, F_c{}^d \}_{\scriptstyle{\mathrm{D}}} =-i \delta_a{}^d F_c{}^b + i \delta_c{}^b F_a{}^d \,.
\end{equation}
So the constraints \p{F-constr}, \p{F-constr1} are the first class ones and generate local $\mathrm{U}(n)$ transformations
\begin{equation}\label{trans-C}
\delta C= \sum_{a,b} \alpha_b{}^a \{C, F_a{}^b \}_{\scriptstyle{\mathrm{D}}}
\end{equation}
of an arbitrary phase variable $C$ where $\alpha_a{}^b(\tau)=(\alpha_b{}^a(\tau))^*$ are the local parameters.
These transformations of the primary phase variables have the form
\begin{equation}\label{Un-tran}
\begin{array}{c}
\delta X_a{}^b =-i[\alpha,X]_a{}^b \,, \quad \delta P_a{}^b =-i[\alpha,P]_a{}^b\,,\quad
\delta Z_a^k =-i(\alpha Z^k)_a \,, \quad
\delta \bar Z_k{}_a =i(\bar Z_k \alpha)^a\,, \\ [7pt]
\delta \Psi^k{}_a{}^b = -i[\alpha,\Psi^k]_a{}^b \,, \quad
\delta \bar\Psi_k{}_a{}^b = -i[\alpha,\bar\Psi_k]_a{}^b \,.
\end{array}
\end{equation}
\subsection{Hamiltonian formulation of partial gauge-fixing of the matrix system}
The gauges $X_a{}^b\,{=}\,0$ at $a\,{\neq}\,b$ fix the local transformations \p{Un-tran} with the parameters $\alpha_a{}^b(\tau)$, $a{\neq}b$
generated by the off-diagonal constraints $F_a{}^b \,{\approx}\,0$, $a\,{\neq}\,b$ in the set \p{F-constr}, \p{F-constr1}.
This gauge fixing takes the form \cite{FIL08,FIL19,Fed20}
\begin{equation}\label{x-fix}
x_a{}^b\approx 0
\end{equation}
if we apply the expansions
\begin{equation}\label{XP-exp}
X_a{}^b =x_a \delta_a{}^b + x_a{}^b\,,
\qquad
P_a{}^b = \mathrm{p}_a \delta_a{}^b + \mathrm{p}_a{}^b\,,
\end{equation}
where $x_a{}^b$ and $\mathrm{p}_a{}^b$ represent the off-diagonal matrix quantities.
In addition, using
the constraints $F_a{}^b\,{\approx}\,0$, $a\,{\neq}\,b$, we express the momenta $\mathrm{p}_a{}^b$ through the remaining phase variables:
\begin{equation}\label{p-exp}
\mathrm{p}_a{}^b= -\frac{i\,Z^k_a\bar Z_k^b}{x_a-x_b}+\frac{i\,(x_a+x_b)\,\{\Phi^k,\bar\Phi_k\}_a{}^b}{2(x_a-x_b)\sqrt{x_ax_b}}\,,
\end{equation}
where we use the odd matrix variables $\Phi^k{}_a{}^b$, $\bar\Phi_k{}_a{}^b=(\Phi^k{}_b{}^a)^*$ defined by
\begin{equation}\label{Phi-def}
\Phi^k{}_a{}^b:= \frac{\Psi^k{}_a{}^b}{\sqrt{x_ax_b}}\,,\qquad
\bar\Phi_k{}_a{}^b:= \frac{\bar\Psi_k{}_a{}^b}{\sqrt{x_ax_b}}\,.
\end{equation}
Thus, the partial gauge fixing conditions \p{x-fix} and \p{p-exp} remove the variables
$x_a{}^b$ and $\mathrm{p}_a{}^b$.
As a result, after the partial gauge fixing, phase space of the considered system is defined
by $2n$ even real variables $x_a$, $\mathrm{p}_a$, $2n$ even complex variables $Z^i_a$ and $2n^2$ odd complex variables $\Phi^i{}_a{}^b$.
Their nonvanishing Dirac brackets are
\begin{eqnarray}\label{DB-xp}
\{x_a, \mathrm{p}_b \}^{'}_{\scriptstyle{\mathrm{D}}} &=& \delta_{ab} \,,
\\ [6pt]
\label{DB-Z1}
\{Z^i_a, \bar Z_k^b \}^{'}_{\scriptstyle{\mathrm{D}}} &=& -i\,\delta_a^b \delta_k^i\,,
\\ [6pt]
\label{DB-Ph}
\{\Phi^i{}_a{}^b, \bar\Phi_k{}_c{}^d \}^{'}_{\scriptstyle{\mathrm{D}}} &=& -i\,\delta_a^d \delta_c^b \delta_k^i \,.
\end{eqnarray}
In contrast to \p{DB-P} and \p{DB-PPs} the momenta $\mathrm{p}_a$ commute with each other and with the Grassmannian quantities $\Phi^k{}_a{}^b$.
Moreover, due to \p{DB-Ph}, the odd variables $\Phi^k{}_a{}^b$ and $\bar\Phi_i{}_b{}^a$ form canonical pairs
(compare with \p{DB-Z}).
In the Hamiltonian \p{Ham-matrix1} the momenta $\mathrm{p}_a$
appear in the term $\sum_{a}(x_a \mathrm{p}_a)^2/2$.
Let us bring this term to the standard kinetic-energy form.
For this we introduce the phase variables
\begin{equation}\label{p-q}
q_a=\log x_a \,,\quad p_{\,a}=x_a \mathrm{p}_a \,,\qquad \{q_a,p_{\,b}\}^{'}_{\scriptstyle{\mathrm{D}}} =\delta_{ab}\,.
\end{equation}
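A one-line check (not new input, using only \p{DB-xp}) confirms that the new pairs are canonical and that the kinetic term acquires the standard form:
\[
\{q_a , p_{\,b}\}^{'}_{\scriptstyle{\mathrm{D}}}
= \frac{1}{x_a}\,\{x_a ,\, x_b \mathrm{p}_b\}^{'}_{\scriptstyle{\mathrm{D}}}
= \frac{x_b}{x_a}\,\delta_{ab} = \delta_{ab}\,,
\qquad
\frac12\sum_{a}(x_a \mathrm{p}_a)^2 = \frac12\sum_{a}p_{\,a}\, p_{\,a}\,.
\]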
In the variables \p{p-q} and \p{Phi-def}, after the gauge fixing \p{x-fix}, \p{p-exp},
the Hamiltonian \p{Ham-matrix1} takes the form
\begin{equation}\label{Ham-fix}
\mathrm{H} \ = \
\frac12\,\sum_{a}p_a p_a \ + \
\frac18\,\sum_{a\neq b} \frac{R_a{}^bR_b{}^a}{\sinh^2 \Big({\displaystyle\frac{q_a-q_b}{2}}\Big)}
\ - \, \frac{1}{8}\ {\rm Tr} \Big( \{\Phi^i, \bar\Phi_i\}\{ \Phi^k, \bar\Phi_k\} \Big)
\,,
\end{equation}
where
\begin{equation}\label{T-def}
R_a{}^b := Z^k_a\bar Z_k^b- \cosh\left(\frac{q_a-q_b}{2}\right)\{ \Phi^k, \bar\Phi_k \}_a{}^b\,.
\end{equation}
The residual first class constraints in the set \p{F-constr}, \p{F-constr1} are $n$ diagonal constraints
\begin{equation}\label{F-constr-d}
F_a := F_a{}^a =R_a{}^a -c= Z^k_a\bar Z_k^a- \{ \Phi^k, \bar\Phi_k \}_a{}^a - c\approx 0\qquad\mbox{(no summation over $a$)}\,,
\end{equation}
which form an abelian algebra with respect to the Dirac brackets \p{DB-Ph}
\begin{equation}
\label{DB-constr1}
\{F_a , F_b \}^{'}_{\scriptstyle{\mathrm{D}}} = 0
\end{equation}
and generate the $[\mathrm{U}(1)]^n$ gauge transformations of $Z^k_a$ and $\Phi^k{}_a{}^b$ with the local
parameters $\gamma_a(t)$:
\begin{equation}\label{b-Ab}
Z^k_a \rightarrow \, \mathrm{e}^{i\gamma_a} Z^k_a \,, \quad \bar Z_k^a \rightarrow \,
\mathrm{e}^{-i\gamma_a}
\bar Z_k^a \qquad (\mbox{no
sum over}\; a)\,,
\end{equation}
\begin{equation}\label{Psi-Ab}
\Phi^k{}_a{}^b \rightarrow \, \mathrm{e}^{i\gamma_a} \Phi^k{}_a{}^b \mathrm{e}^{-i\gamma_b}\,, \quad
\bar\Phi_k{}_a{}^b \rightarrow \, \mathrm{e}^{i\gamma_a} \bar\Phi_k{}_a{}^b \mathrm{e}^{-i\gamma_b}\qquad (\mbox{no
sums over}\; a,b)\,.
\end{equation}
Similarly to \p{XP-exp}, we can use the expansions of the Grassmannian matrix quantities \p{Phi-def}
in the diagonal and off-diagonal parts:
\begin{equation}\label{Phi-exp}
\Phi^k{}_a{}^b =\varphi^k_a \delta_a{}^b + \phi^k{}_a{}^b\,,
\qquad
\bar\Phi_k{}_a{}^b =\bar\varphi_k{}_a \delta_a{}^b + \bar\phi_k{}_a{}^b\,,
\end{equation}
where $\phi^k{}_a{}^a=\bar\phi_k{}_a{}^a=0$ at the fixed index $a$.
The Dirac brackets \p{DB-Ph} of the diagonal quantities $\varphi^k_a$, $\bar\varphi_k{}_a$ and
the off-diagonal ones $\phi^k{}_a{}^b$, $\bar\phi_k{}_a{}^b$ have the form
\begin{equation}
\label{DB-Ph1}
\{\varphi^i_a , \bar\varphi_k{}_b \}^{'}_{\scriptstyle{\mathrm{D}}} = -i\,\delta_{a b}\delta^i_k\,,\qquad
\{\phi^i{}_a{}^b, \bar\phi_k{}_c{}^d \}^{'}_{\scriptstyle{\mathrm{D}}} = -i\,\delta_a^d \delta_c^b\delta^i_k \,.
\end{equation}
The constraints \p{F-constr-d} involve only the off-diagonal fermions $\phi$, $\bar\phi$:
\begin{equation}\label{F-constr-d1}
F_a = Z^k_a\bar Z_k^a- \{ \phi^k, \bar\phi_k \}_a{}^a - c\approx 0\qquad\mbox{(no summation over $a$)}\,.
\end{equation}
In the variables $\varphi$, $\bar\varphi$, $\phi$, $\bar\phi$ the Hamiltonian \p{Ham-fix} takes the form
\begin{eqnarray}
\mathrm{H} &=& \frac12\,\sum_{a}p_a p_a \ + \
\frac18\,\sum_{a\neq b} \frac{\bar Z_i^a Z^k_a \ \bar Z_k^b Z^i_b}{\sinh^2 \Big({\displaystyle\frac{q_a-q_b}{2}}\Big)}
\nonumber\\
&&
+\,
\frac14\,\sum_{a\neq b} \frac{\coth \Big({\displaystyle\frac{q_a-q_b}{2}}\Big) }
{\sinh \Big({\displaystyle\frac{q_a-q_b}{2}}\Big)}\,Z^i_a \bar Z_i^b
\Big[(\varphi^k_a-\varphi^k_b)\bar\phi_k{}_b{}^a+
(\bar\varphi_k{}_a-\bar\varphi_k{}_b)\phi^k{}_b{}^a - \{\phi^k,\bar\phi_k \}_b{}^a\Big]
\nonumber\\
&&
+\,
\frac18\,\sum_{a\neq b}
\frac{1}
{\sinh^2 \Big({\displaystyle\frac{q_a-q_b}{2}}\Big)}\,
\Big[(\varphi^i_a-\varphi^i_b)(\varphi^k_a-\varphi^k_b)\bar\phi_i{}_a{}^b\bar\phi_k{}_b{}^a
+ (\bar\varphi_i{}_a-\bar\varphi_i{}_b)(\bar\varphi_k{}_a-\bar\varphi_k{}_b)\phi^i_a{}^b\phi^k_b{}^a
\nonumber \\
&&\qquad\qquad\qquad \qquad\qquad\quad
+\ 2(\varphi^i_a-\varphi^i_b)(\bar\varphi_k{}_a-\bar\varphi_k{}_b)\bar\phi_i{}_a{}^b \phi^k{}_b{}^a
+ \{\phi^i,\bar\phi_i \}_a{}^b \{\phi^k,\bar\phi_k \}_b{}^a
\nonumber \\
&&\qquad\qquad\qquad \qquad\qquad\quad
+\ 2(\bar\varphi_i{}_a-\bar\varphi_i{}_b)\phi^i{}_a{}^b \{\phi^k,\bar\phi_k \}_b{}^a
+2(\varphi^i_a-\varphi^i_b)\bar\phi_i{}_a{}^b \{\phi^k,\bar\phi_k \}_b{}^a \Big]
\nonumber\\
&&
-\,
\frac18\,\sum_{a} \{\phi^i,\bar\phi_i \}_a{}^a \{\phi^k,\bar\phi_k \}_a{}^a\,.
\label{Ham-fix1}
\end{eqnarray}
In the bosonic limit the Hamiltonian \p{Ham-fix1} takes the form
\begin{equation}\label{H-bose-lim}
\mathrm{H}_{bose} \ =\ \frac12\,\sum_{a}p_a p_a +
\frac18\,\sum_{a\neq b} \frac{S_a{}_i{}^k S_b{}_k{}^i}{\sinh^2 \Big({\displaystyle\frac{q_a-q_b}{2}}\Big)}
\,,
\end{equation}
where the quantities
\begin{equation}
\label{S-Z-def}
S_a{}_i{}^k:= {\bar Z}{}_i^{a}Z^k_{a}
\end{equation}
at all values $a$ form the $u(2)$ algebras with respect to the Dirac brackets:
\begin{equation}
\label{S-S-Dir}
\{S_a{}_i{}^k , S_b{}_j{}^l \}^{'}_{\scriptstyle{\mathrm{D}}} = -i\,\delta_{a b}\left(\delta^k_j S_a{}_i{}^l -\delta^l_i S_a{}_j{}^k\right)\,.
\end{equation}
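These brackets follow directly from \p{DB-Z1} and the Leibniz rule (a short verification, not new input):
\[
\{S_a{}_i{}^k , S_b{}_j{}^l \}^{'}_{\scriptstyle{\mathrm{D}}}
= \bar Z_i^{a}\,\{Z^k_a , \bar Z_j^{b} \}^{'}_{\scriptstyle{\mathrm{D}}}\,Z^l_b
+ \bar Z_j^{b}\,\{\bar Z_i^{a} , Z^l_b \}^{'}_{\scriptstyle{\mathrm{D}}}\,Z^k_a
= -i\,\delta_{a b}\left(\delta^k_j S_a{}_i{}^l -\delta^l_i S_a{}_j{}^k\right).
\]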
Thus, the Hamiltonian \p{H-bose-lim} has the form
\begin{equation}\label{H-bose-lim1}
\mathrm{H}_{bose} \ =\ \frac12\,\sum_{a}p_a p_a +
\frac18\,\sum_{a\neq b} \frac{\mathrm{Tr}(S_a S_b)}{\sinh^2 \Big({\displaystyle\frac{q_a-q_b}{2}}\Big)}
\end{equation}
and coincides with the Hamiltonian of the $\mathrm{U}(2)$-spin hyperbolic Calogero-Sutherland $A_{n-1}$-root system \cite{GH-84,W-85,Poly-rev}.
The appearance of this many-particle spin system in the ${\mathcal N}{=}\,4$ case is a result of using the semi-dynamical
$\mathrm{SU}(2)$-spinor variables, which are the field components of the {\bf (4,4,0)} multiplets \cite{FIL19}.
In contrast to the ${\mathcal N}{=}\,4$ case considered here, the use of semi-dynamical scalar variables in the ${\mathcal N}{=}\,2$ case produces
``a less rich'' supersymmetric system, namely the ${\mathcal N}{=}\,2$ spinless hyperbolic Calogero-Sutherland system \cite{Fed20}.
\setcounter{equation}{0}
\section{${\mathcal N}{=}\,4$ supersymmetry generators}
As discussed in Sect.\,1, the system (\ref{N2Cal-com}) considered here was derived from the ${\mathcal N}{=}\,4$ superfield model \cite{FIL19}.
Therefore, it is invariant under ${\mathcal N}{=}\,4$ supersymmetry transformations of the matrix component fields:
\begin{equation}\label{tr-susy}
\begin{array}{rcl}
\delta X &=& -\varepsilon_i \Psi^i + \bar\varepsilon^{\,i} \bar\Psi_i\,, \\ [6pt]
\delta \Psi^{i} &=& i\,\bar\varepsilon^{\,i}\,\nabla X + \bar\varepsilon_{k} X\Big[ X^{-1}\Psi^{(i} ,X^{-1}\bar\Psi^{k)} \Big]\,,\\ [6pt]
\delta \bar\Psi_{i} &=& i\, \varepsilon_{i}\,\nabla X + \varepsilon^{\,k} X\Big[ X^{-1}\Psi_{(i} ,X^{-1}\bar\Psi_{k)} \Big]\,, \\ [6pt]
\delta Z^{i}&=&0\,,\quad \delta \bar Z_{i}\ =\ 0\,,\qquad \delta A\ =\ 0\,,
\end{array}
\end{equation}
where $\varepsilon_k$, $\bar\varepsilon^k=(\varepsilon_k)^*$ are two complex Grassmannian parameters.
These transformations are generated by the following Noether charges:
\begin{equation}\label{Q-matrix}
\begin{array}{rcl}
Q^i &=& {\displaystyle {\rm Tr}\,\Big( P\Psi^i \ + \ \frac{i}{2}\, X^{-1}\bar\Psi^i X^{-1}\Psi_kX^{-1}\Psi^k \Big), } \\ [7pt]
\bar Q_i &=& {\displaystyle {\rm Tr}\,\Big( P\bar\Psi_i \ + \ \frac{i}{2}\, X^{-1}\Psi_i X^{-1}\bar\Psi^kX^{-1}\bar\Psi_k \Big) },
\end{array}
\end{equation}
where the matrix momentum $P_a{}^b$ is presented in \p{P-X}.
The supercharges \p{Q-matrix} and the Hamiltonian $H$ defined in \p{Ham-matrix1}
form the ${\mathcal N}{=}\,4$ $d{=}\,1$ superalgebra
\begin{equation}\label{N2-susy-matrix}
\{ Q^i, \bar Q_j \}_{\scriptstyle{\mathrm{D}}} = -2i\,H\,\delta^i_j\,,\qquad
\{ Q^i, H \}_{\scriptstyle{\mathrm{D}}}=\{ \bar Q_i, H \}_{\scriptstyle{\mathrm{D}}}=0
\end{equation}
with respect to the Dirac brackets \p{DB-X}-\p{DB-PPs}.
Putting the partial gauge fixing conditions \p{x-fix}, \p{p-exp} in expressions \p{Q-matrix}
and going to the variables \p{Phi-def}, \p{p-q}, we obtain the ${\mathcal N}{=}\,4$ supersymmetry generators
\begin{equation}\label{Q}
\begin{array}{rcl}
\mathrm{Q}^{\,i} &=& {\displaystyle \sum\limits_{a} p_a \Phi^i{}_a{}^a \ -\ \frac{i}{2}\sum\limits_{a\neq b}
\frac{ R_a{}^b \Phi^i{}_b{}^a}{\sinh\Big({\displaystyle\frac{q_a-q_b}{2}}\Big)}
\ +\ \frac{i}{2}\sum\limits_{a, b}\, [\Phi^k, \bar\Phi_k]_a{}^b\Phi^i{}_b{}^a \,, } \\ [8pt]
\bar{\mathrm{Q}}_{\,i} &=& {\displaystyle \sum\limits_{a} p_a \bar\Phi_i{}_a{}^a\ -\ \frac{i}{2}\sum\limits_{a\neq b}
\frac{ R_a{}^b \bar\Phi_i{}_b{}^a}{\sinh\Big({\displaystyle\frac{q_a-q_b}{2}}\Big)}
\ -\ \frac{i}{2}\sum\limits_{a, b}\, [\Phi^k, \bar\Phi_k]_a{}^b\bar\Phi_i{}_b{}^a }
\end{array}
\end{equation}
for the partial gauge fixing system,
which is described by the Hamiltonian \p{Ham-fix} and the first class constraints \p{F-constr-d}.
Using the Grassmannian variables $\varphi^i_a$, $\bar\varphi_i{}_a$,
$\phi^i{}_a{}^b$, $\bar\phi_i{}_a{}^b$, defined in \p{Phi-exp}, we cast the generators \p{Q} in the form
\begin{eqnarray}
\label{Q1}
\mathrm{Q}^{\,i} &= &\sum\limits_{a} p_a \varphi^i_a-\frac{i}{2}\sum\limits_{a\neq b}
\frac{ Z^k_a\bar Z_k^b\, \phi^i{}_b{}^a}{\sinh\Big({\displaystyle\frac{q_a-q_b}{2}}\Big)} \\
&& +\,\frac{i}{2}\sum\limits_{a\neq b}
\coth \Big({\displaystyle\frac{q_a-q_b}{2}}\Big)
\Big[ (\bar\varphi_k{}_a-\bar\varphi_k{}_b)\phi^k{}_a{}^b +(\varphi^k_a-\varphi^k_b)\bar\phi_k{}_a{}^b +\{\phi^k,\bar\phi_k\}_a{}^b
\Big]\,\phi^i{}_b{}^a
\nonumber\\
&&
+\,\frac{i}{2}\,\Big[
\sum\limits_{a\neq b} \Big((\varphi_k{}_a+\varphi_k{}_b)\phi^k{}_b{}^a \bar\phi^i{}_a{}^b +
\phi_k{}_a{}^b \phi^k{}_b{}^a \bar\varphi^i_a \Big)
+\!\!\!\!
\sum\limits_{a\neq b\neq c\neq a} \!\!\!\! \phi_k{}_a{}^b \phi^k{}_b{}^c \bar\phi^i{}_c{}^a +
\sum\limits_{a} \varphi_k{}_a\varphi^k_a\bar\varphi^i_a \Big]\,,
\nonumber
\\
\label{bQ1}
\bar{\mathrm{Q}}_{\,i} &= &\sum\limits_{a} p_a \bar\varphi_i{}_a-\frac{i}{2}\sum\limits_{a\neq b}
\frac{ Z^k_a\bar Z_k^b\, \bar\phi_i{}_b{}^a}{\sinh\Big({\displaystyle\frac{q_a-q_b}{2}}\Big)} \\
&& +\,\frac{i}{2}\sum\limits_{a\neq b}
\coth\Big({\displaystyle\frac{q_a-q_b}{2}}\Big)
\Big[ (\bar\varphi_k{}_a-\bar\varphi_k{}_b)\phi^k{}_a{}^b +(\varphi^k_a-\varphi^k_b)\bar\phi_k{}_a{}^b +\{\phi^k,\bar\phi_k\}_a{}^b
\Big]\,\bar\phi_i{}_b{}^a
\nonumber\\
&&
+\,\frac{i}{2}\,\Big[
\sum\limits_{a\neq b} \Big(\phi_i{}_a{}^b \bar\phi^k{}_b{}^a (\bar\varphi_k{}_a+\bar\varphi_k{}_b) +
\varphi_i{}_a \bar\phi^k{}_a{}^b \bar\phi_k{}_b{}^a \Big)
+\!\!\!\!
\sum\limits_{a\neq b\neq c\neq a} \!\!\!\! \phi_i{}_a{}^b \bar\phi^k{}_b{}^c \bar\phi_k{}_c{}^a +
\sum\limits_{a} \varphi_i{}_a \bar\varphi^k_a\bar\varphi_k{}_a\Big]\,.
\nonumber
\end{eqnarray}
Taking into account the Dirac brackets \p{DB-Ph}, \p{p-q} and
\begin{equation}\label{R-alg}
\begin{array}{rcl}
\{R_a{}^b, R_c{}^d \}^{'}_{\scriptstyle{\mathrm{D}}}&=& -i\Big(\delta_a^d R_c{}^b-\delta_c^b R_a{}^d \Big)\\
&& -i\,
\sinh\Big({\displaystyle\frac{q_a-q_b}{2}}\Big)\sinh\Big({\displaystyle\frac{q_c-q_d}{2}}\Big)
\Big(\delta_a^d\{ \Phi^k, \bar\Phi_k \}_c{}^b-\delta_c^b\{ \Phi^k, \bar\Phi_k \}_a{}^d\Big)\,,
\end{array}
\end{equation}
we find that the supercharges $\mathrm{Q}^i$, $\bar \mathrm{Q}_i$ defined in \p{Q}
form the ${\mathcal N}{=}\,4$ superalgebra
\begin{eqnarray}\label{DB-QQ}
\{\mathrm{Q}^i, \mathrm{Q}^k \}^{'}_{\scriptstyle{\mathrm{D}}} &=& -\frac{i}{4}\sum\limits_{a\neq b}
\frac{ \phi^{(i}{}_a{}^b \phi^{k)}{}_b{}^a}{\sinh^2\Big({\displaystyle\frac{q_a-q_b}{2}}\Big)}\,\Big( F_a-F_b\Big) \,,
\\ [6pt]
\label{DB-QbQ}
\{ \mathrm{Q}^i , \bar \mathrm{Q}_k \}^{'}_{\scriptstyle{\mathrm{D}}} &=& -2i\,\mathrm{H}\,\delta^i_k -\frac{i}{4}\sum\limits_{a\neq b}
\frac{ \phi^{i}{}_a{}^b \bar\phi_{k}{}_b{}^a}{\sinh^2\Big({\displaystyle\frac{q_a-q_b}{2}}\Big)}\,\Big( F_a-F_b\Big) \,,
\\ [6pt]
\label{DB-HQ}
\{ \mathrm{Q}^i, \mathrm{H} \}^{'}_{\scriptstyle{\mathrm{D}}} &=&
-\frac{1}{8}\sum\limits_{a\neq b}
\frac{ R_a{}^b \phi^{i}{}_b{}^a}{\sinh^3\Big({\displaystyle\frac{q_a-q_b}{2}}\Big)}\,\Big( F_a-F_b\Big)
\end{eqnarray}
and c.c., where the Hamiltonian $\mathrm{H}$ and the constraints $F_a\approx0$ are given in \p{Ham-fix} and \p{F-constr-d}.
Thus, the quantities $\mathrm{H}$, $\mathrm{Q}^i$, $\bar \mathrm{Q}_i$,
defined in \p{Ham-fix}, \p{Q},
form the $\mathcal{N}{=}\,4$ superalgebra with respect to the Dirac brackets
on the shell of the first class constraints \p{F-constr-d}.
Moreover, the generators $\mathrm{H}$, $\mathrm{Q}^i$, $\bar \mathrm{Q}_i$
are gauge invariant: they have the vanishing Dirac brackets with the first class constraints \p{F-constr-d},
\begin{equation}
\label{DB-constr1-Q}
\{\mathrm{Q}^i , F_a \}^{'}_{\scriptstyle{\mathrm{D}}} = \{\bar\mathrm{Q}_i , F_a \}^{'}_{\scriptstyle{\mathrm{D}}} =
\{ \mathrm{H} , F_a \}^{'}_{\scriptstyle{\mathrm{D}}} =0 \,.
\end{equation}
The form of the first two terms in expressions \p{Q} is similar to the ${\mathcal N}{=}\,2$ supercharges
presented in \cite{Fed20}.
But the last terms in the ${\mathcal N}{=}\,4$ supercharges \p{Q} were absent in the ${\mathcal N}{=}\,2$ case.
Their appearance is the result of the $\mathrm{SU}(2)$ spinor nature of Grassmann variables in the ${\mathcal N}{=}\,4$ case.
Moreover, the first and last terms in the supercharges \p{Q1}, \p{bQ1}
\begin{equation} \label{Q-A-2n}
{\mathbb{Q}}^i = \sum_a \left( p_a\varphi^i_a +\frac{i}{2}\, \varphi_k{}_a \varphi^k_a\bar\varphi^i_a\right) ,\qquad
\bar{\mathbb{Q}}_i = \sum_a \left( p_a\bar\varphi_i{}_a +\frac{i}{2}\, \varphi_i{}_a \bar\varphi^k_a\bar\varphi_k{}_a\right)
\end{equation}
contain only diagonal fermions $\varphi^i_a$, $\bar\varphi_i{}_a$ and possess the following Dirac brackets:
\begin{equation} \label{alg1-cl-An}
\{ {\mathbb{Q}}^i, \bar {\mathbb{Q}}_k \}_{\scriptstyle{\mathrm{D}}} = -2i\delta^i_k {\mathbb{H}}\,,\qquad
\{ {\mathbb{Q}}^i, {\mathbb{H}} \}_{\scriptstyle{\mathrm{D}}} = \{ \bar {\mathbb{Q}}_i, {\mathbb{H}} \}_{\scriptstyle{\mathrm{D}}} = 0\,,
\end{equation}
where
${\mathbb{H}} = \frac12\sum_a p_a^{\ 2}$.
Although supercharges \p{Q-A-2n} contain terms trilinear in fermions in contrast to the ${\mathcal N}{=}\,2$ case \cite{Fed20},
these quantities and ${\mathbb{H}}$ generate the ${\mathcal N}{=}\,4$ supersymmetric system
describing $n$ non-interacting free particles. This system is described by the ${\mathcal N}{=}\,4$
superfield Lagrangian $\mathscr{L}\sim \sum_a\log{\mathscr{X}_a}$ (see \cite{superc,IKL89,FIL12}).
It should also be noted that the remaining terms of the supercharges \p{Q1}, \p{bQ1},
i.e. those not contained in \p{Q-A-2n}, describe the interaction of the particles
and vanish when the off-diagonal matrix fermions $\phi^i{}_a{}^b$, $\bar\phi_i{}_a{}^b$ are set to zero.
Similarly to the ${\mathcal N}{=}\,2$ case \cite{Fed20}, we can impose gauge-fixing conditions for the residual $n$ real
first class constraints \p{F-constr-d} (or \p{F-constr-d1}).
However, in the ${\mathcal N}{=}\,4$ case considered here we have $2n$ complex spinor variables $Z^i_a$,
in contrast to the ${\mathcal N}{=}\,2$ case with its $n$ complex spinorial degrees of freedom.
Thus, after the gauge fixing, the ${\mathcal N}{=}\,4$ multiparticle model possesses
$n$ complex semi-dynamical degrees of freedom in phase space and describes an ${\mathcal N}{=}\,4$ supersymmetrization
of a many-particle system which differs from the one in the ${\mathcal N}{=}\,2$ case.
In Section\,5, we use a reduction that eliminates these semi-dynamical degrees of freedom in an ${\mathcal N}{=}\,4$ invariant way.
\setcounter{equation}{0}
\section{Lax representation}
Classical dynamics of the system with partial gauge-fixing considered here is defined by the total Hamiltonian
\begin{equation}\label{Ham-fix-t}
\mathrm{H}_{\mathrm{T}} \ = \ \mathrm{H}\ +\ \sum_a\lambda_aF_a \,,
\end{equation}
where the Hamiltonian $\mathrm{H}$ is defined in \p{Ham-fix} and $\lambda_a(t)$ are the Lagrange multipliers
for the first class constraints $F_a$, presented in \p{F-constr-d}.
A time derivative of an arbitrary phase variable $B(t)$ takes the form
\begin{equation}\label{der-B}
\dot B \ = \ \{ B, \mathrm{H}_{\mathrm{T}} \}^{'}_{\scriptstyle{\mathrm{D}}} \,.
\end{equation}
Let us represent this dynamics in the Lax representation \cite{Lax}.
To do this, we introduce the $n{\times}n$ matrix
\begin{equation}
\label{L-matr}
L_{a}{}^{b} \ = \ p_a\, \delta_{a}{}^{b} \ - \ i\left( 1- \delta_{a}^{b} \right)
\frac{ R_a{}^b }{2\sinh\Big({\displaystyle\frac{q_a-q_b}{2}}\Big)}\,,
\end{equation}
whose evolution
\begin{equation}
\label{L-der-eq}
\dot L_{a}{}^{b} \ = \ \{ L_{a}{}^{b}, \mathrm{H}_{\mathrm{T}} \}^{'}_{\scriptstyle{\mathrm{D}}}
\end{equation}
is represented by the matrix commutator
\begin{equation}
\label{L-eq}
\dot L_{a}{}^{b} \ = \ -i [M+\Lambda,L]_{a}{}^{b}
-i\left( 1- \delta_{a}^{b} \right)\frac{ L_a{}^b\left(F_a-F_b\right)}{4\sinh^2\Big({\displaystyle\frac{q_a-q_b}{2}}\Big)} \,,
\end{equation}
where the matrices $M$ and $\Lambda$ have the following form:
\begin{equation}
\label{M-matr}
M_{a}{}^{b} \ = \
\frac{1}{4}\,\{\Phi^k,\bar\Phi_k\}_{a}{}^{a}\delta_{a}{}^{b} \ + \ \frac{1}{4}\left( 1- \delta_{a}^{b} \right)
\left(\frac{ \cosh\Big({\displaystyle\frac{q_a-q_b}{2}}\Big)}
{\sinh^2\Big({\displaystyle\frac{q_a-q_b}{2}}\Big)}\,R_a{}^b +\{\Phi^k,\bar\Phi_k\}_{a}{}^{b}\right),
\end{equation}
\begin{equation}
\label{La-matr}
\Lambda_{a}{}^{b} \ = \ \lambda_{a}\, \delta_{a}{}^{b}
\end{equation}
and $F_a$ are the constraints defined in \p{F-constr-d1}.
The equations of motion
of the fermionic matrix variables $\Phi^i_{a}{}^{b}$, $\bar\Phi_i{}_{a}{}^{b}$
are also represented as commutators
\begin{equation}
\label{Ps-eq}
\begin{array}{rcccl}
\dot \Phi^i_{a}{}^{b} &=& \{ \Phi^i_{a}{}^{b}, \mathrm{H}_{\mathrm{T}} \}^{'}_{\scriptstyle{\mathrm{D}}} &=& -i [M+\Lambda,\Phi^i]_{a}{}^{b}
\,, \\ [7pt]
\dot {\bar\Phi}_i{}_{a}{}^{b} &=& \{ {\bar\Phi}_i{}_{a}{}^{b}, \mathrm{H}_{\mathrm{T}} \}^{'}_{\scriptstyle{\mathrm{D}}} &=& -i [M+\Lambda,{\bar\Phi_i}]_{a}{}^{b}
\end{array}
\end{equation}
with the same matrices $M$ and $\Lambda$.
On the shell of the first class constraints \p{F-constr-d1} $F_a\approx 0$, equations \p{L-eq}, \p{Ps-eq} are actually the Lax equations
and yield the conserved charges in a simple way.
So due to equations \p{L-eq}, \p{Ps-eq}, the trace
\begin{equation}
\label{J-f-def}
\mathcal{J} := {\mathrm{Tr}} (\mathcal{F})
\end{equation}
of any polynomial function
$\mathcal{F}(L,\Phi,\bar\Phi)$ of the matrix variables $L_{a}{}^{b}$, ${\Phi}^i_{a}{}^{b}$, ${\bar\Phi}_i{}_{a}{}^{b}$
is a conserved quantity on the shell of constraints \p{F-constr-d1}:
\begin{equation}
\label{F-conser}
\dot{\mathcal{J}} \approx 0\,.
\end{equation}
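The mechanism is the standard Lax one: on the constraint shell each matrix argument of $\mathcal{F}$ evolves by the same adjoint action, so a polynomial $\mathcal{F}$ obeys $\dot{\mathcal{F}} \approx -i\,[M+\Lambda , \mathcal{F}]$, and the cyclicity of the trace gives, schematically,
\[
\dot{\mathcal{J}} = {\mathrm{Tr}}\,\dot{\mathcal{F}}
\approx -i\,{\mathrm{Tr}}\,[M+\Lambda , \mathcal{F}] = 0\,.
\]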
In particular, on the shell of constraints \p{F-constr-d1}, the traces
\begin{equation}
\label{Ik-def}
I_\mathrm{k} := {\mathrm{Tr}} (L^\mathrm{k})\,,\qquad \mathcal{I}^i_{\,\mathrm{k}} := {\mathrm{Tr}} (\Phi^i L^\mathrm{k-1})\,, \qquad \bar{\mathcal{I}}_{\,\mathrm{k}}{}_i := {\mathrm{Tr}} (\bar\Phi_i L^\mathrm{k-1})\,,\qquad \mathrm{k}=1,\ldots,n
\end{equation}
are conserved:
\begin{equation}
\label{Ik-eq}
\dot I_\mathrm{k} = \frac{i\mathrm{k}}{4}\sum\limits_{a\neq b}
\frac{ (L^{\mathrm{k}})_a{}^b }{\sinh^2\Big({\displaystyle\frac{q_a-q_b}{2}}\Big)}\,\Big( F_a-F_b\Big) \approx 0\,,
\qquad \dot {\mathcal{I}}^i_{\,\mathrm{k}} = 0\,,\qquad \dot {\bar{\mathcal{I}}}_{\,\mathrm{k}}{}_i = 0\,.
\end{equation}
The Hamiltonian \p{Ham-fix} and the supercharges \p{Q} have the form
\begin{equation}\label{Ham-fix-c}
\mathrm{H} = \frac12\,I_2 +J\,,\qquad \mathrm{Q}^i =\mathcal{I}^i_{\,2}+\mathcal{J}^i\,,
\qquad \bar \mathrm{Q}_i =\bar{\mathcal{I}}_{\,2}{}_i+\bar\mathcal{J}_i \,,
\end{equation}
where
\begin{equation}\label{Ham-fix3}
J := -\frac18 \,{\rm Tr}\Big(\{\Phi^i,\bar\Phi_i\}\{\Phi^k,\bar\Phi_k\}\Big)\,,
\quad \mathcal{J}^i := \frac{i}{2} \,{\rm Tr}\Big([\Phi^k,\bar\Phi_k]\Phi^i \Big)\,,\quad
\bar\mathcal{J}_i := -\frac{i}{2} \,{\rm Tr}\Big(\bar\Phi_i[\Phi^k,\bar\Phi_k] \Big)\,.
\end{equation}
The equations of motion
of the commuting spinning variables $Z^i_{a}$, $\bar Z_i^{a}$
are represented as
\begin{equation}
\label{Z-eq}
\begin{array}{rcccl}
\dot Z^i_{a} & = & \{ Z^i_{a}, \mathrm{H}_{\mathrm{T}} \}^{'}_{\scriptstyle{\mathrm{D}}} & = &
-i \sum\limits_{ b} \left(A_{a}{}^{b} +\Lambda_{a}{}^{b}\right) Z^i_{b}\,,\\ [7pt]
\dot {\bar Z}{}_i^{a} & = & \{ {\bar Z}{}_i^{a}, \mathrm{H}_{\mathrm{T}} \}^{'}_{\scriptstyle{\mathrm{D}}} & = & i \sum\limits_{ b} {\bar Z}{}_i^{b} \left(A_{b}{}^{a} +\Lambda_{b}{}^{a}\right),
\end{array}
\end{equation}
where the matrix $A$ has the form
\begin{equation}
\label{A-matr}
A_{a}{}^{b} \ = \ \left( 1- \delta_{a}^{b} \right)
\frac{ R_a{}^b} {4\sinh^2\Big({\displaystyle\frac{q_a-q_b}{2}}\Big)}
\end{equation}
and the matrix $\Lambda$ is defined in \p{La-matr}.
Due to \p{Z-eq} we obtain (see \p{S-Z-def})
\begin{equation}
\label{Z-inv}
\dot S_k{}^i = 0 \,,\qquad \mbox{where} \qquad S_k{}^i:=\sum\limits_{a}{\bar Z}{}_k^{a}Z^i_{a} \,.
\end{equation}
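Indeed, using \p{Z-eq}, the two contributions cancel after relabeling the summation indices:
\[
\dot S_k{}^i = \sum_{a}\big(\dot{\bar Z}{}_k^{a} Z^i_{a} + \bar Z{}_k^{a}\dot Z^i_{a}\big)
= i\sum_{a,b} \bar Z{}_k^{b}\left(A_{b}{}^{a}+\Lambda_{b}{}^{a}\right) Z^i_{a}
- i\sum_{a,b} \bar Z{}_k^{a}\left(A_{a}{}^{b}+\Lambda_{a}{}^{b}\right) Z^i_{b} = 0\,.
\]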
It should be noted that the structure of the conserved charges in the considered supersymmetric system \p{F-conser}
is similar to the form of the charges in the trigonometric (non-matrix) supersymmetric system studied in \cite{DeLaMa}.
Deriving the Lax pair and finding the set of conserved charges \p{J-f-def} paves the way for analyzing the integrability
of the $\mathcal{N}{=}\,4$ supersymmetric system considered here.
Analysis of the superalgebra of conserved charges and integrability of
the considered many-particle supersymmetric system will be the subject of the next article.
\setcounter{equation}{0}
\section{Spinless hyperbolic Calogero-Sutherland system as a result of the reduction procedure}
Semi-dynamical variables have the following Dirac brackets with the total Hamiltonian \p{Ham-fix-t}, \p{Ham-fix}
\begin{equation}
\label{sd-H}
\{ H_T, Z_a^j \}^{'}_{\scriptstyle{\mathrm{D}}} =
{\displaystyle\frac{i}4\sum_{b(\neq a)} \frac{R_a{}^b Z_b^j}{\sinh^2\Big({\displaystyle\frac{q_a-q_b}{2}}\Big)}
+i\lambda_a Z_a^j}
\end{equation}
and with the supercharges \p{Q}
\begin{equation}
\label{sd-Q}
\{ Q^i, Z_a^j \}^{'}_{\scriptstyle{\mathrm{D}}} =
-{\displaystyle\frac12\sum_{b(\neq a)} \frac{\Phi^i{}_a{}^b Z_b^j}{\sinh\Big({\displaystyle\frac{q_a-q_b}{2}}\Big)}}\,, \qquad
\{ \bar Q_i, Z_a^j \}^{'}_{\scriptstyle{\mathrm{D}}} =
-{\displaystyle\frac12\sum_{b(\neq a)} \frac{ \bar\Phi_i{}_a{}^b Z_b^j}{\sinh\Big({\displaystyle\frac{q_a-q_b}{2}}\Big)}}\,.
\end{equation}
Therefore, the conditions
\begin{equation}
\label{red-Z}
Z_a^{j=2}=0\,, \qquad
\bar Z^a_{j=2} =0\,,\qquad \mbox{at all $a$}
\end{equation}
are invariant under the $\mathcal{N}{=}\,4$ supersymmetry transformations and we can use them as reduction conditions.
Similarly to \cite{KL-09}, the reduction \p{red-Z} implies the conditions
\begin{equation}
\label{red-gen}
S^{(\pm)}_a:=S_a{}_i{}^k\sigma^{\pm}{}_k{}^i \,=\,0 \qquad \mbox{at all $a$,}
\end{equation}
where the quantities $S_a{}_i{}^k$ are defined in \p{S-Z-def}, $\sigma^{\pm}=\sigma^{1}\pm i\sigma^{2}$ and
$\sigma^{1,2}$ are the Pauli matrices.
Thus, the conditions \p{red-Z} set two generators to zero in each of the $u(2)$ algebras \p{S-Z-def}, \p{S-S-Dir}.
After reduction with the conditions \p{red-Z} the obtained system involves only half of the initial semi-dynamical variables
\begin{equation}
\label{rest-Z}
z_a:= Z_a^{j=1}\,, \quad \bar z^a:=\bar Z^a_{j=1}\,, \qquad \{ z_a, \bar z^b \}^{'}_{\scriptstyle{\mathrm{D}}} =-i\delta_a^b\,.
\end{equation}
Reduction of the Hamiltonian \p{Ham-fix} takes the form
\begin{equation}\label{Ham-fix-r}
\mathcal{H} \ = \
\frac12\,\sum_{a}p_a p_a \ + \
\frac18\,\sum_{a\neq b} \frac{T_a{}^bT_b{}^a}{\sinh^2 \Big({\displaystyle\frac{q_a-q_b}{2}}\Big)}
\ - \, \frac{1}{8}\ {\rm Tr} \Big( \{\Phi^i, \bar\Phi_i\}\{ \Phi^k, \bar\Phi_k\} \Big)
\,,
\end{equation}
where
\begin{equation}\label{T-def-r}
T_a{}^b := z_a\bar z^b- \cosh\left(\frac{q_a-q_b}{2}\right)\{ \Phi^k, \bar\Phi_k \}_a{}^b\,.
\end{equation}
In this case, the $\mathcal{N}{=}\,4$ supersymmetry generators \p{Q} take the form
\begin{equation}\label{Q-r}
\begin{array}{rcl}
\mathcal{Q}^{\,i} &=& {\displaystyle \sum\limits_{a} p_a \Phi^i{}_a{}^a \ -\ \frac{i}{2}\sum\limits_{a\neq b}
\frac{ T_a{}^b \Phi^i{}_b{}^a}{\sinh\Big({\displaystyle\frac{q_a-q_b}{2}}\Big)}
\ +\ \frac{i}{2}\sum\limits_{a, b}\, [\Phi^k, \bar\Phi_k]_a{}^b\Phi^i{}_b{}^a\,, } \\ [8pt]
\bar{\mathcal{Q}}_{\,i} &=& {\displaystyle \sum\limits_{a} p_a \bar\Phi_i{}_a{}^a\ -\ \frac{i}{2}\sum\limits_{a\neq b}
\frac{ T_a{}^b \bar\Phi_i{}_b{}^a}{\sinh\Big({\displaystyle\frac{q_a-q_b}{2}}\Big)}
\ -\ \frac{i}{2}\sum\limits_{a, b}\, [\Phi^k, \bar\Phi_k]_a{}^b\bar\Phi_i{}_b{}^a\,, }
\end{array}
\end{equation}
while the first class constraints \p{F-constr-d} become
\begin{equation}\label{F-constr-d-r}
\mathcal{F}_a := T_a{}^a -c= z_a\bar z^a- \{ \Phi^k, \bar\Phi_k \}_a{}^a - c\approx 0\qquad\mbox{(no summation over $a$)}\,.
\end{equation}
Similarly to quantities \p{T-def} with the Dirac brackets \p{R-alg}, quantities \p{T-def-r} satisfy
\begin{equation}\label{T-alg}
\begin{array}{rcl}
\{T_a{}^b, T_c{}^d \}^{'}_{\scriptstyle{\mathrm{D}}}&=& -i\Big(\delta_a^d T_c{}^b-\delta_c^b T_a{}^d\Big) \\
&& -i\,
\sinh\Big({\displaystyle\frac{q_a-q_b}{2}}\Big)\sinh\Big({\displaystyle\frac{q_c-q_d}{2}}\Big)
\Big(\delta_a^d\{ \Phi^k, \bar\Phi_k \}_c{}^b-\delta_c^b\{ \Phi^k, \bar\Phi_k \}_a{}^d\Big)\,.
\end{array}
\end{equation}
As a result, the charges \p{Q-r}, \p{Ham-fix-r} form the same ${\mathcal N}{=}\,4$ superalgebra \p{DB-QQ}-\p{DB-HQ},
up to the first class constraints \p{F-constr-d-r}.
However this reduced system contains $n$ first class constraints \p{F-constr-d-r} which,
together with the gauge fixing conditions, can eliminate all $n$ complex semi-dynamical variables $z_a$.
So, similarly to the $\mathcal{N}{=}\,2$ case considered in \cite{Fed20}, we can make the gauge-fixing
\begin{equation}\label{fix-z}
\bar z^a= z_a \qquad\mbox{(for all $a$)}
\end{equation}
for the first class constraints \p{F-constr-d-r}.
Then the components of the spinor $z_a$ become real and are expressed through the remaining variables as follows:
\begin{equation}\label{Z-sqrt}
z_a= \sqrt{c+\{ \Phi^k, \bar\Phi_k \}_a{}^a} \qquad\mbox{(no summation over $a$)}\,.
\end{equation}
In this gauge the supercharges \p{Q-r} take the form
\begin{eqnarray}\nonumber
\mathcal{Q}^{\,i} &= &\sum\limits_{a} p_a \Phi^i{}_a{}^a-\frac{i}{2}\sum\limits_{a\neq b}
\frac{ \sqrt{c+\{ \Phi^k, \bar\Phi_k \}_a{}^a}\,\sqrt{c+\{ \Phi^j, \bar\Phi_j \}_b{}^b} \ \Phi^i{}_b{}^a}{\sinh\Big({\displaystyle\frac{q_a-q_b}{2}}\Big)} \\
&& +\,\frac{i}{2}\sum\limits_{a\neq b}
\coth \Big({\displaystyle\frac{q_a-q_b}{2}}\Big)
\{ \Phi^k, \bar\Phi_k \}_a{}^b\,\Phi^i{}_b{}^a +\frac{i}{2}\sum\limits_{a, b} [\Phi^k{}, \bar\Phi_k]_a{}^b \Phi^i{}_b{}^a\,,
\label{Q2}\\
\nonumber
\bar{\mathcal{Q}}_{\,i} &= &\sum\limits_{a} p_a \bar\Phi_i{}_a{}^a-\frac{i}{2}\sum\limits_{a\neq b}
\frac{ \sqrt{c+\{ \Phi^k, \bar\Phi_k \}_a{}^a}\,\sqrt{c+\{ \Phi^j, \bar\Phi_j \}_b{}^b} \
\bar\Phi_i{}_b{}^a}{\sinh\Big({\displaystyle\frac{q_a-q_b}{2}}\Big)} \\
&& +\,\frac{i}{2}\sum\limits_{a\neq b}
\coth\Big({\displaystyle\frac{q_a-q_b}{2}}\Big)
\{ \Phi^k, \bar\Phi_k \}_a{}^b\,\bar\Phi_i{}_b{}^a -\frac{i}{2}\sum\limits_{a, b} [\Phi^k{}, \bar\Phi_k]_a{}^b \bar\Phi_i{}_b{}^a\,.
\label{bQ2}
\end{eqnarray}
Moreover, in this gauge and in the pure bosonic limit,
the reduced Hamiltonian \p{Ham-fix-r} takes the form
\begin{equation}\label{Ham-fix-r-b}
\mathcal{H}_{bose} \ = \
\frac12\,\sum_{a}p_a p_a \ + \
\frac18\,\sum_{a\neq b} \frac{c^2}{\sinh^2 \Big({\displaystyle\frac{q_a-q_b}{2}}\Big)}
\end{equation}
and is the Hamiltonian of the standard spinless hyperbolic Calogero-Sutherland system.
Thus, the reduction \p{red-Z} of the considered system yields
a gauge formulation of the $\mathcal{N}{=}\,4$ spinless hyperbolic Calogero-Sutherland system \cite{C,Su,OP,Poly-rev}.
Due to the presence of the square roots in the second terms of the supercharges \p{Q2}, \p{bQ2},
these supercharges contain higher degrees of the Grassmannian variables.
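The reason is that $\{ \Phi^k, \bar\Phi_k \}_a{}^a$ is a nilpotent even element of the Grassmann algebra, so the square root in \p{Z-sqrt} stands for its terminating Taylor expansion,
\[
\sqrt{c+\{ \Phi^k, \bar\Phi_k \}_a{}^a}
= \sqrt{c}\left(1+\frac{\{ \Phi^k, \bar\Phi_k \}_a{}^a}{2c}
-\frac{\left(\{ \Phi^k, \bar\Phi_k \}_a{}^a\right)^2}{8c^2}+\ldots\right)
\qquad\mbox{(no summation over $a$)}\,,
\]
each successive term raising the degree in the odd variables.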
To avoid this, new variables
\begin{equation}\label{xi-def}
\xi^i{}_a{}^b= \Phi^i{}_a{}^b\sqrt{\frac{c+\{ \Phi^j, \bar\Phi_j \}_b{}^b}{c+\{ \Phi^k, \bar\Phi_k \}_a{}^a}}\,, \qquad
\bar\xi_i{}_a{}^b= \bar\Phi_i{}_a{}^b\sqrt{\frac{c+\{ \Phi^j, \bar\Phi_j \}_b{}^b}{c+\{ \Phi^k, \bar\Phi_k \}_a{}^a}}
\end{equation}
were introduced in \cite{KLS-18b}.
In these quantities the supercharges \p{Q2}, \p{bQ2} take the form
\begin{eqnarray}\nonumber
{\mathcal{Q}}^{\,i} &= &\sum\limits_{a} p_a \xi^i{}_a{}^a -\frac{i}{2}\sum\limits_{a\neq b}
\frac{ \left(c+\{ \xi^k, \bar\xi_k \}_b{}^b\right) \xi^i{}_b{}^a}{\sinh\Big({\displaystyle\frac{q_a-q_b}{2}}\Big)} \\
&& +\,\frac{i}{2}\sum\limits_{a\neq b}
\coth \Big({\displaystyle\frac{q_a-q_b}{2}}\Big)
\{ \xi^k, \bar\xi_k \}_a{}^b\,\xi^i{}_b{}^a -\frac{i}{2}\,\beta\sum\limits_{a, b}[ \xi^k, \bar\xi_k ]_a{}^b\,\xi^i{}_b{}^a\,,
\label{Q3}\\
\nonumber
\bar{\mathcal{Q}}_{\,i} &= &\sum\limits_{a} p_a \bar\xi_i{}_a{}^a-\frac{i}{2}\sum\limits_{a\neq b}
\frac{ \left(c+\{ \xi^k, \bar\xi_k \}_b{}^b\right)
\bar\xi_i{}_b{}^a}{\sinh\Big({\displaystyle\frac{q_a-q_b}{2}}\Big)} \\
&& +\,\frac{i}{2}\sum\limits_{a\neq b}
\coth\Big({\displaystyle\frac{q_a-q_b}{2}}\Big)
\{ \xi^k, \bar\xi_k \}_a{}^b\,\bar\xi_i{}_b{}^a +\frac{i}{2}\,\beta\sum\limits_{a, b} [ \xi^k, \bar\xi_k ]_a{}^b\,\bar\xi_i{}_b{}^a\,,
\label{bQ3}
\end{eqnarray}
where $\beta\,{=}\,{-}1$,
and coincide exactly with the ${\mathcal N}{=}\,4$ supersymmetry generators presented in \cite{KL-20}.\footnote{The author thanks Sergey Krivonos for the information that the value $\beta\,{=}\,{-}1$ is also valid in the hyperbolic case of the model presented in \cite{KL-20}.}
Note that, in contrast to the properties of the Grassmannian variables \p{Phi-def},
the quantities \p{xi-def} do not form pairs with respect to complex conjugation,
which is an obstacle to quantization of the system in this representation.
\setcounter{equation}{0}
\section{Concluding remarks and outlook}
In this paper, the Hamiltonian description
of the $\mathcal{N}{=}\,4$ supersymmetric multi-particle hyperbolic Calogero-Sutherland system is presented,
which was obtained from the matrix superfield model by the gauging procedure \cite{FIL19}.
In contrast to the $\mathcal{N}{=}\,2$ case, the $\mathcal{N}{=}\,4$ supersymmetric generalization
of the gauged model
has the $\mathrm{U}(2)$ spin hyperbolic Calogero-Sutherland system as a bosonic core.
In the present paper,
explicit expressions for the $\mathcal{N}{=}\,4$ supersymmetry generators are obtained for different descriptions
of the system under consideration.
The supercharges \p{Q-matrix} and the Hamiltonian \p{Ham-matrix1} of the full matrix system have a simple form,
but this system contains a large number of auxiliary degrees of freedom,
which can be eliminated by $n^2$ first class constraints \p{F-constr1}.
After the partial gauge fixing \p{x-fix}, eliminating off-diagonal even matrix variables,
we obtain the formulation in which the $\mathcal{N}{=}\,4$ supersymmetry generators \p{Q1}, \p{bQ1}
have the Calogero-like form and are closed on the Hamiltonian \p{Ham-fix} (or \p{Ham-fix1}) and $n$ first class constraints \p{F-constr-d}
generating the residual $[\mathrm{U}(1)]^n$ gauge symmetry.
When the off-diagonal odd variables are set to zero, the nontrivial interaction terms in the classical supercharges \p{Q} (or \p{Q1}, \p{bQ1}) disappear.
It is possible to impose the reduction conditions \p{red-Z}
that are $\mathcal{N}{=}\,4$ supersymmetry invariant and eliminate half of the spinning variables.
As a result, we get an $\mathcal{N}{=}\,4$ supersymmetric system with $n$ first class constraints \p{F-constr-d-r},
which allow us to gauge away the remaining spinning variables. Such a reduced system is in fact
the $\mathcal{N}{=}\,4$ generalization of the spinless hyperbolic Calogero-Sutherland system
equivalent to the model presented in \cite{KL-20}.
In addition, the Lax representation \p{L-eq}, \p{Ps-eq}, \p{Z-eq} of the equations of motion
for the system under consideration is presented.
The set of conserved quantities \p{F-conser}, \p{Ik-def}, \p{Z-inv} is found.
Analysis of the classical integrability of the $\mathcal{N}{=}\,4$ system considered here will be the subject of the next paper.
Moreover, a further research will be devoted to quantum integrability
of the supersymmetric $\mathcal{N}{=}\,2$ and $\mathcal{N}{=}\,4$ systems constructed here.
Quantum supersymmetry generators can be obtained by applying Weyl ordering to the quantum analogs of these quantities, as in
the $\mathcal{N}{=}\,2$ supersymmetric case.
However, in contrast to the $\mathcal{N}{=}\,2$ case \cite{Fed20},
due to the $\mathrm{SU}(2)$-doublet nature of odd variables in the $\mathcal{N}{=}\,4$ case,
the separation of the invariant sector with only diagonal odd variables does not work in the $\mathcal{N}{=}\,4$ quantum case.
\smallskip
\section*{Acknowledgements}
I would like to thank Evgeny Ivanov and Sergey Krivonos for useful discussions.
This work was supported by the Russian Science Foundation, grant no.\,16-12-10306.
\section{Introduction} \label{sec:intro}
X-shaped radio galaxies (XRG) are a numerically small but enigmatic species in the zoo of radio galaxies, since their radio emission arises from not one, but two (mis-aligned) pairs of radio lobes of a comparable extent \citep[e.g.][]{leahy84,Capetti02}. Of these, the `primary' (i.e. active) lobes often show a terminal hot spot that signifies an ongoing energy supply via bi-polar jets. In contrast, the secondary lobes (`wings') are usually diffuse and devoid of a terminal hot spot. Two major explanations have been advanced for this morphological dichotomy: (i) each wing is merely a continuation of the hydrodynamical `back flow' in the primary lobe, which gets deflected due to buoyancy forces, upon impinging on an ellipsoidal hot interstellar medium (ISM) of the parent galaxy \citep{leahy84,Worrall95,Hodges-Kluck10}; or (ii) the wings are relics of the lobe pair whose energy supply ceased as the twin-jets feeding them flipped over in a new direction due to a merger of the jetted super-massive black hole (SMBH) with another SMBH, thus giving rise to the active lobes seen presently \citep{Rottmann01,Zier01}. This possibility of a spin-flip via the SMBH merger and consequent emission of gravity waves \citep{Merritt02} brought XRGs into the limelight about two decades ago, even though the first example of an XRG (3C 315) has been known for nearly half a century \citep{hogbom74}.
Clearly, spectral index mapping as an indicator of the ages of different parts of XRGs is a key step towards understanding the origin of the XRG phenomenon. Early studies of XRG 3C 315 resulted in contradictory claims about spectral index gradients in this radio source \citep{Hogbom79,Alexander87}. The reported lack of spectral gradients \citep{Hogbom79} was intriguing, since in both above models of XRGs, the wings are identified as the repository of aged synchrotron plasma. \citet{Rottmann01} investigated this issue by comparing his single-dish (Effelsberg) images of nine prominent XRGs at 10.5 GHz (beam \simi 1.15\text{$^\prime$}) and seven of them also at 32 GHz (beam \simi0.45\text{$^\prime$}) with the existing Westerbork telescope maps made below 1 GHz as well as VLA maps between 1 to 8 GHz. The use of high-frequency maps is advantageous for identifying regions of synchrotron ageing. Interestingly, for two XRGs in that sample, namely 3C 223.1 and 3C 403, \citet{Rottmann01} reported an anomalous spectral index distribution, with the wings exhibiting a flatter radio spectrum compared to the primary lobes. He also found these two sources to have the smallest spectral ages in his XRG sample. \citet{Dennett-Thorpe02} confirmed a flatter spectrum for the wings in 3C 223.1, but found the spectral difference to be marginal ($\alpha_{\rm lobe}$ - $\alpha_{\rm wing}$ \simi 0.08). A similar `tendency' was reported by \citet{Mack05}, based on their spectral index map (74 - 1400 MHz), with a 45\text{$^{\prime\prime}$} ~ beam which could, however, scarcely resolve the wings. On the other hand, a distinctly flatter spectrum of the wings in 3C 223.1 was reported by \citet{Lal05}, in spite of their use of maps made at metre-wavelengths (240 - 610 MHz), where spectral steepening is expected to be less pronounced. These rather dissonant findings about the significance level of spectral flattening in the wings have provided us the impetus to take a fresh look into the reported spectral peculiarity of this $z=$ 0.1075 XRG.
We have taken advantage of the recently available LOFAR maps of 3C 223.1 at 144 MHz with 6\text{$^{\prime\prime}$} and 20\text{$^{\prime\prime}$} ~ beams (LoTSS-DR2\footnote{\url{https://lofar-surveys.org/dr2_release.html}} ; \citealt{LOTSSDR2}), in conjunction with the VLA images at C-band and X-band, obtained by us from the NRAO VLA Archive Survey \footnote{\url{http://www.vla.nrao.edu/astro/nvas/}}. These VLA D-array maps at C and X-bands have beamwidths of 14.2\text{$^{\prime\prime}$} $\times$ 11.7\text{$^{\prime\prime}$}, and 8.2\text{$^{\prime\prime}$} $\times$ 6.7\text{$^{\prime\prime}$}, respectively.
\begin{figure*}[ht!]
\centering
\includegraphics[scale=0.11]{Figures/Fig1.pdf}
\caption{{\tiny Radio imaging and spectral data for 3C223.1. (a) LoTSS DR2 144 MHz radio contours (levels: 0.0085,0.0097,0.0134,0.0213,0.0384,0.0754,0.1549,0.3263,0.6955, and 1.4909 Jy~beam$^{-1}$) of the 6\text{$^{\prime\prime}$} map overlaid on the PanSTARRS r-band image. (b): Integrated spectral index fit from 144 to 10700 MHz. (c): Three-frequency (144, 4910, and 8350 MHz) spectral index map and (d) error map of 3C223.1. The beam of 20\text{$^{\prime\prime}$} $\times$ 20\text{$^{\prime\prime}$} ~ is shown at the bottom right corner. The host galaxy location is shown by the black marker.}}
\label{fig:lofarsifit}
\end{figure*}
\begin{figure}
\includegraphics[scale=0.18]{Figures/3C233.1-vlapol_R.pdf}
\caption{{\tiny The double-boomerang morphology of XRG 3C223.1 can be clearly seen in the above figure, based on Figure 4b of \citet{Black1992}, where VLA 8 GHz 2.5\text{$^{\prime\prime}$} ~ image contours of 3C223.1 are shown with polarisation vectors. The contours are drawn at (-4,4,6,9,12,15,20,30,50,100,150,200,300,400,500, and 600) $\times$ 55~$\muup$Jy~beam$^{-1}$ (rms or $\rm \sigma_{I}$). For the polarized intensity, the $\rm \sigma_{P}$ is \simi36~$\muup$Jy~beam$^{-1}$ and the vectors are drawn only in regions which have a surface brightness $>$4$\sigma_{I}$. We have marked the outline of the `double boomerang' in red, along with a dashed line showing the 40\text{$^{\circ}$} ~ position angle of the disk of the host galaxy (Fig.\ \ref{fig:lofarsifit}a).}}
\label{fig:vlapol}
\end{figure}
\vspace{-0.3cm}
\section{Spectral index mapping of the XRG 3C 223.1} \label{sec:2}
The above-mentioned LoTSS-DR2 map at 144 MHz was combined with the VLA maps at 4910 MHz and 8350 MHz, following the method given by \citet{duy17} for generating the three-frequency spectral index map with a 20\text{$^{\prime\prime}$} matched beam. These VLA D-array maps at 4.91 and 8.35 GHz essentially recover the entire radio structure of this XRG, since their integrated flux densities of 0.80$\pm$0.04 Jy and 0.52$\pm$0.03 Jy, respectively, are in full accordance with the integrated spectrum
(see Fig.\ \ref{fig:lofarsifit}b and Table\ \ref{tab:radioimtab}).
This is consistent with the fact that even at 8.35 GHz, the dense $uv$ coverage of the VLA D-array data used here extends down to 35 m, which is enough to pick up the entire structure of this bright 2.0\text{$^\prime$} ~ radio source. Fig.\ \ref{fig:lofarsifit} (c-d) displays the derived spectral index and error maps, based on a combination of high sensitivity, resolution, and frequency range (1:58) that is unmatched for this XRG. Spectral flattening is distinctly visible towards each wing.
In the primary lobes, the northern and southern hot spots have $\alpha_{144}^{8350}$ = -0.70$\pm$0.04 and -0.73$\pm$0.04, respectively, and their associated (primary) lobes exhibit a spectral steepening by $\Delta\alpha$ \simi 0.1. However, going further into the wings, the spectral gradient reverses sign and the spectrum turns markedly flatter along the ridge line, right up to the wing's edge. Interestingly, the spectrum in certain parts of the wings is even flatter than it is at the hot spots. This confirms, with a much higher level of precision and in much greater spatial detail, the original result of \cite{Rottmann01}, and it is also consistent with the findings of \citet{Lal05}. This point is further discussed in the next section.
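The per-pixel fitting step can be illustrated by a minimal NumPy sketch (our own illustration, not the \citet{duy17} pipeline; the function name, the signal-to-noise cut, and the unweighted least-squares fit are simplifying assumptions, and the published map additionally propagates per-pixel errors):
\begin{verbatim}
import numpy as np

def spectral_index_map(maps, freqs_mhz, rms_jyb, snr_cut=5.0):
    # Per-pixel least-squares fit of S_nu ~ nu**alpha over
    # matched-beam, co-aligned maps (all in Jy/beam).
    cube = np.stack([np.asarray(m, dtype=float) for m in maps])
    x = np.log10(np.asarray(freqs_mhz, dtype=float))
    # keep only pixels above the SNR cut in every map
    mask = np.all(cube > snr_cut * np.asarray(rms_jyb)[:, None, None],
                  axis=0)
    y = np.log10(np.where(mask, cube, 1.0))
    xm, ym = x.mean(), y.mean(axis=0)
    # closed-form slope of log10(S) versus log10(nu)
    alpha = (((x - xm)[:, None, None] * (y - ym)).sum(axis=0)
             / ((x - xm) ** 2).sum())
    return np.where(mask, alpha, np.nan)

# toy check with the integrated flux densities of Table 1:
freqs = [144.0, 4910.0, 8350.0]
toy_maps = [np.full((2, 2), s) for s in (9.7, 0.80, 0.52)]
print(spectral_index_map(toy_maps, freqs, rms_jyb=[1e-3] * 3))
# -> about -0.72 everywhere, with the convention S ~ nu**alpha
\end{verbatim}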
\begin{table}[htbp]
\setlength{\tabcolsep}{4pt}
\caption{{\tiny Integrated flux densities of 3C223.1 (Fig.\ \ref{fig:lofarsifit}b). References: 1) Present work, based on LoTSS-DR2 \citep{LOTSSDR2} \& NVSS \citep{nvss}; 2) \citet{Kellermann68}; 3) \citet{Kellermann73}; 4) \citet{Black1992}; 5) \citet{Mack05}.}}\label{tab:radioimtab}
\begin{tabular}{lcccc}
\hline
Frequency & Flux & Telescope &Beam & Ref \\
(MHz) & (Jy) & & & \\
\hline
74 & 19.7$\pm$3.6 & VLA-A & 24\text{$^{\prime\prime}$} & 5\\
144 & 9.7$\pm$0.9 & LoFAR & 20\text{$^{\prime\prime}$} & 1\\
1400 & 2.0$\pm$0.1 &VLA-C-D & 45\text{$^{\prime\prime}$} & 1\\
2695 & 1.23$\pm$0.04 & NRAO-140ft & 11.3\text{$^\prime$} $\times$10.5\text{$^\prime$} & 2\\
4910 & 0.80$\pm$0.04 & VLA-D & 20\text{$^{\prime\prime}$} & 4\\
8350 & 0.52$\pm$0.03 & VLA-D & 20\text{$^{\prime\prime}$} & 4\\
10700 & 0.31$\pm$0.03 & Effelsberg & 2.85\text{$^\prime$} & 3\\
\hline
\end{tabular}
\end{table}
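As a back-of-the-envelope consistency check on the matched-beam end frequencies of Table\ \ref{tab:radioimtab} (our arithmetic, not a substitute for the multi-frequency fit of Fig.\ \ref{fig:lofarsifit}b), the two-point integrated index is
\[
\alpha_{144}^{8350} \;=\; \frac{\log(0.52/9.7)}{\log(8350/144)} \;\simeq\; -0.72\,,
\]
close to the hot-spot values quoted above.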
\vspace{-0.7cm}
\section{Discussion}\label{sec:Disc}
The spectral index map presented in Fig.\ \ref{fig:lofarsifit}c clearly establishes XRG 3C 223.1 (J094124.028+394441.95) as the prime example of an XRG whose wings have a flatter radio spectrum than the primary lobes, thus challenging the currently popular models of XRGs, including the back-flow diversion model mentioned in Sect.\ \ref{sec:intro}. A potentially useful hint for the origin of this anomalous spectral gradient comes from the recent MeerKAT observations of the XRG PKS 2014-55, dubbed as `double-boomerang' XRG (dbXRG), which is hosted by a Seyfert 2 elliptical galaxy located in a poor group of galaxies at $z$ = 0.06063 (\citealt{cotton20} and references therein). Although this giant XRG is currently the leading exponent of the `double-boomerang' morphology, a few other XRGs with a similar appearance have been reported \citep[e.g.][]{Lal19}. Examples of this include the prototypical XRG, 3C 315 itself \citep{hogbom74}, as well as the newly recognised case of XRG J1552+6534, whose LoTSS-DR2 image is presented in Fig.\ \ref{fig:3xrgs}, resembling a `double crescent'. It may be noted that even for 3C 223.1, the existing 8.3 GHz VLA map with a 2.5\text{$^{\prime\prime}$} ~ beam \citep{Black1992} exhibits a turn-around of the back flow in each lobe, akin to the double-boomerang morphology (see, Fig.\ \ref{fig:vlapol}). The map also shows that the magnetic field is aligned with the edges of the radio lobes (see, also, \citealt{Dennett-Thorpe02}). Probably due to the much greater relative spatial resolution available for the giant dbXRG PKS 2014-55, a sharp-edged faint radio cocoon of typical width \simi50 kpc has been detected around both its lobe pairs and the two radio cocoons appear to almost touch each other near their apexes where the elliptical host galaxy is situated (Fig. 5 in \citealt{cotton20}). The cocoons appear to act as a sheath around the backflow both before and after its deflection. However, due to their faintness, the magnetic field geometry inside the radio cocoons is essentially unknown. Although the exceptionally high relative spatial resolution afforded by the giant size of that dbXRG is not yet achievable for other dbXRGs, it seems reasonable to expect that the backflows in them are also surrounded by similar protective radio cocoons.
\begin{figure}
\centering
\includegraphics[scale=0.38]{Figures/3XRGs_SINGLECOL.pdf}
\caption{{\tiny Three spectacular XRGs culled by us from LoTSS-DR2: 6\text{$^{\prime\prime}$} ~ LoTSS 144 MHz images overlaid with VLASS 2.5\text{$^{\prime\prime}$} contours, where only the radio emission above 3$\sigma$ is shown. More details are given in Table\ \ref{tab:3xrgs}. For J0941+3227, the radio core (not shown here) coincident with the host galaxy is quite faint, being detected at only 1$\sigma$ in both FIRST and VLASS; it is marked with a `+' symbol in panel a.}}
\label{fig:3xrgs}
\end{figure}
In the case of 3C 223.1 as well, the two boomerangs are seen to approach each other to within \simi 2\text{$^{\prime\prime}$} (\simi4.3 kpc), near the location of the host elliptical galaxy, which is known to possess a conspicuous dusty disk extending across the stellar body of the optical galaxy (Fig.\ \ref{fig:lofarsifit}a, Fig.\ \ref{fig:vlapol}), at a position angle of \simi40\text{$^{\circ}$}. The dusty disk was detected in a \textit{Hubble Space Telescope} snapshot survey of radio galaxies \citep{deKoff96}. Given its large extent and favourable orientation, this disk may well be playing a significant role in blocking and deflecting the hydrodynamic backflow streaming through the two primary lobes. The compression of the magnetic field of the disk (and, possibly, of its synchrotron halo) by the impact of the collimated backflow could be contributing to the powerful push needed to transform the obliquely incident backflow into a boomerang shape. Post-rebound, the backflow propagation is guided by the steepest pressure gradient in the interstellar medium (ISM) of the host galaxy, as envisioned in \citet{leahy84}. A similar magnetic rebound may have contributed to the formation of the classic double boomerang in the giant dbXRG PKS 2014-55, where the backflow has been posited to impinge upon an ellipsoidal gaseous ISM of the host galaxy, with a required extent of \simi150 kpc \citep{cotton20}. We note that even in this elliptical Seyfert 2 galaxy, a nearly edge-on dusty disk has been detected, which too is oriented nearly along the symmetry plane of the double boomerang \citep{Abbott2018,cotton20}. However, in order to contribute effectively to the backflow rebounding observed in this giant XRG, the gaseous disk would have to be larger by an order of magnitude than its detected extent of about 12 kpc. Such a possibility has not been ruled out, however. Sensitive H{\sc i} imaging of non-cluster early-type galaxies has revealed cases in which an H{\sc i} disk extends over several tens of kpc, probably acquired from one or more approaching gas-rich galaxies, and such H{\sc i} disks are prone to having kpc-scale central disks of dusty molecular gas \citep{Serra2012,Yildiz2020}.
Here, it is pertinent to recall independent evidence for the role of magnetic tension, which has emerged from recent MeerKAT observations of the radio galaxy MRC 0600-399 in Abell 3376, followed up with numerical simulations. Based on this information, \citet{Chibueze21} have argued that even the observed sharp bending of the relatively powerful jets of this wide-angle-tail (WAT) radio galaxy has taken place due to the jets encountering the tension of a compressed layer of external magnetic field. More generally, the backflowing synchrotron plasma of a lobe could also be diverted upon hitting a flattened gaseous structure, such as a sheet or filament of the cosmic web, as recently proposed for the case of the giant radio galaxy GRG 0503-286 (\citealt{Dabhade22}; see also \citealt{GK09}). A spectacular example of a flattened gaseous obstruction between the two lobes can be seen in the recent LoTSS-DR2 image of a large XRG (J0941+3227, of size \simi 0.6 Mpc), whose lobes appear separated by a `linear' gap of average width \simi25 kpc (Fig.\ \ref{fig:3xrgs}a).
\begin{table*}[htbp]
\centering
\setlength{\tabcolsep}{4pt}
\caption{{\tiny Coordinates of the host galaxies of 3 spectacular XRGs selected by us from LoTSS-DR2 (Fig.\ \ref{fig:3xrgs}). The redshift ($z$), taken from the Sloan Digital Sky Survey (SDSS; \citealt{sdssyork}), is spectroscopic for J0941+3227 and photometric for J1328+5654. The sizes refer to the separation between the two hotspots (in the primary lobes). S$_{\rm 144}$ is the integrated flux density at 144 MHz (LoTSS-DR2) and P$_{144}$ the corresponding total radio luminosity. $\sigma_{\rm map}$ is the rms noise in the maps (Fig.\ \ref{fig:3xrgs}). Throughout this paper, we have adopted a flat cosmology with $\rm \Omega_m$ = 0.27 and a Hubble constant of H$_0$ = 71 km s$^{-1}$ Mpc$^{-1}$. }}\label{tab:3xrgs}
\begin{tabular}{lcccccccc}
\hline
Source & R.A. (J2000) & Dec. (J2000) & $z$ & Size & Size & S$_{\rm 144}$ & P$_{144}$ &$\sigma_{\rm map}$\\
& & & &(\text{$^{\prime\prime}$}) & (kpc) & (Jy) & ($\times 10^{26}$ W~Hz$^{-1}$) &($\muup$Jy~beam$^{-1}$)\\
\hline
J0941+3227 & 09:41:46.10 & +32:27:18.64 & 0.45261$\pm$0.00005 & 110 & 634 & 0.90$\pm$0.09 & 6.0&103\\
J1328+5654 & 13:28:31.82 & +56:54:59.41 & 0.651$\pm$0.073 & 61 & 427 & 0.34$\pm$0.03& 5.2 & 65\\
J1552+6534 & 15:52:06.34 & +65:34:24.57 & - & 128 & - & 0.25$\pm$0.03 &- & 91\\
\hline
\end{tabular}
\end{table*}
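For reference, the luminosities P$_{144}$ quoted in Table~\ref{tab:3xrgs} can be approximately recovered from S$_{\rm 144}$ and $z$ with the adopted cosmology. The sketch below assumes that the \texttt{astropy} package is available and adopts an illustrative spectral index $\alpha=-0.8$ (with $S\propto\nu^{\alpha}$) for the k-correction; both choices are assumptions made for illustration only.
\begin{verbatim}
# Sketch: total 144 MHz radio luminosity from the integrated flux
# density and redshift, using the cosmology adopted in this paper
# (flat, Omega_m = 0.27, H0 = 71 km/s/Mpc).
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=71, Om0=0.27)

def p144(z, S_jy, alpha=-0.8):        # alpha: assumed spectral index
    DL = cosmo.luminosity_distance(z).to(u.m)
    S = S_jy * 1e-26 * u.W / u.m**2 / u.Hz
    # k-correction factor (1+z)^-(1+alpha) for S ~ nu^alpha
    return (4 * np.pi * DL**2 * S * (1 + z)**(-(1 + alpha))).to(u.W / u.Hz)

print(p144(0.45261, 0.90))   # J0941+3227: ~6e26 W/Hz
print(p144(0.651, 0.34))     # J1328+5654: ~5e26 W/Hz
\end{verbatim}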
\vspace{-0.3cm}
\subsection{Clues to the flatter radio spectrum of the wings}
In this section, we summarise some observational clues bearing on the question of the flatter radio spectrum of XRG wings (compared to the primary lobes), for which 3C 223.1 is thus demonstrated to be a prototype. In this dbXRG, the clear reversal of the radio spectral gradient following the deflection of the backflow into the wings (Sect.\ \ref{sec:2}) stands in sharp contrast to the monotonic spectral steepening along the backflow that is typical of edge-brightened double radio sources. We propose that the spectral flattening observed towards the wings in 3C 223.1 is linked to particle acceleration (or re-acceleration) during the rebounding of the backflow. Plausibly, this could occur as the backflow impinges upon the disk (and its likely synchrotron halo) and encounters the tension of their magnetic field lines compressed by the impact of the backflow. We may recall that in both aforementioned examples, the dbXRG PKS 2014-55 \citep{cotton20} and the cluster radio source MRC 0600-399 \citep{Chibueze21}, localised regions of enhanced radio emission accompanied by spectral flattening have actually been observed near the areas where a powerful collimated flow of synchrotron plasma (jet or backflow) undergoes a sharp bending or rebound. Such patches of flatter radio spectrum could either dilute or mask the effect of spectral steepening in the ageing synchrotron plasma deflected into the XRG wings, or, in the extreme case, may even cause spectral flattening in the wings. Recent MHD simulations, reported in \citet{Chibueze21}, have shown that when a jet flow encounters the tension of magnetic field lines in a compressed layer, an efficient conversion of the magnetic energy into relativistic particles via magnetic reconnection can occur; the relativistic particles, accelerated in situ, are then transported along the deflected stream of synchrotron plasma \citep[see, also,][]{Giri2022}. A similar process may be occurring at a significant level in the XRGs whose wings do not show spectral steepening or even exhibit spectral flattening, as in the rare case of XRG 3C 223.1. In view of this, it would be desirable to extend the spectral index mapping to the several other XRGs which are candidates for wings having flatter radio spectra than the primary lobes, based on their metre-wavelength imaging observations \citep{Lal19}. It is important to extend their spectral mapping to centimetre wavelengths, where spectral steepening due to synchrotron losses should be more pronounced. Also, it would be instructive to look for signs of spectral flattening in regions where the jet flow appears to undergo a deflection upon colliding with an obstruction such as a galaxy shell. This possibility seems particularly relevant to the XRGs in which the wings take off sharply from the primary lobes at large distances ($\gg$ 10 kpc) from the parent galaxy, well beyond its likely interstellar medium \citep{xrggk12,Joshi19}. Observational evidence for jet-shell interactions \citep{GKCHITRE83} has been reported for the nearest radio galaxy Centaurus A \citep{GK84,GK10}.
\vspace{-0.3cm}
\subsection{A new kind of lobe symmetry in XRGs}
Conceptually, the most straightforward explanation for XRGs -- perhaps inspired by the XRGs whose wings do not exhibit a steeper radio spectrum compared to the primary lobes -- is that the central engine consists of a close binary of SMBHs (see \citealt{Begelman1980}), each of which produces its own pair of lobes \citep{Lal05,Lal19}. However, several observations have questioned the general applicability of this model, including the ubiquitous absence of (i) parsec-scale nuclear radio jets pointing towards the wings and (ii) terminal hot spots in the wings (see \citealt{xrggk12} for a review of the models of XRGs). Another challenge to this hypothesis stems from the detection of a lateral offset between the ridge-lines of the two wings in some well-mapped XRGs \citep{GK2003}. We note that such an offset is problematic even for the basic spin-flip model (Sec.\ \ref{sec:intro}); however, that model can be reconciled with the offset in case the wings arise from bending of the twin jets by the ISM of the host galaxy, set in rotation during the orbital infall of a merging galaxy \citep{GK2003}.
Here, we would like to draw attention to a new kind of morphological peculiarity, namely a lateral offset between a pair of `primary' radio lobes that extend parallel to each other. The XRG J1328+5654, exemplifying such an anomalous morphology, is shown in Fig.\ \ref{fig:3xrgs}b, featuring the LoFAR image and contours from the 3 GHz Very Large Array Sky Survey (VLASS; \citealt{vlass}). This specific morphology of the primary lobes is puzzling, since they are widely believed to be directly fed by the bipolar jets emanating from the nucleus, unlike the wings (see, e.g., \citealt{cotton20} and references therein). Unfortunately, the existing radio maps of this XRG lack the spatial resolution and sensitivity to trace the detailed trajectories of its jets, from the nucleus to the terminal hot spots seen in the primary lobes. It would be instructive to obtain this vital information through sensitive, high-resolution radio imaging in order to unravel how the bipolar jets in such XRGs undergo bending and how the backflows in the two lobes remain parallel to each other while being laterally offset.
\section{Conclusions} \label{sec:conc}
Taking advantage of the recent availability of the LoTSS-DR2 map of the X-shaped radio galaxy (XRG) 3C 223.1 at 144 MHz and using its archival VLA observations at 4.9 GHz and 8.4 GHz, we mapped the radio spectral index distribution across this XRG with unprecedented precision and spatial detail. This firmly establishes it as a prime example of an XRG in which the radio wings exhibit a distinctly flatter spectrum than the primary lobes, laying to rest the debate over the level of this spectral anomaly. Evidence is also presented in support of this XRG having a `double boomerang' type radio morphology. Based on existing observational clues, we suggest that the flatter spectrum of the wings in this XRG manifests an extreme case of in situ acceleration and energisation of relativistic particles, as the collimated hydrodynamical backflow of synchrotron plasma in the primary lobes impinges obliquely upon the prominent gaseous disk (and its likely synchrotron plasma halo) within the host galaxy and rebounds due to the tension of its magnetic field lines, which are compressed by the impact of the collimated backflows from opposite sides. Lastly, we have drawn attention to a new and intriguing morphological symmetry whereby the two primary lobes of an XRG, although parallel to each other, have a distinct lateral offset. Explaining this morphological anomaly appears even more challenging than explaining the similar pattern found for the wings in several XRGs.
\section*{Acknowledgements}
GK acknowledges a Senior Scientist fellowship of the Indian National Science Academy. We would like to dedicate this work to the late Prof. S.M. Chitre who, together with one of the authors (GK), introduced the concept of jet-shell interactions in radio galaxies \citep{GKCHITRE83}. The acknowledgment for LoTSS data usage can be found at \url{https://lofar-surveys.org/credits.html}.
The VLA archival data were obtained via the NRAO VLA Archive Survey, (c) AUI/NRAO.
We acknowledge that this work has made use of \textsc{aplpy} \citep{apl}.
\bibliographystyle{aa}
\section{Introduction}
The direct detection of gravitational waves emitted from binary black hole mergers, as well as the first image of the supermassive black hole at the center of the M87 galaxy, are two important breakthroughs in modern physics. These two crucial achievements have ushered in a golden era in which exploring strong gravity regimes, such as the vicinity of black holes, can indeed be realized. In particular, it turns out that these extremely compact objects are so far the best candidates for testing General Relativity (GR) or even for testing fundamental quantum theories of gravity, such as Loop Quantum Gravity (LQG).
LQG is a non-perturbative quantum gravity approach, within which the essence of spacetime is characterized by discretized eigenvalues of a set of geometrical operators. This discrete nature, in particular the non-zero area gap associated with the geometrical operators, plays a key role in removing classical singularities in the theory. Moreover, within the framework of LQG and considering the semi-classical regimes, one can construct various effective (non-rotating) black hole models, with one common feature that the classical singularity is usually replaced by a transition surface that connects a black hole region and a white hole region \cite{Gambini:2013ooa,Corichi:2015xia,Olmedo:2017lvt,Ashtekar:2018lag,Ashtekar:2018cay,Bodendorfer:2019xbp,Bodendorfer:2019cyv,Arruga:2019kyd,Assanioussi:2019twp,BenAchour:2020gon,Gambini:2020nsf,Bodendorfer:2019nvy,Bodendorfer:2019jay,Blanchette:2020kkk,Assanioussi:2020ezr} (see \cite{Bojowald:2020dkb} for a recent review){\footnote{Upon using different quantization approaches, the classical singularities inside a non-rotating black hole could be altered into other interesting scenarios, such as the Nariai spacetime \cite{Boehmer:2007ket} or an Euclidean region \cite{Bojowald:2018xxu}. The formation of the inner horizon is also possible \cite{BenAchour:2018khr,Kelly:2020uwj}.}}. However, the technical difficulties of invoking real-valued Ashtekar-Barbero variables in axisymmetric spacetimes \cite{Frodden:2012en,Gambini:2020fnd} largely hinder the progress of constructing effective LQG models for rotating black holes. In the literature, some attempts along this line have been made by adopting the Newman-Janis Algorithm (NJA) \cite{Caravelli:2010ff,DeLorenzo:2015taa,Liu:2020ola,Brahma:2020eos}. As a metric-generating method, NJA works quite well in generating the Kerr and Kerr-Newman metrics starting from their non-rotating counterparts, called seed metrics \cite{Newman:1965tw}. Although it is challenging at this point to justify the validity of using NJA beyond GR, it may still allow us to construct effective models for rotating black holes, which could capture the key features that LQG black holes are supposed to have.
As we have just mentioned, although there are various ways of constructing effective non-rotating LQG black holes, one common feature is that the classical singularity is replaced by a transition surface, where the radius of the 2-sphere acquires its minimum. Based on this observation, one could naively guess the possible spacetime structure that a rotating LQG black hole could have. Such a conjecture has been made in \cite{Brahma:2020eos}. The authors of \cite{Brahma:2020eos} argued that depending on the relative location of the transition surface and the two horizons{\footnote{One direct consequence when a black hole starts spinning is the appearance of the inner horizon.}}, a rotating LQG black hole could appear as a wormhole, a black hole with one horizon, or a black hole with two horizons. In \cite{Brahma:2020eos}, the rotating counterpart of a particular LQG black hole model \cite{Bodendorfer:2019nvy,Bodendorfer:2019jay} was constructed as a toy model to support the general conjecture of the paper. In addition, possible astrophysical implications and observational consequences were discussed.
In this paper, instead of starting with a specific LQG black hole model, we would like to strengthen the above conjecture on more general grounds. More explicitly, we start with the following assumptions: \textrm{(i)} The resultant metric after performing NJA can effectively describe the rotating counterpart of a LQG black hole. \textrm{(ii)} The non-rotating LQG black holes asymptotically reduce to the Schwarzschild black holes at spatial infinity, and replace the classical singularity with a transition surface. \textrm{(iii)} Quantum corrections start to have substantial effects on the spacetime geometry only in the region sufficiently close to the transition surface. We will show that, based on these fair assumptions, the general conjecture made in \cite{Brahma:2020eos} can already be supported. Two specific effective LQG black hole models will also be provided to support our arguments.
The paper is outlined as follows. In sec.~\ref{sec.NJA}, we quickly review how NJA and its revised version work in general to obtain rotating spacetimes from a non-rotating seed metric. Then, in sec.~\ref{sec.assumption}, we put down the minimal set of assumptions on the seed metric functions of a non-rotating LQG black hole. Based on these fair assumptions, we make conjectures on the possible spacetime structures of rotating LQG black holes in sec.~\ref{sec.rotating}. We finally conclude in sec.~\ref{sec.conclusion}.
\section{Revised Newman-Janis Algorithm}\label{sec.NJA}
In this work, we first assume that the rotating LQG black hole metric can be effectively derived using NJA. This assumption allows us to make some quantitative statements on the phenomenology of the model. In this section, we will briefly review NJA \cite{Newman:1965tw} and how its revised version \cite{Azreg-Ainou:2014pra} works.
The NJA starts with a general static and spherically symmetric seed metric:
\begin{equation}
ds^2=-g(y)dt^2+\frac{dy^2}{g(y)}+b(y)^2d\Omega_2^2\,,\label{NJAseed}
\end{equation}
where $g(y)$ and $b(y)$ are metric functions and they are expressed in terms of a radial variable $y$. The seed metric can be recast into the advanced null coordinate system $(u,y,\theta,\phi)$ by defining the following variables
\begin{equation}
u\equiv t-y_*\,,\qquad \frac{dy_*}{dy}\equiv\frac{1}{g(y)}\,.
\end{equation}
In this way, the inverse metric can be expressed using a null tetrad $Z_a^\mu=\left(l^\mu,n^\mu,m^\mu,\bar{m}^\mu\right)$ via
\begin{equation}
g^{\mu\nu}=-l^\mu n^\nu-l^\nu n^\mu+m^\mu\bar{m}^\nu+m^\nu\bar{m}^\mu\,,\label{metrictetrad}
\end{equation}
where $\bar{m}^\mu$ is the complex conjugate of $m^\mu$. The explicit expression of the null tetrad $Z_a^\mu$ is not shown here because it is not very informative (see \cite{Newman:1965tw,Azreg-Ainou:2014pra} for the detailed expression).
The most important step in NJA is to perform a complex shift on the advanced null coordinates
\begin{equation}
u'=u-ia\cos\theta\,,\qquad y'=y+ia\cos\theta\,,
\end{equation}
where $a$ will be later regarded as the spin of the spacetime. Note that the angular coordinates $\theta$ and $\phi$ remain unchanged. After the complex shift, the coordinate set becomes $(u',y',\theta,\phi)$. Also, the metric functions can in general be expressed as functions of $y$ and $\theta$ (after dropping the prime for simplicity). Let us denote them as
\begin{equation}
g(y)\rightarrow G(y,\theta)\,,\qquad b(y)^2\rightarrow\Psi(y,\theta)\,,
\end{equation}
for the time being. After the complex shift, the set of null tetrad basis is changed and one can use Eq.~\eqref{metrictetrad} to obtain a new line element in the advanced null coordinates as follows
\begin{align}
ds^2=&-2dudy+2a\sin^2\theta\left(G-1\right)dud\phi-Gdu^2+\Psi d\theta^2+2a\sin^2\theta dyd\phi\nonumber\\
&+\sin^2\theta\left[\Psi+a^2\sin^2\theta\left(2-G\right)\right]d\phi^2\,.\label{NJA1}
\end{align}
At this point, the metric functions $G(y,\theta)$ and $\Psi(y,\theta)$ remain undetermined. They can actually be determined in the last step of NJA, or more precisely, the last step of the revised version of NJA. This step is to rewrite the metric \eqref{NJA1} in the Boyer-Lindquist coordinate system $(t,y,\theta,\varphi)$, in which the $g_{t\varphi}$ component is the only off-diagonal metric component. This can be done by considering the following transformations:
\begin{equation}
du=dt+\lambda_1(y)dy\,,\qquad d\phi=d\varphi+\lambda_2(y)dy\,,\label{coordinatetran}
\end{equation}
where
\begin{equation}
\lambda_1(y)=-\frac{\Psi(y,\theta)+a^2\sin^2\theta}{G(y,\theta)\Psi(y,\theta)+a^2\sin^2\theta}\,,\quad\lambda_2(y)=-\frac{a}{G(y,\theta)\Psi(y,\theta)+a^2\sin^2\theta}\,.
\end{equation}
As one can clearly see, in order to have well-defined coordinate transformations \eqref{coordinatetran}, $\lambda_1$ and $\lambda_2$ have to be functions of $y$ only. This is in general not possible for arbitrary $G(y,\theta)$ and $\Psi(y,\theta)$. Even if one considers the standard complexification procedure in the original NJA, the coordinate transformations \eqref{coordinatetran} are still not guaranteed to be well-defined. However, in the revised NJA \citep{Azreg-Ainou:2014pra}, the metric functions $G(y,\theta)$ and $\Psi(y,\theta)$ are specifically chosen such that the transformations \eqref{coordinatetran} are ensured to be well-defined. More explicitly, the metric functions are chosen to be related to those of the seed metric as follows
\begin{equation}
G(y,\theta)=\frac{g(y)b(y)^2+a^2\cos^2\theta}{\Psi(y,\theta)}\,,\qquad \Psi(y,\theta)=b(y)^2+a^2\cos^2\theta\,.
\end{equation}
Then, the functions $\lambda_1$ and $\lambda_2$ are guaranteed to be functions of $y$ only, and the transformations \eqref{coordinatetran} are well-defined.
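This cancellation can be checked symbolically. The short \texttt{sympy} sketch below (a verification aid, not part of the derivation) confirms that, with the above choices of $G$ and $\Psi$, both $\lambda_1$ and $\lambda_2$ are independent of $\theta$ for arbitrary seed functions $g(y)$ and $b(y)$.
\begin{verbatim}
# Check that the theta dependence of lambda_1 and lambda_2 cancels
# with the revised-NJA choices of G and Psi.
import sympy as sp

y, th, a = sp.symbols('y theta a', real=True)
g = sp.Function('g')(y)
b = sp.Function('b')(y)

Psi = b**2 + a**2 * sp.cos(th)**2
G = (g * b**2 + a**2 * sp.cos(th)**2) / Psi

lam1 = -(Psi + a**2 * sp.sin(th)**2) / (G * Psi + a**2 * sp.sin(th)**2)
lam2 = -a / (G * Psi + a**2 * sp.sin(th)**2)

print(sp.simplify(sp.diff(lam1, th)))  # -> 0
print(sp.simplify(sp.diff(lam2, th)))  # -> 0
\end{verbatim}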
After imposing the coordinate transformations, the final expression of the rotating spacetime, in the Boyer-Lindquist coordinates, can be written as
\begin{equation}
ds^2=-\left(1-\frac{2Mb}{\rho^2}\right)dt^2-\frac{4aMb\sin^2\theta}{\rho^2}dtd\varphi+\rho^2d\theta^2+\frac{\rho^2dy^2}{\Delta}+\frac{\Sigma\sin^2\theta}{\rho^2}d\varphi^2\,,\label{NJAmetricfinal}
\end{equation}
where
\begin{align}
\rho^2=b(y)^2+a^2\cos^2\theta\,,\quad M=M(y)\equiv b(y)\left(1-g(y)\right)/2\,,\nonumber\\
\Delta=\Delta(y)\equiv g(y)b(y)^2+a^2\,,\quad \Sigma=\left(a^2+b(y)^2\right)^2-a^2\Delta\sin^2\theta\,.
\end{align}
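As a quick sanity check (illustrative only), one can verify that the Schwarzschild seed, $b(y)=y$ and $g(y)=1-2M_B/y$, reduces the quantities above to their Kerr values, $M(y)=M_B$ and $\Delta=y^2-2M_B y+a^2$.
\begin{verbatim}
# Kerr limit of the rotating metric: Schwarzschild seed functions.
import sympy as sp

y, a, MB = sp.symbols('y a M_B', positive=True)
b = y
g = 1 - 2 * MB / y

M = b * (1 - g) / 2              # mass function M(y)
Delta = g * b**2 + a**2          # horizon function Delta(y)

print(sp.simplify(M))                             # -> M_B
print(sp.expand(Delta - (y**2 - 2*MB*y + a**2)))  # -> 0
\end{verbatim}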
\section{Assumptions on the seed metric}\label{sec.assumption}
Having written down the general expression of the rotating metric \eqref{NJAmetricfinal}, one can observe that the rotating metric can actually be expressed in terms of the seed metric functions. Therefore, any assumptions or restrictions on the seed metric \eqref{NJAseed} would have direct consequences on the rotating metric. In this section, we will outline the assumptions on the seed metric functions $g(y)$ and $b(y)$ before we further discuss the possible spacetime structures of the rotating spacetime.
We assume that the seed metric for non-rotating LQG black holes has the following properties
\begin{arabiclist}
\item The seed metric has a non-degenerate event horizon at $y=y_h\ne0$, such that $g(y_h)=0$, $g'(y_h)>0$, and $b(y_h)\ne0$, where the prime denotes the derivative with respect to $y$.
\item The quantum effects replace the classical singularity by a spacelike transition surface inside the event horizon. Without loss of generality, we assume that the transition surface is located at $y=0$, at which the radius of the 2-sphere has a minimum value, i.e., $b(0)=b_0>0$ and $b'(0)=0$. Also, since the transition surface is spacelike, we require $g(0)<0$. The transition surface at $y=0$ connects a black hole region on one side and a white hole region on the other side. The regions of positive (negative) $y$ correspond to the black (white) hole regions. To be more explicit, we may assume that the metric functions near the transition surface can be expanded as follows:
\begin{equation}
g(y)=-|g_0|+g_2y^2+\mathcal{O}(y^3)\,,\quad b(y)=b_0+b_2y^2+\mathcal{O}(y^3)\,.\label{taylory0}
\end{equation}
Note that upon requiring the spacetimes to be symmetric in the black hole and white hole regions at least near the transition surface, one can have $g'(0)=0$.
\item The seed metric asymptotically reduces to the Schwarzschild one, namely, when $|y|\rightarrow\infty$, we have
\begin{equation}
b(y)\rightarrow |y|\,,\qquad g(y)\rightarrow 1-\frac{2M_B}{b(y)}\rightarrow1-\frac{2M_B}{|y|}\,,
\end{equation}
where $M_B$ is the Arnowitt-Deser-Misner mass of the black hole. This assumption ensures that the rotating counterpart \eqref{NJAmetricfinal} asymptotically reduces to Kerr metric when $|y|\rightarrow\infty$.
\item Quantum effects are sizable only in the vicinity of the transition surface.
\end{arabiclist}
The last assumption can be stated more mathematically. Essentially, if we collect all the quantum effects and regard their associated geometric corrections as an \textit{effective} matter content, the LQG black hole model can be formally governed by the Einstein equation with an effective energy-momentum tensor on the right-hand side:
\begin{equation}
G_{\mu\nu}=T^{\textrm{eff}}_{\mu\nu}\,,
\end{equation}
where $G_{\mu\nu}$ is the Einstein tensor and we have used the convention $8\pi G_N=1$. Naively, the effective energy-momentum tensor associated with quantum effects would violate energy conditions, such that it provides a sort of repulsive force preventing gravitational collapse. In this regard, the last assumption can be translated into the requirement that \textit{the effective energy-momentum tensor satisfies the strong energy condition inside the event horizon, except for the region very close to the transition surface.}
The fulfillment of energy conditions can actually restrict the behavior of the metric functions significantly. First, according to the results in \cite{Yang:2021civ}, if the strong energy condition is satisfied inside a static black hole, there is at most one non-degenerate inner horizon inside every connected branch of a black hole event horizon. Since in our case the transition surface inside the event horizon is spacelike, there cannot exist any inner horizon within the region where the strong energy condition is satisfied. As long as the strong energy condition can be violated only very close to the transition surface, the spacetime has only one horizon at $y=y_h$ on one side of the transition surface, i.e., $g(y)$ has a single root at $y=y_h$ in the region $y\ge0$.
Indeed, the fulfillment of the strong energy condition inside the event horizon of the seed metric \eqref{NJAseed} implies
\begin{equation}
I_1(y)\equiv\frac{2b'(y)g'(y)}{b(y)}+g''(y)+4g(y)\frac{b''(y)}{b(y)}\ge0\,,\qquad I_2(y)\equiv g(y)\frac{b''(y)}{b(y)}\ge0\,.\label{sec}
\end{equation}
To proceed, we define the function
\begin{equation}
F(y)\equiv g(y)b(y)^2\,,
\end{equation}
and consider the following combination of its derivatives
\begin{equation}
\frac{1}{b(y)^2}\left[F''(y)-\frac{2b'(y)}{b(y)}F'(y)\right]=I_1(y)-2g(y)\frac{b'(y)^2}{b(y)^2}-2I_2(y)\,.\label{seccheck}
\end{equation}
According to the first inequality of \eqref{sec} and due to the fact that $g(y)<0$ inside the horizon, the combination of the first two terms on the right-hand side of Eq.~\eqref{seccheck} is non-negative. In addition, we further assume that in the region where strong energy condition is satisfied, the metric function $b(y)$ can already be well approximated by $|y|$ such that the contribution from $I_2(y)$ is negligible. Based on these arguments and assumptions, the left-hand side of Eq.~\eqref{seccheck} is non-negative, meaning that the function $F(y)$ cannot have local maxima inside the horizon, except for the region very close to the transition surface. For simplicity, we will directly assume that the local maximum of the function $F(y)$, if it has any, can only appear at the transition surface.
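The identity \eqref{seccheck} itself can be verified symbolically; the short \texttt{sympy} sketch below (a consistency check only) confirms that its two sides agree for arbitrary $g(y)$ and $b(y)$.
\begin{verbatim}
# Symbolic check of the identity relating F = g*b^2 to I_1 and I_2.
import sympy as sp

y = sp.symbols('y')
g = sp.Function('g')(y)
b = sp.Function('b')(y)

F = g * b**2
I1 = 2*b.diff(y)*g.diff(y)/b + g.diff(y, 2) + 4*g*b.diff(y, 2)/b
I2 = g * b.diff(y, 2) / b

lhs = (F.diff(y, 2) - 2*b.diff(y)/b * F.diff(y)) / b**2
rhs = I1 - 2*g*b.diff(y)**2/b**2 - 2*I2
print(sp.simplify(lhs - rhs))  # -> 0
\end{verbatim}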
In the next section, we will show that, if the rotating counterpart of a LQG black hole can be described effectively by the metric \eqref{NJAmetricfinal}, its spacetime structure can already be substantially determined based on the above assumptions.
\section{Spacetime structure of a rotating LQG black hole}\label{sec.rotating}
According to the rotating metric \eqref{NJAmetricfinal}, the event horizon of the metric is given by
\begin{equation}
\Delta(y)\equiv F(y)+a^2=0\,.
\end{equation}
Note that when $a=0$, the horizon is located at $y=y_h$ because $g(y_h)=0$. Therefore, the question of how many event horizons there are reduces to the following mathematical one: \textit{How many roots can the equation $F(y)+a^2=0$ have in the region $y>0$?}
The first observation is that once a non-zero spin $a$ is included, the function $F(y)+a^2$ becomes positive at $y=y_h$. Outside $y_h$, the metric functions $g(y)$ and $b(y)$ gradually reduce to their Schwarzschild counterparts, based on assumption 3. Therefore, the number of horizons is determined by the number of roots of $F(y)+a^2$ within the region $0<y<y_h$, and this turns out to depend strongly on the sign of $F(0)+a^2$.
Let us split the discussion according to whether the value of $F(0)+a^2$ is negative, zero, or positive (a numerical sketch illustrating these cases follows the list):
\begin{itemize}
\item $F(0)+a^2<0$:\\
In this case, it is apparent that there must be at least one root in the region $0<y<y_h$ because $F(y_h)+a^2>0$. In addition, if we adopt the assumption that the function $F(y)$ can possibly have a local maximum only at $y=0$ (assumption 4), then $F(y)+a^2$ can only have a single root between $y=0$ and $y=y_h$. Therefore, the spacetime contains one event horizon, behind which hides a spacelike transition surface.
\item $F(0)+a^2=0$:\\
In this case, the transition surface itself becomes an event horizon of the spacetime. The detailed spacetime structure then depends on whether the function $F(y)+a^2$ at $y=0$ is a local maximum or a local minimum. Mathematically, we can use Eqs.~\eqref{taylory0} to obtain the expansion of the function $F(y)+a^2$ near the transition surface:
\begin{align}
F(y)+a^2&=F(0)+a^2+\left(b_0g_2-|g_0|b_2\right)y^2+\mathcal{O}(y^3)\nonumber\\
&=\left(b_0g_2-|g_0|b_2\right)y^2+\mathcal{O}(y^3)\,.\label{quadraticcoeff}
\end{align}
If the quadratic coefficient is positive (negative), the function $F(y)+a^2$ has a local minimum (maximum) at $y=0$.
\begin{itemize}
\item Local minimum at $y=0$ ($b_0g_2-|g_0|b_2>0$):\\
In this case, the function $F(y)+a^2$ monotonically increases with respect to $y$ in the region $0<y<y_h$. Therefore, the transition surface itself is the only event horizon in $y\ge0$ and it is a null surface.
\item Local maximum at $y=0$ ($b_0g_2-|g_0|b_2<0$):\\
In this case, when $y$ increases, the function $F(y)+a^2$ would first decrease and then increase up to $y=y_h$. Therefore, in addition to the transition surface itself, there is another event horizon outside the transition surface. We then have a non-singular black hole with two horizons, with the transition surface itself being the inner horizon.
\item Undetermined case ($b_0g_2-|g_0|b_2=0$):\\
It is indeed possible that the coefficients in the expansions \eqref{taylory0} are so fine-tuned that the quadratic coefficient in Eq.~\eqref{quadraticcoeff} vanishes. In this case, whether $F(y)+a^2$ at $y=0$ is really a local maximum or minimum depends on the sign of the higher-order coefficients in the expansion. However, the following argument is still valid: If the function $F(y)+a^2$ has a local minimum (maximum) at $y=0$, there is one (two) event horizon(s) in $y\ge0$.
\end{itemize}
\item $F(0)+a^2>0$:\\
In this case, the function $F(y)+a^2$ is positive both at $y=0$ and $y=y_h$. Similar to the case for $F(0)+a^2=0$, the number of horizons also depends on whether the function $F(y)+a^2$ has a local maximum or a local minimum at $y=0$.
\begin{itemize}
\item Local minimum at $y=0$:\\
This case includes both the possibility that $b_0g_2-|g_0|b_2>0$ and the fine-tuned case where $b_0g_2-|g_0|b_2=0$ while higher-order coefficients imply a local minimum at $y=0$. In this case, there is no root of $F(y)+a^2=0$ between $y=0$ and $y=y_h$. Therefore, there is no event horizon in the spacetime. The transition surface is naked and is a timelike surface, giving rise to a wormhole geometry.
\item Local maximum at $y=0$:\\
This case includes both the possibility that $b_0g_2-|g_0|b_2<0$ and the fine-tuned case where $b_0g_2-|g_0|b_2=0$ while higher-order coefficients imply a local maximum at $y=0$. In this case, the number of horizons depends on whether the local minimum of $F(y)+a^2$ between $y=0$ and $y=y_h$, say at $y=y_m$, is positive, zero, or negative.
\begin{itemize}
\item If $F(y_m)+a^2>0$, there is no root within $0<y<y_h$. Therefore, the spacetime has no event horizon and becomes a wormhole with a timelike transition surface.
\item If $F(y_m)+a^2=0$, the local minimum is zero. In this case, there is a degenerated horizon at $y=y_m$. The transition surface is inside the degenerated horizon and is timelike.
\item If $F(y_m)+a^2<0$, there are two event horizons, one exterior horizon at $y_m<y<y_h$ and the other inner horizon at $0<y<y_m$. The spacetime is a non-singular black hole with two horizons. The transition surface is timelike and hidden inside the inner horizon.
\end{itemize}
\end{itemize}
\end{itemize}
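The case analysis above can be illustrated numerically. The sketch below adopts a toy seed metric satisfying assumptions 1--3 (a black-bounce-type profile; this specific choice is purely illustrative and not derived from LQG) and simply counts the sign changes of $\Delta(y)=F(y)+a^2$; tangential (degenerate-horizon) roots are ignored for simplicity.
\begin{verbatim}
# Toy classification of the rotating spacetime by counting the sign
# changes of Delta(y) = F(y) + a^2 in y >= 0, for the illustrative
# seed b(y) = sqrt(y^2 + b0^2), g(y) = 1 - 2M/b(y) with b0 < 2M.
import numpy as np

def classify(M=1.0, b0=0.5, a=0.0, n=200001):
    b = lambda y: np.sqrt(y**2 + b0**2)
    g = lambda y: 1 - 2*M / b(y)
    y = np.linspace(0.0, 4*M, n)
    D = g(y) * b(y)**2 + a**2
    changes = np.sum(np.diff(np.sign(D)) != 0)
    if changes == 0:
        return "no horizon: wormhole with a timelike transition surface"
    if changes == 1:
        return "one horizon hiding a spacelike transition surface"
    return "two horizons with a timelike transition surface inside"

for a in (0.0, 0.6, 0.95, 1.05):
    print(f"a = {a}: {classify(a=a)}")
\end{verbatim}
For $M=1$ and $b_0=0.5$, the four spin values above respectively yield one horizon (for $a=0$ and $a=0.6$), two horizons, and a horizonless wormhole, in accordance with the discussion above.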
\begin{figure}[t]
\centerline{\psfig{file=rotatingLQGparamerter.pdf,width=4in}}
\vspace*{8pt}
\caption{This figure shows the spacetime structure of the metric \eqref{NJAmetricfinal} within different regions of the parameter space.\label{fig1}}
\end{figure}
\begin{figure}[t]
\centerline{\psfig{file=BMM2.pdf,width=2.8in}\psfig{file=AOS.pdf,width=2.8in}}
\vspace*{8pt}
\caption{The spacetime structures of the rotating counterparts of the BMM (left) and the AOS (right) models, respectively. The rotating metrics are obtained through NJA. The vertical axis in each figure represents the quantum parameters of the models. The horizontal axis labels the spin of the black hole. Regions I, II, and III represent wormholes, black holes with one horizon, and black holes with two horizons, respectively (see Fig.~\ref{fig1}). \label{fig2}}
\end{figure}
The above discussions are summarized graphically in Fig.~\ref{fig1}. In this figure, the horizontal and the vertical axes denote the values of $F(0)+a^2$ and $b_0g_2-|g_0|b_2$, respectively. The parameter space toward the right of the horizontal axis corresponds to an increasing black hole spin. On the other hand, the parameter space toward the top of the vertical axis corresponds to larger deviations from the Kerr metric, i.e., larger quantum parameters. This can be understood naively from the fact that the minimum value of the 2-sphere radius $b_0$ is expected to approach zero in the GR limit. In addition, the metric function $g(y)$ at the transition surface should approach minus infinity in the same limit, i.e., $|g_0|\rightarrow\infty$.
According to Fig.~\ref{fig1}, one can see that the values $F(0)+a^2$ and $b_0g_2-|g_0|b_2$ are zero at the intersection of the two axes. On the left of the vertical axis (cyan region), $F(0)+a^2$ is negative and the spacetime represents a black hole with only one horizon, behind which lies a spacelike transition surface. This includes the parameter space of non-rotating LQG black holes ($a=0$).
When the value of $F(0)+a^2$ is positive, as mentioned before, the spacetime can represent either a wormhole ($b_0g_2-|g_0|b_2>0$, pink region) or a black hole with two event horizons ($b_0g_2-|g_0|b_2<0$, light gray region), depending on the sign of $b_0g_2-|g_0|b_2$. In particular, in the parameter space where $b_0g_2-|g_0|b_2<0$, if the spin is sufficiently large such that $F(y_m)+a^2$ is positive, the event horizon disappears and the spacetime represents a wormhole. Note that as long as $F(0)+a^2$ is positive, the transition surface is always timelike.
Furthermore, on the boundary between the cyan and the pink regions (red line), the transition surface itself is the only event horizon in the spacetime. On the other hand, on the boundary between the cyan and the light gray regions (blue line), there are two horizons, with the inner one being the transition surface itself.
Before closing this section, in which we have provided a general discussion on the possible spacetime structures for rotating LQG black holes, we would like to give two specific examples to support our arguments. We consider the non-rotating LQG black hole models proposed by Bodendorfer-Mele-M\"unch (BMM) \cite{Bodendorfer:2019nvy,Bodendorfer:2019jay} and Ashtekar-Olmedo-Singh (AOS) \cite{Ashtekar:2018lag,Ashtekar:2018cay,Ashtekar:2020ckv,Devi:2021ctm}. In both of these non-rotating black hole models, the classical Schwarzschild singularity is replaced by a spacelike transition surface inside the event horizon. We then construct their rotating counterparts using NJA and show their spacetime structures in Fig.~\ref{fig2}. The horizontal axis in each figure represents the spin of the black hole. As for the vertical axes, the quantities $A_\lambda$ and $Q$ stand for the quantum parameters in the BMM model and the AOS model, respectively. One can clearly identify the cyan, pink, and light gray regions in each figure with their counterparts in Fig.~\ref{fig1}. The fine-tuned cases given by the red and blue curves can also be identified. This means that these two particular examples fit the general discussion above perfectly well. We expect that our general argument also applies to other LQG black hole models, as long as their non-rotating models have a transition surface inside the event horizon. Note also that similar spacetime structures can be obtained in a purely phenomenological way \cite{Mazza:2021rgq}, without resorting to quantum gravitational approaches.
\section{Conclusions}\label{sec.conclusion}
In this work, we give a conjecture on what the spacetime structure of a rotating LQG black hole may look like, based on a few general assumptions on the metric functions of its non-rotating counterpart. The first assumption, and perhaps the most controversial one, is that the rotating metric can be constructed effectively by applying NJA to a non-rotating seed metric. This assumption allows us, at least qualitatively, to address the physical consequences of rotating LQG black holes, given that a self-consistent LQG treatment of axisymmetric spacetimes is still lacking. Another assumption is that the classical Schwarzschild singularity is replaced by a spacelike transition surface for non-rotating LQG black holes. This is a fair assumption because the transition surface is a quite common feature of many LQG black hole models. Finally, we assume that quantum effects are concentrated only near the transition surface. This assumption allows us to reasonably restrict the behavior of the seed metric functions.
Our argument can be understood as follows: In addition to the original event horizon, a non-zero spin of the black hole generically generates one more event horizon in the spacetime. Depending on the relative location of the transition surface and the two horizons, the spacetime structure of a rotating LQG black hole can be either a wormhole with a timelike transition surface, a black hole with one horizon and a spacelike transition surface, or a black hole with two horizons and a timelike transition surface. Our results are expected to fit most effective LQG black hole models, and to provide hints for the future development of a more mathematically consistent construction of rotating LQG black holes.
\section*{Acknowledgments}
CYC is supported by Institute of Physics in Academia Sinica, Taiwan.
\section{Introduction}
Recent studies~\citep{zela2020understanding,liang2019darts,chu2019fair} have shown that one critical issue for differentiable architecture search \citep{liu2018darts} is the performance collapse caused by superfluous skip connections. Accordingly, some empirical indicators for detecting the occurrence of collapse have been proposed. R-DARTS \citep{zela2020understanding} shows that the loss landscape has more curvature (characterized by higher Hessian eigenvalues w.r.t.\ architectural weights) when the derived architecture generalizes poorly. By regularizing towards a lower Hessian eigenvalue, \cite{zela2020understanding,chen2020stabilizing} attempt to stabilize the search process. Meanwhile, by directly constraining the number of skip connections to a fixed value (typically 2), the collapse issue becomes less pronounced \citep{chen2019progressive,liang2019darts}.
These indicator-based approaches have several main drawbacks. Firstly, robustness relies heavily on the quality of the indicator: an imprecise indicator either inevitably accepts poor models or mistakenly rejects good ones. Secondly, indicators impose strong priors by directly manipulating the inferred model, which is somewhat suspicious, akin to touching the test set. Thirdly, extra computing cost \citep{zela2020understanding} or careful tuning of hyper-parameters \citep{chen2019progressive,liang2019darts} is required. Therefore, it is natural to ask the following questions:
\begin{wrapfigure}{R}{0.5\columnwidth}
\vskip -5pt
\centering
\includegraphics[width=0.5\columnwidth]{figures/darts--illustration_v2.pdf}
\vspace{-15pt}
\caption{Schematic illustration of (a) DARTS and (b) the proposed DARTS-, featuring an auxiliary skip connection (thick red line) with a decay rate $\beta$ between every two nodes to remove the potential unfair advantage that leads to performance collapse.}
\label{fig:darts-illustration}
\vskip -15pt
\end{wrapfigure}
\begin{itemize}
\item Can we resolve the collapse without handcrafted indicators or restrictions that interfere with the searching and/or discretization procedure?
\item Is it possible to achieve robustness in DARTS without tuning extra hyper-parameters?
\end{itemize}
To answer the above questions, we propose an effective and efficient approach to stabilize DARTS. Our contributions can be summarized as follows:
\textbf{New Paradigm to Stabilize DARTS.}
Empirically observing that current indicators~\citep{zela2020understanding,chen2020stabilizing} can avoid performance collapse only at the cost of reduced exploration coverage of the search space, we propose a novel \emph{indicator-free} approach to stabilize DARTS, referred to as DARTS-\footnote{We name it so as we take an inward route, as opposed to the outward ones that design new indicators, add extra cost, and introduce new hyper-parameters.}, which involves an auxiliary skip connection (see Figure~\ref{fig:darts-illustration}) to remove the \emph{unfair advantage} \citep{chu2019fair} in the searching phase.
\textbf{Strong Robustness and Stabilization.}
We conduct thorough experiments across seven search spaces and three datasets to demonstrate the effectiveness of our method. Specifically, our approach robustly obtains state-of-the-art results on four search spaces at 3$\times$ lower search cost than R-DARTS \citep{zela2020understanding}, which requires four independent runs to report the final performance.
\textbf{Seamless Plug-in Combination with DARTS Variants.}
We conduct experiments to demonstrate that our approach can work seamlessly with other orthogonal DARTS variants, removing their handcrafted indicators without extra overhead. In particular, our approach improves accuracy by $0.8\%$ for P-DARTS and by $0.25\%$ for PC-DARTS on the CIFAR-10 dataset.
\section{Related Work}
\textbf{Neural architecture search and DARTS variants.}
Over the years, researchers have sought to automatically discover neural architectures for various deep learning tasks to relieve humans from this tedious effort, ranging from image classification \citep{zoph2017learning}, object detection \citep{ghiasi2019fpn}, and image segmentation \citep{liu2019auto} to machine translation \citep{so2019evolved}, etc. Among the many proposed approaches, Differentiable Architecture Search \citep{liu2018darts} features weight-sharing and resolves the searching problem via gradient descent, which is very efficient and easy to generalize. A short description of DARTS can be found in \ref{app:prelim}. Since then, many subsequent works have been dedicated to accelerating the process \citep{dong2019searching}, reducing memory cost \citep{xu2020pcdarts}, or extending its abilities, such as hardware-awareness \citep{cai2018proxylessnas,wu2018fbnet}, finer granularity \citep{mei2020atomnas}, and so on. However, regardless of these endeavors, a fundamental issue of DARTS, namely its searching performance collapse, remains not properly solved, which severely limits its application.
\textbf{Robustifying DARTS.} As DARTS \citep{liu2018darts} is known to be unstable as a result of performance collapse \citep{chu2019fair}, some recent works have been devoted to resolving it by either designing indicators of the collapse, such as Hessian eigenvalues \citep{zela2020understanding}, or adding perturbations to regularize such an indicator \citep{chen2020stabilizing}. Both methods rely heavily on the indicator's accuracy, i.e., on the extent to which the indicator correlates with the performance collapse. Other methods like Progressive DARTS \citep{chen2019progressive} and DARTS+ \citep{liang2019darts} employ a strong human prior, i.e., limiting the number of skip connections to a fixed value. Fair DARTS \citep{chu2019fair} argues that the collapse results from the \emph{unfair advantage} in an exclusively competitive environment, from which skip connections overly benefit, causing their abundant aggregation. To suppress such an advantage from overshooting, they convert the competition into collaboration, where each operation is independent of the others. This is, however, an indirect approach. SGAS \citep{li2019sgas}, instead, circumvents the problem with a greedy strategy whereby the unfair advantage can be prevented from taking effect. Nevertheless, potentially good operations might be pruned out too early because of greedy underestimation.
\section{DARTS-}
\subsection{Motivation}
We start with a detailed analysis of the role of skip connections. Skip connections were proposed to construct the residual block in ResNet~\citep{he2016deep}, which significantly improves training stability. It is even possible to deepen the network to hundreds of layers without accuracy degradation by simply stacking such blocks. In contrast, stacking the plain blocks of VGG degrades performance as the network gets deeper. Besides, \cite{ren2015faster, wei2017boosting,tai2017image,li2018multi} also empirically demonstrate that deep residual networks can achieve better performance on various tasks.
From the perspective of gradient flow, the skip connection alleviates the gradient vanishing problem.
Given a stack of $n$ residual blocks, the output of the $(i+1)^{\text{th}}$ residual block $X_{i+1}$ can be computed as
$X_{i+1} = f_{i+1}(X_{i}, W_{i+1}) + X_{i}$,
where $f_{i+1}$ denotes the operations of the $(i+1)^{\text{th}}$ residual block with weights $W_{i+1}$. Suppose the loss function of the model is $\mathcal{L}$, and the gradient of $X_{i}$
can be obtained as follows ($\mathbbm{1}$ denotes a tensor whose entries are all ones):
\begin{align}
\frac{\partial \mathcal{L}}{\partial X_{i}} &= \frac{\partial \mathcal{L}}{\partial X_{i+1}} \cdot \left(\frac{\partial f_{i+1}}{\partial X_{i}} + \mathbbm{1} \right)
= \frac{\partial \mathcal{L}}{\partial X_{i+j}} \cdot \prod_{k=1}^{j} \left(\frac{\partial f_{i+k}}{\partial X_{i+k-1}} + \mathbbm{1} \right)
\end{align}
We observe that the gradient of shallow layers always includes the gradient of deep layers, which mitigates the gradient vanishing of $W_i$. Formally we have,
\begin{align}
\frac{\partial \mathcal{L}}{\partial W_{i}}
&= \frac{\partial \mathcal{L}}{\partial X_{i+j}} \cdot \prod_{k=1}^{j} \left(\frac{\partial f_{i+k}}{\partial X_{i+k-1}} + \mathbbm{1} \right) \cdot \frac{\partial f_{i}}{\partial W_{i}}
\end{align}
To analyze how the skip connection affects the performance of residual networks, we introduce a trainable coefficient $\beta$ on all skip connections in ResNet. The gradient of $X_{i}$ then becomes:
\begin{align}
\frac{\partial \mathcal{L}}{\partial X_{i}} &= \frac{\partial \mathcal{L}}{\partial X_{i+1}} \cdot \left(\frac{\partial f_{i+1}}{\partial X_{i}} + \mathbf{\beta} \right)
\end{align}
Once $\beta < 1$, gradients of deep layers gradually vanish during the back-propagation (BP) towards shallow layers. Here $\beta$ controls the memory of gradients in BP to stabilize the training procedure.
\begin{wrapfigure}{R}{0.4\columnwidth}
\vskip -0.1 in
\centering
\includegraphics[width=0.4\columnwidth]{figures/resnet_beta.pdf}
\vspace{-15pt}
\caption{Tendency of the trainable coefficient $\beta$ (initialized with \{0, 0.5, 1\}) of the skip connection in ResNet50, and test accuracy (inset figure), vs. epochs. In all three cases, the residual structure learns a large $\beta$ to ease training. All models are trained and tested on CIFAR-10.}
\label{fig:resnet_beta}
\end{wrapfigure}
We conduct a confirmatory experiment on ResNet50 and show the result in Fig.~\ref{fig:resnet_beta}. By initializing $\beta$ with $\{0, 0.5, 1.0\}$, we can visualize the evolution of $\beta$ over training epochs. We observe that $\beta$ converges towards $1$ after 40 epochs regardless of the initialization, which demonstrates that the residual structure learns to push $\beta$ to a rather large value to alleviate gradient vanishing.
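For concreteness, a minimal PyTorch sketch of such a residual block is given below; the layer composition and the training loop are omitted details, so this should be read as an illustration rather than the exact experimental code.
\begin{verbatim}
# Residual block with a trainable skip coefficient beta, as in the
# confirmatory ResNet50/CIFAR-10 experiment (illustrative sketch).
import torch
import torch.nn as nn

class ScaledResidualBlock(nn.Module):
    def __init__(self, channels, beta_init=0.5):
        super().__init__()
        self.f = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels))
        # trainable skip coefficient, initialized in {0, 0.5, 1}
        self.beta = nn.Parameter(torch.tensor(float(beta_init)))

    def forward(self, x):
        # X_{i+1} = f(X_i) + beta * X_i; beta is learned jointly with W
        return torch.relu(self.f(x) + self.beta * x)
\end{verbatim}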
Similarly, DARTS \citep{liu2018darts} utilizes a trainable parameter $\beta_{skip}$ to denote the importance of the skip connection.
However, in the search stage, $\beta_{skip}$ generally increases and dominates the architecture parameters, finally leading to performance collapse.
We argue that a large $\beta_{skip}$ in DARTS results from two factors: on the one hand, as the supernet automatically learns to alleviate gradient vanishing, it pushes $\beta_{skip}$ to an appropriately large value; on the other hand, the skip connection may indeed be an important operation for the target network, which should then be selected in the discretization stage. As a consequence, the skip connection in DARTS plays a two-fold role: as \emph{an auxiliary connection to stabilize the supernet training}, and as \emph{a candidate operation to build the final network}.
Inspired by the above observation and analysis, we propose to stabilize the search process by distinguishing the two roles of the skip connection and properly handling the issue of gradient flow.
\subsection{Stepping out of the Performance Collapse}
To distinguish the two roles, we introduce an auxiliary skip connection between every two nodes in a cell (see Fig.~\ref{fig:darts-illustration}b).
On the one hand, the fixed auxiliary skip connection carries the function of stabilizing the supernet training, even when $\beta_{skip}$ is rather small. On the other hand, it also breaks the unfair advantage~\citep{chu2019fair} as the advantageous contribution from the residual block is factored out.
Consequently, the learned architectural parameter $\beta_{skip}$ is freed from the role of controlling the memory of gradients, and more precisely represents the relative importance of the skip connection as a candidate operation. In contrast to Eq.~\ref{eq:darts-node-softmax}, the output feature map of edge $e^{(i,j)}$ can now be obtained by Eq.~\ref{eq:ours-node-softmax}, where $\beta_o^{i,j} = \frac{\exp(\alpha_{o}^{(i,j)})}{\sum_{o' \in \mathcal{O}} \exp(\alpha_{o'}^{(i,j)})}$ denotes the normalized importance, and $\beta$ is a coefficient independent of the architecture parameters.
Moreover, to eliminate the impact of the auxiliary connection on the discretization procedure, we propose to decay $\beta$ to 0 during the search phase, so that our method degenerates to standard DARTS at the end of the search. Note that our method is not sensitive to the particular decay strategy, so we choose linear decay by default for simplicity.
\begin{align}\label{eq:ours-node-softmax}
\bar{o}^{(i,j)}(x) &= \beta x + \sum_{o \in \mathcal{O}}\beta^{(i,j)}_o o(x) = \left(\beta+\beta^{(i,j)}_{skip}\right)x + \sum_{o \neq skip}\beta^{(i,j)}_o o(x)
\end{align}
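For clarity, the sketch below shows how a mixed edge implementing the output $\bar{o}^{(i,j)}(x)$ of Eq.~\ref{eq:ours-node-softmax} with a linearly decayed $\beta$ could be written in PyTorch; the candidate operation list and the initialization of $\alpha$ are illustrative assumptions.
\begin{verbatim}
# DARTS- mixed edge: softmax-weighted candidate sum plus an
# auxiliary skip connection whose coefficient beta decays linearly
# to 0 over the search epochs.
import torch
import torch.nn as nn

class MixedEdgeMinus(nn.Module):
    def __init__(self, ops):
        super().__init__()
        self.ops = nn.ModuleList(ops)  # shape-preserving candidate ops
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(ops)))
        self.beta = 1.0                # auxiliary-skip coefficient

    def set_epoch(self, epoch, total_epochs):
        # linear decay: beta_e = 1 at epoch 0, 0 at the end of search
        self.beta = 1.0 - epoch / float(total_epochs)

    def forward(self, x):
        w = torch.softmax(self.alpha, dim=0)      # normalized importance
        mixed = sum(w[k] * op(x) for k, op in enumerate(self.ops))
        return self.beta * x + mixed              # auxiliary skip added
\end{verbatim}
Calling \texttt{set\_epoch} once per epoch mirrors the decay schedule $\beta_e$ in Alg.~\ref{alg:darts-minus}.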
We then analyze how the auxiliary skip connection handles the issue of gradient flow. Referring to a theorem in the recent work of \cite{zhou2020theory}, the convergence of the network weights $\mathbf{W}$ in the supernet can depend heavily on $\beta_{skip}$. Specifically, suppose only three operations (none, skip connection, and convolution) are included in the search space and the MSE loss is used as the training loss. When the architecture parameters $\beta^o_{i,j}$ are fixed and $\mathbf{W}$ is optimized via gradient descent, the training loss decreases by a ratio of $(1-\eta \lambda/4)$ in one step with probability at least $1-\delta$, where $\eta$ is the learning rate (whose bound depends on $\delta$) and $\lambda$ satisfies Eq.~\ref{eq:theory_lambda}.
\begin{equation}
\lambda \propto \sum_{i=0}^{h-2} \left[ \left(\beta^{(i,h-1)}_{conv}\right)^2 \prod_{t=0}^{i-1}\left(\beta^{(t,i)}_{skip}\right)^2 \right]
\label{eq:theory_lambda}
\end{equation}
where $h$ is the number of layers of the supernet. From Eq.~\ref{eq:theory_lambda}, we observe that $\lambda$ relies much more on $\beta_{skip}$ than on $\beta_{conv}$, which indicates that the network weights $\mathbf{W}$ can converge faster with a large $\beta_{skip}$.
However, by involving an auxiliary skip connection weighted by $\beta$, Eq.~\ref{eq:theory_lambda} is refined as follows:
\begin{equation}
\lambda \propto \sum_{i=0}^{h-2} \left[ \left(\beta^{(i,h-1)}_{conv}\right)^2 \prod_{t=0}^{i-1}\left(\beta^{(t,i)}_{skip} + \beta \right)^2 \right]
\label{eq:theory_lambda_ours}
\end{equation}
\begin{wrapfigure}[17]{R}{0.52\textwidth}
\vspace{-22pt}
\begin{minipage}{0.52\textwidth}
\begin{algorithm}[H]
\caption{DARTS-}
\label{alg:darts-minus}
\begin{algorithmic}[1]
\REQUIRE~~\\
Network weights $w$; Architecture parameters $\alpha$; \\
Number of search epochs $E$; \\
Decay strategy for $\beta_e, e\in\{ 1,2,..., E\}$.
\ENSURE ~~\\
Searched architecture parameters $\alpha$.
\STATE Construct a super-network by stacking cells in which there is an auxiliary skip connection between every two nodes of choice
\FOR{each $e\in [1,E]$}
\STATE Update weights $w$ by $\nabla_{w}{\cal L}_{train}(w, \alpha, \beta_e)$
\STATE Update parameters $\alpha$ by $\nabla_{\alpha }{\cal L}_{val}(w, \alpha, \beta_e)$
\ENDFOR
\STATE Derive the final architecture based on learned $\alpha$ from the best validation supernet.
\end{algorithmic}
\end{algorithm}
\end{minipage}
\end{wrapfigure}
where $\beta \gg \beta_{skip}$, making $\lambda$ insensitive to $\beta_{skip}$, so that the convergence of the network weights $\mathbf{W}$ depends more on $\beta_{conv}$. At the beginning of the search, the typical value of $\beta_{skip}$ is 0.15 while $\beta$ is 1.0. From the viewpoint of the convergence theorem \citep{zhou2020theory}, the auxiliary skip connection thus removes the privilege of $\beta_{skip}$ and equalizes the competition among architecture parameters. Even as $\beta$ gradually decays, the fair competition still holds, since the network weights $\mathbf{W}$ have already converged towards an optimal point. Consequently, DARTS- is able to stabilize the search stage of DARTS.
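To make the role of the product term explicit, consider a minimal supernet with $h=3$, for which Eq.~\ref{eq:theory_lambda} unrolls to
\begin{equation*}
\lambda \propto \left(\beta^{(0,2)}_{conv}\right)^2 + \left(\beta^{(1,2)}_{conv}\right)^2\left(\beta^{(0,1)}_{skip}\right)^2.
\end{equation*}
The contribution of the deeper convolution is gated by $(\beta^{(0,1)}_{skip})^2\approx 0.02$ at initialization, and can only be unlocked by growing $\beta_{skip}$. Under Eq.~\ref{eq:theory_lambda_ours}, the gate becomes $(\beta^{(0,1)}_{skip}+\beta)^2\approx 1.32$ with $\beta=1$, so both convolution weights contribute at a comparable scale, irrespective of $\beta_{skip}$.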
Extensive experiments are performed to demonstrate the effectiveness of the proposed auxiliary skip connection, and we emphasize that our method can be flexibly combined with other methods to further improve stability and search performance. The overall algorithm is given in Alg.~\ref{alg:darts-minus}.
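In code, the loop of Alg.~\ref{alg:darts-minus} amounts to the standard first-order DARTS alternation with one extra scalar input. The sketch below is our illustration rather than the released implementation; \texttt{supernet} is assumed to forward $\beta$ to every edge and to expose the usual DARTS-style \texttt{genotype()} discretization:
\begin{verbatim}
import torch.nn.functional as F

def search(supernet, train_loader, val_loader, w_opt, alpha_opt, epochs):
    for e in range(epochs):
        beta = 1.0 - e / max(epochs - 1, 1)  # linear decay beta_e: 1 -> 0
        for (x_t, y_t), (x_v, y_v) in zip(train_loader, val_loader):
            w_opt.zero_grad()                # update network weights w
            F.cross_entropy(supernet(x_t, beta), y_t).backward()
            w_opt.step()
            alpha_opt.zero_grad()            # update architecture parameters
            F.cross_entropy(supernet(x_v, beta), y_v).backward()
            alpha_opt.step()
    return supernet.genotype()               # derive the final architecture
\end{verbatim}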
\subsection{Relationship to Prior Work}
Our method is aimed at addressing the performance collapse in differentiable neural architecture search. Most previous works \citep{zela2020understanding,chen2020stabilizing,liang2019darts} concentrate on developing various criteria or indicators characterizing the occurrence of collapse. In contrast, we neither study nor rely on such indicators, because they can mistakenly reject good models. Inspired by \cite{chu2019fair}, our method instead focuses on calibrating the biased search process. The underlying philosophy is simple: if the biased process is rectified, the search result will be better. In summary, our method differs from others in two aspects: it is process-oriented and indicator-free. Distinct from \cite{chu2019fair}, which tweaks the competitive environment, our method can be viewed as one that breaks the unfair advantage.
Moreover, we do not introduce any handcrafted indicators to represent performance collapse, which greatly reduces the burden of shifting to different tasks.
\section{Experiments} \label{sec:exp}
\subsection{Search Spaces and Training Settings}\label{sec: training setting}
For searching and evaluation in the standard DARTS space (named \textbf{S0} for simplicity), we keep the same settings as in DARTS \citep{liu2018darts}. We follow R-DARTS \citep{zela2020understanding} for their proposed reduced spaces \textbf{S1--S4} (harder than S0). However, the inferred models are trained under two different settings, from R-DARTS \citep{zela2020understanding} and SDARTS \citep{chen2020stabilizing} respectively. The difference lies in the number of layers and initial channels for evaluation on CIFAR-100: R-DARTS uses 8 layers and 16 initial channels, whereas SDARTS uses 20 and 36 respectively. For proxyless searching on ImageNet, we instead search in the MobileNetV2-like search space (named \textbf{S5}) proposed in FBNet \citep{wu2018fbnet}. We use the SGD optimizer for network weights and Adam ($\beta_1=0.5$, $\beta_2=0.999$, learning rate 0.001) for architecture parameters, with a batch size of 768. The initial learning rate is 0.045 and is decayed to 0 within 30 epochs following the cosine schedule. We also use L2 regularization with a coefficient of $10^{-4}$. Searching takes about 4.5 GPU days on a Tesla V100. More details are provided in the appendix. We also use NAS-Bench-201 (\textbf{S6}), on which DARTS performs particularly badly. In total, we use 7 different search spaces across three datasets to conduct the experiments.
\subsection{Searching Results}
\begin{wraptable}{r}{6.5cm}
\vspace{-20pt}
\caption{Comparison of searched CNNs in the DARTS search space on two different datasets.}\smallskip
\centering
\resizebox{.45\columnwidth}{!}{
\smallskip\begin{tabular}{lrrr}
\toprule
\textbf{Dataset} & \textbf{DARTS} & \textbf{R-DARTS (L2)} & \textbf{Ours}\\
\midrule
C10 (S0) & 2.91$\pm$0.25 & 2.95$\pm$0.21 & \textbf{2.63$\pm$0.07}\\
C100 (S0) & 20.58$\pm$0.44 & 18.01$\pm$0.26 & \textbf{17.51$\pm$0.25} \\
\bottomrule
\end{tabular}
}
\label{tab:CNN-standard-space}
\vspace{-10pt}
\end{wraptable}
\textbf{CIFAR-10 and CIFAR-100.}
Following the settings of R-DARTS \citep{zela2020understanding}, we obtain an average top-1 accuracy of $97.36\%$ on CIFAR-10, as shown in Table~\ref{tab:comparison-cifar-imagenet}. Moreover, our method is robust: across six independent runs the search results are quite stable. The best cells found on CIFAR-10 ($97.5\%$) are shown in Figure~\ref{fig:c10_best_cell} (\ref{app:fig-geno}). Results on CIFAR-100 are presented in Table~\ref{tab:comparison-cifar100} (see \ref{app:train}).
Moreover, our method has a much lower search cost (\textbf{3$\times$ less}) than R-DARTS \citep{zela2020understanding}, which needs four independent searches with different regularization settings to generate the best architecture. In other words, its robustness comes at the cost of more $\mathrm{CO_2}$ emissions.
\begin{table}[tb!]
\setlength{\tabcolsep}{1pt}
\begin{center}
\caption{Comparison with state-of-the-art models on CIFAR-10 (left) and ImageNet (right). On CIFAR-10, our average result is obtained over 5 independently searched models to ensure robustness. For ImageNet, networks in the top block are directly searched on ImageNet; the middle block contains architectures searched on CIFAR-10 and then transferred to ImageNet; models in the bottom block use SE and Swish. We search in S0 for CIFAR-10 and in S5 for ImageNet.} \smallskip
\label{tab:comparison-cifar-imagenet}
\begin{scriptsize}
\begin{minipage}{0.48\textwidth}
\vspace{0pt}
\begin{threeparttable}
\begin{tabular}{*{5}{l}}
\toprule
\textbf{Models} & \textbf{\scriptsize{Params}} & \textbf{\scriptsize{FLOPs}} & \textbf{Acc} & \textbf{Cost} \\
& \scriptsize{(M)} & \scriptsize{(M)} & \scriptsize{(\%)} & \tiny{GPU Days} \\
\midrule
NASNet-A \citeyp{zoph2017learning} & 3.3 & 608$^\dagger$ & 97.35 & 2000 \\
ENAS \citeyp{pham2018efficient} & 4.6 & 626$^\dagger$ & 97.11 & 0.5 \\
DARTS \citeyp{liu2018darts} & 3.3 & 528$^\dagger$ & 97.00$\pm0.14^\star$ & 0.4 \\
SNAS \citeyp{xie2018snas} & 2.8 & 422$^\dagger$ & 97.15$\pm0.02^\star$ & 1.5\\
GDAS \citeyp{dong2019searching} & 3.4 & 519$^\dagger$ & 97.07 & 0.2\\
P-DARTS \citeyp{chen2019progressive} & 3.4 & 532$^\dagger$ & 97.5 & 0.3 \\
PC-DARTS \citeyp{xu2020pcdarts} & 3.6 & 558$^\dagger$ & 97.43 & 0.1 \\
DARTS- (best) & 3.5 & 568 & 97.5 & 0.4\\
\hline
P-DARTS
\citeyp{chen2019progressive}$^\ddagger$ & 3.3$\pm$0.21 & 540$\pm$34 & 97.19$\pm$0.14 & 0.3 \\
R-DARTS \citeyp{zela2020understanding} & - & - & 97.05$\pm$0.21 & 1.6 \\
SDARTS-ADV \citeyp{chen2020stabilizing} & 3.3 & - & 97.39$\pm$0.02 & 1.3\\
DARTS- (avg.) & 3.5$\pm$0.13 & 583$\pm$22 & 97.41$\pm$0.08 & 0.4\\
\bottomrule
\end{tabular}
\begin{tablenotes}
\footnotesize
\item[$\dagger$] Based on the provided genotypes.
\item[$^\ddagger$] 5 independent searches using their released code.
\item[$\star$]Training the best searched model for several times (\emph{whose average doesn't indicate the stability of the method})
\end{tablenotes}
\end{threeparttable}
\end{minipage}
\begin{minipage}{0.5\textwidth}
\vspace{-2pt}
\begin{threeparttable}
\begin{tabular}{*{2}{l}*{3}{l}c}
\toprule
\textbf{Models} & \textbf{FLOPs} & \textbf{Params} & \textbf{Top-1} & \textbf{Top-5} & \textbf{Cost} \\
& \scriptsize{(M)} & \scriptsize{(M)} & \scriptsize{(\%)} & \scriptsize{(\%)} & \scriptsize{(GPU days)} \\
\midrule
AmoebaNet-A \citeyp{real2019regularized} & 555 & 5.1 & 74.5& 92.0& 3150 \\
MnasNet-92 \citeyp{tan2018mnasnet} & 388 & 3.9 & 74.79 &92.1& \ \ 3791$^\ddagger$ \\
FBNet-C \citeyp{wu2018fbnet} & 375 & 5.5 & 74.9 & 92.3& 9 \\
FairNAS-A \citeyp{chu2019fairnas} &388 & 4.6 & 75.3 & 92.4 & 12 \\
SCARLET-C \citeyp{chu2019scarletnas} & 365 & 6.7 & 76.9 & 93.4 & 10 \\
FairDARTS-D \citeyp{chu2019fair} & 440 & 4.3 & 75.6& 92.6&3 \\
PC-DARTS \citeyp{xu2020pcdarts} & 597 & 5.3 & 75.8 & 92.7 & 3.8 \\
\textbf{DARTS- (ours) }& 467 & 4.9 & 76.2 & 93.0 & 4.5\\
\midrule
NASNet-A \citeyp{zoph2017learning} & 564 & 5.3 &74.0 & 91.6& 2000 \\
DARTS \citeyp{liu2018darts} & 574 & 4.7 & 73.3 & 91.3 &0.4 \\
SNAS \citeyp{xie2018snas} &522&4.3&72.7 & 90.8&1.5 \\
PC-DARTS \citeyp{xu2020pcdarts} & 586 & 5.3 & 74.9 & 92.2 &0.1 \\
FairDARTS-B \citeyp{chu2019fair}& 541 & 4.8 &75.1 & 92.5 & 0.4 \\
\midrule
MobileNetV3 \citeyp{howard2019searching} & 219 & 5.4 &75.2 &92.2&$\approx$3000 \\
MoGA-A \citeyp{chumoga} & 304 & 5.1 & 75.9 & 92.8 & 12 \\
MixNet-M \citeyp{tan2020mixconv} &360 & 5.0 & 77.0 & 93.3& $\approx$3000\\
EfficientNet B0 \citeyp{tan2019efficientnet} &390 & 5.3 & 76.3 &93.2 &$\approx$3000 \\
NoisyDARTS-A$^{\diamond}$ &449 & 5.5 & 77.9 & 94.0 & 12\\
\textbf{DARTS- (ours)}$^\diamond$ & 470 & 5.5 & 77.8& 93.9 &4.5 \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\scriptsize
\item[$\ddagger$] Estimated by \cite{wu2018fbnet}.
\item[${\diamond}$] SE modules and Swish enabled.
\end{tablenotes}
\end{threeparttable}
\end{minipage}
\end{scriptsize}
\end{center}
\vskip -0.2 in
\end{table}
\textbf{ImageNet.}
To further verify the efficiency of DARTS-, we directly search on ImageNet in S5 and compare our results with state-of-the-art models under the mobile setting in Table~\ref{tab:comparison-cifar-imagenet}. The visualization of the architecture is given in Fig.~\ref{fig:darts-a-arch-imagnet}. DARTS-A obtains 76.2$\%$ top-1 accuracy on the ImageNet validation set. By contrast, directly applying DARTS in this search space only obtains $66.4\%$ \citep{chu2019fair}. Moreover, our model obtains 77.8$\%$ top-1 accuracy after being equipped with auto-augmentation \citep{cubuk2018autoaugment} and squeeze-and-excitation \citep{hu2018squeeze}, which are also used in EfficientNet.
\begin{table*}[tb!]
\setlength{\tabcolsep}{2pt}
\begin{center}
\caption{Searching performance on NAS-Bench-201 \citep{dong2020nasbench201}. Our method robustly obtains a new state of the art. Results are averaged over 4 search runs. $^{1st}$: first-order, $^{2nd}$: second-order.}
\label{table:nas-bench-201}
\small
\begin{tabular}{lr*{6}{c}}
\toprule
Method & Cost & \multicolumn{2}{c}{CIFAR-10} & \multicolumn{2}{c}{CIFAR-100} & \multicolumn{2}{c}{ImageNet16-120} \\
& (hours) & valid & test & valid & test & valid & test \\
\midrule
DARTS$^{1st}$ \citeyp{liu2018darts} & 3.2 & 39.77$\pm$0.00 & 54.30$\pm$0.00 & 15.03$\pm$0.00 & 15.61$\pm$0.00 & 16.43$\pm$0.00 & 16.32$\pm$0.00 \\
DARTS$^{2nd}$ \citeyp{liu2018darts} & 10.2 & 39.77$\pm$0.00 & 54.30$\pm$0.00 & 15.03$\pm$0.00 & 15.61$\pm$0.00 & 16.43$\pm$0.00 & 16.32$\pm$0.00 \\
GDAS \citeyp{dong2019searching} & 8.7 & 89.89$\pm$0.08 & 93.61$\pm$0.09 & 71.34$\pm$0.04 & 70.70$\pm$0.30 & 41.59$\pm$1.33 & 41.71$\pm$0.98 \\
SETN \citeyp{dong2019one} & 9.5 & 84.04$\pm$0.28 & 87.64$\pm$0.00 & 58.86$\pm$0.06 & 59.05$\pm$0.24 & 33.06$\pm$0.02 & 32.52$\pm$0.21 \\
DARTS- & 3.2 & \textbf{91.03$\pm$0.44} & \textbf{93.80$\pm$0.40} & \textbf{71.36$\pm$1.51} & \textbf{71.53$\pm$1.51} & \textbf{44.87$\pm$1.46} & \textbf{45.12$\pm$0.82} \\
DARTS- (best) & 3.2 & 91.55 & 94.36&73.49& 73.51& 46.37& 46.34 \\
optimal & n/a & 91.61 & 94.37 & 73.49 & 73.51 & 46.77 & 47.31 \\
\bottomrule
\end{tabular}
\end{center}
\vskip -0.15in
\end{table*}
\textbf{NAS-Bench-201.}
Apart from standard search spaces, benchmarking against a known optimum in a limited setting is also recommended. NAS-Bench-201 \citep{dong2020nasbench201} consists of 15,625 architectures in a reduced DARTS-like search space, with 4 internal nodes and 5 candidate operations per edge. We compare our method with prior work in Table~\ref{table:nas-bench-201}. We search on CIFAR-10 and look up the ground-truth performance of the found genotypes on the various test sets. Remarkably, we achieve a new state of the art, the best run of which almost touches the optimum.
\textbf{Transfer results on object detection.}
We further evaluate the transferability of our models on the downstream object detection task by replacing the backbone of RetinaNet \citep{lin2017focal}, implemented on the MMDetection platform \citep{chen2019mmdetection}. Specifically, with the same training setting as \cite{chu2019fair}, our model achieves 32.5\% mAP on the COCO dataset, surpassing other similar-sized models such as MobileNetV3, MixNet, and FairDARTS. The detailed results are shown in the appendix (Table~\ref{table:darts-coco-retina}).
\subsection{Orthogonal Combination with Other Variants}
Our method can be flexibly combined with prior work for further improvements. Here we investigate the joint outcome with two methods: P-DARTS and PC-DARTS.
\label{sec:pdarts-discuss}
\begin{wraptable}{r}{5.5cm}
\vspace{-20pt}
\setlength{\tabcolsep}{1pt}
\small
\centering
\caption{We remove the strong priors of P-DARTS (\textit{restricting the number of skip connections to 2, and dropout}) and compare its performance w/ and w/o DARTS-.}
\smallskip\begin{tabular}{ccc}
\toprule
\textbf{Method} & \textbf{Setting}& \textbf{Acc (\%)} \\
\midrule
P-DARTS & w/o priors & 96.48$\pm$0.55 \\
P-DARTS- & w/o priors &97.28$\pm$0.04 \\
\bottomrule
\end{tabular}
\label{tab:pdarts-darts-}
\vspace{-12pt}
\end{wraptable}
\textbf{Progressive DARTS (P-DARTS).} P-DARTS \citep{chen2019progressive} proposes a progressive approach that searches at gradually increasing depth while pruning out uncompetitive paths. Additionally, it makes use of handcrafted criteria to address the collapse (the progressive idea by itself cannot deal with it); for instance, it imposes two strong priors by restricting the number of skip connections $M$ to 2 and by applying dropout. To be fair, we remove these carefully handcrafted tricks and run P-DARTS several times. As a natural control group, we also combine DARTS- with P-DARTS. We run both experiments 3 times on the CIFAR-10 dataset (Table~\ref{tab:pdarts-darts-}). Without the strong priors, P-DARTS suffers severely from the collapse: the inferred models contain an excessive number of skip connections, and the test error is very high ($3.42\%$ on average), even worse than DARTS. However, P-DARTS benefits greatly from the combination with DARTS-: the improved version (which we call P-DARTS-) obtains much higher top-1 accuracy (\textbf{+0.8\%}) on CIFAR-10 than its baseline.
\begin{wraptable}{r}{6.3cm}
\vspace{-20pt}
\setlength{\tabcolsep}{1pt}
\small
\centering
\caption{Comparison of PC-DARTS \textit{with the strong prior removed} (i.e.\ channel shuffle) and its combination with DARTS-. The results are from 3 independent runs on CIFAR-10. The GPU memory cost is measured with a batch size of 256.}
\smallskip\begin{tabular}{lp{1cm}ccc}
\toprule
\textbf{Method}& \textbf{Setting} & \textbf{Acc} ($\%$) & \textbf{Memory} & \textbf{Cost} \\
\midrule
PC-DARTS & $K=2$ & 97.09$\pm$0.14 & 19.9G & 3.75h \\
PC-DARTS- & $K=2$ & 97.35$\pm0.02$ & 20.8G & 3.41h \\
\bottomrule
\end{tabular}
\label{tab:pcdarts-darts-}
\vspace{-15pt}
\end{wraptable}
\textbf{Memory-Friendly DARTS (PC-DARTS).}
To alleviate the large memory overhead of training the whole supernet, PC-DARTS \citep{xu2020pcdarts} performs the search on a partial set of channels. The proportion hyper-parameter $K$ needs careful calibration to achieve good results on a specific task. As a byproduct, the search time is also reduced, to 0.1 GPU days ($K$=4). We use their released code and run repeated experiments across different seeds under the same settings.
To accurately evaluate the role of our method, we choose $K$=2 (a bad configuration in the original paper). We compare the original PC-DARTS with its combination with our method (named PC-DARTS-) in Table~\ref{tab:pcdarts-darts-}. PC-DARTS- marginally boosts the CIFAR-10 top-1 accuracy (+$0.26\%$ on average). The result also confirms that our method makes PC-DARTS less sensitive to its hyper-parameter $K$, while keeping its advantages of lower memory cost and runtime.
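For reference, a simplified sketch of the partial-channel idea combined with our auxiliary skip is given below (our paraphrase; it omits PC-DARTS' channel shuffle and edge normalization):
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialChannelMixedOp(nn.Module):
    """Only 1/K of the channels pass through the mixed op (PC-DARTS),
    with the DARTS- auxiliary skip applied on that slice."""
    def __init__(self, ops, K=2):
        super().__init__()
        self.ops, self.K = nn.ModuleList(ops), K
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(ops)))

    def forward(self, x, beta):
        c = x.size(1) // self.K
        x_op, x_pass = x[:, :c], x[:, c:]  # sampled vs. bypassed channels
        w = F.softmax(self.alpha, dim=0)
        y = beta * x_op + sum(wi * op(x_op) for wi, op in zip(w, self.ops))
        return torch.cat([y, x_pass], dim=1)
\end{verbatim}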
\subsection{Ablation Study}\label{sec:ablation}
\textbf{Robustness to Decay Strategy.} Our method is insensitive to the type of decay policy for $\beta$. We design two extra strategies as comparisons: \emph{cosine} and \emph{step} decay; both yield similar performance. Specifically, when $\beta$ is scheduled to zero by the cosine strategy, the average accuracy of four searched CIFAR-10 models in S3 is 97.33\%$\pm$0.09, the best being 97.47\%. Step decay at epoch 45 obtains $97.30\%$ top-1 accuracy on average in the same search space.
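The three schedules can be written compactly; the helper below is our illustration (argument names are ours), with the step variant dropping $\beta$ to zero at epoch 45 as above:
\begin{verbatim}
import math

def beta_schedule(epoch, total, kind="linear", beta0=1.0, step_at=45):
    if kind == "linear":
        return beta0 * (1.0 - epoch / total)
    if kind == "cosine":
        return beta0 * 0.5 * (1.0 + math.cos(math.pi * epoch / total))
    if kind == "step":
        return beta0 if epoch < step_at else 0.0
    raise ValueError(f"unknown schedule: {kind}")
\end{verbatim}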
\begin{wraptable}{r}{5cm}
\vspace{-23pt}
\caption{Searching performance on CIFAR-10 in S3 w.r.t.\ the initial linear decay rate $\beta_0$. Each setting is run three times.}
\smallskip
\centering
\smallskip\begin{tabular}{l*{2}{c}}
\toprule
$\beta_0$ & Error (\%) \\
\midrule
1 & 2.65$\pm$0.04 \\
0.7 & 2.76$\pm$0.16 \\
0.4 &3.04$\pm$0.19 \\
0.1 & 3.11$\pm$0.16\\
0 & 4.58$\pm$1.3 \\
\bottomrule
\end{tabular}
\label{tab:beta-sensitiveness}
\vspace{-20pt}
\end{wraptable}
\textbf{Robustness Comparison on C10 and C100 in S0--S4.}
To verify robustness, one is required to search several times and report the average performance of the derived models \citep{Yu2020Evaluating,yang2020nas}. As shown in Table~\ref{tab:CNN-standard-space}, Table~\ref{tab:comparison-rdarts-s2-s3-avg} and Table~\ref{tab:comparison-rdarts-s2-s3-best}, our method outperforms the recent state of the art across several spaces and datasets. Note that SDARTS-ADV utilizes adversarial training and requires 3$\times$ more search time than ours. Notably, we find a good model in S3 on CIFAR-100 with the lowest top-1 test error of $15.86\%$. The architectures of these models can be found in the appendix.
\begin{comment}
\begin{table}[tb!]
\caption{Transfer results on COCO datasets of various drop-in backbones.}
\smallskip
\centering
\resizebox{.98\columnwidth}{!}{
\smallskip
\begin{tabular}{*{1}{l}H*{8}{l}}
\toprule
\multirow{2}*{\textbf{Backbones}} & \multirow{2}*{\textbf{FLOPs}} & \textbf{Params} & \multirow{2}*{\textbf{Acc}} & \multirow{2}*{\textbf{AP}} & \multirow{2}*{\textbf{AP$_{50}$}} & \multirow{2}*{\textbf{AP$_{75}$}} & \multirow{2}*{\textbf{AP$_S$}} & \multirow{2}*{\textbf{AP$_M$}} & \multirow{2}*{\textbf{AP$_L$}} \\
& & (M) & & & & & & & \\
\midrule
MobileNetV2 \citeyp{sandler2018mobilenetv2} & 300 & 3.4& 72.0 & 28.3 & 46.7 & 29.3 & 14.8 & 30.7 & 38.1\\
SingPath NAS \citeyp{stamoulis2019single} & 365 & 4.3 & 75.0 & 30.7 & 49.8 & 32.2 & 15.4 &33.9 & 41.6\\
MnasNet-A2 \citeyp{tan2018mnasnet} & 340& 4.8 & 75.6 & 30.5 & 50.2 & 32.0 & 16.6 & 34.1 & 41.1\\
MobileNetV3 \citeyp{howard2019searching} & 219 & 5.4 & 75.2& 29.9 & 49.3 & 30.8 & 14.9 & 33.3 & 41.1\\
MixNet-M \citeyp{tan2020mixconv} & 360 & 5.0 & 77.0 & 31.3& 51.7 & 32.4& 17.0 & 35.0 & 41.9 \\
FairDARTS-C \citeyp{chu2019fair} & 386 & 5.3 & 77.2 & 31.9 & 51.9 & 33.0 & 17.4 & 35.3 & 43.0 \\
\textbf{DARTS-A (Ours)} & 470 & 5.5& 77.8& 32.5& 52.8 & 34.1& 18.0 & 36.1 & 43.4 \\
\bottomrule
\end{tabular}
}
\vskip -0.15in
\label{table:darts-coco-retina}
\end{table}
\end{comment}
\begin{comment}
\begin{table*}[tb!]
\begin{minipage}{0.38\textwidth}
\caption{Comparison of searched CNN in the DARTS search space on three different datasets.}
\smallskip
\begin{scriptsize}
\begin{tabular}{lrrr}
\toprule
\textbf{Dataset} & \textbf{DARTS} & \textbf{R-DARTS(L2)} & \textbf{Ours}\\
\midrule
C10 & 2.91$\pm$0.25 & 2.95$\pm$0.21 & \textbf{2.63$\pm$0.07}\\
C100 & 20.58$\pm$0.44 & 18.01$\pm$0.26 & \textbf{17.92$\pm$0.36} \\
\bottomrule
\end{tabular}
\label{tab:CNN-standard-space}
\end{scriptsize}
\end{minipage}
\hspace{0.5cm}
\begin{minipage}{0.58\textwidth}
\centering
\caption{Comparison of searched CNN architectures in two reduced search spaces S2 and S3 (proposed by \citep{zela2020understanding}) on CIFAR-10 and CIFAR-100. We report mean$\pm$std of 3 found architectures retrained from scratch (settings same as R-DARTS \citep{zela2020understanding})}
\smallskip
\begin{scriptsize}
\begin{tabular}{cccccc}
\toprule
\multicolumn{2}{c}{\textbf{Benchmark}} & \textbf{DARTS} & \textbf{DARTS-ES} & \textbf{DARTS-ADA} & \textbf{ours}\\
\midrule
\multirow{2}*{C10} & S2 & 4.42$\pm$0.40 & 3.41$\pm$0.14 & 3.59$\pm$0.31 & \textbf{2.79$\pm$0.04} \\
~ & S3 & 4.12$\pm$0.85 & 3.71$\pm$1.14 & 2.99$\pm$0.34 & \textbf{2.65$\pm$0.04}\\
\midrule
\multirow{2}*{C100} & S2 & 28.75$\pm$0.92 & 24.68$\pm$1.43 & 26.88$\pm$1.11 & \textbf{22.91$\pm$0.54}\\
~ & S3 & 29.01$\pm$0.24 & 26.99$\pm$1.79 & 24.55$\pm$0.63 & \textbf{21.47$\pm$0.40}\\
\bottomrule
\end{tabular}
\end{scriptsize}
\label{tab:comparison-rdarts-s2-s3-avg}
\end{minipage}
\end{table*}
\end{comment}
\begin{table*}[t]
\centering
\caption{Comparison in various search spaces. We report the \textbf{lowest error rate} of 3 found architectures. $^\dagger$: under \cite{chen2020stabilizing}'s training settings, where all models have 20 layers and 36 initial channels (the best is shown in boldface). $^\ddagger$: under \cite{zela2020understanding}'s settings, where CIFAR-100 models have 8 layers and 16 initial channels (the best is shown in boldface and underlined).} \smallskip
\footnotesize
\setlength{\tabcolsep}{3pt}
\begin{tabular}{c*{7}{c}|c*{4}{c}}
\toprule
\multicolumn{2}{c}{ \multirow{2}*{\textbf{Benchmark}}} & \multirow{2}*{\textbf{DARTS}$^\ddagger$} & \multicolumn{2}{c}{\textbf{R-DARTS}$^\ddagger$} & \multicolumn{2}{c}{\textbf{DARTS}$^\ddagger$} & \multirow{2}*{\textbf{Ours}$^{\ddagger}$} & \multirow{2}*{\textbf{PC-DARTS}$^\dagger$} & \multicolumn{2}{c}{\textbf{SDARTS}$^\dagger$} & \multirow{2}*{\textbf{Ours}$^\dagger$} \\
\cmidrule(lr){4-5} \cmidrule(lr){6-7} \cmidrule(lr){10-11}
& & & DP & L2 & ES & ADA & & & RS & ADV &\\
\midrule
\multirow{4}*{C10}& S1 & 3.84 & 3.11 & 2.78 & 3.01 & 3.10 & \textbf{2.68} & 3.11 & 2.78 & 2.73 & \textbf{2.68} \\
~ & S2 & 4.85 & 3.48 & 3.31 & 3.26 & 3.35 & \textbf{2.63} & 3.02 & 2.75 & 2.65 & \textbf{2.63} \\
~ & S3 & 3.34 & 2.93 & 2.51 & 2.74 & 2.59 & \textbf{2.42} & 2.51 & 2.53 & 2.49 & \textbf{2.42} \\
~ & S4 & 7.20 & 3.58 & 3.56 & 3.71 & 4.84 & \textbf{2.86} & 3.02 & 2.93 & 2.87 & \textbf{2.86} \\
\midrule
\multirow{4}*{C100} & S1 & 29.46 & 25.93 & 24.25 & 28.37 & 24.03 & \textbf{\underline{22.41}} & 18.87 & 17.02 & \textbf{16.88} & 16.92 \\
~ & S2 & 26.05 & 22.30 & 22.24 & 23.25 & 23.52 & \textbf{\underline{21.61}} & 18.23 & 17.56 & 17.24 & \textbf{16.14} \\
~ & S3 & 28.90 & 22.36 & 23.99 & 23.73 & 23.37 & \textbf{\underline{21.13}} & 18.05 & 17.73 & 17.12 & \textbf{15.86}\\
~ & S4 & 22.85 & 22.18 & 21.94 & \textbf{\underline{21.26}} & 23.20 & 21.55 & 17.16 & 17.17 & \textbf{15.46} & 17.48 \\%SDARTS(v2) 21.46 & \textbf{21.25}
\bottomrule
\end{tabular}
\label{tab:comparison-rdarts-s2-s3-best}
\vskip -0.2 in
\end{table*}
\textbf{Sensitivity Analysis of $\beta$.}\quad
The power of the auxiliary skip connection branch can be discounted by setting a lower initial value $\beta_0$. We now evaluate how sensitive our approach is to this value. Trivially, our approach degenerates to DARTS when $\beta_0 = 0$. We compare results when searching with $\beta_0 \in \{1,0.7,0.4,0.1,0\}$ in Table~\ref{tab:beta-sensitiveness}, which shows that a larger $\beta_0$ is advantageous for obtaining better networks.
\textbf{The choice of the auxiliary branch.}
Apart from the default skip connection serving as the auxiliary branch, we show that it is also effective to replace it with a learnable 1$\times$1 convolutional projection initialized as an identity tensor. The average accuracy of three searched CIFAR-10 models in S3 is 97.25\%$\pm$0.09. Akin to the ablation in \cite{he2016deep}, the projection convolution here works in a similar way to the proposed skip connection. This confirms the necessity of the auxiliary branch.
\textbf{Performance with longer epochs.} It is claimed by \cite{bi2019stabilizing} that much longer training leads to better convergence of the supernet, supposedly beneficial for inferring the final models. However, many DARTS variants fail in this regime, since their final models become full of skip connections. We therefore evaluate how our method behaves in such a situation. Specifically, we extend the standard 50 epochs to 150 and 200, and we search 3 independent times each in S0, S2 and S3. Due to the longer schedule, we slightly change our decay strategy: we keep $\beta=1$ until the last 50 epochs, during which we decay $\beta$ to 0. Other hyper-parameters are kept unchanged. The results are shown in Table~\ref{tab:long_epoch_search}, and the found genotypes are listed in Figures~\ref{fig:c10_s5_decay_best_cells}, \ref{fig:c10_s2_e150_decay_cells}, \ref{fig:c10_s2_e200_decay_cells}, \ref{fig:c10_s3_e150_decay_cells} and \ref{fig:c10_s3_e200_decay_cells}. The results indicate that DARTS- does not suffer from longer epochs, since it retains reasonable values of $\#P$ compared with those ($\#P=0$) investigated by \cite{bi2019stabilizing}. Notice that S2 and S3 are harder cases, in which DARTS suffers more severely from the collapse than in S0. As a result, \emph{DARTS- can successfully survive longer epochs even in challenging search spaces.} Nevertheless, it remains unclear whether longer epochs truly boost search performance: although we achieve a new state-of-the-art result in S2, where the best model has a 2.50\% error rate (previously 2.63\%), the average performance in S0 (2.71$\pm$0.11\%) is worse than that of models searched for 50 epochs (2.59$\pm$0.08\%), and the best model in S3 (2.53\%) is also weaker than before (2.42\%).
\begin{table}[ht]
\setlength{\tabcolsep}{1pt}
\caption{Searching performance on CIFAR-10 in S0, S2 and S3 using longer epochs. Following \cite{bi2019stabilizing}, $\#P$ denotes the number of parametric operators in the normal cell. Results are averaged over 3 search runs.}
\smallskip
\centering
\smallskip\begin{tabular}{c|*{3}{c}|*{3}{c}|*{3}{c}}
\toprule
Epoch & \multicolumn{3}{c|}{S0} & \multicolumn{3}{c|}{S2} & \multicolumn{3}{c}{S3} \\
&$\#P$ & \small{Params (M)} & \small{Error (\%)} &$\#P$ & \small{Params (M)} & \small{Error (\%)} & $\#P$ & \small{Params (M)} & \small{Error (\%)} \\
\midrule
150 & 6.6$\pm$1.1 & 3.3$\pm$0.3& 2.74$\pm$0.06 & 6.0$\pm$0.0 & 3.9$\pm$0.3 & 2.58$\pm$0.11 & 6.0$\pm$1.0 & 3.6$\pm$0.3 & 2.55$\pm$0.03 \\
200 & 7.3$\pm$0.6 & 3.2$\pm$0.3 & 2.71$\pm$0.11 & 8.0$\pm$0.0 & 4.3$\pm$0.1 & 2.65$\pm$0.21 & 7.6$\pm$0.5 & 4.3$\pm$0.2 & 2.66$\pm$0.09 \\
\bottomrule
\end{tabular}
\label{tab:long_epoch_search}
\end{table}
Besides, compared with first-order DARTS at a cost of 0.4 GPU days in S0, Amended-DARTS \citep{bi2019stabilizing}, which is specifically designed to survive longer epochs, reports 1.7 GPU days even with pruned edges in S0. Our approach has the same cost as first-order DARTS and is thus more efficient.
\section{Analysis and Discussions}
\subsection{Failure of Hessian Eigenvalue}\label{sec:failure of eigen}
The maximal Hessian eigenvalue of the validation loss w.r.t.\ $\alpha$ is regarded as an indicator of performance collapse \citep{zela2020understanding,chen2020stabilizing}. Surprisingly, our method develops a growing eigenvalue in the majority of configurations, which conflicts with previous observations. We visualize these statistics across different search spaces and datasets in Figure~\ref{fig:eigen_value} (\ref{sec:failure_eigen}). Although the eigenvalues increase almost monotonically and reach a relatively large value in the end, the final models still perform on par with state-of-the-art ones (see Table~\ref{tab:comparison-rdarts-s2-s3-avg}).
According to the eigenvalue criterion, these models would be mistakenly deemed bad or never visited at all. Our observations disclose a fatal drawback of such indicator-based approaches: \emph{they are prone to rejecting good models}. Further analysis can be found in \ref{sec:failure_eigen}.
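For reference, the indicator itself can be computed by power iteration on Hessian-vector products. The sketch below is our illustration, using only standard \texttt{torch.autograd} calls; it returns the dominant (largest-magnitude) eigenvalue of $\nabla^2_\alpha \mathcal{L}_{val}$ via the Rayleigh quotient:
\begin{verbatim}
import torch

def max_hessian_eigenvalue(val_loss, alphas, iters=20):
    grads = torch.autograd.grad(val_loss, alphas, create_graph=True)
    v = [torch.randn_like(a) for a in alphas]
    for _ in range(iters):
        hv = torch.autograd.grad(grads, alphas, grad_outputs=v,
                                 retain_graph=True)  # Hessian-vector product
        norm = torch.sqrt(sum((h * h).sum() for h in hv))
        v = [h / (norm + 1e-12) for h in hv]
    hv = torch.autograd.grad(grads, alphas, grad_outputs=v, retain_graph=True)
    return sum((h * vi).sum() for h, vi in zip(hv, v)).item()  # v^T H v
\end{verbatim}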
\subsection{Validation Accuracy Landscape}
Recent works, R-DARTS \citep{zela2020understanding} and SDARTS \citep{chen2020stabilizing}, point out that in order to obtain an architecture that remains stable after the discretization process, the architectural weights are expected to converge to an optimum where the accuracy is insensitive to perturbations, i.e.\ the convergence point should have a smooth landscape. SDARTS proposes a perturbation-based regularization, which further stabilizes the search process of DARTS. However, the perturbation regularization disturbs the training procedure and may thus mislead the update of the architectural weights.
Different from SDARTS, which explicitly smooths the landscape by perturbation, DARTS- implicitly achieves the same effect without directly perturbing the architectural weights.
\begin{figure*}[ht]
\centering
\subfigure[DARTS]{
\includegraphics[width=0.23\columnwidth]{figures/vis_landscape/s3/valid_acc-13_3dsurface.pdf}
}
\subfigure[DARTS-]{
\includegraphics[width=0.23\columnwidth]{figures/vis_landscape/s3/valid_acc-16_3dsurface.pdf}
}
\subfigure[DARTS]{
\includegraphics[width=0.23\columnwidth]{figures/vis_landscape/s3/valid_acc-13_2dcontour.pdf}
}
\subfigure[DARTS-]{
\includegraphics[width=0.23\columnwidth]{figures/vis_landscape/s3/valid_acc-16_2dcontour.pdf}
}
\caption{Comparison of the validation accuracy landscape w.r.t.\ $\alpha$ on CIFAR-10 in S3 for (a) DARTS and (b) DARTS-. Their contour maps are shown in (c) and (d) respectively, with a contour step of 0.1. The accuracies of the derived models are 94.84\% (a,c) and 97.58\% (b,d), while the maximum Hessian eigenvalues are similarly high (0.52 and 0.42).}
\label{fig:landcape-loss-val}
\end{figure*}
To analyze the efficacy of DARTS-, we plot the validation accuracy landscape w.r.t.\ the architectural weights $\alpha$, and find that the auxiliary connection smooths the landscape and thus stabilizes the search stage.
Specifically, we choose two random directions and apply normalized perturbations to $\alpha$ (following \citeauthor{li2018visualizing} \citeyear{li2018visualizing}). As shown in Figure~\ref{fig:landcape-loss-val}, DARTS- is less sensitive to the perturbation than DARTS, and the contour map of DARTS- descends more gently.
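The landscape plots can be reproduced with little code. The sketch below is ours (the evaluation callback \texttt{eval\_acc} is assumed to return validation accuracy); it perturbs $\alpha$ on a 2D grid spanned by two random directions rescaled to the norm of $\alpha$:
\begin{verbatim}
import numpy as np
import torch

def rand_dir(base):
    d = [torch.randn_like(a) for a in base]
    return [di * (a.norm() / (di.norm() + 1e-12)) for di, a in zip(d, base)]

def accuracy_landscape(model, alphas, eval_acc, span=1.0, steps=21):
    base = [a.detach().clone() for a in alphas]
    d1, d2 = rand_dir(base), rand_dir(base)
    grid = np.linspace(-span, span, steps)
    Z = np.zeros((steps, steps))
    for i, u in enumerate(grid):
        for j, v in enumerate(grid):
            with torch.no_grad():
                for a, b, x, y in zip(alphas, base, d1, d2):
                    a.copy_(b + float(u) * x + float(v) * y)
            Z[i, j] = eval_acc(model)
    with torch.no_grad():
        for a, b in zip(alphas, base):
            a.copy_(b)  # restore the original architecture parameters
    return grid, Z
\end{verbatim}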
\begin{comment}
\subsection{Trajectory of Model Performance During Search}
We are also curious to know the searching performance trajectory during the whole process. Specifically, we sample a model every 10 epochs and train these models from scratch using the same setting as above, the results are shown in Figure~\ref{fig:eigen_value} (b). The performance of the inferred models continues growing, where the accuracy is boosted from $96.5\%$ to $97.4\%$. This affirms the validity of searching using our method. In contrast, the early-stopping strategies based on eigenvalues \citep{liang2019darts} would fail in this setting. We argue that the proposed auxiliary skip branch can regularize the overfitting of the supernet, leaving the architectural weights represent the ability of candidate operations. This experiment poses as a counterexample to R-DARTS \citep{zela2020understanding}, where good models can appear albeit Hessian eigenvalues change fast. It again denies the need for the costly indicator.
\end{comment}
\section{Conclusion}
We propose a simple and effective approach named DARTS- to address the performance collapse in differentiable architecture search. Its core idea is to use an auxiliary skip connection branch to take over the gradient-advantage role from the candidate skip connection operation. This creates a fair competition in which the bi-level optimization process can easily differentiate good operations from bad ones. As a result, the search process is more stable, and the collapse seldom happens across various search spaces and datasets.
Under strictly controlled settings, it steadily outperforms the recent state-of-the-art RobustDARTS \citep{zela2020understanding} at 3$\times$ lower search cost.
Moreover, our method dispenses with the various handcrafted regularization tricks. Last but not least, it can be used stand-alone or in cooperation with various orthogonal improvements if necessary.
This paper conveys two important messages for future research. On the one hand, the Hessian eigenvalue indicator for performance collapse \citep{zela2020understanding,chen2020stabilizing} is not ideal, because it risks rejecting good models. On the other hand, in prior work, handcrafted regularization tricks \citep{chen2019progressive} seem to be more critical for finding a good model than the proposed methods themselves. What, then, is the solution? In principle, it is difficult to find a perfect indicator of the collapse. Our approach shows the potential of controlling the search process itself, without imposing limitations or priors on the final model. We hope more attention will be paid to this direction.
\section{Acknowledgement}
This research was supported by Meituan.
\section{Introduction}\label{sec:intro}
\noindent While modal logic is classically concerned with purely
relational systems (e.g.~\cite{BlackburnEA01}), there is, nowadays,
widespread interest in flavours of modal logic interpreted over
state-based structures in a wider sense, e.g.\ featuring probabilistic
or, more generally, weighted branching. Under the term
\emph{arithmetic modal logics}, we subsume logics that feature
arithmetical constraints on the number or combined weight of
successors. The simplest logics of this type compare weights to
constants, such as graded modal logic~\cite{Fine72} or some variants
of probabilistic modal logic~\cite{LarsenSkou91,HeifetzMongin01}. More
involved examples are \emph{Presburger modal
logic}~\cite{DemriLugiez10}, which allows Presburger constraints on
numbers of successors, and probabilistic modal logic with
linear~\cite{FaginHalpern94} or
polynomial~\cite{FaginHalpernMegiddo90} inequalities over
probabilities. Presburger modal logic allows for statements like `the
majority of university students are female', or `dance classes have
even numbers of participants', while probabilistic modal logic with
polynomial inequalities can assert, for example, independence of
events.
These logics are the main examples we address in a more general
coalgebraic framework in this paper. Our main observation is that
satisfiability for coalgebraic logics can be decided in a step-by-step
fashion, peeling off one layer of operators at a time. We thus reduce
the overall satisfiability problem to satisfiability in a
\emph{one-step logic} involving only immediate successor states, and
hence no nesting of
modalities~\cite{SchroderPattinson08d,MyersEA09}. We define a
\emph{strict} variant of this \emph{one-step satisfiability problem},
distinguished by a judicious redefinition of its input size; if strict
one-step satisfiability is in {\mbox{\upshape\textsc{ExpTime}}}\xspace, we obtain a (typically
optimal) {\mbox{\upshape\textsc{ExpTime}}}\xspace upper bound for satisfiability under global
assumptions in the full logic. For our two main examples, the
requisite complexity bounds (in fact, even {\upshape\textsc{PSpace}}\xspace) on strict one-step
satisfiability follow in essence directly from known complexity
results in integer programming and the existential theory of the
reals, respectively; in other words, even in fairly involved examples
the complexity bound for the full logic is obtained with comparatively
little effort once the generic result is in place.
Applied to Presburger constraints, our results complement previous
work showing that the complexity of Presburger modal logic without
global assumptions is {\upshape\textsc{PSpace}}\xspace~\cite{DemriLugiez06,DemriLugiez10}, the
same as for the modal logic $K$ (or equivalently the description logic
$\mathcal{ALC}$). For polynomial inequalities on probabilities, our syntax
generalizes propositional \emph{polynomial weight}
formulae~\cite{FaginHalpernMegiddo90} to a full modal logic allowing
nesting of weights (and global assumptions).
In more detail, our first contribution is to show via a type
elimination algorithm~\cite{Pratt79} that also in presence of global
assumptions (and, hence, in presence of the universal
modality~\cite{GorankoPassy92}),
the satisfiability problem for coalgebraic modal logics is no harder
than for $K$, i.e.\ in {\mbox{\upshape\textsc{ExpTime}}}\xspace, provided that strict one-step
satisfiability is in {\mbox{\upshape\textsc{ExpTime}}}\xspace. Additionally, we show that this result
can be extended to cover nominals, i.e.\ to coalgebraic hybrid
logic~\cite{MyersEA09,SchroderEA09}. In the Presburger example, we
thus obtain that reasoning with global assumptions in Presburger
hybrid logic, equivalently reasoning with general TBoxes in the
extension of the description logic $\mathcal{ALCO}$ with Presburger constraints
(which subsumes $\mathcal{ALCOQ}$), remains in {\mbox{\upshape\textsc{ExpTime}}}\xspace.
We subsequently refine the algorithm to use global caching in the
spirit of Gor\'e and Nguyen~\cite{GoreNguyen13}, i.e.\ bottom-up
expansion of a tableau-like graph and propagation of satisfiability
and unsatisfiability through the graph. We thus potentially avoid
constructing the whole exponential-sized tableau, and provide
maneuvering space for heuristic optimization. Global caching
algorithms have been demonstrated to perform well in
practice~\cite{Gore:2008:EEG}. Moreover, we go on to present a
concrete algorithm, in which the fixpoint computations featuring in
the propagation step of the global caching algorithm are implemented
efficiently in the style of Liu and Smolka~\cite{lism98:simp}.
\paragraph{Organization} We discuss some preliminaries on fixpoints in
Section~\ref{sec:prelims}, and recall the generic framework of
coalgebraic logic in Section~\ref{sec:colog}. In
Section~\ref{sec:oss}, we discuss the concepts of one-step logic and
one-step satisfiability that underlie our generic algorithms. We
establish the generic {\mbox{\upshape\textsc{ExpTime}}}\xspace upper bound for reasoning with global
assumptions in coalgebraic modal logics via type elimination in
Section~\ref{sec:type-elim}. In Sections~\ref{sec:caching}
and~\ref{sec:concrete-alg}, we present the global caching algorithm
and its concretization. We extend the {\mbox{\upshape\textsc{ExpTime}}}\xspace complexity result to
coalgebraic hybrid logics in Section~\ref{sec:nominals}.
\paragraph{Related Work} Our algorithms use a semantic method, and as
such complement earlier results on global caching in coalgebraic
description logics that rely on tractable sets of tableau
rules~\cite{GoreEA10a}, which are not currently available for our
leading examples. (In fact, tableau-style axiomatizations of various
logics of linear inequalities over the reals and over the integers
have been given in earlier work~\cite{Kupke:2010:MLL}; however, over
the integers the rules appear to be incomplete: if $\sharp p$ denotes
the integer weight of successors satisfying $p$, then the formula
$2\sharp\top <1\lor 2\sharp\top>1$ is clearly valid, since twice an integer never equals~$1$, but cannot be derived.)
Demri and Lugiez' proof that Presburger modal logic \emph{without}
global assumptions is in {\upshape\textsc{PSpace}}\xspace~\cite{DemriLugiez06,DemriLugiez10}
can be viewed as showing that strict one-step satisfiability in
Presburger modal logic is in {\upshape\textsc{PSpace}}\xspace (as we discuss below, more recent
results in integer programming simplify this proof). Generally, our
coalgebraic treatment of Presburger modal logic and related logics
relies on an equivalence of the standard Kripke semantics of these
logics and an alternative semantics in terms of non-negative-integer-weighted
systems called \emph{multigraphs}~\cite{DAgostinoVisser02}, the point
being that the latter, unlike the former, is subsumed by the semantic
framework of coalgebraic logic (we explain details in
Section~\ref{sec:colog}).
Work related to XML query languages has shown that reasoning in
Presburger fixpoint logic is {\mbox{\upshape\textsc{ExpTime}}}\xspace complete~\cite{SeidlEA08}, and
that a logic with Presburger constraints and nominals is in
{\mbox{\upshape\textsc{ExpTime}}}\xspace~\cite{BarcenasLavalle13}, when these logics are interpreted
\emph{over finite trees}, thus not subsuming our {\mbox{\upshape\textsc{ExpTime}}}\xspace upper bound
for Presburger modal logic with global assumptions. It may be possible
to obtain the latter bound alternatively via looping tree automata
like for graded modal logic~\cite{TobiesThesis}. The description
logic~$\mathcal{ALCN}$ (featuring the basic~$\mathcal{ALC}$ operators and number
restrictions $\ge n.\,\top$) has been extended with explicit
quantification over integer variables and number restrictions
mentioning integer variables~\cite{BaaderSattler96}, in formulae such
as $\downarrow n.\,(({=} n\ R.\,\top)\land ({=}n\ S.\,\top))$ with $n$
an integer variable, and $\downarrow$ read as existential
quantification, so the example formula says that there are as many
$R$-successors as $S$-successors. This logic remains decidable if
quantification is restricted to be existential. It appears to be
incomparable to Presburger modal logic in that it does not support
general linear inequalities or qualified number restrictions, but on
the other hand allows the same integer variable to be used at
different modal depths.
Reasoning with polynomial inequalities over probabilities has been
studied in propositional logics~\cite{FaginHalpernMegiddo90} and in
many-dimensional modal logics~\cite{GutierrezBasultoEA17}, which work
with a single distribution on worlds rather than with world-dependent
probability distributions as
in~\cite{LarsenSkou91,HeifetzMongin01,FaginHalpern94}.
This paper is a revised and extended version of a previous conference
publication~\cite{KupkeEA15}; besides including full proofs and
additional examples, it contains new material on the concretized
version of the global caching algorithm
(Section~\ref{sec:concrete-alg}) and on {\mbox{\upshape\textsc{ExpTime}}}\xspace reasoning with global
assumptions in coalgebraic hybrid logics (Section~\ref{sec:nominals}).
\section{Preliminaries}\label{sec:prelims}
\noindent Our reasoning algorithms will centrally involve fixpoint
computations on powersets of finite sets; we recall some
notation. Let~$X$ be a finite set, and let $F\colon\mathcal{P} X\to\mathcal{P} X$ be
a function that is monotone with respect to set inclusion. A set
$Y\in\mathcal{P} X$ is a \emph{prefixpoint} of~$F$ if $F(Y)\subseteq Y$; a
\emph{postfixpoint} of~$F$ if $Y\subseteq F(Y)$; and a \emph{fixpoint}
of~$F$ if $Y=F(Y)$. By the Knaster-Tarski fixpoint theorem,~$F$ has a
least fixpoint $\mu F$ and a greatest fixpoint $\nu F$. Moreover,
$\mu F$ is even the least prefixpoint of~$F$, and $\nu F$ the greatest
postfixpoint. We alternatively use a $\mu$-calculus-like notation,
writing $\mu S.\,E(S)$ and $\nu S.\,E(S)$ for the least and greatest
fixpoints, respectively, of the function on~$\mathcal{P} X$ that maps
$S\in\mathcal{P} X$ to $E(S)$, where~$E$ is an expression (in an informal
sense) depending on~$S$. Since~$X$ is finite, we can compute least and
greatest fixpoints by \emph{fixpoint iteration} according to Kleene's
fixpoint theorem: Given a monotone~$F$ as above, the sets
$F^n(\emptyset)$ (where $F^n$ denotes $n$-fold application of~$F$)
form an ascending chain
\begin{equation*}
\emptyset = F^0(\emptyset) \subseteq F(\emptyset) \subseteq
F^2(\emptyset)\subseteq\dots,
\end{equation*}
which must stabilize at some $F^k(\emptyset)$ (i.e.\
$F^{k+1}(\emptyset)=F^k(\emptyset)$), and then $\mu
F=F^k(\emptyset)$. Similarly, the sets $F^n(X)$ form a descending
chain, which must stabilize at some $F^k(X)$, and then $\nu F=F^k(X)$.
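Since all fixpoints in this paper live on powersets of finite sets, Kleene iteration can be implemented directly. The following Python sketch (ours, purely for illustration) computes $\mu F$ and $\nu F$ for a monotone~$F$ given as a function on sets:
\begin{verbatim}
def lfp(F):
    """Least fixpoint of monotone F: iterate from the empty set."""
    S = frozenset()
    while True:
        T = frozenset(F(S))
        if T == S:
            return S
        S = T

def gfp(F, X):
    """Greatest fixpoint of monotone F: iterate from the top element X."""
    S = frozenset(X)
    while True:
        T = frozenset(F(S))
        if T == S:
            return S
        S = T

# Example: mu S. {0} | {s + 1 : s in S, s + 1 < 5}  =  {0, 1, 2, 3, 4}
print(lfp(lambda S: {0} | {s + 1 for s in S if s + 1 < 5}))
\end{verbatim}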
\section{Coalgebraic Logic}\label{sec:colog}
As indicated above, we cast our results in the generic framework of
\emph{coalgebraic logic}~\cite{CirsteaEA11}, which allows us to treat
structurally different modal logics, such as Presburger and
probabilistic modal logics, in a uniform way. We briefly recall the
main concepts needed. Familiarity with basic concepts of category
theory (e.g.~\cite{Awodey10}) will be helpful, but we will explain the
requisite definitions as far as necessary for the present
purposes. Overall, coalgebraic logic is concerned with the
specification of state-based systems in a general sense by means of
\emph{modalities}, which are logical connectives that traverse the
transition structure in specific ways. The basic example of such a
modal logic is what for our present purposes we shall term
\emph{relational} modal logic (e.g.~\cite{BlackburnEA01}). Here,
states are connected by a successor relation, and modalities
$\Box,\Diamond$ talk about the successors of a state: a formula of the
form~$\Box\phi$ holds for a state if \emph{all} its successors
satisfy~$\phi$, and a formula of the form~$\Diamond\phi$ holds for a
state if it has \emph{some} successor that satisfies~$\phi$. Our main
interest, however, is in logics where the transition structure of
states goes beyond a simple successor relation, with correspondingly
adapted, and often more complex, modalities.
We parametrize modal logics in terms of their syntax and their
coalgebraic semantics. In the \textbf{syntax}, we work with a modal
similarity type $\Lambda$ of modal operators with given finite
arities. The set $\mathcal{F}(\Lambda)$ of \emph{$\Lambda$-formulae} is
then given by the grammar
\begin{equation}\label{eq:grammar}
\mathcal{F}(\Lambda)\owns\phi,\psi::= \bot\mid\phi\land\psi\mid
\neg\phi\mid \heartsuit(\phi_1,\dots,\phi_n)\qquad (\heartsuit\in\Lambda\text{ $n$-ary}).
\end{equation}
We omit explicit propositional atoms; these can be regarded as nullary
modalities. The operators $\top$, $\to$, $\lor$, $\leftrightarrow$ are assumed
to be defined in the standard way. Standard examples of modal
operators include the mentioned (unary) box and diamond operators
$\Box,\Diamond$ of relational modal logic; as indicated above, in the
present setting, our main interest is in more complex examples
introduced in Sections~\ref{sec:presburger} and~\ref{sec:prob}. For
the complexity analysis of reasoning problems, we assume a suitable
encoding of the modal operators in~$\Lambda$ as strings over some
alphabet. The \emph{size}~$|\phi|$ of a formula~$\phi$ is then defined
by counting~$1$ for each Boolean operation ($\bot$, $\neg$, $\land$),
and for each modality~$\heartsuit\in\Lambda$ the length of the encoding
of~$\heartsuit$. We assume that numbers occurring in the description of
modal operators are coded in binary. To ease notation, we generally
let $\epsilon\phi$, for $\epsilon\in\{-1,1\}$, denote $\phi$ if
$\epsilon=1$ and $\neg\phi$ if $\epsilon=-1$.
The \textbf{semantics} of the logic is formulated in the paradigm of
\emph{universal coalgebra}~\cite{Rutten00}, in which a wide range of
state-based system types, e.g.\ relational, neighbourhood-based,
probabilistic, weighted, or game-based systems, is subsumed under the
notion of functor coalgebra. Here, a \emph{functor}~$T$ on the
category of sets assigns to each set~$X$ a set~$TX$, thought of as a
type of structured collections over~$X$, and to each map
$f\colon X\to Y$ a map $Tf\colon TX\to TY$, preserving identities and
composition. A standard example is the \emph{(covariant) powerset
functor} $\mathcal{P}$, which maps a set~$X$ to its powerset $\mathcal{P} X$ and a
map $f\colon X\to Y$ to the direct image map
$\mathcal{P} f\colon \mathcal{P} X\to\mathcal{P} Y$, i.e.\ $(\mathcal{P} f)(A)=f[A]$ for
$A\in\mathcal{P} X$. In this case, structured collections are thus just
sets. A further example, more relevant to our present purposes, and to
be taken up again in Section~\ref{sec:prob}, is the \emph{(discrete)
distribution functor} $\mathcal{D}$. This functor assigns to a set~$X$ the
set of discrete probability distributions on~$X$, which thus play the
role of structured collections, and to a map $f\colon X\to Y$ the map
$\mathcal{D} f\colon\mathcal{D} X\to\mathcal{D} Y$ that takes image measures; i.e.\
$(\mathcal{D} f)(\mu)(B)=\mu(f^{-1}[B])$ for $B\subseteq Y$. We recall here
that a probability distribution~$\mu$ on~$X$ is \emph{discrete} if
$\mu(A)=\sum_{x\in A}\mu(\{x\})$ for every $A\subseteq X$, i.e.\ we
can equivalently regard~$\mu$ as being given by its \emph{probability
mass function} $x\mapsto\mu(\{x\})$. Note that the support
$\{x\mid \mu(\{x\})\neq 0\}$ of~$\mu$ is then necessarily countable. A
functor~$T$ defines a system type in the shape of its class of
\emph{$T$-coalgebras}, which are pairs $C=(X,\gamma)$ consisting of a
set~$X$ of \emph{states} and a \emph{transition map}
\begin{equation}\label{eq:transition}
\gamma\colon X\to TX,
\end{equation}
thought of as assigning to each state~$x$ a structured
collection~$\gamma(x)\in TX$ of successors. For instance,
$\mathcal{P}$-coalgebras are just transition systems or Kripke frames, as
they assign to each state a set of successors (i.e.~they capture
precisely the semantic structures that underlie relational modal logic
as recalled at the beginning of the section), and $\mathcal{D}$-coalgebras
are Markov chains, as they assign to each state a distribution over
successors.
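Concretely, finite coalgebras for these two functors can be represented as plain maps from states to successor structures, as in the following sketch (ours, purely for illustration):
\begin{verbatim}
# A P-coalgebra (Kripke frame): each state is mapped to a set of successors.
kripke = {"x": {"y", "z"}, "y": set(), "z": {"z"}}

# A D-coalgebra (Markov chain): each state is mapped to a finitely
# supported probability mass function over successors.
markov = {"x": {"y": 0.5, "z": 0.5}, "y": {"y": 1.0}, "z": {"x": 0.3, "z": 0.7}}
\end{verbatim}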
We further parametrize the semantics over an interpretation of
modalities as predicate liftings, as follows.
Recall~\cite{Pattinson04,Schroder08} that an \emph{$n$-ary predicate
lifting} for $T$ is a natural transformation
\begin{equation*}
\lambda\colon Q^n\to Q\circ T^\mathit{op}
\end{equation*}
where~$Q$ denotes the \emph{contravariant powerset functor}. We shall
use predicate liftings in connection with the transition
map~\eqref{eq:transition} to let modalities look one step ahead in the
transition structure of a coalgebra. The definition of predicate
liftings unfolds as follows. Recall that every category~$\mathbf C$ has a
\emph{dual} category~$\mathbf C^\mathit{op}$, which has the same objects as~$\mathbf C$
and the same morphisms, but with the direction of morphisms
reversed. In particular, $\mathsf{Set}^\mathit{op}$, the dual category of the
category~$\mathsf{Set}$ of sets and maps, has sets as objects, and maps
$Y\to X$ as morphisms $X\to Y$. Then the contravariant powerset
functor $Q\colon \mathsf{Set}^\mathit{op}\to\mathsf{Set}$ assigns to a set~$X$ its powerset
$QX=\mathcal{P} X$, and to a map $f:X\to Y$ the preimage map $Q f:Q Y\to QX$,
given by $(Qf)(B)=f^{-1}[B]$ for $B\subseteq Y$. By $Q^n$, we denote
the pointwise $n$-th Cartesian power of~$Q$, i.e.\ $Q^nX=(QX)^n$. The
functor $T^\mathit{op}\colon \mathsf{Set}^\mathit{op}\to\mathsf{Set}^\mathit{op}$ acts like~$T$. Thus,
$\lambda$ is a family of maps $\lambda_X\colon (QX)^n\to Q(TX)$
indexed over all sets~$X$, satisfying the \emph{naturality} equation
$\lambda_X\circ (Q f)^{n}=Q(T^\mathit{op} f)\circ\lambda_Y$ for
$f\colon X\to Y$. That is,~$\lambda_X$ takes~$n$ subsets of~$X$ as
arguments, and returns a subset of~$TX$. The naturality condition
amounts to commutation of~$\lambda$ with preimage, i.e.\
\begin{equation}\label{eq:naturality}
\lambda_X(f^{-1}[B_1],\dots,f^{-1}[B_n])=(Tf)^{-1}[\lambda_Y(B_1,\dots,B_n)]
\end{equation}
for $B_1,\dots,B_n\subseteq Y$. We assign an $n$-ary predicate lifting
$\Sem{\heartsuit}$ to each modality $\heartsuit\in\Lambda$, of arity~$n$,
thus determining the semantics of~$\heartsuit$. For $t\in TX$ and
$A_1,\dots,A_n\subseteq X$, we write
\begin{equation}\label{eq:lifting-notation}
t\models\heartsuit(A_1,\dots,A_n)
\end{equation}
to abbreviate $t\in\Sem{\heartsuit}_X(A_1,\dots,A_n)$.
Predicate liftings thus turn predicates on the set $X$ of states into
predicates on the set $TX$ of structured collections of successors. A
basic example is the predicate lifting for the usual diamond modality
$\Diamond$, given by
$\Sem{\Diamond}_X(A)=\{B\in\mathcal{P} X\mid B\cap A\neq\emptyset\}$. We
will see more examples in Sections~\ref{sec:presburger}
and~\ref{sec:prob}. For purposes of the generic technical development,
\emph{we fix the data $\Lambda$,~$T$, and~$\Sem{\heartsuit}$ throughout,
and by abuse of notation sometimes refer to them jointly as (the
logic)~$\Lambda$.}
Satisfaction $x\models_C\phi$ (or just $x\models\phi$ when~$C$ is
clear from the context) of formulae $\phi\in\mathcal{F}(\Lambda)$ in
states~$x$ of a coalgebra $C=(X,\gamma)$ is defined inductively by
\begin{align*}
x& \not\models_C\bot \\
x& \models_C\phi\land\psi && \hspace{-4em}\text{iff}\quad x\models_C\phi\text{ and }x\models_C\psi\\
x& \models_C\neg\phi &&\hspace{-4em} \text{iff}\quad x\not\models_C\phi\\
x&\models_C \heartsuit(\phi_1,\dots,\phi_n)&&\hspace{-4em}
\text{iff}
\quad
\gamma(x)\models\heartsuit(\Sem{\phi_1}_C,\dots,\Sem{\phi_n}_C)
\end{align*}
where we write $\Sem{\phi}_C=\{x\in X\mid x\models_C\phi\}$ (and use
notation as per~\eqref{eq:lifting-notation}). Continuing the above
example, the predicate lifting $\Sem{\Diamond}$ thus induces exactly
the usual semantics of $\Diamond$: Given a $\mathcal{P}$-coalgebra, i.e.\
Kripke frame, $(X,\gamma\colon X\to\mathcal{P} X)$, we have
$x\models_C\Diamond\phi$ iff the set $\gamma(x)$ of successors of $x$
intersects with~$\Sem{\phi}_C$, i.e.\ iff~$x$ has a successor that
satisfies~$\phi$.
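The inductive semantics can likewise be phrased as a short recursive
procedure. The following Python sketch (again illustrative only, with a
tuple encoding of formulae that is ours) computes extensions
$\Sem{\phi}_C$ over a finite $\mathcal{P}$-coalgebra, with the clause for
$\Diamond$ instantiating the predicate lifting $\Sem{\Diamond}$:
\begin{verbatim}
# Formulae as nested tuples: ('bot',), ('neg', f), ('and', f, g), ('dia', f).

def extension(phi, X, gamma):
    """Return [[phi]]_C = {x in X | x |= phi} for C = (X, gamma)."""
    if phi[0] == 'bot':
        return set()
    if phi[0] == 'neg':
        return X - extension(phi[1], X, gamma)
    if phi[0] == 'and':
        return extension(phi[1], X, gamma) & extension(phi[2], X, gamma)
    if phi[0] == 'dia':
        A = extension(phi[1], X, gamma)
        # [[Diamond]]_X(A) = {B in P(X) | B meets A}, applied to gamma(x):
        return {x for x in X if gamma(x) & A}

X = {'x', 'y', 'z'}
gamma = {'x': {'y', 'z'}, 'y': {'y'}, 'z': set()}.__getitem__
top = ('neg', ('bot',))
print(extension(('dia', top), X, gamma))  # {'x', 'y'}: states with a successor
\end{verbatim}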
We will be interested in satisfiability under global assumptions, or,
in description logic terminology, reasoning with general
TBoxes~\cite{BaaderEA03}, that is, under background axioms that are
required to hold in every state of a model:
\begin{definition}[Global assumptions]
Given a formula~$\psi$, the \emph{global assumption}, a coalgebra
$C=(X,\gamma)$ is a \emph{$\psi$-model} if $\Sem{\psi}_C=X$; and a
formula $\phi$ is \emph{$\psi$-satisfiable} if there exists a
$\psi$-model $C$ such that $\Sem{\phi}_C\neq\emptyset$. The
\emph{satisfiability problem under global assumptions} is to decide,
given~$\psi$ and~$\phi$, whether~$\phi$ is $\psi$-satisfiable. We
extend these notions to sets~$\Gamma$ of formulae: We write
$x\models_C\Gamma$ if $x\models\phi$ for all $\phi\in\Gamma$, and we
say that $\Gamma$ is \emph{$\psi$-satisfiable} if there exists a
state~$x$ in a $\psi$-model~$C$ such that $x\models_C\Gamma$. For
distinction, we will occasionally refer to satisfiability in the
absence of global assumptions, i.e. $\top$-satisfiability, as
\emph{plain satisfiability}.
\end{definition}
\noindent
\begin{rem}
While the typical complexity of plain satisfiability is {\upshape\textsc{PSpace}}\xspace,
that of satisfiability under global assumptions is {\mbox{\upshape\textsc{ExpTime}}}\xspace. In
particular, this holds for the basic example of relational modal
logic~\cite{Ladner77,FischerLadner79}.
As indicated above, global assumptions are referred to as \emph{TBox
axioms} in description logic parlance, in honour of the fact that
they capture what is, in that context, called \emph{terminological
knowledge}: They record facts that hold about the world at large,
such as `every car has a motor' (formalized, e.g., in relational
modal logic as $\psi:=(\mathsf{Car}\to\Diamond\,\mathsf{Motor})$ if
the relation that underlies~$\Diamond$ is understood as
parthood). Contrastingly, a formula~$\phi$ is satisfiable under the
global assumption~$\psi$ as soon as $\phi$ holds in \emph{some}
state of some $\psi$-model, so~$\phi$ is thought of as describing
some states (\emph{individuals} in description logic terminology)
but not as being universally true. Correspondingly, the reasoning
task of checking satisfiability (under global assumptions) is called
\emph{concept satisfiability (under general TBoxes)} in description
logic. For instance, the atomic proposition (`concept')
$\mathsf{Car}$ is $\psi$-satisfiable in the above example, but not
of course necessarily true in every state of a $\psi$-model.
\emph{Global consequence}, i.e.~entailment between global
assumptions, reduces to satisfiability under global assumptions: We
say that a formula~$\phi$ is a \emph{global consequence} of a
formula~$\psi$ if every $\psi$-model is also a $\phi$-model. Then
$\phi$ is a global consequence of~$\psi$ iff $\neg\phi$ is not
$\psi$-satisfiable. For instance, in relational modal logic,
$\Box\psi$ is always a global consequence of~$\psi$,
i.e.~$\neg\Box\psi$ is not $\psi$-satisfiable; this fact corresponds
to the well-known necessitation rule of relational modal
logic~\cite{BlackburnEA01}.
\end{rem}
\begin{rem}\label{rem:univ-mod}
As indicated in the introduction, for purposes of the complexity
analysis, global assumptions are equivalent to the universal
modality. We make this claim more precise as follows. We define
\emph{coalgebraic modal logic with the universal modality} by
extending the grammar~\eqref{eq:grammar} with an additional
alternative
\begin{equation*}
\dots \mid \mathop{[\forall]} \phi,
\end{equation*}
and the semantics with the clause
\begin{equation*}
x\models_C \mathop{[\forall]}\phi\quad\text{iff}\quad y\models_C\phi\text{ for all $y\in X$}
\end{equation*}
for a coalgebra $C=(X,\gamma)$. In this logic, we restrict attention
to plain satisfiability checking, asking whether, for a given
formula~$\phi$, there exists a state~$x$ in a coalgebra~$C$ such
that $x\models_C\phi$. Then satisfiability under global assumptions
clearly reduces in logarithmic space to plain satisfiability in
coalgebraic modal logic with the universal modality -- a
formula~$\phi$ is satisfiable under the global assumption~$\psi$ iff
$\phi\land\mathop{[\forall]}\psi$ is satisfiable.
Conversely, satisfiability of a formula~$\phi$ in coalgebraic modal
logic with the universal modality is reducible in nondeterministic
polynomial time to satisfiability under global assumptions in
coalgebraic modal logic, as follows. Call a subformula of~$\phi$ a
\emph{$\mathop{[\forall]}$-subformula} if it is of the shape $\mathop{[\forall]}\psi$,
and let $\mathop{[\forall]}\psi_1,\dots,\mathop{[\forall]}\psi_n$ be the
$\mathop{[\forall]}$-subformulae of~$\phi$. Given a subset
$U\subseteq\{1,\dots,n\}$ and a subformula~$\chi$ of~$\phi$, denote
by $\chi[U]$ the $\mathop{[\forall]}$-free formula obtained from~$\chi$ by
replacing every $\mathop{[\forall]}$-subformula $\mathop{[\forall]}\psi_k$ that is not
in scope of a further~$\mathop{[\forall]}$-operator by~$\top$ if $k\in U$, and
by $\bot$ otherwise. We claim that
\begin{quote}
($*$) $\phi$ is satisfiable (in coalgebraic modal logic with the
universal modality) iff there is~$U\subseteq\{1,\dots,n\}$ such
that~$\phi[U]$, as well as each formula~$\neg\psi_k[U]$ for
$k\in\{1,\dots,n\}\setminus U$, are (separately) satisfiable under
the global assumption~$\psi_U$ given by
$\psi_U=\bigwedge_{k\in U}\psi_k[U]$.
\end{quote}
Using~($*$), we can clearly reduce satisfiability in coalgebraic
modal logic with the universal modality to satisfiability under
global assumptions in coalgebraic modal logic as claimed by just
guessing~$U$. It remains to prove~($*$). For the `only if'
direction, suppose that $x\models_C\phi$ for some state~$x$ in a
$T$-coalgebra~$C=(X,\gamma)$. Put
$U=\{k\mid x\models_C\mathop{[\forall]}\psi_k\}$. It is readily checked that,
in the above notation,~$C$ is a $\psi_U$-model, $x\models_C\phi[U]$,
and for each $k\in\{1,\dots,n\}\setminus U$, $\neg\psi_k[U]$ is
satisfied in some state of~$C$. For the converse implication, let
$U\subseteq\{1,\dots,n\}$, let $C$ and $C_k$, for
$k\in\{1,\dots,n\}\setminus U$, be $\psi_U$-models, let
$x\models_C\phi[U]$, and let $x_k\models_{C_k}\neg\psi_k[U]$ for
$k\in\{1,\dots,n\}\setminus U$. Let $D$ be the disjoint union of~$C$
and the~$C_k$; it is straightforward to check that $x\models_D\phi.$
It thus follows from the exponential-time upper bound for
satisfiability checking under global assumptions proved in
Section~\ref{sec:type-elim} that satisfiability checking in
coalgebraic modal logic with the universal modality is also in
{\mbox{\upshape\textsc{ExpTime}}}\xspace. On the other hand, the non-deterministic
reduction described above of course does not allow for inheriting
practical reasoning algorithms. The design of tableau-based
algorithms in presence of the universal modality is faced with the
challenge that instances of $\mathop{[\forall]}$ uncovered deep in the formula
by the rule-based decomposition will subsequently influence the
entire tableau built so far. Our global caching algorithm
(Section~\ref{sec:caching}) is meant for reasoning under global
assumptions; we leave the design of a practical generic reasoning
algorithm for coalgebraic modal logic with the universal modality to
future work.
\end{rem}
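To make the nondeterministic reduction of Remark~\ref{rem:univ-mod}
concrete, the following Python sketch replaces the guessing of~$U$ by
exhaustive enumeration. The satisfiability oracle \texttt{sat} and the
substitution operation \texttt{subst} computing~$\chi[U]$ are assumed
as given and are not implemented here; indices are $0$-based.
\begin{verbatim}
# Sketch of the reduction via (*): deterministic enumeration of the
# guessed set U. `sat(chi, psi)` is an assumed oracle for
# psi-satisfiability of chi; `subst(chi, U)` computes chi[U].
from itertools import combinations

def sat_with_universal(phi, psis, subst, sat):
    n = len(psis)   # psis = [psi_1, ..., psi_n], the [forall]-subformulae
    for r in range(n + 1):
        for U in map(set, combinations(range(n), r)):   # "guess" U
            psi_U = ('and', [subst(psis[k], U) for k in sorted(U)])
            if (sat(subst(phi, U), psi_U)
                    and all(sat(('neg', subst(psis[k], U)), psi_U)
                            for k in range(n) if k not in U)):
                return True
    return False
\end{verbatim}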
\noindent Generic algorithms in coalgebraic logic frequently rely on
complete rule sets for the given modal
operators~\cite{SchroderPattinson09a} (an overview of the relevant
concepts is given in Remark~\ref{rem:rules}); in particular, such a
rule set is assumed by our previous algorithm for satisfiability
checking under global assumptions in coalgebraic hybrid
logic~\cite{SchroderEA09}. In the present paper, our interest is in
cases for which suitable rule sets are not (currently) available. We
proceed to present our leading examples of this kind, Presburger modal
logic and a probabilistic modal logic with polynomial
inequalities. For the sake of readability, we focus on the case with a
single (weighted) transition relation, and omit propositional
atoms. Both propositional atoms and indexed transition relations are
easily added, e.g.\ using compositionality results in coalgebraic
logic~\cite{SchroderPattinson11MSCS}, and in fact we use them freely
in the examples; more details on this point will be provided in
Remark~\ref{rem:atoms}.
\subsection{Presburger Modal Logic}
\label{sec:presburger}
Presburger modal logic~\cite{DemriLugiez10} admits statements in
Presburger arithmetic over numbers $\sharp\phi$ of successors
satisfying a formula $\phi$. Throughout, we let $\mathsf{Rels}$ denote the set
$\{<,>,=\}\cup\{\eqmod{k}\mid k\in\mathbb{N}\}$ of \emph{arithmetic
relations}, with $\eqmod{k}$ read as congruence modulo
$k$. Syntactically, Presburger modal logic is then defined in our
syntactic framework by taking the modal similarity type
\begin{equation*}
\Lambda =\{L_{u_1,\dots,u_n;\sim v}\mid
{\sim}\in\mathsf{Rels}, n\in\mathbb{N}, u_1,\dots,u_n,v\in\mathbb{Z}\}
\end{equation*}
where $L_{u_1,\dots,u_n;\sim v}$ has arity~$n$. The application
of a modal operator $L_{u_1,\dots,u_n;\sim v}$ to argument
formulae $\phi_1,\dots,\phi_n$ is written
\begin{equation*}\textstyle
\textstyle \sum_{i=1}^nu_i\cdot\sharp \phi_i\sim v.
\end{equation*}
We refer to these modalities as \emph{Presburger constraints}. Weak
inequalities can be coded as strict ones, replacing, e.g., $\ge k$
with $>k-1$. The numbers $u_i$ and $v$, as well as the modulus
$k$ in $\eqmod{k}$, are referred to as the \emph{coefficients} of a
Presburger constraint. We also apply this terminology (Presburger
constraint, coefficient) to constraints of the form
$\sum_{i=1}^nu_i\cdot x_i\sim v$ in general, interpreted over the
non-negative integers.
The semantics of Presburger modal logic was originally defined over
standard Kripke frames; in order to make sense of sums with arbitrary
(possibly negative) integer coefficients, one needs to
restrict to finitely branching frames. We consider an alternative
semantics in terms of \emph{multigraphs}, which have some key
technical advantages~\cite{DAgostinoVisser02}. Informally, a
multigraph is like a Kripke frame but with every transition edge
annotated with a non-negative-integer-valued multiplicity; ordinary
finitely branching Kripke frames can be viewed as multigraphs by just
taking edges to be transitions with multiplicity~$1$. Formally, a
multigraph can be seen as a coalgebra for the \emph{finite multiset
functor}~$\mathcal{B}$: For a set $X$, $\mathcal{B} X$ consists of the
\emph{finite multisets over $X$}, which are maps $\mu\colon X\to\mathbb{N}$
with finite support, i.e.\ $\mu(x)>0$ for only finitely many $x$. We
view $\mu$ as an $\mathbb{N}$-valued measure, and write
$\mu(Y)=\sum_{x\in Y}\mu(x)$ for $Y\subseteq X$. Then, $\mathcal{B} f$, for
maps $f\colon X\to Y$, acts as image measure formation in the same way
as the distribution functor~$\mathcal{D}$ described above, i.e.\
$(\mathcal{B} f)(\mu)(B)=\mu(f^{-1}[B])$ for $\mu\in\mathcal{B} X$ and
$B\subseteq Y$. A coalgebra $\gamma\colon X\to\mathcal{B} X$ assigns to each
state $x$ a multiset $\gamma(x)$ of successor states, i.e.\ each
successor state is assigned a transition multiplicity.
The semantics of the modal operators is then given by the predicate liftings
\begin{equation*}\textstyle
\Sem{L_{u_1,\dots,u_n;\sim v}}_X(A_1,\dots,A_n) =
\{\mu\in\mathcal{B} X \mid \sum_{i=1}^n u_i \cdot \mu(A_i) \sim
v\},
\end{equation*}
that is, a state $x$ in a $\mathcal{B}$-coalgebra $C=(X,\gamma)$ satisfies
$\sum_{i=1}^n u_i \cdot \sharp\phi_i\sim v$ iff
$\sum_{i=1}^n u_i \cdot \gamma(x)(\Sem{\phi_i}_C)\sim
v$.
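For illustration, evaluating a single Presburger constraint at a state
of a finite multigraph amounts to the following computation (a Python
sketch with our identifiers; the extensions of the argument formulae
are assumed as already computed):
\begin{verbatim}
# Check of #(a) + #(b) - #(c) > 0 at state x of a B-coalgebra; gamma(x)
# is a multiset of successors, given as a dict of multiplicities.
gamma = {'x': {'y': 2, 'z': 1}}             # x --2--> y, x --1--> z
ext = {'a': {'y'}, 'b': set(), 'c': {'z'}}  # extensions of the arguments

def measure(mu, A):
    """mu(A) = sum of the multiplicities of the elements of A."""
    return sum(m for s, m in mu.items() if s in A)

coeffs = [(1, 'a'), (1, 'b'), (-1, 'c')]
lhs = sum(u * measure(gamma['x'], ext[p]) for u, p in coeffs)
print(lhs > 0)  # 2 + 0 - 1 = 1 > 0: True
\end{verbatim}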
\begin{rem}
\emph{Graded modal logic}~\cite{Fine72} is interpreted over the same
systems (originally Kripke frames, equivalently multigraphs) as
Presburger modal logic. It combines a Boolean propositional base
with modalities $\gldiamond{k}$ `in more than~$k$ successors'; these
have made their way into modern expressive description logics in the
shape of \emph{qualified number restrictions}~\cite{BaaderEA03}. The
multigraph semantics of graded modal logic is captured
coalgebraically by assigning to $\gldiamond{k}$ the predicate
lifting for~$\mathcal{B}$ given by
$\Sem{\gldiamond{k}}_X(A)=\{\mu\in\mathcal{B}(X)\mid\mu(A)>k\}$.
Presburger modal logic subsumes graded modal logic, via a
translation~$t$ of graded modal logic into Presburger modal logic
that is defined by commutation with all Boolean connectives and
$t(\gldiamond{k}\phi)=(\sharp(t(\phi))>k)$.
\end{rem}
\noindent We note that satisfiability is the same over Kripke frames
and over multigraphs:
\begin{lemma}[{\cite[Remark~6]{Schroder07}; \cite[Lemma~2.4]{SchroderVenema18}}]\label{lem:multi-vs-kripke}
A formula $\phi$ is $\psi$-satisfiable over multigraphs iff $\phi$
is $\psi$-satisfiable over Kripke frames.
\end{lemma}
\noindent (The proof of the non-trivial direction is by making copies
of states to accommodate multiplicities.)
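A minimal sketch of this copying construction (our code; it covers only
the preservation of successor counts, not the full inductive argument)
is as follows:
\begin{verbatim}
# Turn a multigraph into a Kripke frame by making enough copies of each
# state; every copy of x then has exactly gamma[x][y] successors among
# the copies of y, so successor counts are preserved.

def multigraph_to_kripke(states, gamma):
    copies = {y: max([g.get(y, 0) for g in gamma.values()] + [1])
              for y in states}
    new_states = {(y, i) for y in states for i in range(copies[y])}
    rel = {s: set() for s in new_states}
    for (x, i) in new_states:
        for y, m in gamma[x].items():
            rel[(x, i)] |= {(y, j) for j in range(m)}
    return new_states, rel

S, R = multigraph_to_kripke({'x', 'y'}, {'x': {'y': 2}, 'y': {}})
print(R[('x', 0)])  # {('y', 0), ('y', 1)}: multiplicity 2 becomes 2 edges
\end{verbatim}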
\begin{rem}
From the point of view of the present work, the technical reason to
work with multigraphs rather than Kripke frames in the semantics of
Presburger modal logic is that the key naturality
condition~\eqref{eq:naturality} fails over Kripke semantics, i.e.\
for the powerset functor. Beyond the mere fact that for this reason,
our methods do not apply to the Kripke semantics of Presburger or
graded modal logic, we note that indeed key results of coalgebraic
modal logic fail to hold for this semantics. For instance, we shall
prove later (Lemma~\ref{lem:one-step-models}) that coalgebraic modal
logic has the exponential model property, i.e.\ every satisfiable
formula~$\phi$ has a model with at most exponentially many states in
the number of subformulae of~$\phi$. Over Kripke semantics, this
clearly fails already for simple formulae such as~$\sharp(\top)>k$.
\end{rem}
\begin{rem}\label{rem:atoms}
As indicated above, the overall setup generalizes effortlessly to
allow for both propositional atoms and multiple (weighted)
transition relations: Let~$\mathsf A$ be a set of \emph{propositional
atoms} and~$\mathsf R$ a set of \emph{relation names} (\emph{atomic
concepts} and \emph{roles}, respectively, in description logic
terminology). We then take the modal operators to be the
propositional atoms and all operators
\begin{equation*}\textstyle
L_{u_1^{r_1},\dots,u_n^{r_n};\sim v}
\end{equation*}
where ${\sim}\in\mathsf{Rels}$, $n\in\mathbb{N}$, $u_1,\dots,u_n,v\in\mathbb{Z}$,
and $r_1,\dots,r_n \in \mathsf R$. The arity of
$L_{u_1^{r_1},\dots,u_n^{r_n};\sim v}$ is~$n$, and the
application of $L_{u_1^{r_1},\dots,u_n^{r_n};\sim v}$ to
argument formulae $\phi_1,\dots,\phi_n $ is written
\begin{equation*}
\textstyle
\sum_{i=1}^nu_i\cdot\sharp_{r_i} \phi_i\sim v
\end{equation*}
where $\sharp_r(\cdot)$ is meant to represent the number of
successors along the (weighted) transition relation $r$. The logic
is then interpreted over structures that assign to each state~$x$ a
subset of~$\mathsf A$ (of propositional atoms that hold at~$x$) and
$\mathsf R$-many multisets of successors. Such structures are precisely
coalgebras for the functor that maps a set~$X$ to
$\mathcal{P} \mathsf A\times (\mathcal{B} X)^\mathsf R$; the associated predicate liftings
are given by
\begin{align*}
\Sem{L_{u_1^{r_1},\dots,u_n^{r_n};\sim v}}_X(A_1,\dots,A_n) & =
\{(U,f)\in\mathcal{P}\mathsf A\times (\mathcal{B} X)^\mathsf R \mid \textstyle\sum_{i=1}^n u_i \cdot f(r_i)(A_i) \sim
v\}\\
\Sem{p}_X & = \{(U,f)\in\mathcal{P}\mathsf A\times (\mathcal{B} X)^\mathsf R \mid p\in U\}.
\end{align*}
The effect of these extensions on the technical development does not
go beyond heavier notation, so as announced above we restrict the
exposition to only a single transition relation and no propositional
atoms, for readability.
\end{rem}
\begin{rem}\label{rem:graded-rules}
Two of us (Kupke and Pattinson) have exhibited modal sequent rules
for various modal logics of linear inequalities, both over the
non-negative reals (e.g.\ probabilistic and stochastic logics) and
over the non-negative integers~\cite{Kupke:2010:MLL}. One of these
logics can be seen as the fragment of Presburger modal logic
obtained by removing modular congruence~$\equiv_k$. Soundness and
completeness of the rules for this logic would imply our upper
complexity bounds by instantiating our own previous generic results
in coalgebraic logic~\cite{SchroderEA09}, which rely on precisely
such rules. However, while the rules given for logics with
real-valued multiplicities appear to be sound and complete as
claimed, the rule system given for the integer-valued case is sound
but clearly not complete, as indicated already in
Section~\ref{sec:intro}. For instance, the formula
$\phi:=(2\sharp\top<1\lor 2\sharp\top>1)$ is valid for integer
multiplicities ($\phi$ says that the integer total weight of all
successors of a state cannot be $1/2$) but not provable in the given
rule system. The latter fact is most easily seen by comparing the
rule for integer multiplicities \cite[Section~4]{Kupke:2010:MLL}
with the rule given for the case of real-valued multiplicities
\cite[Section~5]{Kupke:2010:MLL}: The rule instances applying
to~$\phi$ are the same in both cases, and as the rules are easily
seen to be sound in the real-valued case,~$\phi$ is not provable (as
it fails to be valid in the real-valued case). There does not seem
to be an easy fix for this, so for the time being there is no known
sound and complete set of modal sequent rules (equivalently, modal
tableau rules) for Presburger modal logic.
\end{rem}
\smallskip\noindent\textbf{Expressiveness and Examples.} As mentioned
above, Presburger modal logic subsumes graded modal
logic~\cite{Fine72}. Moreover, Presburger modal logic subsumes
majority logic~\cite{PacuitSalame04} (more precisely, the version of
majority logic interpreted over finitely branching systems): The
\emph{weak majority} formula $W\phi$ (`at least half the successors
satisfy $\phi$') is expressed in Presburger modal logic as
$\sharp(\phi)-\sharp(\neg\phi)\ge 0$. Using propositional atoms,
incorporated in the way discussed above, we express the examples given
in the introduction (`the majority of university students are female',
`dance classes have even numbers of participants') by the formulae
\begin{gather*}
\ms{University} \to (\sharp_\ms{hasStudent}\ms{Female}
- \sharp_\ms{hasStudent}\ms{Male}>0)\\
\ms{DanceCourse} \to (\sharp_\ms{hasParticipant}\ms{\top}
\equiv_2 0)
\end{gather*}
where indices informally indicate the understanding of the successor
relation. In the extension with multiple successor relations
(Remark~\ref{rem:atoms}), one may also impose inequalities between
numbers of successors under different roles as in the introduction,
e.g.~in the formula
\begin{equation*}
\ms{Workaholic} \to (\rcount{\ms{hasColleague}}{\top}- \rcount{\ms{hasFriend}}{\top} >0)
\end{equation*}
(`workaholics have more colleagues than friends'). As an example
involving non-unit coefficients, a chamber of parliament in which a
motion requiring a 2/3 majority has sufficient support is described by
the formula
\begin{equation*}
\sharp_{\ms{hasMember}}(\ms{SupportsMotion})-
2\sharp_{\ms{hasMember}}(\neg\ms{SupportsMotion})\ge 0.
\end{equation*}
\subsection{Probabilistic Modal Logic with Polynomial Inequalities}
\label{sec:prob}
Probabilistic logics of various forms have been studied in different
contexts such as reactive systems~\cite{LarsenSkou91} and uncertain
knowledge~\cite{HeifetzMongin01,FaginHalpern94}. A typical feature of
such logics is that they talk about probabilities $w(\phi)$ of
formulae $\phi$ holding for the successors of a state; the concrete
syntax then variously includes only inequalities of the form
$w(\phi)\sim p$ for ${\sim}\in\{>,\ge,=,<,\le\}$ and
$p\in\mathbb{Q}\cap[0,1]$~\cite{LarsenSkou91,HeifetzMongin01}, linear
inequalities over terms $w(\phi)$~\cite{FaginHalpern94}, or polynomial
inequalities, with the latter so far treated only in either purely
propositional settings~\cite{FaginHalpernMegiddo90} or in
many-dimensional logics such as the probabilistic description logic
Prob-$\mathcal{ALC}$~\cite{GutierrezBasultoEA17}, which use a single global
distribution over worlds. An important use of polynomial inequalities
over probabilities is to express independence
constraints~\cite{GutierrezBasultoEA17}. For instance, two properties $\phi$
and $\psi$ (of successors) are independent if
$w(\phi\land\psi)=w(\phi)w(\psi)$, and we can express that the
probability that the first of two independently sampled successors
satisfies~$\phi$ and the second satisfies~$\psi$ is at least~$p$ by a
formula such as $w(\phi)w(\psi)\ge p$; the latter is similar to the
\emph{independent product} of real-valued probabilistic modal
logic~\cite{Mio11}.
We thus define the following \emph{probabilistic modal logic with
polynomial inequalities}: The system type is given by a variant of
the distribution functor~$\mathcal{D}$ as described above, viz.\ the
\emph{subdistribution functor}~$\mathcal{S}$, in which we require for
$\mu\in\mathcal{S} X$ that the measure of the whole set~$X$ satisfies
$\mu(X)\le 1$ rather than $\mu(X)=1$. Then $\mathcal{S}$-coalgebras
$\gamma:X\to\mathcal{S} X$ are like Markov chains (where $\gamma(x)$ is
interpreted as a distribution over possible future evolutions of the
system), or (single-agent) type spaces in the sense of epistemic
logic~\cite{HeifetzMongin01} (where $\gamma(x)$ is interpreted as the
subjective probabilities assigned by the agent to possible alternative
worlds in world $x$), with the difference that each state~$x$ has a
probability $1-\gamma(x)(X)$ of being deadlocked. We use the modal
similarity type
\begin{equation*}
\Lambda=\{L_p\mid p\in\mathbb{Q}[X_1,\dots,X_n], n\ge 0\};
\end{equation*}
for $p\in\mathbb{Q}[X_1,\dots,X_n]$, the modality~$L_p$ has
arity~$n$. The application of~$L_p$ to formulae
$\phi_1,\dots,\phi_n$ is written by substituting $w(\phi_i)$ for each
variable $X_i$ in~$p$ and postulating the result to be non-negative, i.e.~as
\begin{equation*}
p(w(\phi_1),\dots,w(\phi_n))\ge 0.
\end{equation*}
For instance, $L_{X_1-X_2X_3}(\phi\land\psi,\phi,\psi)$ is written
more readably as $w(\phi\land\psi)-w(\phi)w(\psi)\ge 0$, and thus
expresses one half of the above-mentioned independence constraint (the
other half, of course, being $w(\phi)w(\psi)-w(\phi\land\psi)\ge 0$).
We correspondingly interpret $L_p$ by the predicate lifting
\begin{equation*}
\Sem{L_p}_X(A_1,\dots,A_n)=\{\mu\in\mathcal{S} X\mid p(\mu(A_1),\dots,\mu(A_n))\ge 0\}.
\end{equation*}
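For illustration, checking such a polynomial constraint on a concrete
(finite) subdistribution is a direct computation; the following Python
sketch (illustrative only; all identifiers are ours) verifies the
independence constraint for two events of probability $1/2$ each:
\begin{verbatim}
# Check of w(a & b) - w(a) w(b) >= 0 on a subdistribution mu over
# four successor states, with [[a]], [[b]] assumed as given.
mu = {'s1': 0.25, 's2': 0.25, 's3': 0.25, 's4': 0.25}
ext = {'a': {'s1', 's2'}, 'b': {'s1', 's3'}}

def w(A):
    return sum(p for s, p in mu.items() if s in A)

lhs = w(ext['a'] & ext['b']) - w(ext['a']) * w(ext['b'])
print(lhs >= 0)  # 0.25 - 0.5 * 0.5 = 0.0: True; the converse inequality
                 # holds as well, so a and b are in fact independent here.
\end{verbatim}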
We will use Presburger modal logic and probabilistic modal logic as
running examples in the sequel.
\begin{rem}
The use of~$\mathcal{S}$ in place of~$\mathcal{D}$ serves only to avoid
triviality of the logic in the absence of propositional atoms: Since
$|\mathcal{D}(1)|=1$ for any singleton set~$1$, all states in
$\mathcal{D}$-coalgebras (i.e.\ Markov chains) are bisimilar, and thus
satisfy the same formulae of any coalgebraic modal logic on
$\mathcal{D}$-coalgebras~\cite{Pattinson04,Schroder08}, so any formula in
such a logic is either valid or unsatisfiable. This phenomenon
disappears as soon as we add propositional atoms as per
Remark~\ref{rem:atoms}. All our results otherwise apply to~$\mathcal{D}$
in the same way as to~$\mathcal{S}$.
\end{rem}
\section{One-Step Satisfiability}\label{sec:oss}
The key ingredient of our algorithmic approach is to deal with modal
operators (i.e., in our running examples, arithmetic statements about
numbers or weights of successors) level by level; the core concepts of
the arising notion of one-step satisfiability checking go back to work
on plain satisfiability in coalgebraic
logics~\cite{Schroder07,SchroderPattinson08d,MyersEA09}. From now on,
we restrict the technical treatment to unary modal operators to avoid
cumbersome notation, although our central examples all do have modal
operators with higher arities; a fully general treatment requires no
more than additional indexing.
Considering only one level of modal operators and abstracting from
their arguments amounts to working in a \emph{one-step logic}, whose
syntax and semantics are defined as follows (subsequent to fixing some
notation).
\begin{definition}[Notation for propositional variables and
propositional logic]\label{def:prop}
We fix a countably infinite set $\mathcal{V}$ of \emph{(propositional)
variables}. We denote the set of Boolean formulae (presented in
terms of~$\bot$,~$\land$, and~$\neg$) over a set~$V\subseteq\mathcal{V}$ of
propositional variables by $\mathsf{Prop}(V)$; that is, formulae
$\eta,\rho\in\mathsf{Prop}(V)$ are defined by the grammar
\begin{equation*}
\eta,\rho::=\bot\mid\neg\eta\mid\eta\land\rho\mid a \qquad (a\in V).
\end{equation*}
We write~$2$ for the set $\{\bot,\top\}$ of truth values, and then
have a standard notion of satisfaction of propositional formulae
over~$V$ by valuations $\kappa\colon V\to 2$. As usual, a
\emph{literal} over $V$ is a propositional variable $a\in V$ or a
negated variable $\neg a$ for $a\in V$, often written~$\epsilon a$
with $\epsilon\in\{-1,1\}$ as per the previous convention
(Section~\ref{sec:colog}), and a \emph{conjunctive clause} over~$V$
is a finite conjunction $\epsilon_1a_1\land\dots\land\epsilon_na_n$
of literals over~$V$, represented as a finite set of literals. We
write $\Phi\vdash_{\mathit{PL}}\eta$ to indicate that a set
$\Phi\subseteq\mathsf{Prop}(V)$ \emph{propositionally entails}
$\eta\in\mathsf{Prop}(V)$, meaning that there exist
$\rho_1,\dots,\rho_n\in\Phi$ such that
$\rho_1\land\dots\land\rho_n\to\eta$ is a propositional
tautology. For $\{\rho\}\vdash_{\mathit{PL}}\eta$, we briefly write
$\rho\vdash_{\mathit{PL}}\eta$.
By a \emph{substitution}, we will mean a map~$\sigma$ from (some
subset of)~$\mathcal{V}$ into another set~$Z$, typically a set of formulae
of some kind. In case $Z=\mathcal{V}$, we will also refer to~$\sigma$ as a
\emph{renaming}. We write application of a substitution~$\sigma$ to
formulae~$\phi$ containing propositional variables (either
propositional formulae or formulae of the one-step logic as
introduced in the next definition) in postfix notation $\phi\sigma$
as usual (i.e.~$\phi\sigma$ is obtained from~$\phi$ by replacing all
occurrences of propositional variables~$a$ in~$\phi$ with
$\sigma(a)$). We extend the propositional entailment relation to
formulae beyond~$\mathsf{Prop}(\mathcal{V})$ by substitution, i.e.~for a
formula~$\psi$ and a set~$\Phi$ of formulae (in the one-step logic
or in coalgebraic modal logic), we write $\Phi\vdash_{\mathit{PL}}\psi$ if
$\Phi,\psi$ can be written in the form $\Phi=\Phi'\sigma$,
$\psi=\psi'\sigma$ for a substitution~$\sigma$ and
$\Phi'\subseteq\mathsf{Prop}(\mathcal{V})$, $\psi'\in\mathsf{Prop}(\mathcal{V})$ such that
$\Phi'\vdash_{\mathit{PL}}\psi'$ in the sense defined above (that is, if there
are $\phi_1,\dots,\phi_n\in\Phi$ such that
$\phi_1\land\dots\land\phi_n\to\psi$ is a substitution instance of a
propositional tautology).
\end{definition}
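\noindent Over the finitely many variables occurring in given formulae,
the relation $\vdash_{\mathit{PL}}$ can of course be decided by brute force. The
following Python sketch (illustrative only, and exponential in the
number of variables) spells this out for formulae of $\mathsf{Prop}(V)$:
\begin{verbatim}
# Propositional entailment Phi |-_PL eta by truth tables. Formulae are
# nested tuples ('bot',), ('var', a), ('neg', f), ('and', f, g).
from itertools import product

def holds(eta, kappa):
    if eta[0] == 'bot': return False
    if eta[0] == 'var': return kappa[eta[1]]
    if eta[0] == 'neg': return not holds(eta[1], kappa)
    if eta[0] == 'and': return holds(eta[1], kappa) and holds(eta[2], kappa)

def entails(Phi, eta, variables):
    """Every valuation kappa: V -> 2 satisfying all of Phi satisfies eta."""
    for bits in product([False, True], repeat=len(variables)):
        kappa = dict(zip(variables, bits))
        if all(holds(rho, kappa) for rho in Phi) and not holds(eta, kappa):
            return False
    return True
\end{verbatim}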
\noindent The \textbf{syntax} of the one-step logic is given in the following
terms:
\begin{definition}[One-step pairs]\label{def:one-step}
Given a set~$V\subseteq\mathcal{V}$ of propositional variables, we denote by
\begin{equation*}
\Lambda(V)=\{\heartsuit a\mid\heartsuit\in\Lambda,a\in V\}
\end{equation*}
the set of \emph{modal atoms over~$V$}. A \emph{modal literal
over~$V$} is a modal atom over~$V$ or a negation thereof, i.e.~has
the form either $\heartsuit a$ or $\neg\heartsuit a$ for
$\heartsuit\in\Lambda$, $a\in V$. A \emph{modal conjunctive
clause}~$\phi$ is a finite conjunction
$\epsilon_1\heartsuit_1 a_1\land\dots\land\epsilon_n\heartsuit_na_n$ of
modal literals over~$V$, represented as a finite set of modal
literals. We write $\mathsf{Var}(\phi)=\{a_1,\dots,a_n\}$ for the set of
variables occurring in~$\phi$. We say that~$\phi$ is \emph{clean}
if~$\phi$ mentions each variable in~$V$ at most once. A
\emph{one-step pair} $(\phi,\eta)$ \emph{over $V$} consists of
\begin{myitemize}
\item a clean modal conjunctive clause~$\phi$ over~$V$ and
\item a Boolean formula $\eta\in\mathsf{Prop}(\mathsf{Var}(\phi))$.
\end{myitemize}
We measure the size~$|\phi|$ of a modal conjunctive clause~$\phi$ by
counting~$1$ for each variable and each propositional operator, and
for each modality the size of its encoding (in the same way as in
the definition of the size of modal formulae in
Section~\ref{sec:colog}). The propositional component~$\eta$ is
assumed to be given as a DNF consisting of conjunctive clauses each
mentioning every variable occurring in~$\phi$ (such conjunctive
clauses are effectively truth valuations for the variables
in~$\phi$), and the size~$|\eta|$ of~$\eta$ is the size of this DNF.
\end{definition}
\noindent In a one-step pair $(\phi,\eta)$, the modal component~$\phi$
effectively specifies what happens one transition step ahead from the
(implicit) current state; as indicated above, in the actual
satisfiability checking algorithm,~$\phi$ will arise by peeling off
the top layer of modalities of a given modal formula, with the
propositional variables in~$V$ abstracting the argument formulae of
the modalities. The propositional component~$\eta$ then records the
propositional dependencies among the argument formulae. Formally, the
\textbf{semantics} of the one-step logic is given as follows:
\begin{definition}[One-step models, one-step satisfiability] A
\emph{one-step model} $M=(X,\tau, t)$ over $V$ consists of
\begin{myitemize}
\item a set $X$ together with a $\mathcal{P} X$-valuation $\tau\colon V\to\mathcal{P} X$; and
\item an element $t\in TX$ (thought of as the structured collection
of successors of an anonymous state).
\end{myitemize}
For $\eta\in\mathsf{Prop}(V)$, we write $\tau(\eta)$ for the interpretation
of $\eta$ in the Boolean algebra $\mathcal{P} X$ under the valuation
$\tau$; explicitly, $\tau(\bot)=\emptyset$,
$\tau(\neg\eta)=X\setminus\tau(\eta)$, and
$\tau(\eta\land\rho)=\tau(\eta)\cap\tau(\rho)$. For a modal atom
$\heartsuit a\in\Lambda(V)$, we put
\begin{equation*}
\tau(\heartsuit a)=\Sem{\heartsuit}_X(\tau(a))\subseteq TX.
\end{equation*}
We extend this assignment to modal literals and modal conjunctive
clauses using the Boolean algebra structure of $\mathcal{P}(TX)$; explicitly,
\begin{align*}
\tau(\neg\heartsuit a)
&=TX\setminus\tau(\heartsuit a)\\
\tau(\epsilon_1\heartsuit_1a_1\land\dots\land\epsilon_n\heartsuit_na_n)
&=\tau(\epsilon_1\heartsuit_1a_1)\cap\dots\cap\tau(\epsilon_n\heartsuit_na_n).
\end{align*}
We say that the one-step model $M=(X,\tau,t)$ \emph{satisfies} the
one-step pair $(\phi,\eta)$, and write $M\models(\phi,\eta)$, if
\begin{equation*}
\tau(\eta)=X\qquad\text{and}\qquad t\in\tau(\phi).
\end{equation*}
(That is,~$\eta$ is a global propositional constraint on the values
of~$\tau$ while~$\phi$ specifies a property of the collection~$t$ of
successors.) Then, $(\phi,\eta)$ is \emph{(one-step) satisfiable} if
there exists a one-step model~$M$ such that
$M\models(\phi,\eta)$. The \emph{lax one-step satisfiability
problem} (of~$\Lambda$) is to decide whether a given one-step pair
$(\phi,\eta)$ is one-step satisfiable; the size of the input is
measured as $|\phi|+|\eta|$ with~$|\phi|$ and~$|\eta|$ defined as
above. The \emph{strict one-step satisfiability problem}
(of~$\Lambda$) is the same problem but with the input size defined
to be just~$|\phi|$. For purposes of space complexity, we thus
assume in the strict one-step satisfiability problem that~$\eta$ is
stored on an input tape that does not count towards space
consumption. It will be technically convenient to assume moreover
that in the strict one-step satisfiability problem,~$\eta$ is given
as a bit vector indicating which conjunctive clauses (mentioning
every variable occurring in $\phi$, in some fixed order) are
contained in the DNF~$\eta$; contrastingly, we assume that in the
lax one-step satisfiability problem,~$\eta$ is given as a list of
conjunctive clauses as indicated in Definition~\ref{def:one-step}
(hence need not have exponential size in all cases). For time
complexity, we assume that the input tape is random access (i.e.\
accessed via a dedicated address tape, in the model of random access
Turing machines~\cite{FischerRosenberg68}; this is necessary to
enable subexponential time bounds for the strict one-step
satisfiability problem since otherwise it takes exponential time
just to move the head to the last bits of the input). We say
that~$\Lambda$ has the \emph{(weak) one-step small model property}
if there is a polynomial~$p$ such that every one-step satisfiable
$(\phi,\eta)$ has a one-step model $(X,\tau,t)$ with
$|X|\le p(|\mathsf{Var}(\phi)|)$ (respectively $|X|\le p(|\phi|)$). (Note
that no bound is assumed on the representation of $t$.)
\end{definition}
\noindent
As indicated above, the intuition behind these definitions is that the
propositional variables in~$V$ are placeholders for argument formulae
of modalities; their valuation $\tau$ in a one-step model $(X,\tau,t)$
over~$V$ represents the extensions of these argument formulae in a
model; and the second component~$\eta$ of a one-step pair
$(\phi,\eta)$ captures the Boolean constraints on the argument
formulae that are globally satisfied in a given model. The component
$t\in TX$ of $(X,\tau,t)$ represents the structured collection of
successors of an implicit current state, so the modal component~$\phi$
of the one-step pair is evaluated on~$t$.
We will later construct
full models of modal formulae using one-step models according to this
intuition. One may think of a one-step model $(X,\tau,t)$ of a
one-step pair $(\phi, \eta)$ as a counterexample to the soundness of
$\eta/\neg\phi$ as a proof rule:~$\phi$ is satisfiable despite $\eta$
being globally valid in the model.
\begin{example}\label{expl:oss}
\begin{enumerate}[wide]
\item In the basic example of relational modal logic
($\Lambda=\{\Diamond\}$, $T=\mathcal{P}$, see Section~\ref{sec:colog}),
consider the one-step pair
$(\phi,\eta):=(\neg\Diamond a\land\neg\Diamond b\land \Diamond c,c\to
a\lor b)$. The propositional component~$\eta$ is represented as a
DNF $\eta=(c\land a\land b)\lor(\neg c\land\neg a\land\neg b)\lor\dots$.
A one-step model $(X,\tau,t)$ of $(\phi,\eta)$ (where $t\in\mathcal{P}(X)$)
would need to satisfy $\tau(c)\subseteq\tau(a)\cup\tau(b)$ to ensure
$\tau(\eta)=X$, as well as $t\cap\tau(c)\neq\emptyset$,
$t\cap \tau(a)=\emptyset$, and $t\cap \tau(b)=\emptyset$ to ensure
$t\in\tau(\phi)$. As this is clearly impossible, $(\phi,\eta)$ is
unsatisfiable. In fact, it is easy to see that the strict one-step
satisfiability problem of relational modal logic in this sense is in
{\upshape\textsc{NP}}\xspace: To check whether a one-step pair $(\psi,\chi)$ is satisfiable,
guess a conjunctive clause~$\rho$ in~$\chi$ for each positive modal
literal $\Diamond a$ in~$\psi$, and check that $\rho$ contains on
the one hand $a$, and on the other hand~$\neg b$ for every negative
modal literal $\neg\Diamond b$ in~$\psi$. (A brute-force version of
this check is sketched in code after the example.)
\item In Presburger modal logic, let
$\phi: = (\sharp(a)+\sharp(b)-\sharp(c)>0)$ (a conjunctive clause
consisting of a single modal literal). Then a one-step pair of the
form $(\phi, \eta)$ is one-step satisfiable
iff~$\eta$ is consistent with
$\rho:=(a\land b)\lor(a\land\neg c)\lor (b\land\neg c)$: For the `if'
direction, note that~$\eta$ is consistent with some disjunct~$\rho'$
of~$\rho$; we distinguish cases over~$\rho'$, and build a one-step
model $(X,\tau,\mu)$ of $(\phi,\eta)$. In each case, we take~$X$ to
consist of a single point~$1$; since~$\eta\land\rho'$ is consistent,
we can pick~$\tau$ such that~$\tau(\eta\land\rho')=X$ (and hence
$\tau(\eta)=X$). Moreover, we always take $\mu$ to be the multiset
given by $\mu(1)=1$. If $\rho'=(a\land\neg c)$, then
$\mu(\tau(a))+\mu(\tau(b))-\mu(\tau(c))=1+\mu(\tau(b))-0>0$, so
$\mu\in\tau(\phi)$, and we are done. The case $\rho'=(b\land\neg c)$
is analogous. Finally, if $\rho'=(a\land b)$, then
$\mu(\tau(a))+\mu(\tau(b))-\mu(\tau(c))=2-\mu(\tau(c))>0$ since
$\mu(\tau(c))\le\mu(X)=1$. For the
`only if' direction, assume that $\eta\land\rho$ is inconsistent,
so~$\eta$ propositionally entails $a\to c$, $b\to c$, and
$\neg(a\land b)$, and let $(X,\tau,\mu)$ be a one-step model such
that $\tau(\eta)=X$; we have to show that
$\mu\notin\tau(\phi)$. Indeed, since $\tau(\eta)=X$ we have
$\tau(a)\subseteq\tau(c)$, $\tau(b)\subseteq\tau(c)$, and
$\tau(a)\cap\tau(b)=\emptyset$, so
$\mu(\tau(a))+\mu(\tau(b))-\mu(\tau(c))\le 0$.
\item The reasoning in the previous example applies in the same way to
one-step pairs of the form $(w(a)+w(b)-w(c)>0,\eta)$ in
probabilistic modal logic.
\item The example formula given in Remark~\ref{rem:graded-rules}
translates into a one-step pair $(2\sharp(a)<1\land 2\sharp(a)>0,a)$
in Presburger modal logic whose unsatisfiability does depend on
multiplicities being integers; that is, the corresponding one-step
pair $(2 w(a)<1\land 2 w(a)>0,a)$ in probabilistic modal logic is
satisfiable.
\end{enumerate}
\end{example}
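\noindent The {\upshape\textsc{NP}}\xspace check described in Example~\ref{expl:oss}(1) can be
spelled out as follows (a Python sketch with brute force in place of
guessing; the representation of modal literals and clauses is ours):
\begin{verbatim}
# One-step satisfiability for relational modal logic: psi is a list of
# modal literals (sign, var), sign +1 for Dia, -1 for ~Dia; chi is a DNF
# given as a list of clauses, each a dict var -> bool.

def one_step_sat_relational(psi, chi):
    pos = [a for (s, a) in psi if s == 1]
    neg = [b for (s, b) in psi if s == -1]
    # for each Dia a, some clause of chi must make a true and all b false:
    return all(any(rho[a] and not any(rho[b] for b in neg) for rho in chi)
               for a in pos)

psi = [(-1, 'a'), (-1, 'b'), (1, 'c')]       # ~Dia a, ~Dia b, Dia c
chi = [{'a': x, 'b': y, 'c': z}              # DNF of c -> a \/ b
       for x in (True, False) for y in (True, False) for z in (True, False)
       if not z or x or y]
print(one_step_sat_relational(psi, chi))     # False, as argued above
\end{verbatim}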
\begin{rem}
For purposes of upper complexity bounds {\upshape\textsc{PSpace}}\xspace and above for the
strict one-step satisfiability problem, it does not matter whether
the propositional component~$\eta$ of a one-step pair $(\psi,\eta)$
is represented as a list or as a bit vector, as we have obvious
mutual conversions between these formats that can be implemented
using only polynomial space in~$|\mathsf{Var}(\psi)|$. For subexponential
time bounds, on the other hand, the distinction between the formats
does appear to matter, as the mentioned conversions do take
exponential time in~$|\mathsf{Var}(\psi)|$.
\end{rem}
\noindent
Note that most of a one-step pair $(\phi, \eta)$ is disregarded for
purposes of determining the input size of the \emph{strict} one-step
satisfiability problem, as~$\eta$ can be exponentially larger
than~$\phi$. Indeed, we have the following relationship between the
respective complexities of the lax one-step satisfiability problem and
the strict one-step satisfiability problem.
\begin{lemma}
The strict one-step satisfiability problem of~$\Lambda$ is in
{\mbox{\upshape\textsc{ExpTime}}}\xspace iff the lax one-step satisfiability problem of~$\Lambda$
can be solved on one-step pairs~$(\phi,\eta)$ in
time~$2^{\mathcal{O}((\log |\eta|+|\phi|)^k)}$ for some~$k$.
\end{lemma}
\noindent (Recent work on the coalgebraic $\mu$-calculus uses
essentially the second formulation~\cite{HausmannSchroder19}.)
\begin{proof}
\emph{`Only if'} is trivial, since the time bound allows converting~$\eta$
from the list representation assumed in the lax version of the
problem to the bit vector representation assumed in the strict
version. \emph{`If'}: Since we require that all variables
mentioned by~$\eta$ occur also in~$\phi$, and assume that~$\eta$ is
given in DNF, we have $|\eta|=2^{\mathcal{O}(|\phi|)}$, so
$\log|\eta|=\mathcal{O}(|\phi|)$, and hence
$2^{\mathcal{O}((\log |\eta|+|\phi|)^k)}=2^{\mathcal{O}(|\phi|^k)}$.
\end{proof}
\noindent We note that the one-step logic has an exponential-model
property (which in slightly disguised form has appeared first
as~\cite[Proposition 3.10]{SchroderPattinson06}):
\begin{lemma}\label{lem:one-step-models}
A one-step pair $(\phi,\eta)$ over $V$ is satisfiable iff it is
satisfiable by a one-step model of the form $(X,\tau,t)$ where $X$
is the set of valuations $V\to 2$ satisfying $\eta$ (where
$2=\{\top,\bot\}$ is the set of Booleans) and
$\tau(a)=\{\kappa\in X\mid \kappa(a)=\top\}$ for $a\in V$.
\end{lemma}
\begin{proof}
`If' is trivial; we prove `only if'. Let $M=(Y,\theta,s)$ be a
one-step model of $(\phi,\eta)$. Take $X$ and $\tau$ as in the claim;
it is clear that $\tau(\eta)=X$. Define a map $f\colon Y\to X$ by
$f(y)(a)=\top$ iff $y\in\theta(a)$ for $y\in Y$, $a\in V$. Then put
$t=Tf(s)\in TX$. By construction, we have $f^{-1}[\tau(a)]=\theta(a)$
for all $a\in V$. By naturality of predicate liftings and commutation
of preimage with Boolean operators, this implies that
$(Tf)^{-1}[\tau(\phi)]=\theta(\phi)$, so $s\in\theta(\phi)$ implies
$t=Tf(s)\in\tau(\phi)$; i.e.\ $(X,\tau,t)$ is a one-step model of
$(\phi,\eta)$.
\end{proof}
\noindent From the construction in the above lemma, we obtain the
following equivalent characterization of the one-step small model
property:
\begin{lemma}\label{lem:ospmp-log}
The logic~$\Lambda$ has the (weak) one-step small model property iff there
exists a polynomial~$p$ such that the following condition holds:
Whenever a one-step pair $(\phi,\eta)$ is one-step satisfiable, then
there exists $\eta'$ such that
\begin{enumerate}
\item $(\phi,\eta')$ is one-step satisfiable;
\item the list representation of~$\eta'$ according to
Definition~\ref{def:one-step} has size at most $p(|\mathsf{Var}(\phi)|)$
(respectively at most $p(|\phi|)$); and
\item $\eta'\vdash_{\mathit{PL}}\eta$.
\end{enumerate}
\end{lemma}
\begin{proof}
\emph{`Only if':} Take the conjunctive clauses of the DNF~$\eta'$ to
be the ones realized in a polynomial-sized one-step model
$(X,\tau,t)$ of $(\phi,\eta)$; that is,~$\eta'$ is the disjunction
of all conjunctive clauses~$\rho$ mentioning all variables occurring
in~$\phi$ such that $\tau(\rho)\neq\emptyset$.
\emph{`If':} Take~$X$ as in Lemma~\ref{lem:one-step-models} and note
that~$|X|$ is the number of conjunctive clauses in the
representation of~$\eta'$ as per Definition~\ref{def:one-step}.
\end{proof}
\noindent Under the one-step small model property, the two versions of
the one-step satisfiability problem coincide for our purposes, as
detailed next. Recall that a multivalued function $f$ is
\emph{NPMV}~\cite{book-long-selman:npmv} if the representation length
of values of~$f$ on~$x$ is polynomially bounded in that of~$x$ and
moreover the graph of~$f$ is in~{\upshape\textsc{NP}}\xspace; we generalize this notion
slightly to allow for size measures of~$x$ other than representation
length (such as the input size measure used in the strict one-step
satisfiability problem). Most reasonable complexity classes
containing~{\upshape\textsc{NP}}\xspace are closed under NPMV reductions; in particular this
holds for {\upshape\textsc{PSpace}}\xspace, {\mbox{\upshape\textsc{ExpTime}}}\xspace, and all levels of the polynomial
hierarchy.
\begin{lemma}\label{lem:oss}
Let $\Lambda$ have the weak one-step small model property
(Definition~\ref{def:one-step}). Then the strict one-step
satisfiability problem of~$\Lambda$ is NPMV-reducible to lax
one-step satisfiability. In particular, if lax one-step
satisfiability is in {\upshape\textsc{NP}}\xspace ({\upshape\textsc{PSpace}}\xspace/{\mbox{\upshape\textsc{ExpTime}}}\xspace), then strict one-step
satisfiability is in {\upshape\textsc{NP}}\xspace ({\upshape\textsc{PSpace}}\xspace/{\mbox{\upshape\textsc{ExpTime}}}\xspace).
\end{lemma}
\begin{proof}
By Lemma~\ref{lem:ospmp-log}, and in the notation of its statement,
the NPMV function that maps $(\phi,\eta)$ (with~$\eta$ in bit vector
representation) to all $(\phi,\eta')$ with $\eta'$ of (list)
representation size at most $p(|\phi|)$ and $\eta'\vdash_{\mathit{PL}}\eta$
reduces strict one-step satisfiability to lax one-step
satisfiability.
\end{proof}
\noindent Of the two versions of the one-step small model property,
the stronger version (polynomial in $|\mathsf{Var}(\phi)|$) turns out to be
prevalent in the examples. The weak version (polynomial in $|\phi|$)
is of interest mainly due to the following equivalent
characterization:
\begin{theorem}
Suppose that the lax one-step satisfiability problem of~$\Lambda$ is
in {\upshape\textsc{NP}}\xspace. Then the weak one-step small model property holds
for~$\Lambda$ iff the strict one-step satisfiability problem
of~$\Lambda$ is in {\upshape\textsc{NP}}\xspace.
\end{theorem}
\begin{proof}
`Only if' is immediate by Lemma~\ref{lem:oss}; we prove
`if'. Let~$\mathsf{M}$ be a non-deterministic (random access) Turing machine
that solves the strict one-step satisfiability problem in polynomial
time, and let the one-step pair $(\phi,\eta)$ be one-step
satisfiable. Then~$\mathsf{M}$ has a successful run on~$(\phi,\eta)$. Since
this run takes polynomial time in~$|\phi|$, it accesses only
polynomially many bits in the bit vector representation
of~$\eta$. We can therefore set all other bits to~$0$, obtaining a
polynomial-sized DNF~$\eta'$ such that $\eta'\vdash_{\mathit{PL}}\eta$ and
$(\phi,\eta')$ is still one-step satisfiable, as witnessed by
otherwise the same run of~$\mathsf{M}$. By Lemma~\ref{lem:ospmp-log}, this
proves the weak one-step small model property.
\end{proof}
\noindent Although not phrased in these terms, the complexity analysis
of Presburger modal logic (without global assumptions) by Demri and
Lugiez~\cite{DemriLugiez10} is based on showing that the strict
one-step satisfiability problem is in
{\upshape\textsc{PSpace}}\xspace~\cite{SchroderPattinson08d}, without using the one-step small
model property for Presburger modal logic -- in fact, our proof of the
latter is based on more recent results from integer programming: We
recall that the classical \emph{Carath\'eodory theorem}
(e.g.~\cite{Schrijver86}) may be phrased as saying that every system
of~$d$ linear equations that has a solution over the non-negative
reals has such a solution with at most~$d$ non-zero
components. Eisenbrand and Shmonin~\cite{EisenbrandShmonin06} prove an
analogue over the integers, which we correspondingly rephrase as
follows.
\begin{lemma}[Integer Carath\'eodory theorem
\cite{EisenbrandShmonin06}] \label{lem:eisenbrand} Every system
of~$d$ linear equations $\sum u_i x_i=v$ with integer
coefficients~$u_i$ of binary length at most~$s$ that has a solution
over the non-negative integers has such a solution with at most
polynomially many non-zero components in~$d$ and~$s$ (specifically,
$\mathcal{O}(sd\log d)$).
\end{lemma}
\noindent To deal with lax one-step satisfiability, we will moreover
need the well-known result by Papadimitriou that establishes a
polynomial bound on the size of components of solutions of systems of
integer linear equations:
\begin{lemma}[{\cite{Papadimitriou81}}]\label{lem:papadimitriou}
Every system of integer linear equations
in variables $x_1,\dots,x_n$ that has a solution over the
non-negative integers has such a solution
$(\hat x_1,\dots,\hat x_n)$ with the binary length of each
component~$\hat x_i$ bounded polynomially in the overall binary
representation size of the equation system.
\end{lemma}
\begin{corollary}\label{cor:int-constr}
Solvability of systems of Presburger constraints is in~{\upshape\textsc{NP}}\xspace.
\end{corollary}
\begin{proof}
It suffices to show that we can generalize
Lemma~\ref{lem:papadimitriou} to systems of Presburger constraints.
Indeed, we can reduce Presburger constraints to equations involving
additional variables. Specifically, we replace an inequality
$\sum u_i \cdot x_i > v$ with the equation
$\sum u_i \cdot x_i - y = v + 1$ (symmetrically, $\sum u_i \cdot x_i < v$
becomes $\sum u_i \cdot x_i + y = v - 1$) and a modular constraint
$\sum u_i \cdot x_i \eqmod{k} v$ with either
$\sum u_i \cdot x_i -k\cdot y = v$ or $\sum u_i \cdot x_i + k\cdot y = v$,
depending on whether the given solution satisfies
$\sum u_i \cdot x_i \ge v$ or $\sum u_i \cdot x_i \le v$; in every
such replacement, choose~$y$ as a fresh variable.
\end{proof}
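\noindent The rewriting in the preceding proof is mechanical; the
following Python sketch (with our data representation) enumerates the
finitely many equation systems arising from the two choices for each
modular constraint:
\begin{verbatim}
# Rewrite Presburger constraints (coeffs, rel, v), rel in
# {'=', '>', '<', ('mod', k)}, into systems of linear equations with one
# fresh non-negative slack variable per constraint.
from itertools import product

def to_equation_systems(constraints):
    options = []
    for i, (coeffs, rel, v) in enumerate(constraints):
        y = ('slack', i)                  # fresh variable, one per constraint
        if rel == '=':
            options.append([(dict(coeffs), v)])
        elif rel == '>':                  # sum u.x > v  iff  sum u.x - y = v+1
            options.append([({**coeffs, y: -1}, v + 1)])
        elif rel == '<':                  # sum u.x < v  iff  sum u.x + y = v-1
            options.append([({**coeffs, y: 1}, v - 1)])
        else:                             # ('mod', k): sum u.x -+ k y = v
            k = rel[1]
            options.append([({**coeffs, y: -k}, v), ({**coeffs, y: k}, v)])
    for choice in product(*options):      # one system per combination
        yield list(choice)

for system in to_equation_systems([({'x1': 2}, ('mod', 2), 1)]):
    print(system)   # 2 x1 - 2 y = 1 and 2 x1 + 2 y = 1; both unsolvable
\end{verbatim}
The original constraint system is then solvable over the non-negative
integers iff one of the generated equation systems is.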
\noindent From these observations, we obtain sufficient tractability
of strict one-step satisfiability in our key examples:
\begin{example}\label{expl:ossmp}
\begin{enumerate}[wide]
\item \label{expl:osmp-presburger}\emph{Presburger modal logic has
the one-step small model property}. To see this, let a one-step
pair $(\phi,\eta)$ over $V=\{a_1,\dots,a_n\}$ be satisfied by a
one-step model $M=(X,\tau,\mu)$, where by
Lemma~\ref{lem:one-step-models} we can assume that $X$ consists of
satisfying valuations of $\eta$, hence has at most exponential
size in~$|\phi|$. Put
$q_i=\mu(\tau(a_i))$.
Now all we need to know about~$\mu$ to guarantee that~$M$
satisfies $\phi$ is that the (non-negative integer) numbers
$y_x:=\mu(x)$, for $x\in X$, satisfy
\begin{equation*}\textstyle
\sum_{x\in\tau(a_i)}y_x=q_i\qquad\text{for $i=1,\dots,n$}.
\end{equation*}
We can see this as a system of~$n$ linear equations in the
$y_x$, which by the integer Carath\'eodory theorem
(Lemma~\ref{lem:eisenbrand}) has a non-negative integer solution
$(y'_x)_{x\in X}$ with only~$m$ nonzero components where~$m$ is
polynomially bounded in~$n$ (the coefficients of the~$y_x$ all
being~$1$), and hence in~$|\phi|$; from this solution, we
immediately obtain a one-step model $(X',\tau',\mu')$ of
$(\phi,\eta)$ with $m$ states. Specifically, take
$X'=\{x\in X\mid y'_x>0\}$, $\tau'(a_i)=\tau(a_i)\cap X'$ for
$i=1,\dots,n$, and $\mu'(x)=y'_x$ for~$x\in X'$.
Moreover, again using Lemma~\ref{lem:one-step-models}, lax
one-step satisfiability in Presburger modal logic reduces
straightforwardly to checking solvability of Presburger
constraints over the non-negative integers, which by
Corollary~\ref{cor:int-constr} can be done in~NP. Specifically,
given a one-step pair $(\phi,\eta)$, with~$\eta$ represented as
per Definition~\ref{def:one-step}, introduce a variable~$x_{\rho}$
for every conjunctive clause~$\rho$ of~$\eta$ (i.e.\ for every
valuation satisfying~$\eta$), and translate every constraint
$\sum_i u_i\cdot\sharp(a_i)\sim v$ in~$\phi$ into
\begin{equation*}
\sum_i u_i\cdot\sum_{\rho\vdash_{\mathit{PL}}\eta\land a_i}x_\rho\sim v.
\end{equation*}
(A small code sketch of this translation is given after the example.)
Thus, the lax one-step satisfiability problem of Presburger modal logic is in {\upshape\textsc{NP}}\xspace,
and by Lemma~\ref{lem:oss}, we obtain that \emph{strict one-step
satisfiability in Presburger modal logic is in {\upshape\textsc{NP}}\xspace}.
\item By a completely analogous argument as for Presburger modal
logic (using the standard Carath\'eodory theorem),
\emph{probabilistic modal logic with polynomial inequalities has
the one-step small model property}. Moreover, lax one-step
satisfiability reduces, analogously as in the previous item, to
solvability of systems of polynomial inequalities over the reals,
which can be checked in {\upshape\textsc{PSpace}}\xspace~\cite{Canny88} (this argument can
essentially be found in~\cite{FaginHalpernMegiddo90}). Again, we
obtain that \emph{strict one-step satisfiability in probabilistic
modal logic with polynomial inequalities is in {\upshape\textsc{PSpace}}\xspace}.
\end{enumerate}
\end{example}
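\noindent The translation used in
Example~\ref{expl:ossmp}.\ref{expl:osmp-presburger} for lax one-step
satisfiability can be made concrete as follows (a Python sketch with
our representation; the resulting constraints would then be handed to
the {\upshape\textsc{NP}}\xspace check of Corollary~\ref{cor:int-constr}):
\begin{verbatim}
# Translate a one-step pair (phi, eta) of Presburger modal logic into
# Presburger constraints over one variable x_rho per conjunctive clause
# rho of eta (each rho a dict var -> bool, i.e. a satisfying valuation).

def translate(phi, eta_clauses):
    """phi: list of (coeffs, rel, v), coeffs mapping variables a_i to u_i.
    Returns constraints over clause indices 0..len(eta_clauses)-1."""
    constraints = []
    for coeffs, rel, v in phi:
        c = {j: sum(u for a, u in coeffs.items() if rho[a])
             for j, rho in enumerate(eta_clauses)}
        constraints.append((c, rel, v))
    return constraints

# #(a) + #(b) - #(c) > 0 with two of the clauses of eta = (c -> a \/ b):
phi = [({'a': 1, 'b': 1, 'c': -1}, '>', 0)]
eta = [{'a': True, 'b': True, 'c': True},
       {'a': True, 'b': False, 'c': False}]
print(translate(phi, eta))   # [({0: 1, 1: 1}, '>', 0)]: solvable, x_0 = 1
\end{verbatim}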
\begin{rem}[Variants of the running examples]
The proof of the one-step small model property for Presburger modal
logic and probabilistic modal logic with polynomial inequalities
will in both cases work for any modal logic over integer- or
real-weighted systems, respectively, whose modalities depend only on
the measures of their arguments; call such modalities \emph{fully
explicit}. There are quite sensible operators that violate this
restriction; e.g.\ an operator $I(\phi,\psi)$ `$\phi$ is independent
of~$\psi$' would depend on the probabilities of $\phi$ and~$\psi$
but also on that of $\phi\land\psi$. Indeed, in this vein we easily
obtain a natural logic over probabilistic systems that fails to have
the one-step small model property: If we generalize the independence
modality~$I$ to several arguments and combine it with operators
$w(-)>0$ stating that their arguments have positive probability,
then every one-step model of the one-step pair
\begin{equation*}\textstyle
(I(a_1,\dots,a_n)\land\bigwedge_{i=1}^nw(a_i)>0 \land\bigwedge_{i=1}^nw(\neg a_i)>0,\top)
\end{equation*}
has at least $2^n$ states.
However, a completely analogous argument as in the proof of
Lemma~\ref{lem:one-step-models} shows that every predicate lifting
for functors such as $\mathcal{D}$, $\mathcal{S}$, or~$\mathcal{B}$ depends only on
the measures of Boolean combinations of its arguments, which can
equally well be expressed using the propositional operators of the
logic. That is, every coalgebraic modal logic over weighted systems
translates (possibly with exponential blowup) into one that has only
fully explicit modalities and hence has the one-step small model
property, as exemplified for the case of~$I$ in
Section~\ref{sec:prob}.
Incidentally, a similar example as the above produces a natural
example of a logic that does not have the one-step small model
property but whose lax one-step satisfiability problem is
nevertheless in~{\mbox{\upshape\textsc{ExpTime}}}\xspace. Consider a variant of probabilistic modal
logic (Section~\ref{sec:prob}) featuring \emph{linear} (rather than
polynomial) inequalities over probabilities $w(\phi)$, and
additionally \emph{fixed-probability conditional independence}
operators $I_{p_1,\dots,p_n}$ of arity~$n+1$ for $n\ge 1$ and
$p_1,\dots,p_n\in\mathbb{Q}\cap[0,1]$. The application of
$I_{p_1,\dots,p_n}$ to formulae $\phi_1,\dots,\phi_n,\psi$ is
written $I_{p_1,\dots,p_n}(\phi_1,\dots,\phi_n\mid \psi)$, and read
`$\phi_1,\dots,\phi_n$ are conditionally independent given~$\psi$,
and each~$\phi_i$ has conditional probability $p_i$ given~$\psi$'.
A one-step modal literal
$I_{p_1,\dots,p_n}(a_1,\dots,a_n\mid b)$ translates, by definition, into the linear equalities
\begin{equation*}\textstyle
w(b\land\bigwedge_{i\in I}a_i)
-(\prod_{i\in I}p_i)\, w(b)=0\qquad\text{for all $I\subseteq\{1,\dots,n\}$.}
\end{equation*}
Thus, a given one-step clause $\psi$ generates, in the same way as
previously, a system of linear inequalities, now of exponential size
in~$|\psi|$. Since solvability of systems of linear inequalities can,
by standard results in linear programming~\cite{Schrijver86}, be
checked in polynomial time, we obtain that the strict one-step
satisfiability problem is in {\mbox{\upshape\textsc{ExpTime}}}\xspace as claimed. On the other hand,
the one-step small model property fails for the same reasons as
for the~$I$ operator described above.
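To make the last step concrete, the following purely illustrative
Python sketch (our own toy rendition, not part of the formal
development; it assumes SciPy's \texttt{linprog}, represents
distributions explicitly over all $2^{n+1}$ valuations of
$a_1,\dots,a_n,b$, and hence is itself exponential, serving only to
exhibit the generated system of equalities) checks feasibility for a
single literal $I_{p_1,\dots,p_n}(a_1,\dots,a_n\mid b)$:
\begin{verbatim}
# Illustrative feasibility check for I_{p_1,...,p_n}(a_1,...,a_n | b);
# variables are the weights of the 2^(n+1) valuations of a_1,...,a_n,b.
from itertools import product, combinations
from math import prod
from scipy.optimize import linprog

def independence_literal_satisfiable(ps):
    n = len(ps)
    vals = list(product([0, 1], repeat=n + 1))  # valuation = (a_1,...,a_n,b)
    A_eq = [[1.0] * len(vals)]                  # normalization: weights sum to 1
    b_eq = [1.0]
    for k in range(n + 1):                      # all I subseteq {1,...,n}
        for I in combinations(range(n), k):
            c = prod(ps[i] for i in I)
            # encode  w(/\_{i in I} a_i /\ b) - c * w(b) = 0
            A_eq.append([(all(v[i] for i in I) - c) if v[-1] else 0.0
                         for v in vals])
            b_eq.append(0.0)
    res = linprog([0.0] * len(vals), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.status == 0                      # status 0: feasible point found

print(independence_literal_satisfiable([0.5, 0.25]))  # expected: True
\end{verbatim}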
\end{rem}
\noindent By previous results in coalgebraic
logic~\cite{SchroderPattinson08d}, the observations in
Example~\ref{expl:ossmp}.\ref{expl:osmp-presburger} imply decidability
in {\upshape\textsc{PSpace}}\xspace of the respective \emph{plain} satisfiability problems,
reproducing a previous result by Demri and Lugiez~\cite{DemriLugiez10}
for the case of Presburger modal logic; we show in
Section~\ref{sec:type-elim} that the same observations yield an
optimal upper bound {\mbox{\upshape\textsc{ExpTime}}}\xspace for satisfiability under global
assumptions.
\begin{rem}[Comparison with tractable modal rule sets]\label{rem:rules}
Most previous generic complexity results in coalgebraic logic have
relied on complete sets of modal tableau rules that are sufficiently
tractable for purposes of the respective complexity bound,
e.g.~\cite{SchroderPattinson09a,SchroderEA09,GoreEA10a}. We briefly
discuss how these assumptions imply the ones used in the present
paper.
The rules in question (\emph{one-step tableau rules}) are of the
shape $\phi/\rho$ where $\phi$ is a modal conjunctive clause over
$V$ and $\rho\in\mathsf{Prop}(V)$, subject to the same syntactic
restrictions as one-step pairs, i.e.~$\phi$ must be clean and~$\rho$
can only mention variables occurring in~$\phi$. Such rules form
part of a tableau system that includes also the standard
propositional rules. As usual in tableau systems, algorithms for
satisfiability checking based on the tableau rules proceed roughly
according to the principle `in order to establish that~$\psi$ is
satisfiable, show that the conclusions of all rule matches to~$\psi$
are satisfiable' (this is dual to validity checking via formal proof
rules, where to show that~$\psi$ is valid one needs to find some
proof rule whose conclusion matches~$\psi$ and whose premiss is
valid). More precisely, the (one-step) soundness and completeness
requirement on a rule set~$\mathcal{R}$ demands that a one-step
pair~$(\psi,\eta)$ is satisfiable iff for every rule $\phi/\rho$
in~$\mathcal{R}$ and every injective variable renaming~$\sigma$ such that
$\psi\vdash_{\mathit{PL}}\phi\sigma$ (see Definition~\ref{def:prop} for the
notation~$\vdash_{\mathit{PL}}$), the propositional formula
$\eta\land\rho\sigma$ is satisfiable. Since~$\psi$ and~$\phi$ are
modal conjunctive clauses (and~$\psi$, being clean, cannot contain
clashing modal literals), $\psi\vdash_{\mathit{PL}}\phi\sigma$ means that
$\psi$ contains every modal literal of $\phi\sigma$.
The exact requirements on tractability of a rule set vary with the
intended complexity bound for the full logic. In connection with
{\mbox{\upshape\textsc{ExpTime}}}\xspace bounds, one uses \emph{exponential tractability} of the
rule set (e.g.~\cite{CirsteaEA11}). This condition requires that
rules have an encoding as strings such that every rule $\phi/\rho$
in~$\mathcal{R}$ that \emph{matches} a given modal conjunctive
clause~$\psi$ over~$V$ \emph{under} a given injective
renaming~$\sigma$, i.e.\ $\psi\vdash_{\mathit{PL}}\phi\sigma$, has an encoding
of polynomial size in~$|\psi|$; moreover, given a modal conjunctive
clause~$\psi$ over~$V$, it can be decided in exponential time
in~$|\psi|$ (i) whether an encoded rule $\phi/\rho$ matches~$\psi$
under a given renaming~$\sigma$, and (ii) whether a given
conjunctive clause~$\chi$ over~$\mathsf{Var}(\psi)$ propositionally entails
the conclusion~$\rho\sigma$ of the instance $\phi\sigma/\rho\sigma$ of
an encoded rule $\phi/\rho$ under a given renaming~$\sigma$.
Now suppose that a set~$\mathcal{R}$ of modal tableau rules satisfies all
these requirements, i.e.\ is one-step sound and complete for the
given logic and exponentially tractable, with polynomial bound~$p$
on the size of rule codes. Then one sees easily that the strict
one-step satisfiability problem is in {\mbox{\upshape\textsc{ExpTime}}}\xspace: Given a one-step
pair $(\psi,\eta)$ to be checked for one-step satisfiability, we can
go through all rules $\phi/\rho$ represented by codes of length at
most $p(|\psi|)$ and all injective renamings~$\sigma$ of the
variables of~$\phi$ into the variables of~$\psi$ such that
$\phi/\rho$ matches~$\psi$ under~$\sigma$, and then for each such
match go through all conjunctive clauses~$\chi$ over $\mathsf{Var}(\psi)$
that propositionally entail~$\rho\sigma$, checking for each
such~$\chi$ that $\eta\land\chi$ is propositionally
satisfiable. Both loops go through exponentially many iterations,
and all computations involved take at most exponential time.
Summing up, complexity bounds obtained by our current semantic
approach subsume earlier tableau-based ones.
\end{rem}
\section{Type Elimination}\label{sec:type-elim}
\noindent We now describe a type elimination algorithm that realizes
an {\mbox{\upshape\textsc{ExpTime}}}\xspace upper bound for reasoning with global assumptions in
coalgebraic logics. Like all type elimination algorithms, it is not
suited for practical use, as it begins by constructing the full
exponential-sized set of types (in the initialization phase of the
computation of a greatest fixpoint). We therefore refine the algorithm
to a global caching algorithm in Section~\ref{sec:caching}.
As usual, we rely on defining a scope of relevant formulae:
\begin{definition}
We define \emph{normalized negation} $\nneg$ by taking
$\nneg\phi=\phi'$ if a formula $\phi$ has the form $\phi=\neg\phi'$,
and $\nneg\phi=\neg\phi$ otherwise. A set $\Sigma$ of formulae is
\emph{closed} if $\Sigma$ is closed under subformulae and normalized
negation. The \emph{closure} of a set~$\Gamma$ of formulae is the
least closed set containing~$\Gamma$.
\end{definition}
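For concreteness, the following illustrative Python sketch computes
the closure; the tuple-based representation of formulae is our own
assumption, made only for purposes of exposition:
\begin{verbatim}
# Toy representation: formulae as nested tuples, e.g. ('bot',),
# ('not', f), ('and', f, g), ('mod', name, (f1, ..., fn)).
def nneg(f):
    """Normalized negation: strip one leading negation, else negate."""
    return f[1] if f[0] == 'not' else ('not', f)

def subformulas(f):
    """Direct subformulae of f."""
    return f[2] if f[0] == 'mod' else f[1:]

def closure(formulas):
    """Least set containing `formulas` that is closed under
    subformulae and normalized negation."""
    todo, sigma = list(formulas), set()
    while todo:
        f = todo.pop()
        if f not in sigma:
            sigma.add(f)
            todo.append(nneg(f))
            todo.extend(subformulas(f))
    return sigma
\end{verbatim}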
\noindent We \emph{fix from now on a global assumption $\psi$ and a
formula $\phi_0$ to be checked for $\psi$-satisfiability}. We denote
the closure of $\{\psi,\phi_0\}$ in the above sense by $\Sigma$. Next,
we approximate the $\psi$-satisfiable subsets of $\Sigma$ from above
via a notion of type that takes into account only propositional
reasoning and the global assumption~$\psi$:
\begin{definition}\label{def:type}
A \emph{$\psi$-type} is a subset $\Gamma\subseteq\Sigma$ such that
\begin{itemize}
\item $\psi\in \Gamma\not\owns\bot$;
\item whenever $\neg \phi\in\Sigma$, then $\neg \phi\in \Gamma$ iff
$\phi\notin \Gamma$;
\item whenever $\phi\land\chi\in\Sigma$, then
$\phi \land \chi \in \Gamma$ iff $\phi,\chi\in \Gamma$.
\end{itemize}
\end{definition}
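In the illustrative Python encoding introduced above (again an
assumption made purely for exposition), the defining conditions of a
$\psi$-type can be checked directly, and the set of all $\psi$-types
enumerated at toy scale:
\begin{verbatim}
# Sketch: enumerate all psi-types over sigma = closure({psi, phi0}).
from itertools import chain, combinations

def is_type(gamma, sigma, psi):
    """The defining conditions of a psi-type."""
    if psi not in gamma or ('bot',) in gamma:
        return False
    for f in sigma:
        if f[0] == 'not' and (f in gamma) == (f[1] in gamma):
            return False   # negations must be decided consistently
        if f[0] == 'and' and (f in gamma) != (f[1] in gamma and f[2] in gamma):
            return False   # conjunction must behave conjunctively
    return True

def all_types(sigma, psi):
    subsets = chain.from_iterable(
        combinations(sigma, r) for r in range(len(sigma) + 1))
    return [frozenset(g) for g in subsets if is_type(frozenset(g), sigma, psi)]
\end{verbatim}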
\noindent
The design of the algorithm relies on one-step satisfiability as an
abstraction: We denote the set of all $\psi$-types by $\types{\psi}$. For a
formula $\phi\in\Sigma$, we put
\begin{equation*}
\hat \phi=\{\Gamma\in \types{\psi}\mid \phi\in \Gamma\},
\end{equation*}
intending to construct a model on a suitable subset
$S\subseteq\types{\psi}$ in such a way that $\hat\phi\cap S$ becomes
the extension of~$\phi$. We take $V_\Sigma$ to be the set of
propositional variables $a_{\heartsuit\rho}$ for all modal atoms
$\heartsuit\rho\in\Sigma$; we then define a substitution $\sigma_\Sigma$ by
$\sigma_\Sigma(a_{\heartsuit\rho})=\rho$ for
$a_{\heartsuit\rho}\in V_\Sigma$. For $S\subseteq \types{\psi}$ and
$\Gamma\in S$, we construct a one-step pair
\begin{equation*}
(\phi_\Gamma,\eta_S)
\end{equation*}
over $V_\Sigma$ by taking $\phi_\Gamma$ to be the conjunction of all
modal literals $\epsilon\heartsuit a_{\heartsuit\rho}$ over $V_\Sigma$
such that $ \epsilon\heartsuit\rho\in \Gamma$ (note that indexing the
propositional variables~$a_{\heartsuit\rho}$ over $\heartsuit\rho$ instead
of just~$\rho$ ensures that~$\phi_\Gamma$ is clean as required), and
$\eta_S$ to be the DNF (for definiteness, in bit vector representation
as per Definition~\ref{def:one-step}) containing for each
$\Delta\in S$ a conjunctive clause
\begin{equation*}
\bigwedge_{\heartsuit\rho\in\Sigma\mid\rho\in\Delta}a_{\heartsuit\rho}\land
\bigwedge_{\heartsuit\rho\in\Sigma\mid\nneg\rho\in\Delta}\neg a_{\heartsuit\rho}.
\end{equation*}
That is,~$\phi_\Gamma$ arises from~$\Gamma$ by abstracting the
arguments~$\rho$ of modalized formulae~$\heartsuit\rho\in\Gamma$ as
propositional variables~$a_{\heartsuit\rho}$, and~$\eta_S$ captures the
propositional dependencies that will hold in~$S$ among these arguments
if the construction works as intended. We define a functional
\begin{equation}\label{eq:elim-functional}
\begin{array}{lcll}
\mathcal{E}\colon &\mathcal{P}(\types{\psi})&\to & \mathcal{P}(\types{\psi})\\[0.3ex]
& S & \mapsto & \{\Gamma \in S \mid (\phi_\Gamma,\eta_S)\text{ is one-step satisfiable}\},
\end{array}
\end{equation}
whose greatest fixpoint $\nu\mathcal{E}$ will turn out to contain precisely
the satisfiable types. Existence of $\nu\mathcal{E}$ is guaranteed by the
Knaster-Tarski fixpoint theorem and the following lemma:
\begin{lemma}
The functional $\mathcal{E}$ is monotone w.r.t.\ set inclusion.
\end{lemma}
\begin{proof}
For $S\subseteq S'$, the DNF $\eta_{S'}$ is weaker
than~$\eta_S$, as it contains more disjuncts.
\end{proof}
\noindent By Kleene's fixpoint theorem, we can compute $\nu\mathcal{E}$ by
just iterating $\mathcal{E}$:
\begin{alg}\label{alg:type-elim}
(Decide by type elimination whether $\phi_0$ is satisfiable over $\psi$)
\begin{enumerate}
\item Set $S:=\types{\psi}$.
\item Compute $S'=\mathcal{E}(S)$; if $S'\neq S$ then put $S:=S'$ and repeat.
\item Return `yes' if $\phi_0\in \Gamma$ for some $\Gamma\in S$, and `no'
otherwise.
\end{enumerate}
\end{alg}
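A schematic rendering of the algorithm in the illustrative Python
style used above; the one-step satisfiability check for
$(\phi_\Gamma,\eta_S)$ is abstracted into an assumed logic-specific
oracle \texttt{one\_step\_sat}, whose implementation we do not spell
out here:
\begin{verbatim}
# Sketch of type elimination; `types` as computed by all_types above,
# one_step_sat(gamma, S) is an assumed oracle for one-step
# satisfiability of the pair (phi_gamma, eta_S).
def type_elimination(phi0, types, one_step_sat):
    S = set(types)                        # start from all psi-types
    while True:
        S2 = {g for g in S if one_step_sat(g, S)}   # compute E(S)
        if S2 == S:
            break                         # greatest fixpoint nu.E reached
        S = S2
    return any(phi0 in g for g in S)      # is phi0 psi-satisfiable?
\end{verbatim}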
\noindent The run time analysis is straightforward:
\begin{lemma}\label{lem:type-elim-time}
\noindent If the strict one-step satisfiability problem of~$\Lambda$
is in {\mbox{\upshape\textsc{ExpTime}}}\xspace, then Algorithm~\ref{alg:type-elim} has at most
exponential run time.
\end{lemma}
\begin{proof}
Since~$\types{\psi}$ has at most exponential size, the algorithm
runs through at most exponentially many iterations. In a single
iteration, we have to compute $\mathcal{E}(S)$, checking for each of the at
most exponentially many $\Gamma\in S$ whether~$(\phi_\Gamma,\eta_S)$
is one-step satisfiable. The assumption of the lemma guarantees that
each one-step satisfiability check takes only exponential time, as
$\phi_\Gamma$ is of linear size.
\end{proof}
\noindent It remains to prove correctness of the algorithm; that is,
we show that, as announced above, $\nu\mathcal{E}$ consists precisely of the
$\psi$-satisfiable types. We split this claim into two inclusions,
corresponding to soundness and completeness, respectively:
\begin{lemma}\label{lem:realization}
The set of $\psi$-satisfiable types is a postfixpoint of $\mathcal{E}$.
\end{lemma}
\noindent (Since $\nu\mathcal{E}$ is also the greatest postfixpoint of~$\mathcal{E}$,
this implies that $\nu\mathcal{E}$ contains all $\psi$-satisfiable types. This
means that Algorithm~\ref{alg:type-elim} is \emph{sound}, i.e.\
answers `yes' on $\psi$-satisfiable formulae.)
\begin{proof}
Let $R$ be the set of $\psi$-satisfiable types; we have to show that
$R \subseteq \mathcal{E}(R)$. So let $\Gamma\in R$; then we have a state~$x$
in a $\psi$-model $C = (X, \gamma)$ such that $x\models_C\Gamma$. By
definition of~$\mathcal{E}$, we have to show that the one-step pair
$(\phi_\Gamma, \eta_R)$ is one-step satisfiable. We claim that the
one-step model $M=(X,\tau,\gamma(x))$, where~$\tau$ is defined by
\begin{equation*}
\tau(a_{\heartsuit\rho}):=\Sem{\sigma_\Sigma(a_{\heartsuit\rho})}_C=\Sem{\rho}_C
\end{equation*}
for $a_{\heartsuit\rho}\in V_\Sigma$, satisfies
$(\phi_\Gamma,\eta_R)$. For the propositional part~$\eta_R$, let
$y\in X$; we have to show $y\in\tau(\eta_R)$. Put
$\Delta=\{\rho\in\Sigma\mid y\models\rho\}$. Then $\Delta\in R$, so
that $\eta_R$ contains the conjunctive clause
\begin{equation*}
\theta:=\bigwedge_{\heartsuit\rho\in\Sigma\mid\rho\in\Delta}a_{\heartsuit\rho}\land\bigwedge_{\heartsuit\rho\in\Sigma\mid\rho\notin\Delta}\neg
a_{\heartsuit\rho}.
\end{equation*}
By the definitions of~$\tau$ and~$\theta$, we have
$y\in\tau(\theta)\subseteq\tau(\eta_R)$, as required (e.g.~if
$\heartsuit\rho\in\Sigma$ and $\rho\in\Delta$, then $y\models\rho$,
i.e.~$y\in\Sem{\rho}_C=\tau(a_{\heartsuit\rho})$; the negative case is
similar). Finally, for $\phi_\Gamma$, let $\heartsuit\rho\in\Sigma$; we
have to show that $\heartsuit\rho\in\Gamma$ iff
$\gamma(x)\in\Sem{\heartsuit}(\tau(a_{\heartsuit\rho}))=\Sem{\heartsuit}(\Sem{\rho}_C)$. But
the latter just means that $x\models\heartsuit\rho$, so the equivalence
holds because~$x\models\Gamma$.
\end{proof}
\noindent For the converse inclusion, i.e.~completeness, we show the
following (combining the usual existence and truth lemmas):
\begin{lemma}\label{lem:ex-truth}
Let $S$ be a postfixpoint of $\mathcal{E}$. Then there exists a
$T$-coalgebra $C=(S,\gamma)$ such that for each $\rho\in\Sigma$,
$\Sem{\rho}_C=\hat\rho\cap S$.
\end{lemma}
\begin{proof}
To construct the transition structure $\gamma$, let $\Gamma\in
S$. Since~$S$ is a postfixpoint of~$\mathcal{E}$, the one-step pair
$(\phi_\Gamma,\eta_S)$ is satisfiable; let $(X,\tau,t)$ be a one-step
model of $(\phi_\Gamma,\eta_S)$.
By
construction of~$\eta_S$, we then have a map $f:X\to S$ such that for all
$\heartsuit\rho\in\Sigma$,
\begin{equation}\label{eq:def-f}
x\in\tau(a_{\heartsuit\rho})\quad\text{iff}\quad\rho\in f(x)\quad\text{iff}\quad
f(x)\in\hat\rho.
\end{equation}
We put $\gamma(\Gamma)=Tf(t)\in TS$. For the $T$-coalgebra
$C=(S,\gamma)$ thus obtained, we show the claim
$\Sem{\rho}_C=\hat \rho\cap S$ by induction over $\rho\in\Sigma$.
The propositional cases are by the defining properties of types
(Definition~\ref{def:type}). For the modal case, we have (for
$\Gamma$ and associated data $f,t$ as above)
\begin{align*}
\Gamma\models\heartsuit\rho & \iff \gamma(\Gamma)=Tf(t)\in\Sem{\heartsuit}_S(\Sem{\rho}_C)\\
& \iff t\in\Sem{\heartsuit}_X(f^{-1}[\Sem{\rho}_C]) &&\by{naturality}\\
& \qquad\qquad = \Sem{\heartsuit}_X(f^{-1}[\hat\rho\cap S]) && \by{induction}\\
& \qquad\qquad = \Sem{\heartsuit}_X(\tau(a_{\heartsuit\rho})) &&\by{\ref{eq:def-f}}\\
& \iff \heartsuit\rho\in\Gamma && \by{definition of~$\phi_\Gamma$} \qedhere
\end{align*}
\end{proof}
\noindent A $T$-coalgebra as in Lemma~\ref{lem:ex-truth} is clearly a
$\psi$-model, so the above lemma implies that every postfixpoint
of~$\mathcal{E}$, including~$\nu\mathcal{E}$, consists only of $\psi$-satisfiable
types. That is, Algorithm~\ref{alg:type-elim} is indeed complete,
i.e.\ answers `yes' \emph{only} on $\psi$-satisfiable formulae. This
completes the correctness proof of Algorithm~\ref{alg:type-elim}; in
combination with the run time analysis
(Lemma~\ref{lem:type-elim-time}) we thus obtain
\begin{theorem}[Complexity of satisfiability under global assumptions]\label{thm:exptime}
If the strict one-step satisfiability problem of the logic~$\Lambda$
is in {\mbox{\upshape\textsc{ExpTime}}}\xspace, then satisfiability under global assumptions
in~$\Lambda$ is in {\mbox{\upshape\textsc{ExpTime}}}\xspace.
\end{theorem}
\begin{example}
By the results of the previous section (Example~\ref{expl:ossmp})
and by inheriting lower bounds from reasoning with global
assumptions in $K$~\cite{FischerLadner79}, we obtain that reasoning
with global assumptions in Presburger modal logic and in
probabilistic modal logic with polynomial inequalities is
{\mbox{\upshape\textsc{ExpTime}}}\xspace-complete. We note additionally that the same holds also for
our separating example, probabilistic modal logic with linear
inequalities and fixed-probability independence operators (which
does not have the one-step small model property but whose strict
one-step satisfiability problem is nevertheless in {\mbox{\upshape\textsc{ExpTime}}}\xspace).
\end{example}
\section{Global Caching}
\label{sec:caching}
\noindent We now develop the type elimination algorithm from the
preceding section into a global caching algorithm. Roughly speaking,
global caching algorithms perform \emph{expansion} steps, in which new
nodes to be explored are added to the tableau, and \emph{propagation}
steps, in which the satisfiability (or unsatisfiability) is determined
for those nodes for which the tableau already contains enough
information to allow this. The practical efficiency of global caching
algorithms is based on the fact that the algorithm can stop as soon as
the root node is marked satisfiable or unsatisfiable in a propagation
step, thus potentially avoiding generation of all (exponentially many)
possible nodes. Existing global caching algorithms work with systems
of tableau rules (satisfiability is guaranteed if every applicable
rule has at least one satisfiable conclusion)~\cite{GoreEA10a}. The
fact that we work with a semantics-based decision procedure impacts on
the design of the algorithm in two ways:
\begin{itemize}
\item In a tableaux setting, node generation in the expansion steps is
driven by the tableau rules, and a global caching algorithm
generates modal successor nodes by applying tableau rules. In
principle, however, modal successor nodes can be generated at will,
with the rules just pointing to relevant
nodes.
In our setting, we
make the relevant nodes explicit using the concept of
\emph{children}.
\item The rules govern the propagation of satisfiability and
unsatisfiability among the nodes. Semantic propagation of
satisfiability is straightforward, but propagation of
unsatisfiability again needs the concept of children: a (modal) node
can only be marked as unsatisfiable once all its children have been
generated (and the children not marked unsatisfiable do not suffice to witness one-step satisfiability).
\end{itemize}
\noindent We continue to work with a closed set $\Sigma$ as in
Section~\ref{sec:type-elim} (generated by the global assumption $\psi$
and the target formula $\phi_0$) but replace types with
\emph{(tableau) sequents}, i.e.\ arbitrary subsets
$\Gamma,\Theta\subseteq\Sigma$, understood conjunctively; in
particular, a sequent need not determine the truth of every formula
in~$\Sigma$. We write $\mathsf{Seqs}=\mathcal{P}\Sigma$, and occasionally refer to
sequents as \emph{nodes} in allusion to an implicit graphical
structure (made more explicit in Section~\ref{sec:concrete-alg}). A
\emph{state} is a sequent consisting of modal literals only (recall
that we regard propositional atoms as nullary modalities; so if
propositional atoms in this sense are part of the logic, then states
may also contain propositional atoms or their negations). We denote
the set of states by $\mathsf{States}$.
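In the illustrative Python encoding used earlier (our own toy
representation), the distinction between states and other sequents
reads:
\begin{verbatim}
# Sketch: a state is a sequent (a frozenset of formulae) that
# consists of modal literals only.
def is_modal_literal(f):
    return f[0] == 'mod' or (f[0] == 'not' and f[1][0] == 'mod')

def is_state(gamma):
    return all(is_modal_literal(f) for f in gamma)
\end{verbatim}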
To convert sequents into states, we employ the usual
\emph{propositional rules}
\begin{equation*}
\infrule{\Gamma,\phi_1\land \phi_2}{\Gamma,\phi_1,\phi_2}
\quad
\infrule{\Gamma,\neg(\phi_1\land \phi_2)}{\Gamma,\neg \phi_1\mid\Gamma,\neg \phi_2}
\quad
\infrule{\Gamma,\neg\neg \phi}{\Gamma,\phi}
\quad
\infrule{\Gamma,\bot}{}
\end{equation*}
where $\mid$ separates alternative conclusions (and the last rule has
no conclusion).
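In the same illustrative style, the propositional rules can be
rendered as a function that returns, for each rule application to a
sequent, the list of its alternative conclusions (so the rule
for~$\bot$ yields the empty list):
\begin{verbatim}
# Sketch: all propositional rule applications to a sequent `gamma`
# (a frozenset of formulae in the toy tuple representation).
def prop_rule_applications(gamma):
    apps = []
    for f in gamma:
        rest = gamma - {f}
        if f == ('bot',):
            apps.append([])                       # no conclusion
        elif f[0] == 'and':
            apps.append([rest | {f[1], f[2]}])
        elif f[0] == 'not' and f[1][0] == 'and':
            g = f[1]
            apps.append([rest | {('not', g[1])},  # alternative conclusions
                         rest | {('not', g[2])}])
        elif f[0] == 'not' and f[1][0] == 'not':
            apps.append([rest | {f[1][1]}])
    return apps
\end{verbatim}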
\begin{rem}
Completeness of the global caching algorithm will imply that the
usual clash rule $\Gamma,\phi,\neg\phi/\;$ (a rule with no
conclusions, like the rule for~$\bot$ above) is admissible. Notice
that in logics featuring propositional atoms~$p$, i.e.\ nullary
modalities, the atomic clash rule $\Gamma,p,\neg p/$ would be
considered a modal rule.
\end{rem}
\noindent As indicated above, the expansion steps of the algorithm
will be driven by the following child relation on tableau sequents:
\begin{definition}
The \emph{children} of a state $\Gamma$ are the sequents consisting
of~$\psi$ and, for each modal literal
$\epsilon\heartsuit\phi\in\Gamma$, a choice of either $\phi$ or
$\nneg \phi$. The \emph{children} of a non-state sequent are its
conclusions under the propositional rules. In both cases, we write
$\children{\Gamma}$ for the set of children of~$\Gamma$.
\end{definition}
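A sketch of the child relation in the same toy representation,
relying on \texttt{nneg}, \texttt{is\_state} and
\texttt{prop\_rule\_applications} from the earlier sketches and, for
simplicity, assuming unary modalities:
\begin{verbatim}
# Sketch of the child relation; psi is the global assumption.
from itertools import product

def children(gamma, psi):
    if is_state(gamma):
        # one child per choice of rho or ~rho across the modal literals
        choices = []
        for lit in gamma:
            args = lit[1][2] if lit[0] == 'not' else lit[2]
            (arg,) = args                 # unary modalities assumed
            choices.append((arg, nneg(arg)))
        return {frozenset({psi}) | frozenset(pick)
                for pick in product(*choices)}
    # non-state sequents: conclusions of the propositional rules
    return {c for app in prop_rule_applications(gamma) for c in app}
\end{verbatim}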
\noindent For purposes of the global caching algorithm, we modify the
functional~$\mathcal{E}$ defined in Section~\ref{sec:type-elim} to work also
with sequents (rather than only types) and to depend on a set
$G\subseteq\mathsf{Seqs}$ of sequents already generated. To this end, we
introduce for each state~$\Gamma\in G$ a set~$V_\Gamma$ containing
a propositional variable $a_{\epsilon\heartsuit\rho}$ for each modal
literal $\epsilon\heartsuit\rho\in\Gamma$, as well as a substitution
$\sigma_\Gamma$ on~$V_\Gamma$ defined by
$\sigma_\Gamma(a_{\epsilon\heartsuit\rho})=\rho$. Given
$S\subseteq G$, we then define a one-step pair
$(\phi_\Gamma,\eta_S)$ over~$V_\Gamma$ similarly as in
Section~\ref{sec:type-elim}: We take~$\phi_\Gamma$ to be the
conjunction of all modal literals
$\epsilon\heartsuit a_{\epsilon\heartsuit\rho}$ over $V_\Gamma$ such that
$ \epsilon\heartsuit\sigma_\Gamma(a_{\epsilon\heartsuit\rho})=
\epsilon\heartsuit\rho\in \Gamma$ (we need to
index~$a_{\epsilon\heartsuit\rho}$ over $\epsilon\heartsuit\rho$ instead of
just $\heartsuit\rho$ to ensure that $\phi_\Gamma$ is clean, since
sequents, unlike types, may contain clashes), and $\eta_S$ to be the
DNF containing for each $\Delta\in S$ a conjunctive clause
\begin{equation*}
\bigwedge_{\epsilon\heartsuit\rho\in\Gamma\mid \rho\in\Delta}a_{\epsilon\heartsuit\rho}\land
\bigwedge_{\epsilon\heartsuit\rho\in\Gamma\mid\nneg\rho\in\Delta}\neg a_{\epsilon\heartsuit\rho}.
\end{equation*}
We now define a functional
\begin{equation*}
\mathcal{E}_{G}\colon \PowG\to\PowG
\end{equation*}
by taking $\mathcal{E}_{G}(S)$ to consist of
\begin{itemize}
\item all non-state sequents $\Gamma\in G\setminus\mathsf{States}$ such
that $S\cap\children{\Gamma} \neq\emptyset$ (i.e.\ some
propositional rule that applies to $\Gamma$ has a conclusion that is
contained in~$S$), and
\item all states $\Gamma\in G\cap\mathsf{States}$ such that the one-step pair
$(\phi_\Gamma,\eta_{S\cap\children{\Gamma}})$ is one-step satisfiable.
\end{itemize}
To propagate unsatisfiability, we introduce a second functional
$\mathcal{A}_{G}\colon \PowG\to\PowG$, where we take
$\mathcal{A}_{G}(S)$ to consist of
\begin{itemize}
\item all non-state sequents $\Gamma\in G\setminus\mathsf{States}$ such
that there is a propositional rule applying to $\Gamma$ all whose
conclusions are in $S$, and
\item all states $\Gamma\in G\cap\mathsf{States}$ such that
$\children{\Gamma} \subseteq G$ and the one-step pair
$(\phi_\Gamma,\eta_{\children{\Gamma}\setminus S})$ is one-step
unsatisfiable.
\end{itemize}
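In the running Python sketch (with \texttt{children} as above and the
one-step satisfiability oracle assumed as before), one application of
either functional reads:
\begin{verbatim}
# Sketch: single applications of E_G and A_G to S (both subsets of
# the set G of generated sequents); one_step_sat(gamma, S) is the
# assumed oracle for one-step satisfiability of (phi_gamma, eta_S).
def step_E(G, S, psi, one_step_sat):
    return {g for g in G
            if (is_state(g) and one_step_sat(g, S & children(g, psi)))
            or (not is_state(g) and S & children(g, psi))}

def step_A(G, S, psi, one_step_sat):
    return {g for g in G
            if (is_state(g) and children(g, psi) <= G
                and not one_step_sat(g, children(g, psi) - S))
            or (not is_state(g)
                and any(set(app) <= S
                        for app in prop_rule_applications(g)))}
\end{verbatim}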
\noindent Both~$\mathcal{E}_G$ and~$\mathcal{A}_G$ are clearly monotone. We note
additionally that they also depend monotonically on~$G$:
\begin{lemma}\label{lem:functionals-monotone}
Let $G\subseteq G'\subseteq\mathsf{Seqs}$. Then
\begin{enumerate}
\item\label{item:functionals-monotone} $\mathcal{E}_G(S)\subseteq\mathcal{E}_{G'}(S)$ and
$\mathcal{A}_G(S)\subseteq\mathcal{A}_{G'}(S)$ for all~$S\in\mathcal{P} G$;
\item\label{item:fps-monotone} $\nu\mathcal{E}_G\subseteq\nu\mathcal{E}_{G'}$ and
$\mu\mathcal{A}_G\subseteq\mu\mathcal{A}_{G'}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Claim~\eqref{item:functionals-monotone} is immediate from the
definitions (for~$\mathcal{A}_G$, this hinges on the condition
$\children{\Gamma}\subseteq G$ for states~$\Gamma$); we show
Claim~\eqref{item:fps-monotone}. For $\mathcal{E}_G$, it suffices to show
that $\nu\mathcal{E}_G$ is a postfixpoint of~$\mathcal{E}_{G'}$. Indeed,
by~\eqref{item:functionals-monotone}, we have
$\nu\mathcal{E}_G=\mathcal{E}_G(\nu\mathcal{E}_G)\subseteq\mathcal{E}_{G'}(\nu\mathcal{E}_G)$. For~$\mathcal{A}_G$,
we show that $G\cap\mu\mathcal{A}_{G'}$ is a prefixpoint of $\mathcal{A}_G$. Indeed,
by~\eqref{item:functionals-monotone}, we have
$\mathcal{A}_G(\mu\mathcal{A}_{G'}\cap G)\subseteq\mathcal{A}_{G'}(\mu\mathcal{A}_{G'}\cap
G)\subseteq\mathcal{A}_{G'}(\mu\mathcal{A}_{G'})=\mu\mathcal{A}_{G'}$, and
$\mathcal{A}_G(\mu\mathcal{A}_{G'}\cap G)\subseteq G$ by the definition of~$\mathcal{A}_G$.
\end{proof}
\begin{rem}\label{rem:non-duality}
The reader will note that the functionals $\mathcal{A}_G$ and
$\mathcal{E}_G$ fail to be mutually dual, as $\mathcal{E}_G$ quantifies
existentially instead of universally over propositional rules. We
will show that the well-known commutation of the propositional rules
implies that the more permissive use of existential quantification
eventually leads to the same answers (see proof of
Lemma~\ref{lem:no-propagation}.(\ref{item:inv-AE-Gf})); it allows
for more economy in the generation of new nodes in the global
caching algorithm, described next.
\end{rem}
\noindent The global caching algorithm maintains, as global variables,
a set $G$ of sequents with subsets~$E$ and~$A$ of sequents already
decided as satisfiable or unsatisfiable, respectively.
\begin{alg}\label{alg:global-caching}
(Decide $\psi$-satisfiability of $\phi_0$ by global caching.)
\begin{enumerate}
\item Initialize $G=\{\Gamma_0\}$ with $\Gamma_0=\{\phi_0,\psi\}$, and
$E=A=\emptyset$.
\item (Expand)\label{step:expand} Select a sequent
  $\Gamma\in G$ some of whose children are not in $G$, and
  add any number of these missing children to $G$. If no sequent with
  missing children is found, go to Step~\ref{step:final-prop}.
\item (Propagate)\label{step:prop} Optionally recalculate $E$ as the greatest fixed
point $\nu S.\,\mathcal{E}_G(S\cup E)$, and $A$ as
$\mu S.\,\mathcal{A}_G(S\cup A)$. If $\Gamma_0\in E$, return `yes'; if
$\Gamma_0\in A$, return `no'.
\item Go to Step~\ref{step:expand}.
\item \label{step:final-prop} Recalculate $E$ as
$\nu S.\,\mathcal{E}_G(S\cup E)$; return `yes' if $\Gamma_0\in E$, and
`no' otherwise.
\end{enumerate}
\end{alg}
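In the running sketch, the algorithm takes the following schematic
form (with \texttt{step\_E} and \texttt{step\_A} as above; the
fixpoints are computed by naive Kleene iteration, and the
non-deterministic choices are resolved in a simple-minded way, namely
by expanding one missing child at a time and propagating after every
expansion):
\begin{verbatim}
# Sketch of global caching; Gamma_0 = {phi0, psi}.
def gfp(f, top):                 # naive greatest-fixpoint iteration
    S = set(top)
    while f(S) != S:
        S = f(S)
    return S

def lfp(f):                      # naive least-fixpoint iteration
    S = set()
    while f(S) != S:
        S = f(S)
    return S

def global_caching(phi0, psi, one_step_sat):
    gamma0 = frozenset({phi0, psi})
    G, E, A = {gamma0}, set(), set()
    while True:
        missing = {c for g in G for c in children(g, psi)} - G
        if not missing:
            break
        G.add(missing.pop())     # expansion step (one child at a time)
        E = gfp(lambda S: step_E(G, S | E, psi, one_step_sat), G)
        A = lfp(lambda S: step_A(G, S | A, psi, one_step_sat))
        if gamma0 in E:          # propagation decided the root node
            return True
        if gamma0 in A:
            return False
    E = gfp(lambda S: step_E(G, S | E, psi, one_step_sat), G)
    return gamma0 in E
\end{verbatim}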
\begin{rem}
As explained at the beginning of the section, the key feature of the
global caching algorithm is that it potentially avoids generating
the full exponential-sized set of tableau sequents by detecting
satisfiability or unsatisfiability on the fly in the intermediate
optional propagation steps. The non-determinism in the formulation
of the algorithm can be resolved arbitrarily, i.e.\ we will see that
any choice (e.g.\ of which sequents to add in the expansion step and
whether or not to trigger propagation) leads to correct results;
thus, it affords room for heuristic optimization. Detecting
\emph{un}satisfiability in Step~\ref{step:prop} requires previous
generation of all, in principle exponentially many, children of a
sequent. This need not be prohibitive in practice,
as the exponential dependence is only in the number of
\emph{top-level} modalities in a sequent. As an extreme example, if
we encode the graded modality $\Diamond_0\phi$ as $\sharp(\phi)>0$
in Presburger modal logic, then the sequent $\{\Diamond_0^n\top\}$
($n$ successive diamonds) induces $2^n$ types but has only two
children, $\{\Diamond_0^{n-1}\top\}$ and
$\{\neg\Diamond_0^{n-1}\top\}$.
\end{rem}
\noindent We next prove correctness of the algorithm. As a first step,
we show that a sequent can be added to~$E$ (or to~$A$) in the optional
Step~\ref{step:prop} of the algorithm only if it will at any rate end up in~$E$ (or
outside~$E$, respectively) in the final step of the algorithm. To this
end, let~${G_f}$ denote the least set of sequents such that
$\Gamma_0 \in {G_f}$ and ${G_f}$ contains all children of nodes contained
in ${G_f}$, i.e.\ $\children{\Gamma}\subseteq{G_f}$ for each
$\Gamma\in{G_f}$; that is, at the end of a run of the algorithm without
intermediate propagation steps, we have~$G={G_f}$ and
$E=\nu S.\,\mathcal{E}_{{G_f}}(S)$. We then formulate the claim in the following
invariants:
\begin{lemma}\label{lem:no-propagation}
At any stage throughout a run of Algorithm~\ref{alg:global-caching} we have
\begin{enumerate}
\item\label{item:inv-E-G} $E \subseteq \nu S. \mathcal{E}_G(S)$
\item\label{item:inv-A-G} $A \subseteq \mu S. \mathcal{A}_G(S)$
\item \label{item:inv-E-Gf} $E \subseteq \nu S . \mathcal{E}_{{G_f}}(S)$
\item\label{item:inv-A-Gf} $A \subseteq \mu S. \mathcal{A}_{{G_f}}(S)$
\item \label{item:inv-AE-Gf} $A\cap\nu S . \mathcal{E}_{{G_f}}(S)= \mu S. \mathcal{A}_{{G_f}}(S) \cap\nu S . \mathcal{E}_{{G_f}}(S) = \emptyset$.
\end{enumerate}
\end{lemma}
\noindent In the proof, we use the following simple fixpoint laws (for
which no novelty is claimed):
\begin{lemma}\label{lem:fp-laws}
Let $X$ be a set, and let $F:\mathcal{P} X\to\mathcal{P} X$ be monotone w.r.t.\
set inclusion. Then
\begin{equation*}
\nu S.\,F(S\cup \nu S.\,F(S))=\nu S.\,F(S)\quad\text{and}\quad
\mu S.\,F(S\cup \mu S.\,F(S))=\mu S.\,F(S).
\end{equation*}
\end{lemma}
\begin{proof}
In both claims, `$\supseteq$' is trivial; we show `$\subseteq$'.
For $\nu$, we show (already using `$\supseteq$') that the left-hand side is
a fixpoint of~$F$:
\begin{align*}
& \nu S. F(S \cup \nu S. F(S)) \\
& =F((\nu S. F(S \cup \nu S. F(S)))\cup (\nu S. F(S)))&& \by{fixpoint unfolding}\\
& =F(\nu S. F(S \cup \nu S. F(S)))&& \by{$\nu S.\,F(S\cup \nu S.\,F(S))\supseteq\nu S.\,F(S)$}.
\end{align*}
For~$\mu$, we show that the right-hand side is a fixpoint of
$S\mapsto F(S \cup \mu S.F(S))$:
\begin{equation*}
F( \mu S.F(S) \cup \mu S.F(S))=F(\mu S.F(S))=\mu S.F(S). \qedhere
\end{equation*}
\end{proof}
\begin{proof}[Proof (Lemma~\ref{lem:no-propagation})]
\emph{(\ref{item:inv-E-G}) and~(\ref{item:inv-A-G}):}\/ Clearly, these
invariants hold \emph{initially}, as~$E$ and~$A$ are initialized
to~$\emptyset$.
In \emph{expansion steps}, the invariants are preserved because by
Lemma~\ref{lem:functionals-monotone}, $\nu S. \mathcal{E}_G(S)$ and
$\mu S. \mathcal{A}_G(S)$ depend monotonically on~$G$.
Finally, in a \emph{propagation step}, we change $E$ into
\begin{equation*}
E' = \nu S. \mathcal{E}_G(S \cup E) \subseteq \nu S. \mathcal{E}_G(S \cup \nu S. \mathcal{E}_G(S)) = \nu S .\mathcal{E}_G(S),
\end{equation*}
where the inclusion is by the invariant for~$E$ and the equality is
by Lemma~\ref{lem:fp-laws}. Thus, the invariant~(\ref{item:inv-E-G})
is preserved. Similarly,~$A$ is changed into
\begin{equation*}
A' = \mu S. \mathcal{A}_G(S \cup A) \subseteq \mu S. \mathcal{A}_G(S \cup \mu S.\mathcal{A}_G(S)) = \mu S.\mathcal{A}_G(S)
\end{equation*}
where the equality is by Lemma~\ref{lem:fp-laws}, preserving
invariant~(\ref{item:inv-A-G}).
\emph{(\ref{item:inv-E-Gf}) and~(\ref{item:inv-A-Gf}):}\/ Immediate
from (\ref{item:inv-E-G}) and~(\ref{item:inv-A-G}) by
Lemma~\ref{lem:functionals-monotone}, since $G\subseteq{G_f}$ at all
stages.
\emph{(\ref{item:inv-AE-Gf}):}\/ Let $\overline{\mathcal{A}_{{G_f}}}$ denote
the dual of~$\mathcal{A}_{G_f}$, i.e.\
$\overline{\mathcal{A}_{{G_f}}}(S)={G_f}\setminus\mathcal{A}_{G_f}({G_f}\setminus S)$; that
is, $\overline{\mathcal{A}_{{G_f}}}$ is defined like~$\mathcal{E}_{G_f}$ except that
$\overline{\mathcal{A}_{{G_f}}}(S)$ contains a non-state sequent
$\Gamma\in{G_f}\setminus\mathsf{States}$ if \emph{every} propositional rule
that applies to $\Gamma$ has a conclusion that is contained in~$S$
(cf.\ Remark~\ref{rem:non-duality}). Then
$\nu S.\,\overline{\mathcal{A}_{{G_f}}}(S)$ is the complement of
$\mu S.\,\mathcal{A}_{G_f}(S)$, so by~(\ref{item:inv-A-Gf}) it suffices to
show $\nu S.\,\mathcal{E}_{G_f}(S)\subseteq\nu
S.\,\overline{\mathcal{A}_{{G_f}}}(S)$. To this end, we show that
$\nu S.\,\mathcal{E}_{G_f}(S)$ is a postfixpoint of $\overline{\mathcal{A}_{{G_f}}}$. So
let $\Gamma\in\nu S.\,\mathcal{E}_{G_f}(S)=\mathcal{E}_{G_f}(\nu
S.\,\mathcal{E}_{G_f}(S))$. If~$\Gamma$ is a state, then it follows
immediately that
$\Gamma\in\overline{\mathcal{A}_{{G_f}}}(\nu S.\,\mathcal{E}_{G_f}(S))$, since the
definitions of $\mathcal{E}_{G_f}$ and $\overline{\mathcal{A}_{{G_f}}}$ agree on
containment of states (note that by definition of~${G_f}$,
$\children{\Gamma}\subseteq{G_f}$ for
every~$\Gamma\in{G_f}$). Otherwise, we proceed by induction on the
size of~$\Gamma$. By definition of~$\mathcal{E}_{G_f}$, there exists a
conclusion~$\Gamma'\in\nu S.\,\mathcal{E}_{G_f}(S)$ of a propositional
rule~$R$ applied to~$\Gamma$. By induction,
$\Gamma'\in\overline{\mathcal{A}_{{G_f}}}(\nu S.\,\mathcal{E}_{G_f}(S))$. Now
let~$\Delta$ be the set of conclusions of a propositional rule~$R'$
applied to~$\Gamma$, w.l.o.g.\ distinct from~$R$. Since the
propositional rules commute, there is a rule application
to~$\Gamma'$ (corresponding to a postponed application of~$R'$) that
has a conclusion~$\Gamma''\in\nu S.\,\mathcal{E}_{G_f}(S)$ such
that~$\Gamma''$ is, via postponed application of~$R$, a conclusion
of a propositional rule applied to some $\Gamma'''\in\Delta$. Then,
$\Gamma'''\in\mathcal{E}_{G_f}(\nu S.\,\mathcal{E}_{G_f}(S))=\nu S.\,\mathcal{E}_{G_f}(S)$ by
definition of~$\mathcal{E}_{G_f}$, showing
$\Gamma\in\overline{\mathcal{A}_{{G_f}}}(\nu S.\,\mathcal{E}_{G_f}(S))$ as required.
\end{proof}
\noindent Invariants~(\ref{item:inv-E-Gf}) and~(\ref{item:inv-AE-Gf})
in Lemma~\ref{lem:no-propagation} imply that once we prove correctness
for runs of the algorithm that perform propagation only in the last
Step~\ref{step:final-prop} (that is, once all children have been added), correctness of
the general algorithm follows. That is, it remains to show that
$\nu S.\,\mathcal{E}_{G_f}(S)$ consists precisely of the satisfiable sequents
in~${G_f}$. We split this claim into two inclusions respectively
corresponding to soundness and completeness in the same way as for the
type elimination algorithm (Section~\ref{sec:type-elim}). The
following statement is analogous to Lemma~\ref{lem:ex-truth}.
\begin{lemma}\label{lem:caching-completeness}
Let $E$ be a postfixpoint of $\mathcal{E}_{{G_f}}$ and denote by
$E_s = E \cap \mathsf{States}$ the collection of states contained in
$E$. Then there is a coalgebra $C=(E_s,\gamma)$ such that
$E_s \cap \{ \Gamma \mid \Gamma \vdash_{\mathit{PL}} \phi \}\subseteq
\Sem{\phi}_{C}$ for all $\phi \in \Sigma$ (recall that $\vdash_{\mathit{PL}}$
denotes propositional entailment, see
Definition~\ref{def:prop}). Consequently, whenever $\Gamma \in E$
and $\phi \in\Gamma$, then~$\phi$ is $\psi$-satisfiable.
\end{lemma}
\begin{proof}
The proof proceeds similarly to the one of Lemma~\ref{lem:ex-truth}:
In order to define a suitable $\gamma$, let $\Gamma \in E_s$. By the
definition of $\mathcal{E}_{{G_f}}$, the one-step pair
$(\phi_\Gamma,\eta_{E \cap \children{\Gamma}})$ is satisfiable. Let
$M=(X,\tau,t)$ be a one-step model satisfying
$(\phi_\Gamma,\eta_{E \cap \children{\Gamma}})$. By the definition of
$\eta_{E \cap \children{\Gamma}}$, we can then define a function
$f\colon X \to E \cap \children{\Gamma}$ such that for all $x \in X$ and all
$\epsilon\heartsuit \rho \in \Gamma$ we have $\rho\in f(x)$ iff
$x \in \tau(a_{\epsilon\heartsuit \rho})$ (noting that by the
definition of children of~$\Gamma$, $f(x)$ contains either~$\rho$
or~$\neg\rho$). Now note that since~$E$ is a postfixpoint of
$\mathcal{E}_{{G_f}}$, every non-state sequent $\Delta\in E$ has a child
in~$E$ that is a conclusion of a propositional rule applied
to~$\Delta$, and hence propositionally entails~$\bigwedge\Delta$. Since
every propositional rule removes a propositional connective, this
implies that we eventually reach a state in~$E_s$ from~$\Delta$
along the child relation; that is, for every $\Delta\in E$ there is
a state $\Delta'\in E_s$ such that $\Delta'$ propositionally
entails~$\bigwedge\Delta$. We can thus prolong $f$ to a function
$\bar f\colon X \to E_s$ such that
\begin{equation}\label{eq:barf}
\bar f (x) \vdash_{\mathit{PL}} \rho \quad \mbox{iff} \quad x \in \tau(a_{\epsilon\heartsuit \rho})
\end{equation}
for all $\epsilon\heartsuit \rho \in \Gamma$ and all $x \in X$. We now
define $\gamma(\Gamma) \mathrel{:=} T {\bar f} (t)$, obtaining
$\gamma\colon E_s \to T E_s$. We will show that
\begin{equation}
\Gamma \vdash_{\mathit{PL}} \chi \qquad \text{implies} \qquad \Gamma \in
\Sem{\chi}_C\label{eq:truth}
\end{equation}
for all $\chi \in \Sigma$ and all $\Gamma \in E_s$, which implies
the first claim of the lemma. We proceed by induction on~$\chi$; by
soundness of propositional reasoning, we immediately reduce to the
case where~$\chi\in\Gamma$, in which case $\chi$ has the form
$\chi=\epsilon\heartsuit\rho$ since~$\Gamma$ is a state. We continue to
use the data~$M=(X,\tau,t)$, $f$, $\bar f$ featuring in the
above construction of $\gamma(\Gamma)=T\bar f(t)$. Note again that
for every $x\in X$, we have by the defining property of children
of~$\Gamma$ that either $f(x)\vdash_{\mathit{PL}}\rho$ or
$f(x)\vdash_{\mathit{PL}}\neg\rho$; since the conclusions of propositional
rules are propositionally stronger than the premisses, it follows
that the same holds for $\bar f(x)$. The inductive hypothesis
therefore implies that $\bar f(x)\in\Sem{\rho}_C$ iff
$\bar f(x)\vdash_{\mathit{PL}}\rho$; combining this with \eqref{eq:barf}, we
obtain $f^{-1}[\Sem{\rho}_C]=\tau(a_{\epsilon\heartsuit\rho})$. To
simplify notation, assume that $\epsilon=1$ (the case
where~$\epsilon=-1$ being entirely analogous). We then have to
show $\gamma(\Gamma)\in\Sem{\heartsuit}_{E_s}(\Sem{\rho}_C)$, which by
naturality of~$\Sem{\heartsuit}$ is equivalent to
$t\in\Sem{\heartsuit}_X(f^{-1}[\Sem{\rho}_C])=\Sem{\heartsuit}_X(\tau(a_{\heartsuit\rho}))$,
where the equality is by the preceding calculation. But
$t\in\Sem{\heartsuit}_X(\tau(a_{\heartsuit\rho}))$ follows from
$M\models(\phi_\Gamma,\eta_{E\cap \children{\Gamma}})$ and $\heartsuit\rho\in\Gamma$ by
the definition of~$\phi_\Gamma$.
The second claim of the lemma is now immediate for states
$\Gamma \in E_s$. As indicated above, all other sequents
$\Gamma \in E\setminus E_s$ can be transformed into some
$\Gamma' \in E_s$ using the propositional rules, in which
case~$\Gamma'$ propositionally entails all $\rho\in\Gamma$; thus,
satisfiability of $\Gamma'$ implies satisfiability of
all~$\rho\in\Gamma$.
\end{proof}
\noindent Lemma~\ref{lem:caching-completeness} ensures completeness of
the algorithm, i.e.\ whenever the algorithm terminates with `yes',
then $\phi_0$ is $\psi$-satisfiable. For soundness (i.e.\ the
converse implication: the algorithm answers `yes' if~$\phi_0$ is
$\psi$-satisfiable), we proceed similarly to the proof of
Lemma~\ref{lem:realization}:
\begin{lemma}\label{lem:caching-soundness}
The set of $\psi$-satisfiable sequents contained in ${G_f}$ is a
postfixpoint of $\mathcal{E}_{{G_f}}$.
\end{lemma}
\begin{proof}
Let $S$ be the set of $\psi$-satisfiable sequents in $G_f$. We have
to show that $S \subseteq \mathcal{E}_{{G_f}}(S)$; so let $\Gamma \in
S$. If~$\Gamma$ is not a state, then to show $\Gamma\in\mathcal{E}_{{G_f}}(S)$
we have to check that some propositional rule that applies
to~$\Gamma$ has a $\psi$-satisfiable conclusion that is moreover
contained in ${G_f}$; this is easily verified by inspection of the
rules, noting that all children of~$\Gamma$ are in~${G_f}$. Now
suppose that~$\Gamma$ is a state; we then have to show that the
one-step pair $(\phi_\Gamma, \eta_{S \cap \children{\Gamma}})$ is
one-step satisfiable. Let~$x$ be a state in a $\psi$-model
$C=(X,\gamma)$ such that $x\models_C\Gamma$. We construct a one-step
model of $(\phi_\Gamma, \eta_{S \cap \children{\Gamma}})$ from~$C$
in the same way as in the proof of Lemma~\ref{lem:realization}. The
only point to note additionally is that for every $y\in X$, we have
some $\Delta\in S\cap\children{\Gamma}$ such that
$y\models_C\Delta$, namely
$\Delta=\{\epsilon\rho\mid\epsilon'\heartsuit\rho\in\Gamma,y\models_C\epsilon\rho\}$
(where $\epsilon$ and $\epsilon'$ range over $\{-1,1\}$).
\end{proof}
\noindent Summing up, we have
\begin{theorem}\label{thm:global-cache}
If the strict one-step satisfiability problem of~$\Lambda$ is in
{\mbox{\upshape\textsc{ExpTime}}}\xspace, then the global caching algorithm decides satisfiability
under global assumptions in exponential time.
\end{theorem}
\begin{proof}
Correctness is by Lemma~\ref{lem:caching-soundness} and
Lemma~\ref{lem:caching-completeness}, taking into account the
reduction to runs without intermediate propagation according to
Lemma~\ref{lem:no-propagation}. It remains to analyse run time; this
point is similar to that in Lemma~\ref{lem:type-elim-time}: there are
only exponentially many sequents, so there can be at most
exponentially many expansion steps, and the fixpoint calculations in
the propagation steps run through at most exponentially many
iterations. The run time analysis of a single fixpoint iteration
step is essentially the same as in Lemma~\ref{lem:type-elim-time},
using that strict one-step satisfiability is in {\mbox{\upshape\textsc{ExpTime}}}\xspace for state
sequents; for non-state sequents~$\Gamma$ just note that there are
only polynomially many conclusions of propositional rules arising
from~$\Gamma$, which need to be compared with at most exponentially
many existing nodes.
\end{proof}
\section{Concrete Algorithm}\label{sec:concrete-alg}
In the following we provide a more concrete description of the global
caching algorithm, which does not use the computation of least and
greatest fixpoints as primitive operators. The algorithm closely
follows Liu and Smolka's well-known algorithm for fixpoint computation
in what the authors call ``dependency graphs''~\cite{lism98:simp}; in
our case, these structures are generated by the derivation rules. The
main difference between the algorithm described below and Liu and
Smolka's is caused by the treatment of ``modal'' sequents, i.e.\
states, as the condition that these sequents need to satisfy is not
expressible purely as a reachability property.
As in the previous section we work with a closed set $\Sigma$
(generated by the global assumption~$\psi$ and the target formula
$\phi_0$) and \emph{(tableau) sequents}, i.e.\ arbitrary subsets
$\Gamma,\Theta\subseteq\Sigma$, understood conjunctively. We continue
to write $\mathsf{Seqs}=\mathcal{P}\Sigma$ for the set of sequents, and $\mathsf{States}$ for
the set of states, i.e.\ sequents consisting of modal literals only
(recall that we take propositional atoms as nullary operators).
The set $\mathsf{Seqs}$ of sequents carries a hypergraph structure
$E \subseteq \mathsf{Seqs} \times \mathcal{P} (\mathsf{Seqs})$ that contains
\begin{itemize}
\item for each $\Gamma \in \mathsf{States}$ the pair
$(\Gamma,\children{\Gamma})$ (recall that $\children{\Gamma} \subseteq \mathsf{Seqs}$
denotes the set of children of~$\Gamma$); and
\item for each
$\Gamma \in \mathsf{Seqs} \setminus \mathsf{States}$ the set of pairs $\{ (\Gamma,\Delta) \mid \Gamma/\Delta \mbox{ a propositional rule applicable to } \Gamma \}$.
\end{itemize}
In the following we write~$E_M$ for the ``modal'' part of~$E$ induced
by the state-child relationships as per the first bullet point, and
$E_P$ for the part of~$E$ induced by the propositional rules as per
the second bullet point (so $E$ is the disjoint union of~$E_M$ and~$E_P$).
Our algorithm maintains a {\em partial} function
$\alpha: \mathsf{Seqs} \to \{ 0,1\}$ that maps a sequent to $0$ if it is not
$\psi$-satisfiable, to $1$ if it is $\psi$-satisfiable, and is
undefined in case its satisfiability has not been determined yet. In the
terminology of the previous section $\alpha$ should have the following
properties:
\begin{itemize}
\item $\alpha(\Gamma) = 1$ iff $\Gamma \in \nu X .\, \mathcal{E}_G(X)$ and
\item $\alpha(\Gamma) = 0$ iff $\Gamma \in \mu X.\, \mathcal{A}_G(X)$
\end{itemize}
where $G$ denotes the set of sequents for which $\alpha$ is defined.
The idea of computing a partial function is that this allows
determining $\psi$-satisfiability of a given sequent without exploring
the full hypergraph. We will now describe an algorithm for computing
$\alpha$ that is inspired by Liu and Smolka's \emph{local}
algorithm~\cite[Figures~3,4]{lism98:simp} and then show its
correctness.
\begin{alg}
\label{alg:concrete} (Decide $\psi$-satisfiability of $\phi_0$ by concrete global caching)
\begin{algorithmic}
\State Initialize $\alpha$ to be undefined everywhere;
\State $\alpha(\Gamma_0) \coloneqq 1$; $D(\Gamma_0) \coloneqq \emptyset$; $W \coloneqq \{ (\Gamma_0,\Delta) \mid (\Gamma_0,\Delta) \in E\}$;
\While{$W \not= \emptyset$}
\State Pick $e = (\Gamma,\Delta) \in W$;
\State $W \coloneqq W - \{e\}$;
\If{$ \exists \Gamma' \in \Delta .\, \text{($\alpha(\Gamma')$ is undefined)}$} \Comment{Expansion step}
\State Pick non-empty $U \subseteq \{\Gamma' \in \Delta \mid \alpha(\Gamma') \mbox{ undefined}\}$;
\State For each $\Gamma' \in U$ put $\alpha(\Gamma') \coloneqq 1$, $D(\Gamma') \coloneqq \emptyset$, $W \coloneqq W \cup \{ (\Gamma',\Delta') \mid (\Gamma',\Delta') \in E \}$;
\EndIf
\If{$e \in E_P$} \Comment{Propagation step}
\If{$ \forall \Gamma' \in \Delta.\, \alpha(\Gamma') = 0$} \Comment{Case $\Gamma \not\in \mathsf{States}$}
\State $\alpha(\Gamma) \coloneqq 0$; $W \coloneqq W \cup D(\Gamma)$; $D(\Gamma) \coloneqq \emptyset$;
\ElsIf{$ \exists \Gamma' \in \Delta.\, \alpha(\Gamma') = 1$}
\State pick $\Gamma' \in \Delta$ s.t. $\alpha(\Gamma') = 1$ and put $D(\Gamma') \coloneqq D(\Gamma') \cup \{(\Gamma,\Delta)\}$;
\State $W \coloneqq W - \{(\Gamma'',\Delta'') \in W \mid \Gamma'' == \Gamma \}$;
\EndIf
\ElsIf{$e \in E_M$} \Comment{Propagation step}
\State $S_0 \coloneqq \{ \Gamma' \in \Delta \mid \alpha(\Gamma') ==0 \}$ ; $S_1 \coloneqq \{ \Gamma' \in \Delta \mid \alpha(\Gamma') == 1\}$ \Comment{Case $\Gamma \in \mathsf{States}$}
\If{$\Delta == S_0 \cup S_1$ and $(\phi_\Gamma,\eta_{S_1})$ is not one-step satisfiable}
\State $\alpha(\Gamma) \coloneqq 0$; $W \coloneqq W \cup D(\Gamma)$; $D(\Gamma) \coloneqq \emptyset$;
\ElsIf{$(\phi_\Gamma,\eta_{S_1})$ is one-step satisfiable}
\For{$\Gamma' \in S_1$}
$D(\Gamma') \coloneqq D(\Gamma') \cup \{(\Gamma,\Delta)\}$;
\EndFor
\ElsIf{$\Delta \not= S_0 \cup S_1$} $W \coloneqq W \cup \{e\}$;
\EndIf
\EndIf
\EndWhile
\end{algorithmic}
\end{alg}
\begin{rem}
In Algorithm~\ref{alg:concrete}, hyperedges should be understood as
represented symbolically, i.e.\ either by describing matches of
propositional rules or by marking a hyperedge as modal (which determines
the hyperedge uniquely given the source node). This serves in particular
to avoid having to create all of the exponentially many children of
a state node at once. Target nodes $\Gamma'\in\Delta$ of hyperedges
$(\Gamma,\Delta)$ are generated explicitly only once they are picked
from~$\Delta$ in the expansion step (the propagation step only
accesses nodes that are already generated).
\end{rem}
\noindent We proceed to show correctness of
Algorithm~\ref{alg:concrete} and establish a precise connection to our
global caching algorithm. First we need a couple of lemmas that
establish key invariants of the algorithm. Note that the current state
of a run of the algorithm can be characterized by the triple
$(\alpha,D,W)$ where $\alpha$ is the current (partial) labelling of
sequents, $D$ assigns to any given sequent $\Gamma$ a set of
hyperedges that need to be investigated if the $\alpha$-value of $\Gamma$
changes, and $W$ contains the set of hyperedges that the algorithm
still has to check. The algorithm terminates when it reaches a state
of the form $(\alpha,D,\emptyset)$, i.e.\ when there are no edges left
to be checked. Given a state $s = (\alpha,D,W)$ of the algorithm, we
put $G^s_i \coloneqq \{ \Gamma \in \mathsf{Seqs} \mid \alpha(\Gamma) = i \}$
for $i=0,1$, and $G^s = G^s_0 \cup G^s_1$ (so $G^s$ is the domain of
definition of~$\alpha$).
\begin{lemma}\label{lem:unsat}
Let $\Gamma \in \mathsf{Seqs}$ and suppose $s=(\alpha,D,W)$ is a state reached during execution of the algorithm. Then
$\alpha(\Gamma) = 0$ implies that $\Gamma \in \mu X.\, \mathcal{A}_{G^s}(X)$ and therefore, by Lemma~\ref{lem:no-propagation}.(\ref{item:inv-AE-Gf}) and Lemma~\ref{lem:caching-soundness}, the sequent $\Gamma$ is not
$\psi$-satisfiable.
\end{lemma}
\begin{proof}
First note that once $\alpha(\Gamma) = 0$ for some sequent $\Gamma$,
the value $\alpha(\Gamma)$ will not change any more throughout the
run of the algorithm, as the only moment when a sequent~$\Gamma$ is
assigned value~$1$ is when~$\Gamma$ is newly added to the domain of
$\alpha$. Since $G^s$ can only grow during a run of the algorithm
and, by Lemma~\ref{lem:functionals-monotone},
$\Gamma \in \mu X.\, \mathcal{A}_{G^s}(X)$ depends monotonically on $G^s$,
it suffices to establish the invariant for the point where
$\alpha(\Gamma)$ is set to~$0$. So suppose that this happens while
$e = (\Gamma,\Delta)$ is processed, with the state being
$s=(\alpha,D,W)$ before and $s'=(\alpha',D',W')$ after
processing~$e$. Suppose that~$s$ satisfies the claimed invariant; we
have to show that~$s'$ satisfies it as well. We do this for the case
where $e \in E_P$; the case $e \in E_M$ is completely
analogous.
Since $e \in E_P$, the reason for setting $\alpha'(\Gamma) = 0$ is
that for all $\Gamma' \in \Delta$ we have $\alpha(\Gamma') = 0$ --
in other words, we have $\Gamma \in \mathcal{A}_{G^{s}}(G^{s}_0)$. This
implies $\Gamma \in \mathcal{A}_{G^{s'}}(G^{s}_0)$ by
Lemma~\ref{lem:functionals-monotone} as $G^{s} \subseteq G^{s'}$.
By assumption on~$s$, we have
$G^{s}_0 \subseteq \mu X.\, \mathcal{A}_{G^{s}}(X) \subseteq\mu X.\,
\mathcal{A}_{G^{s'}}(X)$, again using
Lemma~\ref{lem:functionals-monotone} in the second
step. Monotonicity of $\mathcal{A}_{G^{s'}}$ now yields
$$\Gamma \in \mathcal{A}_{G^{s'}}(G^{s}_0) \subseteq \mathcal{A}_{G^{s'}}( \mu X.\, \mathcal{A}_{G^{s'}}(X)) = \mu X.\, \mathcal{A}_{G^{s'}}(X) $$
as required.
\end{proof}
\noindent
The following technical lemma follows by inspecting the details of the
algorithm:
\begin{lemma}\label{lem:inv}
Suppose $s=(\alpha,D,W)$ is a state reached during execution of the algorithm.
Then for all $\Gamma \in G^s_1$ and all $(\Gamma,\Delta) \in E$ precisely one of the following holds:
\begin{itemize}
\item $(\Gamma,\Delta) \in W$ or
\item $\Gamma \not\in \mathsf{States}$ and there is $(\Gamma,\Delta') \in E_P$ with $(\Gamma,\Delta') \in D(\Gamma'')$ for some $\Gamma'' \in \Delta'$ or
\item $\Gamma \in \mathsf{States}$ and $(\phi_\Gamma,\eta_{S})$ is one-step satisfiable with
$S= \{\Gamma' \in \Delta \mid (\Gamma,\Delta) \in D(\Gamma')\}$
\end{itemize}
We also note that $D(\Gamma) \not= \emptyset$ implies $\alpha(\Gamma) = 1$.
\end{lemma}
\noindent Correctness of the algorithm is established in the following
theorem.
\begin{theorem}
When Algorithm~\ref{alg:concrete} terminates in a state
$s=(\alpha,D,\emptyset)$, then for all $\Gamma \in \mathsf{Seqs}$ we have:
\begin{enumerate}
\item $\alpha(\Gamma) = 0$ implies $\Gamma \in \mu X.\, \mathcal{A}_{G^s}(X)$ and thus
$\Gamma$ is not $\psi$-satisfiable.
\item $\alpha(\Gamma) = 1$ implies $\Gamma \in \nu X.\, \mathcal{E}_{G^s}(X)$ and thus
$\Gamma$ is $\psi$-satisfiable.
\end{enumerate}
\end{theorem}
\begin{proof}
The first claim is immediate by Lemma~\ref{lem:unsat}.
For the second claim, it suffices to prove that~$G^s_1$ is included in the greatest fixpoint of $\mathcal{E}_{G^s}$ -- the claim concerning $\psi$-satisfiability of~$\Gamma$ then follows
from Lemmas~\ref{lem:functionals-monotone} and~\ref{lem:caching-completeness} in the previous section.
To this end, it suffices to show that $G^s_1$ is a postfixpoint of $\mathcal{E}_{G^{s}}$ -- but this follows immediately from
Lemma~\ref{lem:inv} together with $W = \emptyset$ and $\{ \Gamma \mid D(\Gamma) \not= \emptyset \} \subseteq G^s_1$.
\end{proof}
\noindent Algorithm~\ref{alg:concrete} is closely related to
Algorithm~\ref{alg:global-caching}: Both algorithms explore the
collection of sequents that are ``reachable'' from $\Gamma_0$, making
non-deterministic choices concerning which sequents to expand next. A
crucial difference to Algorithm~\ref{alg:global-caching} is that
Algorithm~\ref{alg:concrete} contains a concrete description of how
to compute the fixpoints of $\mathcal{E}$ and $\mathcal{A}$ by successively updating
the labelling function; to this end, it imposes a more definite
strategy regarding propagation by enforcing a propagation step after
every expansion step. We conclude by providing an estimate of the
complexity of the algorithm:
\begin{proposition}
If the strict one-step satisfiability problem of~$\Lambda$ is in
{\mbox{\upshape\textsc{ExpTime}}}\xspace, then Algorithm~\ref{alg:concrete} decides satisfiability
under global assumptions in exponential time.
\end{proposition}
\begin{proof}
To get the upper bound, we observe first that each hyperedge
$e=(\Gamma, \Delta) \in E_M$ will be checked at most
$2\cdot|\Delta|$ times by the algorithm: after $e$ has been added to
$W$ it could be tested up to $|\Delta|$ times (in the worst case,
until all of the children in~$\Delta$ have been added to the domain
of $\alpha$) and then again each time the status of one of the
children in~$\Delta$ changes.
Similarly, each hyperedge $e=(\Gamma, \Delta) \in E_P$
will be checked at most $|\Delta|+1$ times (each time when
the status of one of the children changes).
The {\mbox{\upshape\textsc{ExpTime}}}\xspace bound then follows from
the observation that (i) the hypergraph is exponential in the size of the input, (ii) for $\Gamma \in \mathsf{States}$ there is exactly one
edge $(\Gamma, \Delta) \in E_M$ and (iii) for each $\Gamma \in \mathsf{Seqs} \setminus \mathsf{States}$ the algorithm only verifies one hyperedge of the
form $(\Gamma,\Delta) \in E_P$.
\end{proof}
\section{Nominals}\label{sec:nominals}
A key feature of \emph{hybrid logic}~\cite{ArecesTenCate07} as an
extension of modal logic are \emph{nominals}, which are special atomic
predicates that are semantically restricted to hold in exactly one
state, and hence uniquely designate a state. Nominals form part of
many relational description logics (recognizable by the
letter~$\mathcal O$ in the standard naming scheme)~\cite{BaaderEA03},
where they serve to express facts involving
specific individuals -- for instance, using nominals, concepts over an
ontology of music can not only speak about the notion of composer in
general, but also concretely about Mozart and Stockhausen. We proceed
to discuss how to extend some of the above results to cover
coalgebraic hybrid logic, i.e.\ the extension of coalgebraic modal
logic with nominals in the standard sense. Specifically, we show that
the generic {\mbox{\upshape\textsc{ExpTime}}}\xspace upper bound for reasoning under global
assumptions (Theorem~\ref{thm:exptime}) remains true in presence of
nominals; we leave the design of a global caching algorithm for this
setting as an open problem (for the case where a complete set of modal
tableau rules in the sense recalled in Remark~\ref{rem:rules} is
available, we have presented such an algorithm in previous
work~\cite{GoreEA10b}).
\textbf{Syntactically}, we introduce a set $\mathsf{N}$ of \emph{nominals}
$i,j,\dots$, i.e.\ names for individual states, and work with an
extended set $\mathcal{F}(\mathsf{N},\Lambda)$ of \emph{hybrid}
formulae~$\phi,\psi$, defined by the grammar
\begin{equation*}
\mathcal{F}(\mathsf{N},\Lambda)\owns\phi,\psi::= \bot\mid \phi\land\psi\mid
\neg\phi\mid \heartsuit(\phi_1,\dots,\phi_n)\mid i\mid @_i\phi\qquad (\heartsuit\in\Lambda\text{ $n$-ary}, i\in\mathsf{N});
\end{equation*}
that is, nominals may be used as atomic formulae and within
\emph{satisfaction operators} $@_i$, with $@_i\phi$ stating that the
state denoted by $i$ satisfies $\phi$. (We explicitly do not include
local binding $\downarrow$, with formulae ${\downarrow}i.\,\phi$ read
`$\phi$ holds if~$i$ is changed to denote the present state', which
would lead to undecidability~\cite{ArecesEA99}.)
\textbf{Semantically}, we work with \emph{hybrid models} ${\mathcal{M}}=(C,\pi)$
consisting of a $T$-coalgebra $C=(X,\gamma)$ and an assignment of a
singleton set $\pi(i)\subseteq X$ to each nominal $i\in\mathsf{N}$. We
write $\models_{\mathcal{M}}$ for the satisfaction relation between states~$x$
in hybrid models~${\mathcal{M}}=(C,\pi)$ and hybrid formulae, defined by
\begin{align*}
x &\models_{\mathcal{M}} i&& \hspace{-4em}\text{iff}\quad x\in\pi(i)\\
x &\models_{\mathcal{M}} @_i\phi && \hspace{-4em}\text{iff}
\quad y\models_{\mathcal{M}}\phi\quad\text{for the unique $y\in\pi(i)$},
\end{align*}
and otherwise the same clauses as $\models_C$
(Section~\ref{sec:colog}). Similarly as for the purely modal logic, we
sometimes refer to these data just as the coalgebraic hybrid
logic~$\Lambda$.
\begin{example}
We illustrate how the presence of nominals impacts on logical
consequence.
\begin{enumerate}
\item In Presburger modal logic, the formula
\begin{equation*}
@_i(\sharp(i)>\sharp(p)),
\end{equation*}
with~$i$ a nominal and~$p$ a propositional atom, says that state~$i$
has higher transition weight to itself than to states
satisfying~$p$. One consequence of this formula is
\begin{equation*}
@_i\neg p,
\end{equation*}
since if~$i$ satisfied~$p$, then every transition from~$i$ to~$i$
would count towards $\sharp(p)$ as well, whence $\sharp(p)\ge\sharp(i)$.
\item In probabilistic modal logic, the formula
\begin{equation*}
@_i(w(j)>w(\neg j)\land w(k)\ge w(\neg k)),
\end{equation*}
with nominals $i,j,k$, says that from state~$i$, we reach state~$j$
with probability strictly greater than $1/2$, and state~$k$ with
probability at least~$1/2$. Since otherwise the singleton sets denoted by~$j$ and~$k$ would be disjoint with total probability exceeding~$1$, we conclude that $j=k$, i.e.\
\begin{equation*}
@_jk.
\end{equation*}
\end{enumerate}
\end{example}
\begin{rem}
In the presence of nominals, the equivalence of the Kripke semantics
and multigraph semantics of Presburger modal logic
(Lemma~\ref{lem:multi-vs-kripke}) breaks down: For a nominal~$i$,
the formula $\sharp(i)>1$ is satisfiable in multigraph semantics but
not in Kripke semantics. Using global assumptions, we can however
encode Kripke semantics into multigraph semantics, by extending the
global assumption~$\psi$ with additional conjuncts $\sharp(i)\le 1$
for all nominals $i$ appearing either in $\psi$ or in the target
formula~$\phi_0$. We therefore continue to use multigraph semantics
for Presburger hybrid logic.
\end{rem}
\begin{rem}\label{rem:univ-mod-hybrid}
As in the case of coalgebraic modal logic
(Remark~\ref{rem:univ-mod}), satisfiability under global assumptions
in coalgebraic hybrid logic is mutually reducible with plain
satisfiability in an extended logic featuring the universal
modality~$\mathop{[\forall]}$, with the same syntax and semantics as in
Remark~\ref{rem:univ-mod}. The non-trivial reduction (from the
universal modality to global assumptions) works slightly differently
than in the modal case, due to the fact that we cannot just take
disjoint unions of hybrid models: Like before, let
$\mathop{[\forall]}\psi_1,\dots,\mathop{[\forall]}\psi_n$ be the $\mathop{[\forall]}$-subformulae
of the target formula~$\phi$ (now in coalgebraic hybrid logic with
the universal modality), and guess a subset
$U\subseteq\{1,\dots,n\}$, inducing a map $\chi\mapsto\chi[U]$
eliminating $\mathop{[\forall]}$ from subformulae~$\chi$ of~$\phi$ as in
Remark~\ref{rem:univ-mod}. Then check that $\phi[U]$ is satisfiable
under the global assumption
\begin{equation*}
\psi_U = \bigwedge_{k\in U}\psi_k[U]\land\bigwedge_{k\in\{1,\dots,n\}\setminus U} (i_k\to\neg\psi_k[U])
\end{equation*}
where the $i_k$ are fresh nominals. It is easy to see that this
non-deterministic reduction is correct, i.e.\ that $\phi$ is
satisfiable iff $\phi[U]$ is $\psi_U$-satisfiable for some~$U$.
\end{rem}
\noindent
A consequence of Remark~\ref{rem:univ-mod-hybrid} is that for purposes
of estimating the complexity of satisfiability under global
assumptions, we can eliminate satisfaction operators: Using the
universal modality $\mathop{[\forall]}$, we can express $@_i\phi$ as
$\mathop{[\forall]}(i\to\phi)$. We will thus consider only the language without
satisfaction operators in the following. For a further reduction, we
say that the global assumption~$\psi$ is \emph{globally satisfiable}
if $\top$ is $\psi$-satisfiable, i.e.\ if there exists a non-empty
$\psi$-model. Then note that $\phi_0$ is $\psi$-satisfiable iff
$\psi\land(i\to\phi_0)$ is globally satisfiable for a fresh
nominal~$i$; so we can forget about the target formula and just
consider global satisfiability.
We proceed to adapt the type elimination algorithm of
Section~\ref{sec:type-elim} to this setting. Fix a global
assumption~$\psi$ to be checked for global satisfiability, and
let~$\Sigma$ be the closure of $\{\psi\}$.
\begin{definition}
For $i\in\mathsf{N}\cap\Sigma$ and $\Gamma\in\types{\psi}$, we say that
\emph{$i$ has type~$\Gamma$} in a hybrid model $(C,\pi)$ if
$y\models \Gamma$ for the unique $y\in\pi(i)$.
A \emph{type assignment} (for~$\Sigma$) is a map
\begin{equation*}
\beta\colon \mathsf{N}\cap\Sigma\to\types{\psi}.
\end{equation*}
We say that~$\beta$ is \emph{consistent} if for all
$i,j\in\mathsf{N}\cap\Sigma$, we have $i\in\beta(j)$ iff
$\beta(i)=\beta(j)$ (in particular, $i\in\beta(i)$ for
all~$i\in\mathsf{N}\cap\Sigma$). A hybrid model ${\mathcal{M}}$
\emph{satisfies}~$\beta$ if every $i\in\mathsf{N}\cap\Sigma$ has
type~$\beta(i)$ in~${\mathcal{M}}$;~$\beta$ is \emph{$\psi$-satisfiable} if
there exists a hybrid $\psi$-model that satisfies~$\beta$.
\end{definition}
\noindent (In description logic terminology, we may think of type
assignments as complete ABoxes.) We note the following obvious
properties:
\begin{fact}
\begin{enumerate}
\item The formula $\psi$ is globally satisfiable iff there exists a
$\psi$-satisfiable type assignment for~$\Sigma$.
\item There are at most exponentially many type
assignments for~$\Sigma$.
\item All satisfiable type assignments are consistent.
\item Consistency of a type assignment can be checked in polynomial
time.
\end{enumerate}
\end{fact}
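\noindent To illustrate item~(4), the following is a minimal Python
sketch of the consistency check (the representation of types as
frozensets of formulae and of nominals as strings is our assumption
for illustration, not fixed by the formal development):
\begin{verbatim}
def consistent(beta, nominals):
    # beta: dict mapping each nominal in N & Sigma to its type,
    # where a type is a frozenset of formulae that contains the
    # nominals it satisfies; quadratically many membership tests
    for i in nominals:
        for j in nominals:
            # i in beta(j) must hold iff beta(i) == beta(j)
            if (i in beta[j]) != (beta[i] == beta[j]):
                return False
    return True
\end{verbatim}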
\noindent To obtain an upper bound {\mbox{\upshape\textsc{ExpTime}}}\xspace for global satisfiability
of~$\psi$, it thus suffices to show that we can decide in {\mbox{\upshape\textsc{ExpTime}}}\xspace
whether a given consistent type assignment~$\beta$ is
$\psi$-satisfiable. To this end, we form the set
\begin{equation*}
\types{\beta,\psi}=\beta[\mathsf{N}\cap\Sigma]\cup \{\Gamma\in\types{\psi}\mid \Gamma\cap\mathsf{N}=\emptyset\}
\end{equation*}
of types -- that is, $\types{\beta,\psi}$ includes the assigned
types~$\beta(i)$ for all nominals~$i\in\mathsf{N}\cap\Sigma$, and moreover
all types that do not specify any nominal to be locally satisfied. To
check whether $\beta$ is $\psi$-satisfiable, we then run type
elimination on~$\types{\beta,\psi}$; that is, we compute
$\nu\mathcal{E}_\beta$ by fixpoint iteration starting from
$\types{\beta,\psi}$, where
\begin{equation*}
\begin{array}{lcll}
\mathcal{E}_\beta\colon &\mathcal{P}(\types{\beta,\psi})&\to & \mathcal{P}(\types{\beta,\psi})\\[0.3ex]
& S & \mapsto & \{\Gamma \in S \mid (\phi_\Gamma,\eta_S)\text{ is one-step satisfiable}\}
\end{array}
\end{equation*}
(in analogy to the functional~$\mathcal{E}$ according
to~\eqref{eq:elim-functional} as used in the type elimination
algorithm for the purely modal case). We answer `yes' if
$\beta[\mathsf{N}\cap\Sigma]\subseteq\nu\mathcal{E}_\beta$, i.e.\ if no
type~$\beta(i)$ is eliminated, and `no' otherwise.
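\noindent For illustration, a schematic Python sketch of this
fixpoint computation (the one-step satisfiability check is
deliberately left abstract as a callback, since its implementation
depends on the logic~$\Lambda$; the names are ours, not fixed by the
text):
\begin{verbatim}
def greatest_fixpoint(candidate_types, assigned_types, one_step_sat):
    # candidate_types: the set types(beta, psi); assigned_types: the
    # image beta[N & Sigma]; one_step_sat(Gamma, S): abstract check
    # whether (phi_Gamma, eta_S) is one-step satisfiable.
    S = set(candidate_types)
    changed = True
    while changed:                    # iterate E_beta until stable
        changed = False
        for Gamma in list(S):
            if not one_step_sat(Gamma, S):
                S.remove(Gamma)       # eliminate unsatisfiable type
                changed = True
    # answer 'yes' iff no assigned type beta(i) was eliminated
    return set(assigned_types) <= S
\end{verbatim}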
By the same analysis as in Lemma~\ref{lem:type-elim-time}, we see that
the computation of $\nu\mathcal{E}_\beta$ runs in exponential time if the
strict one-step satisfiability problem of~$\Lambda$ is in
{\mbox{\upshape\textsc{ExpTime}}}\xspace. Correctness of the algorithm is immediate from the following
fact.
\begin{lemma}
Let $\beta$ be a consistent type assignment. Then $\beta$ is
$\psi$-satisfiable iff
$\beta[\mathsf{N}\cap\Sigma]\subseteq\nu\mathcal{E}_\beta$.
\end{lemma}
\begin{proof}
Soundness (`only if') follows from
\begin{equation*}
R_\beta=\{\Gamma\in \types{\beta,\psi}\mid \Gamma\text{ satisfiable in a hybrid $\psi$-model satisfying~$\beta$}\}
\end{equation*}
being a postfixpoint of~$\mathcal{E}_\beta$; the proof is completely
analogous to that of Lemma~\ref{lem:realization}.
To see completeness (`if'), construct a $T$-coalgebra
$C=(\nu\mathcal{E}_\beta,\gamma)$ in the same way as in the proof of
Lemma~\ref{lem:ex-truth}. We turn $C$ into a hybrid model
${\mathcal{M}}=(C,\pi)$ by putting
$\pi(i)=\{\Gamma\in\nu\mathcal{E}_\beta\mid i\in\Gamma\}$, noting that
$\pi(i)$ is really the singleton $\{\beta(i)\}$ because (i) $\beta$
is consistent and no type in $\types{\beta,\psi}$ other than the
$\beta(j)$ (for $j\in\mathsf{N}\cap\Sigma$) contains a nominal positively,
and (ii) $\beta(i)\in\nu\mathcal{E}_\beta$ by assumption. The truth lemma
\begin{equation*}
\Sem{\rho}_C=\hat\rho\cap\nu\mathcal{E}_\beta=\{\Gamma\in\nu\mathcal{E}_\beta\mid\rho\in\Gamma\}
\end{equation*}
is shown by induction on~$\rho\in\Sigma$. All cases are as in the
proof of Lemma~\ref{lem:ex-truth}, except for the new case
$\rho=i\in\mathsf{N}$; this case is by construction of~$\pi$. The truth
lemma implies that~${\mathcal{M}}$ is a $\psi$-model and satisfies~$\beta$.
\end{proof}
\noindent In summary, we obtain
\begin{theorem}
If the strict one-step satisfiability problem of~$\Lambda$ is in
{\mbox{\upshape\textsc{ExpTime}}}\xspace, then satisfiability with global assumptions in the
coalgebraic hybrid logic~$\Lambda$ is {\mbox{\upshape\textsc{ExpTime}}}\xspace-complete.
\end{theorem}
\begin{rem}
The {\mbox{\upshape\textsc{ExpTime}}}\xspace algorithm described above is not, of course, one that
one would wish to use in practice. Specifically, while the
computation of $\nu\mathcal{E}_\beta$ for a given consistent type assignment
can be made practical along the lines of the global caching
algorithm for the nominal-free case discussed in
Sections~\ref{sec:caching} and~\ref{sec:concrete-alg}, the initial
reductions -- elimination of satisfaction operators and, more
importantly, going through all consistent type assignments -- will
consistently incur exponential cost. We leave the design of a more
practical algorithm for coalgebraic hybrid logic with global
assumptions for future work. In particular, adapting the global
caching algorithm described in Section~\ref{sec:caching} to this
setting remains an unsolved challenge: e.g.\ types such as
$\{i,\phi\}$ and $\{i,\neg \phi\}$, where $i$ is a nominal and
$\phi$ is any formula such that both $\phi$ and $\neg\phi$ are
satisfiable, are clearly both satisfiable but cannot both form part
of a model. The generic algorithm we presented in earlier work with
Gor\'e~\cite{GoreEA10b} solves this problem by gathering up ABoxes
along strategies in a tableau game (so that no strategy will win
that uses both types mentioned above); however, the algorithm
requires a complete set of tableau-style rules, which is not
currently available for our two main examples.
\end{rem}
\noindent We record the instantiation of the generic result to our key
examples explicitly:
\begin{example}
Reasoning with global assumptions in \emph{Presburger hybrid logic}
and in \emph{probabilistic hybrid logic with polynomial
inequalities}, i.e.\ in the extensions with nominals of the
corresponding modal logics as defined in
Sections~\ref{sec:presburger} and~\ref{sec:prob}, is in {\mbox{\upshape\textsc{ExpTime}}}\xspace.
\end{example}
\section{Conclusions}
We have proved a generic upper bound {\mbox{\upshape\textsc{ExpTime}}}\xspace for reasoning with
global assumptions in coalgebraic modal and hybrid logics, based on a
semantic approach centered around \emph{one-step satisfiability
checking}. This approach is particularly suitable for logics for
which no tractable sets of modal tableau rules are known; our core
examples of this type are Presburger modal logic and probabilistic
modal logic with polynomial inequalities. The upper complexity bounds
that we obtain for these logics by instantiating our generic results
appear to be new. The upper bound is based on a type elimination
algorithm; additionally, for the purely modal case (i.e.\ in the
absence of nominals), we have designed a global caching algorithm that
offers a perspective for efficient reasoning in practice.
In earlier work on upper bounds {\upshape\textsc{PSpace}}\xspace for plain satisfiability
checking (i.e.~reasoning in the absence of global
assumptions)~\cite{SchroderPattinson08d}, we have used the more
general setting of coalgebraic modal logic over \emph{copointed}
functors. This has allowed covering logics with frame conditions that
are non-iterative~\cite{Lewis74}, i.e.~do not nest modal operators but
possibly have top-level propositional variables, such as the $T$-axiom
$\Box a\to a$ that defines reflexive relational frames; an important
example of this type is Elgesem's logic of agency~\cite{Elgesem97}. We
leave a corresponding generalization of our present results to future
work. A further key point that remains for future research is to
extend the global caching algorithm to cover nominals and satisfaction
operators, combining the methods developed in the present paper with
ideas underlying the existing rule-based global caching algorithm for
coalgebraic hybrid logic~\cite{GoreEA10b}.
\begin{acks}
We wish to thank Erwin R.\ Catesbeiana for remarks on
unsatisfiability. Work of the third author supported by the
\grantsponsor{dfg}{DFG}{www.dfg.de} under the research grant
\grantnum{dfg}{ProbDL2 (SCHR 1118/6-2)}.
\end{acks}
\bibliographystyle{ACM-Reference-Format}
\section{Introduction\label{sec:introduction}}
In recent years, theoretical investigations have suggested that the spin of an unbound electron in free space can be inferred by a standing wave of light
\cite{dellweg_mueller_2016_interferometric_spin-polarizer,dellweg_mueller_extended_KDE_calculations,ahrens_2017_spin_filter,ahrens_2020_two_photon_bragg_scattering}. The idea for the underlying electron diffraction effect in a standing light wave goes back to a proposal from Kapitza and Dirac in 1933 \cite{kapitza_dirac_1933_proposal} and with the discovery of the laser, first observation attempts were made in the 1960s \cite{schwarz_1965_KDE_dispute_1,pfeiffer_1968_KDE_dispute_2,takeda_1968_dispute_3}, which, however, were in dispute. Renewed attempts reported the observation of the
Kapitza-Dirac effect in the 1980s for atoms in a strong interaction regime with many
diffraction orders \cite{gould_1986_atoms_diffraction_regime} and also in a weak interaction regime with isolated diffraction orders \cite{martin_1988_atoms_bragg_regime}. In the context of the Kapitza-Dirac effect, strong and weak interaction refer to a distinction between the diffraction regime (strong interaction), in which the energy-time uncertainty allows for multiple diffraction peaks, and the Bragg regime (weak interaction), where the duration of the interaction is typically sufficiently long, such that only one diffraction order is allowed \cite{batelaan_2000_KDE_first,batelaan_2007_RMP_KDE}. This one diffraction order in the Bragg regime only appears in a resonant configuration, where the diffracted particle and the absorbed and emitted laser photons need to fulfill the conservation of energy and momentum in the interpretation of a semiclassical interaction picture \cite{ahrens_bauke_2012_spin-kde,ahrens_bauke_2013_relativistic_KDE,ahrens_2012_phdthesis_KDE}. Subsequently, Kapitza-Dirac scattering of electrons was also observed in a high-intensity interaction with many diffraction orders in 1988 \cite{Bucksbaum_1988_electron_diffraction_regime}. At the beginning of this century, in 2001, a rather precise setup for electrons with only a few diffraction orders was carried out \cite{Freimund_Batelaan_2001_KDE_first}. This demonstration was followed by a refinement with only one diffraction order \cite{Freimund_Batelaan_2002_KDE_detection_PRL}, accordingly in the Bragg regime, which most closely matches the initial idea of Kapitza and Dirac.
With the experimental observation of the Kapitza-Dirac effect, the question arose whether Kapitza-Dirac scattering can also access the electron spin \cite{Batelaan_2003_MSGE}, where spin effects could not be reported for the considered scenario in reference \cite{Batelaan_2003_MSGE}, in an investigation based on classical particle trajectories. This motivated a quantum investigation of spin effects in the Kapitza-Dirac effect based on perturbative solutions of the Pauli equation in the diffraction regime, which also was not able to find pronounced spin effects \cite{rosenberg_2004_first_KDE_spin_calculation}. Ten years later, in 2012, a theoretical demonstration of significant spin effects in the Kapitza-Dirac effect was discussed within the context of a relativistic investigation of the Kapitza-Dirac effect \cite{ahrens_bauke_2012_spin-kde,ahrens_bauke_2013_relativistic_KDE}, where the change of the electron spin appears in resonant Rabi oscillations in the Bragg regime. The identification of resonant Rabi oscillations was inspired by similar resonances in the process of electron-positron pair creation in counterpropagating laser beams \cite{ruf_2009_pair_creation}\footnote{We point out that even though the description of electron-positron pair creation demands a many-particle context, the underlying formulation of pair-creation processes is related to solutions of the Dirac equation \cite{Fradkin_Gitman_Shvartsman_1991_Quantum_Electrodynamics_with_Unstable_Vacuum,woellert_2015_pair_creation_tunneling,woellert_2016_multi_pair_states,lv_bauke_2017_multi_pair_creation}. From this theoretical perspective, the difference between resonances in Bragg scattering in the Kapitza-Dirac effect and resonances in pair creation is only that pair creation is related to transitions from the negative to the positive energy continuum of the Dirac equation, whereas the electron resides at the positive energy-momentum dispersion relation for the case of the Kapitza-Dirac effect.}. Note that one expects to approach the relativistic regime of the Kapitza-Dirac effect for electron momenta and also laser photon momenta larger than $1mc$, where photon energies on the order of or larger than $1mc^2$ may also cause pair-creation processes. One may also expect relativistic effects for amplitudes of the vector potential $qA/(mc)>1$, as the electron may reach classical momenta larger than $1mc$. Nevertheless, one can show that spin dynamics are possible even in the non-relativistic regime, which can only be accounted for by relativistic corrections beyond the Pauli equation \cite{bauke_ahrens_2014_spin_precession_1,bauke_ahrens_2014_spin_precession_2}, i.e.\ beyond the first order Foldy-Wouthuysen transformations \cite{foldy_wouthuysen_1950_non-relativistic_theory,greiner_2000_relativistic_quantum_mechanics}.
With indications for the possibility of spin interaction in the Kapitza-Dirac effect, further theoretical investigations in bichromatic standing light waves with frequency ratio 2:1 were carried out by using the Pauli equation \cite{McGregor_Batelaan_2015_two_color_spin,dellweg_awwad_mueller_2016_spin-dynamics_bichromatic_laser_fields,dellweg_mueller_2016_interferometric_spin-polarizer}. We mention that the authors in \cite{McGregor_Batelaan_2015_two_color_spin} also looked at classical electron trajectories based on the BMT equations, but found only vanishingly small spin-flip probabilities in the classical treatment. Relativistic quantum calculations were also made for bichromatic setups with the frequency ratio 2:1 \cite{dellweg_mueller_extended_KDE_calculations} and for higher frequency ratios \cite{ebadati_2018_four_photon_KDE, ebadati_2019_n_photon_KDE}. The capability of spin-\emph{dependent} diffraction, in which the diffraction probability \emph{depends} on the initial electron spin state, appears as a novel property among most of the theoretical calculations of the bichromatic scenarios \cite{McGregor_Batelaan_2015_two_color_spin,dellweg_mueller_2016_interferometric_spin-polarizer,dellweg_mueller_extended_KDE_calculations,ebadati_2018_four_photon_KDE,ebadati_2019_n_photon_KDE}, where reference \cite{dellweg_mueller_2016_interferometric_spin-polarizer} demonstrates that this spin-dependent effect can also be achieved by using an interferometric setup.
Spin-dependent electron diffraction can also take place in monochromatic scenarios, in particular in two-photon interactions for low electron momenta along the laser beam propagation direction \cite{ahrens_2017_spin_filter,ahrens_2020_two_photon_bragg_scattering}. While the spin-dependent effect in reference \cite{ahrens_2017_spin_filter} emerges only after the evolution of multiple Rabi cycles, reference \cite{ahrens_2020_two_photon_bragg_scattering} facilitates this effect already in the rise of the Bragg peak of the diffracted electron, which is beneficial for a possible experimental implementation with X-ray lasers. In the context of spin manipulations in laser-electron interactions, as discussed here, we also point out that the occurrence of electron spin polarization is discussed for ultra-relativistic laser-electron interactions \cite{PhysRevLett.123.174801,PhysRevLett.125.044802,PhysRevLett.122.154801,PhysRevLett.122.214801,PhysRevA.96.043407,PhysRevA.84.062116,article}.
The computation of the quantum dynamics in the Kapitza-Dirac effect is commonly carried out by assuming a plane wave laser field in most of the theoretical descriptions, where a finite beam width and also a longitudinal polarization component from beam focusing of the laser are neglected. The question arises whether the predicted spin effects are influenced by a beam with finite width or whether such influences are indeed negligible. We pick up this question in our article and compute the quantum dynamics of the Kapitza-Dirac effect by accounting for a small longitudinal polarization component from a Gaussian beam focus in a standing wave configuration. This longitudinal component would average to zero along the beam's transverse direction, such that we implement an additional transverse momentum degree of freedom in the electron wave function for the description of the diffraction process. A decomposition of the Gaussian laser field into an approximating superposition of plane waves allows us to still solve the problem analytically, within the framework of time-dependent perturbation theory.
Our article is organized as follows. In Sec. \ref{section II} we discuss the vector potential of the Gaussian beam and apply simplifying approximations to it for later calculations. After that, we introduce the Dirac equation in Sec. \ref{section III} and use it to establish a relativistic momentum space formulation of the quantum equations of motion, which are subsequently solved by time-dependent perturbation theory. The resulting propagation equation is then evaluated numerically in Sec. \ref{section IV}, from which we deduce a scaling behavior, which depends on the photon energy and the laser beam focusing angle. Finally, we discuss the influence of the longitudinal polarization component of the Gaussian beam on the electron spin dynamics in Sec. \ref{sec:discussion_and_conclusion} and list problems and potential future improvements of our description in the outlook in Sec. \ref{sec:outlook}.
\section{Setup and the vector potential of a Gaussian beam\label{section II}}
\subsection{Geometry of the investigated Kapitza-Dirac effect\label{sec:physical_setup}}
\begin{figure}%
\includegraphics[width=0.5\textwidth]{Gaussian_beam.pdf}
\caption{Geometric setup of the electron beam and the Gaussian standing wave laser beam. The Gaussian beam with wavelength $\lambda=2 \pi/k_L$, beam focus $w_0$, Rayleigh length $x_r$ and beam divergence $2 \epsilon$ is propagating along the $x$-axis. The electron beam is mainly propagating along the $z$-axis with $k_0=m\gg k_L$, where the electron momentum along the standing light wave is getting reversed on interaction with the laser in our investigated setup of the Kapitza-Dirac effect. To account also for the longitudinal beam polarization component, we consider a transverse momentum transfer in our approach with final diffraction orders $a \in \{-2, -1, 0, 1, 2\}$. The transverse momentum change in terms of multiples of momenta $k_z$ is smaller than the longitudinal momentum change, as implied by Eq. \eqref{eq:k_z_k_L_relation} and small beam divergences $2 \epsilon$. \label{fig:Gaussian_beam}}
\end{figure}%
The considered setup of our investigation is sketched in Fig. \ref{fig:Gaussian_beam}, in which the two counterpropagating laser beams of the standing light wave are propagating along the $x$-direction. Both beams are linearly polarized and the field of the vector potential is pointing in the $z$-direction. The laser beam has the wavelength $\lambda$ with corresponding wave number $k_L=2 \pi/\lambda$, beam waist $w_0$ at its focus, and the Rayleigh length $x_R=k_L w^2_{0}/2$. The quantity
\begin{equation}
\epsilon=\frac{1}{k_L w_{0}}\,,\label{eq_epsilon_definition}
\end{equation}
as introduced in reference \cite{Quesnel_1998_gaussian_beam_coulomb_gauge} implies the ratio $w_{0}/x_R = 2 \epsilon$ and corresponds to the diffraction angle of the beam. For the momentum configuration of the electron, we follow previous investigations of such a laser setup \cite{ahrens_bauke_2013_relativistic_KDE,ahrens_2017_spin_non_conservation,ahrens_2020_two_photon_bragg_scattering}, in which spin effects occur for the transverse electron momentum $k_0=m$. Note that we are using a Gaussian unit system with $\hbar=c=1$ in this article. Also, we use the words transverse ($z$-direction) and longitudinal ($x$-direction) with respect to the laser beam, unless stated otherwise. We also assume the system to be in the Bragg regime, which occurs for low
field amplitudes, and thus justifies the use of a perturbative technique for solving the quantum propagation of the electron. As mentioned in the introduction, the electron and the absorbed and emitted photons need to obey energy- and momentum conservation in the Bragg regime \cite{batelaan_2000_KDE_first,batelaan_2007_RMP_KDE}. From kinematic considerations \cite{ahrens_2012_phdthesis_KDE,ahrens_bauke_2012_spin-kde,ahrens_bauke_2013_relativistic_KDE} we know that this is only possible for initial and final electron momenta $\pm k_L \vec e_x$ along the $x$-axis, for the case of the monochromatic standing light wave which is considered here. In order to incorporate the longitudinal component of the Gaussian beam, it will also be necessary to extend the plane wave expansion from a purely longitudinal degree of freedom for the electron momenta along the $x$-axis by adding a momentum degree of freedom along the transverse $z$-axis in multiples of the momentum $k_z$. This becomes necessary for describing the non-negligible spatial $z$-dependence of the longitudinal potential \eqref{eq:Gaussain_beam_final_longitudinal_vector_potential} with the corresponding momentum space form \eqref{eq:S_piture_interaction_longitudinal_vector_potential}. In summary, the possible set of different electron momenta, which will appear in the extended plane wave ansatz \eqref{eq:the wave function of the Dirac equation}, are
\begin{equation}%
\vec k_{n,a}=(n-1)k_{L}\vec e_{x}+(ak_{z}+k_{0})\vec e_{z}\,. \label{eq:momentum_vector}
\end{equation}%
The Bragg condition, i.e.\ the absorption and emission of one photon from each of the counterpropagating beams, implies that the electron is initially in an $n=0$ momentum state and finally in an $n=2$ momentum state. When transitioning from the $n=0$ to the $n=2$ state, a set of Kronecker deltas \eqref{D} will cause the initial transverse momentum state $a=0$ to be diffracted into a coherent superposition of momentum states $a\in\{-2,-1,0,1,2\}$ in our description. We have illustrated this form of superposition by five slightly diverging arrows, which are pointing from the origin towards the upper right in Fig. \ref{fig:Gaussian_beam}.
\subsection{Introduction of the vector potential of the Gaussian beam\label{sec:gaussian_beam_introduction}}
For the vector potential of the Gaussian beam we use a solution based on an angular spectrum representation of plane waves \cite{Quesnel_1998_gaussian_beam_coulomb_gauge}, which we write down in appendix \ref{sec:gaussian_vector_potential}, for completeness. After adjusting the solution to the desired geometry of our work, with a laser beam propagating along the $x$-axis, we obtain
\begin{subequations}%
\begin{align}%
A_{z,d}=&-A_{0}\frac{w_{0}}{w}\exp\left(-\frac{r^2}{w^2}\right)\sin\left(\phi_{G,d}\right)
\label{eq:transverse_vector_potential}
\end{align}%
for the transverse polarization component and
\begin{align}%
A_{x,d}=&-2dA_{0}\frac{w_{0}}{w}\epsilon\frac{z}{w}\exp\left(-\frac{r^2}{w^2}\right)\cos\left(\phi_{G,d}^{(1)}\right)
\label{eq:longitudinal_vector_potential}
\end{align}\label{eq:vector_field}%
\end{subequations}%
for the longitudinal polarization component of the vector potential of the Gaussian beam in Coulomb gauge. Eqs. \eqref{eq:vector_field} contain the two phases
\begin{subequations}%
\begin{align}%
\phi_{G,d}=&\omega t-dk_{L}x+\tan^{-1}\left(\frac{dx}{x_{R}}\right)-\frac{d x r^2}{x_{R}w^2}-\phi_{0,d}
\label{eq:add_equation1}\\
\phi_{G,d}^{(1)}=&\phi_{G,d}+\tan^{-1}\left(\frac{dx}{x_{R}}\right)\,.
\label{eq:add_equation2}
\end{align}\label{eq:Gaussian_beam_phase}%
\end{subequations}%
The symbol $A_0$ is the vector field amplitude and
\begin{equation}
r = \sqrt{y^2 + z^2}
\end{equation}
is the transverse distance from the beam propagation axis, where we consider $y=0$. We use the index $d$ to represent the direction of the beam, where $d \in\{-1,1\}$ corresponds to the left or right moving direction, respectively. The symbol $w$ is the $x$-dependent beam waist
\begin{equation}
w(x)=w_{0}\sqrt{1+\frac{x^2}{x_{R}^2}}\,,
\end{equation}
as illustrated in Fig. \ref{fig:Gaussian_beam}. Note that $A_x$ in Eq. \eqref{eq:longitudinal_vector_potential} is the additional longitudinal correction from beam focusing, which is of particular interest in this work. Since $A_x$ is proportional to $\epsilon$, it becomes vanishingly small for weakly focused beams, i.e.\ for arbitrarily small beam divergence.
\subsection{Application of approximations\label{sec:gaussian_beam_approximations}}
In order to carry out the perturbative calculation in section \ref{section III}, it is necessary to simplify the potentials \eqref{eq:vector_field}, such that the expressions can be solved and written down. The longitudinal potential component \eqref{eq:longitudinal_vector_potential} would vanish, when simply averaged along the transverse direction. Therefore, the pure plane-wave ansatz as in previous calculations will not be capable of representing the influence of the longitudinal beam component. Instead, we attempt the simplest extension of a plane-wave-like ansatz which is capable of accounting for the longitudinal component. For the transverse component \eqref{eq:transverse_vector_potential}, we desire the common plane wave approximation
\begin{equation}%
A_{z,d}=-A_{0}\sin\left(\phi_{G,d}\right)\,,\label{eq:simplified_transverse_plane_wave}
\end{equation}%
with the phase
\begin{equation}
\phi_{G,d}=\omega t-dk_{L}x-\phi_{0,d}\,,\label{eq:plane_wave_phase}
\end{equation}
in place of Eq. \eqref{eq:add_equation1}. We desire a similarly simple form for the longitudinal component \eqref{eq:longitudinal_vector_potential}, where now we have to pay special attention to the odd (anti-symmetric) factor $z/w$, which causes the otherwise even (symmetric) function \eqref{eq:longitudinal_vector_potential} to vanish on average along the $z$-direction. On adopting the same phase in Eq. \eqref{eq:plane_wave_phase} also for $\phi_{G,d}^{(1)}$ in Eq. \eqref{eq:add_equation2}, we see that the only $z$-dependence in Eq. \eqref{eq:longitudinal_vector_potential} is given by
\begin{equation}
\frac{z}{w}\exp\left(-\frac{z^2}{w^2}\right)\,.\label{eq:longitudinal_z_dependence}
\end{equation}
The Fourier transform, and therewith the functional form in momentum space, of Eq. \eqref{eq:longitudinal_z_dependence} is $i p_z w \exp[-(p_z w/2)^2]/\sqrt{8}$, with the conjugate $p_z$ of the $z$ variable. In the context of a simple approximation, the maximum of this purely imaginary function at $p_z>0$ and the minimum at $p_z<0$ can be represented by two spikes of delta functions with opposite signs, which constitute a sine function in position space. We display Eq. \eqref{eq:longitudinal_z_dependence} in Fig. \ref{fig:gaussian_1_node}, with the reduced $z$-coordinate $z'=z/w$.
The height of the extrema in position space is $1/\sqrt{2e}$ and with the argument $2 z'$, the approximating sine function matches Eq. \eqref{eq:longitudinal_z_dependence} over the range $-\pi/2 < z' < \pi/2$. Therefore, by imposing similar approximations as for the plane wave \eqref{eq:simplified_transverse_plane_wave} of the transverse polarization component also for the longitudinal polarization component \eqref{eq:longitudinal_vector_potential}, but also accounting for the odd $z$-dependence in Eq. \eqref{eq:longitudinal_z_dependence}, we simplify the longitudinal polarization component \eqref{eq:longitudinal_vector_potential} into
\begin{equation}
A_{x,d}=-2dA_{0}\frac{\epsilon}{\sqrt{2 e}}\cos\left(\phi_{G,d}\right)\sin(z k_z)\,,\label{eq:simplified_longitudinal_plane_wave}
\end{equation}
where we introduce the transverse momentum displacement
\begin{equation}
k_{z}=\frac{2}{w_0}\,.\label{eq_k_z_definition}
\end{equation}
We mention that the definition for $\epsilon$ in Eq. \eqref{eq_epsilon_definition} and the specification for $k_z$ in Eq. \eqref{eq_k_z_definition} imply the relation
\begin{equation}
k_z = 2 \epsilon k_L\,.\label{eq:k_z_k_L_relation}
\end{equation}
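As a quick numerical cross-check of the sine approximation (a Python sketch for illustration only; it is not needed for the derivation):
\begin{verbatim}
import numpy as np

zp = np.linspace(-np.pi / 2, np.pi / 2, 201)   # reduced coordinate z' = z/w
exact = zp * np.exp(-zp**2)                    # profile (z/w) exp(-z^2/w^2)
approx = np.sin(2 * zp) / np.sqrt(2 * np.e)    # approximating sine function
print(np.abs(exact - approx).max())            # ~0.13, at the interval ends
\end{verbatim}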
\begin{figure}%
\includegraphics[width=0.5\textwidth]{gaussian_1_node.pdf}
\caption{Illustration of the transverse beam dependence of the longitudinal polarization component \eqref{eq:longitudinal_z_dependence} (solid black line) and its approximating sine function (dashed red line). The sine function is inspired by two extrema of opposite sign at opposite locations around the origin in momentum space, as can be seen from the Fourier transform of Eq. \eqref{eq:longitudinal_z_dependence}, see main text. In position space, the sine function is chosen to match Eq. \eqref{eq:longitudinal_z_dependence} over the half period $-\pi/2 < z' < \pi/2$. The plot has been carried out over the reduced $z$-coordinate $z'=z/w$.\label{fig:gaussian_1_node}}
\end{figure}%
We also set $\phi_{0,d}=\pi$, to be consistent with the approach in reference \cite{ahrens_bauke_2013_relativistic_KDE}, finally resulting in
\begin{subequations}%
\begin{align}%
A_{z,d}&=A_{0}\sin\left(\omega t-dk_{L}x\right)\label{eq:Gaussain_beam_transverse_vector_potential}\\
A_{x,d}&=2dA_{0}\frac{\epsilon}{\sqrt{2 e}}\cos\left(\omega t-dk_{L}x\right)\sin(z k_z)\,.\label{eq:Gaussain_beam_longitudinal_vector_potential}
\end{align}\label{eq:final_plane_waves}%
\end{subequations}%
In this form, the approximated vector potential of the Gaussian beam is now suitable for conversion into a momentum space description with a manageable number of terms in Sec. \ref{sec:momentum_space_formulation} and for carrying out the perturbative calculation in Sec. \ref{sec:time_dependent_perturbation_theory}. For ease of notation in subsequent calculations, we expand the trigonometric functions in Eq. \eqref{eq:final_plane_waves}: The sine part in Eq. \eqref{eq:Gaussain_beam_transverse_vector_potential} allows us to decompose the function $A_{z,d}$ into a sum of the exponential functions \\
\begin{align}
A_{z,d,o}=&-o\frac{i}{2}A_{0}e_{}^{oi(\omega t-dk_Lx)}
\label{eq:Gaussain_beam_final_transverse_vector_potential}
\end{align}
where the index $o\in\{-1,1\}$ corresponds to either emission or absorption of a laser photon by the electron.\\
Correspondingly, the sine and cosine parts in Eq. \eqref{eq:Gaussain_beam_longitudinal_vector_potential} allow us to decompose the function $A_{x,d}$ into a sum of the exponential functions\\
\begin{align}
A_{x,d,o,f}=&-df\frac{i}{2}A_{0}\frac{\epsilon}{\sqrt{2e}} e_{}^{oi(\omega t-dk_Lx)}e_{}^{fizk_{z}},
\label{eq:Gaussain_beam_final_longitudinal_vector_potential}
\end{align}
with $o,f\in\{-1,1\}$, where $f$ corresponds to forward and backward motion of the electron along its propagation direction. We can therefore write Eqs. \eqref{eq:Gaussain_beam_transverse_vector_potential} and \eqref {eq:Gaussain_beam_longitudinal_vector_potential} as\\
\begin{subequations}%
\begin{align}%
A_{z,d}=\sum_{o}A_{z,d,o}\\
A_{x,d}=\sum_{o,f}A_{x,d,o,f}.
\end{align}\label{eq:potential_plane_wave_expansion}%
\end{subequations}%
\section{Theoretical description\label{section III}}
The approximated potentials \eqref{eq:Gaussain_beam_final_transverse_vector_potential} through \eqref{eq:potential_plane_wave_expansion} consist of plane waves, which will turn into Kronecker deltas when transforming them into the momentum space formulation \eqref{eq:S_piture_vector_potential}. This implies that only the subset of expansion coefficients $c_{n,a}^{\gamma,\sigma}(t)$ of the wave function's plane wave expansion \eqref{eq:the wave function of the Dirac equation} with the already introduced discrete momenta \eqref{eq:momentum_vector} are coupled to each other. Such a set of coefficients is suitable for applying the time-dependent perturbation theory calculation of section \ref{sec:time_dependent_perturbation_theory}, such that the result can be written down in a compact form. In order to see the emergence of the discrete Kronecker deltas in momentum space, one first needs to introduce the relativistic quantum description on which the calculation is based. We therefore introduce the Dirac equation in the following section.
\subsection{The Dirac equation}
In quantum mechanics, the time-evolution of an electron with mass $m$ and charge $q=-e$ is governed by\\
\begin{align}
i\frac{\partial \Psi(x)}{\partial t}=H \Psi(x),
\label{eq:derivation function}
\end{align}
where we aim at a relativistic quantum description with the Hamiltonian of the Dirac equation
\begin{align}
H=\vec\alpha\cdot\left(\vec p-q \vec A\right)+qA_{}^{0}+\beta m\,.
\label{eq:the Dirac equation}
\end{align}
Here, we have introduced the $4\times 4$ Dirac matrices\\
\begin{align}%
\alpha_{i}=\begin{pmatrix}0&\sigma_{i}\\ \sigma_{i}&0\end{pmatrix},\beta=\begin{pmatrix}\mathds{1}&0\\0&-\mathds{1}\end{pmatrix}
\end{align}%
which contain the Pauli matrices\\
\begin{align}
\sigma_{x}=\begin{pmatrix}0&1\\1&0\end{pmatrix},\sigma_{y}=\begin{pmatrix}0&-i\\i&0\end{pmatrix},\sigma_{z}=\begin{pmatrix}1&0\\0&-1\end{pmatrix}
\label{eq:Pauli martrices}
\end{align}
\\
and the $2\times 2$ identity\\
\begin{align}
\mathds{1}=\begin{pmatrix}1&0\\0&1\end{pmatrix}.
\end{align}
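As a quick numerical sanity check (a numpy sketch with our own variable names; not part of the formal development), one may verify that these matrices satisfy the Clifford algebra relations $\alpha_i\alpha_j+\alpha_j\alpha_i=2\delta_{ij}\mathds{1}$, $\alpha_i\beta+\beta\alpha_i=0$ and $\beta^2=\mathds{1}$:
\begin{verbatim}
import numpy as np

s = [np.array([[0, 1], [1, 0]]),            # sigma_x
     np.array([[0, -1j], [1j, 0]]),         # sigma_y
     np.array([[1, 0], [0, -1]])]           # sigma_z
Z = np.zeros((2, 2))
alpha = [np.block([[Z, si], [si, Z]]) for si in s]
beta = np.block([[np.eye(2), Z], [Z, -np.eye(2)]])

for i in range(3):
    assert np.allclose(alpha[i] @ beta + beta @ alpha[i], 0)
    for j in range(3):
        anti = alpha[i] @ alpha[j] + alpha[j] @ alpha[i]
        assert np.allclose(anti, 2 * (i == j) * np.eye(4))
assert np.allclose(beta @ beta, np.eye(4))  # all relations hold
\end{verbatim}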
\subsection{Momentum space formulation of the relativistic quantum theory\label{sec:momentum_space_formulation}}
The wave function $\Psi$ of the electron can be decomposed into a set of momentum and energy eigenfunctions
\begin{align}%
\psi_{n,a}^{\gamma,\sigma}(\vec x)= &\sqrt{\frac{2\pi}{k_L}}\sqrt{\frac{2\pi}{k_z}}u_{\vec k_{n,a}}^{\gamma,\sigma}e_{}^{i\vec x \cdot \vec k_{n,a}},
\label{eq:the wave function}
\end{align}%
with the bi-spinors $u_{\vec k_{n,a}}^{\gamma,\sigma}$ defined as
\begin{subequations}%
\begin{align}%
u_{\vec k}^{+,\sigma}=\sqrt{\frac{E_{\vec k}+m}{2m}}
\begin{pmatrix}
\chi^\sigma\\ \frac{\vec \sigma \cdot \vec k}{E_{\vec k}+m}\chi^\sigma
\end{pmatrix}
\label{eq:the matrice 1}\\
u_{\vec k}^{-,\sigma}=\sqrt{\frac{E_{\vec k}+m}{2m}}
\begin{pmatrix}
- \frac{\vec \sigma \cdot \vec k}{E_{\vec k}+m}\chi^\sigma\\ \chi^\sigma
\end{pmatrix}.
\label{eq:the matrice 2}
\end{align}\label{eq:the matrice}%
\end{subequations}%
In Eqs. \eqref{eq:the wave function} and \eqref{eq:the matrice} the index $\gamma\in\{+,-\}$ denotes whether the electron is in a positive or negative energy eigenstate and the index $\sigma\in\{0,1\}$ denotes whether the electron is in a spin up $(0)$ or spin down $(1)$ state. The $n\in \mathbb{Z}$ index denotes the longitudinal momentum $(n-1)k_L$ of the electron beam in terms of laser photon momenta. The index $a$ denotes the transverse momentum $ak_z$ which is transferred from the transverse variation of the Gaussian beam's longitudinal component to the electron.
Correspondingly, in Eq. \eqref{eq:the wave function} we are using the electron momentum \eqref{eq:momentum_vector}, resulting in the phase
\begin{equation}%
\vec x \cdot \vec k_{n,a}=(n-1)k_{L}x+(ak_{z}+k_{0})z \label{eq:the phase of the electron plane wave solution}
\end{equation}%
of the electron plane wave solution. The expression $E_{\vec{k}}$ is the relativistic energy-momentum relation
\begin{subequations}%
\begin{equation}%
E_{\vec{k}}=\sqrt{m_{}^{2}+k_{}^{2}}\,,
\end{equation}%
where we write%
\begin{equation}%
E_{n,a} = \sqrt{m^2 + \vec k_{n,a}^2}
\end{equation}\label{eq:relativistic_energy_momentum_relation}%
\end{subequations}%
in place of $E_{\vec k}$ when using the discrete momenta $\vec k_{n,a}$. The variable $k_{0}$ parameterizes an initial transverse momentum of the electron along the $z$-axis.
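As a cross-check of the definitions \eqref{eq:the matrice} (a self-contained numpy sketch; the values $m=1$ and the test momentum are assumed example numbers, not part of the derivation), the bi-spinors are eigenvectors of the free Dirac Hamiltonian $\vec\alpha\cdot\vec k+\beta m$ with eigenvalue $\gamma E_{\vec k}$ and normalization $u^\dagger u=E_{\vec k}/m$:
\begin{verbatim}
import numpy as np

s = [np.array([[0, 1], [1, 0]]), np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]])]                    # Pauli matrices
Z = np.zeros((2, 2))
alpha = [np.block([[Z, si], [si, Z]]) for si in s]   # Dirac alpha_i
beta = np.block([[np.eye(2), Z], [Z, -np.eye(2)]])   # Dirac beta

m, k = 1.0, np.array([0.3, 0.0, 1.0])                # test momentum
E = np.sqrt(m**2 + k @ k)
sk = sum(ki * si for ki, si in zip(k, s))            # sigma . k
H0 = sum(ki * ai for ki, ai in zip(k, alpha)) + m * beta
for gamma in (+1, -1):
    for sigma in (0, 1):
        chi = np.eye(2)[:, sigma]
        top = chi if gamma > 0 else -sk @ chi / (E + m)
        bot = sk @ chi / (E + m) if gamma > 0 else chi
        u = np.sqrt((E + m) / (2 * m)) * np.concatenate([top, bot])
        assert np.allclose(H0 @ u, gamma * E * u)    # eigenvector
        assert np.isclose(u.conj() @ u, E / m)       # normalization
\end{verbatim}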
With Eqs. \eqref{eq:the wave function} through \eqref{eq:relativistic_energy_momentum_relation}, we can write the wave function of the Dirac equation in momentum space as
\begin{align}%
\Psi(\vec x,t)=\sum_{\gamma,n,\sigma,a}c_{n,a}^{\gamma,\sigma}(t)\psi_{n,a}^{\gamma,\sigma}(\vec x).
\label{eq:the wave function of the Dirac equation}
\end{align}%
From this wave function expansion we denote the time-propagation of the initial expansion coefficients $c_{n,a}^{\gamma,\sigma}(t_0)$ into the final expansion coefficients $c_{n',a'}^{\gamma',\sigma'}(t)$ for the plane-wave eigensolutions of the Dirac equation by\\
\begin{align}
c_{n',a'}^{\gamma',\sigma'}(t)=\sum_{\gamma,\sigma;n,a}U_{n',a';n,a}^{\gamma',\sigma';\gamma,\sigma}(t,t_0)c_{n,a}^{\gamma,\sigma}(t_0).
\label{eq:the possibility}
\end{align}
The approach in Eqs. \eqref{eq:the wave function} through \eqref{eq:the possibility} extends similar formulations of the Dirac equation in momentum space \cite{ahrens_bauke_2012_spin-kde,ahrens_bauke_2013_relativistic_KDE,bauke_ahrens_2014_spin_precession_1,bauke_ahrens_2014_spin_precession_2,ahrens_2017_spin_filter,ahrens_2020_two_photon_bragg_scattering} by also introducing a transverse degree of freedom for the momentum of the electron wave function.
For the description of the quantum system by time-dependent perturbation theory we need a momentum space formulation of the interaction potentials. For this we use the Dirac bra-ket notation
\begin{equation}
\braket{\phi_a|Q|\phi_b} = \int d^3 x \, \phi_a^\dagger(\vec x) Q(\vec x) \phi_b(\vec x)\label{eq:dirac_bracket}
\end{equation}
of the matrix element $\braket{\phi_a|Q|\phi_b}$ for the operator $Q$. Based on this notation, we substitute the momentum eigenfunctions \eqref{eq:the wave function} into the two quantum states $\ket{\phi_a}$ and $\ket{\phi_b}$ and obtain the matrix elements
\begin{subequations}%
\begin{align}%
&V_{S;z,d,o,n',a';n,a\phantom{,f}}^{\gamma',\sigma' ;\gamma,\sigma}=\Braket{\psi_{n',a'}^{\gamma',\sigma'}|-qA_{z,d,o}\alpha_{3}|\psi_{n,a}^{\gamma,\sigma}}\nonumber\\
&=\frac{q}{2}oiA_{0}e_{}^{oi\omega t} L_{n',a';n,a,3}^{\gamma',\sigma' ;\gamma,\sigma}\delta_{a',a}\delta_{n',n-do} \label{eq:S_piture_interaction_transverse_vector_potential}\\
&V_{S;x,d,o,f,n',a';n,a}^{\gamma',\sigma' ;\gamma,\sigma}=\Braket{\psi_{n',a'}^{\gamma',\sigma'}|-qA_{x,d,o,f}\alpha_{1}|\psi_{n,a}^{\gamma,\sigma}}\nonumber\\
&=\frac{q}{2}dfiA_{0}e_{}^{oi\omega t}\frac{\epsilon}{\sqrt{2e}} L_{n',a';n,a,1}^{\gamma',\sigma' ;\gamma,\sigma}\delta_{a',a+f}\delta_{n',n-do}
\label{eq:S_piture_interaction_longitudinal_vector_potential}
\end{align}\label{eq:S_piture_vector_potential}%
\end{subequations}%
for the potentials $-qA_{z,d,o}\alpha_{3}$ and $-qA_{x,d,o,f}\alpha_{1}$, which include the expressions \eqref{eq:Gaussain_beam_final_transverse_vector_potential} and \eqref{eq:Gaussain_beam_final_longitudinal_vector_potential}. In Eqs. \eqref{eq:S_piture_vector_potential} we have introduced the abbreviation
\begin{align}\label{eq:S_piture_interaction_vector_potential}
L_{n',a';n,a;b}^{\gamma',\sigma' ;\gamma,\sigma}=\left(u_{\vec k_{n',a'}}^{\gamma',\sigma'}\right)_{}^{\dagger}\alpha_{b}\left(u_{\vec k_{n,a}}^{\gamma,\sigma}\right)\,.
\end{align}
The result in the second lines of Eqs. \eqref{eq:S_piture_vector_potential} is obtained by carrying out the space integration $\int d^3 x$ from the matrix element expression \eqref{eq:dirac_bracket}. For this integration, we collect all space-dependent terms, which are the exponentials
\begin{subequations}%
\begin{equation}%
\exp\left\{-i\left[\left(n'-n+do\right)k_L x + \left(a'-a\right)k_z z\right]\right\}\label{eq:transverse_phase}
\end{equation}%
for Eq. \eqref{eq:S_piture_interaction_transverse_vector_potential} and
\begin{equation}%
\exp\left\{-i\left[\left(n'-n+do\right)k_L x + \left(a'-a-f\right)k_z z\right]\right\}\label{eq:longitudinal_phase}
\end{equation}\label{eq:potential_phases}%
\end{subequations}%
for Eq. \eqref{eq:S_piture_interaction_longitudinal_vector_potential}. When carrying out the three dimensional integration, the phases collapse into the Kronecker deltas $\delta_{a',a}\delta_{n',n-do}$ (in Eq. \eqref{eq:S_piture_interaction_transverse_vector_potential}, originating from Eq. \eqref{eq:transverse_phase}) and $\delta_{a',a+f}\delta_{n',n-do}$ (in Eq. \eqref{eq:S_piture_interaction_longitudinal_vector_potential}, originating from Eq. \eqref{eq:longitudinal_phase}).
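For transparency, we spell out the underlying orthogonality relation of the plane waves over one period $2\pi/k_L$ (a worked step; the analogous relation for the $z$-integration over one period $2\pi/k_z$, with the factor $a'-a$, respectively $a'-a-f$, yields the transverse Kronecker deltas):
\begin{equation*}
\frac{k_L}{2\pi}\int_0^{2\pi/k_L} e^{-i(n'-n+do)k_L x}\, dx = \delta_{n',n-do}\,.
\end{equation*}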
One can similarly obtain the momentum space formulation of the Dirac equation
\begin{align}
i\frac{\partial}{\partial t}c=Ec+\sum V_{S,z}c +\sum V_{S,x}c
\label{eq:the sum possibility}
\end{align}
by projecting the adjoint plane wave solutions \eqref{eq:the wave function} from the left on the time-evolution equation \eqref{eq:derivation function}, as done already in references \cite{ahrens_bauke_2012_spin-kde,ahrens_bauke_2013_relativistic_KDE,ahrens_2017_spin_filter,ahrens_2020_two_photon_bragg_scattering}. Note that in Eq. \eqref{eq:the sum possibility} we have omitted the indices and time-dependence for the expansion coefficients $c^{\gamma,\sigma}_{n,a}(t)$ and the potentials \eqref{eq:S_piture_vector_potential} in favor of a compact notation. Still, the sums in Eq. \eqref{eq:the sum possibility} run over the unprimed indices, as they appear in Eqs. \eqref{eq:S_piture_vector_potential}. The expansion coefficients on the left-hand side and the first term on the right-hand side of Eq. \eqref{eq:the sum possibility} have primed indices, i.e.\ $c^{\gamma',\sigma'}_{n',a'}(t)$. The symbol $E$ denotes the relativistic energy-momentum relation \eqref{eq:relativistic_energy_momentum_relation}, which can be positive and negative, corresponding to the expansion coefficients $c^{\gamma',\sigma'}_{n',a'}$ of the positive and negative energy eigensolutions. We will make use of the shortened notation in Eq. \eqref{eq:the sum possibility} with omitted indices and omitted time-dependence also in subsequent expressions of similar complexity in the remaining text of this article.
\subsection{Time-dependent perturbation theory\label{sec:time_dependent_perturbation_theory}}
In order to calculate the time-evolution of the quantum state, we are making use of second order time-dependent perturbation theory \cite{sakurai2014modern}\\
\begin{align}
U(t,t_{0})=(-i)^2\int_{t_{0}}^{t}dt_{2}\int_{t_{0}}^{t_{2}}dt_{1}V_{I}(t_{2})V_{I}(t_{1})\,,
\label{eq:time pertubation theory of V}
\end{align}
where we follow the convention to carry out our calculation in the interaction picture, with operators related by\\
\begin{align}
V_{I}=e_{}^{iH_{0}t}V_{S}e_{}^{-iH_{0}t}\,.
\label{picture change_a}
\end{align}
Here, $V_S$ and $V_I$ are the operators in the Schr\"odinger and interaction picture, respectively. With the matrix elements $\gamma E_{\vec k_{n,a}}$ of the free Hamiltonian $H_0$ in momentum space, relation \eqref{picture change_a} becomes
\begin{align}
V_{I;n',a';n,a}^{\gamma',\sigma';\gamma,\sigma}=V_{S;n',a';n,a}^{\gamma',\sigma' ;\gamma,\sigma}e_{}^{i(\gamma' E_{n',a'}-\gamma E_{n,a})t}
\label{picture change_b}
\end{align}
in explicit index notation. By inserting the potentials \eqref{eq:S_piture_vector_potential} of the interaction picture \eqref{picture change_b} into the perturbation expression \eqref{eq:time pertubation theory of V}, we obtain
\begin{align}
U(t,t_{0})=-\sum\Gamma\frac{q^2A_{0}^2}{4}D\Lambda\int_{t_{0}}^{t}dt_{2}\int_{t_{0}}^{t_{2}}dt_{1}\Upsilon
\label{eq:time perturbation theory}\,.
\end{align}
In Eq. \eqref{eq:time perturbation theory} the expression $D$ is a collection of Kronecker deltas
\begin{subequations}%
\begin{align}%
D_{z,z}=&\delta_{n'',n'-d'o'}\delta_{n',n-do}\delta_{a'',a'}\delta_{a',a}\\
D_{x,z}=&\delta_{n'',n'-d'o'}\delta_{n',n-do}\delta_{a'',a'+f'}\delta_{a',a}\\
D_{z,x}=&\delta_{n'',n'-d'o'}\delta_{n',n-do}\delta_{a'',a'}\delta_{a',a+f}\\
D_{x,x}=&\delta_{n'',n'-d'o'}\delta_{n',n-do}\delta_{a'',a'+f'}\delta_{a',a+f}\,,
\end{align}\label{D}%
\end{subequations}%
which originates from the potentials \eqref{eq:S_piture_vector_potential}. The expression $\Lambda$ is the corresponding collection of the spin-dependent terms
\begin{align}
\Lambda_{n'',a'';n',a';n,a;r,t}^{\gamma'',\sigma'';\gamma',\sigma';\gamma,\sigma}=L_{n'',a'';n',a';r}^{\gamma'',\sigma'';\gamma',\sigma'}L_{n',a';n,a;t}^{\gamma',\sigma';\gamma,\sigma}\,.
\label{the collection of spin-dependent terms}
\end{align}
All time-dependent expressions have been absorbed in the time-dependent phase
\begin{multline}%
\Upsilon_{n'',a'';n',a',n,a}^{\gamma'',\gamma',\gamma;o',o}(t_{2},t_{1})= \\ e_{}^{i(\gamma ''E_{n'',a''}-\gamma'E_{n',a'}+o'\omega)t_{2}} \\ \times e_{}^{i(\gamma 'E_{n',a'}-\gamma E_{n,a}+o\omega)t_{1}}\label{the time-dependent phase from the interaction}%
\end{multline}%
behind the final, double-time integral $\int_{t_{0}}^{t}dt_{2}\int_{t_{0}}^{t_{2}}dt_{1}$. All other prefactors, which cannot be summarized in a simple way, are combined in the prefactor $\Gamma$ as
\begin{subequations}%
\begin{align}%
\Gamma_{z,z}&=oo'\\
\Gamma_{x,z}&=d'f'o\mathcal{E}\\
\Gamma_{z,x}&=dfo'\mathcal{E}\\
\Gamma_{x,x}&=d'df'f\mathcal{E}^2\,,
\end{align}\label{prefactor}%
\end{subequations}%
where the calligraphically written $\mathcal{E}$ is an abbreviation for the scaled diffraction angle $\epsilon$
\begin{align}
\mathcal{E}=\frac{\epsilon}{\sqrt{2e}}\,.\label{eq:capital_epsilon}
\end{align}
The index pairs $\{(z,z);(x,z);(z,x);(x,x)\}$, which we have attached to $D$ and $\Gamma$, indicate whether $V_z$ (Eq. \eqref{eq:S_piture_interaction_transverse_vector_potential}) or $V_x$ (Eq. \eqref{eq:S_piture_interaction_longitudinal_vector_potential}) has been used for the potential $V_I(t_2)$ in Eq. \eqref{eq:time pertubation theory of V} (first index, $t_2$) and whether $V_z$ or $V_x$ has been used for $V_I(t_1)$ (second index, $t_1$).\\
Note that in Eq. \eqref{eq:time perturbation theory} we have omitted the indices for the expansion coefficients $U_{n'',a'';n,a}^{\gamma'',\sigma'';\gamma,\sigma}$ in a similar way as we did for Eq. \eqref{eq:the sum possibility}. Correspondingly, the sum in Eq. \eqref{eq:time perturbation theory} runs over the indices $\gamma'$, $\sigma'$, $n'$ and $a'$ as part of the matrix product between the potentials \eqref{eq:S_piture_vector_potential}. Additionally, the sum also runs over the possible configurations $o$, $o'$, $d$ and $d'$.
\subsection{The resonance condition in the Bragg regime of the Kapitza-Dirac effect}
We proceed with the computation of the perturbative expression \eqref{eq:time perturbation theory} by solving the double time integral $\int_{t_{0}}^{t}dt_{2}\int_{t_{0}}^{t_{2}}dt_{1}\Upsilon$. The integral $\int_{t_{0}}^{t_{2}}dt_{1}$ over the $t_1$-dependent exponential in \eqref{the time-dependent phase from the interaction} results in
\begin{multline}
\int_{t_{0}}^{t_{2}}dt_{1} e_{}^{i(\gamma' E_{n',a'}-\gamma E_{n,a}+o\omega)t_{1}}\\
= i F \left. e_{}^{i(\gamma' E_{n',a'}-\gamma E_{n,a}+o\omega)t_{1}} \right|_{t_0}^{t_2}\,,\label{eq:t_1_integration}
\end{multline}
where we have introduced the abbreviation
\begin{align}
F=(\gamma E_{n,a}-\gamma' E_{n',a'}-o\omega)_{}^{-1}\,.
\label{F}
\end{align}
For the upper integration limit $t_2$ in Eq. \eqref{eq:t_1_integration} we obtain
\begin{equation}
i F \int_{t_{0}}^{t}dt_{2} e_{}^{i(\gamma ''E_{n'',a''}-\gamma E_{n,a}+o'\omega+o\omega)t_{2}}\label{eq:t2_integral}
\end{equation}
in the double integral $\int_{t_{0}}^{t}dt_{2}\int_{t_{0}}^{t_{2}}dt_{1}\Upsilon$. The argument in the exponent, which we abbreviate by
\begin{equation}
\Delta E = \gamma ''E_{n'',a''}-\gamma E_{n,a}+o'\omega+o\omega\,,\label{eq:delta_E}
\end{equation}
corresponds to the net energy transfer between the electron and the two interacting laser photons. With Eq. \eqref{eq:delta_E} the solution of \eqref{eq:t2_integral} can be written as
\begin{subequations}%
\begin{align}%
&i F \int_{t_{0}}^{t}dt_{2} e_{}^{i \Delta E t_{2}} = \frac{F}{\Delta E} \left( e^{i \Delta E t} - e^{i \Delta E t_0 } \right)\label{eq:expanded_phase_integral_oscillating}\\
&\qquad = i F \sum_{g=0}^\infty \frac{(i \Delta E)^g}{(g+1)!} \left(t^{g+1} - t_0^{g+1}\right)\\
&\qquad = i F \left[ t-t_0 + \frac{i \Delta E}{2} \left(t^2 - t_0^2\right) + \dots \right]\,.\label{eq:expanded_phase_integral_explicitly}
\end{align}\label{eq:expanded_phase_integral}%
\end{subequations}%
As explained in the introduction, Kapitza-Dirac scattering takes place for
\begin{equation}
n=0\,, \qquad n''=2 \,,\label{eq:x_momentum_constraint}
\end{equation}
for the positive particle solutions
\begin{equation}
\gamma=\gamma^{\prime\prime}=+1\,,
\end{equation}
with one absorbed and one emitted photon, corresponding to
\begin{equation}
o=-o'\in \{-1,1\}\,,\label{eq:photon_absorption_emission}
\end{equation}
see references \cite{ahrens_2012_phdthesis_KDE,ahrens_bauke_2012_spin-kde,ahrens_bauke_2013_relativistic_KDE} for details. In the case of no transverse momentum transfer, i.e.\ $a=a^{\prime\prime}=0$, one sees that $\Delta E$ in Eq. \eqref{eq:delta_E} vanishes, such that the solution \eqref{eq:expanded_phase_integral_explicitly} for the contribution of the upper limit of the $t_1$ integration to the double time integral of Eq. \eqref{eq:time perturbation theory} simplifies to
\begin{align}
\int_{t_{0}}^{t}dt_{2}\int^{t_{2}}dt_{1}\Upsilon(t_2,t_1)= i F(t-t_0)\,.
\label{double-time integral}
\end{align}
In this resonant situation, in which the phase of the incoming and outgoing mode of the electron wave function are in resonance with the phase oscillations of the interacting photons, the amplitude of the diffracted mode can grow unboundedly in time. In the case of a perfect resonant situation with $\Delta E = 0$, this growth would be unbound and only constrained by the unitary property of the Dirac equation, where this unitary property in turn would manifest itself only in higher order perturbative contributions (as an example see the calculation of the Kapitza-Dirac effect based on the Schr\"odinger equation in reference \cite{gush_gush_1971_higher_order_kapitza_dirac_scattering}). One can see in Eq. \eqref{eq:expanded_phase_integral} that the resonant mode, which grows linearly with the interaction time $\Delta t = t - t_0$, can outgrow the oscillating solution \eqref{eq:expanded_phase_integral_oscillating} when the product $\Delta E \Delta t$ is approximately smaller than one. This recovers the energy-time uncertainty condition, which is used to distinguish between the diffraction regime ($\Delta E \Delta t$ larger than one) and the Bragg regime ($\Delta E \Delta t$ smaller than one), according to Batelaan \cite{batelaan_2000_KDE_first,batelaan_2007_RMP_KDE} (see also the discussion in the introductory section \ref{sec:introduction}). Corresponding resonance peaks of the diffraction amplitude which illustrate this energy-time uncertainty are shown, for example, in references \cite{ahrens_2012_phdthesis_KDE,ahrens_bauke_2013_relativistic_KDE,dellweg_awwad_mueller_2016_spin-dynamics_bichromatic_laser_fields}.
For our investigation we will assume the dynamics to be on resonance, with $\Delta E=0$ in \eqref{eq:delta_E}, as the integration result \eqref{double-time integral} will outgrow all other oscillatory contributions in the integral $\int_{t_{0}}^{t}dt_{2}\int_{t_{0}}^{t_{2}}dt_{1}\Upsilon$. Also, since according to Eq. \eqref{eq:k_z_k_L_relation} the transverse momentum transfer $k_z$ is smaller than the longitudinal momentum transfer $k_L$ by the factor $2 \epsilon$, where $\epsilon$ is usually much smaller than one, we also assume that diffraction into the final states with $a^{\prime\prime} \in \{-2, -1, 0, 1, 2\}$ is on resonance, i.e.\ $E_{n^{\prime\prime},a^{\prime\prime}}$ is very close to $E_{n,a}$. We thus assume to obtain the result \eqref{double-time integral} independently of the value of $a^{\prime\prime}$.
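To quantify this near-resonance of the transverse diffraction orders (a Python sketch with assumed example values $m=1$, $k_L=0.1\,m$, $k_0=m$ and $\epsilon=0.05$; the numbers are for illustration only):
\begin{verbatim}
import numpy as np

m, kL, k0, eps = 1.0, 0.1, 1.0, 0.05     # example values, hbar = c = 1
kz = 2 * eps * kL                        # transverse momentum transfer

def E(n, a):                             # relativistic energy E_{n,a}
    return np.sqrt(m**2 + ((n - 1) * kL)**2 + (a * kz + k0)**2)

for a in (-2, -1, 0, 1, 2):              # final diffraction orders a''
    print(a, E(2, a) - E(0, 0))          # residual mismatch, zero at a=0
\end{verbatim}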
We also point out that the absolute value of the momentum transfer $k_L \vec e_x \pm k_z \vec e_z$ which is implied by the longitudinal potential \eqref{eq:S_piture_interaction_longitudinal_vector_potential} is larger than the absolute value of the corresponding momentum transfer $k_L \vec e_x$ of the transverse potential \eqref{eq:S_piture_interaction_transverse_vector_potential}, as a result of the approximation in section \ref{sec:gaussian_beam_approximations}. We are therefore using the vacuum dispersion relation of light
\begin{subequations}%
\begin{equation}%
\omega_{n,a;n',a'} = \left| \vec k_{n,a} - \vec k_{n',a'} \right|\label{eq:photon_vacuum_dispersion}
\end{equation}%
for the prefactor in Eq. \eqref{F}, which we explicitly write as
\begin{align}%
F=(\gamma E_{n,a}-\gamma' E_{n',a'} - o \, \omega_{n,a;n',a'})^{-1}\,.\label{eq:F_improved}
\end{align}\label{eq:F_with_vacuum_dispersion}%
\end{subequations}%
Using only a constant dispersion $\omega = k_L$ would result in situations in which the bracket on the right-hand side of \eqref{eq:F_improved} goes to zero and causes the diffraction amplitude to diverge. Such a divergence only takes place for unphysically large beam divergence angles, at which $k_z \gtrapprox k_L$. For this reason we consider the implementation of the vacuum dispersion \eqref{eq:F_with_vacuum_dispersion} as appropriate.
We can finally write the expression for the perturbative calculation as
\begin{align}
U(t,t_{0})=- i\sum\Gamma F\frac{q^2A_{0}^2}{4}\Lambda(t-t_{0})\,,
\label{eq:solution_perturbation_theory}
\end{align}
where we have substituted the solution of the double time integral \eqref{double-time integral} with the prefactor \eqref{eq:F_with_vacuum_dispersion} from the integration into the intermediate perturbative expression \eqref{eq:time perturbation theory}.
\subsection{Momentum conservation}
In order to complete the perturbative calculation, the electron momenta in Eq. \eqref{eq:solution_perturbation_theory} still need to be specified. For that, we make use of momentum conservation, which is implied by the Kronecker deltas in Eq. \eqref{D}. The $x$-dependent Kronecker deltas, which depend on $n$, $n'$ and $n''$, imply the conditions
\begin{subequations}%
\begin{align}%
n' &=n - do\\
n''&=n' - d'o'\,.
\end{align}\label{eq:x_momentum_conservation_separate}%
\end{subequations}%
To resolve this, we refer back to the initial and final $x$-momentum constraints \eqref{eq:x_momentum_constraint} and the photon absorption and emission condition \eqref{eq:photon_absorption_emission}. We first note that reaching from $n=0$ to $n''=2$ is only possible for $n'=1$. Secondly, combining the two conditions in \eqref{eq:x_momentum_conservation_separate} results in
\begin{equation}
n'' = n - d' o' - d o\,.\label{eq:x_momentum_conservation}
\end{equation}
For the defined range of the parameters $o, o', d, d' \in \{-1,1\}$, the conditions \eqref{eq:x_momentum_constraint}, \eqref{eq:photon_absorption_emission} and \eqref{eq:x_momentum_conservation} impose the condition
\begin{equation}
d = - o = o' = -d'\,. \label{od_index_fix}
\end{equation}
For the transverse direction ($z$-direction) we require the electron to move with momentum $k_0$, corresponding to $a=0$, as implied by the ansatz for the electron momentum in Eq. \eqref{eq:momentum_vector}. The $z$-dependent Kronecker deltas in Eq. \eqref{D}, which depend on $a$, $a'$ and $a''$, imply the conditions
\begin{subequations}%
\begin{align}%
a' &=a + f\\
a''&=a' + f'\,,
\end{align}\label{eq:transverse_momentum_conservation}%
\end{subequations}%
where occurrences of $\delta_{a',a}$ and $\delta_{a'',a'}$ can be accounted for, in the form \eqref{eq:transverse_momentum_conservation}, by setting $f=0$ and $f'=0$, respectively. With Eqs. \eqref{eq:transverse_momentum_conservation} we can determine the $\Gamma$ factors in Eq. \eqref{prefactor} for different values of $a'$ and $a''$, if we additionally make use of the implications $oo'=dd'=-1$ and $od'=do'=1$ from Eq. \eqref{od_index_fix}. All possible combinations for $\Gamma$ are listed in table \ref{tabel I}.
\begin{table}
\caption{
Specific values of the polarization dependent $\Gamma$ prefactor \eqref{prefactor}, as it appears in the perturbative expression \eqref{eq:solution_perturbation_theory}. The electron quantum state propagation happens on different quantum trajectories in momentum space, where the electron can be subject to different polarization components when interacting with the laser (see main text below Eq. \eqref{eq:capital_epsilon}). The index pair $(z,z)$ scales with zero powers of $\mathcal{E}$, the index pairs $(x,z)$ and $(z,x)$ scale with one power of $\mathcal{E}$ and $(x,x)$ scales with two powers of $\mathcal{E}$. We have separated the different powers of $\mathcal{E}$ with double lines. We have also separated different diffraction orders $a''$ of the electron's final momentum $a'' k_z + k_0$ along the transverse laser direction by additional, single lines.\label{tabel I}}
\begin{tabular}{ r r l }
$a''$ & $a'$ & $\Gamma$\\
\hline \hline
0 & 0 & $\Gamma_{z,z}=-1$ \\
\hline \hline
1 & 0 & $\Gamma_{x,z}=+\mathcal{E}$\\
1 & 1 & $\Gamma_{z,x}=+\mathcal{E}$\\
\hline
-1 & 0 & $\Gamma_{x,z}=-\mathcal{E}$\\
-1 & -1 & $\Gamma_{z,x}=-\mathcal{E}$ \\
\hline \hline
2 & 1 & $\Gamma_{x,x}=-\mathcal{E}^2$\\
\hline
0 & 1 & $\Gamma_{x,x}=+\mathcal{E}^2$\\
0 & -1 & $\Gamma_{x,x}=+\mathcal{E}^2$\\
\hline
-2 & -1 & $\Gamma_{x,x}=-\mathcal{E}^2$\\
\end{tabular}
\end{table}
The roles of all indices in the final perturbative expression \eqref{eq:solution_perturbation_theory} are determined by the above considerations, and the indices can be classified into four different categories:\\
Indices with fixed values:\\
{\color{white}.}\hspace{0.5 cm}$n=0$, $n'=1$, $n''=2$, $a=0$, $\gamma=1$, $\gamma''=1$\\
Indices which still appear in the sum of Eq. \eqref{eq:solution_perturbation_theory}:\\
{\color{white}.}\hspace{0.5 cm}$a'$, $\sigma'$, $\gamma'$, $o$\\
Indices which are implied by Eqs. \eqref{od_index_fix} and \eqref{eq:transverse_momentum_conservation}:\\
{\color{white}.}\hspace{0.5 cm}$o'$, $d$, $d'$, $f$, $f'$\\
Indices which are not yet determined:\\
{\color{white}.}\hspace{0.5 cm}$a''\in\{-2,-1,0,1,2\}$, $\sigma$, $\sigma''$
\section{Resulting modification of spin-preserving and spin-changing terms\label{section IV}}
\subsection{Investigation procedure}
In this section we want to quantify the influence of the longitudinal polarization component of the Gaussian beam on the spin dynamics in the Kapitza-Dirac effect. Since the perturbative expression \eqref{eq:solution_perturbation_theory} in section \ref{section III} has a complicated structure, we investigate its dependence on the photon energy $k_L$ and the transverse momentum transfer $k_z$ numerically. To do that, we start in section \ref{sec:spin_propagation} by introducing a formalism for decomposing the quantum state propagation into spin-preserving and spin-changing components, which can be seen in Eqs. \eqref{eq:spin-conserved} and \eqref{eq:spin-changing}. This makes it easier to identify the influence of the longitudinal beam component on the spin dynamics. The formalism also has the advantage that it is independent of the initial and final electron spin configuration. We then numerically extract simple power law scaling relations for \eqref{eq:propagator_projections}, in the form of Eq. \eqref{eq:power_law_function}. We do that by first plotting the functional dependence of \eqref{eq:propagator_projections} as a function of $k_L$ and/or $k_z$ in Figs. \ref{Fig.1} through \ref{Fig.4} in section \ref{sec:numeric_evaluation}. The figures are carried out as double logarithmic plots, such that power law scalings appear as straight lines which can be fitted with linear functions, to obtain the coefficients of the power law functions \eqref{eq:power_law_function}. This is done in section \ref{sec:analysis_of_results} and results in the scaling functions \eqref{eq:scaling_longitudinal_1} through \eqref{eq:scaling_longitudinal_2}. Finally, the obtained scaling relations are compared with the corresponding expression in which no longitudinal polarization component has been used, to obtain relation \eqref{eq:beam_focus_relevance}, from which one can see for which parameters $k_L$ and $\epsilon$ the longitudinal polarization component from beam focusing becomes relevant.
\subsection{Spin propagation\label{sec:spin_propagation}}
The initial electron spin configuration $c_{0,0}^{1,\sigma}(t_0)$ is diffracted by the laser interaction into the final electron spin configuration $c_{2,a''}^{1,\sigma''}(t)$ by
\begin{equation}
c_{2,a''}^{1,\sigma''}(t)=\sum_{\sigma} U_{2,a'';0,0}^{1,\sigma'';1,\sigma}(t,t_0)c_{0,0}^{1,\sigma}(t_0)\label{eq:spin_propagation}
\end{equation}
in terms of the general quantum state propagation equation \eqref{eq:the possibility}. The entries of
\begin{equation}
U_{2,a'';0,0}^{1,\sigma'';1,\sigma}(t,t_0) =
\begin{pmatrix}
u_{00} & u_{01} \\
u_{10} & u_{11}
\end{pmatrix}
\end{equation}
for a specific value of $a''$ are then the entries of a complex $2 \times 2$ matrix with row index $\sigma''$ and column index $\sigma$. In the abstract 4-component vector space $(u_{00}, u_{01}, u_{10}, u_{11})^T \in \mathbb{C}^4$ of complex $2 \times 2$ matrices, one can define the scaled inner product
\begin{align}
\Braket{M|U}=\frac{1}{\eta} \left(m_{00}^{*}u_{00}+m_{01}^{*}u_{01}+m_{10}^{*}u_{10}+m_{11}^{*}u_{11}\right)\,,
\label{eq:40}
\end{align}
with $u_{ij}$ and $m_{ij}$ being the entries of the corresponding $2 \times 2$ matrices $U, M \in \mathbb{C}^{2\times 2}$, respectively. A scale parameter $\eta$ appears in \eqref{eq:40}, which will be specified shortly in Eq. \eqref{eq:eta_definition}. The space of complex $2\times 2$ matrices can be spanned by the $2\times 2$ identity matrix $\mathds{1}$ and the three Pauli matrices $\sigma_x$, $\sigma_y$, $\sigma_z$. Projecting the numerically evaluated expressions of $U_{2,a'';0,0}^{1,\sigma'';1,\sigma}(t,t_0)$ in the form of Eq. \eqref{eq:solution_perturbation_theory} onto these four orthogonal components yields non-vanishing contributions only for $\Braket{\mathds{1}|U}$ and $\Braket{\sigma_{y}|U}$ for the field configuration of the combined fields \eqref{eq:Gaussain_beam_transverse_vector_potential} and \eqref{eq:Gaussain_beam_longitudinal_vector_potential}. In terms of the inner product notation \eqref{eq:40}, we therefore set
\begin{subequations}%
\begin{align}%
\Braket{\mathds{1}|U}_{a''}&=\frac{1}{\eta}\left[U_{2,a'';0,0}^{1,0;1,0}(t,t_{0})+U_{2,a'';0,0}^{1,1;1,1}(t,t_{0})\right]\label{eq:spin-conserved}
\\
\Braket{\sigma_{y}|U}_{a''}&=\frac{i}{\eta}\left[U_{2,a'';0,0}^{1,0;1,1}(t,t_{0})-U_{2,a'';0,0}^{1,1;1,0}(t,t_{0})\right]\,,\label{eq:spin-changing}
\end{align}\label{eq:propagator_projections}%
\end{subequations}%
where we factor out the value
\begin{equation}
\eta=2\left[-i \,\Gamma \frac{q^2A_{0}^2}{4}(t-t_0)\right]\,.\label{eq:eta_definition}
\end{equation}
In this way, $U$ appears in the form
\begin{equation}
\begin{pmatrix}
U_{2,a'';0,0}^{1,0;1,0} & U_{2,a'';0,0}^{1,0;1,1}\\
U_{2,a'';0,0}^{1,1;1,0} & U_{2,a'';0,0}^{1,1;1,1}
\end{pmatrix}
= \frac{\eta}{2}
\begin{pmatrix}
\Braket{\mathds{1}|U}_{a''} & -i \Braket{\sigma_{y}|U}_{a''}\\
i \Braket{\sigma_{y}|U}_{a''} & \Braket{\mathds{1}|U}_{a''}
\end{pmatrix}\,.\label{eq:spin_propagation_matrix}
\end{equation}
We also find numerically that $\Braket{\mathds{1}|U}$ is purely real and $\Braket{\sigma_{y}|U}$ is purely imaginary, i.e.,
\begin{subequations}%
\begin{align}%
\textrm{Im}(\Braket{\mathds{1}|U})&=0\\
\textrm{Re}(\Braket{\sigma_{y}|U})&=0\,,
\end{align}\label{eq:spin_decomposition_real_and_imaginary_value}%
\end{subequations}%
for each index $a''$. In the remainder of this subsection, we give a more intuitive physical picture of this spin decomposition.
Based on the property \eqref{eq:spin_decomposition_real_and_imaginary_value}, one can further substitute
\begin{subequations}%
\begin{align}%
\Braket{\mathds{1}|U} &= \xi \cos \frac{\phi}{2} \\
\Braket{\sigma_y|U} &= -i \xi \sin \frac{\phi}{2}\,,
\end{align}%
\end{subequations}%
with an amplitude $\xi$ and an angle $\phi$, for each index $a''$. In this form, Eq. \eqref{eq:spin_propagation_matrix} turns into an $\mathcal{SU}(2)$ matrix times an amplitude $\eta \xi/2$ and acts on the quantum state as a spin rotation, combined with a diffraction probability. This can be seen by assuming the initial electron quantum state $c_{0,0}^{1,\sigma}(t_0)$ in the spin propagation equation \eqref{eq:spin_propagation} to be in the spin state
\begin{equation}
\psi_i(\alpha) =
\begin{pmatrix}
\cos \frac{\alpha}{2} \\
\sin \frac{\alpha}{2}
\end{pmatrix}=
\begin{pmatrix}
c_{0,0}^{1,0}(t_0) \\
c_{0,0}^{1,1}(t_0)
\end{pmatrix}
\,.\label{eq:bloch_state}
\end{equation}
The form \eqref{eq:bloch_state} corresponds to a state on the Bloch sphere which points in some direction in the $x$-$z$ plane. On interaction of the laser with the electron, this quantum state gets diffracted by virtue of \eqref{eq:spin_propagation}, in our description. The resulting quantum state $c_{2,a''}^{1,\sigma''}(t)$ would then be of the form $(\eta \xi/2) \psi_i(\alpha + \phi)$, i.e., rotated by the angle $\phi$ around the $y$-axis, with a reduced normalization, given by the factor $\eta \xi/2$. This rotation and change of normalization of the quantum state take place for each index $a''$, with different values. The reader is reminded that the index $a''$ corresponds to the five different arrow directions of the diffracted wave packet, as illustrated in Fig. \ref{fig:Gaussian_beam}. Further details about spin rotations in the Kapitza-Dirac effect can be found in references \cite{ahrens_2012_phdthesis_KDE,ahrens_bauke_2013_relativistic_KDE}, and generalizing concepts about possible other spin dynamics beyond a pure spin rotation are discussed in \cite{ahrens_2017_spin_filter,ahrens_2020_two_photon_bragg_scattering}.
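Explicitly, this rotation follows from the angle-addition identities: applying the matrix \eqref{eq:spin_propagation_matrix} with $\Braket{\mathds{1}|U}=\xi\cos(\phi/2)$ and $\Braket{\sigma_y|U}=-i\xi\sin(\phi/2)$ to the state \eqref{eq:bloch_state} gives
\begin{equation*}
\frac{\eta\xi}{2}
\begin{pmatrix}
\cos\frac{\phi}{2} & -\sin\frac{\phi}{2}\\
\sin\frac{\phi}{2} & \phantom{-}\cos\frac{\phi}{2}
\end{pmatrix}
\begin{pmatrix}
\cos\frac{\alpha}{2}\\[2pt]
\sin\frac{\alpha}{2}
\end{pmatrix}
=
\frac{\eta\xi}{2}
\begin{pmatrix}
\cos\frac{\alpha+\phi}{2}\\[2pt]
\sin\frac{\alpha+\phi}{2}
\end{pmatrix}
=\frac{\eta\xi}{2}\,\psi_i(\alpha+\phi)\,.
\end{equation*}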
While the rotation of the initial spin state $\psi_i$ by the spin propagation \eqref{eq:spin_propagation} with explicit form \eqref{eq:spin_propagation_matrix} is the description of the physical process, one may assign an even simpler picture to it in the context of experimental detection. Assume that the electron is initially polarized along the $z$-axis,
\begin{equation}
c_{0,0}^{1,0}(t_0)=1\,,\quad c_{0,0}^{1,1}(t_0)=0\,.\label{eq:spin_up_initial_condition}
\end{equation}
This initial state corresponds to setting $\alpha=0$ in $\psi_i$ of Eq. \eqref{eq:bloch_state}. Then, with the spin propagation \eqref{eq:spin_propagation} and \eqref{eq:spin_propagation_matrix}, we can write the absolute square of the diffracted state as
\begin{subequations}%
\begin{align}%
|c_{2,a''}^{1,0}(t)|^2 &= \frac{\eta^2}{4} |\Braket{\mathds{1}|U}_{a''}|^2 = \frac{\eta^2 \xi^2}{4} \cos^2 \frac{\phi}{2}\label{eq:no_flip_probability}\\
|c_{2,a''}^{1,1}(t)|^2 &= \frac{\eta^2}{4} |\Braket{\sigma_y|U}_{a''}|^2 = \frac{\eta^2 \xi^2}{4} \sin^2 \frac{\phi}{2}\,.\label{eq:spin_flip_probability}
\end{align}%
\end{subequations}%
In essence, the coefficients $\Braket{\mathds{1}|U}_{a''}$ and $\Braket{\sigma_y|U}_{a''}$ can be associated with a diffraction probability $(\eta \xi/2)^2$ and a corresponding spin-flip probability \eqref{eq:spin_flip_probability} and no-flip probability \eqref{eq:no_flip_probability}. The probabilities are caused by a spin rotation around the $y$-axis during diffraction for each sub-diffraction order $a''$, i.e., for each of the five arrows in Fig. \ref{fig:Gaussian_beam}.
\subsection{Numeric evaluation\label{sec:numeric_evaluation}}
Now we are ready for a numeric analysis of the quantum state propagation matrix $U$ in Eq. \eqref{eq:solution_perturbation_theory}, which we have cast into the form \eqref{eq:propagator_projections}. To orient the reader, we first emphasize how our work with an additional longitudinal polarization component of the Gaussian beam extends previous calculations: The spin effect of Kapitza-Dirac scattering in \cite{ahrens_bauke_2013_relativistic_KDE}, which is extended in this article, shows a flip of the electron spin. This spin flip is caused by a $\sigma_y$ expression of the electron spin propagation, as shown in \eqref{eq:spin_propagation_matrix}, and corresponds to the $(z,z)$ index pair contribution in the perturbative propagation expression \eqref{eq:solution_perturbation_theory}. This $(z,z)$ contribution corresponds to a joint interaction of the transverse polarization component \eqref{eq:S_piture_interaction_transverse_vector_potential} in both potentials $V_{I}(t_{2})$ and $V_{I}(t_{1})$.
We extend this term in our calculation by contributions which contain the action of the longitudinal polarization component \eqref{eq:S_piture_interaction_longitudinal_vector_potential} once (terms with index pairs $(x,z)$ or $(z,x)$) or even twice (terms with index pair $(x,x)$). To view modifications from the longitudinal beam component, we plot the amplitude of the spin-preserving terms $\Braket{\mathds{1}|U}_{a''}$ and spin-altering terms $\Braket{\sigma_{y}|U}_{a''}$ in Figs. \ref{Fig.1} and \ref{Fig.2} for different values of the final transverse electron momentum index $a''$, as a function of $k_L$ and $k_z$. Note that, according to our approach in \eqref{eq:momentum_vector}, the $z$-component of the electron momentum of the final wave function is $a'' k_z + k_0$, where we set the $z$ momentum offset $k_0$ to the value $m$, consistent with previously considered scenarios in references \cite{ahrens_2017_spin_non_conservation,ahrens_2020_two_photon_bragg_scattering}.
It is more suitable to discuss the results in terms of the dimensionless variables
\begin{align}
q_L = \frac{k_L}{m} \\
q_z = \frac{k_z}{m}\,,
\end{align}
which are used in the following text and also in Figs. \ref{Fig.1} and \ref{Fig.2}. Note that in Fig. \ref{Fig.1} terms with \emph{one} longitudinal interaction are shown, which correspond to the index pairs $(x,z)$ or $(z,x)$, for which $a'' \in \{1,-1\}$, according to table \ref{tabel I}. In contrast, in Fig. \ref{Fig.2} terms with \emph{two} longitudinal interactions are shown, corresponding to the index pair $(x,x)$, for which $a'' \in \{2,0,-2\}$. Note that the solution for the Gaussian beam assumes that the diffraction angle $\epsilon$ is small. Therefore, we have marked with a red dotted line the location in Figs. \ref{Fig.1} and \ref{Fig.2} at which $\epsilon = 1/2$. Everything in the upper left corner, above this red dotted line, corresponds to an unphysically large diffraction angle, at which the Gaussian beam approximation in powers of $\epsilon$ might be considered invalid.
\begin{figure}%
\includegraphics[width=0.5\textwidth]{density_first.pdf}
\caption{Amplitude of the functions $|\Braket{\mathds{1}|U}_{\pm 1}|$ and $|\Braket{\sigma_{y}|U}_{\pm 1}|$ as simultaneous functions of $q_L$ and $q_z$. The left (right) panels display $|\Braket{\mathds{1}|U}|$ ($|\Braket{\sigma_{y}|U}|$), as implied by Eqs. \eqref{eq:propagator_projections}. The upper (lower) panels correspond to $a''=1$ ($a''=-1$). The red dotted line marks the location where $q_L = q_z$, at which $\epsilon=1/2$. Above $\epsilon=1/2$ (i.e., in the upper left corner), the Gaussian beam approximation may be considered invalid. \label{Fig.1}}
\end{figure}%
\begin{figure}%
\includegraphics[width=0.5\textwidth]{density_second.pdf}
\caption{Amplitude of the functions $|\Braket{\mathds{1}|U}_{a''}|$ and $|\Braket{\sigma_{y}|U}_{a''}|$ for $a'' \in \{2, 0, -2\}$ as simultaneous functions of $q_L$ and $q_z$. This figure is similar to Fig. \ref{Fig.1}, except that the uppermost panels correspond to $a''=2$, the middle panels correspond to $a''=0$ and the lowermost panels correspond to $a''=-2$. The red dotted line marks the location at which $\epsilon = 1/2$, see the description in Fig. \ref{Fig.1} and in the main text.\label{Fig.2}}
\end{figure}%
In order to investigate the functional behavior of $\Braket{\mathds{1}|U}$ and $\Braket{\sigma_{y}|U}$ more accurately, we plot them again in Figs. \ref{Fig.3} and \ref{Fig.4} as a function of either $q_L$ or $q_z$ in line plots, instead of the density plots over both variables in Figs. \ref{Fig.1} and \ref{Fig.2}. We set the fixed value $q_z=10^{-5}$ in Fig. \ref{Fig.3} and $q_L=2 \times 10^{-2}$ in Fig. \ref{Fig.4}.
\begin{figure}%
\includegraphics[width=0.5\textwidth]{log_log_plot_first.pdf}
\caption{Amplitude of the functions $|\Braket{\mathds{1}|U}_{\pm 1}|$ and $|\Braket{\sigma_{y}|U}_{\pm 1}|$ as either a function of $q_L$ or $q_z$. The left panels show plots with varying $q_L$, where $q_z$ has the fixed value $10^{-5}$. Accordingly, the right panels show plots with varying $q_z$, where $q_L$ has the fixed value $2 \times 10^{-2}$. The colored, dashed lines are fits with the linear functions $h q_L + g$ (left panels) or $h q_z + g$ (right panels) to the linearly growing or dropping regions of the displayed functions, respectively. The slopes $h$ of the functions are listed in table \ref{tabel II}.\label{Fig.3}}
\end{figure}%
\begin{figure}%
\includegraphics[width=0.5\textwidth]{log_log_plot_second.pdf}
\caption{Amplitude of the functions $|\Braket{\mathds{1}|U}_{a''}|$ and $|\Braket{\sigma_{y}|U}_{a''}|$, with $a'' \in \{2, 0, -2\}$, as either a function of $q_L$ or $q_z$. Similarly to Fig. \ref{Fig.3}, the left panels vary in $q_L$, with $q_z=10^{-5}$, and the right panels vary in $q_z$, with $q_L=2 \times 10^{-2}$. Correspondingly to Fig. \ref{Fig.3}, the colored, dashed lines in the lower left panel are linear fitting functions $h q_L + g$ for the linearly growing regions of $\Braket{\sigma_{y}|U}$, with the slopes $h$ displayed in table \ref{tabel II}.\label{Fig.4}}
\end{figure}%
\subsection{Analysis of results\label{sec:analysis_of_results}}
We see in Figs. \ref{Fig.3} and \ref{Fig.4} a linear behavior of the functions $\Braket{\mathds{1}|U}$ and $\Braket{\sigma_{y}|U}$ over a large part of the parameter range, which we approximate with linear fitting functions in the double-logarithmic plots. The linear function
\begin{equation}
\log_{10}(|\braket{M|U}(\lambda)|) = h \lambda + g \label{eq:linear_fitting_function}
\end{equation}
with $\lambda = \log_{10}(q)$ (where $q$ is either $q_L$ or $q_z$, and $M$ is either $\mathds{1}$ or $\sigma_y$), can be written as the power law
\begin{equation}
|\braket{M|U}(q)| = q^h 10^g\,, \label{eq:power_law_function}
\end{equation}
where $q=10^\lambda$, such that the slope $h$ determines the power at which $\Braket{M|U}$ grows in $q$. We list the different slopes $h$ in table \ref{tabel II}.
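As an aside, this slope extraction can be reproduced with a few lines of code. The following sketch (operating on hypothetical sample data which mimic one of the power laws, rather than on our actual numerical output) performs the linear fit of Eq. \eqref{eq:linear_fitting_function} in the double-logarithmic variables and converts the result to the power law \eqref{eq:power_law_function}:
\begin{verbatim}
import numpy as np

# hypothetical samples mimicking |<1|U>| ~ 0.707 * q_z / q_L at fixed q_L
q_L = 2e-2
q_z = np.logspace(-6, -4, 25)
amp = 0.707 * q_z / q_L

lam = np.log10(q_z)                  # lambda = log10(q)
h, g = np.polyfit(lam, np.log10(amp), 1)
print(h)      # slope h ~ 1, i.e., the power of q_z
print(10**g)  # prefactor 10**g ~ 0.707/q_L of the power law q**h * 10**g
\end{verbatim}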
\begin{table}
\caption{Slopes $h$ of the fitting functions in Figs. \ref{Fig.3} and \ref{Fig.4}. We show the parameter $h$ of the linear fitting functions $h \lambda + g$ in Eq. \eqref{eq:linear_fitting_function}, which are used in the double logarithmic plots of Fig. \ref{Fig.3} (above the horizontal line) and Fig. \ref{Fig.4} (below the horizontal line). The linear functions of the double logarithmic plots correspond to the power law $q^h 10^g$, corresponding to Eq. \eqref{eq:power_law_function}. For simplicity, we round the fitted slopes to integer numbers in the `approximation' column and use these simplified values for the exponents $\zeta$ and $\theta$ of the general functional form of $\braket{M|U}$ in Eq. \eqref{eq:propagator_scaling_law}. As a result, we obtain the specific functions \eqref{eq:scaling_longitudinal_1}, \eqref{eq:scaling_longitudinal_2}, \eqref{eq:rewritten_longitudinal_1} and \eqref{eq:rewritten_longitudinal_2}, according to the procedure as explained in the main text.\label{tabel II}}
\begin{tabular}{ l l l }
function$\qquad\qquad$ & slope $h$ & $\qquad\qquad$approximation\\
\hline
$\Braket{\mathds{1}|U}_{-1}(q_L)\qquad$ & -1.0014 &$\hspace{7em}$ -1 \\
$\Braket{\mathds{1}|U}_{-1}(q_z)\qquad$ & 0.998 & $\hspace{7.5em}$1\\
$\Braket{\sigma_{y}|U}'_{-1}(q_L)\qquad$ & -1.96 &$\hspace{7em}$-2\\
$\Braket{\sigma_{y}|U}_{-1}(q_z)\qquad$ & 1.015 & $\hspace{7.5em}$1\\
$\Braket{\sigma_{y}|U}'_{-1}(q_z)\qquad$ & 1.88 & $\hspace{7em}$ 2 \\ \hline
$\Braket{\sigma_{y}|U}_{-2}(q_L)\qquad$ & 1.0027 & $\hspace{7.5em}$1\\
$\Braket{\sigma_{y}|U}_{0}(q_L)\qquad$ & 1.0031 & $\hspace{7.2em}$ 1
\end{tabular}
\end{table}
In order to obtain a simple scaling behavior of the functions $\Braket{M|U}$ on their linear range in the double-logarithmic plot, we denote them as
\begin{equation}
\Braket{M|U}=C q_L^{\zeta} q_z^{\theta}\,, \label{eq:propagator_scaling_law}
\end{equation}
where we take the approximated integer numbers in table \ref{tabel II} as the values for the corresponding powers $\zeta$ and $\theta$ of $q_L$ and $q_z$, respectively. We obtain the constant $C$ by probing the functions $\Braket{M|U}$ at the specific value pairs of $q_L$ and $q_z$ shown in table \ref{tabel III} and solving Eq. \eqref{eq:propagator_scaling_law} for $C$. We obtain
\begin{subequations}%
\begin{alignat}{3}%
&\Braket{\mathds{1}|U}_{\pm 1}&&=& &0.707 q_L^{-1} q_z\\
&\Braket{\sigma_{y}|U}_{1} &&=& i&0.280 q_z \\
&\Braket{\sigma_{y}|U}_{-1} &&=& i&0.305 q_z\\
&\Braket{\sigma_{y}|U}'_{1} &&=&-i&0.460 q_L^{-2} q_z^{2}\\
&\Braket{\sigma_{y}|U}'_{-1} &&=& i&0.497 q_L^{-2} q_z^{2}
\end{alignat}\label{eq:scaling_longitudinal_1}%
\end{subequations}%
for the expressions with a single longitudinal interaction, corresponding to the index pairs $(x,z)$ and $(z,x)$ and corresponding to Figs. \ref{Fig.1} and \ref{Fig.3}. Expressions with a double longitudinal interaction, corresponding to the index pair $(x,x)$ and Figs. \ref{Fig.2} and \ref{Fig.4}, result in
\begin{subequations}%
\begin{alignat}{3}%
&\Braket{\mathds{1}|U}_{\pm 2}&&=& &1.414\\
&\Braket{\mathds{1}|U}_{0} &&=& -&2.828\label{eq:longitudinal_2_id_a_0}\\
&\Braket{\sigma_{y}|U}_{\pm 2}&&=&-i&0.415 q_L\\
&\Braket{\sigma_{y}|U}_{0} &&=& i&0.829 q_L\,.
\end{alignat}\label{eq:scaling_longitudinal_2}%
\end{subequations}%
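As an illustration of this procedure, the prefactor in the first line of \eqref{eq:scaling_longitudinal_1} follows from the first entry of table \ref{tabel III} with $\zeta=-1$ and $\theta=1$:
\begin{equation*}
C = \Braket{\mathds{1}|U}_{1}\, q_L^{-\zeta}\, q_z^{-\theta}
= 3.535\times 10^{-4} \times \frac{2\times 10^{-2}}{1\times 10^{-5}} \approx 0.707\,.
\end{equation*}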
We can further recast the expressions \eqref{eq:scaling_longitudinal_1} and \eqref{eq:scaling_longitudinal_2} by inserting relation \eqref{eq:k_z_k_L_relation},
written in the form
\begin{equation}
q_z = 2 \epsilon q_L\,,
\end{equation}
and by multiplying by the $\Gamma$ factor from table \ref{tabel I}, resulting in
\begin{subequations}%
\begin{alignat}{3}%
&\Gamma_{\pm 1}\Braket{\mathds{1}|U}_{\pm 1} &&=&\pm&0.606\epsilon_{}^{2}\\
&\Gamma_{1}\Braket{\sigma_{y}|U}_{1} &&=& i&0.240\epsilon_{}^{2} q_L \\
&\Gamma_{-1}\Braket{\sigma_{y}|U}_{-1} &&=& -i&0.262\epsilon_{}^{2} q_L \\
&\Gamma_{1}\Braket{\sigma_{y}|U}'_{1} &&=& -i&0.789\epsilon_{}^{3}\\
&\Gamma_{-1}\Braket{\sigma_{y}|U}'_{-1} &&=& -i&0.853\epsilon_{}^{3}
\end{alignat}\label{eq:rewritten_longitudinal_1}%
\end{subequations}%
for the expressions which contain one longitudinal interaction. Here, we have substituted Eq. \eqref{eq:capital_epsilon} for the terms $\mathcal{E}$ which appear in the $\Gamma$ factor \eqref{prefactor} of table \ref{tabel I}. For the expressions with two longitudinal interactions in Eq. \eqref{eq:scaling_longitudinal_2}, we obtain
\begin{subequations}%
\begin{alignat}{3}%
&\Gamma_{\pm 2}\Braket{\mathds{1}|U}_{\pm 2} &&=&-i&0.260\epsilon_{}^{2}\\
&\Gamma_{0}\Braket{\mathds{1}|U}_{0} &&=&-i&0.520\epsilon_{}^{2} \label{eq:rewritten_longitudinal_2_id_a_0}\\
&\Gamma_{\pm 2}\Braket{\sigma_{y}|U}_{\pm 2} &&=& &0.077\epsilon_{}^{2} q_L \\
&\Gamma_{0}\Braket{\sigma_{y}|U}_{0} &&=& &0.153\epsilon_{}^{2} q_L \,.
\end{alignat}\label{eq:rewritten_longitudinal_2}%
\end{subequations}%
Note that we use the final transverse diffraction order $a''$ as the subscript for the $\Gamma$ factors in Eqs. \eqref{eq:rewritten_longitudinal_1} and \eqref{eq:rewritten_longitudinal_2}.
\begin{table}
\caption{Specific function values of $\Braket{\mathds{1}|U}$ and $\Braket{\sigma_{y}|U}$ for the prefactor determination of the scaling approximation \eqref{eq:propagator_scaling_law}. The functions $\Braket{\mathds{1}|U}$ and $\Braket{\sigma_{y}|U}$ are evaluated at the parameter value pair $q_L$ and $q_z$, resulting in the column `value'. The values above the double lines correspond to Figs. \ref{Fig.1} and \ref{Fig.3} and below the double lines they correspond to Figs. \ref{Fig.2} and \ref{Fig.4}. Different types of functions are separated by single lines. The determined function values are used to solve the power law Eq. \eqref{eq:propagator_scaling_law} for the prefactor $C$, with corresponding values for $\zeta$ and $\theta$ from table \ref{tabel II}. The resulting functions are displayed in Eqs. \eqref{eq:scaling_longitudinal_1}, \eqref{eq:scaling_longitudinal_2}, \eqref{eq:rewritten_longitudinal_1} and \eqref{eq:rewritten_longitudinal_2}.\label{tabel III}}
\begin{tabular}{ c r r l }
function & $\quad q_L\quad$ & $\quad q_z\quad$ & $\qquad\qquad$value\\
\hline
$\Braket{\mathds{1}|U}_{1}$ & 2$\times 10_{}^{-2}$ & $\qquad 1 \times 10_{}^{-5}$ &$\qquad \phantom{-i}3.535\times 10_{}^{-4}$\\
$\Braket{\mathds{1}|U}_{-1}$ & 2$\times 10_{}^{-2}$ & $\qquad 1 \times 10_{}^{-5}$ &$\qquad \phantom{-i}3.535\times 10_{}^{-4}$\\
\hline
$\Braket{\sigma_{y}|U}_{1}$ & 2$\times 10_{}^{-2}$ & $\qquad1 \times 10_{}^{-5}$ &$\qquad\phantom{-} i2.804\times 10_{}^{-6}$\\
$\Braket{\sigma_{y}|U}_{-1}$ & 2$\times 10_{}^{-2}$ & $\qquad1 \times 10_{}^{-5}$ &$\qquad \phantom{-} i3.054\times 10_{}^{-6}$\\
\hline
$\Braket{\sigma_{y}|U}'_{1}$ &2$\times 10_{}^{-2}$ & $\qquad 6 \times 10_{}^{-3}$&$\qquad -i4.140\times 10_{}^{-2}$ \\
$\Braket{\sigma_{y}|U}'_{-1}$ & 2$\times 10_{}^{-2}$ & $\qquad6 \times 10_{}^{-3}$&$\qquad \phantom{-} i4.474\times 10_{}^{-2}$\\
\hline \hline
$\Braket{\mathds{1}|U}_{2}$ & 1$\times 10_{}^{-3}$ & $\qquad 1 \times 10_{}^{-5}$&$\qquad \phantom{-i} 1.414\times 10_{}^{0}$\\
$\Braket{\mathds{1}|U}_{-2}$ & 1$\times 10_{}^{-3}$ & $\qquad 1 \times 10_{}^{-5}$&$\qquad \phantom{-i} 1.414\times 10_{}^{0}$\\
$\Braket{\mathds{1}|U}_{0}$ & 1$\times 10_{}^{-3}$ & $\qquad 1 \times 10_{}^{-5}$&$\qquad - \phantom{i}2.828\times 10_{}^{0}$\\
\hline
$\Braket{\sigma_{y}|U}_{2}$ & 2$\times 10_{}^{-2}$ & $\qquad1 \times 10_{}^{-5}$ &$\qquad -i8.285\times 10_{}^{-3}$\\
$\Braket{\sigma_{y}|U}_{-2}$ & 2$\times 10_{}^{-2}$ & $\qquad1 \times 10_{}^{-5}$ &$\qquad -i8.285\times 10_{}^{-3}$\\
$\Braket{\sigma_{y}|U}_{0}$ & 2$\times 10_{}^{-2}$ & $\qquad1 \times 10_{}^{-5}$ &$\qquad \phantom{-} i1.657\times 10_{}^{-2}$
\end{tabular}
\end{table}
\section{Discussion and Conclusion\label{sec:discussion_and_conclusion}}
The resulting values in tables \ref{tabel II} and \ref{tabel III} and the expressions in Eqs. \eqref{eq:scaling_longitudinal_1} through \eqref{eq:rewritten_longitudinal_2} are correction terms for the interaction of the longitudinal laser polarization component with the electron. In the introduction, the question was posed how a longitudinal polarization component from beam focusing influences the spin dynamics of the Kapitza-Dirac effect. In order to answer this question, the expressions \eqref{eq:scaling_longitudinal_1} through \eqref{eq:rewritten_longitudinal_2} need to be compared with the purely transverse polarization component interaction, which corresponds to the $(z,z)$ index pair. For this index pair, we find a linear scaling of $\Braket{\sigma_{y}|U}$ in $q_L$ with power $\zeta=1$, and $\Braket{\sigma_{y}|U}$ at $q_L=2.0\times10^{-2}$ takes the value $i\, 2\times10^{-2}$, consistent with our previous expressions in reference \cite{ahrens_2020_two_photon_bragg_scattering} \footnote{We point out that different normalizations in the bi-spinor definitions have been used in this work, as compared to reference \cite{ahrens_2020_two_photon_bragg_scattering}}. The laser-electron interaction can be constructed such that $\Braket{\mathds{1}|U}$ vanishes completely for the $(z,z)$ index pair, see footnote [89] in reference \cite{ahrens_2017_spin_non_conservation} and the statement around Eq. (15) in reference \cite{ahrens_2020_two_photon_bragg_scattering}. Since the $(z,z)$ contribution does not contain any beam waist dependent longitudinal interaction components, $\Braket{\mathds{1}|U}$ and $\Braket{\sigma_{y}|U}$ are independent of $q_z$. Therefore, for the form in Eq. \eqref{eq:propagator_scaling_law} we have $\zeta=1$ and $\theta=0$ for $\Braket{\sigma_{y}|U}$, and in analogy to Eqs. \eqref{eq:scaling_longitudinal_1} and \eqref{eq:scaling_longitudinal_2} obtain
\begin{subequations}%
\begin{alignat}{3}%
&\Braket{\mathds{1}|U}_{0}&&=& &0\label{eq:zero_order_spin_not_changing_without_gamma}\\
&\Braket{\sigma_{y}|U}_{0}&&=&i&q_L\label{eq:zero_order_spin_changing_without_gamma}
\end{alignat}\label{eq:zero_order_without_gamma}%
\end{subequations}%
and correspondingly with $\Gamma_{z,z} = -1$
\begin{subequations}%
\begin{alignat}{3}%
&\Gamma_{0}\Braket{\mathds{1}|U}_{0}&&=& &0\label{eq:zero_order_spin_not_changing}\\
&\Gamma_{0}\Braket{\sigma_{y}|U}_{0}&&=&-i&q_L\,,\label{eq:zero_order_spin_changing}
\end{alignat}\label{eq:zero_order}%
\end{subequations}%
in analogy to Eqs. \eqref{eq:rewritten_longitudinal_1} and \eqref{eq:rewritten_longitudinal_2}, where we again use $a''=0$ as the subscript of $\Gamma$ for the index pair $(z,z)$.
The spin-changing electron-laser interaction without longitudinal contribution in Eq. \eqref{eq:zero_order_spin_changing} corresponds to the situation in which beam focusing is completely neglected. It is therefore independent of the diffraction angle $\epsilon$. In contrast, interaction contributions with a longitudinal component in Eqs. \eqref{eq:rewritten_longitudinal_1} and \eqref{eq:rewritten_longitudinal_2} all scale at least with power 2 in $\epsilon$. In other words, the influence of longitudinal field components from laser beam focusing on the investigated spin-dependent effect in Kapitza-Dirac scattering becomes arbitrarily small for arbitrarily weak beam focusing, within the approximations which have been made in this article.
Besides these general considerations, it is also interesting to estimate at which values of the interaction parameters $q_L$ and $\epsilon$ the influence from the longitudinal polarization component begins to matter for the spin dynamics. Since there are multiple sub-diffraction orders $a''$ with spin-preserving and spin-flipping contributions (9 different terms in Eqs. \eqref{eq:rewritten_longitudinal_1} and \eqref{eq:rewritten_longitudinal_2}), which partially differ in their scaling behavior, it is reasonable to concentrate on the contributions which scale with the smallest power in the small quantities $q_L$ and $\epsilon$ for an estimate. The most notable contribution might be the one in Eq. \eqref{eq:rewritten_longitudinal_2_id_a_0}, which is proportional to the spin-preserving $2\times2$ identity and has the final transverse diffraction order $a''=0$. With $a''=0$, this contribution is located at the same point in momentum space as the interaction term \eqref{eq:zero_order} of an interaction without longitudinal contribution. Therefore, the two terms are physically indistinguishable after the interaction. Furthermore, \eqref{eq:rewritten_longitudinal_2_id_a_0} only scales with $\epsilon^2$ and is therefore one of the largest contributions from the longitudinal interactions. Eq. \eqref{eq:rewritten_longitudinal_2_id_a_0} is therefore our candidate for estimating the longitudinal influence on the spin dynamics in the following calculation. More specifically, for clarity of the calculation we use the equivalent form \eqref{eq:longitudinal_2_id_a_0} of Eq. \eqref{eq:rewritten_longitudinal_2_id_a_0} in what follows. Similarly to the considerations in section \ref{sec:spin_propagation}, one obtains the approximate probability for observing a diffraction without spin flip
\begin{equation}
|c_{2,a''}^{1,0}(t)|^2 = \left[ \frac{\epsilon}{2}\frac{2.828}{e}\frac{q^2 A_0^2}{4} (t-t_0) \right]^2 \approx \left[ \frac{\epsilon}{2}\frac{q^2 A_0^2}{4} (t-t_0) \right]^2
\label{eq:leading_longitudinal}
\end{equation}
from Eq. \eqref{eq:longitudinal_2_id_a_0}, when inserted into \eqref{eq:spin_propagation_matrix}, with initial condition \eqref{eq:spin_up_initial_condition}. Eq. \eqref{eq:leading_longitudinal} stems from an interaction of the electron with a longitudinal polarization component and needs to be compared with the terms \eqref{eq:zero_order_without_gamma}, which do not involve interactions with the longitudinal component. The spin-preserving term \eqref{eq:zero_order_spin_not_changing_without_gamma} is zero and is therefore not relevant for an estimate of the leading interaction contribution. The remaining spin-flipping term \eqref{eq:zero_order_spin_changing_without_gamma} inserted in Eq. \eqref{eq:spin_propagation_matrix} with initial condition \eqref{eq:spin_up_initial_condition} gives the spin-flip probability
\begin{equation}
|c_{2,a''}^{1,1}(t)|^2 = \left[ q_L \frac{q^2 A_0^2}{4} (t-t_0) \right]^2
\,.\label{eq:leading_pure_transverse}
\end{equation}
According to our explanations from above, the spin dynamics in the Kapitza-Dirac effect is influenced by a longitudinal polarization component when the diffraction probability \eqref{eq:leading_longitudinal} reaches the order of magnitude of the probability \eqref{eq:leading_pure_transverse}. Thus, setting both probabilities equal results in the scaling law
\begin{equation}
q_L = \frac{\hbar k_L}{m c} = \frac{\lambda_C}{\lambda} = \frac{\epsilon^2}{2}\,,\label{eq:beam_focus_relevance}
\end{equation}
which tells us at which wavelengths $\lambda$ and diffraction angles $\epsilon$ the contributions from longitudinal interaction components turn into non-negligible amplitudes. The reduced Planck constant $\hbar$, the vacuum speed of light $c$, the Compton wavelength $\lambda_C$ and the wavelength of the laser light $\lambda = 2 \pi/k_L$ are written out explicitly in Eq. \eqref{eq:beam_focus_relevance} for clarity.
There are two interesting light frequencies (photon energies) for possible applications. One photon energy is the hard X-ray regime, where 10\,keV roughly corresponds to $q_L=2\times10^{-2}$, which is the value mainly studied in this paper. For this photon energy we obtain $\epsilon=0.2$, which implies beam foci on the order of
\begin{equation}
w_0 = \frac{\lambda}{2 \pi \epsilon} = 96\,\textrm{pm}\,.\label{eq:beam_focus}
\end{equation}
The other interesting photon energy is 2\,eV, i.e., red light with a wavelength of 620\,nm, corresponding approximately to $q_L=4\times 10^{-6}$. For this parameter we have $\epsilon = 2.8\times 10^{-3}$, and Eq. \eqref{eq:beam_focus} yields $35\,\mu\textrm{m}$ for the corresponding laser beam focus. We therefore conclude that longitudinal fields from beam focusing are not expected to have a significant influence on the spin dynamics of the investigated scenario of a spin-altering Kapitza-Dirac effect with a hard X-ray standing light wave. In the optical regime, however, the influence of longitudinal fields might be of relevance for the electron spin dynamics.
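The two estimates above can be reproduced with a short numerical sketch; the following Python snippet (function name and constants are for illustration only) evaluates Eqs. \eqref{eq:beam_focus_relevance} and \eqref{eq:beam_focus} for a given photon energy. Small deviations from the rounded values quoted above stem from using the exact photon energies instead of the rounded $q_L$:
\begin{verbatim}
import numpy as np

M_E = 511.0e3      # electron rest energy m*c^2 in eV
HBARC = 197.327    # hbar*c in eV*nm

def focusing_threshold(photon_energy_eV):
    q_L = photon_energy_eV / M_E                  # q_L = hbar*k_L/(m*c)
    eps = np.sqrt(2.0 * q_L)                      # from q_L = eps**2 / 2
    lam = 2.0 * np.pi * HBARC / photon_energy_eV  # laser wavelength in nm
    w_0 = lam / (2.0 * np.pi * eps)               # beam focus in nm
    return q_L, eps, w_0

print(focusing_threshold(10.0e3))  # hard X rays: ~(2e-2, 0.2, 0.1 nm)
print(focusing_threshold(2.0))     # red light: ~(4e-6, 2.8e-3, 3.5e4 nm)
\end{verbatim}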
\section{Outlook\label{sec:outlook}}
The main motivation for our study was to answer whether a longitudinal polarization component from beam focusing has an influence on spin dynamics in the Kapitza-Dirac effect. In this context, we have only accounted for the transverse spatial dependence of the longitudinal component, but did not account for the transverse spatial dependence of the transverse polarization component, which, however, one would expect to scale with at least $\epsilon^2$. In other words, this first investigation could still be improved into a study which is consistent up to order $\epsilon^2$, within our plane wave approximation of the potentials. Along this line, one might raise the question whether the rough approximation of the potentials in Eqs. \eqref{eq:Gaussain_beam_transverse_vector_potential} and \eqref{eq:Gaussain_beam_longitudinal_vector_potential} is sufficiently accurate for solid statements at all. It is possible to solve the relativistic quantum dynamics of the Dirac equation by exact numeric solutions, for example with the Fourier-transform split-operator method \cite{Grobe_1999_FFT_split_operator_method,bauke_2011_GPU_acceleration_FFT_split_operator}. With this type of more exact solution approach, a systematic parameter study as done in this article would be more difficult, but there would be fewer doubts about a possible oversimplification of the problem.
One interesting detail which is not accounted for in the spin-\emph{changing} electron dynamics of this work is the investigation of spin-\emph{dependent} diffraction \cite{McGregor_Batelaan_2015_two_color_spin,dellweg_mueller_2016_interferometric_spin-polarizer,dellweg_mueller_extended_KDE_calculations,ahrens_2017_spin_filter,ebadati_2018_four_photon_KDE,ebadati_2019_n_photon_KDE,ahrens_2020_two_photon_bragg_scattering}. While the diffraction pattern in spin-dependent scattering would depend on the initial electron spin state, the diffraction pattern would be completely independent of the initial spin for the case of spin-changing dynamics. In other words, spin-dependent dynamics allows for the implementation of a Stern-Gerlach type of experiment, while spin-changing dynamics does not have this capability. However, to date no purely linearly polarized field configuration is known for implementing spin-dependent diffraction for the interaction with a single laser pulse and with only a two-photon interaction in the Bragg regime. This implies complications for theoretical investigations, because the transverse spatial dependence of the longitudinal laser polarization component becomes two-dimensional for any type of elliptical polarization, resulting in the necessity of a quantum simulation in three dimensions. Three-dimensional solutions of the Dirac equation, in turn, are challenging, though not impossible \cite{Fu_2019_3D_dirac_spin_solution}, because one needs to numerically resolve the fast oscillations of the electron wave function in the complex plane which are implied by the mass term of the Dirac equation. One could of course think of solving the Schr\"odinger equation plus spin coupling terms for this problem, but besides the necessity of numerical time propagation techniques for operators which are neither diagonal in position space nor diagonal in momentum space, one would also encounter the question of which relativistic corrections from the Foldy-Wouthuysen transformations of the Dirac equation are of relevance for the electron spin dynamics. Configurations are known for which the plain Pauli equation is not enough for describing the system correctly \cite{bauke_ahrens_2014_spin_precession_1,bauke_ahrens_2014_spin_precession_2}.
\begin{acknowledgments}
The work was supported by the National Natural Science Foundation of China (Grants No. 11975155 and No. 11935008) and the Ministry of Science and Technology of the People's Republic of China (Grants No. QN20200213003, No. 2018YFA0404803 and No. 2016YFA0401102).
\end{acknowledgments}
\vspace{0.5 cm}
\section{Introduction}\label{sec:intro}
Searching for topological defects~\cite{Kibble:1976sj} in cosmological and astrophysical backgrounds is a promising way
to probe physics beyond the Standard Model at very high energies inaccessible at Earth-based facilities. In the present work, we focus on cosmic strings, which have a rich phenomenology, notably through their effect on the matter distribution
of the Universe, gravitational lensing, and emission of gravitational waves (GWs)~\cite{Vilenkin, Hindmarsh:1994re}.
The impact of cosmic strings can be described by one dimensionless parameter $G\mu$, where $G$ is Newton's constant and $\mu$ is the tension (mass per unit length), commonly assumed to be time-independent.
Consistency with the Planck data implies the limit $G\mu \lesssim 1.5 \cdot 10^{-7}$ at $95\%$ CL~\cite{Planck:2013mgr}. A considerably stronger constraint comes from pulsar timing arrays (PTAs), which reads $G \mu \lesssim 1.5 \cdot 10^{-11}$ at $95\%$ CL~\cite{Blanco-Pillado:2017rnf}. Note, however, that larger values of $G\mu$, which are in tension with this upper bound, are still of interest due to the recent results from NANOGrav~\cite{NANOGrav:2020bcs} hinting at the first detection of a stochastic GW background. This signal, if attributed to GWs, can be interpreted in terms of emission from cosmic strings~\cite{Ellis:2020ena, Samanta:2020cdk, Blasi:2020mfx}.
In this paper, we discuss a different type of cosmic strings, which have a tension decreasing with time\footnote{A weak logarithmic time-dependence of the tension is common for global cosmic strings.
See~\cite{Chang:2019mza} and references therein.}. The appearance of cosmic strings with such a seemingly exotic property can be well motivated in a physical setup, which involves no constant mass scale apart from the Planck scale due to a minimal coupling to gravity.
Consequently, the cosmic string tension should be related to the Hubble rate $H$, or the temperature of the Universe $T$. We choose the latter option, i.e., $\mu \propto T^2$,
so that the time-dependence of the tension is fixed to be\footnote{The former option with $\mu \sim H^2$ has been considered in Ref.~\cite{Bettoni:2018pbl}.}
\begin{equation}
\label{decreasing}
\mu \propto1/a^2 \; ,
\end{equation}
where $a$ is the scale factor of the Universe. A concrete example of a renormalizable model leading to such a behaviour is described in \hyperref[sec:Section2]{Section~2}.
Evolution of cosmic strings is considerably simpler in scenario~\eqref{decreasing} compared to the case with constant $\mu$. Due to the scale-invariance of the model we consider, the dynamics of melting strings in an expanding Universe are equivalent to those of strings with a constant tension in a flat spacetime. We further elaborate on this in~\hyperref[sec:Section3]{Section~3} and in~\hyperref[sec:AppA]{Appendix~A} for the case of Nambu-Goto strings. This equivalence plays a crucial role in defining the number density of loops in the scaling
regime: we simply use the results known from the studies of string evolution in a flat spacetime.
An interesting feature of scenario~\eqref{decreasing} is that one can have a large tension $\mu$ without conflicting with Cosmic Microwave Background (CMB) measurements. Indeed, even if $\mu$ starts from Planckian order values in the early Universe,
it redshifts to a negligible value by recombination. Nevertheless, due to gravitational radiation emitted by the loops~\cite{Vachaspati:1984gt}, melting cosmic strings do not disappear without leaving a trace.
In the present work, we estimate the GW spectrum in scenario~\eqref{decreasing} and show that it has a markedly non-flat shape. This contrasts with the case of constant-tension strings, cf. Fig.~\ref{gw}. In particular, the spectrum behaves
as $\Omega_{gw} \propto f^4$ in the low frequency regime, which is directly related to the behaviour~\eqref{decreasing} and hence serves as a defining property of our scenario.
We estimate the present day fractional
energy density of GWs, which can be as large as $\Omega_{gw} \simeq 10^{-8}-10^{-9}$ for $G\mu \simeq 10^{-4}$ at formation. Such energetic GWs are in the range accessible by essentially all planned detectors, provided
that the peak frequency $f_{\text{peak}} \lesssim 100~\mbox{Hz}$. The peak frequency is determined by the underlying model of melting strings. In the specific example discussed in \hyperref[sec:Section2]{Section~2},
phenomenologically interesting values of $f_{\text{peak}}$ imply extremely weak couplings of the fields constituting cosmic strings, to the thermal bath. In this regard, GW emission
from melting strings may serve as a window to a (very) dark sector of the Universe. We comment on the Dark Matter implications of our scenario in \hyperref[sec:Section6]{Section~6}.
\section{From scale-invariance to melting cosmic strings} \label{sec:Section2}
In this section, we shall demonstrate via a specific model, how cosmic strings with the tension~\eqref{decreasing} arise. With this in mind, let us consider the following scale free renormalizable Lagrangian:
\begin{equation}
\label{modelbasic}
{\cal L}=-\frac{1}{4}F^{\mu \nu}F_{\mu \nu}+ \frac{1}{2} |D_{\mu} \chi|^2-\frac{1}{4}\lambda |\chi |^4 +\frac{1}{2} g^2 \cdot |\chi|^2 \cdot |\phi|^2 \; .
\end{equation}
Here $\chi$ is a field transforming under the $U(1)$ gauge group; $F_{\mu \nu} \equiv \partial_{\mu} A_{\nu} -\partial_{\nu} A_{\mu}$ is the gauge field $A_{\mu}$ strength tensor. The covariant derivative is given by $D_{\mu} \chi=\partial_{\mu} \chi-i e A_{\mu} \chi$, where $e$ is the gauge coupling constant. The field $\phi$ is a scalar multiplet comprising $N$ degrees of freedom.
Assuming that $\phi$ is in thermal equilibrium with the surrounding plasma described by the temperature $T$, we fix its variance to be
\begin{equation}
\langle \phi^{\dagger} \phi \rangle_T \approx \frac{N T^2}{12} \; .
\end{equation}
Crucially, we assume that $g^2$ is positive:
\begin{equation}
\label{choice}
g^2>0 \; .
\end{equation}
As a result, the effective potential of the field $\chi$ has non-trivial minima $\chi_{min} \neq 0$ located at
\begin{equation}
\label{minimum}
v \equiv |\chi_{\text{min}}| \approx \frac{g N^{1/2}T}{\sqrt{12 \lambda}} \; .
\end{equation}
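Explicitly, replacing $|\phi|^2$ in Eq.~\eqref{modelbasic} by its thermal average yields, up to $\chi$-independent terms, the approximate effective potential
\begin{equation*}
V_{\text{eff}}(\chi) \approx \frac{\lambda}{4} |\chi|^4 - \frac{g^2 N T^2}{24}\, |\chi|^2\,,
\end{equation*}
whose minimization with respect to $|\chi|$ reproduces Eq.~\eqref{minimum}.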
Note that the model~\eqref{modelbasic} with the choice of the sign as in Eq.~\eqref{choice} was discussed in Ref.~\cite{Ramazanov:2021eya},
and we can readily use some of the results derived there. In particular, following Ref.~\cite{Ramazanov:2021eya}, one can consider the field $\chi$ for the role
of Dark Matter. We briefly discuss this option in \hyperref[sec:Section6]{Section~6}. However, the scenario of Ref.~\cite{Ramazanov:2021eya} deals with the $Z_2$-symmetry group, thus leading to melting domain walls instead of cosmic strings and hence to quite distinct predictions regarding GWs.
We assume that initially the field $\chi$ is at zero, $\chi \simeq 0$, and remains stuck there for some time due to Hubble friction.
Rolling of the field $\chi$ to the minimum of the broken phase starts at the time $t_h$, when the Hubble rate becomes of the order of the thermal mass:
\begin{equation}
\label{startroll}
|M_{\text{thermal}, h}| \approx \frac{gN^{1/2}T_h }{\sqrt{12}} \simeq H_h \; .
\end{equation}
As the field $\chi$ reaches the minimum, a cosmic string network starts to form. Note that the rolling phase has a finite duration, and as a result, formation of cosmic strings is postponed until the time $t_l > t_h$. To capture this delay, we introduce the parameter
\begin{equation}
\label{epsilon}
\epsilon \equiv \frac{a_h}{a_l} \approx \frac{T_l}{T_h} \; ,
\end{equation}
which is independent of $g$ and $\lambda$, and depends on the quantum fluctuation of the field $\chi$ above the background $\chi=0$.
This quantum fluctuation, defined by the past evolution of the field $\chi$ at inflation and reheating, is crucial in triggering the roll towards the minimum.
It is worth remarking here that the parameter $\epsilon$ naturally takes values in the range $\epsilon \simeq 0.1-1$.
We assume that the transition to the spontaneously broken phase occurs at the radiation-dominated stage, so that
\begin{equation}
\label{Hubble}
H (T) =\sqrt{\frac{\pi^2 g_* (T)}{90}} \cdot \frac{T^2}{M_{\text{Pl}}} \; ,
\end{equation}
where $g_* (T)$ is the number of relativistic degrees of freedom; $M_{\text{Pl}} \approx 2.44 \cdot 10^{18}~\mbox{GeV}$ is the reduced Planck mass.
Combining Eqs.~\eqref{startroll},~\eqref{epsilon}, and~\eqref{Hubble}, one obtains the temperature at the onset of cosmic string formation:
\begin{equation}
\label{formation}
T_{l} \simeq 9 \cdot 10^{-2} \cdot \epsilon \cdot g \cdot M_{\text{Pl}} \cdot N^{1/2} \cdot \left(\frac{100}{g_* (T_l)} \right)^{1/2} \; .
\end{equation}
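For orientation, the numerical prefactor in Eq.~\eqref{formation} can be traced as follows: equating \eqref{startroll} with \eqref{Hubble} at $T=T_h$ and multiplying by $\epsilon$ gives
\begin{equation*}
T_l = \epsilon T_h \simeq \epsilon \, \frac{g N^{1/2}}{\sqrt{12}} \sqrt{\frac{90}{\pi^2 g_*}}\, M_{\text{Pl}}\,,
\end{equation*}
where we approximate $g_*(T_h) \approx g_*(T_l)$; the prefactor $\sqrt{90/(12 \pi^2 g_*)}$ indeed evaluates to $\simeq 9 \cdot 10^{-2}$ for $g_* = 100$.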
Substituting this into Eq.~\eqref{minimum}, we get the expectation value at the time $t_l$:
\begin{equation}
\label{expectation}
v_{l} \approx \frac{2.6 \cdot 10^{-2} \cdot \epsilon \cdot M_{\text{Pl}} \cdot N}{\sqrt{\beta}} \left(\frac{100}{g_* (T_l)} \right)^{1/2}\; ,
\end{equation}
where $\beta$ is defined as
\begin{equation}
\label{beta}
\beta \equiv \frac{\lambda}{g^4} \; .
\end{equation}
The minimal possible value of $\beta$ follows from stability in the $(\chi, \phi)$ field space, i.e., $\lambda \lambda_{\phi} \geq g^4$, where $\lambda_{\phi}$ is the self-interaction coupling constant of the field $\phi$. Consequently, $\beta$ is bounded as
~\cite{Ramazanov:2021eya}
\begin{equation}
\beta \geq \frac{1}{\lambda_{\phi}} \gtrsim 1\; .
\end{equation}
The latter inequality guarantees that we are in a weakly coupled regime, so that $\lambda_{\phi} \lesssim 1$.
Furthermore, the condition $\beta \gtrsim 1$ ensures smallness of loop quantum corrections, $\delta \lambda \simeq N g^4/ (4\pi^2)$.
In what follows, we will be primarily interested in very small self-interaction coupling constants $\lambda \simeq g^4$, corresponding to $\beta \simeq 1$.
Such a choice is natural, if the model enjoys an approximate shift-invariance, becoming an exact one in the limit $g \rightarrow 0$.
The cosmic string tension is primarily defined by the expectation value $v$:
\begin{equation}
\mu =\pi v^2 h \left(\frac{\lambda}{2e^2} \right)\; ,
\end{equation}
where $h$ is a slowly varying function of its argument $\lambda/(2e^2)$ (see below). According to Eq.~\eqref{minimum}, the tension is proportional to the square of the temperature of the Universe, $\mu \propto T^2$, and hence decreases with time as $\mu \propto 1/a^2$. Using Eq.~\eqref{expectation}, we obtain for the relevant quantity $G\mu$ at cosmic string formation:
\begin{equation}
\label{atformation}
G \mu_{l} \approx \frac{0.8 \cdot \epsilon^2 \cdot 10^{-4} \cdot N^2 \cdot h \left(\frac{\lambda}{2e^2} \right)}{\beta} \cdot \left(\frac{100}{g_* (T_{l})} \right) \; .
\end{equation}
For $g_* (T_l) \simeq 100-1000$, $N \simeq 1-10$, $\epsilon \simeq 0.1-1$, and $h \simeq 0.1-1$, the quantity $G\mu_l$ varies in the range
\begin{equation}
\label{range}
G\mu_{l} \simeq \frac{(10^{-8}-10^{-2})}{\beta} \; .
\end{equation}
Following the discussion above, we choose $\beta \simeq 1$ meaning that $G\mu_l \gtrsim 10^{-8}$. Intriguingly, in our setup, even cosmic strings with $G\mu_l \gg 10^{-7}$ are harmless for the CMB, because the decreasing tension becomes negligibly small at recombination. On the other hand, early time emission of GWs, when $\mu$ is large, can be detectable by the future GW interferometers or PTAs.
Our choice $\lambda \simeq g^4$ imposes an important limitation on the gauge coupling constant $e$. Indeed, the Coleman-Weinberg one loop correction to the effective potential is given by
\begin{equation}
V_{CW} \simeq \frac{3e^4}{4 \pi^2} |\chi|^4 \ln \frac{|\chi|^2}{\sigma^2} \; ,
\end{equation}
where $\sigma$ is the renormalization scale. We require that this gives a negligible contribution to the effective potential of the field $\chi$, in particular, to the self-interaction term $\sim \lambda |\chi|^4$. This is crucial, as we want to proceed in a scale-independent fashion. Consistency with $\lambda \simeq g^4$ then implies that $e \lesssim g$. For $\lambda \simeq 2e^2$, one has $h \simeq 1$, while for $\lambda \ll 2e^2$ the following asymptotic behaviour holds~\cite{Hill:1987qx}:
\begin{equation}
\label{hlarge}
h \left(\frac{\lambda}{2e^2} \right) \simeq \left(\ln \frac{2e^2}{\lambda} \right)^{-1} \; .
\end{equation}
This explains the range of values $h \simeq 0.1-1$ assumed in Eq.~\eqref{range}\footnote{For $e \ll \sqrt{\lambda}$, one has $h \simeq \ln \frac{\lambda}{2e^2} \gg 1$. However, in this case gauge bosons
are very light. As a result, most of the energy of the long string network goes into gauge field excitations rather than into loops, by analogy with the case of global strings. Reduced loop formation implies
less gravitational radiation emitted compared to the case of larger gauge couplings $e \gtrsim \sqrt{\lambda}$.}.
An important remark is in order here. At formation, the width of cosmic strings can be estimated as $\delta_l \sim 1/(\sqrt{\lambda} v_l) \sim \epsilon H^{-1}_l$,
which, strictly speaking, constitutes a significant fraction of the horizon even for $\epsilon \simeq 0.1$, cf. Ref.~\cite{Bettoni:2018pbl}. On the other hand, as the Universe expands, one has $\delta/ H^{-1} \propto 1/a$,
so that cosmic strings become thin relative to the horizon size, and thus can be treated as Nambu--Goto objects at $t\gg t_l$. Nevertheless, we will often rely on the results obtained for Nambu--Goto strings also at times as early as $t \simeq t_l$. We expect this to give a qualitatively correct picture for large loops, at least for $\epsilon \simeq 0.1$ assumed in what follows. According to Eq.~\eqref{atformation}, for these values of $\epsilon$ and
not extremely large $N$, i.e., $N \lesssim 10$, the quantity $G \mu$ at formation does not exceed $\sim 10^{-4}$, which we set to be a fiducial value of $G\mu_l$.
\section{Number density of cosmic string loops} \label{sec:Section3}
The evolution of cosmic strings in the model~\eqref{modelbasic} is greatly simplified as a result of its scale-invariance. Indeed, by carrying out the following field redefinitions of the variables $\chi$ and $A_\mu$:
\begin{equation}
\label{redefined}
\tilde{\chi}=\chi a \qquad \tilde{A}_{\mu}=A_{\mu}\; ,
\end{equation}
we find that cosmic strings within this model are described by the action
\begin{equation}
S=\int d^{3} {\bf x} d \tau \left[-\frac{1}{4} \eta^{\lambda \mu} \eta^{\rho \nu} F_{\lambda \rho} F_{\mu \nu}+\frac{1}{2} \eta^{\mu \nu} D_{\mu} \tilde{\chi} D_{\nu} \tilde{\chi}^{*}-\frac{\lambda}{4} \left(|\tilde{\chi}|^2 -\tilde{v}^2 \right)^2 \right] \; ,
\end{equation}
where $\tau$ is the conformal time, $\eta^{\mu \nu}$ is the Minkowski metric, and $\tilde{v}$ is the expectation value of the field $\tilde{\chi}$, i.e., $\tilde{v}=v \cdot a$. Hereafter, rescaled quantities are denoted by a tilde. According to Eq.~\eqref{minimum}, $v \propto 1/a$, hence $\tilde{v}$ is constant. We conclude that evolution of melting cosmic strings in a radiation-dominated Universe is equivalent to evolution of constant tension cosmic strings in Minkowski spacetime. In \hyperref[sec:AppA]{Appendix~A}, we reiterate this statement starting from the Nambu--Goto action. There we also discuss the generalization to cosmic strings whose tension has arbitrary time dependence.
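In particular, the tension associated with the rescaled field $\tilde{\chi}$ is constant: using $\mu = \pi v^2 h(\lambda/2e^2)$ together with $\tilde{v} = v a$, one finds
\begin{equation*}
\tilde{\mu} \equiv a^2 \mu = \pi \tilde{v}^2\, h\!\left(\frac{\lambda}{2e^2}\right) = \text{const}\,,
\end{equation*}
which can be identified with the tension of the equivalent constant-tension network in Minkowski spacetime.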
The cosmic string network enters the scaling regime soon after its formation. In the case of flat spacetime, this has been demonstrated with numerical simulations in Refs.~\cite{Vanchurin:2005yb} (for long strings) and~\cite{Vanchurin:2005pa} (for loops). In the scaling regime, the loop production
function--the number density of loops per unit conformal time per unit loop conformal length $\tilde{l} \equiv \frac{l(t)}{a(t)}$--is given by
\begin{equation}
f (\tau, \tilde{l}) =\frac{1}{\tau^5} f (x) \; ,
\end{equation}
where
\begin{equation}
x \equiv \frac{\tilde{l}}{\tau} \; .
\end{equation}
The form of $f(x)$ is independent of time.
The number density of loops per unit length produced in the scaling regime is related to the loop production function by
\begin{equation}
n (\tau, \tilde{l}) \equiv \frac{dN}{d{\bf x} d\tilde{l}}=\int^{\tau}_{\tau_{s}} \frac{d\tau'}{\tau^{'5}} f (x') \; ,
\end{equation}
where $\tau_s$ denotes the time when the cosmic string network settles into the scaling regime. Neglecting gravitational backreaction, the loop length $\tilde{l} =x \tau$ remains constant with time. Therefore, one has
\begin{equation}
x' \tau' =x \tau.
\end{equation}
Following Ref.~\cite{Blanco-Pillado:2013qja}, we make the change of the integration variable:
\begin{equation}
n (\tau, \tilde{l}) =\int^{\tilde{l}/\tau}_{\tilde{l}/\tau_{s}} \frac{dx'}{\tau^{'5}} f (x') \frac{\partial \tau'}{\partial x'} \; ,
\end{equation}
and obtain
\begin{equation}
\label{flatscaling}
n (\tau, \tilde{l})=\frac{1}{\tilde{l}^4} \int^{\tilde{l}/\tau_{s}}_{\tilde{l}/\tau} dx' x^{'3} f(x') \; .
\end{equation}
The loop number density per comoving volume per physical length $n(t, l)$ is related to $n (\tau, \tilde{l})$ by
\begin{equation}
\label{rel}
n (t, l) \equiv \frac{dN}{d{\bf x} dl}= \frac{1}{a(\tau)} n (\tau, \tilde{l}) \; .
\end{equation}
In the limit $\tau_{s} \rightarrow 0$, the integral in Eq.~\eqref{flatscaling} takes the form of Eq.~(15) in Ref.~\cite{Blanco-Pillado:2013qja}\footnote{One should also set $\nu=0$ in Eq.~(15) of Ref.~\cite{Blanco-Pillado:2013qja}, which corresponds to the
flat spacetime limit.}. However, as we will see in \hyperref[sec:Section4]{Section~4}, the most relevant contribution to GW emission comes from very early times. Therefore, it is crucial that we do not take the limit $\tau_s \rightarrow 0$.
To proceed, we need to make a choice of the loop production function $f(x)$. When evaluating GWs, we cannot reliably account for contributions arising from small loops, with $x \ll 0.1$, for two main reasons. First, in the model described in \hyperref[sec:Section2]{Section~2}, strings are relatively thick at the time of production,
and thus we expect small loop formation to be suppressed. Second, small loops, if abundantly produced, initially exhibit a strong departure from the scaling regime, which persists for a longer time compared to large loops~\cite{Vanchurin:2005pa, Ringeval:2005kr, Martins:2005es, Olum:2006ix}. Therefore,
we mostly discard small loops with $x \ll 0.1$ in the following analysis, possibly at the price of underestimating gravitational radiation. In this situation, it is natural to adopt the velocity-dependent one-scale (VOS) model~\cite{Kibble:1984hp, Bennett:1985qt}, which assumes that the size of loops created at any time $\tau$
is a fixed fraction of $\tilde{L} \propto \tau$, the distance between long strings.
This size is chosen to match the maximum size of loops obtained in numerical simulations~\cite{Vanchurin:2005pa}: $\tilde{l} =\alpha \tau$, where
\begin{equation}
\label{maximum}
\alpha \simeq 0.1 \; .
\end{equation}
That is, the function $f(x)$ is given by
\begin{equation}
\label{onescale}
f(x)=C \delta \left(x-\alpha \right) \; .
\end{equation}
There are two ways of determining the constant $C$ -- one is analytical and is summarized in \hyperref[sec:AppB]{Appendix~B}. It gives $C \simeq 500$.
The other involves matching to the function $f(x)$ derived from numerical simulations of cosmic strings in a flat spacetime~\cite{Vanchurin:2005pa}:
\begin{equation}
\label{loopdensity}
f (x) \approx \frac{A \Theta \left(\alpha-x \right)}{x^{\gamma}}\; ,
\end{equation}
where $A \approx 82$ and $\gamma \approx 1.63$. The constant $C$ is fixed by the requirement that Eqs.~\eqref{onescale} and~\eqref{loopdensity} give the same number density of loops, when substituted into Eq.~\eqref{flatscaling}:
\begin{equation}
\label{numerics}
C \approx 150 \; .
\end{equation}
This is only a factor of three below the analytically derived value. We shall assume the value~\eqref{numerics} in the following analysis.
Due to emission of GWs, loops shrink in size as time proceeds, and this effect should be taken into account when defining the number density of loops. However, as we shall now demonstrate, gravitational backreaction has practically no effect on the evolution of large loops in our case. The mass of the loop $m \equiv \mu l$ changes according to
\begin{equation}
\frac{dm}{dt}= -H m -\Gamma G\mu^2 (t)\; ,
\end{equation}
where $\Gamma \approx 50$~\cite{Vachaspati:1984gt, Blanco-Pillado:2017oxo}. The first term on the R.H.S. follows from an approximate scale-invariance of the model: all quantities with mass dimension redshift with the scale factor as $m \sim 1/a$ (modulo gravitational backreaction). Consequently, evolution
of the loop conformal length is given by
\begin{equation}
\frac{d\tilde{l}}{d\tau}=-\frac{\Gamma G \tilde{\mu}}{a^2 (\tau)} \; .
\end{equation}
Integrating this out, we get
\begin{equation}
\tilde{l} (\tau')=\tilde{l} (\tau) - \frac{\Gamma G \tilde{\mu} \cdot \tau}{a^2 (\tau)} \cdot \left(1-\frac{\tau}{\tau'} \right) \; .
\end{equation}
We see that for any given $\tau$ the second term on the R.H.S. reaches a constant value $\Gamma G \mu (\tau) \tau$ (recall that $\mu (\tau)=\tilde{\mu}/a^2 (\tau)$).
Thus, for $\tilde{l} (\tau) \gg \Gamma G \mu (\tau) \tau$, one can neglect gravitational backreaction. Since we are interested in large loops with $\tilde{l}/\tau \simeq 0.1$, this inequality is fulfilled, at least if the initial tension is not too large, $G \mu_l \lesssim 10^{-3}$. On the contrary, in the case of constant-tension cosmic strings, gravitational backreaction accumulates with time, so that loops evolve according to $l (t') =l (t)-\Gamma G \mu \cdot (t'-t)$, hence any loop eventually evaporates.
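As a quick cross-check with the fiducial numbers quoted above (our own estimate), at formation this criterion reads
\begin{equation}
\Gamma G \mu_l \lesssim 50 \times 10^{-3} =0.05 < \frac{\tilde{l}}{\tau} \simeq 0.1 \; ,
\end{equation}
and since $\mu (\tau) \tau \propto 1/\tau$ during radiation domination, the inequality only strengthens at later times.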
\section{Gravitational waves from cosmic strings} \label{sec:Section4}
\subsection{Generalities}
Cosmic string loops with a given length $l(t)$ emit GWs at frequencies $F^{(j)}=2j/l(t)$, where $j=1,2,3,...$ is the multipole number~\cite{Vachaspati:1984gt}.
These frequencies redshift with cosmic expansion, so that their current values are given by $f^{(j)}=(a(t)/a_0) \cdot F^{(j)}$, or
\begin{equation}
f^{(j)} =\frac{a(t)}{a_0} \cdot \frac{2j}{l (t)}=\frac{2j}{a_0 \tilde{l}} \; .
\end{equation}
Our goal is to obtain the fractional energy density of GWs emitted by loops
\begin{equation}
\Omega_{gw} (f) \equiv \frac{fd\rho_{gw}}{\rho_c d f} \; ,
\end{equation}
where $\rho_{gw}$ is the present day energy density of GWs and $\rho_c$ is the critical energy density.
We split $\Omega_{gw} (f)$ into a sum over harmonics:
\begin{equation}
\Omega_{gw} (f)=\sum^{\infty}_{j=1} \Omega^{(j)}_{gw}=\sum^{\infty}_{j=1}\frac{fd\rho^{(j)}_{gw}}{\rho_c d f} \; .
\end{equation}
Following Ref.~\cite{Blanco-Pillado:2013qja}, we relate $\frac{d\rho^{(j)}_{gw}}{df}$ to the power of GW emission per unit physical volume per unit frequency ${\cal P}^{(j)}_{gw} (t, F)$:
\begin{equation}
\frac{d\rho^{(j)}_{gw}}{d f} =\int^{t_0}_{t_{l}} dt' \left(\frac{a (t')}{a_0} \right)^4 \cdot {\cal P}^{(j)}_{gw} (t', F') \cdot \frac{\partial F'}{\partial f}=\int^{t_0}_{t_l} dt' \left(\frac{a(t')}{a_0} \right)^3 \cdot {\cal P}^{(j)}_{gw} \left( t', \frac{a_0}{a(t')} f \right) \; .
\end{equation}
Here the factors $(a(t')/a_0)^4$ and $\partial F'/\partial f=a_0/a(t')$ take into account the redshift of the GW energy density and frequency, respectively. In turn, ${\cal P}^{(j)}_{gw}$ is defined by the comoving number density of loops per unit length $n(t, l)$ through
\begin{equation}
{\cal P}^{(j)}_{gw} (t, F)=\int dl \frac{n (t, l)}{a^3 (t)} P^{(j)}(l ,F) \; ,
\end{equation}
where $P^{(j)}(l ,F)$ is the power emitted by a single loop. Assuming that loops develop and maintain only one cusp, one has approximately~\cite{Vachaspati:1984gt, Blanco-Pillado:2013qja, Blanco-Pillado:2017oxo}\footnote{The sub-dominant contributions to the GW power come from kinks and kink-kink collisions; they decay as $j^{-5/3}$ and $j^{-2}$, respectively.}
\begin{equation}
\label{singleloopgw}
P^{(j)} (l, F)=\frac{\Gamma G \mu^2 (t)}{\zeta \left(\frac{4}{3}, \infty \right)} \frac{1}{j^{4/3}} \delta \left(F- \frac{2j}{l (t)} \right) \; ,
\end{equation}
where
\begin{equation}
\zeta \left(\frac{4}{3}, \infty \right) =\sum^{\infty}_{j=1} \frac{1}{j^{4/3}} \approx 3.60 \; .
\end{equation}
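As a quick numerical cross-check (ours, not taken from the cited references), this value is reproduced by truncating the sum and adding its integral tail; the cutoff $N$ below is arbitrary:
\begin{verbatim}
# Check of zeta(4/3) ~ 3.60: partial sum plus its integral tail.
N = 100000
s = sum(j ** (-4.0 / 3.0) for j in range(1, N))
s += 3.0 * N ** (-1.0 / 3.0)  # tail: int_N^inf dj j^(-4/3) = 3 N^(-1/3)
print(round(s, 3))            # -> 3.601
\end{verbatim}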
The power-law decrease with the multipole number in Eq.~\eqref{singleloopgw} has a cutoff at some value $j_*$, above which it gets replaced by an exponential decay~\cite{Vachaspati:1984gt}. Typically $j_*$ is very large and therefore we have set it to infinity.
Strictly speaking, the formula~\eqref{singleloopgw} works only for $j \gg 1$; however, the deviation from Eq.~\eqref{singleloopgw} at $j \sim 1$ is within a factor of two~\cite{Blanco-Pillado:2017oxo}, and we ignore it in the following.
Note that Eq.~\eqref{singleloopgw} is commonly written for a constant tension; nevertheless, the extrapolation to a time-dependent tension is legitimate, as long as the tension changes negligibly during an oscillation period. This is indeed the case here, as $F^{(j)} \gg H$.
Putting everything together, we get
\begin{equation}
\label{gen}
\Omega^{(j)}_{gw} (f)=\frac{1}{\rho_c} \cdot \frac{2\Gamma G}{\zeta \left(\frac{4}{3}, \infty \right) j^{1/3} \cdot f} \cdot \int^{t_0}_{t_{l}} dt' \frac{a^2 (t') \mu^2 (t') n\left(t', \frac{2j a(t')}{a_0f} \right) }{a^5_0} \; .
\end{equation}
To proceed, we assume that the scaling regime is reached almost immediately upon string formation, i.e.,
\begin{equation}
\label{instantscaling}
t_{l} \simeq t_{s} \; .
\end{equation}
In reality, it takes some time for the network to settle into the scaling behaviour. We will see, however, that part of the GW spectrum, namely the low-frequency part, is unaffected
by the early-time departure from the scaling regime. For the rest of the spectrum, we will be satisfied with a crude estimate. With that said, we use Eqs.~\eqref{flatscaling} and~\eqref{rel} to define the number density of loops entering Eq.~\eqref{gen}:
\begin{equation}
n\left(t, \frac{2j a(t)}{a_0f} \right) = \frac{1}{a(\tau)} n\left(\tau, \frac{2j}{a_0f} \right) \simeq \frac{a^4_0 f^4}{16 a(\tau) j^4} \int^{\frac{2j}{a_0f \tau_{l}}}_{\frac{2j}{a_0f \tau}} dx x^3 f(x) \; .
\end{equation}
Finally, switching to conformal time and using $\rho_c =3H^2_0 M^2_{Pl}$, where $H_0$ is the Hubble constant, we rewrite Eq.~\eqref{gen} as
\begin{equation}
\label{generic}
\Omega^{(j)}_{gw} (f) \simeq \frac{\pi \Gamma G^2 f^3}{3H^2_0 \zeta \left(\frac{4}{3}, \infty \right) j^{13/3}} \cdot \int^{\tau_0}_{\tau_{l}} d\tau' \frac{a^2 (\tau')}{a_0} \mu^2 (\tau') \int^{\frac{2j}{a_0 f \tau_{l}}}_{\frac{2j}{a_0 f \tau'}} dx' x^{'3} f (x') \; .
\end{equation}
Note that, given the time-dependence of the tension $\mu$, the integral over $\tau'$ is saturated at very early times close to $\tau_l$. This is in contrast to the case of cosmic strings with a constant tension.
In particular, this results in a markedly non-flat spectrum, as we discuss in detail in the remainder of this section.
\subsection{Spectrum of gravitational waves}
We follow the VOS approach to model the loop distribution, such that the function $f(x)$ is chosen to be of the form~\eqref{onescale}. In doing so, we explicitly neglect the contribution of small loops. As mentioned in the previous sections, strings are rather thick initially, and therefore we expect the production of small loops to be suppressed. Nevertheless, at the end of this section, we comment
on the GW spectrum that would follow from the model~\eqref{loopdensity}, once an abundant production of small loops is assumed. We will see that the latter may strongly affect the spectrum in the intermediate frequency range, while leaving intact the shape of the spectrum in the low- and high-frequency regimes.
{\it Low frequency range.} With the VOS model assumed, the characteristic frequency of GWs is given by
\begin{equation}
\label{peakf}
f_{\text{peak}} \equiv \frac{2}{a_0 \alpha \tau_{l}} \approx \frac{2 H_{l}}{\alpha} \cdot \frac{a_{l}}{a_0} \; .
\end{equation}
The fact that this is indeed the peak frequency will become clear shortly.
We start with the low frequency regime, $f<f_{\text{peak}}$. In this case, using Eq.~\eqref{onescale}, we can express the inner integral in Eq.~\eqref{generic} as follows:
\begin{align}
\label{twooptions}
\int^{\frac{2j}{a_0 f \tau_{l}}}_{\frac{2j}{a_0 f \tau'}} dx' x^{'3} f (x')=
\begin{cases}
0 ~\qquad \tau' < \tilde{\tau}^{(j)}\\
C \alpha^3 \quad \tau' \geq \tilde{\tau}^{(j)}\; ,
\end{cases}
\end{align}
where
\begin{equation}
\label{latertime}
\tilde{\tau}^{(j)} \equiv \frac{2j }{a_0 \alpha f } = \tau_{l} \cdot j \cdot \frac{f_{\text{peak}}}{f} \; .
\end{equation}
Substituting Eq.~\eqref{twooptions} into Eq.~\eqref{generic}, we obtain
\begin{equation}
\label{interexp}
\Omega^{(j)}_{gw} (f) \simeq\frac{\pi C\alpha^3 \Gamma G^2 f^3}{3H^2_0 \zeta \left(\frac{4}{3}, \infty \right) j^{13/3}} \cdot \frac{\mu^2 (\tilde{\tau}^{(j)}) a^2 (\tilde{\tau}^{(j)}) \tilde{\tau}^{(j)} }{a_0} \; .
\end{equation}
We observe that $\mu (\tilde{\tau}^{(j)}) a^2 (\tilde{\tau}^{(j)}) =\mu_{l} \cdot a^2_{l}$ and $a (\tilde{\tau}^{(j)})=a_l \cdot \tilde{\tau}^{(j)}/\tau_l$, and then use Eq.~\eqref{latertime} to express the time $\tilde{\tau}^{(j)}$ in terms of $\tau_l$.
Next, we express the time $\tau_l$ through the Hubble rate $H_{l}=1/(a_{l} \tau_{l})$ and $\alpha$ through $f_{\text{peak}}$ by making use of Eq.~\eqref{peakf}. Finally, we get for $f<f_{\text{peak}}$:
\begin{equation}
\label{modesmall}
\Omega^{(j)}_{gw} (f) \simeq \frac{1}{j^{16/3}} \cdot \frac{8\pi C \Gamma (G \mu_{l})^2}{3 \zeta \left(\frac{4}{3}, \infty \right)} \cdot \left(\frac{H_{l}}{H_0} \right)^2 \cdot \left(\frac{a_{l}}{a_0} \right)^4 \cdot \left(\frac{f}{f_{\text{peak}}} \right)^4 \; .
\end{equation}
Clearly, higher multipoles with $j>1$ contribute negligibly to the low frequency range: summing over all harmonics would multiply the result by $\sum^{\infty}_{j=1} j^{-16/3} =\zeta \left(\frac{16}{3} \right) \approx 1.03$. The overall fractional energy density of GWs can thus be well approximated by counting only the fundamental harmonic $j=1$:
\begin{equation}
\label{low}
\Omega_{gw} (f<f_{\text{peak}}) \simeq \Omega_{gw, \text{peak}} \cdot \left(\frac{f}{f_{\text{peak}}} \right)^4 \; .
\end{equation}
Here $\Omega_{gw, \text{peak}}$ is the peak energy density of GWs given by
\begin{equation}
\label{peakgen}
\Omega_{gw, \text{peak}} \simeq \frac{8\pi C \Gamma (G \mu_{l})^2}{3 \zeta \left(\frac{4}{3}, \infty \right)} \cdot \left(\frac{H_{l}}{H_0} \right)^2 \cdot \left(\frac{a_{l}}{a_0} \right)^4 \; .
\end{equation}
Using the values $\Gamma \approx 50$, $C \simeq 150$, $\zeta \left(\frac{4}{3}, \infty \right) \approx 3.60$, and
\begin{equation}
\left(\frac{H_l}{H_0} \right)^2 \cdot \left(\frac{a_l}{a_0} \right)^4 \approx 2.6 \cdot 10^{-5} \cdot \left(\frac{100}{g_* (T_l)} \right)^{1/3} \; ,
\end{equation}
we obtain
\begin{equation}
\label{peak}
\Omega_{gw, \text{peak}} \simeq 4.5 \cdot 10^{-9} \cdot \left(\frac{G\mu_{l}}{10^{-4}} \right)^2 \cdot \left(\frac{100}{g_* (T_{l})} \right)^{1/3} \; .
\end{equation}
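The numerical coefficient in Eq.~\eqref{peak} can be verified directly from the quoted inputs; the following sketch (ours) hard-codes the redshift factor for $g_* (T_l) =100$:
\begin{verbatim}
import math
# Inputs quoted in the text; 2.6e-5 is the redshift factor for g_* = 100.
Gamma, C, zeta43, Gmu_l = 50.0, 150.0, 3.60, 1e-4
peak = 8.0 * math.pi * C * Gamma * Gmu_l ** 2 / (3.0 * zeta43) * 2.6e-5
print(f"{peak:.1e}")  # -> 4.5e-09, reproducing Eq. (peak)
\end{verbatim}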
GWs emitted in the range of frequencies $f \ll f_{\text{peak}}$ provide a particularly clean probe of our scenario, because they come from times considerably later than $\tau_l$, when the departure from the scaling regime is moderate. One can also show that the result~\eqref{low} is largely independent of $f(x)$. In particular, we would get the same had we chosen to use the loop production function as in Eq.~\eqref{loopdensity}.
{\it High frequency range.} Now let us discuss the range of high frequencies with $f>f_{\text{peak}}$. In this case, the discussion above applies and in particular Eq.~\eqref{modesmall} holds, but only for multipole numbers $j > f/f_{\text{peak}}$. Indeed, for $j < f/f_{\text{peak}}$, one has
\begin{equation}
\frac{2j}{a_0 f \tau_l} < \frac{2}{a_0 f_{\text{peak}} \tau_l} =\alpha
\end{equation}
and hence,
\begin{equation}
\int^{\frac{2j}{a_0 f \tau_l}}_{\frac{2j}{a_0f \tilde{\tau}^{(j)}}} dx' x^{'3} f(x') = C\int^{\frac{2j}{a_0 f \tau_l}}_{\frac{2j}{a_0f \tilde{\tau}^{(j)}}} dx' x^{'3} \delta \left(x'-\alpha \right) =0 \; .
\end{equation}
Thus, according to Eq.~\eqref{generic}, we must set $\Omega^{(j)}_{gw}=0$ for $j<f/f_{\text{peak}}$. Performing the summation in Eq.~\eqref{modesmall} from $j \approx f/f_{\text{peak}}$ and using Eq.~\eqref{peakgen} we obtain
\begin{equation}
\label{high}
\Omega_{gw} (f > f_{\text{peak}}) \simeq c(f) \cdot \Omega_{gw, \text{peak}} \left(\frac{f_{\text{peak}}}{f} \right)^{1/3} \; ,
\end{equation}
where $c(f)$ is defined as
\begin{equation}
c(f) =\left(\frac{f}{f_{\text{peak}}} \right)^{13/3} \cdot \sum^{\infty}_{j > f/f_{\text{peak}}} \frac{1}{j^{16/3}} \; .
\end{equation}
The discontinuity between Eqs.~\eqref{low} and~\eqref{high} at $f=f_{\text{peak}}$ is a consequence of the one-scale approximation used. In reality, the function $f(x)$ is smooth around the peak $x \simeq \alpha$, and the discontinuity is avoided.
One can check that the function $c(f)$ quickly relaxes to a constant value as one increases $f$:
\begin{equation}
c(f) \rightarrow \frac{3}{13} \; .
\end{equation}
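This limit follows from replacing the sum by its integral approximation (our intermediate step, increasingly accurate at $f \gg f_{\text{peak}}$):
\begin{equation}
c(f) \simeq \left(\frac{f}{f_{\text{peak}}} \right)^{13/3} \int^{\infty}_{f/f_{\text{peak}}} \frac{dj}{j^{16/3}} =\left(\frac{f}{f_{\text{peak}}} \right)^{13/3} \cdot \frac{3}{13} \left(\frac{f}{f_{\text{peak}}} \right)^{-13/3} =\frac{3}{13} \; .
\end{equation}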
In practice, we will set $c(f)=3/13$ for all $f>f_{\text{peak}}$. Note that the slow decay with frequency in Eq.~\eqref{high} is independent of the choice of the loop production function $f(x)$, and merely reflects the generic dependence on the multipole number in Eq.~\eqref{singleloopgw}.
{\it Possible contribution of small loops.} To estimate the possible effect of small loops
neglected in the analysis above, one turns to the model~\eqref{loopdensity}. Again we assume that
the string network settles immediately into the scaling regime upon its formation. The lower bound on the loop size is set by gravitational backreaction discussed at the end of \hyperref[sec:Section3]{Section~3}: $x \gtrsim \Gamma G \mu (\tau)$, where $x =\tilde{l}/\tau$. One can show
that the loops saturating this lower bound give the main contribution to the peak energy density of GWs. As a result, the peak frequency of GWs is shifted compared to the VOS model:
\begin{equation}
f_{\text{peak}} \simeq f^{VOS}_{\text{peak}} \cdot \frac{\alpha}{\Gamma G \mu_l } \; ,
\end{equation}
where $f^{VOS}_{\text{peak}}$ is given by Eq.~\eqref{peakf}. In the intermediate frequency range $f^{VOS}_{\text{peak}}\lesssim f \lesssim f_{\text{peak}}$, there is a
growth of GW energy density $\Omega_{gw} (f) \propto f^{\gamma-1}$. In particular, for the value $\gamma \approx 1.6$
fitting numerical simulations of Ref.~\cite{Vanchurin:2005pa}, one has $\Omega_{gw} (f) \propto f^{0.6}$. In other words, the inclusion of small loops leads to a larger value of $\Omega_{gw, \text{peak}}$ compared to Eq.~\eqref{peak}:
\begin{equation}
\Omega_{gw, \text{peak}} \simeq \Omega^{VOS}_{gw, \text{peak}} \cdot \left(\frac{\alpha}{\Gamma G \mu_l} \right)^{0.6} \; .
\end{equation}
Thus, Eq.~\eqref{peak} should be viewed as a conservative lower bound on the peak energy density of GWs. On the other hand, the behaviour in the low- and high-frequency regimes, i.e.,
$\Omega_{gw} \propto f^4$ for $f \lesssim f^{VOS}_{\text{peak}}$ and $\Omega_{gw} \propto f^{-1/3}$ for $f \gtrsim f_{\text{peak}}$, respectively, still holds.
\section{Prospects for observations} \label{sec:Section5}
In Fig.~\ref{gw}, we show the spectrum of GWs in terms of $\Omega_{gw} \cdot h^2_0$, where $h_0 \approx 0.7$ is the dimensionless Hubble constant, for different $G\mu_l$ and $f_{\text{peak}}$. We
compare it with the sensitivity of current and future detectors: LIGO~\cite{LIGOScientific:2019vic, LIGOScientific:2014qfs}, Einstein Telescope (ET)~\cite{Sathyaprakash:2012jk}, Cosmic Explorer (CE)~\cite{LIGOScientific:2016wof}, DECIGO~\cite{Kawamura:2011zz}, LISA~\cite{LISA:2017pwj, Auclair:2019wcv}, and PTAs~\cite{NANOGrav:2020bcs, Kramer:2013kea, Janssen:2014dka}. For the peak
frequency lying in the ranges $10^{-9}~\mbox{Hz} \lesssim f_{\text{peak}} \lesssim 10^{-7}~\mbox{Hz}$ and $10^{-5}~\mbox{Hz} \lesssim f_{\text{peak}} \lesssim 100~\mbox{Hz}$, one will be able to probe melting strings with initial tensions down to
$G\mu_l \simeq 10^{-6}$. Note that astrometric measurements have a promising capability to fill in the existing frequency gap between the SKA and LISA sensitivity curves~\cite{Garcia-Bellido:2021zgu}.
So far, our discussion of the GW spectrum has been model-independent: it only assumed that the tension decreases with time as $\mu \propto 1/a^2$.
Now, let us relate the peak frequency $f_{\text{peak}}$ of GW emission given by Eq.~\eqref{peakf} to the parameters of the model~\eqref{modelbasic}. One first connects $f_{\text{peak}}$ with the temperature $T_l$ at string formation:
\begin{equation}
\label{peaktemperature}
f_{\text{peak}} \approx 20 \cdot \frac{T_l \cdot T_0}{M_{Pl}} \cdot \left(\frac{g_* (T_l)}{100} \right)^{1/6} \; ,
\end{equation}
where $T_0 \approx 2.73~\mbox{K}$ is the present-day temperature of the Universe. Next we substitute Eq.~\eqref{formation} into Eq.~\eqref{peaktemperature} and obtain
\begin{equation}
\label{peakfreq}
f_{\text{peak}} \simeq 60~\mbox{Hz} \cdot N^{1/2} \cdot \left(\frac{g}{10^{-9}} \right) \cdot \left(\frac{\epsilon}{0.1} \right) \cdot \left(\frac{100}{g_* (T_l)} \right)^{1/3} \; .
\end{equation}
Note that the peak frequency is mainly defined by the portal constant $g$ and is independent of the string tension $\mu_l$.
For $g \simeq 10^{-9}$ we enter the range accessible by LIGO, ET, and CE. Further decreasing $g$, we cover the range of DECIGO and LISA.
Frequencies characteristic of PTAs correspond to extremely small constants $g \lesssim 10^{-18}$. We conclude that gravitational
radiation from melting cosmic strings serves to probe particle physics in a very weakly coupled regime.
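For orientation, Eq.~\eqref{peakfreq} can be inverted for the portal constant; the sketch below (ours) assumes $N =1$, $\epsilon =0.1$ and $g_* (T_l) =100$:
\begin{verbatim}
# Rough inversion of Eq. (peakfreq) for N = 1, eps = 0.1, g_* = 100.
def g_for_peak(f_peak_hz):
    return 1e-9 * f_peak_hz / 60.0

for f in (1e-8, 1e-3, 60.0):  # roughly the PTA, LISA and LIGO bands
    print(f"{f:.0e} Hz -> g ~ {g_for_peak(f):.1e}")
\end{verbatim}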
\begin{figure}[tb!]
\begin{center}
\includegraphics[scale=.98]{melt_string_GW.pdf}
\caption{The spectrum of GWs emitted by the network of melting cosmic strings is shown for different values of $G\mu_l$ and $f_{\text{peak}}$. The choice of peak frequencies $f_{\text{peak}}$ corresponds to the choice of
portal constants $g \simeq 10^{-9},~10^{-13},~10^{-19}$ in the model~\eqref{modelbasic}. The shaded regions correspond to those accessible with current and future
GW detectors.}\label{gw}
\end{center}
\end{figure}
For substantially larger $f_{\text{peak}}$ and $g$, away from the range accessible by future GW interferometers, emission from melting strings may still have observational consequences through its impact on Big Bang Nucleosynthesis (BBN). The GW background acts as dark radiation and thus can be parameterized in terms of the departure from
the effective number of neutrino species, $\Delta N_{\nu} \approx N_{\nu}-N_{\nu, \text{eff}}$, where $N_{\nu, \text{eff}} \approx 3.046$:
\begin{equation}
\Omega_{gw, \text{BBN}} \approx \frac{7}{4} \cdot \left(\frac{4}{11} \right)^{4/3} \cdot \frac{\Delta N_{\nu}}{g_* (T_{\text{BBN}})} \; .
\end{equation}
Using the Planck bounds $N_{\nu}=2.99 \pm 0.17$~\cite{Planck:2018vyg}, which gives $\Delta N_{\nu} \lesssim 0.11$, and $g_* (T_{\text{BBN}}) \approx 3.4$, we get $\Omega_{gw, \text{BBN}} \lesssim 0.015$.
Our prediction of $\Omega_{gw, \text{BBN}}$ can be easily inferred from Eq.~\eqref{peak}, which must be divided by
\begin{equation}
\frac{H^2_{\text{BBN}}}{H^2_0} \left(\frac{a_{\text{BBN}}}{a_0} \right)^4 \approx 2.6 \cdot 10^{-5} \cdot \left(\frac{100}{g_* (T_{\text{BBN}})} \right)^{1/3} \; .
\end{equation}
We get
\begin{equation}
\Omega_{gw, \text{BBN}} \simeq 1.7 \cdot 10^{-4} \cdot \left(\frac{G\mu_l}{10^{-4}} \right)^{2} \cdot \left(\frac{g_* (T_{\text{BBN}})}{g_* (T_{l})} \right)^{1/3} \; .
\end{equation}
The BBN bound on $\Omega_{gw, \text{BBN}}$ can be used to set an upper limit on $G\mu_l$:
\begin{equation}
G\mu_{l} \lesssim 1.6 \cdot 10^{-3} \; ,
\end{equation}
where we have assumed $g_* (T_l) \approx 100$. This is a rather weak bound from the viewpoint of the model~\eqref{modelbasic}, which typically leads to much smaller $G\mu_l$.
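Both BBN numbers above follow from a few lines of arithmetic; the following check (ours) simply combines the quoted formulas:
\begin{verbatim}
# Dark-radiation bound and the implied limit on G mu_l.
dNnu, g_bbn, g_l = 0.11, 3.4, 100.0
omega_max = 7.0 / 4.0 * (4.0 / 11.0) ** (4.0 / 3.0) * dNnu / g_bbn
Gmu_max = 1e-4 * (omega_max
                  / (1.7e-4 * (g_bbn / g_l) ** (1.0 / 3.0))) ** 0.5
print(f"{omega_max:.3f}")  # -> 0.015
print(f"{Gmu_max:.1e}")    # -> 1.6e-03
\end{verbatim}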
While the future BBN measurements may slightly improve this bound, we expect direct detection of GWs to be the most powerful probe of melting cosmic strings.
\section{Discussion} \label{sec:Section6}
In the present work we have discussed cosmic strings with a decreasing tension~\eqref{decreasing}. Such topological defects can be anticipated if the underlying field theory of particle physics is scale-invariant at high energies (modulo inclusion of gravity).
In \hyperref[sec:Section2]{Section~2}, we considered an example of a model that enjoys scale-invariance and predicts the existence of melting cosmic strings.
The latter emit GWs, which can be observable in a well-motivated range of parameter space.
Let us summarize our prediction of the spectrum of GWs produced by the network of melting cosmic strings:
\begin{align}
\label{spectrum}
\Omega_{gw} \cdot h^2_0 \simeq 2.3 \cdot 10^{-9} \cdot \left(\frac{G \mu_l}{10^{-4}} \right)^{2} \cdot \left(\frac{100}{g_* (T_l)} \right)^{1/3} \cdot
\begin{cases}
\left(\frac{f}{f_{\text{peak}}} \right)^4 \qquad \qquad f \lesssim f_{\text{peak}} \\
\frac{3}{13} \left(\frac{f_{\text{peak}}}{f} \right)^{1/3} \qquad f \gtrsim f_{\text{peak}}
\end{cases}
\,.
\end{align}
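For quick comparisons with sensitivity curves, Eq.~\eqref{spectrum} is easily encoded; this is a minimal sketch (ours), with $f_{\text{peak}}$ and $G\mu_l$ as free inputs and $g_* (T_l)$ defaulting to $100$:
\begin{verbatim}
# Minimal sketch of the broken power law in Eq. (spectrum).
def omega_gw_h2(f, f_peak, Gmu_l, g_star_l=100.0):
    amp = 2.3e-9 * (Gmu_l / 1e-4) ** 2 * (100.0 / g_star_l) ** (1.0 / 3.0)
    if f <= f_peak:
        return amp * (f / f_peak) ** 4
    return amp * (3.0 / 13.0) * (f_peak / f) ** (1.0 / 3.0)

# e.g. omega_gw_h2(1.0, 60.0, 1e-4) -> ~1.8e-16
\end{verbatim}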
See Fig.~\ref{gw} for further details. In the particular model of cosmic string formation~\eqref{modelbasic}, the peak frequency $f_{\text{peak}}$ is given by Eq.~\eqref{peakfreq}. Let us stress that for $f \gtrsim f_{\text{peak}}$ the expression~\eqref{spectrum} should be viewed as a crude estimate, as our derivation of the GW spectrum relied on some strong assumptions. Namely, we assumed that the cosmic string network immediately
settles into the scaling regime. Furthermore, we neglected the cosmic string width, which may nevertheless constitute a sizeable fraction of the horizon at the string formation time $\tau_l$ (see the discussion at the end of \hyperref[sec:Section2]{Section~2}). Note, however, that the behaviour at large frequencies follows from the dependence on the multipole number $j$ in Eq.~\eqref{singleloopgw} -- as long as the latter holds approximately, one should expect the behaviour $\Omega_{gw} \propto f^{-1/3}$ at $f \gg f_{\text{peak}}$.
On the other hand, the expression~\eqref{spectrum}
is accurate at low frequencies $f \ll f_{\text{peak}}$ (modulo a possible factor-of-a-few uncertainty in determining the coefficient $C$ in Eq.~\eqref{onescale}), because it corresponds to GWs emitted at times $\tau \gg \tau_l$, when departures from the scaling regime are small and cosmic strings can be treated as thin objects. In this regard, the low-frequency part of the spectrum, $\Omega_{gw} \propto f^4$, is a particularly clean probe of our scenario, which also does not depend on the shape of the loop production function $f(x)$.
In the present work, we did not discuss the cosmological role of the field $\chi$ constituting cosmic strings. According to Eq.~\eqref{peakfreq}, in the range accessible by GW detectors, the field $\chi$ is very weakly coupled to the thermal bath. Therefore, one could reasonably consider the field $\chi$ to assume the role of Dark Matter. For that purpose, we should allow a small breaking of the scale-invariance by introducing a mass term for the field $\chi$:
\begin{equation}
S_{\text{mass}}=-\int d^4 x\sqrt{-g} \frac{M^2 |\chi|^2}{2} \; .
\end{equation}
Note that for the couplings $g \lesssim 10^{-9}$, which correspond to $f_{\text{peak}} \lesssim 100~\mbox{Hz}$, the standard freeze-out and freeze-in mechanisms are not efficient. Nevertheless, one can create the right amount of Dark Matter at the inverse phase transition~\cite{Ramazanov:2021eya, Babichev:2020xeg, Ramazanov:2020ajq},
when the thermal mass of the field $\chi$ drops down to the bare mass $M$. The field $\chi$ gets offset from the minimum around this time and starts
oscillating. For the masses~\cite{Ramazanov:2021eya}
\begin{equation}
M \simeq 0.6~\mbox{eV} \cdot \frac{\beta^{3/5}}{\sqrt{N}} \cdot \left(\frac{g}{10^{-9}} \right)^{7/5} \; ,
\end{equation}
the energy density of these oscillations matches the observed Dark Matter abundance in the Universe. Despite the extremely weak couplings involved, this Dark Matter scenario is naturally connected to
the production of GWs through the early-time formation of melting strings, and thus can be tested in future experiments.
{\it Acknowledgments.} W.~E. and S.~R. acknowledge the Czech Science Foundation, GA\v CR, for financial support under the grant number 20-16531Y. R.~S. is supported by the project MSCA-IF IV FZU - CZ.02.2.69/0.0/0.0/$20\_079$/0017754 and acknowledges the European Structural and Investment Fund and the Czech Ministry of Education, Youth and Sports.
\section*{Appendix A: Equation of motion of melting Nambu--Goto strings}\label{sec:AppA}
The dynamics of infinitely thin strings is described by the Nambu--Goto action, which is extrapolated to the case of a time-dependent tension in a straightforward manner~\cite{Yamaguchi:2005gp, Ichikawa:2006rw}:
\begin{equation}
\label{ng}
S=-\int d^2 \zeta \sqrt{-\gamma} \mu (\tau)\; ,
\end{equation}
where $\zeta =(\zeta^{1}, \zeta^{2})$ are the worldsheet coordinates, $\gamma_{ab} =g_{\mu \nu} x^{\mu}_{,a} x^{\nu}_{,b}$ is the worldsheet metric, and $\gamma \equiv \mbox{det} \gamma_{ab}$. Latin indices denote derivatives
with respect to the worldsheet coordinates. Varying this action and accounting for the time-dependence of the tension, one gets
\begin{equation}
\label{eqmod}
\frac{1}{\mu \sqrt{-\gamma}} \partial_a \left(\mu \sqrt{-\gamma}\gamma^{ab} g_{\mu \nu} x^{\nu}_{,b} \right) -\frac{1}{2} \frac{\partial g_{\lambda \nu}}{\partial x^{\mu}} \gamma^{ab} x^{\lambda}_{,a} x^{\nu}_{,b} -\frac{\partial_{\mu} \mu}{\mu}=0 \; ,
\end{equation}
which can be rewritten in a more conventional form:
\begin{equation}
\label{convenient}
\frac{1}{\mu \sqrt{-\gamma}} \partial_a \left(\mu \sqrt{-\gamma} \gamma^{ab} x^{\rho}_{,b} \right) +\gamma^{ab} \Gamma^{\rho}_{\lambda \nu} x^{\lambda}_{,a} x^{\nu}_{,b}-\frac{\partial^{\rho} \mu}{\mu}=0 \; .
\end{equation}
Substituting $g_{\mu \nu}=a^2 (\tau) \eta_{\mu \nu}$, where $\eta_{\mu \nu}$ is the Minkowski metric, one can check that the last two terms in Eq.~\eqref{eqmod} cancel each other for $\mu \propto 1/a^2$.
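Explicitly, using $\eta_{\lambda \nu} \gamma^{ab} x^{\lambda}_{,a} x^{\nu}_{,b} =\gamma_{ab} \gamma^{ab}/a^2 =2/a^2$, the second term of Eq.~\eqref{eqmod} reduces to $-\frac{1}{2} \partial_{\mu} (a^2) \cdot \frac{2}{a^2} =-2 \partial_{\mu} a/a$, while $\mu \propto 1/a^2$ gives $-\partial_{\mu} \mu/\mu =+2 \partial_{\mu} a/a$, so the two contributions indeed cancel.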
Therefore, for $\mu \propto 1/a^2$, Eq.~\eqref{eqmod} simplifies to
\begin{equation}
\partial_a \left(\mu \sqrt{-\gamma}\gamma^{ab} g_{\mu \nu} x^{\nu}_{,b} \right) =0\; .
\end{equation}
We observe that $\mu g_{\mu \nu}=\tilde{\mu} \eta_{\mu \nu}$, where $\tilde{\mu}=\mu a^2=\mbox{const}$, and that $\sqrt{-\gamma} \gamma^{ab}=\sqrt{-\tilde{\gamma}} \tilde{\gamma}^{ab}$,
where $\tilde{\gamma}_{ab}$ is defined as
\begin{equation}
\tilde{\gamma}_{ab} \equiv \eta_{\mu \nu} x^{\mu}_{,a} x^{\nu}_{,b} \; .
\end{equation}
Finally, we get
\begin{equation}
\partial_a \left(\tilde{\mu} \sqrt{-\tilde{\gamma}}\tilde{\gamma}^{ab} \eta_{\mu \nu} x^{\nu}_{,b} \right) =0 \; .
\end{equation}
This equation describes evolution of a string with a constant tension in a flat spacetime. In fact, one could anticipate this result from the beginning, based on the scale-invariance of our setup.
Note that Eq.~\eqref{convenient} is generic, as it does not assume a particular time-dependence of the tension $\mu (\tau)$. Here let us make an important comment. A different equation of motion compared to Eq.~\eqref{convenient} has been derived from the same action~\eqref{ng} in Refs.~\cite{Yamaguchi:2005gp, Ichikawa:2006rw}. The latter assume that the tension
depends on the worldsheet coordinates $\zeta^{a}$ rather than spacetime coordinates $x^{\mu}$ as in our case. Therefore, in Refs.~\cite{Yamaguchi:2005gp, Ichikawa:2006rw}, the variation of the tension $\mu$ is set to zero, when applying the least action principle. However, in our case the tension is directly linked to the temperature of the Universe, which depends on the spacetime coordinates. Consequently, one cannot disregard the variation of $\mu$.
\section*{Appendix B: VOS model} \label{sec:AppB}
In the VOS model, the loop production function is given by Eq.~\eqref{onescale}. In the present Appendix, we aim to analytically derive
the coefficient $C$ entering there. Our discussion of the VOS model mainly follows Refs.~\cite{Martins:1996jp, Martins:2000cs}.
One assumes that loops are produced from long strings according to
\begin{equation}
\label{loss}
\frac{d\tilde{\rho}_{\infty}}{d\tau} =-\bar{c} v \frac{\tilde{\rho}_{\infty}}{\tilde{L}} \; ,
\end{equation}
where
\begin{equation}
\label{longenergy}
\tilde{\rho}_{\infty}=\frac{\tilde{\mu}}{\tilde{L}^2} \; ,
\end{equation}
is the rescaled energy density of long strings; $\bar{c}$ is the so-called loop chopping efficiency parameter, determined numerically, and $v$ is the root-mean-square velocity.
Combining Eqs.~\eqref{loss} and~\eqref{longenergy}, we obtain the scaling behaviour of long strings:
\begin{equation}
\label{longscaling}
\frac{\tilde{L}}{\tau} =\frac{\bar{c} v}{2} \; .
\end{equation}
Flat spacetime simulations of Refs.~\cite{Martins:2003vd, Moore:2001px} give
\begin{equation}
\label{chopping}
\bar{c} \approx 0.57 \; .
\end{equation}
The velocity $v$ is obtained from the equation~\cite{Martins:1996jp, Martins:2000cs}
\begin{equation}
\frac{d v}{d \tau}=(1-v^2) \cdot \frac{k (v)}{\tilde{L}} \; ,
\end{equation}
where the function $k (v)$ is given by
\begin{equation}
k (v)=\frac{2\sqrt{2}}{\pi} (1-v^2) \cdot (1+2\sqrt{2} v^3 ) \cdot \frac{1-8 v^6}{1+8 v^6} \; .
\end{equation}
The system has an attractor solution
\begin{equation}
\label{velocity}
v =\frac{1}{\sqrt{2}} \; ,
\end{equation}
as it should be in a flat spacetime~\cite{Vilenkin}.
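Indeed, at $v =1/\sqrt{2}$ the factor $1-8 v^6$ in $k (v)$ vanishes, so that $dv/d\tau =0$; for velocities slightly below (above) this value, $k (v)$ is positive (negative), driving $v$ back towards the fixed point.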
The energy loss~\eqref{loss} is related to the loop production function $f(x)$ by
\begin{equation}
\label{energyloss}
\frac{d\tilde{\rho}_{\infty}}{d\tau} =-\tilde{\mu} \int^{+\infty}_0 \tilde{l} f(\tilde{l}, \tau) d\tilde{l} \; .
\end{equation}
Using Eqs.~\eqref{loss} and~\eqref{longscaling}, we get
\begin{equation}
\label{intvos}
\int^{+\infty}_0 x f(x) dx =\frac{8}{\bar{c}^2 v^2} \; .
\end{equation}
Substituting Eq.~\eqref{onescale}, we obtain
\begin{equation}
C=\frac{8}{\alpha \bar{c}^2 v^2} \; .
\end{equation}
Finally, substituting the values~\eqref{chopping},~\eqref{velocity} and using $\alpha \simeq 0.1$, we obtain
\begin{equation}
C \simeq 500 \; .
\end{equation}
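Explicitly, substituting the numbers gives $C =8/\left(0.1 \times 0.57^2 \times 0.5 \right) \approx 4.9 \times 10^2$, which we round to the quoted value.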
This is in good agreement with the value~\eqref{numerics} derived from matching to numerical simulations of Ref.~\cite{Vanchurin:2005pa}.
\section{Introduction}
AGN comprise $\sim$34\% of the VHE sky catalog and are the
most numerous class of identified VHE $\gamma$-ray sources \cite{TEVCAT}.
These objects have long dominated the observing programs of Northern VHE observatories
such as VERITAS \cite{VERITAS_spec}. As of \emph{ICRC 2021}, AGN
comprise approximately 52\% of the known VHE sources
with Northern declinations ($\delta > 0^{\circ}$), and 76\%
of the 84 VHE-detected AGN are located in the North.
These dominantly non-thermal sources have
spectral energy distributions (SEDs) spanning from radio waves
through $\gamma$-rays. Their emission is highly
variable at all wavelengths, and on time scales as short as minutes.
This leads to an emphasis on using contemporaneous multi-wavelength (MWL)
observation campaigns to probe their underlying physics. It is generally agreed that
the VHE $\gamma$-ray emission from AGN is produced in their relativistic jet, in a compact region
near their central, supermassive black hole. However, some recent detections
show indications for VHE emission produced on larger and/or more distant scales
(see, e.g., \cite{VERITAS_1441}). The characteristic double-peaked SEDs observed from
AGN naturally suggest emission models where the low-energy peak is the
synchrotron radiation from a population of relativistic electrons in the relativistic jet, and the
high-energy photons are the products of inverse-Compton scattering from this population
(e.g. a synchrotron self-Compton (SSC) model as in \cite{VERITAS_0229}). While VHE emission models dominated by
leptonic particle-acceleration processes in the accretion-powered jets remain favored,
other models are still viable (see, e.g., \cite{PKS1424_model} and the discussion / references therein).
Blazars are AGN whose relativistic jets are pointed along the line of sight
towards Earth. They form the dominant class ($\sim$95\%) of VHE AGN.
Of the 78 VHE blazars, 67 ($\sim$86\%) are BL Lac objects,
8 are Flat Spectrum Radio Quasars (FSRQs) and 3 have uncertain sub-classification.
Of the remaining 6 VHE AGN, at least four are nearby ($z < 0.022$) FR-I radio galaxies,
and the radio-galaxy / blazar classification for another 2 remains debated. In any case,
the jets for these 6 objects are not strongly misaligned. The sub-classification of
BL Lac objects, based on the location of their lower-energy SED peak, is important.
Within the VHE catalog, 53 BL Lacs are high-frequency-peaked (HBLs), 10 are intermediate-frequency-peaked
(IBLs), 2 are low-frequency-peaked (LBLs) and 2 are unclassified. The VHE blazar catalog currently covers
a redshift range from $z = 0.030$ to $z = 0.954$, but at least
$\sim$20\% of the objects have unknown redshift and many catalog values are uncertain.
This is in large part due to the absence of optical features in the spectra of BL Lac objects.
Generally, the VHE AGN catalog is peaked at nearby redshifts (e.g., $\sim$55\% have $z < 0.2$)
but $\sim$10\% of the VHE objects have $z >0.5$ (primarily FSRQs). Aside from energetics
considerations, the major contributor to this redshift distribution is
attenuation of VHE photons in a distance- and energy-dependent manner by the
extragalactic background light (EBL) \cite{VERITAS_EBL}.
In general, the observational goals of the VERITAS AGN (radio galaxy and
blazar) Program are to discover new VHE AGN, and to make precision measurements
of VHE AGN spectra and their variability patterns. There are two empirical
qualities of the AGN population that drive the strategy behind the VERITAS
program. The first is that most VHE AGN have shown at
least some VHE flux variability. Indeed, major outbursts
are perhaps one of the defining characteristics of the VHE field
(see, e.g., \cite{VERITAS_421_flare}), and roughly one-third of VHE AGN
are only detected during brief flares. These flare-only AGN include
most of the non-HBL blazars. While extreme episodes may define the field, these
rapid (minute-scale), large-scale (factor of 100) flux
variations are very rare. Most VHE flux variations are mild (factor of 2-3)
and many have comparatively long time scales (e.g., an observing season).
In general the time-scale and strength of the variations observed
depends on the average VHE flux, but this may have biases (e.g., instrument
sensitivity, target selection, etc.) The VERITAS AGN
Program therefore attempts to identify and follow-up on VHE AGN
flares, guided in part by brighter than average VHE flux.
The other important empirical quality is that the observed photon spectra of
VHE AGN are often soft ($\Gamma_{avg} \sim 3.5$), and very few VHE
AGN are detected above 1 TeV. This is partly due to the high-energy SED peak
location, which may depend on the underlying physics,
and partly due to propagation effects (e.g., EBL absorption).
AGN with hard VHE spectra are particularly interesting, and the
VERITAS AGN Program also focuses on hard-spectrum VHE blazars
and on generating statistics above 1 TeV. A study of the spectra measured by VERITAS from
these VHE blazars can be found in these proceedings \cite{Feng_ICRC21}.
The VERITAS AGN Program strongly leverages contemporaneous MWL observations from numerous ground-
and space-based facilities. Swift XRT/UVOT, \emph{Fermi}-LAT, and the FLWO 48'' optical telescope form the core
of this regular lower-energy coverage, which exists for all Northern VHE AGN.
After an initial focus
on expanding the VHE AGN catalog from 2007-10, the VERITAS AGN program has since emphasized studies
of the known VHE AGN population with a goal of identifying and intensely
observing major flares. The target list has evolved but the general method
was to sample each selected VHE AGN on a regular cadence, thereby building high-statistics data sets while
searching for flares.
VERITAS AGN observations are timed to the MWL coverage to enable
fully-constrained modeling of each VHE AGN's SED (see, e.g., \cite{VERITAS_0229}).
From 2013-2018, the program sampled each Northern VHE AGN ($\sim$56 targets), but
it was recently streamlined to focus on more intense studies of select ($\sim$23) targets.
Our current focus is on the most variable VHE sources (i.e. IBLs), bright HBLs, and hard-spectrum HBLs.
This program has yielded detailed, decade-long MWL light curves for
numerous ($\sim$20) sources and significant coverage for the entire Northern population,
in addition to the identification of numerous VHE flares. The ongoing
sampling continues to improve these efforts, and the various long-term MWL light
curves will enable flux and spectral correlation studies that may indicate
commonalities in the origin of each AGN's emission. The large ensemble of
precision VHE AGN spectra continues to improve and has already proved
useful for generating a variety of cosmological measurements
such as constraints on the density of the EBL \cite{VERITAS_EBL}
and the strength of the intergalactic magnetic field (IGMF) \cite{VERITAS_IGMF}.
\begin{figure*}[t]
\centerline{ {\includegraphics[width=3.75in]{VERITAS_skymap.png} } }
\vspace{-0.3cm}
\caption{{\footnotesize The VERITAS catalog in Galactic
coordinates. The red circles show the 40 AGN detected using VERITAS; other astrophysical classes are shown with
different colors. The blue region is visible to VERITAS at
zenith angles $<$35$^{\circ}$. Most of the VERITAS AGN catalog is given in \cite{Benbow19}; only the new IBL B2\,1811+31 (see below) is missing.}}
\label{VERITAS_catalog}
\vspace{-0.3cm}
\end{figure*}
\section{VERITAS AGN Program}
VERITAS began full-scale operations in 2007 at the F. L. Whipple Observatory
in Arizona, USA. After fourteen years, it remains the world's most sensitive observatory between $\sim$85 GeV
and $\sim$30 TeV. VERITAS scientists use the array of four Cherenkov telescopes
to study the Northern sky during $\sim$10-month, monsoon-limited seasons (September $-$ July).
The present sensitivity was achieved following a series of
upgrades in 2012, and VERITAS can detect an object with $\sim$1\% Crab Nebula
flux (1\% Crab) in less than 25 hours. Photon spectra can be generated above
$\sim$100 GeV with typical systematic errors of $\sim$0.1 on the photon index
($\Gamma$) and $\sim$20\% on the flux.
The VERITAS collaboration has acquired $\sim$16,300 h of observations.
While successful, the past two seasons were challenging
due to the global pandemic. The 2019-20 season ended $\sim$4 months
early causing an estimated loss of $\sim$500 h of data ($\sim$700 h were acquired).
A rapid shift to "remote" operations largely enabled a full-scale observing program
in 2020-21, but it required pausing the bright-moon observing program
($\sim$1000 h acquired; only $\sim$40 h bright-moon data); this will resume in Fall 2021.
Excluding the pandemic-affected seasons, VERITAS averages $\sim$930 h / season
of good-weather observations during ``dark time'' (moon
illumination $<$30\%), and $\sim$200 h / season during periods of ``bright
moonlight'' (i.e. $>$30\% illumination). The bright-moon data has similar sensitivity to
dark-time observations, but has higher threshold (e.g. 250
GeV) \cite{BrightMoon_paper}.
AGN comprise 63\% of the VERITAS source
catalog (shown in Figure~\ref{VERITAS_catalog}), and their observations are the dominant
component of VERITAS data taking ($\sim$50\%). As of July 2021, AGN data comprise
a total of $\sim$5,900 h ($\sim$420 h per year) of good-weather dark time, $\sim$1,100 h ($\sim$140 h per year) of
good-weather, bright-moon time, and $\sim$1,800 h poor-weather (filler) observations.
Blazars are the dominant component of the AGN program, and the
good-weather dark time is historically split $\sim$90\% to blazars, primarily BL Lac objects,
and $\sim$10\% to radio-galaxies. From 2019$-$21, VERITAS acquired $\sim$670 h of good-weather
dark time on blazars and $\sim$100 h of good-weather dark time on radio galaxies, showing a larger
emphasis on radio galaxies. There were a further $\sim$90 h of good-weather, bright-moon time
acquired on blazars. After initially using 35$-$45\% of the bright-moon
time for blazar discoveries, almost all of this time ($\sim$90\% from 2019-21) is now used for
observing hard-spectrum BL Lac objects and searching for VHE AGN flares from targets not
in the regular VERITAS AGN monitoring program. The filler data is also used to search for AGN flares.
These supplemental flare monitoring programs have found several events not otherwise identified.
Much of the AGN observing program is based on regular monitoring of
selected objects in the Northern VHE catalog. The depth and cadence of
these observations are based on a variety of criteria and scientific goals.
These exposures aim both to self-identify VHE flaring episodes for immediate / intense target-of-opportunity (ToO)
follow-up including MWL partners, while simultaneously building deeper, legacy
exposures on particularly interesting objects. Figure~\ref{results_panel1} shows the distribution of exposures
already achieved by VERITAS for each Northern AGN. The monitoring cadence
ranges from daily to weekly, while the minimum sample duration will detect $\sim$10\% Crab
flux. While the general goal of the program is to identify long-lasting flare states for
intense campaigns, our experience shows that we are also
able to fortuitously catch short-duration, bright flares (e.g. BL\,Lac in 2017
\cite{BLLAC_2017}). Contemporaneous MWL data (e.g. Swift) are timed with the
VERITAS monitoring observations to both assist with triggering and to enable long-term modeling.
For the 2020-21 season, VERITAS monitored 23 VHE AGN (37\% of the Northern population).
While a major focus of the AGN program is performing deep / timely measurements of
known VHE sources, $\sim$35\% of the 2019-21 data were devoted to the
discovery of new VHE AGN. This included regular observations of
targets from a list of selected candidates, and ToO observations triggered by our partners.
Our discovery candidates include several AGN with a weak excess ($>$3$\sigma$) in
large, archival VERITAS exposures; these excesses continue to grow in limited annual exposures.
In addition, we continue a program to observe all targets from a comprehensive list of Northern discovery candidates
for at least 5 h. Only 11 objects require further observation, and most have at least some exposure.
The target list includes all the X-ray bright HBLs in the 2WHSP catalog
(``TeV Figure of Merit'' $>$ 1.0; \cite{2WHSP}), all the hardest
AGN in the {\it Fermi}-LAT $>$50 GeV catalog ($\Gamma_{2FHL} < 2.8$; \cite{2FHL_Catalog}),
all nearby ($z < 0.3$) LBLs from the MOJAVE sample with relatively high maximum apparent jet speed \cite{Lister},
and all targets from prior comprehensive efforts \cite{Benbow09,Benbow11}.
ToO observations are the highest priority of the VERITAS program.
These data comprised 12\% and 15\% of the blazar dark-time in the 2019-20 and
2020-21 seasons, respectively. Approximately 70\% of these data were
follow-up observations of flares in known VHE blazars, with the remainder
including attempts to discover and/or follow-up on new VHE blazars.
\begin{figure*}[t]
\centerline{{\includegraphics[width=1.75in]{B21811_SignificanceMap.png} }
\hfil
{\includegraphics[width=4.0in]{Source_Hist.png} }
}
\caption{{\footnotesize (Left) The preliminary sky map of the significance measured from
the direction of B2\,1811+31. (Right) Histogram of the total VERITAS exposure for every Northern VHE AGN; $\sim$50\%
already have more than 50 h.}}
\label{results_panel1}
\vspace{-0.2cm}
\end{figure*}
\section{Recent Highlights}
{\bf B2\,1811+31} is an IBL at redshift $z = 0.117$ that showed an elevated MeV-GeV flux ($\sim$11x brighter) and
harder gamma-ray spectrum ($\Gamma_{\mathrm LAT} \sim 1.4$ vs. 2.1) during \emph{Fermi}-LAT observations in Oct.
2020 (ATel \#14060). Following the discovery of VHE emission by MAGIC (Oct. 4-10; ATel \#14090),
and reports of enhanced optical activity (ATel \#14103), VERITAS observed the blazar from Oct. 15-19, 2020.
A preliminary analysis of these data ($\sim$5 h) yields a strong detection ($\sim$8 standard deviations, $\sigma$) and a soft
photon spectrum ($\Gamma$ = 4.1 $\pm$ 0.5). A sky map of the significance observed
near B2\,1811+31 is shown in Figure~\ref{results_panel1}. The VERITAS light curve
is consistent with a constant flux F($>$ 250 GeV) = $(1.10 \pm 0.18_{\mathrm stat}) \times 10^{-11}$ cm$^{-2}$ s$^{-1}$.
This is approximately 6\% of the Crab Nebula flux (Crab) above the same threshold, and is similar to
the flux reported by MAGIC.
{\bf H\,1426+428} ($z = 0.129$) is an extreme HBL with synchrotron peak located at 10$^{18.1}$ Hz \cite{2WHSP}.
It was routinely detected before 2002 with VHE fluxes between $\sim$5\% and $\sim$20\%
Crab. However, since that time the reported VHE fluxes have been significantly lower ($<$2-3\% Crab).
H\,1426+428 was observed with VERITAS in almost every season and a $\sim$200 h exposure exists.
A preliminary analysis of $\sim$82 h of good-quality data from 2008-2016 yields a 13$\sigma$ detection.
Although MWL observations show variations (e.g., the Swift XRT count rate varies by a factor of 3),
the VHE flux F($>$250 GeV) = (1.9 $\pm$ 0.2)\% Crab shows no signs of variability.
The preliminary VHE spectrum can be described
by a power law with $\Gamma = 2.8 \pm 0.1$. During 2021 monitoring observations,
H\,1426+428 was found to be in a bright VHE state. This triggered an intense VERITAS and MWL campaign
including Swift and NuStar. Overall, $\sim$45 h of VERITAS data were acquired. A preliminary analysis
of these data yields a $\sim$19$\sigma$ detection and a time-average spectrum ($\Gamma \sim 2.6$) consistent
with the long-term measurement. The preliminary flux,
F($>$250 GeV) = $(5.6 \pm 0.4_{\mathrm stat}) \times 10^{-12}$ cm$^{-2}$ s$^{-1}$,
or (3.3 $\pm$ 0.2)\% Crab, is steady throughout 2021 but higher than the 2008-16 average.
The preliminary light curve from the 2021 observations is shown in Figure~\ref{results_panel2}.
The steady VERITAS flux contrasts with the Swift-XRT monitoring, which indicates a high count rate
and significant variability.
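As a consistency check of the quoted Crab fractions (ours; the Crab integral flux above 250 GeV is not stated in the text, so the value below is an assumption):
\begin{verbatim}
# Crab fractions implied by the quoted integral fluxes.
CRAB_250 = 1.8e-10  # cm^-2 s^-1 above 250 GeV; assumed, not from the text
for name, flux in (("B2 1811+31", 1.10e-11), ("H 1426+428, 2021", 5.6e-12)):
    print(f"{name}: {100.0 * flux / CRAB_250:.1f}% Crab")
# -> ~6% and ~3%, consistent with the quoted values within uncertainties
\end{verbatim}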
\begin{figure*}[t]
\centerline{{\includegraphics[width=3.25in]{H1426_LC.png} }
\hfil
{\includegraphics[width=2.25in]{Mrk421_LC.jpg} }
}
\caption{{\footnotesize (Left) The preliminary nightly light curve from VERITAS
observations of H\,1426+428 in 2021. (Right) The VERITAS light curve from Mrk 421 on Feb. 17, 2010 \cite{VERITAS_421_flare}.
The dashed lines are exponential fits to the two bursts (2-min bins).
The resulting timescales are 84 and 22 min for doubling (rising), and 28 and 65 min for halving (decay).}}
\label{results_panel2}
\vspace{-0.2cm}
\end{figure*}
{\bf 3C\,264} is an FR-I type radio galaxy at $z =0.0216$. It was observed with VERITAS for
57 h from 2017-19, resulting in the 7.8$\sigma$ discovery of VHE emission \cite{VERITAS_3C264}.
Its VHE flux is variable on monthly timescales and was elevated in 2018.
The hard-spectrum ($\Gamma \sim 2.20$) VHE emission during 2018 has
F($>$315 GeV) = ($7.6\pm 1.2_{\mathrm stat} \pm 2.3_{\mathrm syst})\times 10^{-13}$ cm$^{-2}$ s$^{-1}$ (0.7\% Crab).
The elevated VHE state was thought to be similar to those observed
from the analogous source, M\,87, and extensive contemporaneous MWL data
were acquired in 2018, including high-resolution imaging data (e.g., Chandra, HST, VLA, VLBA). However, there was
no clearly identifiable source of the elevated flux. The 3C 264 SED
is unusual for a radio galaxy: its synchrotron peak lies at a relatively high frequency, near the X-ray band.
However, many aspects of the SED can be qualitatively reproduced with an SSC model using parameters typical
of BL Lacs. Comparing the SEDs of 3C\,264 and M\,87 shows differences that are plausibly
explained by 3C\,264 being oriented closer to the line of sight.
{\bf Mrk\,421} exhibited an extraordinary flaring episode in February 2010 \cite{VERITAS_421_flare}.
The VHE flux observed by VERITAS from this HBL during an intense MWL campaign
reached 27 Crab, the highest ever observed from an AGN. The light-curve from the brightest
night is shown in Figure~\ref{results_panel2} and enables detailed cross-correlation analyses.
Limits on the Doppler factor ($\delta \gtrapprox 33$)
and the size of the emission region ($R_{B} / \delta \lessapprox 3.8 \times 10^{13}$ cm)
are obtained from the fast variability. A lag (25-55 minutes) between the VHE and optical bands
is seen (3$\sigma$) for the first time on short time scales. The VHE and X-ray fluxes show a wide range of behavior, including linear and quadratic correlations, as well as anti-correlations. The MWL data are difficult to explain using a single-zone SSC model.
{\bf TXS\,0506+056} is an important object for multi-messenger astronomy (see \cite{0506_VERITAS} and references
therein). It was observed to have an
elevated VHE and MeV-GeV gamma-ray flux in spatial and temporal coincidence ($\sim$3$\sigma$) with
the IceCube high energy neutrino event IC170922A.
This could indicate that blazar jets accelerate cosmic rays to
at least several PeV, and are hence a source of VHE cosmic rays.
The association was also used to herald the birth of neutrino astronomy.
The initial VERITAS follow-up observations of the neutrino/blazar
($\sim$35 h from Sept. 2017 to Feb. 2018) led to a soft-spectrum ($\Gamma \sim 4.8$) VHE
detection (5.8$\sigma$),
albeit at lower VHE flux than detected by MAGIC. VERITAS has since carried out deep
observations of TXS\,0506+056 and an associated MWL campaign \cite{Jin_ICRC21}. A weak excess (3.4$\sigma$)
is observed in $\sim$61 h collected from Oct. 2018 to Feb. 2021.
Interpreting this excess as a detection, it corresponds to F($>$190 GeV) $\sim 0.5$\% Crab.
This is slightly lower than, but statistically consistent with, the prior VERITAS flux ($\sim$0.7\% Crab).
{\bf The TeV luminosity function} of HBLs is important because these AGN
dominate the extragalactic VHE sky and hence the total cosmic VHE radiation.
Its measurement (i.e. the number of HBLs per unit volume per unit luminosity)
is key to understanding HBL properties, their relationship with other sources,
and their contributions to unresolved radiation fields. This measurement is challenging
due to observational biases, but enables
studies of hadronic/neutrino production in jets, the IGMF, and AGN evolution.
A program \cite{Errando_ICRC21} was designed to minimize these biases by selecting
36 HBLs from the 3WHSP catalog, and measuring their VHE fluxes at times not weighted towards high fluxes.
These VERITAS observations are complete and leverage $\sim$1800 h of archival data and $\sim$150 h of
2019-21 data. Each target has at least 8 h of exposure ($\sim$1\% Crab sensitivity).
{\bf FSRQs} are generally detected at VHE energies during flaring states. VERITAS has detected
3 FSRQs, and most of its FSRQ observations are ToO-based. In 2020, VERITAS began the first
systematic search for VHE emission from FSRQs. Twelve objects were selected for
at least 8 hours of unbiased (non-ToO) observations based on the 3FHL catalog and/or
prior VHE detection. The data provide a sensitivity to fluxes of $\sim$1\% Crab.
Upper limits from the first 4 FSRQs (GB6\,J0043+3426, S3\,0218+35, PKS\,0736+17 and 3C\,454.3)
are described in \cite{Patel_ICRC21}. When complete, the survey will
provide the first constraints on the duty cycle of VHE emission from FSRQs.
\section{Conclusion}
As of July 2021, the VERITAS collaboration has acquired $\sim$7,000 good-weather hours targeted on AGN.
Since \emph{ICRC 2019}, the array was used to acquire $\sim$860 h of these observations.
Unfortunately, each of the past two seasons was affected by the global
pandemic, with yields reduced by a $\sim$4-month observatory
closure in 2019-20, and a temporary suspension of bright-moon observing in 2020-21.
Overall, the 2019-21 AGN yield was $\sim$25\% below the two-year average.
The VERITAS AGN program continued to focus on deep, regular VHE and MWL monitoring
of known VHE AGN, and immediate and intense ToO follow-up of interesting
flaring events. We also maintained a robust discovery program, with
$\sim$35\% of our most recent observations having a discovery focus.
Highlights from our recent AGN observations and publications include the
observation of flares from B2\,1811+31, H\,1426+428, 3C\,264 and Mrk\,421.
The VERITAS array is now $\sim$14 years old and continues to run well.
Indeed, the past four seasons each rank among the five best for various technical
performance benchmarks (e.g., fraction of data with all telescopes operational).
The collaboration plans to operate VERITAS through at least 2025, and will
continue prioritizing AGN observations. Given the array's strong technical performance,
we expect the long VERITAS tradition of producing exciting AGN results to continue.
\vspace{0.2cm}
\footnotesize{This research is supported by grants from the U.S. Department of Energy Office of Science, the U.S. National Science Foundation and the Smithsonian Institution, by NSERC in Canada, and by the Helmholtz Association in Germany. This research used resources provided by the Open Science Grid, which is supported by the National Science Foundation and the U.S. Department of Energy's Office of Science, and resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility operated under Contract No. DE-AC02-05CH11231. We acknowledge the excellent work of the technical support staff at the Fred Lawrence Whipple Observatory and at the collaborating institutions in the construction and operation of the instrument.}
\section{Introduction}
Determining the physical processes that drive the growth of both galaxies and their supermassive black holes (SMBHs) is a key goal of current observational and theoretical work \citep[see][for a review]{heckmanbest14}. An increasing body of evidence shows that galaxy growth mainly occurs through `secular processes' rather than by mergers. For example, \cite{kaviraj13} show that only $27\%$ of star formation is triggered by major or minor mergers at $z\sim2$, the peak of both star formation and black hole accretion activity. In addition, \cite{parry09} find in the Millennium simulations that only $\sim 35\%$ of bulge mass is built by mergers, with the majority built through disk instabilities (triggered through interactions with nearby satellites). Similarly, many recent results have pointed to secular processes as the main driver of SMBH growth. For example, \cite{martin18} showed that in their hydro-dynamical simulations (RAMSES) only $35\%$ of the cumulative growth of SMBHs since $z\sim3$ could be attributed to mergers, both major and minor. Similarly, \cite{mcalpine20} found in the EAGLE simulations that galaxy mergers do not induce a significant amount of black hole growth yet do increase the rate of luminous AGN, concluding that on average no more than $15$\% of a SMBH's mass at $z\sim0$ comes from the enhanced accretion rates triggered by a merger.
These results, among others both observational and theoretical, challenge the long accepted paradigm whereby mergers are responsible for the correlations between SMBHs and bulges, such as velocity dispersion and bulge mass \citep{magorrian98, haringrix04, vdb16, batiste17, davis19}. Galaxies which have evolved via mergers are easily recognisable, as mergers are able to redistribute angular momentum in galaxy systems, transferring stars from rotation-supported orbits to pressure-supported orbits in a central bulge, similar to an elliptical galaxy. While there is also an increasing number of simulations finding that a disk can reform post-gas rich merger \citep{hopkins09c, sparre17, pontzen17,peschken20, jackson20}, a significant bulge component still forms even in a minor merger \citep[i.e. when the mass ratio in the merger exceeds $10:1$;][]{walker96, hopkins12, tonini16, stevens16}. Therefore, galaxies with little, to no bulge, can be assumed to have merger-free (and interaction-free) histories, at least since $z\sim2$ \citep{martig12}. The growth of both the galaxy and the SMBH in such systems, will have been dominated by non-merger processes alone.
\citet*[hereafter SSL17]{ssl17} calculated the masses of SMBHs powering such a sample of $101$ disk-dominated AGN and showed that they were over-massive (by up to $\sim2$ dex) compared with what would be expected from the black hole-bulge mass relation of \citet{haringrix04}. However, SSL17 also found that their disk-dominated AGN still lay on the total stellar mass-SMBH mass relation. This result suggested that secular processes were able to grow a SMBH at rates higher than previously thought.
\citet[hereafter S19]{smethurst19b} investigated these possible growth rates by measuring the $\mathrm{\left[ O \textsc{iii}\right] }$~outflow rates in $12$ disk-dominated galaxies using narrowband imaging from the Shane-3m telescope at the Lick Observatory. Under the assumption that the inflow rate to the AGN will be at least equal to the sum of the outflow rate and the SMBH accretion rate, S19 found that the inflow rates they inferred could be achieved by non-merger processes, including funnelling of gas by bars and spiral arms, and cold accretion from the surrounding galaxy halo. However, this work was limited by the inability to adequately distinguish between gas ionised by the AGN outflow and star formation within the galaxy, and the subtraction of the central AGN PSF (leading to an overestimate and underestimate of the outflow rate respectively).
In this work, we aim to measure the outflow rates in 4 of the galaxies observed in S19 using spectral observations taken with the Keck Cosmic Web Imager (KCWI). High spectral resolution observations allow for the narrow component in $\mathrm{\left[ O \textsc{iii}\right] }$~(ionised by star formation or the central AGN) to be isolated from the broad component in $\mathrm{\left[ O \textsc{iii}\right] }$~(assumed to be ionised by the AGN outflow). This allows us to derive the outflow rate in these systems more accurately than in the previous study of S19. By using a sample of galaxies where we can be sure that secular processes dominate, we can isolate the merger-free growth path and understand the limitations to merger-free SMBH growth.
In the rest of this work we adopt the Planck 2015 \citep{planck16} cosmological parameters with $(\Omega_m, \Omega_{\lambda}, h) = (0.31, 0.69, 0.68)$ and any emission or absorption features referred to are in the Lick system. All uncertainties on calculated values are determined in quadrature, and all uncertainties on quoted mean values are the standard error on the mean. In Section~\ref{sec:sample} we discuss our sample selection and in Section~\ref{sec:obs} we describe our observations. In Section~\ref{sec:data} we describe our data reduction and analysis process, including how we determine the outflow rates in these systems. In Section~\ref{sec:results} we state our results and discuss their implications in Section~\ref{sec:discussion}. Finally, we summarise our conclusions in Section~\ref{sec:conc}.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{HST_images_KCWI_targets_SDSS_scale_contrast.png}
\caption{\emph{HST} ACS WFC postage stamp images of the $4$ disk-dominated AGN observed with KCWI. North is up and a stretch of 0.55 ($Q=12$) is applied. In each image the HST filter is noted. The AGN can be seen as a bright point source in the centre of each image, which we assume is powered by merger-free processes due to the disk-dominated morphology of these sources. The white bars show $1~\rm{kpc}$ for scale in each panel.}
\label{fig:hsttargets}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{kcwi_summed_all_targets.png}
\caption{Total integrated flux across the IFU data cubes of the $4$ disk dominated AGN observed with KCWI. North is up and an arcsinh stretch is applied. The bar features seen in the HST images in Figure~\ref{fig:hsttargets} can be seen, however spiral arm detail is only apparent for Harry and Neville. The black bars show $1~\rm{kpc}$ for scale along each dimension in each panel.}
\label{fig:kcwitargets}
\end{figure*}
\section{Sample and observations}\label{sec:two}
\subsection{Sample Selection}\label{sec:sample}
We observed four disk-dominated galaxies with KCWI at the Keck Observatory, Hawai'i, USA, on the 13th December 2018. These were selected from a larger well-studied sample of $101$ disk-dominated galaxies with luminous, unobscured Type 1 AGN first identified in SSL17 ($\langle z \rangle = 0.129$). This parent sample was constructed from galaxies in the SDSS \citep{york00} Data Release 8 \citep{aihara11} imaging sample cross-matched with sources identified by \citet{edelson12} using multi-wavelength data from the Wide-field Infrared Survey Explorer \citep[WISE;][]{wright10}, Two Micron All-Sky Survey \citep[2MASS;][]{skrutskie06}, and ROSAT all-sky survey \citep[RASS;][]{voges99}. The disk-dominated morphologies were assigned by expert review of the SDSS imaging (see \citealt{simmons13} and SSL17), and were all later confirmed using images from an \emph{HST} snapshot survey with broadband imaging using ACS WFC (programme ID HST-GO-14606, PI: Simmons), which were reduced using the standard pipeline. \emph{HST} images showing the disk-dominated nature of our four targets, including spiral arms and bar features, along with the bright point source of the unobscured central AGN, are shown in Figure~\ref{fig:hsttargets}. Black hole masses for this sample were originally estimated by SSL17 using the relation between black hole mass and the FWHM and luminosity in the broadened $H\alpha$ emission line from \cite{greene05}.
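As a rough sketch of this virial estimate, the snippet below uses the normalisation and exponents commonly quoted for the \cite{greene05} H$\alpha$ calibration; these constants are an assumption here (consult \citealt{greene05} for the exact calibration and its uncertainties), and the input luminosity and FWHM are illustrative placeholders, not measurements of our targets.
\begin{verbatim}
# Sketch of a virial H-alpha black hole mass estimate in the style of
# Greene & Ho (2005). The normalisation (2.0e6) and exponents (0.55,
# 2.06) are assumed calibration constants; inputs are placeholders.
import math

def mbh_halpha(L_halpha, fwhm_kms):
    # M_BH [Msun] from broad H-alpha luminosity [erg/s] and FWHM [km/s]
    return 2.0e6 * (L_halpha / 1e42) ** 0.55 * (fwhm_kms / 1e3) ** 2.06

print(f"log10(M_BH/Msun) ~ {math.log10(mbh_halpha(1e41, 2000.0)):.2f}")
\end{verbatim}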
$58$ galaxies within this sample showed broadened blueshifted $\mathrm{\left[ O \textsc{iii}\right] }$~components in their SDSS $3"$ fibre spectra. From this detection of a blueshifted component in the spectra we know that there is \emph{some} outflowing material from the AGN within the $3"$ diameter central SDSS fibre, however this may not capture the full luminosity or extent of the outflow. The $12$ brightest galaxies in the blueshifted $\mathrm{\left[ O \textsc{iii}\right] }$~$5007\rm{\AA}$ spectral component were observed using narrowband filters on the Shane-3m telescope from 12-14th May 2018 at the Lick Observatory, California, USA. The results of this work are described in S19. We then selected $4$ of these targets to observe with KCWI; Harry, Padma, Neville and Theodore (continuing the naming convention used in S19; see Table~\ref{table:coords} for more details). These targets were visible from Mauna Kea in December 2018 and had an appropriate redshift to ensure $\mathrm{\left[ O \textsc{iii}\right] }$~was in the wavelength range of KCWI.
\subsection{KCWI observations}\label{sec:obs}
\rowcolors{1}{lightgray}{}
\begin{table}
\centering
\caption{Co-ordinates of the four disk-dominated AGN hosts observed with KCWI. }
\label{table:coords}
\begin{tabular}{lcccc}
\hline
Name & SDSS name & RA & Dec & z \\
\hline
Harry & J0813+5422 & 123.350 & 54.377 & 0.043 \\
Padma & J1012+1017 & 153.161 & 10.289 & 0.070 \\
Neville & J1034+3938 & 158.661 & 39.641 & 0.043 \\
Theodore & J1314+4218 & 198.715 & 42.305 & 0.073 \\
\hline
\end{tabular}
\justify
\end{table}
We observed the $4$ disk-dominated AGN host galaxies listed in Table~\ref{table:coords} using KCWI at the Keck Observatory on Mauna Kea, Hawai'i, USA during dark time over half the night of the 13th December 2018. The weather was clear and the resultant seeing was $1.1''$.
Our observational setup was determined by the combination of our need for a large field of view, high spectral resolution to resolve the emission lines of interest ($\mathrm{\left[ O \textsc{iii}\right] }$~and $\rm{H}\beta$), and spectral bandpass coverage wide enough to allow for good continuum measurements for continuum subtraction. We used KCWI's blue camera with the `KBlue' filter. The field of view was $33'' \times 20''$, with a pixel scale of $[0.30, 0.68]~''/\rm{pixel}$ using $2\times2$ binning. Using KCWI's large slicer allowed us to cover the full extent of all the galaxies in a single pointing. We used the BH3 grating, which allowed us to cover both $\mathrm{\left[ O \textsc{iii}\right] }$~and $\rm{H}\beta$~with a spectral resolution of $R = 4500$, suitable for tracing the high-velocity line emission in these sources. The targets were bright enough that we were not significantly affected by the somewhat reduced throughput of the BH3 grating.
Three targets (Harry, Padma \& Neville) were observed for $2,700$ seconds ($45$ minutes), with Theodore observed for $3,600$ seconds ($60$ minutes) to ensure a signal-to-noise ratio (SNR) of at least 10 for each target in the $\mathrm{\left[ O \textsc{iii}\right] }$~emission. An inspection of the data cubes reveals that this SNR was exceeded for each target.
\section{Data Reduction \& Analysis}\label{sec:data}
\subsection{KCWI data reduction}\label{sec:datared}
Each KCWI raw data cube was reduced using the Keck Data Reduction Pipeline (KeckDRP) written in IDL\footnote{Note that a \emph{Python} Data Reduction Pipeline is now available for Keck data; see \url{https://kcwi-drp.readthedocs.io}}. The pipeline has 8 stages: a basic CCD reduction (bias and overscan subtraction, gain-correction, trimming and cosmic ray removal), dark subtraction, geometric transformation, flat-field correction, sky subtraction, data cube generation, atmospheric refraction correction and a flux calibration. The standard stars used for flux calibration were G191-B2B and Feige 34. The total integrated flux across the data cubes for each of the four targets is shown in Figure~\ref{fig:kcwitargets}.
\begin{figure*}
\centering
\includegraphics[width=0.985\textwidth]{Harry_ifscube_brightest_spaxel_fit_z.png}
\includegraphics[width=0.985\textwidth]{Padma_ifscube_brightest_spaxel_fit_z.png}
\includegraphics[width=0.985\textwidth]{Neville_ifscube_brightest_spaxel_fit_z.png}
\includegraphics[width=0.985\textwidth]{Theodore_ifscube_brightest_spaxel_fit_z.png}
\caption{The spectrum (black) and fit (red) to the brightest, central spaxel for each source. The individual components for each emission line are shown by the coloured lines (offset to 0). Each source was fitted with 2 components {\color{referee}(narrow in green and broad in blue)} for the $\mathrm{\left[ O \textsc{iii}\right] }$~$4958\rm{\AA}$ and $5007\rm{\AA}$ emission lines, and with 3 components (narrow in blue, broad in green, and broad line region in magenta) for the $\rm{H}\beta$~emission line. Note that only Harry and Padma needed all three H$\beta$ components to fit to the brightest, central spaxel. The residual between the spectrum and the fit is shown below, with the $\chi^2$ value and corresponding p-value for a model with 21 degrees of freedom (amplitude, velocity and velocity dispersion for each of the 7 components). }
\label{fig:specfits}
\end{figure*}
\subsection{Spectral fitting}\label{sec:specfit}
Once the reduced data cubes were obtained using the KeckDRP, we used the \emph{Python} module \texttt{ifscube}\footnote{\url{https://ifscube.readthedocs.io/}} to fit spectral features in the wavelength range probed by KCWI. {\color{referee} Systemic velocities were first determined using the peak of the $\rm{H}\beta$~emission in the central spaxel pre-decomposition\footnote{{\color{referee} Upon inspection of the final fits, the peak of the overall $\rm{H}\beta$~emission in the central spaxel coincided with the peak of the narrow $\rm{H}\beta$~emission, see Figure~\ref{fig:specfits}}}, since stellar absorption lines were not available to us due to the Type 1 AGN nature of these systems (\citealt{RW18} show how $\rm{H}\beta$~is a good proxy for stellar absorption lines with an average velocity shift of $-9^{+41}_{-45}~\rm{km}~\rm{s}^{-1}$).} Initially the flux, velocity and velocity dispersion of H$\beta$, $\mathrm{\left[ O \textsc{iii}\right] }$~$4958\rm{\AA}$ and $5007\rm{\AA}$ were fitted with two components each, with one component required to have a broader velocity dispersion. After inspection of the spectra and the initial spectral fits, it was apparent that the central $H\beta$ emission was dominated by emission from the broad line region (BLR) of the AGN, and that the H$\beta$ and $\mathrm{\left[ O \textsc{iii}\right] }$~narrow components were not kinematically linked, suggesting that the narrow $\mathrm{\left[ O \textsc{iii}\right] }$~emission was ionised by the central AGN alone, rather than extended star formation in each source.
We therefore repeated the fits with three components for $H\beta$ (narrow, broad which was kinematically tied to the broad $\mathrm{\left[ O \textsc{iii}\right] }$~components, and a BLR) and once again two components each for $\mathrm{\left[ O \textsc{iii}\right] }$~$4958\rm{\AA}$ and $5007\rm{\AA}$ (narrow and broad), with the narrow $\mathrm{\left[ O \textsc{iii}\right] }$~components no longer kinematically tied to the narrow $H\beta$ component. The BLR component is also not kinematically tied to the narrow $H\beta$ component. The fits to the central spaxel for each source are shown in Figure~\ref{fig:specfits}, clearly showing the need for a BLR $H\beta$ component along with the obvious blueshifted outflows in $\mathrm{\left[ O \textsc{iii}\right] }$. Only Harry and Padma (top panels of Figure~\ref{fig:specfits}) needed three components in H$\beta$ (narrow, BLR and outflow) in the central spaxel.
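For illustration, the sketch below reproduces the essence of this two-component decomposition in \emph{Python}, using \texttt{scipy} rather than \texttt{ifscube} (which performed the actual fits); the synthetic spectrum and all line parameters are illustrative placeholders, not values fitted to our data.
\begin{verbatim}
# Minimal sketch: narrow + broad (outflow) Gaussian decomposition of
# [OIII] 5007A for a single spaxel. All values are placeholders.
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 2.998e5  # speed of light [km/s]

def gaussian(wave, amp, centre, sigma_kms):
    # Gaussian line profile with the width specified in km/s
    sigma_aa = sigma_kms * centre / C_KMS
    return amp * np.exp(-0.5 * ((wave - centre) / sigma_aa) ** 2)

def model(wave, a_n, v_n, s_n, a_b, v_b, s_b, rest=5006.84):
    # Narrow component plus a broad component free to be blueshifted
    c_n = rest * (1.0 + v_n / C_KMS)
    c_b = rest * (1.0 + v_b / C_KMS)
    return gaussian(wave, a_n, c_n, s_n) + gaussian(wave, a_b, c_b, s_b)

# Synthetic spaxel spectrum: narrow core plus a blueshifted broad wing
wave = np.linspace(4980.0, 5035.0, 400)
flux = model(wave, 1.0, 0.0, 60.0, 0.3, -300.0, 400.0)
flux += np.random.normal(0.0, 0.02, wave.size)

# Bounds force the second component to be the broader one
p0 = (0.8, 0.0, 80.0, 0.2, -200.0, 300.0)
bounds = ([0, -500, 30, 0, -1500, 150], [10, 500, 150, 10, 500, 2000])
popt, _ = curve_fit(model, wave, flux, p0=p0, bounds=bounds)
print("narrow (amp, v, sigma):", popt[:3])
print("broad  (amp, v, sigma):", popt[3:])
\end{verbatim}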
Note since these are Type 1 AGN we only expect to detect a blueshifted outflow component due to the effects of dust \citep{fischer13, muller11, baewoo14}. Indeed \cite{RW18} found that blueshifted $\mathrm{\left[ O \textsc{iii}\right] }$~is more frequently detected than redshifted $\mathrm{\left[ O \textsc{iii}\right] }$~by a factor of $3.6$ in Type 1 AGN (as opposed to a factor of 1.08 for Type 2 AGN) due to projection and orientation effects.
The integrated flux, velocity and velocity dispersion of the narrow H$\beta$ emission are shown in Figure~\ref{fig:hbetafour} {\color{referee} with the top panels showing some of the structure in each system}. In Figure~\ref{fig:oiiinfour}, the integrated flux, velocity and velocity dispersion of the narrow $\mathrm{\left[ O \textsc{iii}\right] }$~component is shown, assumed to be ionised by the central AGN (although note that Neville does show some extended narrow $\mathrm{\left[ O \textsc{iii}\right] }$~emission presumably due to star formation along a spiral feature). Similarly, Figure \ref{fig:oiiiwfour} shows the integrated flux, velocity and velocity dispersion of the broad $\mathrm{\left[ O \textsc{iii}\right] }$~components, assumed to be ionised by the AGN outflow.
\begin{figure*}
\centering
\includegraphics[width=0.99\textwidth]{kcwi_all_targets_hbeta_flux_v_sigma_z.png}
\caption{The fit to the H$\beta$ narrow emission for the four targets observed with KCWI, showing the integrated flux (top; with an arcsinh stretch), velocity (middle; relative to the systemic velocity) and velocity dispersion, $\sigma$ (bottom). Pixels are masked if the flux is below 3 standard deviations. Note that the KCWI spectral resolution (and therefore the minimum resolvable $\sigma$ value) is $\sim60~\rm{km}~\rm{s}^{-1}$. The bars show $1~\rm{kpc}$ in each panel for scale.}
\label{fig:hbetafour}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.99\textwidth]{kcwi_all_targets_oiii_narrow_flux_v_sigma_z.png}
\caption{The fit to the narrow $\mathrm{\left[ O \textsc{iii}\right] }$~emission for the four targets observed with KCWI, showing the integrated flux (top; with an arcsinh stretch), velocity (middle; relative to the systemic velocity) and velocity dispersion, $\sigma$ (bottom). Pixels are masked if the flux is below 3 standard deviations. Note that the KCWI spectral resolution (and therefore the minimum resolvable $\sigma$ value) is $\sim60~\rm{km}~\rm{s}^{-1}$. The bars show $1~\rm{kpc}$ in each panel for scale.}
\label{fig:oiiinfour}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.99\textwidth]{kcwi_all_targets_oiii_broad_flux_v_sigma_z.png}
\caption{The fit to the broadened $\mathrm{\left[ O \textsc{iii}\right] }$~emission for the four targets observed with KCWI, showing the integrated flux (top; with an arcsinh stretch), velocity (middle; relative to the systemic velocity) and velocity dispersion, $\sigma$ (bottom). Note that the KCWI spectral resolution (and therefore the minimum resolvable $\sigma$ value) is $\sim60~\rm{km}~\rm{s}^{-1}$. The bars show $1~\rm{kpc}$ in each panel for scale; note the difference in scale to Figures~\ref{fig:hbetafour} \&~\ref{fig:oiiinfour}. Pixels are masked if the flux is below 3 standard deviations. In the top panels, the blue cross denotes the brightest point in the $\mathrm{\left[ O \textsc{iii}\right] }$~narrow emission flux. For Padma, the position of the brightest outflow ionised emission is offset from the position of the brightest narrow emission ionised by the central AGN (marked by the blue cross). Note that the KCWI spatial resolution, combined with the ground based seeing, limits any further conclusions on the geometry or morphology of the outflows in these systems.}
\label{fig:oiiiwfour}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.99\textwidth]{kcwi_ratio_oiii_narrow_hbeta_narrow_z.png}
\includegraphics[width=0.99\textwidth]{kcwi_ratio_oiii_broad_hbeta_broad_z.png}
\caption{The ratio of narrow (top) and broad (bottom) $\mathrm{\left[ O \textsc{iii}\right] }$/H$\beta$ emission for each target. Note the change of scale between the two rows; the scale bars show $1\rm{kpc}$ in each panel. Here we use only the flux from the broad $H\beta$ component ionised by the outflow, and not from the $H\beta$ BLR component in these plots (note that the central spaxels for Neville and Theodore do not have outflow ionised H$\beta$ emission; see Figure~\ref{fig:specfits}). The colour bars are scaled between the typical ranges on a BPT diagram; star formation ionised emission typically has $\log_{10} [OIII]/H\beta \mathrel{\hbox{\rlap{\hbox{\lower3pt\hbox{$\sim$}}}\hbox{\raise2pt\hbox{$<$}}}} 0$ and AGN ionised emission typically has $\log_{10} [OIII]/H\beta \gtrsim 0$ {\color{referee} \citep{kewley01, kewley06}}. All of our sources have high broad $\mathrm{\left[ O \textsc{iii}\right] }$/$\rm{H}\beta$~values, indicating that the outflows are ionized by the AGN.}
\label{fig:ratios}
\end{figure*}
\subsection{Calculating \textsc{[OIII]} outflow rates}\label{sec:calcgasmass}
The fluxes shown in Figure~\ref{fig:oiiiwfour} enable a measurement of the outflow luminosity, $L$$\mathrm{\left[ O \textsc{iii}\right] }$~(knowing the redshift of each target), which can then be used to calculate a gas mass in the outflow following the method outlined in \cite{carniani15}:
\begin{multline}\label{eq:carni}
M_{\rm{[OIII]}} = 0.8 \times 10^8~M_{\odot} ~\times \\ \left( \frac{C}{10^{[O/H] - [O/H]_{\odot}}} \right) \left( \frac{L[\rm{O}\textsc{iii}]}{10^{44}~\rm{erg}~\rm{s}^{-1}} \right) \left( \frac{n_e}{500~\rm{cm}^{-3}} \right)^{-1}
\end{multline}
where $n_e$ is the electron density, $[O/H] - [O/H]_{\odot}$ is the metallicity relative to solar, and $C = \langle n_e \rangle^2 / \langle n_e^2 \rangle$. Here $\langle n_e \rangle^2$ is the square of the volume-averaged electron density and $\langle n_e^2 \rangle$ is the volume average of the squared electron density. This method requires some simplifying assumptions regarding the nature of the outflowing gas, particularly the temperature, metallicity and density of the gas. The largest source of uncertainty when determining the mass outflow rate is the electron density, $n_e$. Typically, the $\mathrm{\left[ S \textsc{ii}\right] }$~emission is used to determine $n_e$ (although see \citealt{davies20} for a study showing that $\mathrm{\left[ S \textsc{ii}\right] }$~underestimates $n_e$), however the wavelength of $\mathrm{\left[ S \textsc{ii}\right] }$~is not probed by KCWI for these four targets.
However, there is no general agreement on the best value of
$n_e$ to use, with conflicting estimates across the literature for AGN at different redshifts. The long assumed value of $n_e = 100~\rm{cm}^{-3}$ has recently been challenged by \citet[][$700 < n_e < 3000~\rm{cm}^{-3}$]{perna17} and \citet[][$n_e \sim 10^5~\rm{cm}^{-3}$]{villar15}. Recent IFU studies have shown that $n_e$ can also vary spatially across a galaxy, for example \cite{mingozzi19} find a wide range of electron densities from $50-1000~\rm{cm}^{-3}$, with regions of high density concentrated in localized regions (which then dominate the total flux), while the rest of the regions in the galaxy have a much lower electron density. In the outflows themselves, \cite{mingozzi19} find a median $n_e\sim250~\rm{cm}^{-3}$. This is an issue which plagues all such studies on AGN outflows since assuming a larger value of $n_e$ can lead to an underestimate of the gas mass present and vice versa. We chose to use $n_e = 500~\rm{cm}^{-3}$ in order to be consistent with \cite{carniani15}. {\color{referee} However, we note that taking the extremes in $n_e$ found by \citet[][$50-1000~\rm{cm}^{-3}$]{mingozzi19} in comparison to the $n_e=500~\rm{cm}^{-3}$ value we use in this study, would result in outflow values either 10 times larger ($n_e =50~\rm{cm}^{-3}$) or two times smaller ($n_e=1000~\rm{cm}^{-3}$). In the absence of spatially resolved information of the electron densities for the 4 galaxies in this study, using an average value of $n_e=500~\rm{cm}^{-3}$ is therefore a reasonable choice.}
We also assume a solar gas metallicity, $[O/H] = [O/H]_{\odot}$. Since we are assuming a single value of $n_e$ and solar metallicity, the first term of Equation~\ref{eq:carni} reduces to unity. Note we do not include an uncertainty on $n_e$ when calculating an error on $M_{\rm{gas}}$ (or for the geometry of the system or volume filling factor), we propagate only the background noise and Poisson noise from the total flux (estimated using the {\tt photutils.calc\_total\_error} function\footnote{\url{https://photutils.readthedocs.io/}}).
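A minimal sketch of the gas mass calculation of Equation~\ref{eq:carni}, under the simplifying assumptions above (solar metallicity and $C=1$), is given below; the input luminosity is of the order of Padma's tabulated value (see Table~\ref{table:rates}) and is used purely to illustrate the inverse scaling with $n_e$ discussed above.
\begin{verbatim}
# Sketch of Equation 1 (Carniani et al. 2015); with solar metallicity
# and C = 1 the first bracketed term reduces to unity.
import math

def m_oiii(L_oiii, n_e=500.0, C=1.0, metallicity_dex=0.0):
    # Ionised gas mass [Msun] from the broad [OIII] luminosity [erg/s]
    return (0.8e8 * (C / 10.0 ** metallicity_dex)
            * (L_oiii / 1e44) / (n_e / 500.0))

# Inverse scaling with electron density, for log10 L[OIII] = 42.2:
for n_e in (50.0, 500.0, 1000.0):
    mass = m_oiii(10.0 ** 42.2, n_e=n_e)
    print(f"n_e = {n_e:6.0f} cm^-3 -> log10(M/Msun) = {math.log10(mass):.2f}")
\end{verbatim}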
We also investigate the kinematics of the outflow, including its velocity. The velocities and velocity dispersions measured for the broad $\mathrm{\left[ O \textsc{iii}\right] }$~component (shown in Figure~\ref{fig:oiiiwfour}) only account for the velocity of the outflow along the line of sight, {\color{referee} whereas in reality the outflows will have a spread of observed radial velocities that will be lower than the actual bulk velocity of the outflow. The actual outflow velocity across 3 dimensions is best approximated by the most blueshifted velocity in the observed velocity distribution \citep{leung19}. A common parameter used to measure this bulk velocity of the outflow is the maximum velocity, $v_{\rm{[OIII]}}$, determined as:
\begin{equation}\label{eq:velocity}
v_{\rm{[OIII]}} = |\Delta v_{\rm{max}}| + 2\sigma_{\rm{broad,[OIII],max}},
\end{equation}
where $|\Delta v_{\rm{max}}|$ is the maximum difference in the velocity of the narrow and broad $\mathrm{\left[ O \textsc{iii}\right] }$~components and $\sigma_{\rm{broad,[OIII],max}}$ is the maximum velocity dispersion of the broad $\mathrm{\left[ O \textsc{iii}\right] }$~component. The relation in Equation~\ref{eq:velocity} is defined by the properties of a normal distribution which is used to model the emission line velocity profiles \citep[see][]{rupke13}}. Not taking into account the line-of-sight effects on the velocity will result in an underestimate of the mass outflow rate (see Equation~\ref{eq:outflow}).
The physical extent of the outflow is also a key measurement for determining the scale over which these outflows will impact on the galaxy. We calculated the extent, $\rm{r}_{\rm{max}}$, as the most distant spatial extent of the broadened emission away from the central AGN (assumed to be the brightest pixel in the flux of the integrated $\mathrm{\left[ O \textsc{iii}\right] }$~narrow emission shown in Figure~\ref{fig:oiiinfour}, with the location highlighted by the blue crosses in the top panels of Figure~\ref{fig:oiiiwfour}). We deconvolved our estimate of $\rm{r}_{\rm{max}}$ using an estimate of the seeing from observations of the standard star Feige 34. Not performing such a deconvolution results in an overestimate of the maximum physical extent and therefore an underestimate of the mass outflow rate (see Equation~\ref{eq:outflow}).
Combining the velocity and physical extent allows for a calculation of the timescale of the outflow:
\begin{equation}\label{eq:timescale}
t_{\rm{out}}~[\rm{yr}] = \bigg( \frac{\rm{r}_{\rm{max}}}{\rm{km}} \bigg) \bigg( \frac{\rm{v}_{\rm{[OIII]}}}{\rm{km}~\rm{yr}^{-1}} \bigg)^{-1} .
\end{equation}
The mass outflow rate is then calculated in the following way:
\begin{equation}\label{eq:outflow}
\bigg(\frac{\dot{\rm{M}}_{\rm{out}}}{\rm{M}_{\odot}~\rm{yr}^{-1}} \bigg) = B \bigg( \frac{\rm{M}_{[OIII]}}{\rm{M}_{\odot}} \bigg) \bigg( \frac{\rm{t}_{\rm{out}}}{\rm{yr}} \bigg)^{-1}.
\end{equation}
Note that this method assumes that the outflow rate is constant over the time that the outflow has been active, $t_{\rm{out}}$. A factor $B$ between $1$ and $3$ is typically applied to account for the geometry of the outflows \citep{harrison18}. For example, for a spherical outflow a factor of $B=3$ would be employed, whereas a biconical outflow covering only 1/3 of a sphere would need a factor of $B=1$. Given that our AGN host galaxies are disk-dominated and are assumed to be feeding the AGN through secular processes from the disk, along the same angular momentum vector, we presume the outflow will not be spherical (see S19 and \citealt{npk12}) and therefore use a conservative value of $B=1$ throughout this work. This assumption may result in an underestimate of the outflow rate in these systems.
\rowcolors{1}{lightgray}{}
\begin{table*}
\centering
\caption{Properties of the 4 disk-dominated AGN with outflow rates calculated from the extent and flux of $\mathrm{\left[ O \textsc{iii}\right] }$~in spectral observations taken with KCWI. {\color{referee} We list black hole masses, $\log_{10}$ $[\rm{M}_{\rm{BH}}$/$\rm{M}_{\odot}]$, the $\mathrm{\left[ O \textsc{iii}\right] }$~luminosity of the broad outflow component, $\log_{10}$ $[\rm{L}_{\rm{OIII}}$/$\rm{erg}~\rm{s}^{-1}]$, the Eddington ratio of the AGN, $\lambda_{\rm{Edd}}$, the accretion rate of the AGN, $\dot{m}$ (see Equation~\ref{eq:bhmdot}), the mass in the outflow, $[\rm{M}_{\rm{OIII}}$/$\rm{M}_{\odot}]$ (see Equation~\ref{eq:carni}), the bulk outflow velocity, $v_{\rm{max},[OIII]}$ (see Equation~\ref{eq:velocity}), the maximum radial extent of the outflow, $r_{\rm{max}}$ (see Section~\ref{sec:calcgasmass}), the outflow rate, $\dot{\rm{M}}_{\rm{out}}$ (see Equation~\ref{eq:outflow}), and the timescale of the outflow, $\rm{t}_{\rm{out}}$ (see Equation~\ref{eq:timescale}).}}
\label{table:rates}
\begin{tabular*}{\textwidth}{Cp{2.0cm}Cp{1.5cm}Cp{1.5cm}Cp{1.0cm}Cp{1.1cm}Cp{1.5cm}Cp{1.5cm}Cp{1.0cm}Cp{1.5cm}Cp{1.25cm}}
\hline
Name & $\log_{10}$ $[\rm{M}_{\rm{BH}}$/$\rm{M}_{\odot}]*$ & $\log_{10}$ $[\rm{L}_{\rm{OIII}}$/$\rm{erg}~\rm{s}^{-1}]$ & $\lambda_{\rm{Edd}}$* & $\dot{m}$* $[\mathrm{M_{\odot}\,yr^{-1}}]$ & $\log_{10}$ $[\rm{M}_{\rm{OIII}}$/$\rm{M}_{\odot}]$ & $v_{\rm{max},[OIII]}$ $[\rm{km}~\rm{s}^{-1}]$ & $r_{\rm{max}}$ $[\rm{kpc}]$ & $\dot{\rm{M}}_{\rm{out}}$ $[\mathrm{M_{\odot}\,yr^{-1}}]\dagger$ & $\rm{t}_{\rm{out}}$ $\rm{[Myr]}$ \\
\hline
Harry & $6.56^{+0.13}_{-0.12}$ & $41.2\pm1.2$ & $0.08^{+0.33}_{-0.02}$ & $0.02^{+0.04}_{-0.01}$ & $5.1\pm0.1$ & $836\pm28$ & $0.6\pm0.3$ & $0.19\pm0.09$ & $0.6\pm0.3$\\
Padma & $7.62^{+0.14}_{-0.14}$ & $42.2\pm0.2$ & $0.20^{+0.45}_{-0.09}$ & $0.07^{+0.4}_{-0.3}$ & $6.03\pm0.09$ & $1710\pm6$ & $2.4\pm0.4$ & $0.7\pm0.1$ & $1.4\pm0.2$ \\
Neville & $6.30^{+0.12}_{-0.12}$ & $41.6\pm0.4$ & $0.86^{+2.90}_{-0.26}$ & $0.07^{+0.11}_{-0.04}$ & $5.5\pm0.1$ & $1316\pm29$ & $2.1\pm0.3$ & $0.18\pm0.03$ & $1.6\pm0.2$ \\
Theodore & $6.73^{+0.11}_{-0.11}$ & $41.6\pm0.6$ & $0.77^{+1.68}_{-0.35}$ & $0.06^{+0.04}_{-0.02}$ & $5.4\pm0.2$ & $675\pm18$ & $1.3\pm0.4$ & $0.12\pm0.04$ & $1.9\pm0.6$ \\
\hline
\end{tabular*}
\justify
\vspace{0.5em}
* Measurements from SSL17. Black hole masses are calculated using a virial assumption by measuring the full width half maximum of the broadened H$\alpha$ ~component. SMBH accretion rates are calculated using bolometric luminosities inferred from WISE W3 magnitudes (see Section~\ref{sec:mdot}).\\
{\color{referee} $\dagger$ The quoted uncertainties on the outflow rates do not include an estimate of the uncertainty on the electron density, $n_e$ (see Section~\ref{sec:calcgasmass}). In this study we use a value of $n_e=500~\rm{cm}^{-3}$ to calculate the mass in the outflow to be consistent with \cite{carniani15}, but we note that taking the extremes in $n_e$ found by \citet[][$50-1000~\rm{cm}^{-3}$]{mingozzi19}, results in outflow rates either 10 times larger ($n_e =50~\rm{cm}^{-3}$) or two times smaller ($n_e=1000~\rm{cm}^{-3}$) than quoted here. The mean outflow rate of the four targets would therefore be in the range of $\langle\dot{M}_{\rm{out}}\rangle = 0.15-3~\rm{M_{\odot}}~\rm{yr}^{-1}$.}
\end{table*}
The kinetic energy outflow rate and momentum flux of the outflow can then be calculated as:
\begin{equation}\label{eq:kinout}
\dot{E}_{\rm{out}} = \frac{1}{2} \dot{M}_{\rm{out}} v_{\rm{[OIII]}}^2
\end{equation}
and
\begin{equation}\label{eq:momout}
\dot{P}_{\rm{out}} = \dot{M}_{\rm{out}}v_{\rm{[OIII]}}
\end{equation}
respectively.
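The full chain from Equations~\ref{eq:velocity}--\ref{eq:momout} is summarised in the following sketch; the inputs are of the order of Padma's entries in Table~\ref{table:rates}, but the split between $|\Delta v_{\rm{max}}|$ and $\sigma_{\rm{broad,[OIII],max}}$ is illustrative, since only their combination $v_{\rm{[OIII]}}$ is tabulated.
\begin{verbatim}
# Sketch of Equations 2-6 with explicit unit conversions. Inputs are of
# the order of Padma's values in Table 2; the split between |dv_max|
# and sigma_max is illustrative, only their sum v_[OIII] is tabulated.
KPC_KM = 3.086e16    # km per kpc
YR_S = 3.156e7       # seconds per year
MSUN_G = 1.989e33    # grams per solar mass

def outflow(m_gas_msun, dv_max_kms, sigma_max_kms, r_max_kpc, B=1.0):
    v_out = abs(dv_max_kms) + 2.0 * sigma_max_kms   # Eq. 2 [km/s]
    t_out = r_max_kpc * KPC_KM / (v_out * YR_S)     # Eq. 3 [yr]
    mdot = B * m_gas_msun / t_out                   # Eq. 4 [Msun/yr]
    mdot_cgs = mdot * MSUN_G / YR_S                 # [g/s]
    v_cgs = v_out * 1e5                             # [cm/s]
    e_dot = 0.5 * mdot_cgs * v_cgs ** 2             # Eq. 5 [erg/s]
    p_dot = mdot_cgs * v_cgs                        # Eq. 6 [dyn]
    return v_out, t_out, mdot, e_dot, p_dot

v, t, md, ed, pd = outflow(10.0 ** 6.03, 710.0, 500.0, 2.4)
print(f"v = {v:.0f} km/s, t = {t / 1e6:.1f} Myr, Mdot = {md:.2f} Msun/yr")
\end{verbatim}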
\subsection{Black hole accretion rates}\label{sec:mdot}
The SMBH accretion rate can be inferred from the bolometric luminosity of the AGN, $L_{\rm{bol}}$;
\begin{equation}\label{eq:bhmdot}
\dot{m} = L_{\rm{bol}}/\eta c^2,
\end{equation}
where $\eta = 0.15$ is the radiative efficiency (see \citealt{elvis02}). Bolometric luminosities were originally inferred by SSL17 for these four targets using the WISE W3 band magnitudes at $12\mu m$, by applying a correction from \cite{richards06}. It is possible that the W3 flux densities could be contaminated by star formation, however \cite{richards06} concluded that since there were minimal differences between their composite SEDs of Type 1 AGN around $\sim12\mu m$ this suggested minimal host galaxy contamination. This is unlike $\mathrm{\left[ O \textsc{iii}\right] }$, which could still have some star formation contamination in the narrow component for our four targets (e.g. see the top panel of Figure~\ref{fig:oiiinfour} for Neville). In addition, the normalisation factor used to convert $L_{\rm{[OIII]}}$ to $L_{\rm{bol}}$ is highly uncertain. While \cite{heckman04} suggest a normalisation factor of $\sim3500$, there is some debate in the literature over the correct value, with some arguing it is $\mathrm{\left[ O \textsc{iii}\right] }$~luminosity dependent \citep[e.g.][estimate it ranges from 87-454]{lamastra09}. We therefore decided to use the bolometric luminosities previously calculated by SSL17 using the less problematic W3 flux densities.
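As a worked example of Equation~\ref{eq:bhmdot}, the sketch below converts a bolometric luminosity into an accretion rate assuming $\eta = 0.15$; the input luminosity is an illustrative placeholder rather than one of the WISE-derived values from SSL17.
\begin{verbatim}
# Sketch of the accretion-rate relation above, with eta = 0.15
# (Elvis et al. 2002). The input luminosity is a placeholder.
MSUN_G = 1.989e33   # grams per solar mass
YR_S = 3.156e7      # seconds per year
C_CMS = 2.998e10    # speed of light [cm/s]

def mdot_bh(L_bol, eta=0.15):
    # Accretion rate [Msun/yr] from the bolometric luminosity [erg/s]
    return L_bol / (eta * C_CMS ** 2) * YR_S / MSUN_G

print(f"{mdot_bh(6.0e44):.3f} Msun/yr")  # ~0.07, of the order of Table 2
\end{verbatim}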
\section{Results}\label{sec:results}
The top panels of Figure~\ref{fig:oiiiwfour} show the integrated flux in the broad $\mathrm{\left[ O \textsc{iii}\right] }$~component which is used to calculate the gas masses, velocities, physical extents and outflow rates given in Table~\ref{table:rates}. The mean $\mathrm{\left[ O \textsc{iii}\right] }$~gas mass in the outflow for the four targets is $\langle\log_{10}[\rm{M}_{\rm{[OIII]}}/\rm{M}_{\odot}]\rangle = 5.5\pm0.2$ (with a range of $5.1-6.03$), with a corresponding mean outflow rate of $\langle\dot{M}_{\rm{out}}\rangle = 0.3\pm0.1~\rm{M}_{\odot}~\rm{yr}^{-1}$ (range $0.12-0.7~\rm{M}_{\odot}~\rm{yr}^{-1}$)\footnote{{\color{referee} Note that the uncertainties on these values do not include the uncertainties on the electron density $n_e$ (see Section~\ref{sec:calcgasmass}). In this study we use a value of $n_e=500~\rm{cm}^{-3}$ to be consistent with \cite{carniani15} in order to calculate the mass in the outflow, but we note that taking the extremes in $n_e$ found by \citet[][$50-1000~\rm{cm}^{-3}$]{mingozzi19}, results in outflow rates either 10 times larger ($n_e =50~\rm{cm}^{-3}$) or two times smaller ($n_e=1000~\rm{cm}^{-3}$) than quoted. The mean outflow rate of the four targets would therefore be in the range of $\langle\dot{M}_{\rm{out}}\rangle = 0.15-3~\rm{M_{\odot}}~\rm{yr}^{-1}$.}}. The outflows are substantial with a mean maximum radial extent of $\langle\rm{r}_{\rm{max}}\rangle = 1.6\pm0.4~\rm{kpc}$ (range $0.6-2.4~\rm{kpc}$), which is $\sim25\%$ of the galaxy Petrosian radius on average. {\color{referee} These extents are similar to those found in other AGN outflow studies; for example, \citet{bae17} found that the mean outflow radius in their sample (20 Type 2 AGN at $z<0.1$) was $\sim1.8~\rm{kpc}$, \citet{harrison14} found a range in $\mathrm{\left[ O \textsc{iii}\right] }$~outflow extents of $1.5-4.3~\rm{kpc}$ (16 Type 2 AGN $z<0.2$), and \cite{kang18} measured outflows ranging from $0.60-7.45~\rm{kpc}$ in size (23 Type 2 AGN $z<0.2$)}. Figure~\ref{fig:ratios} shows the resolved narrow and broad $\mathrm{\left[ O \textsc{iii}\right] }$/H$\beta$ ratios and reveals how the outflows are ionized by the AGN in all four targets.
The gas mass values are consistent with those found by S19 using a narrowband imaging technique with the Shane-3m at the Lick Observatory, although they are on average $\sim1$ dex larger. This is unsurprising given that S19 struggled to cleanly separate the broad and narrow emission using narrowband data (either due to extended star formation or subtraction of the central AGN PSF), and were only able to derive a lower limit on the gas mass for Neville. This suggests that the PSF subtraction dominated the uncertainty in the measurements of S19, resulting in an underestimate of the $\mathrm{\left[ O \textsc{iii}\right] }$~gas masses. Note that the values initially quoted by S19 were affected by a standard star flux calibration error, and were on average overestimated by $2.6$ dex. This has since been corrected with an erratum (Smethurst et al. 2021; erratum submitted). We are not able to directly compare the velocities or maximum extents of the outflows (and therefore the outflow rates) derived, as S19 used $|\Delta v_{\rm{max}}|$ rather than $v_{\rm{[OIII]}}$ and did not deconvolve their measurement of $r_{\rm{max}}$ (see Section~\ref{sec:calcgasmass}), both of which lead to an underestimate of the outflow rates. Note that $v_{\rm{[OIII]}}$, as used in this study, is a more accurate representation of the maximum outflow velocity (see Section~\ref{sec:calcgasmass} and Equation~\ref{eq:velocity}).
Despite the many limitations to narrowband imaging, it does allow for a higher spatial resolution in order to discern the basic morphology and features of each outflow. The KCWI data in this study has low spatial resolution and does not allow us to draw any conclusions about the features of each outflow (the biggest limitation is the seeing, estimated at $1.1''$). The top panels of Figure~\ref{fig:oiiiwfour} reveal how the brightest pixel in the broadened $\mathrm{\left[ O \textsc{iii}\right] }$~emission ionised by the outflow also coincides with the brightest pixel in the narrow $\mathrm{\left[ O \textsc{iii}\right] }$~emission (ionised by the central AGN and star formation) for 3 of our sources (Padma has an offset). If there is structure to the outflows, it is lost due to the combination of the large pixel size of KCWI and the seeing. Therefore, in order to make any statements about the morphology of these outflows, more observations will be required with a higher spatial resolution IFU with AO capabilities (e.g. such as MUSE on the VLT).
\subsection{Harry (J0813+5422)}\label{subsec:harry}
Harry has the strongest bar feature of the four galaxies targeted in this study (as seen in Figures~\ref{fig:hsttargets} \& \ref{fig:kcwitargets}). The spiral features are picked up in the $\rm{H}\beta$~emission seen in the top left panel of Figure~\ref{fig:hbetafour}, with the velocity map revealing the ordered rotation in this feature. The narrow $\mathrm{\left[ O \textsc{iii}\right] }$~emission, shown in Figure~\ref{fig:oiiinfour}, is centrally concentrated and shows some ordered rotation, suggesting this emission is ionised by a combination of the AGN and central star formation. The blueshifted, broadened $\mathrm{\left[ O \textsc{iii}\right] }$~emission shown in Figure~\ref{fig:oiiiwfour}, however, does not show clear rotation in the velocity map. Figure~\ref{fig:ratios} reveals how the central region and the outflow have high $\mathrm{\left[ O \textsc{iii}\right] }$/$\rm{H}\beta$~ratios, suggesting that the outflow is indeed ionised by the AGN. Table~\ref{table:rates} reveals that Harry has the lowest ionised gas mass, the lowest SMBH accretion rate and the lowest spatial extent of the four targets. This suggests Harry's outflow is relatively new, therefore it is unsurprising that Harry has the shortest timescale over which it is estimated to have been active of all four targets: $0.6~\rm{Myr}$ (see Table~\ref{table:rates}).
\subsection{Padma (J1012+1017)}\label{subsec:padma}
Figure~\ref{fig:hsttargets} reveals that Padma has a bar lens feature \cite[an oval-like structure along the bar major axis, see][]{athan15} surrounded by spiral structure. This spiral structure is not detected in the $\rm{H}\beta$~emission flux shown in Figure~\ref{fig:hbetafour}, however the corresponding $\rm{H}\beta$~velocity map shows the most ordered rotation of the four targets studied (and similarly for the narrow $\mathrm{\left[ O \textsc{iii}\right] }$~velocity map in Figure~\ref{fig:oiiinfour}). The brightest point in the broadened $\mathrm{\left[ O \textsc{iii}\right] }$~flux is offset from the brightest point in the narrow $\mathrm{\left[ O \textsc{iii}\right] }$~flux (shown by the blue cross in Figure~\ref{fig:oiiiwfour}). Padma has the largest ionised gas mass of all four targets, at an order of magnitude larger than Harry. Padma also has the largest SMBH mass, SMBH accretion rate, outflow velocity and physical extent ($2.4~\rm{kpc}$), leading to the largest outflow rate of the four targets of $0.7\pm0.1~\rm{M}_{\odot}~\rm{yr}^{-1}$. The ratio between the SMBH accretion rate and the outflow rate is therefore much larger, meaning more of the inflowing material is ejected in the outflow than is accreted by the SMBH (i.e. a higher mass loading factor, see \citealt{qui21} for example).
\subsection{Neville (J1034+3938)}\label{subsec:neville}
Neville has prominent flocculent spiral features and a possible weak bar, as revealed by the HST imaging in Figure~\ref{fig:hsttargets}. Emission from this flocculent structure is identifiable in the $\rm{H}\beta$~emission flux shown in Figure~\ref{fig:hbetafour}, where clear rotational structure can be seen in the velocity map. The centre of Neville's $\rm{H}\beta$~velocity and velocity dispersion map show broadened emission with little rotation, suggesting the central $\rm{H}\beta$~emission is ionised by the AGN and not star formation, which is confirmed by the relatively high $\mathrm{\left[ O \textsc{iii}\right] }$/$\rm{H}\beta$~ratios seen in Figure~\ref{fig:ratios}. Extended narrow $\mathrm{\left[ O \textsc{iii}\right] }$~emission across a spiral feature can be seen in Figure~\ref{fig:oiiinfour}, suggesting ionisation by star formation is also present along with ionisation from the central AGN. S19 also reported extended $\mathrm{\left[ O \textsc{iii}\right] }$~emission from Neville in their narrowband imaging data, resulting in an uncertain isolation of the emission ionised by the outflow alone. With one of the largest SMBH accretion rates, the SMBH is accreting at a similar order of magnitude to the measured outflow rate. The outflow has one of the highest velocities and physical extents ($2.1~\rm{kpc}$) after Padma.
\subsection{Theodore (J1314+4218)}\label{subsec:theodore}
Theodore has a strong bar feature with faint, loosely wound spiral arms emerging from the ends (as seen in HST imaging in Figure~\ref{fig:hsttargets}). Figure~\ref{fig:kcwitargets} reveals how only the bar feature is picked up in the KCWI observations. This is particularly apparent in the flux in the $\rm{H}\beta$~emission shown in Figure~\ref{fig:hbetafour}, which also reveals some rotational structure in the corresponding velocity map. This bar feature is also just noticeable in the narrow $\mathrm{\left[ O \textsc{iii}\right] }$~emission (Figure~\ref{fig:oiiinfour}), suggesting ionisation due to ongoing star formation in the bar. This could also extend into the central regions of the galaxy as the narrow $\mathrm{\left[ O \textsc{iii}\right] }$/$\rm{H}\beta$~ratio in the top right panel of Figure~\ref{fig:ratios} is low, suggesting ionisation dominated by star formation. However, the broad $\mathrm{\left[ O \textsc{iii}\right] }$/$\rm{H}\beta$~ratios in the bottom panels of the same figure are high, suggesting the outflow is ionised by the AGN and not stellar winds. Like Neville, the SMBH accretion rate of Theodore is of the same order of magnitude as the outflow rate (a factor of just $\sim2$ difference). The resulting outflow has the lowest velocity of the four targets observed.
\section{Discussion}\label{sec:discussion}
Given that the targets we have observed in this study are all disk-dominated with little to no bulge component (see Figure~\ref{fig:hsttargets}), we assume the galaxy and the SMBH have co-evolved via a non-merger process \citep{walker96,hopkins12, martig12, tonini16, stevens16}. We must therefore consider which processes are able to drive an inflow of gas of at least $0.21-0.77~\rm{M}_{\odot}\rm{yr}^{-1}$ (to power both the accretion of the SMBH and the outflow) for an extended period of $0.6-1.9~\rm{Myr}$ (the time over which the outflows in our four targets have been active, see Table~\ref{table:rates} and Equation~\ref{eq:timescale}).
Bars and spiral arms are long-lived morphological features and could therefore feasibly drive an inflow to the central regions of a galaxy over many $\rm{Gyr}$ \citep{fanali15, hunt18, jung18}\footnote{Note that these simulations only considered galactic scale inflows and did not consider how gas was transferred from kpc to sub-pc scales in the central regions. Therefore, these simulations do not provide estimates for the amount of gas that makes it to the AGN accretion disk itself, merely that which is transferred to the central gas reservoir.}. All four of our targets show clear spiral features (see Figure~\ref{fig:hsttargets}), with Harry and Theodore showing a strong bar feature, Neville a weak bar feature \citep{nair10b} and Padma a barlens feature \cite[an oval-like structure along the bar major axis, see][]{athan15}. Simulations suggest both bars and spiral arms can drive inflows at rates an order of magnitude larger than needed to power the combined outflow and SMBH accretion rates for all four targets \cite[$0.1-\rm{few}$ $M_{\odot}~\rm{yr}^{-1}$;][]{regan04, davies09, lin13,fanali15,slater19}. This order of magnitude difference is promising: since our simplifying assumption here is that the inflow must be at least enough to power both the SMBH accretion and the outflow, it means that the inflow would be sufficient to also fuel central star formation or contribute to the central gas reservoir \citep{tacconi86, boker03, bigiel08, leroy13, moreno21}. This suggests that bars and spiral arms would be capable of driving inflows which could sustain both the SMBH growth and an outflow from the AGN, while still contributing gas to the central gas reservoir of the galaxies.
S19 compared their AGN outflow rates and SMBH accretion rates to the results of \cite{bae17}, who studied a sample of $20$ nearby ($0.024 < z < 0.098$) Type 2 AGN with mixed morphologies (two of their sample are ongoing mergers) using the Magellan/IMACS-IFU and VLT/VIMOS-IFU\footnote{These IFUs had a large enough wavelength range to allow \cite{bae17} to empirically determine the electron densities of the ionised gas, $n_e$, using the $\mathrm{\left[ S \textsc{ii}\right] }$~line ratio, unlike in this study with KCWI. They found a range of $54 < n_e < 854~\rm{cm}^{-3}$, with an average $n_e\sim360\pm230~\rm{cm}^{-3}$ which is similar to the value of $n_e=500~\rm{cm}^{-3}$ used in this study. \cite{bae17} also used the $M -\sigma_*$ relation of \cite{park12} to derive black hole masses (rather than the virial assumption of \cite{greene05} as implemented by SSL17). In addition they calculated bolometric luminosities from the luminosity of the central narrow $\mathrm{\left[ O \textsc{iii}\right] }$~emission \cite[see][]{heckman04}, as opposed to deriving them using the WISE W3 band at $12\mu m$ as implemented by SSL17. The reader is urged to bear these caveats in mind while the two studies are compared.}. Although we only have four targets in this study we can still make some comparisons to the \cite{bae17} sample. The velocities of the outflows in our sample are comparable to the \cite{bae17} sample (when calculated in the same way as Equation~\ref{eq:velocity}), with our four targets having higher velocities by a factor of $\sim1.35$ on average. However, the average outflow rates for our four targets are much lower than those of the merger powered \cite{bae17} sample, $\sim15$ times lower on average. In contrast, the black hole accretion rates are larger in our four targets than the \cite{bae17} sample by a factor of $\sim3$ on average. This is in agreement with the findings of S19, who discussed the possibility that this scenario could be explained by higher spin of the SMBHs in the disk-dominated sample, following the hypothesis of \cite{npk12}.
Given that the outflow rates of the merger-grown \cite{bae17} sample are $\sim15$ times larger than the outflow rates of the four disk-dominated galaxies studied in this work, this suggests that the inflow rates funnelled by merger processes must be much larger than in secular processes. However, given the comparable accretion rates of the black holes powering the AGN, these inflows do not contribute to the growth of the black hole, but instead are used to power a large outflow which can have considerable impact on the surrounding galaxy. This supports the conclusions of \cite{mcalpine20}, who found using the EAGLE simulations, that mergers do not induce a significant amount of SMBH growth, instead finding that the majority of mass is accreted by the SMBH outside the merger period. Similarly \cite{martin18} showed using the Horizon-AGN simulation that only $\sim35\%$ of all of the matter contained in SMBHs by $z\sim0$ is a result of mergers (either major or minor). Combining these results with our findings here suggests that secular processes are responsible for the majority of SMBH growth, whereas mergers are responsible for the majority of outflowing material and the subsequent feedback on the galaxy.
\begin{figure}
\centering
\includegraphics[width=0.475\textwidth]{outflow_energetics_compare_RW18_LbolSSL17.png}
\caption{The mass outflow rate (top), energy injection rate (middle), and momentum flux (bottom) against the AGN bolometric luminosity ($L_{\rm{bol}} = 3500~L_{[OIII]}$) for our four sources (red crosses). This figure is a recreation of Figure 11 from \protect\citet{RW18}; we compare our sources with their estimates for 5221 Type 1 AGNs from SDSS ($z<0.3$; shown by the grey circles). This figure shows how our secularly powered outflows are typical of low-redshift Type 1 AGN and that they have momentum conserving outflows.}
\label{fig:energetics}
\end{figure}
We also compare the outflow rates, kinetic energy outflow rate and momentum flux of the outflow calculated for our sample to a sample of $\sim5000$ Type 1 AGN identified in SDSS from \cite{RW18}\footnote{Note that \cite{RW18} used SDSS spectra to determine outflow gas masses, which may miss some outflow flux outside the fibre (leading to a possible underestimate of the outflow rate) and inferred the physical extent of the outflow using an empirical relation with $\mathrm{\left[ O \textsc{iii}\right] }$~luminosity from \cite{kang18}. {\color{referee} In addition, \cite{RW18} estimated bulk outflow velocities as $v_{out} = (\sigma_{\rm{broad,[OIII],max}}^2 + |\Delta v_{\rm{max}}|^{2})^{0.5}$, which is different from how we estimated the bulk velocities in this study (see Equation~\ref{eq:velocity}). Calculating our outflow velocities in this way results in lower values than quoted in Table~\ref{table:rates}, by $541~\rm{km}~\rm{s}^{-1}$ on average. This particularly affects the comparison of $\dot{E}_{out}$ which has a $v_{\rm{out}}^2$ dependency, leading to an average difference in $\log_{10}\dot{E}_{\rm{out}}$ of $\sim0.7$ dex (and $\sim0.34$ dex in $\log_{10}\dot{P}_{\rm{out}}$). Readers should bear these caveats in mind while comparing the results of this study with those from \cite{RW18} in Figure~\ref{fig:energetics}, however we note that these differences due to the alternate bulk outflow velocity estimate used do not account for the differences between our four targets and the Type 1 AGN population seen in Figure~\ref{fig:energetics}.}} in Figure~\ref{fig:energetics}. We find that the outflow rates of our four targets are comparable to the larger AGN population given their bolometric luminosities. However, given their larger velocities, this results in higher kinetic energy injection rates and momentum flux compared to the larger AGN population, but still within the typical range. This figure demonstrates that the secularly powered outflows of our four targets are typical of low-redshift Type 1 AGN. It is worth noting here that many AGN are found in non-merger systems (for example see \citealt{smethurst16, aird19}), with a wide-range of morphologies, which may also be fuelled by these same secular processes. Given that we find that our outflows and accretion rates are typical of the larger low-redshift AGN population, and given the results of simulations such as \cite{martin18} and \cite{mcalpine20}, it is possible that the majority of low-redshift AGN (both growth and outflows) are powered by secular processes.
The momentum flux of the outflows allows us to probe whether the outflows are momentum-conserving \cite[i.e. winds driven by radiation pressure][]{thompson15, costa18} or energy-conserving \cite[i.e. driven by fast, small-scale winds][]{faucher12, costa14}. The average ratio of $\log_{10}[c\dot{P}_{\rm{out}}/L_{\rm{bol}}] = -0.91\pm0.08$ suggests that these outflows are momentum-conserving. If the linear ratio $c\dot{P}_{\rm{out}}/L_{\rm{bol}}$ were above unity, then an extra boost of momentum from an energy-conserving wind (which does work on the surrounding material, therefore increasing the momentum of the large-scale wind) would be required. The measurements of the kinetic energy injection rate allow us to probe the physical driver of the outflows observed in our four targets. For example, the ratio of $\dot{E}_{\rm{out}}/L_{\rm{bol}}$ is between $0.004\%$ and $0.12\%$ for our targets, meaning that the AGN is energetically sufficient to drive the observed outflows. This is in agreement with the high $\mathrm{\left[ O \textsc{iii}\right] }$/$\rm{H}\beta$~ratios seen in Figure~\ref{fig:ratios} suggesting that the outflows are ionised by the AGN rather than star formation. Such low values of $\dot{E}_{\rm{out}}/L_{\rm{bol}}$ are often interpreted as outflows which are incapable of impacting their surrounding galaxy through AGN feedback. Many theoretical works claim that only those outflows with $\dot{E}_{\rm{out}}/L_{\rm{bol}} \gtrsim 0.5-5\%$ are capable of quenching galaxies \citep{dimatteo05, hopkins10, harrison18}; however Figure~\ref{fig:energetics} shows how the majority of low-redshift AGN do not achieve such high efficiencies, with the majority $<1\%$.
To determine whether the outflows of our four targets will have an effect on their host galaxies, we first compare the velocity of each outflow to the escape velocity of the galaxy at a radius equal to the maximum extent of the outflow. We assume an $n=1$ Sersic profile to model the light distribution in each galaxy and calculate the fraction within the most distant spatial extent of the outflow, $r_{max}$. We then assume a constant mass-to-light ratio in order to work out the total stellar mass of the galaxy within that radius, $M_{*,r<r_{\rm{max}}}$. The escape velocity of the galaxy at the maximum extent of each outflow is then calculated as $v_{esc, gal} = (GM_{*,r<r_{max}}/r_{max})^{0.5}$, assuming spherical symmetry. The average $v_{[OIII]}$ for the four targets in our sample is $1134\pm 205~\rm{km/s}$, which is $\sim30.5$ times larger than the average escape velocity of the galaxy. We can therefore assume that these outflows, despite their relatively lower rates, will escape the galactic potential and cause AGN feedback to the galaxy by driving gas out of the central regions, or cause feedback to the galactic halo through heating the intergalactic medium (note the large radial extent of the outflows in these four targets of $0.6-2.4~\rm{kpc}$, which is $\sim25\%$ of the galaxy Petrosian radius on average).
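A minimal sketch of this escape-velocity estimate is given below, assuming (as in the text) an $n=1$ Sersic profile, a constant mass-to-light ratio and spherical symmetry; the total stellar mass and disk scale length used are illustrative placeholders, not measurements of our targets.
\begin{verbatim}
# Sketch of the escape-velocity comparison: the enclosed light fraction
# of an n = 1 Sersic (exponential) profile is f(<r) = 1 - exp(-x)(1+x)
# with x = r/h, and v_esc = sqrt(G M(<r) / r) as defined in the text.
import math

G_ASTRO = 4.301e-6  # G in kpc (km/s)^2 / Msun

def v_escape(m_star_total, r_max_kpc, h_kpc):
    x = r_max_kpc / h_kpc
    f_enc = 1.0 - math.exp(-x) * (1.0 + x)  # exponential-disk fraction
    m_enc = f_enc * m_star_total            # constant M/L assumed
    return math.sqrt(G_ASTRO * m_enc / r_max_kpc)  # [km/s]

# Placeholder example: a 1e10 Msun disk with a 3 kpc scale length and
# an outflow reaching r_max = 2.4 kpc
print(f"v_esc ~ {v_escape(1e10, 2.4, 3.0):.0f} km/s "
      f"vs <v_[OIII]> ~ 1134 km/s")
\end{verbatim}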
In order to determine whether the outflows are impacting each galaxy, we would need an estimate of the resolved SFR (e.g. from H$\alpha$ and/or D$_n4000$). The wavelength range of KCWI does not cover these spectral features at the redshifts of these sources; an IFU with a larger wavelength range would be necessary to quantify the feedback efficacy. Since these are Type 1 AGN, the SFRs derived from SDSS spectra are also unreliable due to contamination from the AGN. However, it is worth noting that these four targets have galaxy $u-r$ colours\footnote{Calculated in a `donut' shaped aperture by removing the SDSS PSF magnitude from the Petrosian magnitude.} in the range $1.7-2.5$ ($\pm0.1$; although note this is not the case for the parent sample of disk-dominated galaxies, see Section~\ref{sec:sample}) and would therefore be classified as either Green Valley or Red Sequence galaxies \citep{baldry04, smethurst16}.
In addition, SSL17 demonstrated that these disk-dominated systems lie on the typical galaxy stellar mass-SMBH mass correlation (i.e. within the scatter), suggesting that non-merger co-evolution of galaxies with their SMBHs is possible. Therefore, if \emph{both} merger-driven and non-merger-driven SMBH growth lead to co-evolution, this suggests that co-evolution is regulated by feedback in both scenarios. Confirming whether AGN outflows in disk-dominated galaxies are powerful enough to cause feedback is therefore of great importance for our understanding of galaxy evolution through co-evolution. An IFU with a larger wavelength range (to cover e.g. $\rm{H}\alpha$ in order to probe the SFR), higher spatial resolution (to more accurately resolve the regions impacted by the outflow) and better seeing (the biggest limiting factor when using KCWI) would allow for a more detailed study of the feedback effects of outflows powered by secular processes in these disk-dominated systems. For example, an IFU such as MUSE on the Very Large Telescope (VLT), used with adaptive optics, would be ideal for this science case.
\section{Conclusion}\label{sec:conc}
We have observed four disk-dominated galaxies hosting luminous AGN with KCWI, an IFU at the Keck Observatory. These galaxies are assumed to have their evolution (and therefore their SMBH growth) dominated by non-merger processes due to their lack of a central bulge (see Figure~\ref{fig:hsttargets}).
We performed spectral fits to each of the reduced data cubes from KCWI and detected blueshifted broadened $\mathrm{\left[ O \textsc{iii}\right] }$~components in all four targets with $\mathrm{\left[ O \textsc{iii}\right] }$/$\rm{H}\beta$ ratios indicative of ionisation by the AGN. With these spectra we were able to spectrally isolate the broadened $\mathrm{\left[ O \textsc{iii}\right] }$~emission from the narrow $\mathrm{\left[ O \textsc{iii}\right] }$~emission ionised by the central AGN (see Figures~\ref{fig:oiiinfour} \&~\ref{fig:oiiiwfour}). From these fits we calculated the integrated flux in $\mathrm{\left[ O \textsc{iii}\right] }$~$4958\rm{\AA}~\&~5007\rm{\AA}$ across each target and from this calculated the total ionised gas mass in the outflow (see Equation~\ref{eq:carni}). From the maximum extent of the outflow (see top panels of Figure~\ref{fig:oiiiwfour}) and the bulk velocity of the outflow we were able to estimate the outflow rate (see Equation~\ref{eq:outflow}), energy injection rate and momentum flux for these four systems. Our conclusions are as follows:
\begin{enumerate}
\item The outflow rates of the four targets range from $0.12-0.7~\rm{M}_{\odot}~\rm{yr}^{-1}$, with corresponding SMBH accretion rates in the range $0.02-0.7~\rm{M}_{\odot}~\rm{yr}^{-1}$. The velocities, outflow rates, kinetic energy injection rate and momentum flux of these secularly powered outflows are all typical of other low-redshift AGN outflows in the literature.
\item Secular processes such as the funnelling of gas by bars and spiral arms are more than capable of providing enough gas to power both the accretion and outflow rates measured in this study, with simulations suggesting they can power inflows an order of magnitude larger than the combined SMBH accretion and AGN outflow rates observed. This suggests that a significant amount of the inflow funnelled to the centre by secular processes will not necessarily be used for SMBH growth or AGN outflows, but will instead contribute to the central gas reservoir of the galaxy.
\item The maximum radial extent of the outflows is substantial, ranging from $0.6-2.4~\rm{kpc}$, which is on average $\sim25\%$ of the galaxy Petrosian radius.
\item The outflow velocities in all of our AGN exceed the escape velocity of the galaxy at the maximum radial extent of the outflow ($\sim30$ times larger on average). This suggests that these outflows will have a feedback effect on their galaxies, perhaps expelling gas from the central regions or heating the surrounding halo. If the co-evolution of SMBHs and galaxies is possible through both merger and non-merger driven growth, then AGN feedback may be responsible for regulating this co-evolution in both scenarios. Further spectral observations using an IFU with a larger wavelength range and higher spatial resolution will be needed to quantify the resolved feedback efficacy of these outflows.
\item We find that the outflow rates in the merger-powered AGN sample of \cite{bae17} are $\sim51$ times larger than in our four disk-dominated targets, whereas the SMBH accretion rates are $\sim3$ times lower. This is in agreement with the findings of \cite{smethurst19b}, who attributed this to the hypothesised spin-up of SMBHs by a secular feeding mechanism.
\end{enumerate}
Combining our results with the conclusions of recent simulations \citep[e.g.][]{martin18, mcalpine20} suggests that secular processes are responsible for the majority of SMBH growth over cosmic time. A higher spatial resolution IFU study, supported by adaptive optics, of the larger parent sample of these four disk-dominated galaxies would allow for a more detailed study on the SMBH growth processes and AGN feedback effects of outflows powered by secular processes in these disk-dominated systems.
\section*{Acknowledgements}
RJS gratefully acknowledges funding from Christ Church, Oxford. BDS gratefully acknowledges current support from a UK Research and Innovation (UKRI) Future Leaders Fellowship (MR/T044136/1) and past support at the time of KCWI proposal and observing from the National Aeronautics and Space Administration (NASA) through Einstein Postdoctoral Fellowship Award Number PF5-160143 issued by the Chandra X-ray Observatory Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of NASA under contract NAS8-03060.
This research is based on observations made with the NASA/ESA Hubble Space Telescope obtained from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5–26555. These observations are associated with program HST-GO-14606.
This research made use of Astropy,\footnote{http://www.astropy.org} a community-developed core Python package for Astronomy \citep{astropy13, astropy18} and the affiliated {\tt ccdproc} package \citep{ccdproc}.
The data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation.
The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawai'ian community. We are most fortunate to have the opportunity to conduct observations from this mountain.
Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is \url{www.sdss.org}.
SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, the Chilean Participation Group, the French Participation Group, Harvard-Smithsonian Center for Astrophysics, Instituto de Astrof\'isica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, Lawrence Berkeley National Laboratory, Leibniz Institut für Astrophysik Potsdam (AIP), Max-Planck-Institut f\"ur Astronomie (MPIA Heidelberg), Max-Planck-Institut für Astrophysik (MPA Garching), Max-Planck-Institut f\"ur Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observat\'orio Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Aut\'onoma de M\'exico, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University and Yale University.
\section*{Data Availability}
The data underlying this article will be shared on reasonable request to the corresponding author.
\bibliographystyle{mn2e}
\section{INTRODUCTION}
\IEEEPARstart{C}{rowd} analysis is a popular application of computer vision and has achieved superb success, especially in crowd counting \cite{song2021choose, yan2019perspective, wei2021scene}. Crowd counting is a fundamental task that estimates the total count of instances in crowd scenes. Many mainstream methods produce the predicted counts by directly regressing a scalar or integrating a density distribution; such methods cannot yield an accurate location for each instance in crowd scenes, especially in congested regions. Recently, some researchers have focused on crowd instance localization \cite{idrees2018composition, liu2019recurrent, wang2020nwpu}, which aims to locate the center of the head of each person. Its instance-level predictions provide more detailed information than traditional counting algorithms, and they aid high-level crowd analysis tasks, such as crowd tracking \cite{sundararaman2021tracking} and group detection \cite{sanford2020group}, more effectively.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{figIntro}
\caption{Crowd scenes with intrinsic scale shift. To facilitate visualization, we convert the images to grayscale and mark boxes of different scales in different colors. }
\label{fig_intro}
\end{figure}
However, crowd localization remains challenging due to its instance-level objective. In crowd scenes, instances appear at distinct scales within the image; this representation inconsistency is called intrinsic scale shift. Intrinsic scale shift, which arises from the varying distances of people from the camera, is an essential issue in crowd localization because it is ubiquitous in crowd scenarios. Fig. \ref{fig_intro} depicts typical examples of intrinsic scale shift, with boxes of different scales annotated in different colors. Intrinsic scale shift makes the crowd locator struggle and renders it insufficient to catch instances of varying scales. Precisely, it is arduous for the crowd locator to converge on data that are not independent and identically distributed, and the scale distribution of data with intrinsic scale shift is chaotic. Thus, it is imperative to address intrinsic scale shift in crowd localization.
In crowd counting, a related but more mature field than crowd localization, intrinsic scale shift has been attacked along two mainstream directions. From the model perspective, designing a scale-aware model mitigates intrinsic scale shift to a certain extent. SAS Net \cite{song2021choose} proposes a fusion strategy among feature maps with different resolutions to aggregate different scales. Although scale-aware models yield some improvement, manually designed architectures can hardly capture arbitrary scale information in the wild. The second direction is therefore from the data perspective: aligning the intrinsic scale shift itself. SD Net \cite{ma2021towards} aligns scale shift among orderly divided image patches. However, orderly dividing images ignores the scale variance within each patch. Moreover, the semantic information is distorted in the marginal regions of the patches due to patch-level dividing; in crowd localization, this semantic distortion of instances lying in the marginal regions degrades performance. To this end, the crowd locator RAZ Net \cite{liu2019recurrent} leverages a recurrent method to find the region with the smallest scales and assigns a scale factor to each recurrent layer. Nevertheless, it is challenging to find the smallest-scale region without missing other comparable regions.
This paper aims to tackle intrinsic scale shift in crowd localization from the perspectives of data regularization and knowledge transfer. From the data perspective, intrinsic scale shift renders the scale distribution of crowd scenes chaotic. Thus, we propose a Gaussian Mixture Scope (GMS), which aligns the chaotic scale distribution and constrains the normalization of the data. Specifically, a Gaussian mixture distribution is leveraged to adapt to the scale distribution. By decoupling the mixture model, the distribution is separated into normal sub-distributions, so the chaos within the scale distribution is mapped to the shift among the sub-distributions.
In light of the above shift, we align the scales among sub-distributions by geometrizing the comparison of probability distributions. Concretely, when adapting the scale distribution via the Gaussian mixture, constraining a spatial feature to be part of the observations provides spatial compactness to the sub-distributions. This compactness makes it possible to treat sub-distributions as image patches and align the shift via image interpolation.
Although this constraint provides spatial compactness to the sub-distributions, decoupling the scale distribution incurs certain semantic issues. Since the decision boundaries among sub-distributions are adaptive, decoupling amounts to adaptively cutting the image. Under this cutting strategy, shift alignment via interpolation with distinct scale factors distorts the semantics at patch boundaries. To this end, we propose a sub-distribution re-aggregation trick. In shift alignment, each image is kept whole and fed into the crowd locator, and windows are then shot from the result according to the corresponding sub-distributions. As a result, the undistorted whole images suffer far less influence than the distorted ones.
With the proposed GMS aligning the scale shift and the sub-distribution re-aggregation alleviating semantic distortion, the chaos in the data distribution is regularized. However, directly implementing GMS to regularize the training data in the training phase dislodges the hard samples. The crowd locator then cannot actively \textit{learn} the knowledge, but passively \textit{receives} it, which incurs overfitting on the training set. We assert that GMS-regularized data can be treated as exploited latent knowledge. To further transfer this latent knowledge from the regularized data to the model, we propose a Scoped Teacher, which plays the role of a bridge in knowledge transfer. The Scoped Teacher introduces a new paradigm compared with conventional learning from manually annotated ground truth in fully-supervised crowd localization. In training, the GMS-regularized images are fed into the Scoped Teacher to exploit the latent knowledge, which is hard to derive from ground-truth learning. To transfer the knowledge, a consistency loss is implemented between the teacher and student. In this way, the student model gradually learns the features exploited by the Scoped Teacher and converges better.
In a nutshell, our contributions are four-fold:
\begin{itemize}
\item We propose to tackle crowd localization from the perspective of scale shift and provide a novel scale distribution alignment that geometrizes the issue and implements it via image interpolation.
\item We present a Gaussian Mixture Scope (GMS) that performs scale alignment via scale distribution decoupling and sub-distribution alignment. Moreover, we propose a sub-distribution re-aggregation trick to alleviate boundary semantic distortion during alignment.
\item We design a Scoped Teacher to transfer the latent knowledge, which also addresses the overfitting incurred when GMS is used directly in training. The Scoped Teacher constitutes a new paradigm in fully-supervised crowd localization.
\item Quantitative results show that our proposed work achieves state-of-the-art performance on five popular crowd localization datasets.
\end{itemize}
\section{RELATED WORKS}
In this section, we briefly review works related to our method. First, since intrinsic scale shift also exists in crowd counting and has been attacked by that community, it is useful to review intrinsic scale shift in crowd counting. Second, we introduce crowd localization works. Finally, to distinguish our approach from other teacher-student models, we analyze works that adopt teacher models.
\subsection{Crowd Counting}
As aforementioned, the counting community attacks intrinsic scale shift along two mainstream directions. From the model perspective, a multitude of works \cite{liu2019crowd,liu2018crowd,ma2020learning, cheng2019improving, sindagi2019multi, onoro2016towards, liu2019crowd, kang2018crowd, bai2020adaptive} deal with intrinsic scale shift via multi-feature fusion. Moreover, some others \cite{shi2019revisiting, yan2019perspective, yang2020reverse, zhang2015cross} trace the essence of intrinsic scale shift, namely perspective imaging, and utilize predicted perspective maps in their training strategies. Although perspective-related works achieve some improvement, we assert that the intrinsic scale shift itself has not been aligned. To this end, AutoScale \cite{xu2022autoscale} and L2SM \cite{xu2019learn} propose to scale image patches according to density level. \cite{sajid2020plug, sajid2020zoomcount, babu2017switching} also feed patches of distinct density levels into CNNs with different receptive fields. However, density level cannot represent instance scale. Therefore, SD Net \cite{ma2021towards} introduces instance scale and uses it to align scale shift, but it fails to preserve semantic information during processing and ignores the intrinsic scale shift within the divided patches.
\subsection{Crowd Localization}
Crowd localization aims to locate the precise position of each head shown in the image. The most straightforward idea for localization is object detection \cite{redmon2016you, redmon2017yolo9000, redmon2018yolov3, ren2015faster, stewart2016end}. TinyFaces \cite{hu2017finding} utilizes a detection-based framework built on an analysis of the impacts of scale, contextual semantic information and image resolution to locate tiny faces. Following TinyFaces \cite{hu2017finding}, some researchers \cite{bai2018finding, li2019pyramidbox++, li2019dsfd, lin2017focal} extend this line of work to tackle intrinsic scale shift. However, due to the shortcomings of the detection structure, detection-based methods still perform poorly in extremely congested scenarios. Thus, some researchers turn to regression-based crowd locators. RD Net \cite{lian2019density} leverages depth information to generate spatially aware supervision maps, but depth information is unavailable in mainstream crowd localization datasets. FIDTM \cite{liang2021focal} introduces a distance map to learn a precise and spatially aware density map, but fails to address intrinsic scale shift. \cite{abousamra2021localization, arteta2016counting, gao2020learning, gao2021congested, han2021ldc} utilize instance segmentation to locate crowds. In particular, instance segmentation locators introduce box annotations into the regression, so instance scale information can be estimated; our method follows this baseline. What's more, some other works concentrate on the intrinsic scale shift of crowd localization: AutoScale \cite{xu2022autoscale} proposes to estimate a dense region and learn to zoom it, and similarly, RAZ Net \cite{liu2019recurrent} proposes a selection strategy to select the dense region. However, these zooming strategies cannot cater to variance across multiple regions.
\subsection{Teacher-Student Model}
The original proposal of the teacher-student model serves transfer learning. \cite{hinton2015distilling, zagoruyko2016paying, cho2019efficacy, xie2020self, yang2019training} utilize teacher-student models in Knowledge Distillation (KD). Our Scoped Teacher is indeed inspired by KD, in which the teacher model plays the role of a bridge between the data and the student model. However, KD typically employs a larger teacher model to exploit latent knowledge, whereas our teacher shares the same architecture as the student, and the images fed to the teacher are processed by GMS: the latent knowledge comes not from model representation capacity but from GMS. In Semi-Supervised Learning (SSL), some researchers also introduce teacher models. \cite{sohn2020fixmatch, zhou2021context, yang2021hcdg} introduce teacher models for Consistency Regularization. In \cite{araslanov2021self}, a momentum network is introduced to predict pseudo labels for unannotated images, which is in effect a teacher-student model. The teacher models used in SSL are inclined to predict pseudo labels for unannotated samples, which tend to be coarse knowledge. Although our proposed teacher model also uses Consistency Regularization, our target is to transfer fine-grained knowledge rather than the coarse knowledge that the student crowd locator has already learned from annotations.
\section{METHODOLOGY}
\begin{figure*}[t]
\centering
\includegraphics[width=1.0\textwidth]{fig_pipeline.png}
\caption{Schematic illustration of our proposed framework. The pipeline is divided into three branches. The left branch denotes the proposed GMS, which processes the image before it is fed into the crowd locator. The upper stream is the student end, which receives the original images. The lower stream is the teacher end, which receives the GMS-processed images. Finally, consistency regularization is applied.}
\label{fig_pipeline}
\end{figure*}
\textbf{Overview.} This paper aims to tackle the intrinsic scale shift in crowd localization. As shown in Fig. \ref{fig_pipeline}, we propose a Gaussian Mixture Scope (GMS) to interpolate the images and exploit latent knowledge. The interpolated and original images are fed into the proposed Scoped Teacher and the student model, respectively, to make localization predictions; a sub-distribution re-aggregation then recomposes the scoped prediction. Finally, a consistency loss between the predictions of the Scoped Teacher and the student model transfers the knowledge. Section \ref{subsecA} reviews the previous instance segmentation method, which is our baseline. Section \ref{subsecB} presents the scale alignment process, namely the proposed GMS and the sub-distribution re-aggregation trick. Section \ref{subsecC} describes the Scoped Teacher model and knowledge transfer. Section \ref{subsecD} summarizes our training objective.
\subsection{Instance-Segmentation Crowd Locator}\label{subsecA}
The popular density map regression method in crowd counting cannot provide precise spatial information. Therefore, some researchers \cite{abousamra2021localization, arteta2016counting} perform crowd localization by segmenting instance heads. Concretely, they utilize a fixed global threshold to convert the regressed confidence map, activated by a sigmoid function, into a binary map, which is not robust. IIM \cite{gao2020learning} therefore proposes an additional trainable pixel-level threshold map to binarize the confidence map.
Formally, given an image $\mathcal{I}_{ori}\in \mathbb{R}^{3\times H\times W}$, in which the subscript $ori$ denotes the original resolution, a confidence map $\mathcal{F}_{ori}\in \mathbb{R}^{1\times H\times W}$ is predicted, which satisfies Eq. (\ref{0_1}),
\begin{equation}
\mathbf{0}^{1\times H\times W}\le \mathcal{F}_{ori}\le \mathbf{1}^{1\times H\times W}
\label{0_1},
\end{equation}
where $\mathbf{0}$ and $\mathbf{1}$ denote tensors filled with 0 and 1, respectively. Additionally, for fixed-threshold works, the segmented binary map $\mathcal{B}_{ori}^{fix}\in \mathbb{R}^{1\times H\times W}$ is obtained through Eq. (\ref{bin}):
\begin{equation}
\mathcal{B}_{ori}^{fix} \left(h, w\right)=\left\{\begin{array}{lr} 1, & \text { if } \mathcal{F}_{ori}\left(h, w\right) \geq \varepsilon \\
0, & \text { others }
\end{array}\right.
\label{bin},
\end{equation}
where $\varepsilon$ is a fixed threshold and $h, w$ are the pixel coordinates. As for IIM, the binary map $\mathcal{B}_{ori}^{apt}\in \mathbb{R}^{1\times H\times W}$ is obtained through a trainable threshold map $\mathcal{T}\in \mathbb{R}^{1\times H\times W}$ as Eq. (\ref{IIM}):
\begin{equation}
\mathcal{B}_{ori}^{apt}\left(h, w\right)=\left\{\begin{array}{lr}
1, & \text { if } \mathcal{F}_{ori}\left(h, w\right) \geq \mathcal{T}\left(h, w\right) \\
0, & \text { others }
\end{array}\right.
\label{IIM}.
\end{equation}
With the adaptive threshold map $\mathcal{T}$, a robust binary map $\mathcal{B}_{ori}^{apt}$ is derived. The training strategy is formulated as Eq. (\ref{lseg}):
\begin{align}
\mathcal{L}_{\text{seg}}= \frac{1}{H\cdot W} \sum_{h=1}^{H}\sum_{w=1}^{W}\Big(&\left\|\mathcal{F}_{ori}(h,w)-\widehat{\mathcal{B}}(h,w)\right\|^{2} \nonumber \\
+{}&\left\|\mathcal{B}^{apt}_{ori}(h,w)-\widehat{\mathcal{B}}(h,w)\right\|^{1}\Big),
\label{lseg}
\end{align}
where $\widehat{\mathcal{B}}\in \mathbb{R}^{1\times H\times W}$ is the ground-truth binary map of image $\mathcal{I}_{ori}$. In this way, a precise binary map is derived. We therefore adopt IIM as our baseline. For clarity, we omit the superscript $apt$ in $\mathcal{B}_{ori}^{apt}$ in the following.
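A minimal PyTorch-style sketch of this pixel-level binarization is given below; the function name and toy tensors are illustrative only, and, since the comparison is non-differentiable, the threshold map must be trained through a separate learner (see Section \ref{subsecD}).
\begin{verbatim}
import torch

def binarize(confidence: torch.Tensor,
             threshold: torch.Tensor) -> torch.Tensor:
    # Pixel-wise binarization: B(h, w) = 1 iff F(h, w) >= T(h, w).
    # The comparison is non-differentiable, so the binary map carries
    # no gradient; the threshold map is trained by a separate learner.
    return (confidence >= threshold).float()

# Toy example: a 1 x 1 x 4 x 4 confidence map and threshold map.
conf = torch.rand(1, 1, 4, 4)        # F_ori, in [0, 1] after sigmoid
thr = torch.full((1, 1, 4, 4), 0.5)  # T, constant here for illustration
binary = binarize(conf, thr)         # B_ori
\end{verbatim}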
\subsection{Gaussian Mixture Scope}\label{subsecB}
In instance segmentation crowd localization, the locator derives its supervision signal from a binary map with implicit scale information, in which head areas are annotated as foreground; hence the locator is fragile and sensitive to instance scale shift. Moreover, its performance is limited because it is hard to catch large and tiny instances simultaneously. We attribute the crux of the issue to the chaotically distributed scales. Since deep models are trained via Empirical Risk Minimization (ERM), it is arduous for a crowd locator to converge on data that do not satisfy the independent and identically distributed assumption. Summarizing the above analysis, regularizing the chaotic scale distribution is the key to tackling intrinsic scale shift.
To this end, we propose to decouple the chaotic scale distribution into several regular sub-distributions, so that the chaos within the scale distribution is transferred into the distribution shift among sub-distributions. Additionally, we constrain a spatial feature to be correlated with the scale distribution during decoupling. In this way, the sub-distributions are spatially compact, and their alignment can be implemented via image interpolation.
Specifically and formally, given an image $\mathcal{I}_{ori}$ with $N$ instances, in which the subscript $ori$ denotes the original resolution, the corresponding scale distribution $\mathcal{S}_{ori}$ is given by Eq. (\ref{scaledit}):
\begin{equation}
\mathcal{S}_{ori}(\alpha)= \sum_{i=1}^{N}\delta (\alpha-\alpha _i)
\label{scaledit},
\end{equation}
where $\alpha_i$ is the scale of the $i^{th}$ person and $\delta$ denotes the one-dimensional Dirac delta function. We utilize a Gaussian mixture distribution to adapt to $\mathcal{S}_{ori}$ as Eq. (\ref{gmm}):
\begin{equation}
\mathcal{S}_{ori}\sim p(\alpha ,v\mid\theta )=\sum_{c=1}^{C}\pi_c\, \mathcal{N}(\alpha ,v\mid\theta_c ) ,
\label{gmm}
\end{equation}
where the mixture distribution is composed of $C$ Gaussian sub-distributions $\mathcal{N}(\cdot)$ with parameters $\theta_c$ and mixing weights $\pi_c$, and $v$ is the vertical spatial feature. The mixture model is initialized and updated via Expectation-Maximization (EM).
As aforementioned, to facilitate alignment, the mixture model should be correlated with spatial features in decoupling. We adopt only the vertical spatial feature $v$, reducing the computational complexity from $\mathcal{O}(n^2)$ to $\mathcal{O}(n)$; as for the horizontal feature, we demonstrate its redundancy in scale representation in Section \ref{subsubsec ver}. In practice, fine-grained scale information is unavailable in all existing datasets, so the annotated box area $\alpha$ is adopted as the observation.
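A minimal sketch of this adaptation step, assuming scikit-learn's \texttt{GaussianMixture} as the EM solver and synthetic $(\alpha, v)$ observations, is given below; the data and the choice of $C=3$ components are illustrative only.
\begin{verbatim}
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical annotations: one (box area, vertical centre) pair per head.
rng = np.random.default_rng(0)
v = rng.uniform(0, 1080, size=500)              # vertical pixel coordinate
alpha = 10 + 0.3 * v + rng.normal(0, 20, 500)   # area grows down the image
obs = np.stack([alpha, v], axis=1)              # observations of p(alpha, v)

# Fit the Gaussian mixture with EM; the component count C is a
# hyper-parameter of the sketch.
C = 3
gmm = GaussianMixture(n_components=C, covariance_type="full",
                      random_state=0).fit(obs)

# Hard assignment decouples the chaotic distribution into C
# sub-distributions that are compact along the vertical axis.
labels = gmm.predict(obs)
for c in range(C):
    band, scales = v[labels == c], alpha[labels == c]
    print(f"sub-dist {c}: rows {band.min():.0f}-{band.max():.0f}, "
          f"mean scale {scales.mean():.1f}")
\end{verbatim}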
Through decoupling the mixture distribution, the chaotic scale distribution is decomposed into $C$ normal distributions, on which the model can feasibly converge. What's more, by constraining the vertical feature in adaptation and decoupling, the sub-distributions are spatially compact. Hence, $C$ patches are derived, each with a scale distribution corresponding to one normal sub-distribution in Eq. (\ref{gmm}). The issue of intrinsic scale shift thus becomes aligning the scale shift among sub-distributions. We therefore introduce prior knowledge in the form of an optimal scale $\alpha_0$, set as the landmark for alignment. For each patch $p_c$, the aligned one $\widehat{p}_c$ is derived via Eq. (\ref{HHH}) with an interpolation process:
\begin{equation}
\widehat{p}_c =\mathit{Inter}\left(p_c,\ \frac{\sum_{i=1}^{N_c} \alpha_i }{\alpha_0\cdot N_c} \right),
\label{HHH}
\end{equation}
where $N_c$ is the number of instances in $p_c$. Note that, to reduce computational cost, we compromise by using the average scale of each patch; since decoupling makes the scales within a sub-distribution, namely a patch, compact, the average scale is adequate to represent the patch.
Finally, the scale shift is aligned, both inter-patch and intra-patch. However, the sub-distributions remain discrete. There are two natural ways to process them: directly splice them, padding the smaller ones, or keep them discrete. Nevertheless, because the patches are interpolated with distinct scale factors during sub-distribution alignment, the junction regions are unavoidably distorted semantically. Moreover, since the decoupling is adaptive, the decision boundary is uncertain, so some to-be-detected instances could be cut off and distorted. To alleviate this issue, we propose a sub-distribution re-aggregation trick.
\begin{figure}[H]
\centering
\includegraphics[width=0.5\textwidth]{fig_aggregation.png}
\caption{Depiction of the pipeline of sub-distribution re-aggregation. The transparent windows are the geometrized sub-distributions. }
\label{fig_aggregation}
\end{figure}
\textbf{Re-Aggregation for Sub-Distributions.} In the two aforementioned processes for discrete sub-distributions, the uncertainty of the decoupling decision boundary risks cutting instances off. Given an image or patch with instances cut off, the locator cannot capture their semantics or detect them. Thus, we argue that the locator must be fed whole images. As shown in Fig. \ref{fig_aggregation}, let $\mathcal{I}_{scp}$ be the re-aggregated image. To meet this requirement, Eq. (\ref{eqrelation}) must hold:
\begin{equation}
\exists \gamma \in \mathbb{R} ,\mathcal{I}_{scp}\equiv Inter(\mathcal{I}_{ori},\gamma ).
\label{eqrelation}
\end{equation}
Therefore, given $C$ patches with corresponding interpolation factors $\left \{\gamma _1,\gamma _2,\cdots ,\gamma _C \right \}$ from Eq. (\ref{HHH}), we interpolate $\mathcal{I}_{ori}$ with each $\gamma_i$, $C$ times in total, and derive Eq. (\ref{reagg}):
\begin{equation}
\left \{ \mathcal{I}_{scp}^i=Inter(\mathcal{I}_{ori}, \gamma_i)\mid i=1,2,\cdots ,C \right \} .
\label{reagg}
\end{equation}
Then, $\left \{\mathcal{I}_{scp}^i\mid i=1, 2, \cdots, C\right \}$ are fed into the locator, which can thus capture correct semantic information. To obtain the final prediction $\mathcal{B}^{re}_{scp}$ with a spatial semantic mapping to the original image $\mathcal{I}_{ori}$, the predictions $\mathcal{B}^{i}_{scp}$ from $\mathcal{I}_{scp}^i$ are re-interpolated with $\left \{\frac{1}{ \gamma _1},\frac{1}{\gamma _2} ,\cdots ,\frac{1}{\gamma _C} \right \}$, and the ROI is shot from each according to the corresponding sub-distribution $p_c$. The final prediction is composed of the shot ROIs.
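The sketch below illustrates this re-aggregation, assuming a callable \texttt{locator} and pre-computed per-sub-distribution interpolation factors and row bands; it is a simplified sketch rather than our exact implementation.
\begin{verbatim}
import torch
import torch.nn.functional as F

def re_aggregate(image, factors, row_bands, locator):
    # image:     (1, 3, H, W) tensor, the whole original image
    # factors:   interpolation factor gamma_c per sub-distribution
    # row_bands: (top, bottom) row range of each sub-distribution
    # locator:   callable returning a (1, 1, h, w) binary map
    _, _, H, W = image.shape
    result = torch.zeros(1, 1, H, W)
    for gamma, (top, bottom) in zip(factors, row_bands):
        # Keep the image whole so no head is cut by a decision boundary.
        scaled = F.interpolate(image, scale_factor=gamma,
                               mode="bilinear", align_corners=False)
        pred = locator(scaled)
        # Map the prediction back to the original resolution, then shoot
        # only the window belonging to this sub-distribution.
        pred = F.interpolate(pred, size=(H, W), mode="nearest")
        result[:, :, top:bottom, :] = pred[:, :, top:bottom, :]
    return result
\end{verbatim}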
\subsection{Scoped Teacher}\label{subsecC}
In Section \ref{subsecB}, the Gaussian Mixture Scope (GMS) is proposed to regularize the chaotic scale distribution. Given a set of images $\mathcal{I}$ with a chaotic scale distribution, the crowd locator represents $\mathcal{I}$ in a latent space, in which only instances of certain scales are caught and some background is mis-embedded. GMS aligns the scales, facilitating feature embeddings of variant scale representations being mapped to the same latent space. We denote the knowledge learned from ERM on the ground truth as $\mathcal{K}_{gt}$. GMS processes $\mathcal{I}$ such that the outliers not covered by $\mathcal{K}_{gt}$ are represented in the same latent space; these constitute $\mathcal{K}_{GMS}$. In the training phase, GMS exploits the latent $\mathcal{K}_{GMS}$ to help the crowd locator catch the outliers better. However, although the implementation of GMS provides additional $\mathcal{K}_{GMS}$ to the crowd locator, this is not active learning, but passive reception of the relationship between $\mathcal{K}_{GMS}$ and the locator. As a result, the training phase provides annotations for GMS to exploit $\mathcal{K}_{GMS}$ and aid the locator, whereas at testing time the annotations are agnostic, there is no $\mathcal{K}_{GMS}$, and the locator performs poorly with representation capacity for $\mathcal{K}_{gt}$ only. What's more, GMS has no gradient to compute in the backpropagation phase. Thus, there should be a better way to deploy GMS and make the locator actively learn $\mathcal{K}_{GMS}$.
To transfer the exploited $\mathcal{K}_{GMS}$, we propose the Scoped Teacher, a teacher-student framework. Specifically, given $\mathcal{I}_{ori}$, the student locator predicts a confidence map $\mathcal{F}_{ori}$ and a binary map $\mathcal{B}_{ori}$. At the teacher end, GMS is adopted to regularize $\mathcal{I}_{ori}$, and the processed $\mathcal{I}_{scp}$ is fed into the teacher locator to aggregate $\mathcal{K}_{GMS}$ and $\mathcal{K}_{gt}$; $\mathcal{B}_{scp}^{re}$ is then obtained by sub-distribution re-aggregation. Finally, a consistency loss is introduced as Eq. (\ref{consis}) to transfer $\mathcal{K}_{GMS}$ from the Scoped Teacher to the student locator.
\begin{align}
\mathcal{L}_{\text{consis}}= \frac{1}{H\cdot W} \sum_{h=1}^{H}\sum_{w=1}^{W}\Big(&\left\|\mathcal{F}_{ori}(h,w)-\mathcal{B}^{re}_{scp}(h,w)\right\|^{2} \nonumber \\
+{}&\left\|\mathcal{B}_{ori}(h,w)-\mathcal{B}^{re}_{scp}(h,w)\right\|^{1}\Big).
\label{consis}
\end{align}
In consistency regularization, the Scoped Teacher adopts GMS to exploit $\mathcal{K}_{GMS}$ and store it in the representation $\mathcal{B}_{scp}^{re}$. Through Eq. (\ref{consis}), the consistency constraint pulls $\mathcal{F}_{ori}$ and $\mathcal{B}_{ori}$ closer to $\mathcal{B}_{scp}^{re}$. In this way, backpropagating the consistency loss drives the transfer of $\mathcal{K}_{GMS}$ from the Scoped Teacher to the student model.
Compared with ground truth supervision, the strength of the Scoped Teacher goes beyond transferring the GMS-exploited $\mathcal{K}_{GMS}$. In our setting, the Scoped Teacher shares its architecture with the student locator. Empirically, a larger teacher model with stronger representation capacity could guide the training of a weaker student; however, sharing the same architecture makes the outputs of the student and teacher ends more consistent, which makes the teacher's guidance well suited to consistency regularization.
Finally, in parameter updating, the student crowd locator is trained via gradient descent. To aggregate knowledge and stabilize knowledge transfer, the teacher parameters $\theta_t$ are updated via an Exponential Moving Average (EMA) of the student parameters $\theta_s$, as in Eq. (\ref{EMA}):
\begin{equation}
\theta _t\gets m\theta _t + (1-m)\theta _s,
\label{EMA}
\end{equation}where $m$ denotes the EMA decay coefficient to control the updating rate.
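A condensed sketch of one training iteration, covering the losses of Eqs. (\ref{lseg}) and (\ref{consis}) and the EMA update of Eq. (\ref{EMA}), is shown below; the helpers \texttt{gms\_regularize}, \texttt{seg\_loss} and \texttt{consis\_loss} are hypothetical stand-ins for the components described above.
\begin{verbatim}
import torch

@torch.no_grad()
def ema_update(teacher, student, m=0.999):
    # theta_t <- m * theta_t + (1 - m) * theta_s
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(m).add_(p_s, alpha=1.0 - m)

def train_step(student, teacher, image, gt_binary, optimizer,
               gms_regularize, seg_loss, consis_loss):
    # Student sees the original image; teacher sees the GMS-scoped one.
    conf_ori, bin_ori = student(image)
    with torch.no_grad():
        bin_scp = teacher(gms_regularize(image))  # incl. re-aggregation
    loss = (seg_loss(conf_ori, bin_ori, gt_binary)
            + consis_loss(conf_ori, bin_ori, bin_scp))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)  # teacher slowly follows the student
    return loss.item()
\end{verbatim}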
At the testing phase, the original image $\mathcal{I}_{ori}$ is fed into the student model; thus, our proposed method incurs no extra cost at inference.
\subsection{Objective}\label{subsecD}
\textbf{Instance Segmentation Loss.} The instance segmentation loss consists of an L2 loss for confidence map regularization and an L1 loss for binary map regularization, as in Eq. (\ref{lseg}).
\textbf{Consistency Regularization Loss.} Since the gradient is detached in the threshold learner, an L1 loss between the teacher and student binary maps is used to optimize the threshold learning, as in Eq. (\ref{consis}).
\textbf{Total Loss.} During training, the student model is jointly trained in an end-to-end manner. The whole parameters are updated by integrating all mentioned loss functions:
\begin{equation}
\mathcal{L}_{total}=\mathcal{L}_{seg} + \mathcal{L}_{consis}.
\end{equation}
The teacher model is optimized as Eq. (\ref{EMA}).
\section{Experiment}\label{sec_exp}
\subsection{Datasets}
\begin{itemize}
\item[1] \textbf{Shanghai Tech Part A (SHHA):} SHHA \cite{zhang2016single} contains 482 images, of which 270 are available for training, 30 for validation and the rest for test. There are 241,677 annotated instances in total.
\item[2] \textbf{Shanghai Tech Part B (SHHB):} SHHB \cite{zhang2016single} consists of 716 images, of which 360 are prepared for training, 40 for validation and the rest for test. SHHB has 88,488 annotated instances.
\item[3] \textbf{NWPU-Crowd (NWPU):} NWPU \cite{wang2020nwpu} is the largest dataset in the crowd analysis community so far, containing 5,109 images, of which 3,109 are for training, 500 for validation and the rest for test.
\item[4] \textbf{UCF-QNRF (QNRF):} QNRF \cite{idrees2018composition} is a dataset of extremely congested scenarios, composed of 1,535 images, of which 961 are for training, 240 for validation and the rest for test.
\item[5] \textbf{JHU-Crowd++ (JHU):} JHU \cite{sindagi2019pushing} is also a dataset with comprehensive scenes, with 2,772 images for training, 500 for validation and 1,600 for testing.
\end{itemize}
\subsection{Implementation Details and Metrics}
In the training phase, we adopt backbone networks of VGG-16 \cite{simonyan2014very} and HR-Net \cite{wang2020deep}, a batch size of 6, an Adam optimizer with learning rates of 1e-5 for the backbone and 1e-6 for the threshold encoder, and a learning rate decay of 0.99 per epoch. In the testing phase, images are fed into the student locator at their original scale, and the model with the best performance on the validation set is picked for testing. Our experiments run on two NVIDIA RTX 3090 GPUs with 48 GB of total memory.
Following \cite{gao2020learning, wang2020nwpu}, the Precision (Pre.), Recall (Rec.) and F1-measure (F1-m) are adopted as localization metrics, as in Eq. (\ref{F1}),
\begin{footnotesize}
\begin{align}
Pre. = \frac{TP}{TP+FP},\quad Rec. = \frac{TP}{TP+FN},\quad
F1\text{-}m = \frac{2\cdot Pre.\cdot Rec.}{Pre.+Rec.}.
\label{F1}
\end{align}
\end{footnotesize}
where F1-m is the primary metric and $TP$, $TN$, $FP$, $FN$ denote True Positive, True Negative, False Positive and False Negative, respectively. The MAE, MSE and NAE are adopted as counting metrics, as in Eq. (\ref{MAE}),
\begin{align}
MAE&=\frac{1}{N} \sum_{i=1}^{N}\left \| z_i-\widehat{z}_i \right \|^1,\\
MSE&=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left \| z_i-\widehat{z}_i \right \|^2},\\
NAE&=\frac{1}{N} \sum_{i=1}^{N}\frac{\left \| z_i-\widehat{z}_i \right \|^1 }{z_i}.
\label{MAE}
\end{align}
where $z_i$ and $\widehat{z}_i$ denote the ground-truth and predicted counts of the $i^{th}$ of $N$ test images.
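For reference, these metrics can be computed with the short sketch below; the toy inputs are arbitrary.
\begin{verbatim}
import numpy as np

def counting_metrics(pred_counts, gt_counts):
    # MAE, MSE and NAE over N test images, as defined above.
    z_hat = np.asarray(pred_counts, dtype=float)
    z = np.asarray(gt_counts, dtype=float)
    mae = np.mean(np.abs(z - z_hat))
    mse = np.sqrt(np.mean((z - z_hat) ** 2))
    nae = np.mean(np.abs(z - z_hat) / z)
    return mae, mse, nae

def f1_measure(tp, fp, fn):
    # Precision, Recall and F1-m from matched instance counts.
    pre = tp / (tp + fp)
    rec = tp / (tp + fn)
    return pre, rec, 2 * pre * rec / (pre + rec)

print(counting_metrics([105, 48], [100, 50]))  # (3.5, ~3.81, 0.045)
print(f1_measure(tp=80, fp=20, fn=25))
\end{verbatim}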
\subsection{Analysis on Our Method}
\subsubsection{Ablation study}\label{ablation}
In this section, our method is decomposed into its components to examine each contribution. First, we implement the proposed GMS only in the inference phase to explore whether it effectively aligns the intrinsic scale shift. Then, the same teacher model without GMS, called Plain Teacher in Tab. \ref{tab AA}, is introduced in the training phase, with the same consistency regularization loss. Finally, our whole system is deployed.
\begin{table}[h]
\centering
\caption{Ablation study tested on SHHA-val.}
\setlength{\tabcolsep}{5mm}
\renewcommand\arraystretch{1.5}
\begin{tabular}{c|c|c}
\whline
\multirow{2}{*}{Method} & Localization & Counting \\
\cline{2-3}
& \textbf{F1-m}/Pre./Rec. (\%) & MAE/MSE \\
\whline
Baseline & 67.0/71.0/63.4 & 119.5/242.1 \\
\hline
GMS Inference & 69.2/74.1/64.8 & 85.1/164.8 \\
\hline
Plain Teacher & 69.1/\textbf{75.9}/63.3 & 119.5/260.4 \\
\hline
Whole Method & \textbf{71.4}/73.6/\textbf{69.3} & \textbf{81.7}/\textbf{147.1} \\
\whline
\end{tabular}
\label{tab AA}
\end{table}
In Tab. \ref{tab AA}, the baseline model leverages a pixel-level threshold map to binarize the confidence map. GMS Inference implements GMS directly in the inference phase; the experiments show that GMS improves localization performance by 2.2$\%$ in F1-m and counting performance by 34.4 in MAE, so GMS is indeed effective at aligning the intrinsic scale shift. The Plain Teacher model also brings some improvement, which can be attributed to knowledge aggregation. With the whole system deployed, our proposed method yields large improvements in both localization and counting.
\subsubsection{Effect on knowledge transfer} The proposed GMS is an off-line regularization strategy. In Section \ref{ablation}, we demonstrate that directly implementing GMS at inference improves performance. However, implementing GMS requires the ground truth of the samples and incurs additional computational overhead. Thus, we propose the Scoped Teacher to transfer the GMS-exploited knowledge. To verify whether this knowledge is transferred to the student locator, we report the experiments in Tab. \ref{KT}.
\begin{table}[h]
\centering
\caption{Demonstration on Knowledge Transform.}
\begin{tabular}{c|c|c}
\whline
\multirow{2}{*}{Method} & Localization & Counting \\
\cline{2-3}
& \textbf{F1-m}~/ Pre. / Rec. & \textbf{MAE }/ MSE~ \\
\whline
Base & 67.0 / 71.0 / 63.4 & 119.5 / 242.1 \\
Base + GMS & 69.2 / 74.1 / 64.8 & 85.1 / 164.8 \\
Improvement & +2.2 / +3.1 / +1.4 & +34.4 / +77.3 \\
\hline
Scoped & 71.3 / 74.3 / 68.6 & 86.4 / 163.8 \\
Scoped + GMS & 71.6 / 74.9 / 68.4 & 79.5 / 152.5 \\
Improvement & +0.3 / +0.6 / -0.2 & +6.9 / +11.3 \\
\whline
\end{tabular}
\label{KT}
\end{table}
In Tab. \ref{KT}, \textit{Base} denotes the baseline crowd locator, \textit{GMS} denotes aligning the testing data online with GMS, and \textit{Scoped} means the model is trained with our full Scoped Teacher framework. Specifically, we implement GMS at testing time on locators trained with and without knowledge transfer. The results show that GMS regularization clearly improves the localization and counting performance of the baseline model. However, GMS has only a slight influence when implemented on the locator trained with the Scoped Teacher. This demonstrates that the Scoped-Teacher-trained locator has indeed learned the GMS-exploited knowledge, so additionally deploying GMS extracts little further knowledge.
\subsubsection{Choice of prior optimal scale} In the GMS implementation, an optimal scale is introduced to compute an optimal interpolation factor for each sub-distribution. Intuitively, the optimal scale should be as large as possible. However, a very large scale produces a high resolution that is computationally expensive in the convolution process, and image interpolation with a huge factor incurs serious non-semantic distortion. Therefore, several candidate scales are evaluated to determine the final optimal scale.
\begin{table}
\centering
\caption{Choice on different optimal scales.}
\renewcommand\arraystretch{1.5}
\setlength{\tabcolsep}{5mm}
\begin{tabular}{c|c|c}
\whline
\multirow{2}{*}{Optimal Scale} & Localization & Counting \\
\cline{2-3}
& \textbf{F1-m} /Pre./ Rec.(\%) & MAE /MSE \\
\whline
100 & 67.9/\textbf{75.2}/61.9 & 111.9/227.6 \\
\hline
250 & \textbf{69.2}/74.1/64.8 & 85.1/164.8 \\
\hline
500 & 68.9/72.7/65.6 & \textbf{76.9}/156.4 \\
\hline
1,000 & 67.6/68.9/66.3 & 80.3/\textbf{148.3} \\
\hline
5,000 & 62.1/57.4/\textbf{67.7} & 163.3/211.8 \\
\whline
\end{tabular}
\label{optimal}
\end{table}
As Tab. \ref{optimal} shows, performance does not vary monotonically with the optimal scale; a moderate scale is best. For smaller scales, tiny instances are under-zoomed, and the locator pays more attention to easy scales while ignoring tiny instances, which is reflected in high Precision, low Recall and poor counting performance. For huge scales, we argue that interpolation yields extreme distortion, which manifests as over-estimation: Precision is low but Recall is high. Finally, 250 and 500 are comparable, and we choose an optimal scale of 250 because a larger scale incurs higher computational complexity.
\subsubsection{Comparison on three sub-distribution processing strategies} In this section, we compare three processing strategies during the inference phase to demonstrate the effect of the proposed Re-Aggregation. First, the image is divided into patches as decoupled by GMS; the patches are fed into the crowd locator successively, reported as Patch Divide in Tab. \ref{patches}. Second, based on Patch Divide, the patches are spliced into a hierarchically arranged image, reported as Patch Whole. Finally, our proposed Re-Aggregation is shown.
\begin{table}[h]
\centering
\caption{Comparison among three kinds of sub-distribution processes.}
\renewcommand\arraystretch{1.5}
\setlength{\tabcolsep}{5mm}
\begin{tabular}{c|c|c}
\whline
\multirow{2}{*}{Method} & Localization & Counting \\
\cline{2-3}
& \textbf{F1-m}/Pre./Rec. (\%) & MAE/MSE \\
\whline
Baseline & 67.0/71.0/63.4 & 119.5/242.1 \\
\hline
Patch Divide & 63.5/64.4/62.7 & 87.0/130.4 \\
\hline
Patch Whole & 68.2/68.8/\textbf{67.7} & \textbf{76.3}/\textbf{118.27} \\
\hline
Re-Aggregation & \textbf{69.2}/\textbf{74.1}/64.8 & 85.1/164.8 \\
\whline
\end{tabular}
\label{patches}
\end{table}
According to the results, our Re-Aggregation performs best on F1-m but falls behind on counting performance. We therefore analyze the binary maps produced by each method and notice that the semantic information of instances in the marginal regions is distorted: heads lying on a boundary line are divided into two parts, so an additional prediction is generated. The resulting counts are higher and happen to lie closer to the ground-truth counts. Moreover, under imbalanced dividing, the patch containing the larger part of a head cannot represent its true position, so the corresponding prediction is recognized as a False Positive. Hence, the Precision of Patch Divide and Patch Whole is even lower than that of the Baseline model. In summary, our Re-Aggregation indeed alleviates semantic distortion in the marginal regions.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{pearson.pdf}
\caption{Pearson correlation coefficients between instance scale and the two spatial directions on four datasets.}
\label{pearson}
\end{figure}
\subsubsection{Why do only vertical features work}\label{subsubsec ver}
In crowd scenes, the scale distribution is inclined to be correlated with the spatial distribution. This is caused by the imaging process: instances adjacent in physical space have similar scales in the image. In our setting, the adaptation of the scale distribution further introduces spatial features to facilitate scale alignment via image interpolation. However, introducing spatial features from two directions, namely vertical and horizontal, incurs a computational complexity of $\mathcal{O}(n_v\cdot n_h)$, where $n_v$ and $n_h$ denote the numbers of sub-distributions in the vertical and horizontal directions. To save training cost, we analyze how vital each direction is for scale representation. To this end, we introduce the Pearson correlation coefficient to measure how correlated the scale is with each of the two spatial features.
In Fig. \ref{pearson}, the correlation coefficients between scale and the vertical and horizontal features show that the horizontal feature is almost independent of scale. Since we aim to use the spatial feature to represent scale in the adaptation, the horizontal feature contributes little to our objective.
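The measurement reduces to two Pearson coefficients per dataset; a sketch with synthetic annotations, assuming \texttt{scipy.stats.pearsonr}, is shown below.
\begin{verbatim}
import numpy as np
from scipy.stats import pearsonr

# Hypothetical annotations: head centres (x, y) and box areas alpha.
rng = np.random.default_rng(1)
y = rng.uniform(0, 1080, 1000)                   # vertical coordinate
x = rng.uniform(0, 1920, 1000)                   # horizontal coordinate
alpha = 10 + 0.3 * y + rng.normal(0, 15, 1000)   # scale follows perspective

r_v, _ = pearsonr(alpha, y)  # strong: scale tracks the vertical axis
r_h, _ = pearsonr(alpha, x)  # near zero: horizontal axis is redundant
print(f"vertical r = {r_v:.2f}, horizontal r = {r_h:.2f}")
\end{verbatim}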
\begin{table*}[b]
\centering
\caption{The leaderboard of NWPU-Crowd Localization (test set).}
\renewcommand\arraystretch{1.5}
\setlength{\tabcolsep}{5mm}
\begin{tabular}{c|c|c|c|c|c}
\whline
\multirow{2}{*}{Method} & \multirow{2}{*}{Backbone} & \multicolumn{2}{c|}{Overall Performance} & \multicolumn{2}{c}{Scale Level} \\
\cline{3-6}
& & \textbf{F1-m}/Pre/Rec(\%) & MAE/MSE/NAE & Avg. & A0\textasciitilde{}A5 \\
\whline
Tiny Faces & ResNet-101 & 56.7/52.9/61.1 & 272.4/764.9/0.750 & 59.8 & 4.2/22.6/59.1/\textbf{90.0}/\textbf{93.1}/\uline{89.6} \\
\hline
RAZ\_Loc & VGG-16 & 59.8/66.6/54.3 & 151.5/634.7/0.305 & 42.4 & 5.1/28.2/52.0/79.7/64.3/25.1 \\
\hline
VGG+GPR & VGG-16 & 52.5/55.8/49.6 & 127.3/439.9/0.410 & 37.4 & 3.1/27.2/49.1/68.7/49.8/26.3 \\
\hline
IIM & VGG-16 & 73.2/77.9/69.2 & 96.1/414.4/0.235 & 58.7 & 10.1/44.1/70.7/82.4/83.0/61.4 \\
\hline
TopoCount & VGG-16 & 69.2/68.3/70.1 & 107.8/438.5/- & \uline{63.3} & 5.7/39.1/72.2/85.7/87.3/\textbf{89.7} \\
\hline
AutoScale & VGG-16 & 62.0/67.4/57.4 & 123.9/515.5/0.304 & 48.4 & 4.1/29.7/57.2/76.1/78.7/44.6 \\
\hline
Ours & VGG-16 & 74.3/80.8/68.7 & 102.9/446.8/0.245 & 60.3 & 10.7/42.6/69.8/83.3/86.2/69.0 \\
\whline
Crowd-SDNet & ResNet-50 & 63.7/65.1/62.4 & -/-/- & 55.1 & 7.3/43.7/62.4/75.7/71.2/70.2 \\
\hline
FIDTM & HR-Net & 75.5/79.8/71.7 & 86.0/\textbf{312.5}/0.277 & 47.5 & \textbf{22.8}/\textbf{66.8}/\uline{76.0}/72.0/37.4/10.3 \\
\hline
IIM & HR-Net & 76.2/\uline{81.3}/71.7 & 87.1/406.2/\textbf{0.152} & 61.3 & 12.0/46.0/73.2/85.5/86.7/64.3 \\
\hline
DCST & Swin-ViT & \uline{77.5}/\textbf{82.2}/\uline{73.4} & \textbf{84.2}/374.6/\uline{0.153} & 60.9 & 14.5/51.0/75.3/85.0/81.7/57.8 \\
\hline
Ours & HR-Net & \textbf{78.1}/79.8/\textbf{76.5} & \uline{84.7}/\uline{361.5}/0.232 & \textbf{66.7} & \uline{17.1}/\uline{54.1}/\textbf{78.0}/\uline{88.0}/\uline{90.6}/72.3 \\
\whline
\end{tabular}
\label{TabNWPPUU}
\end{table*}
\subsection{Comparison with State-of-the-Art Methods}
In this section, the five chosen datasets are grouped into three parts. NWPU and JHU are comprehensive datasets including both congested and sparse scenarios; QNRF and SHHA are congested datasets, while SHHB is a sparse dataset.
\subsubsection{Comparison with SOTA methods on comprehensive datasets}
In this section, we compare our proposed method with SOTA methods on NWPU-Crowd and JHU-Crowd++.
Tab. \ref{TabNWPPUU} arrays the comprehensive localization and counting results on NWPU. In Tab. \ref{TabNWPPUU}, the chosen methods are grouped by backbone network for a fair comparison. The Scale Level norm is the Recall value, where A0\textasciitilde{}A5 denote instance scales in [$10^0, 10^1$], ($10^1, 10^2$], ($10^2, 10^3$], ($10^3, 10^4$], ($10^4, 10^5$] and ($10^5$, +$\infty$). Bold text denotes first place and underlined text denotes second place. The compared methods are TinyFaces\cite{bai2018finding}, RAZ$\_$Loc\cite{liu2019recurrent}, VGG+GPR\cite{gao2019domain, gao2019c}, IIM\cite{gao2020learning}, TopoCount\cite{abousamra2021localization}, AutoScale\cite{xu2022autoscale}, Crowd-SDNet\cite{wang2021self} and DCST\cite{gao2021congested}. Additionally, TinyFaces and Crowd-SDNet utilize \cite{he2016deep} as the backbone network. Comparing the primary metrics (F1-m and MAE), our proposed method achieves the \textbf{first place} in localization performance (an F1-m of 78.1$\%$). Furthermore, the Recall values at different scales are also arrayed; Tab. \ref{TabNWPPUU} shows that our work takes first or second place at almost all scales.
What's more, we qualitatively depict localization results. Following \cite{wang2020nwpu}, we pick three representative methods to compare with ours: TinyFaces \cite{bai2018finding} is an object detection crowd locator, FIDTM \cite{liang2021focal} is a density regression crowd locator, and IIM \cite{gao2020learning} is an instance segmentation crowd locator. Fig. \ref{figsum} illustrates four groups of typical samples, in which the $3114^{th}$ is a low-resolution scene, the $3277^{th}$ a sparse scene, the $3348^{th}$ a negative scene and the $3375^{th}$ an extremely congested scene. First, for the low-resolution scene, namely the region at the top of the $3114^{th}$ sample, our Scoped Teacher performs better than the others due to the zooming strategy for tiny scales. Second, sparse scenes like the $3277^{th}$ empirically tend to suffer more serious scale shift; thus, we surpass the others by a nontrivial margin. Third, our scale alignment does not break robustness in negative scenes, \textit{i.e.}, the $3348^{th}$. At last, in extremely congested scenes, the density regression based FIDTM performs best; for instance segmentation based locators, congested scenarios incur tremendous overlap, so the performance is relatively poor.
\begin{table*}[b]
\centering
\renewcommand\arraystretch{1.5}
\caption{Comparison with SOTA methods on JHU.}
\begin{tabular}{c|c|c|c|c|c}
\whline
\multirow{2}{*}{Method} & \multirow{2}{*}{Backbone} & \multicolumn{2}{c|}{Overall Performance} & \multicolumn{2}{c}{Scale Level} \\
\cline{3-6}
& & \textbf{F1-m }/ Pre. / Rec. (\%) & \textbf{MAE~}~/ MSE / NAE & Avg. & A0-A5 \\
\whline
IIM & VGG-16 & 68.8 / 74.0 / 67.6 & 101.8 / 392.3 / 0.357 & 51.0 & 28.1 / 21.4 / 62.4 / 79.9 / 72.0 / \textbf{42.1} \\
\hline
Ours & VGG-16 & 71.6 / 74.9 / 71.6 & 78.6 / 336.8 / 0.355 & 54.8 & 34.7 / 28.0 / 68.5 / 82.6 / 75.2 / 39.7 \\
\whline
IIM & HR-Net & 72.7 / 76.4 / 72.1 & 89.4 / 367.5 / 0.357 & 53.3 & 30.1 / 22.9 / 66.0 / \textbf{85.1} / 78.5 / 37.4 \\
\hline
Ours & HR-Net & \textbf{75.1 }/ \textbf{78.7 }/\textbf{ 74.1} & \textbf{70.2} / \textbf{316.8} / \textbf{0.240} & \textbf{55.8} & \textbf{37.4~}/ \textbf{30.5} / \textbf{70.9} / 84.8 / \textbf{78.9} / 32.6 \\
\whline
\end{tabular}
\label{JHU}
\end{table*}
In Tab. \ref{JHU}, we report the localization and counting performance on JHU-Crowd++, along with the Recall values at distinct scales. Due to the lack of results from previous works, we only compare our work with IIM. As Tab. \ref{JHU} shows, the comparison between the VGG versions demonstrates that we surpass IIM comprehensively on F1-m, Precision and Recall. Significantly, we reduce the MAE from 101.8 to 78.6, a 23$\%$ improvement. For the HRNet version, we achieve first place in both localization and counting; moreover, the MAE improves by 21$\%$ compared with HRNet-IIM.
\begin{figure*}[t]
\centering
\includegraphics[width=1.0\textwidth]{fig_big.jpg}
\caption{
Qualitative results on the NWPU-Crowd validation set. The predicted TP, FN and FP are respectively denoted as green, red and magenta. }
\label{figsum}
\end{figure*}
\subsubsection{Comparison with SOTA methods on Congested Datasets}
In this section, we compare our proposed method with five other state-of-the-art crowd locators on two congested datasets (QNRF and SHHA): TinyFaces, RAZ Loc, LSC-CNN, IIM and DCST. Specifically, TinyFaces is trained with the official project using default parameters; RAZ Loc is adopted from \cite{wang2020nwpu}; LSC-CNN and IIM also come from their official implementations; and the performance of DCST is taken from its arXiv preprint. The performance (localization: F1-m, Precision, Recall; counting: MAE and MSE) is arrayed in Tab. \ref{qnrf}.
On SHHA, our proposed method achieves first place on F1-m and second place on MAE. Compared with the instance-segmentation crowd locators IIM and DCST, we outperform them (76.0$\%$ vs. 73.9$\%$ and 74.5$\%$) with only a VGG-16 backbone. On QNRF, our work achieves first place on both localization and counting. Significantly, even the VGG-16 version of our work surpasses the Swin-Transformer \cite{liu2021swin} based DCST (72.6$\%$ vs. 72.4$\%$).
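These headline figures can be re-derived from the table entries themselves. The short check below (our own script, not part of any released code; it assumes F1-m denotes the standard F1-measure, and small residuals stem from the table values already being rounded) recomputes the F1-m scores and the relative MAE reductions:
\begin{verbatim}
def f1(prec, rec):
    """F1-measure: harmonic mean of precision and recall (in percent)."""
    return 2.0 * prec * rec / (prec + rec)

print(f1(77.0, 68.7))  # 72.61..., matches the reported 72.6 (QNRF, ours)
print(f1(76.4, 75.5))  # 75.95..., matches the reported 76.0 (SHHA, ours)

# Relative MAE reductions on JHU-Crowd++ quoted in the previous section:
print(f"{(101.8 - 78.6) / 101.8:.0%}")  # 23% (VGG-16)
print(f"{(89.4 - 70.2) / 89.4:.0%}")    # 21% (HR-Net)
\end{verbatim}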
\begin{table*}[t]
\centering
\caption{Comparison with SOTA methods on congested datasets.}
\renewcommand\arraystretch{1.5}
\setlength{\tabcolsep}{5mm}
\begin{tabular}{c|c|c|c|c|c}
\whline
\multirow{2}{*}{Method} & \multirow{2}{*}{Backbone} & \multicolumn{2}{c|}{QNRF} & \multicolumn{2}{c}{SHHA} \\
\cline{3-6}
& & \textbf{F1-m}/Pre./Rec. (\%) & MAE/MSE & \textbf{F1-m}/Pre./Rec. (\%) & MAE/MSE \\
\whline
TinyFaces & ResNet-101 & 49.4/36.3/\textbf{77.3} & 336.8/741.6 & 57.3/43.1/\textbf{85.5} & 237.8/422.8 \\
\hline
RAZ\_Loc & VGG-16 & 53.3/59.4/48.3 & \underline{118.0}/\underline{198.0} & 69.2/61.3/\underline{79.5} & 71.6/\underline{120.1} \\
\hline
LSC CNN & VGG-16 & 58.2/58.6/57.7 & 120.5/218.3 & 68.0/79.6/66.5 & \textbf{66.4}/\textbf{117.0} \\
\hline
IIM & VGG-16 & 68.8/\underline{78.2}/61.5 & 160.6/290.0 & 72.5/72.6/72.5 & 83.6/164.2 \\
\hline
Ours & VGG-16 & \underline{72.6}/77.0/68.7 & 137.6/263.2 & \underline{76.0}/76.4/75.5 & 71.8/128.1 \\
\whline
IIM & HR-Net & 72.0/\textbf{79.3}/65.9 & 142.6/261.1 & 73.9/\underline{79.8}/68.7 & 69.3/138.7 \\
\hline
DCST & Swin-ViT & 72.4/77.1/68.2 & 127.2/234.3 & 74.5/77.2/72.1 & 78.4/153.2 \\
\hline
Ours & HR-Net & \textbf{75.5}/77.9/\underline{73.4} & \textbf{104.4}/\textbf{197.4} & \textbf{78.1}/\textbf{81.7}/74.9 & \underline{68.8}/138.6 \\
\whline
\end{tabular}
\label{qnrf}
\end{table*}
\subsubsection{Comparison with SOTA methods on Sparse Dataset SHHB}
In this section, we list the results on the sparse dataset SHHB. Tab. \ref{shhb} shows that we take first place on F1-m (86.3$\%$). Although our counting performance lags slightly, we still obtain a certain improvement over the most closely related instance-segmentation locator IIM. We attribute the crux to the baseline method: in segmentation-based localization, each predicted instance represents true semantic information, while a density-map regressor cannot guarantee that its responding values carry true semantic information. To be specific, a locator with high counting performance but low localization performance cannot be recognized as a good pedestrian learner. Moreover, there exists a contradictory phenomenon: with higher localization precision, more box proposals tend to be discarded, which incurs worse counting performance.
\begin{table}
\centering
\caption{Comparison with SOTA methods on sparse dataset.}
\renewcommand\arraystretch{1.3}
\setlength{\tabcolsep}{3mm}
\begin{tabular}{c|c|c|c}
\whline
\multirow{2}{*}{Method} & \multirow{2}{*}{Backbone} & \multicolumn{2}{c}{SHHB} \\
\cline{3-4}
& & \textbf{F1-m}/Pre./Rec. (\%) & MAE/MSE \\
\whline
TinyFaces & ResNet-101 & 71.1/64.7/79.0 & -/- \\
\hline
RAZ Loc & VGG-16 & 68.0/60.0/78.3 & \underline{9.9}/\underline{15.6} \\
\hline
LSC CNN & VGG-16 & 71.2/71.7/70.6 & \textbf{8.1}/\textbf{12.7} \\
\hline
IIM & VGG-16 & 80.2/84.9/76.0 & 22.1/44.4 \\
\hline
Ours & VGG-16 & 83.8/89.4/78.0 & 18.2/37.8 \\
\whline
IIM & HR-Net & \underline{86.2}/\underline{90.7}/\underline{82.1} & 13.5/28.1 \\
\hline
DCST & Swin-ViT & 86.0/88.8/\textbf{83.3} & 11.0/23.6 \\
\hline
Ours & HR-Net & \textbf{86.3}/\textbf{91.9}/81.2 & 16.0/33.5 \\
\whline
\end{tabular}
\label{shhb}
\end{table}
\subsection{Discussion}
In this section, we discuss how GMS and the Scoped Teacher improve the final performance, based on the experiments in Section \ref{sec_exp}. To facilitate a clear discussion, we pick one typical sample from SHHA and visualize its predicted confidence maps, threshold maps and binary maps from the baseline model, from GMS inference and from the Scoped Teacher trained model, see Fig. \ref{fig_ctb}.
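For reference, the three kinds of maps are tied together by a simple pixel-wise rule: a pixel enters the binary map when its confidence exceeds its learned threshold, and each connected component of the binary map yields one instance. The sketch below is our own minimal rendition of this pipeline (function names are ours, not those of the IIM codebase):
\begin{verbatim}
from scipy import ndimage

def binarize_and_locate(confidence, threshold):
    """confidence, threshold: HxW float arrays predicted by the locator."""
    binary = confidence > threshold              # pixel-wise binarization
    labels, n = ndimage.label(binary)            # connected components
    boxes = ndimage.find_objects(labels)         # one box slice per instance
    centers = ndimage.center_of_mass(binary, labels, list(range(1, n + 1)))
    return binary, boxes, centers
\end{verbatim}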
\begin{figure*}[t]
\centering
\includegraphics[width=1.0\textwidth]{fig_conf_thre.png}
\caption{Visualization on typical sample from SHHA. The depicted information includes the confidence maps, threshold maps and binary maps predicted by baseline model (\textit{Base}), GMS inference (\textit{GMS}) and Scoped Teacher trained model (\textit{Scope}).}
\label{fig_ctb}
\end{figure*}
To begin with Tab. \ref{tab AA}, we notice that directly applying GMS at testing time already brings improvement, which demonstrates the effect of scale alignment. However, it is common sense that conventional image interpolation does not provide any additional information, and an interpolation factor that is too large incurs non-semantic distortion which damages model performance, see Tab. \ref{optimal}. To this end, we argue that the improvement by GMS comes from distribution regularization and other forms of knowledge exploitation. For relatively small instances which are still included within the feature distribution captured by the crowd locator, the representations are so small that the embeddings are recognized as outliers. Then, with the scale regularized, GMS aids the feature extractor in mapping the latent representations of smaller instances to normal ones. See the Confidence column in Fig. \ref{fig_ctb}: the red box selects a region filled with tiny instances. At the bottom of the box, GMS provides higher confidence than the baseline, because the original representations of those instances with improved confidence still lie in the latent space of the crowd locator. Hence, this is knowledge exploited by GMS. Nevertheless, since the image in Fig. \ref{fig_ctb} only has an effective resolution (the resolution provided by the dataset authors) of 1024 $\times$ 768, the instances at the top of the red box have outlier representations, which remain outliers after alignment via GMS. With the confidence variation under GMS explained, we now concentrate on threshold learning. See the black box in the Threshold column of Fig. \ref{fig_ctb}: GMS yields an unsmooth threshold map. In fact, the regularization of GMS does not introduce any influence on the model parameters. On the right and left sides of the black box, the two regions with obviously low thresholds should be negative; this is attributable to the poor robustness of the baseline model. Since the corresponding area in the red box shows better confidence predictions, the abnormally low-threshold areas in the black box can hardly be explained by non-semantic distortion. In Scoped Teacher training, GMS actively exposes such wrong predictions to the teacher model so that they can be corrected, see the black box in the Scope row of Fig. \ref{fig_ctb}. Thus, this is also knowledge exploited by GMS. At last, we analyze the binary map. See the yellow box in the Binary column of Fig. \ref{fig_ctb}: GMS brings more predicted boxes than the baseline, which makes a nontrivial improvement in the recall value (71.8 vs. 62.2). Obviously, this is also knowledge exploited by GMS.
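To make the above concrete, the following is a schematic sketch of the scale-alignment step as we describe it (the hyper-parameters, helper names and the use of an off-the-shelf Gaussian mixture are our own illustrative choices, not the released implementation): per-instance scales are decoupled into sub-distributions by a Gaussian mixture, and each sub-region is interpolated so that its mean scale matches a chosen anchor scale.
\begin{verbatim}
import numpy as np
import torch.nn.functional as F
from sklearn.mixture import GaussianMixture

def gms_align(image, inst_scales, n_sub=3, anchor=None):
    """image: 1xCxHxW tensor; inst_scales: per-instance box sizes (N,)."""
    log_s = np.log(np.asarray(inst_scales))[:, None]
    gmm = GaussianMixture(n_components=n_sub).fit(log_s)   # decoupling
    anchor = anchor or float(np.exp(gmm.means_).max())     # target scale
    aligned = []
    for k in range(n_sub):
        factor = anchor / float(np.exp(gmm.means_[k]))     # zoom factor
        # cropping the window covering component-k instances is omitted
        aligned.append(F.interpolate(image, scale_factor=factor,
                                     mode='bilinear', align_corners=False))
    return aligned
\end{verbatim}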
Then, we discuss the effect of the Scoped Teacher. Given the GMS-exploited knowledge described above, the Scoped Teacher transfers that knowledge via consistency regularization. One may wonder, however, how the knowledge is transferred and what role the Scoped Teacher plays beyond being a bridge in the transfer. Beginning again with the confidence analysis, namely the Confidence column in Fig. \ref{fig_ctb}, the red boxes select our regions of interest. As discussed, GMS exploits latent instance representations. We assert that the Scoped Teacher teaches the student to build a connection between normal tiny representations and GMS-aligned representations: the student locator, fed with the original outliers, is driven to make predictions similar to those of the teacher fed with the mapped embeddings, and this process induces an implicit mapping from outlier embeddings to normal ones. Moreover, there is another interesting phenomenon: the confidences in the red boxes are higher in the Scope column than in the GMS column. We argue that the Scoped Teacher further aggregates the knowledge. The scope of GMS is limited to the current input, whereas the Scoped Teacher learns to build the connection from all similar representations in the training set. Thus, the knowledge in confidence is this connection, and the role the Scoped Teacher plays is that of connector and aggregator. Next, we analyze threshold learning. In the Threshold column of Fig. \ref{fig_ctb}, the black boxes again select our regions of interest. We notice that the threshold map from Scope is smoother than that from GMS, and the issues in the negative regions analyzed in the last paragraph are settled. Under consistency regularization, the negative regions output FP samples which expose the locator's lack of robustness, and the corresponding loss is deployed to optimize it away. Furthermore, since GMS treats the distribution discretely, images lose some physical features; the Scoped Teacher helps the locator learn the useful information exploited by GMS while ignoring the issues incurred by this loss. Thus, the knowledge transferred is the penalty on non-robustness, and the Scoped Teacher plays the role of a filter that selects useful knowledge. Finally, we analyze the Binary column of Fig. \ref{fig_ctb}. In the yellow boxes, Scope depicts fewer boxes than GMS; the recall comparison shows that more boxes are removed, while the precision comparison shows that the remaining boxes are more accurate. Thus, the knowledge transferred is box refinement, and the Scoped Teacher plays the role of refiner.
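As a compact illustration of this teacher-student coupling, a training step can be sketched as follows (a sketch in the spirit of common mean-teacher practice; the momentum value, the MSE consistency terms and the assumption that the locator returns a (confidence, threshold) pair are ours and may differ from the released code):
\begin{verbatim}
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, m=0.999):
    """Teacher tracks an exponential moving average of the student."""
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.data.mul_(m).add_(ps.data, alpha=1.0 - m)

def train_step(student, teacher, image, gms_image, optimizer):
    with torch.no_grad():
        t_conf, t_thr = teacher(gms_image)   # teacher sees GMS-aligned input
    s_conf, s_thr = student(image)           # student sees the raw image
    # consistency terms (resizing outputs to a common grid is omitted)
    loss = F.mse_loss(s_conf, t_conf) + F.mse_loss(s_thr, t_thr)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    ema_update(teacher, student)
    return loss.item()
\end{verbatim}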
\section{Conclusions}
This paper aims to tackle an essential issue in crowd localization: intrinsic scale shift. Specifically, we propose to regularize the chaotic scale distribution in order to align the scale shift. Gaussian Mixture Scope (GMS) is proposed to implement this scale-distribution regularization from the perspective of distribution decoupling and alignment among sub-distributions. Moreover, GMS introduces spatial features into the regularization, which geometrizes the alignment so that it can be deployed via image interpolation. To further address the semantic distortion incurred by adaptive decoupling, we propose a novel sub-distribution re-aggregation strategy. Furthermore, a Scoped Teacher model with a corresponding consistency regularization is introduced to transfer knowledge from GMS-processed data to the locator, providing a novel way of deploying GMS that makes the locator actively learn the knowledge. It is also demonstrated that the Scoped Teacher yields more consistent features, stabilizing the training under consistency regularization. The proposed GMS is remarkably effective in improving localization performance, while the Scoped Teacher bridges the data and the model to enable GMS in the training phase and to promote the final localization results. Extensive experiments show that the proposed work achieves state-of-the-art results on popular crowd-localization datasets. In the future, we will investigate how to align the average scale shift among datasets, namely extrinsic scale shift, so as to locate crowds in the open-set setting.
\bibliographystyle{IEEEtran}
\section{Introduction}
Today, the existence of dark matter (DM) is certain; its underlying nature, however, is still largely unknown.
At present, the DAMA/LIBRA experiment\footnote{DAMA/LIBRA will be shortened to DAMA in the following.} claims to observe dark matter via its expected annual modulation \cite{bernabei_final_2013}, which disagrees with null results from many other direct DM searches (fig.~\ref{fig:limit}, \cite{savage_compatibility_2009}). Stating disagreement, however, only holds under certain assumptions on the interaction mechanism of DM with Standard Model particles. The major model dependence in the comparison of experiments comes from the use of different target materials: DAMA uses NaI(Tl), while the excluding null results are obtained with Ar, CaWO$_4$, CsI, Ge, Si and Xe targets.
The R\&D project COSINUS\footnote{Cryogenic Observatory for SIgnals seen in Next-generation Underground Searches} aims to develop a NaI-based cryogenic detector \cite{angloher_cosinus_2016} offering particle discrimination and, as also using NaI, its results can be directly compared to DAMA.
The heart of a COSINUS detector is an undoped NaI crystal, cooled to mK temperatures. Any particle interaction in the crystal will excite phonons. Recording this phonon signal with a so-called transition edge sensor (TES) provides a very precise measurement of the energy deposited in the crystal, quasi-independent of the type of interacting particle. In addition, scintillation light is produced by a particle interaction, which we measure by pairing the NaI crystal with a cryogenic light detector read out with a second TES\footnote{The TESs are produced by the Max-Planck-Institute of Physics in Munich.}. Since the amount of produced scintillation light strongly depends on the interacting particle, measuring phonons and light provides particle identification. In particular, it allows one to discriminate e$^-$/$\gamma$-events from nuclear recoils; the former are the main background, while the latter are believed (in the vast majority of models) to be induced by DM.
Fig.~\ref{fig:schema} depicts the detector design. Since the hygroscopic nature of NaI (blue) and its low melting point prevent a direct evaporation of the TES, we instead evaporate it on a carrier crystal (purple).\footnote{For the first prototype (see fig.~\ref{fig:schema}) we used a CdWO$_4$ carrier \textit{connected} with silicon oil to the NaI-crystal.} The light detector is a silicon beaker (black) completely surrounding the target crystal. The beaker shape offers two attractive features: firstly, it maximizes light collection, and secondly, in combination with the carrier crystal exceeding the diameter of the beaker, it avoids any line of sight between the target crystal and non-active surfaces. Such a geometry is mandatory to veto any $\alpha$-induced surface backgrounds \cite{strauss_detector_2015}.
\begin{figure}[htb]
\begin{minipage}[t]{0.49\textwidth}
\centering
\includegraphics[width=.80\textwidth]{schema}
\caption{Schematic of a COSINUS detector.}
\label{fig:schema}
\end{minipage}
\hfill
\begin{minipage}[t]{0.49\textwidth}
\centering
\includegraphics[width=.80\textwidth]{LY}
\caption{Simulated events for an exposure of 100kg days incl.~e$^-$/$\gamma$-background (black) and DM signal (red). See text for details.}
\label{fig:LY}
\end{minipage}
\end{figure}
A subset of the authors recently published results of measurements of CsI \cite{angloher_csi_2016}, performed with a setup comparable to fig.~\ref{fig:schema}. As both CsI and NaI belong to the family of alkali halides, we are convinced that the experience gained with CsI allows a realistic estimate of the achievable performance of a COSINUS detector. A distinct advantage of cryogenic detectors is a very low threshold for nuclear recoils: the design goal of COSINUS is a threshold of 1 keV.\footnote{Currently, the CRESST-II experiment, which is based on the same technology, has world-leading sensitivity for light DM, achieved by a nuclear recoil threshold of 0.3 keV \cite{angloher_results_2016}. Furthermore, first prototype measurements indicate that the new CRESST-III detectors reach nuclear recoil thresholds well below 100 eV.}
Fig.~\ref{fig:LY} shows a simulation for an exposure of 100 kg-days in the light yield--energy plane. The light yield is defined as the ratio of the light to the phonon signal. e$^-$/$\gamma$-events produce the most light and thus get assigned a light yield of one. Nuclear recoils, instead, have lower light yields, indicated by blue and green solid lines for Na and I, respectively.\footnote{The bands for nuclear recoils off Na and I were calculated with the energy-dependent quenching factors of \cite{tretyak_semi-empirical_2010}.} The black events correspond to the background budget reached by the DAMA experiment (flat background of 1 count/keV/kg/day + $^{40}$K activity of 600 $\mu$Bq). To illustrate the discrimination power of a COSINUS detector module we added a DM signal corresponding to the benchmark point shown in fig.~\ref{fig:limit} ($m=10$ GeV/c$^2$, $\sigma=2\cdot 10^{-4}$ pb). We want to stress that this signal reflects the standard scenario of DM particles scattering elastically and coherently off nuclei. The benchmark point was chosen in accordance with the interpretation of the DAMA signal in this scenario as put forward by the authors of \cite{savage_compatibility_2009}.
As COSINUS detectors measure a phonon signal, the threshold is (practically) identical for nuclear and electron recoils. However, for experiments like DAMA which only measure scintillation light, the reduced light output of nuclear recoils (commonly referred to as quenching) has to be considered. This is indicated by the magenta boxes in fig.~\ref{fig:LY}, corresponding to electron-equivalent energies of (2--6) keVee, which is the energy window where DAMA observes the modulation.
In total, the simulation (for the benchmark point) predicts $\sim$2400 events above threshold, with 45\% at energies between 1 and 2 keV. This simulation clearly points out the two main benefits of a cryogenic calorimeter: a low threshold for nuclear recoils and particle discrimination. In conclusion, we are convinced that COSINUS detectors will be able to clarify whether the DAMA signal is due to nuclear recoils or not, even with a moderate exposure of $\mathcal{O}$(10-100 kg-days).
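A toy version of such a simulation is easy to set up; the snippet below is purely illustrative (the exposure and flat-background rate are taken from the text, but the constant quenching factor and the spectral shape are simplistic stand-ins for the energy-dependent treatment cited above) and populates the light yield--energy plane with a flat e$^-$/$\gamma$-background and a quenched nuclear-recoil population:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
exposure, e_max = 100.0, 20.0        # kg-days; keV range of the plot
flat_bg = 1.0                        # counts/keV/kg/day (DAMA-like budget)

# e-/gamma background: flat in energy, light yield centered at 1
n_bg = rng.poisson(flat_bg * exposure * e_max)
e_bg = rng.uniform(0.0, e_max, n_bg)
ly_bg = rng.normal(1.0, 0.1, n_bg)

# toy DM-induced Na recoils above a 1 keV threshold (quenched light yield)
qf_na = 0.3                          # illustrative constant quenching factor
n_dm = rng.poisson(2400)
e_dm = 1.0 + rng.exponential(3.0, n_dm)
ly_dm = rng.normal(qf_na, 0.05, n_dm)
\end{verbatim}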
\begin{figure}[htb]
\begin{minipage}[t]{0.49\textwidth}
\centering
\includegraphics[width=.80\textwidth]{limit}
\caption{Anticipated sensitivity of COSINUS (blue band) for spin-independent elastic scattering. Also shown: the benchmark point (blue cross, see text) and results from other experiments. Plot from \cite{angloher_results_2016}, references given therein.}
\label{fig:limit}
\end{minipage}
\hfill
\begin{minipage}[t]{0.49\textwidth}
\centering
\includegraphics[width=.80\textwidth]{pulse}
\caption{e$^-$/$\gamma$-event with an energy of 240 keV measured with the first NaI prototype. Simultaneously recorded are the light signal (blue) and the phonon signal (red).}
\label{fig:Pulse}
\end{minipage}
\end{figure}
This conclusion also becomes evident in fig.~\ref{fig:limit} showing the sensitivity (blue band) of COSINUS in comparison to the interpretation of the DAMA signal in the standard scenario of elastic DM nucleus scattering (blue islands \cite{savage_compatibility_2009}). Also shown are the benchmark point and current results from other experiments.
In August 2016 we completed the measurement of the first COSINUS prototype. While the analysis is still ongoing, a first event is depicted in fig.~\ref{fig:Pulse}. It shows the simultaneous phonon (red) and light (blue) signals\footnote{For the prototype measurement a standard CRESST-like light detector (silicon-on-sapphire disk) was used, instead of the beaker-shaped light detector foreseen for the final COSINUS detector design.} of an e$^-$/$\gamma$-event depositing 240 keV in the detector. To our knowledge this is the first measurement of a NaI crystal at mK temperatures.
In conclusion, we demonstrate that NaI can be operated as a cryogenic calorimeter at mK temperatures. This sheds a positive light on the feasibility of the COSINUS performance goals and, therefore, on the possibility of gaining new insight into the nature of the DAMA signal.
\section*{Acknowledgements}
\footnotesize{This work was carried out in the frame of the COSINUS R\&D project funded by the INFN (CSN5). In particular, we want to thank the LNGS mechanical workshop team E. Tatananni, A. Rotilio, A. Corsi, and B. Romualdi for continuous and constructive help in the overall set-up construction and M. Guetti for his cryogenic expertise and his constant support.}\newline
\bibliographystyle{iopart-num}
\section{Introduction}
The main goal of this survey paper is to show how certain finite groups, in particular, Salingaros vee groups~\cite{salingaros1,salingaros2,salingaros3} and the elementary abelian group
$(\mathbb{Z}_2)^n = \mathbb{Z}_2 \times \cdots \times \mathbb{Z}_2$ ($n$-times), and their group algebras and twisted group algebras, arise in the context of Clifford algebras $\clpq{p}{q}.$
Chernov's observation~\cite{chernov} that Clifford algebras $\clpq{p}{q}$ can be viewed as images of (non-twisted) group algebras of suitable $2$-groups, conjectured to be Salingaros vee groups~\cite{walley}, allows one to gain a new viewpoint on these algebras and to relate classical group-theoretical results~\cite{dornhoff,gorenstein,mckay}, in particular on finite $2$-groups, to the theory of Clifford algebras. Salingaros classified the groups $\Gpq{p}{q}$ -- referred to as \textit{Salingaros vee groups} -- into five non-isomorphic classes $N_{2k-1},$ $N_{2k},$ $\Omega_{2k-1},$ $\Omega_{2k},$ and $S_{k}$. These groups, according to the theory of finite $2$-groups~\cite{dornhoff,mckay}, are central products of the extra-special groups $D_8$ -- the dihedral group -- and $Q_8$ -- the quaternionic group -- both of order~$8$, and their centers $\mathbb{Z}_2$, $\mathbb{Z}_2 \times \mathbb{Z}_2$, or $\mathbb{Z}_4$. Thus, the properties of these groups, and the fact that they fall into five classes, are reflected by the well-known fact~\cite{chevalley,lam,lounesto} that Clifford algebras $\clpq{p}{q}$ also fall into five isomorphism classes. The structure theorem on these algebras is recalled in Appendix~A. Furthermore, the ``periodicity of eight'' of Clifford algebras viewed as the images of Salingaros vee groups seems to be related to, if not predicted by, the structure of these groups and their group algebras. Thus, Section~2 is devoted to this approach to Clifford algebras.
Section~3 is devoted to a review of the basic properties of Salingaros vee groups $\Gpq{p}{q}$ appearing as finite subgroups of the group of units $\clpq{p}{q}^{\times}.$ Furthermore, we review certain important subgroups of these groups appearing in the context of stabilizer groups of primitive idempotents in $\clpq{p}{q}$~\cite{ablamowicz2,ablamowicz3}.
Section~4 is devoted to a review of the central product structure of Salingaros vee groups.
In Section~5, we recall how the elementary abelian group $(\mathbb{Z}_2)^n$ appears in the context of defining the Clifford product on the set of monomials $\mathbf{e}_{\underline{a}}$ indexed by binary $n$-tuples~$\underline{a}$ from $(\mathbb{Z}_2)^n$. In this first context, Walsh functions -- essentially, irreducible characters of $(\mathbb{Z}_2)^n$ -- and the Gray code -- a certain isomorphism of $(\mathbb{Z}_2)^n$ -- are used to define the $\clpq{p}{q}$ algebra product~\cite[Page 284]{lounesto} and references therein. In particular, a formula given by Lounesto dates back to 1935 and is attributed to Brauer and Weyl~\cite{brauer}. It will be shown how this formula, applicable only to real Clifford algebras $\clpq{p}{q}$ over quadratic vector spaces $(V,Q)$ with a non-degenerate quadratic form $Q$ of signature $(p,q)$ and for an orthonormal set of basis elements (group generators), can easily be extended to Clifford algebras $\clpqr{p}{q}{r}$ for a degenerate quadratic form $Q$ with $\dim V^{\perp}=r.$
Finally, in Section~6, we briefly recall how the group $(\mathbb{Z}_2)^n$ appears again in the context of the Clifford algebra $\clpq{p}{q}$ as a twisted group algebra $\mathbb{R}^t[(\mathbb{Z}_2)^n]$ viewed as a Hopf algebra with a certain quasi-triangular structure~\cite{albuquerque,downs}. This structure is needed to twist the commutative product in the group algebra $\mathbb{R}[(\mathbb{Z}_2)^n]$, in a manner similar to the Brauer and Weyl formula, so that the twisted product is the Clifford product in $\clpq{p}{q}.$ It is recalled that the ``transposition'' anti-involution of $\clpq{p}{q}$ introduced in~\cite{ablamowicz1,ablamowicz2,ablamowicz3} is actually the antipode in the Hopf algebra $\mathbb{R}^t[(\mathbb{Z}_2)^n]$.\footnote{We remark that twisted group rings can also be described as certain special Ore extensions known as skew polynomial rings~\cite{bueso}.}
Our standard references on the group theory are~\cite{dornhoff, gorenstein, rotman}; in particular, for the theory of $p$-groups we rely on~\cite{mckay}; for Clifford algebras we use~\cite{chevalley,lam,lounesto} and references therein; on representation theory we refer to~\cite{james}; and for the theory of Hopf algebras we refer to~\cite{majid}.
\section{Clifford Algebras as Images of Group Algebras}
Using Chernov's idea \cite{chernov}, in this section we want to show how Clifford algebras $\clpq{p}{q}$ can be viewed as images of group algebras $\mathbb{R}[G]$ of certain $2$-groups. It is conjectured~\cite{walley} that the group~$G$, up to an isomorphism, is the Salingaros vee group
$\Gpq{p}{q}$~\cite{salingaros1,salingaros2,salingaros3}. These groups, and their subgroups, have been recently discussed in \cite{ablamowicz2,ablamowicz3,brown,maduranga,maduranga2}.
\begin{definition}
Let $G$ be a finite group and let $\mathbb{F}$ be a field\footnote{Usually, $\mathbb{F}=\mathbb{R}$ or $\mathbb{C}$ although finite fields are also allowed. In this paper, we will be looking at the real Clifford algebras
$\clpq{p}{q}$ as images of real group algebras or as real twisted group algebras.}. Then the \textit{group algebra} $\mathbb{F}[G]$ is the vector space
\begin{gather}
\mathbb{F}[G] = \left\{\sum_{g \in G} \lambda_g g, \; \lambda_g \in \mathbb{F}\right\}
\end{gather}
with multiplication defined as
\begin{gather}
\left(\sum_{g \in G} \lambda_g g\right)
\left(\sum_{h \in G} \mu_h h\right)=
\sum_{g,h \in G} \lambda_g \mu_h (gh)=
\sum_{g \in G} \sum_{h \in G} \lambda_h \mu_{h^{-1}g} g
\end{gather}
where all $\lambda_g,\mu_h \in \mathbb{F}.$~\cite{james}
\end{definition}
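The convolution product above is straightforward to realize on a computer. The following minimal sketch (our own illustration; the group is given abstractly by its multiplication map) represents an element of $\mathbb{F}[G]$ as a dictionary from group elements to coefficients:
\begin{verbatim}
from collections import defaultdict

def ga_product(u, v, mul):
    """Product in F[G]: u, v map group elements to coefficients,
    and mul(g, h) is the group operation."""
    w = defaultdict(float)
    for g, lam in u.items():
        for h, mu in v.items():
            w[mul(g, h)] += lam * mu
    return dict(w)

# Example: R[Z_2] with Z_2 = {0, 1} under addition mod 2, t := class of 1.
z2 = lambda g, h: (g + h) % 2
print(ga_product({0: 1.0, 1: 2.0}, {0: 3.0, 1: 4.0}, z2))
# {0: 11.0, 1: 10.0}, i.e. (1 + 2t)(3 + 4t) = 11 + 10t since t^2 = 1
\end{verbatim}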
Group algebras are thus associative unital algebras with the group identity element playing the role of the algebra identity. In the theory of representations of finite groups, all inequivalent irreducible representations are related to a complete decomposition of the group algebra over $\mathbb{C}$ viewed as a \textit{regular} $\mathbb{C}$-module (cf. \cite[Maschke Theorem]{james}). The theory is rich in its own right. The theory of group characters can then be derived from the representation
theory~\cite{james} or, as is often done, from combinatorial arguments and the theory of characters of the symmetric group~\cite{sagan}. Since in this survey we are only interested in finite groups, we just recall for completeness that every finite group is isomorphic to a subgroup of a symmetric group~\cite{rotman}.
We begin by recalling a definition of a $p$-group.
\begin{definition}
Let $p$ be a prime. A group $G$ is a \textit{$p$-group} if every element in $G$ is of order $p^k$ for some $k \geq 1$.
\end{definition}
Note that any finite group $G$ of order $p^n$ is a $p$-group. A classical result states that a center of any $p$-group is nontrivial, and, by Cauchy's theorem we know that every finite $p$-group has an element of order $p$. Thus, in particular, the center of any finite $p$-group has an element of
order~$p$~\cite{dornhoff, gorenstein, rotman}. In the following, we will be working only with finite $2$-groups such as, for example, the group $(\mathbb{Z}_2)^n$ and Salingaros vee groups $\Gpq{p}{q}$ of order $2^{1+p+q}.$
Two important groups in the theory of finite $2$-groups and hence in this paper, are the \textit{quaternionic group} $Q_8$ and the \textit{dihedral group} $D_8$ (the symmetry group of a square under rotations and reflections), both of order $|Q_8|=|D_8|=8.$ These groups have the following presentations:
\begin{definition}
The \textit{quaternionic group} $Q_8$ has the following two presentations:
\begin{subequations}
\begin{align}
Q_8 &= \langle a,b \mid a^4=1, a^2=b^2, bab^{-1}=a^{-1} \rangle \label{eq:q8a}\\
&= \langle I,J,\tau \mid \tau^2=1, I^2=J^2=\tau, IJ=\tau J I\rangle \label{eq:q8b}
\end{align}
\end{subequations}
\end{definition}
\noindent
Thus, $Q_8=\{1,a,a^2,a^3,b,ab,a^2b,a^3b\}$ where the group elements have orders as follows: $|a^2|=2$, $|a|=|a^3|=|b|=|ab|=|a^2b|=|a^3b|=4,$ so the order structure of $Q_8$ is $(1,1,6),$\footnote{That is, $Q_8$ has one element of order 1; one element of order 2; and six elements of order 4.} and the center $Z(Q_8)=\{1,a^2\} \cong \mathbb{Z}_2$. Here, we can choose $\tau=a^2.$ While the presentation~(\ref{eq:q8a}) uses only two generators, for convenience and future use, we prefer presentation~(\ref{eq:q8b}) which explicitly uses a central element $\tau$ of order~$2$.
\begin{definition}
The \textit{dihedral group} $D_8$ (the symmetry group of a square) has the following two presentations:
\begin{subequations}
\begin{align}
D_8 &= \langle a,b \mid a^4=b^2=1, bab^{-1}=a^{-1} \rangle \label{eq:d8a}\\
&= \langle \sigma, \tau \mid \sigma^4=\tau^2=1, \tau \sigma \tau^{-1}=\sigma^{-1} \rangle \label{eq:d8b}
\end{align}
\end{subequations}
\end{definition}
\noindent
Thus, $D_8=\{1,a,a^2,a^3,b,ab,a^2b,a^3b\}$ where $|a^2|=|b|=|ab|=|a^2b|=|a^3b|=2,$ $|a|=|a^3|=4,$ the order structure of $D_8$ is $(1,5,2),$ and $Z(D_8)=\{1,a^2\} \cong \mathbb{Z}_2$. Here, we can choose $\tau=b,$ $\sigma=a$, hence, $\sigma^2 \in Z(D_8).$ That is, $\sigma^2$ is our central element of order~$2$, and our preferred presentation of $D_8$ is~(\ref{eq:d8b}).
In the following two examples, we show how one can construct the Clifford algebra
$\clpq{0}{2} \cong \mathbb{H}$ (resp. $\clpq{1}{1}$) as an image of the group algebra of $Q_8$ (resp. $D_8$).
\begin{example}(Constructing $\mathbb{H} \cong C \kern -0.1em \ell_{0,2}$ as $\mathbb{R}[Q_8]/\mathcal{J}$)\\
Define an algebra map $\psi$ from the group algebra $\mathbb{R}[Q_8] \rightarrow \mathbb{H} = \spn_\mathbb{R}\{1,\mathbf{i},\mathbf{j},\mathbf{i}\mathbf{j} \}$ as follows:
\begin{gather}
1 \mapsto 1,\quad \tau \mapsto -1,\quad I \mapsto \mathbf{i},\quad J \mapsto \mathbf{j},
\end{gather}
Then, $\cb{J}=\ker \psi = (1+\tau)$ for the central element $\tau$ of order~$2$ in $Q_8$\footnote{Here, $(1+\tau)$ denotes an ideal in $\mathbb{R}[Q_8]$ generated by $1+\tau$. Note that the two elements $\frac12(1\pm\tau)$ are idempotents which provide an \textit{orthogonal decomposition} of the unity in $\mathbb{R}[Q_8]$.}, so $\dim_{\mathbb{R}} \cb{J} = 4$ and $\psi$ is surjective. Let $\pi:\mathbb{R}[Q_8]\rightarrow \mathbb{R}[Q_8]/\cb{J}$ be the natural map $u \mapsto u+\cb{J}.$ There exists an isomorphism
$\varphi: \mathbb{R}[Q_8]/\cb{J} \rightarrow \mathbb{H}$ such that $\varphi \circ \pi = \psi$ and
\begin{gather*}
\pi(I^2) = I^2+\cb{J} = \tau+\cb{J} \mbox{ and } \varphi(\pi(I^2)) = \psi(\tau)=-1
=(\psi(I))^2=\mathbf{i}^2,\\
\pi(J^2) = J^2+\cb{J} = \tau+\cb{J} \mbox{ and } \varphi(\pi(J^2)) = \psi(\tau)=-1
=(\psi(J))^2=\mathbf{j}^2,\\
\pi(IJ+JI)=IJ+JI+\cb{J}=(1+\tau)JI+\cb{J} = \cb{J} \mbox{ and }\\
\varphi(\pi(IJ+JI)) = \psi(0)=0 = \psi(I)\psi(J)+\psi(J)\psi(I)=\mathbf{i}\mathbf{j}+\mathbf{j}\mathbf{i}.
\end{gather*}
Thus, $\mathbb{R}[Q_8]/\cb{J} \cong \psi(\mathbb{R}[Q_8]) = \mathbb{H} \cong C \kern -0.1em \ell_{0,2}$ provided the central element $\tau$ is mapped to $-1$ (see also~\cite{chernov}).
\label{ex:example1}
\end{example}
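Example~\ref{ex:example1} can be checked mechanically. In the sketch below (our own encoding, not tied to any library), passing to the quotient by $\cb{J}=(1+\tau)$ amounts to setting $\tau=-1$, so every element of $Q_8$ becomes a signed word over $\{1,I,J,IJ\}$, and the quaternion relations follow from the multiplication table:
\begin{verbatim}
# Signed-word model of R[Q_8]/(1+tau): tau is replaced by the sign -1.
TABLE = {('1','1'):(1,'1'),  ('1','I'):(1,'I'),
         ('1','J'):(1,'J'),  ('1','IJ'):(1,'IJ'),
         ('I','1'):(1,'I'),  ('I','I'):(-1,'1'),
         ('I','J'):(1,'IJ'), ('I','IJ'):(-1,'J'),
         ('J','1'):(1,'J'),  ('J','I'):(-1,'IJ'),
         ('J','J'):(-1,'1'), ('J','IJ'):(1,'I'),
         ('IJ','1'):(1,'IJ'),('IJ','I'):(1,'J'),
         ('IJ','J'):(-1,'I'),('IJ','IJ'):(-1,'1')}

def mul(a, b):
    (sa, wa), (sb, wb) = a, b
    s, w = TABLE[(wa, wb)]
    return (sa * sb * s, w)

assert mul((1,'I'), (1,'I')) == (-1,'1')    # i^2 = -1
assert mul((1,'J'), (1,'J')) == (-1,'1')    # j^2 = -1
assert mul((1,'I'), (1,'J')) == (1,'IJ')    # ij = k
assert mul((1,'J'), (1,'I')) == (-1,'IJ')   # ji = -ij, image of IJ = tau JI
\end{verbatim}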
\begin{example} (Constructing $C \kern -0.1em \ell_{1,1}$ as $\mathbb{R}[D_8]/\mathcal{J}$)\\
Define an algebra map $\psi$ from the group algebra $\mathbb{R}[D_8] \rightarrow C \kern -0.1em \ell_{1,1}$ such that:
\begin{gather}
1 \mapsto 1,\quad \tau \mapsto \mathbf{e}_1,\quad \sigma \mapsto \mathbf{e}_2,
\label{eq:rq8}
\end{gather}
where $C \kern -0.1em \ell_{1,1}=\spn_\mathbb{R} \{1,\mathbf{e}_1,\mathbf{e}_2,\mathbf{e}_1\mathbf{e}_2 \}$. Then, $\ker \psi = (1+\sigma^2)$ where $\sigma^2$ is a central element of order~$2$ in $D_8$. Let $\cb{J} = (1+\sigma^2)$. Thus, $\dim_{\mathbb{R}} \cb{J} = 4$ and
$\psi$ is surjective. Let $\pi:\mathbb{R}[D_8]\rightarrow \mathbb{R}[D_8]/\cb{J}$ be the natural map
$u \mapsto u+\cb{J}.$ There exists an isomorphism $\varphi: \mathbb{R}[D_8]/\cb{J} \rightarrow C \kern -0.1em \ell_{1,1}$ such that $\varphi \circ \pi = \psi$ and
\begin{gather*}
\pi(\tau^2) = \tau^2+\cb{J} = 1+\cb{J} \mbox{ and } \varphi(\pi(\tau^2)) = \psi(1)=1=\psi(\tau^2)=(\mathbf{e}_1)^2,\\
\pi(\sigma^2) = \sigma^2+\cb{J} \mbox{ and } \varphi(\pi(\sigma^2)) = \psi(\sigma^2)=-1=
(\mathbf{e}_2)^2,\\
\pi(\tau\sigma+\sigma\tau)=\tau\sigma+\sigma\tau+\cb{J}=\sigma\tau(1+\sigma^2)+\cb{J}=\cb{J} \mbox{ and }\\ \varphi(\pi(\tau\sigma+\sigma\tau) = \psi(0)=0=\psi(\tau)\psi(\sigma)+\psi(\sigma)\psi(\tau)=\mathbf{e}_1\mathbf{e}_2+\mathbf{e}_2\mathbf{e}_1.
\end{gather*}
Thus, $\mathbb{R}[D_8]/\cb{J} \cong C \kern -0.1em \ell_{1,1}$ provided the central element $\sigma^2$ is mapped to $-1.$
\label{ex:example2}
\end{example}
It is not difficult to modify Example~2 and construct $\clpq{2}{0}$ as the quotient algebra
$\mathbb{R}[D_8]/\cb{J}$ by changing only the definition of the algebra map $\psi$ given in~(\ref{eq:rq8}) to
\begin{gather}
1 \mapsto 1,\quad \tau \mapsto \mathbf{e}_1,\quad \sigma \mapsto \mathbf{e}_1\mathbf{e}_2,
\label{eq:rq88}
\end{gather}
Then, the rest of Example~2 follows except that of course now $(\mathbf{e}_1)^2=(\mathbf{e}_2)^2=1$. Thus, one can construct $C \kern -0.1em \ell_{2,0}$ as $\mathbb{R}[D_8]/\mathcal{J}$ with again $\cb{J} = (1+\sigma^2).$
We remark that the fact that we can use the group $D_8$ twice should not come as a surprise since
$\clpq{1}{1} \cong \clpq{2}{0}$ (as real Clifford algebras) due to one of the isomorphism theorems stating that
$\clpq{p}{q} \cong \clpq{q+1}{p-1},$~\cite[Page 215]{lounesto} (see also~\cite{chevalley,lam,periodicity}) and that we only have, up to an isomorphism, two non-abelian groups of order eight, namely, $Q_8$ and $D_8.$
We summarize our two examples as follows. In preparation for Chernov's theorem~\cite{chernov}, notice
that elements in each group $Q_8$ and $D_8$ can be written as follows:
\begin{itemize}
\item The quaternionic group $Q_8$:
$$
Q_8 = \{\tau^{\alpha_0}g_1^{\alpha_1}g_2^{\alpha_2} \mid \alpha_k \in \{0,1\},\, k=0,1,2\}
$$
where $\tau=a^2$ is the central element of order~$2$ in $Q_8$, $g_1=a$, and $g_2=b$. Thus,
$$
(g_1)^2=a^2=\tau, \quad (g_2)^2=b^2=a^2=\tau, \quad \tau g_1g_2 =g_2g_1.
$$
Observe that $|g_1|=|g_2|=4$ and $\mathbb{R}[Q_8]/\cb{J} \cong C \kern -0.1em \ell_{0,2}$ where
$\cb{J} = (1+\tau).$
\item The dihedral group $D_8$:
$$
D_8 = \{\tau^{\alpha_0}g_1^{\alpha_1}g_2^{\alpha_2} \mid \alpha_k \in \{0,1\},\, k=0,1,2\}
$$
where $\tau=a^2$ is the central element of order~$2$ in $D_8$, $g_1=b$, and $g_2=a$. Thus,
$$
(g_1)^2=b^2=1, \quad (g_2)^2=a^2=\tau, \quad \tau g_1g_2 =g_2g_1.
$$
Observe that $|g_1|=2,$ $|g_2|=4$ and $\mathbb{R}[D_8]/\cb{J} \cong C \kern -0.1em \ell_{1,1}$ where
$\cb{J} = (1+\tau).$
\end{itemize}
Chernov's theorem states the following.
\begin{theorem}[Chernov]
Let $G$ be a finite $2$-group of order $2^{1+n}$ generated by a central element~$\tau$ of order~$2$ and additional elements $g_1,\ldots,g_n,$ which satisfy the following relations:
\begin{subequations}
\begin{gather}
\tau^2=1, \quad (g_1)^2=\cdots=(g_p)^2=1, \quad (g_{p+1})^2=\cdots=(g_{p+q})^2=\tau,\label{eq:squares}\\
\tau g_j = g_j\tau, \quad g_ig_j=\tau g_jg_i, \quad i,j=1,\ldots,n=p+q, \label{eq:tau}
\end{gather}
\label{eq:both}
\end{subequations}
\hspace*{-1ex}so that $G=\{\tau^{\alpha_0}g_1^{\alpha_1}\cdots g_n^{\alpha_n} \mid \alpha_k \in \{0,1\},\, k=0,1,\ldots,n\}$.
Let $\cb{J}=(1+\tau)$ be an ideal in the group algebra $\mathbb{R}[G]$ and let $C \kern -0.1em \ell_{p,q}$ be the universal real Clifford algebra generated by $\{\mathbf{e}_k\}, k=1,\ldots,n=p+q,$ where
\begin{subequations}
\begin{gather}
\mathbf{e}_i^2 = Q(\mathbf{e}_i) \cdot 1 = \varepsilon_i \cdot 1 =
\begin{cases}
1 & \textit{for $1 \leq i \leq p$;}\\
-1 & \textit{for $p+1 \leq i \leq p+q$;}
\end{cases}
\label{eq:B1aa}\\
\mathbf{e}_i\mathbf{e}_j + \mathbf{e}_j \mathbf{e}_i = 0, \quad i \neq j,\quad 1 \le i,j \leq n.
\label{eq:B1bb}
\end{gather}
\label{eq:B11}
\end{subequations}
\noindent
\hspace*{-1ex}Then, (a) $\dim_{\mathbb{R}}\cb{J}=2^n;$ (b) There exists a surjective algebra homomorphism $\psi$ from the group algebra $\mathbb{R}[G]$ to $C \kern -0.1em \ell_{p,q}$ so that $\ker \psi = \cb{J}$ and $\mathbb{R}[G]/\cb{J} \cong C \kern -0.1em \ell_{p,q}.$
\label{thm:thm2}
\end{theorem}
\begin{remark}
Chernov's theorem does not give the existence of the group $G$. It only states that should such group
exist whose generators satisfy relations~(\ref{eq:both}), the result follows. It is not difficult to conjecture that the group~$G$ in that theorem is in fact the Salingaros vee group $\Gpq{p}{q}$, that is, $\mathbb{R}[\Gpq{p}{q}]/\cb{J} \cong \clpq{p}{q}$ (see \cite{walley}). In fact, we have seen it in
Examples~\ref{ex:example1} and \ref{ex:example2} above.
\end{remark}
\begin{proof}[Proof of Theorem~\ref{thm:thm2}]
Observe that $G=\{\tau^{\alpha_0}g_1^{\alpha_1}\cdots g_n^{\alpha_n} \mid
\alpha_k \in \{0,1\},\, k=0,1,\ldots,n\}$. The existence of a central element $\tau$ of order~$2$ is guaranteed by the well-known fact that the center of any $p$-group is nontrivial, and by Cauchy's
theorem~\cite{rotman}. Define an algebra homomorphism $\psi:\mathbb{R}[G] \rightarrow C \kern -0.1em \ell_{p,q}$ such that
\begin{gather}
1 \mapsto 1, \quad \tau \mapsto -1, \quad g_j \mapsto \mathbf{e}_j, \quad j=1,\ldots,n.
\end{gather}
Clearly, $\cb{J} \subset \ker \psi$. Let $u \in \mathbb{R}[G]$. Then,
\begin{gather}
u=\sum_{\alpha}\lambda_{\alpha}\tau^{\alpha_0}g_1^{\alpha_1}\cdots g_n^{\alpha_n}
=u_{1}+\tau u_{2}
\end{gather}
where
\begin{subequations}
\begin{gather}
u_i = \sum_{\widetilde{\alpha}}\lambda_{\widetilde{\alpha}}^{(i)}g_1^{\alpha_1}\cdots g_n^{\alpha_n}, \quad i=1,2,\\
\alpha=(\alpha_0,\alpha_1,\ldots,\alpha_n) \in \{0,1\}^{n+1} \quad \mbox{and} \quad
\widetilde{\alpha}=(\alpha_1,\ldots,\alpha_n) \in \{0,1\}^{n}.
\end{gather}
\end{subequations}
Thus, if $u \in \ker \psi$, then
\begin{gather}
\psi(u) =
\sum_{\widetilde{\alpha}}(\lambda_{\widetilde{\alpha}}^{(1)}-
\lambda_{\widetilde{\alpha}}^{(2)})
\mathbf{e}_1^{\alpha_1}\cdots\mathbf{e}_n^{\alpha_n} = 0
\end{gather}
implies $\lambda_{\widetilde{\alpha}}^{(1)}=\lambda_{\widetilde{\alpha}}^{(2)}$ since
$\{\mathbf{e}_1^{\alpha_1}\cdots\mathbf{e}_n^{\alpha_n}\}$ is a basis in $C \kern -0.1em \ell_{p,q}$. Hence,
\begin{gather}
u=(1+\tau)\sum_{\widetilde{\alpha}}\lambda_{\widetilde{\alpha}}^{(1)}g_1^{\alpha_1}\cdots g_n^{\alpha_n} \in \cb{J}.
\end{gather}
Thus, $\dim_{\mathbb{R}} \ker \psi=2^n,$ $\ker \psi = \cb{J}$, $\dim_{\mathbb{R}}\mathbb{R}[G]/\cb{J} = 2^{1+n}-2^n=2^n,$ so $\psi$ is surjective. Let $\varphi:\mathbb{R}[G]/\cb{J}\rightarrow C \kern -0.1em \ell_{p,q}$ be such that $\varphi \circ \pi = \psi$ where $\pi:\mathbb{R}[G]\rightarrow \mathbb{R}[G]/\cb{J}$ is the natural map. Then, since $\psi(g_j)=\mathbf{e}_j,$ $\pi(g_j)=g_j+\cb{J},$ we have
$\varphi(\pi(g_j)) = \varphi(g_j+\cb{J})=\psi(g_j)=\mathbf{e}_j$ and
\begin{align}
\pi(g_j)\pi(g_i)+\pi(g_i)\pi(g_j)
&=(g_j+\cb{J})(g_i+\cb{J}) + (g_j+\cb{J})(g_i+\cb{J}) \notag \\
&=(g_jg_i+g_ig_j) + \cb{J} = (1+\tau)g_jg_i + \cb{J} = \cb{J}
\end{align}
because $g_ig_j=\tau g_jg_i$ in $\mathbb{R}[G]$, $\tau$ is central, and $\cb{J}=(1+\tau).$ Thus, $g_j+\cb{J},g_i+\cb{J}$ anticommute in $\mathbb{R}[G]/\cb{J}$ when $i \neq j.$ Also,
\begin{gather}
\pi(g_i)\pi(g_i) = (g_i+\cb{J})(g_i+\cb{J}) = (g_i)^2+\cb{J} =
\begin{cases}
1 + \cb{J}, & \text{$1 \leq i \leq p$;}\\
\tau + \cb{J}, & \text{$p+1 \leq i \leq n$;}
\end{cases}
\end{gather}
due to the relations~(\ref{eq:squares}) on $g_i$ in $G$. Observe, that
\begin{gather}
\tau +\cb{J} = (-1)+(1+\tau) +\cb{J} = (-1)+\cb{J} \mbox{ in } \mathbb{R}[G]/\cb{J}.
\end{gather}
To summarize, the factor algebra $\mathbb{R}[G]/\cb{J}$ is generated by the cosets $g_i+\cb{J}$ which satisfy these relations:
\begin{subequations}
\begin{gather}
(g_j+\cb{J})(g_i+\cb{J}) + (g_j+\cb{J})(g_i+\cb{J}) = \cb{J},\\
(g_i)^2+\cb{J} =
\begin{cases}
1 + \cb{J}, & \text{$1 \leq i \leq p$;}\\
(-1) + \cb{J}, & \text{$p+1 \leq i \leq n$;}
\end{cases}
\end{gather}
\end{subequations}
Thus, the factor algebra $\mathbb{R}[G]/\cb{J}$ is a Clifford algebra isomorphic to $C \kern -0.1em \ell_{p,q}$ provided
$\cb{J}=(1+\tau)$ for the central element $\tau$ of order~$2$ in $G$.
\end{proof}
\section{Salingaros Vee Groups $G_{p,q} \subset C \kern -0.1em \ell_{p,q}^{\times}$}
Let $\dim_\mathbb{R} V=n$ and $Q$ be a non-degenerate quadratic form on~$V$:
\begin{align}
Q(\mathbf{x}) &= \varepsilon_1x_1^2 + \varepsilon_2x_2^2 + \cdots + \varepsilon_nx_n^2,
\label{eq:Q}
\end{align}
$\varepsilon_i = \pm 1$ and $\mathbf{x} = x_1\mathbf{e}_1 + \cdots + x_n\mathbf{e}_n \in V$ for an orthonormal basis $\cb{B}_1 = \{\mathbf{e}_i,1\le i \le n\}$. $Q$ has an arbitrary signature $-n \le p-q \le n$ where $p$ (resp. $q$) denotes the number of $+1$'s (resp. $-1$'s) in~(\ref{eq:Q}), and $p+q=n$. Let $C \kern -0.1em \ell_{p,q}$ be the universal Clifford algebra of $(V,Q)$ obtained, for example, via Chevalley's construction~\cite[Chapter 22]{lounesto}.
Then, let $\cb{B}$ be the canonical basis of $\bigwedge V$ generated by $\cb{B}_1,$
$[n]=\{1,2,\ldots,n\}$ and denote arbitrary, canonically ordered subsets of $[n]$, by underlined Roman characters. The basis elements of $\bigwedge V$, or, of $\clpq{p}{q}$ due to the linear space isomorphism $\bigwedge V \rightarrow \clpq{p}{q}$, can be indexed by these finite ordered subsets as $\mathbf{e}_{\underline{i}} = \wedge_{i \in {\underline{i}}}\, \mathbf{e}_i$.
Now, let $G_{p,q}$ be a finite group in any real Clifford algebra $C \kern -0.1em \ell_{p,q}$ (simple or semisimple) with the binary operation being just the Clifford product, namely:
\begin{equation}
G_{p,q} = \{ \pm \mathbf{e}_{\underline{i}} \; | \; \mathbf{e}_{\underline{i}} \in \cb{B} \; \mbox{with} \;
\mathbf{e}_{\underline{i}} \mathbf{e}_{\underline{j}} \; \mbox{denoting the Clifford product}\}.
\end{equation}
Thus, $G_{p,q}$ may be presented as follows:
\begin{gather}
G_{p,q}=\langle -1,\mathbf{e}_1,\ldots,\mathbf{e}_n \mid \mathbf{e}_i\mathbf{e}_j = -\mathbf{e}_j\mathbf{e}_i\, \mbox{ for }\, i \neq j \, \mbox{ and }
\mathbf{e}_i^2= \pm 1
\rangle ,
\label{eq:Gpq}
\end{gather}
where $\mathbf{e}_i^2=1$ for $1 \leq i \leq p$ and $\mathbf{e}_i^2=-1$ for $p+1 \leq i \leq n=p+q$. In the following, the elements $\mathbf{e}_{\underline{i}} = \mathbf{e}_{i_1}\mathbf{e}_{i_2}\cdots \mathbf{e}_{i_k}$ will be denoted for short as $\mathbf{e}_{i_1i_2\cdots i_k}$ for $k \ge 1$ while $\mathbf{e}_{\emptyset}$ will be denoted as $1,$ the identity element of $G_{p,q}$ (and $C \kern -0.1em \ell_{p,q}).$
This $2$-group of order $2 \cdot 2^{p+q} = 2^{n+1}$ is known as \textit{Salingaros vee group} and it has been discussed, for example, by Salingaros~\cite{salingaros1,salingaros2,salingaros3}, Varlamov~\cite{varlamov1,varlamov}, Helmstetter~\cite{helmstetter1}, Ab\l amowicz and Fauser~\cite{ablamowicz2,ablamowicz3},
Maduranga and Ab\l amowicz~\cite{maduranga2}, and most recently by Brown~\cite{brown}. We should recall here that $G_{p,q}$ is a discrete subgroup of $\mathbf{Pin}(p,q) \subset \mathbf{\Gamma}_{p,q}$ (Lipschitz group) (Lounesto~\cite{lounesto}).
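The presentation~(\ref{eq:Gpq}) makes $G_{p,q}$ easy to generate by machine. In the sketch below (our own encoding; any computer algebra system would do equally well), a basis blade is a frozenset of indices, the sign of $\mathbf{e}_{\underline{a}}\mathbf{e}_{\underline{b}}$ is obtained by counting the transpositions needed to merge the index sequences, and a repeated index $i$ contributes $+1$ for $i \le p$ and $-1$ otherwise:
\begin{verbatim}
def blade_mul(sa, A, sb, B, p):
    """Product of signed basis blades (sa, A)(sb, B) in Cl(p, q)."""
    sign, A, B = sa * sb, sorted(A), sorted(B)
    for b in B:                                   # anticommutation swaps
        sign *= (-1) ** sum(1 for a in A if a > b)
    for i in set(A) & set(B):                     # e_i^2 = +1 or -1
        sign *= 1 if i <= p else -1
    return sign, frozenset(set(A) ^ set(B))

def vee_group(p, q):
    """Close {+-1, e_1, ..., e_n} under the Clifford product."""
    gens = [(1, frozenset([i])) for i in range(1, p + q + 1)]
    G, frontier = {(1, frozenset()), (-1, frozenset())}, set(gens)
    while frontier:
        G |= frontier
        frontier = {blade_mul(s, A, t, B, p)
                    for (s, A) in G for (t, B) in gens} - G
    return G

assert len(vee_group(1, 1)) == 2 ** (1 + 1 + 1)   # |G_{1,1}| = 8 (D_8)
assert len(vee_group(0, 3)) == 2 ** (1 + 0 + 3)   # |G_{0,3}| = 16
\end{verbatim}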
In preparation for discussing properties of the groups $\Gpq{p}{q}$ and their related subgroups, we recall the definition of the derived subgroup $G' \subset G$ and a proposition that gives some of its properties~\cite{rotman}.
\begin{definition}
If $G$ is a group and $x,y \in G,$ then their \textit{commutator} $[x,y]$ is the element $xyx^{-1}y^{-1}.$ If $X$ and $Y$ are subgroups of $G$, then the \textit{commutator subgroup} $[X,Y]$ of $G$ is defined by
\begin{gather}
[X,Y]=\langle [x,y] \mid x \in X, y \in Y \rangle,
\end{gather}
that is, the group $[X,Y]$ is generated by all the commutators $[x,y].$ In particular, the
\textit{derived subgroup} $G'$ of $G$ is defined as $G'=[G,G].$
\end{definition}
\begin{proposition}
Let $G$ be a group.
\begin{itemize}
\item[(i)] $G'$ is a normal subgroup of $G$, and $G/G'$ is abelian.
\item[(ii)] If $H$ is a normal subgroup of G and $G/H$ is abelian, then $G'\subseteq H.$
\end{itemize}
\end{proposition}
\subsection{Transposition Anti-Involution in $\clpq{p}{q}$}
\label{sub:sectt}
Let us now recall the definition and some basic properties of a special anti-involution $\ta{T}{\varepsilon}$ in a Clifford algebra $\clpq{p}{q}$ referred to as ``transposition''. This anti-involution was introduced in \cite{ablamowicz1,ablamowicz2,ablamowicz3}, where its properties were investigated at length. In particular, it allowed for the introduction of a reciprocal basis in a Clifford algebra $\clpq{p}{q}$ and, subsequently, a new spinor product on spinor spaces, together with a classification of its (infinite) groups of invariance. In the following, we limit ourselves to reviewing certain finite groups appearing in this context.
\begin{definition}
The \textit{transposition} $\ta{T}{\varepsilon}$ of $C \kern -0.1em \ell_{p,q}$ is defined as:
\begin{gather}
\ta{T}{\varepsilon} :C \kern -0.1em \ell_{p,q} \rightarrow C \kern -0.1em \ell_{p,q},\qquad
\sum_{{\underline{i}} \in 2^{[n]}} u_{\underline{i}} \mathbf{e}_{{\underline{i}}} \mapsto
\sum_{{\underline{i}} \in 2^{[n]}} u_{\underline{i}} (\mathbf{e}_{{\underline{i}}})^{-1}
\label{eq:tpdef}
\end{gather}
\end{definition}
\noindent
It is the \textit{antipode} map $S$ known from the theory of group algebras $\mathbb{F}[G]$
\begin{gather}
\mathbb{F}[G] \rightarrow \mathbb{F}[G], \qquad \sum_{g\in G} \lambda_gg \mapsto \sum_{g\in G} \lambda_g g^{-1}
\label{eq:S}
\end{gather}
viewed as Hopf algebras~\cite{majid}. Here are a few of its properties and a few finite related groups. For more details, see \cite{ablamowicz1,ablamowicz2,ablamowicz3}.
\begin{itemize}
\item $\ta{T}{\varepsilon}$ is an anti-involution of $C \kern -0.1em \ell_{p,q}$ which reduces to reversion in
$C \kern -0.1em \ell_{p,0}$ and to conjugation in $C \kern -0.1em \ell_{0,q}$.
\item Depending on the value of $(p-q) \bmod 8$, where $(p,q)$ is the signature of $Q$, $\ta{T}{\varepsilon}$ gives rise to transposition, Hermitian complex, and Hermitian quaternionic conjugation of spinor representation matrices.
\item $\ta{T}{\varepsilon}(\mathbf{e}_{\underline{i}}) = \mathbf{e}_{\underline{i}}^{-1}$, hence $\ta{T}{\varepsilon}(\mathbf{e}_{\underline{i}}) = \mathbf{e}_{\underline{i}}$ (resp. $\ta{T}{\varepsilon}(\mathbf{e}_{\underline{i}}) = -\mathbf{e}_{\underline{i}}$) when $(\mathbf{e}_{\underline{i}})^2=1$ (resp. $(\mathbf{e}_{\underline{i}})^2=-1$), i.e., for elements of order 2 and 4, respectively, in $G_{p,q}$ (a computational sketch follows this list).
\item $\ta{T}{\varepsilon}(f) = f$ for any primitive idempotent $f$.
\item Let $S=C \kern -0.1em \ell_{p,q}f$ be a spinor (minimal left) ideal in a simple algebra $C \kern -0.1em \ell_{p,q}$ generated by a primitive idempotent~$f$. Then, $\ta{T}{\varepsilon}$ defines a dual spinor space
$S^\ast=\ta{T}{\varepsilon}(S)$ and a $\mathbb{K}$-valued, where $\mathbb{K}=f\clpq{p}{q}f,$ spinor norm $(\psi,\phi) = \ta{T}{\varepsilon}(\psi)\phi$ on $S$ invariant under (infinite) group $G_{p,q}^\varepsilon$ (with $G_{p,q} < G_{p,q}^\varepsilon$) different, in general, from spinor norms related to reversion and conjugation in $C \kern -0.1em \ell_{p,q}$.
\item $G_{p,q}$ act transitively on a complete set $\cb{F}$, $|\cb{F}|=2^{q-r_{q-p}}$, of mutually annihilating primitive idempotents where $r_i$ is the Radon-Hurwitz number. See a footnote in Appendix~A for a definition of $r_i$.
\item The normal stabilizer subgroup $G_{p,q}(f) \lhd G_{p,q}$ of $f$ is of order
$2^{1+p+r_{q-p}}$ and monomials $m_i$ in its (non-canonical) left transversal together with $f$ determine a spinor basis in $S$.
\item The stabilizer groups $G_{p,q}(f)$ and the invariance groups $G_{p,q}^\varepsilon$ of the spinor norm have been classified according to the signature $(p,q)$ for $(p+q) \leq 9$ in simple and semisimple algebras $C \kern -0.1em \ell_{p,q}$.
\item $G_{p,q}$ permutes the spinor basis elements modulo the commutator subgroup $G'_{p,q}$ by left multiplication.
\item The ring $\mathbb{K}=fC \kern -0.1em \ell_{p,q}f$ is $G_{p,q}$-invariant.
\end{itemize}
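The blade-level action of $\ta{T}{\varepsilon}$ noted in the list above reduces to a one-line computation once \texttt{blade\_mul} from the previous sketch is available: since $(\mathbf{e}_{\underline{i}})^2=\pm 1$, the inverse of a blade is $(\mathbf{e}_{\underline{i}})^{-1}=(\mathbf{e}_{\underline{i}})^2\,\mathbf{e}_{\underline{i}}$.
\begin{verbatim}
def transpose_blade(s, A, p):
    """Transposition on a signed blade: send e_A to its inverse."""
    sq, _ = blade_mul(1, A, 1, A, p)   # sq = (e_A)^2 = +1 or -1
    return (s * sq, A)

# In Cl(1,1): e_1 squares to +1, so it is fixed; e_2 squares to -1.
assert transpose_blade(1, frozenset([1]), 1) == (1, frozenset([1]))
assert transpose_blade(1, frozenset([2]), 1) == (-1, frozenset([2]))
\end{verbatim}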
\subsection{Important Finite Subgroups of $C \kern -0.1em \ell_{p,q}^{\times}$}
In this section, we summarize properties and definitions of some finite subgroups of the group of invertible elements $C \kern -0.1em \ell_{p,q}^{\times}$ in the Clifford algebra $\clpq{p}{q}.$ These groups were defined in~\cite{ablamowicz1,ablamowicz2,ablamowicz3}.
\begin{itemize}
\item $G_{p,q}$ -- Salingaros vee group of order $|G_{p,q}|=2^{1+p+q}$,
\item $G'_{p,q} = \{1,-1\}$ -- the commutator subgroup of $G_{p,q}$,
\item Let $\cb{O}(f)$ be the orbit of $f$ under the conjugate action of $G_{p,q}$, and let $G_{p,q}(f)$ be the stabilizer of~$f$. Let
\begin{gather}
N=|\cb{F}| = [G_{p,q}:G_{p,q}(f)]=|\cb{O}(f)|=|G_{p,q}|/|G_{p,q}(f)|=2\cdot 2^{p+q}/|G_{p,q}(f)|
\end{gather}
then $N=2^k$ (resp. $N=2^{k-1}$) for simple (resp. semisimple) $C \kern -0.1em \ell_{p,q}$ where
$k=q-r_{q-p}$ and $[G_{p,q}:G_{p,q}(f)]$ is the index of $G_{p,q}(f)$ in $G_{p,q}$.
\item $G_{p,q}(f) \lhd G_{p,q}$ and $|G_{p,q}(f)|=2^{1+p+r_{q-p}}$
(resp. $|G_{p,q}(f)|=2^{2+p+r_{q-p}}$) for simple (resp. semisimple) $C \kern -0.1em \ell_{p,q}.$
\item The set of commuting monomials $\cb{T}= \{\mathbf{e}_{{\underline{i}}_1},\ldots,\mathbf{e}_{{\underline{i}}_k}\}$ (squaring to $1$) in the primitive idempotent
$
f = \frac12(1\pm\mathbf{e}_{{\underline{i}}_1}) \cdots \frac12(1\pm\mathbf{e}_{{\underline{i}}_k})
$
is point-wise stabilized by $G_{p,q}(f).$
\item $T_{p,q}(f) := \langle \pm 1, \cb{T}\rangle
\cong \Gpq{p}{q}' \times \langle \mathbf{e}_{{\underline{i}}_1},\ldots, \mathbf{e}_{{\underline{i}}_k} \rangle
\cong \Gpq{p}{q}' \times (\mathbb{Z}_2)^k,$ the \textit{idempotent group} of $f$ with
$|\Tpqf{p}{q}{f}|=2^{1+k}$,
\item $\Kpqf{p}{q}{f} = \langle \pm 1, m \mid m \in \cb{K}\rangle < \Gpqf{p}{q}{f}$ -- the \textit{field group} of $f$, where $f$ is a primitive idempotent in $C \kern -0.1em \ell_{p,q}$,
$\mathbb{K}=fC \kern -0.1em \ell_{p,q}f$, and $\cb{K}$ is a set of monomials (a transversal) in~$\cb{B}$ which span~$\mathbb{K}$ as a real algebra. Thus,
\begin{gather}
|\Kpqf{p}{q}{f}| = \begin{cases} 2, & p - q = 0,1,2 \bmod 8;\\
4, & p - q = 3,7 \bmod 8;\\
8, & p - q = 4,5,6 \bmod 8.
\end{cases}
\label{eq:orderKpqf}
\end{gather}
\item $G_{p,q}^{\varepsilon} = \{g \in C \kern -0.1em \ell_{p,q} \mid \ta{T}{\varepsilon}(g)g=1\} $ (infinite group)
\end{itemize}
Before we state the main theorem from~\cite{ablamowicz3} that relates the above finite groups to the Salingaros vee groups, we recall the definition of a \textit{transversal}.
\begin{definition}
Let $K$ be a subgroup of a group $G$. A \textit{transversal} $\ell$ of $K$ in~$G$ is a subset of~$G$ consisting of exactly one element $\ell(bK)$ from every (left) coset $bK$, and with $\ell(K)=1$.
\end{definition}
\begin{theorem}[Main Theorem]
Let $f$ be a primitive idempotent in $C \kern -0.1em \ell_{p,q}$ and let $\Gpq{p}{q}$, $\Gpqf{p}{q}{f}$, $\Tpqf{p}{q}{f}$, $\Kpqf{p}{q}{f}$, and $\Gpq{p}{q}'$ be the groups defined above. Let $S=C \kern -0.1em \ell_{p,q}f$ and $\mathbb{K}=fC \kern -0.1em \ell_{p,q}f$.
\begin{itemize}
\item[(i)] Elements of $\Tpqf{p}{q}{f}$ and $\Kpqf{p}{q}{f}$ commute.
\item[(ii)] $\Tpqf{p}{q}{f} \cap \Kpqf{p}{q}{f} = \Gpq{p}{q}' = \{\pm 1 \}$.
\item[(iii)] $\Gpqf{p}{q}{f} = \Tpqf{p}{q}{f}\Kpqf{p}{q}{f} =
\Kpqf{p}{q}{f}\Tpqf{p}{q}{f}$.
\item[(iv)] $|\Gpqf{p}{q}{f}| = |\Tpqf{p}{q}{f}\Kpqf{p}{q}{f}| =
\frac12 |\Tpqf{p}{q}{f}||\Kpqf{p}{q}{f}|$.
\item[(v)] $\Gpqf{p}{q}{f} \lhd \Gpq{p}{q}$, $\Tpqf{p}{q}{f} \lhd \Gpq{p}{q}$,
and $\Kpqf{p}{q}{f} \lhd \Gpq{p}{q}$. In particular, $\Tpqf{p}{q}{f}$ and
$\Kpqf{p}{q}{f}$ are normal subgroups of $\Gpqf{p}{q}{f}$.
\item[(vi)] We have:
\begin{align}
\Gpqf{p}{q}{f} /\Kpqf{p}{q}{f} &\cong \Tpqf{p}{q}{f} /\Gpq{p}{q}',\\
\Gpqf{p}{q}{f} /\Tpqf{p}{q}{f} &\cong \Kpqf{p}{q}{f} /\Gpq{p}{q}'.
\label{eq:conj6}
\end{align}
\item[(vii)] We have:
\begin{gather}
(\Gpqf{p}{q}{f}/\Gpq{p}{q}')/(\Tpqf{p}{q}{f}/\Gpq{p}{q}')
\cong \Gpqf{p}{q}{f}/\Tpqf{p}{q}{f} \cong \Kpqf{p}{q}{f}/\{\pm 1 \}
\label{eq:conj7}
\end{gather}
and the transversal of $\Tpqf{p}{q}{f}$ in $\Gpqf{p}{q}{f}$ spans $\mathbb{K}$
over $\mathbb{R}$ modulo~$f$.
\item[(viii)] The transversal of $\Gpqf{p}{q}{f}$ in $\Gpq{p}{q}$ spans $S$
over $\mathbb{K}$ modulo~$f$.
\item[(ix)] We have $(\Gpqf{p}{q}{f}/\Tpqf{p}{q}{f}) \lhd
(\Gpq{p}{q}/\Tpqf{p}{q}{f})$ and
\begin{gather}
(\Gpq{p}{q}/\Tpqf{p}{q}{f})/(\Gpqf{p}{q}{f}/\Tpqf{p}{q}{f})
\cong \Gpq{p}{q}/\Gpqf{p}{q}{f}
\label{eq:conj8}
\end{gather}
and the transversal of $\Tpqf{p}{q}{f}$ in $\Gpq{p}{q}$ spans $S$ over
$\mathbb{R}$ modulo~$f$.
\item[(x)] The stabilizer $\Gpqf{p}{q}{f}$ can be viewed as
\begin{gather}
\Gpqf{p}{q}{f} = \bigcap_{x \in \Tpqf{p}{q}{f}} C_{\Gpq{p}{q}}(x)
= C_{\Gpq{p}{q}}(\Tpqf{p}{q}{f})
\end{gather}
where $C_{\Gpq{p}{q}}(x)$ is the centralizer of $x$ in $\Gpq{p}{q}$ and
$C_{\Gpq{p}{q}}(\Tpqf{p}{q}{f})$ is the centralizer of $\Tpqf{p}{q}{f}$ in
$\Gpq{p}{q}$.
\end{itemize}
\label{maintheorem}
\end{theorem}
\subsection{Summary of Some Basic Properties of Salingaros Vee Groups $G_{p,q}$}
In the following, we summarize some basic properties of Salingaros vee groups $G_{p,q}$.
\begin{itemize}
\item $|G_{p,q}| =2^{1+p+q}$, $|G'_{p,q}|= 2$ because $G'_{p,q}= \{\pm 1\}$,
\item When $p+q\geq 1,$ $G_{p,q}$ is not simple as it has a nontrivial normal subgroup of order $2^m$ for every $m < 1+p+q$ (because every $p$-group of order $p^n$ has a normal subgroup of order $p^m$ for every $m \leq n$).
\item When $p+q\geq 1,$ the center of any group $G_{p,q}$ is non-trivial since $2 \mid |Z(G_{p,q})|$ and so every group $G_{p,q}$ has a central element $\tau$ of order $2$. It is well-known that for any prime $p$ and a finite $p$-group $G \neq \{1\}$, the center of $G$ is non-trivial (Rotman~\cite{rotman}).
\item Every element of $G_{p,q}$ is of order $1,$ $2,$ or $4$.
\item Since $[G_{p,q}:G'_{p,q}] = |G_{p,q}|/|G'_{p,q}| = 2^{p+q},$ each $G_{p,q}$
has $2^{p+q}$ linear characters (James and Liebeck~\cite{james}).
\item The number $N$ of conjugacy classes in $G_{p,q}$, hence, the number of irreducible inequivalent representations of $G_{p,q}$, is $1+2^{p+q}$ (resp. $2+2^{p+q}$) when $p+q$ is even (resp. odd)
(Maduranga~\cite{maduranga}).
\item We have the following result (see also Varlamov~\cite{varlamov}):
\begin{theorem}
\label{centerofGpq}
Let $G_{p,q} \subset C \kern -0.1em \ell_{p,q}$. Then,
\begin{gather}
Z(G_{p,q})=
\begin{cases}
\{\pm 1\} \cong \mathbb{Z}_2 & \text{if $p-q \equiv 0,2,4,6 \pmod{8}$};\\
\{\pm 1,\pm \beta\} \cong \mathbb{Z}_2 \times \mathbb{Z}_2 & \text{if $p-q \equiv 1,5 \pmod{8}$};\\
\{\pm 1,\pm \beta\} \cong \mathbb{Z}_4 & \text{if $p-q \equiv 3,7 \pmod{8}$}.
\end{cases}
\end{gather}
\label{ZGpqlemma}
\end{theorem}
\hspace*{-2ex}as a consequence of the fact that $Z(C \kern -0.1em \ell_{p,q})$ is spanned by $\{1\}$ (resp. $\{1,\beta\}$) when $p+q$ is even (resp. odd), where $\beta=\mathbf{e}_1\mathbf{e}_2\cdots \mathbf{e}_n,$ $n=p+q,$ is the unit pseudoscalar in~$C \kern -0.1em \ell_{p,q}$.
\item In Salingaros' notation, the five isomorphism classes denoted as $N_{2k-1},N_{2k},\Omega_{2k-1},\Omega_{2k},S_k$ correspond to our notation $\Gpq{p}{q}$ as follows:
\begin{table}[h]
\begin{center}
\caption{Vee groups $G_{p,q}$ in Clifford algebras $C \kern -0.1em \ell_{p,q}$}
\label{t1}
\renewcommand{\arraystretch}{1.0}
\begin{tabular}{ | c | c | c | c |}
\hline
\multicolumn{1}{|c|}{Group} &
\multicolumn{1}{|c|}{Center} &
\multicolumn{1}{|c|}{Group order}&
\multicolumn{1}{|c|}{$\dim_\mathbb{R} C \kern -0.1em \ell_{p,q}$}
\\ \hline
$N_{2k-1}$
& \multicolumn{1}{|c|}{$\mathbb{Z}_2$}
& \multicolumn{1}{|c|}{$2^{2k+1}$}
& \multicolumn{1}{|c|}{$2^{2k}$}
\\\hline
$N_{2k}$
& \multicolumn{1}{|c|}{$\mathbb{Z}_2$}
& \multicolumn{1}{|c|}{$2^{2k+1}$}
& \multicolumn{1}{|c|}{$2^{2k}$}
\\\hline
$\Omega_{2k-1}$
& \multicolumn{1}{|c|}{$\mathbb{Z}_2 \times \mathbb{Z}_2$}
& \multicolumn{1}{|c|}{$2^{2k+2}$}
& \multicolumn{1}{|c|}{$2^{2k+1}$}
\\\hline
$\Omega_{2k}$
& \multicolumn{1}{|c|}{$\mathbb{Z}_2 \times \mathbb{Z}_2$}
& \multicolumn{1}{|c|}{$2^{2k+2}$}
& \multicolumn{1}{|c|}{$2^{2k+1}$}
\\\hline
$S_{k}$
& \multicolumn{1}{|c|}{$\mathbb{Z}_4$}
& \multicolumn{1}{|c|}{$2^{2k+2}$}
& \multicolumn{1}{|c|}{$2^{2k+1}$}
\\\hline
\end{tabular}
\end{center}
\end{table}
\begin{align*}
N_{2k-1} & \leftrightarrow G_{p,q} \subset C \kern -0.1em \ell_{p,q},\,\, p-q \equiv 0,2 \pmod 8,
\,\, \mathbb{K} \cong \mathbb{R};\\[-0.5ex]
N_{2k} & \leftrightarrow G_{p,q} \subset C \kern -0.1em \ell_{p,q},\,\, p-q \equiv 4,6 \pmod 8,
\,\, \mathbb{K} \cong \mathbb{H};\\[-0.5ex]
\Omega_{2k-1} & \leftrightarrow G_{p,q} \subset C \kern -0.1em \ell_{p,q},\,\, p-q \equiv 1 \pmod 8,
\,\, \mathbb{K} \cong \mathbb{R} \oplus \mathbb{R};\\[-0.5ex]
\Omega_{2k} & \leftrightarrow G_{p,q} \subset C \kern -0.1em \ell_{p,q},\,\, p-q \equiv 5 \pmod 8,
\,\, \mathbb{K} \cong \mathbb{H} \oplus \mathbb{H};\\[-0.5ex]
S_{k} & \leftrightarrow G_{p,q} \subset C \kern -0.1em \ell_{p,q},\,\, p-q \equiv 3,7 \pmod 8,
\,\, \mathbb{K} \cong \mathbb{C}.
\end{align*}
(Salingaros~\cite{salingaros1,salingaros2,salingaros3}, Brown~\cite{brown}, Varlamov~\cite{varlamov})
\end{itemize}
\noindent
The first few vee groups $\Gpq{p}{q}$ of low orders $4,8,16$ corresponding to Clifford algebras $C \kern -0.1em \ell_{p,q}$ in dimensions $p+q=1,2,3$, are:
\begin{align*}
\mbox{Groups of order $4$:}\quad G_{1,0}&=D_4 ,\quad G_{0,1}=\mathbb{Z}_4,\\
\mbox{Groups of order $8$:}\quad G_{2,0}&=D_8 = N_1,\quad G_{1,1}=D_8 = N_1,\quad G_{0,2}=Q_8 = N_2,\\
\mbox{Groups of order $16$:}\quad G_{3,0}&=S_1, \quad G_{2,1}=\Omega_1, \quad G_{1,2}=S_1, \quad G_{0,3}=\Omega_2,
\end{align*}
where $D_8$ is the dihedral group of a square, $Q_8$ is the quaternionic group, and
$D_4 \cong \mathbb{Z}_2 \times \mathbb{Z}_2.$ For a construction of inequivalent irreducible representations and characters of these groups see~Maduranga and Ab\l amowicz~\cite{maduranga2}, and Maduranga~\cite{maduranga}.
\section{Central Product Structure of $G_{p,q}$}
We recall first a few definitions and results pertaining to finite $p$-groups that will be needed in the sequel.
\begin{definition}[Gorenstein \cite{gorenstein}]
A finite abelian $p$-group is \textit{elementary abelian} if every nontrivial element has order~$p$.
\end{definition}
\begin{example}($D_4 \cong \mathbb{Z}_2 \times \mathbb{Z}_2$ is elementary abelian)\\
$(\mathbb{Z}_p)^k = \mathbb{Z}_p \times \cdots \times \mathbb{Z}_p$ ($k$ times); in particular $\mathbb{Z}_2$,
$\mathbb{Z}_2 \times \mathbb{Z}_2$, etc., are elementary abelian.
\label{ex:example3}
\end{example}
\begin{definition}[Dornhoff \cite{dornhoff}]
A finite $p$-group $P$ is \textit{extra-special} if (i) $P'=Z(P),$ (ii) $|P'| = p,$ and (iii) $P/P'$ is elementary abelian.
\end{definition}
\begin{example}($D_8$ is extra-special)\\
$D_8=\langle a,b \mid a^4=b^2=1, bab^{-1}=a^{-1}\rangle$ is extra-special because:
\begin{itemize}
\item $Z(D_8) = D_8'=[D_8,D_8] = \langle a^2 \rangle$, $|Z(D_8)|=2,$
\item $D_8/D_8'=D_8/Z(D_8) =
\{ \langle a^2 \rangle, a\langle a^2 \rangle,
b\langle a^2 \rangle, ab\langle a^2 \rangle\} \cong \mathbb{Z}_2 \times \mathbb{Z}_2.$
\end{itemize}
\label{ex:example4}
\end{example}
\begin{example}($Q_8$ is extra-special)\\
$Q_8=\langle a,b \mid a^4=1, a^2=b^2, bab^{-1}=a^{-1}\rangle$ is extra-special because:
\begin{itemize}
\item $Z(Q_8) = Q_8'=[Q_8,Q_8] = \langle a^2 \rangle$, $|Z(Q_8)|=2,$
\item $Q_8/Q_8'=Q_8/Z(Q_8) =
\{ \langle a^2 \rangle, a\langle a^2 \rangle,
b\langle a^2 \rangle, ab\langle a^2 \rangle\} \cong \mathbb{Z}_2 \times \mathbb{Z}_2.$
\end{itemize}
\label{ex:example5}
\end{example}
Let us recall now definitions of internal and external central products of groups.
\begin{definition}[Gorenstein \cite{gorenstein}]
\leavevmode
\begin{enumerate}
\item A group $G$ is an \textit{internal central product} of two subgroups $H$ and $K$ if:
\begin{enumerate}
\item $[H,K] = \langle 1 \rangle$;
\item $G = HK$;
\end{enumerate}
\item A group $G$ is an \textit{external central product} $H \circ K$ of two groups $H$ and $K$ with $H_{1} \leq Z(H)$ and $K_{1} \leq Z(K)$ if there exists an isomorphism
$\theta :H_{1} \rightarrow K_{1}$ such that $G$ is $(H \times K)/N$ where
$$
N = \lbrace (h,\theta (h^{-1})) \mid h \in H_{1} \rbrace.
$$
Clearly: $N \lhd (H \times K)$ and $|H \circ K| = |H||K|/|N| \leq |H \times K|=|H||K|.$
\end{enumerate}
\end{definition}
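As a quick illustration of part 2, take $H = K = D_8$ with $H_1 = K_1 = Z(D_8) \cong \mathbb{Z}_2$ identified via $\theta = \mathrm{id}$. Then
\begin{gather*}
|D_8 \circ D_8| = {|D_8||D_8| \over |N|} = {64 \over 2} = 32 = 2^{2 \cdot 2 + 1},
\end{gather*}
which is consistent with the corollary below, since $\Gpq{3}{1} \cong D_8 \circ D_8$ and $|\Gpq{3}{1}| = 2^{1+3+1} = 32$.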
Here we recall an important result on extra-special $p$-groups as central products.
\begin{lemma}[Leedham-Green and McKay \cite{mckay}]
An extra-special $p$-group has order $p^{2n + 1}$ for some positive integer~$n$, and is the iterated central product of non-abelian groups of order~$p^{3}$.
\end{lemma}
As a consequence, we have the following lemma and a corollary. For their proofs, see~\cite{brown}.
\begin{lemma}
$Q_{8} \circ Q_{8} \cong D_{8} \circ D_{8} \ncong D_{8} \circ Q_{8}$, where $D_{8}$ is the dihedral group of order~8 and $Q_{8}$ is the quaternion group.
\label{lem:lem2}
\end{lemma}
\begin{corollary}
\leavevmode
\begin{itemize}
\item $\Gpq{3}{1} \cong D_8 \circ D_8 \cong Q_8 \circ Q_8,$
\item $\Gpq{4}{0} \cong D_8 \circ Q_8 \cong Q_8 \circ D_8.$
\end{itemize}
\end{corollary}
The following theorem is of critical importance for understanding the central product structure of Salingaros vee groups.
\begin{theorem}[Leedham-Green and McKay \cite{mckay}]
There are exactly two isomorphism classes of extra-special groups of order $2^{2n + 1}$ for positive integer $n$. One isomorphism type arises as the iterated central product of $n$ copies of $D_{8}$; the other as the iterated central product of $n$ groups isomorphic to $D_{8}$ and $Q_{8}$, including at least one copy of $Q_{8}$.
That is,
\begin{enumerate}
\item[1:] $D_{8} \circ D_{8} \circ \cdots \circ D_{8} \circ D_{8}$, or,
\item[2:] $D_{8} \circ D_{8} \circ \cdots \circ D_{8} \circ Q_{8}$.
\end{enumerate}
where it is understood that these are iterated central products; that is, $D_{8} \circ D_{8} \circ D_{8}$ is really $(D_{8} \circ D_{8}) \circ D_{8}$ and so on.
\label{ther:ther1}
\end{theorem}
Thus, the above theorem explains the following theorem due to Salingaros regarding the iterated central product structure of the finite $2$-groups named after him.
\begin{theorem}[Salingaros Theorem \cite{salingaros3}]
Let $ {N_{1}} = {D_{8}}$,
$ {N_{2}} = {Q_{8}}$, and
$(G)^{\circ k}$ be the iterated central product
$G \circ G \circ \dots \circ G$ ($k$ times) of $G$.
Then, for $k \geq 1$:
\begin{enumerate}
\item $ {N_{2k-1}} \cong ( {N_1})^{\circ k}
=( {D_8})^{\circ k}$,
\item $ {N_{2k}} \cong ( {N_1})^{\circ k} \circ
{N_2} = ( {D_8})^{\circ (k-1)} \circ
{Q_8}$,
\item $\Omega_{2k-1} \cong {N_{2k-1}} \circ
( {\mathbb{Z}_2 \times \mathbb{Z}_2})
=( {D_8})^{\circ k} \circ ( {\mathbb{Z}_2 \times \mathbb{Z}_2})$,
\item $\Omega_{2k} \cong {N_{2k}} \circ
( {\mathbb{Z}_2 \times \mathbb{Z}_2})
=( {D_8})^{\circ (k-1)} \circ {Q_8}
\circ ( {\mathbb{Z}_2 \times \mathbb{Z}_2})$,
\item $S_k \cong {N_{2k-1}} \circ \mathbb{Z}_4 \cong {N_{2k}} \circ \mathbb{Z}_4
=( {D_8})^{\circ k} \circ \mathbb{Z}_4 \cong
( {D_8})^{\circ (k-1)} \circ {Q_8} \circ \mathbb{Z}_4 $.
\end{enumerate}
\end{theorem}
\noindent
In the above theorem:
\begin{itemize}
\item $\mathbb{Z}_2,$ $\mathbb{Z}_4$ are cyclic groups of order~$2$ and~$4$, respectively;
\item $D_8$ and $Q_8$ are the dihedral group of a square and the quaternionic group;
\item $ {\mathbb{Z}_2 \times \mathbb{Z}_2}$ is elementary abelian of order~$4$;
\item $ {N_{2k-1}}$ and $ {N_{2k}}$ are extra-special of order $2^{2k+1}$;
e.g., $ {N_1}= {D_8}$ and $ {N_2}= {Q_8}$;
\item $\Omega_{2k-1},\Omega_{2k}, S_k$ are of order $2^{2k+2}$.
\item $\circ$ denotes the iterated central product of groups; e.g.,
$({D_8})^{\circ k}$ denotes the iterated central product of $k$ copies of $ {D_8}$.
\end{itemize}
We can tabulate the above results for Salingaros vee groups $\Gpq{p}{q}$ of orders $\leq 256,$ $(p+q \leq 7)$ (Brown~\cite{brown}) in the following table:
\begin{table}[ht]
\renewcommand{\arraystretch}{1.5}
\caption{Salingaros vee groups $\Gpq{p}{q}$ with $|\Gpq{p}{q}| \leq 256$}
\begin{center}
\begin{tabular}{|c|c|} \hline
Isomorphism Class & Salingaros Vee Groups \\ \hline
$N_{2k}$ & $N_{0} \cong G_{0,0},\; N_{2} \cong Q_{8} \cong G_{0,2},\; N_{4} \cong G_{4,0},\; N_{6} \cong G_{6,0}$ \\ \hline
$N_{2k-1}$ & $N_{1} \cong D_{8} \cong G_{2,0},\; N_{3} \cong G_{3,1},\; N_{5} \cong G_{0,6}$ \\ \hline
$\Omega_{2k}$ & $\Omega_{0} \cong G_{1,0},\; \Omega_{2} \cong G_{0,3},\; \Omega_{4} \cong G_{5,0},\; \Omega_{6} \cong G_{6,1}$ \\ \hline
$\Omega_{2k-1}$ & $\Omega_{1} \cong G_{2,1},\; \Omega_{3} \cong G_{3,2},\; \Omega_{5} \cong G_{0,7}$ \\ \hline
$S_{k}$ & $S_{0} \cong G_{0,1},\; S_{1} \cong G_{3,0},\; S_{2} \cong G_{4,1},\; S_{3} \cong G_{7,0}$ \\ \hline
\end{tabular}
\end{center}
\end{table}
\section{Clifford Algebras Modeled with Walsh Functions}
Until now, the finite $2$-groups such as the Salingaros vee groups $\Gpq{p}{q}$ have appeared either as finite subgroups of the group of units $\clpq{p}{q}^{\times}$ in the Clifford algebra, or as groups whose group algebra, modulo a certain ideal generated by $1+\tau$ for some central element $\tau$ of order $2$, is isomorphic to the given Clifford algebra $\clpq{p}{q}.$ In these last two sections, we recall how the (elementary abelian) group $(\mathbb{Z}_2)^n$ can be used to define a Clifford product on a suitable vector space.
In this section, we recall the well-known construction of the Clifford product on the set of monomial terms $\mathbf{e}_{\underline{a}}$ indexed by binary $n$-tuples $\underline{a} \in (\mathbb{Z}_2)^n$, which, when extended by linearity, endows the set with the structure of the Clifford algebra $\clpq{p}{q}.$ This approach is described in
Lounesto~\cite[Chapter 21]{lounesto}. We will show how it can be extended to Clifford algebras
$\clpqr{p}{q}{r}$ over (real) quadratic vector spaces with degenerate quadratic forms.
In the last section we will briefly mention the approach of Albuquerque and Majid~\cite{majid} in which the Clifford algebra structure is introduced in a suitably twisted group algebra $\mathbb{R}^t[(\mathbb{Z}_2)^n]$ using Hopf algebraic methods.
Let $\cb{B}^n=\{\underline{a} = a_1a_2\ldots a_n \mid a_i =0,1\}$ be the group of binary $n$-tuples under the componentwise addition $\oplus$ defined by $(\underline{a} \oplus \underline{b})_i = a_i + b_i \bmod 2$, so that $\cb{B}^n \cong (\mathbb{Z}_2)^n$.
\begin{definition}[Walsh function]
A \textit{Walsh function} $w_{\underline{a}}$ indexed by $\underline{a} \in \cb{B}^n$ is a function
from $\cb{B}^n$ to the multiplicative group $\{\pm 1\}$ defined as
\begin{gather}
w_{\underline{a}}(\underline{b}) = (-1)^{\sum_{i=1}^{n} a_ib_i} = \pm 1, \quad \underline{a},\underline{b} \in \cb{B}^n,
\end{gather}
which satisfies $w_{\underline{k}}(\underline{a} \oplus \underline{b}) = w_{\underline{k}}(\underline{a}) w_{\underline{k}}(\underline{b})$ and
$w_{\underline{a}}(\underline{b}) = w_{\underline{b}}(\underline{a})$.
\end{definition}
Observe that the first condition on $w_{\underline{a}}$ simply states that each $w_{\underline{a}}$ is a group homomorphism from $\cb{B}^n$ to the group $\{\pm 1\}.$
\begin{definition}[Gray code]
A \textit{Gray code} $g: \cb{B}^n\rightarrow \cb{B}^n$ with the property
$g(\underline{a} \oplus \underline{b}) = g(\underline{a}) \oplus g(\underline{b})$ is defined as
\begin{gather}
g(\underline{k})_1 = k_1, \quad g(\underline{k})_i=k_{i-1}+k_i \bmod 2, \quad i=2, \ldots, n.
\end{gather}
Thus, $g$ is a group isomorphism which reorders Walsh functions into a \textit{sequency order} with a
\textit{single digit change code}~\cite[Section 21.2, page 281]{lounesto}.
\end{definition}
Given that the Gray code $g$ is an isomorphism, Lounesto defines its inverse $h: \cb{B}^n\rightarrow \cb{B}^n$ as
\begin{gather}
h(\underline{a})_i = \sum_{j=1}^i a_j \bmod 2.
\end{gather}
\noindent
Now, take an $\mathbb{R}$-vector space $\cb{A}$ with a basis consisting of $2^n$ elements
$\mathbf{e}_{\underline{a}}$ labeled by the binary $n$-tuples $\underline{a}=a_1a_2\ldots a_n$ as
\begin{gather}
\mathbf{e}_{\underline{a}} = \mathbf{e}_1^{a_1}\mathbf{e}_2^{a_2} \cdots \mathbf{e}_n^{a_n}, \quad a_i = 0,1;
\end{gather}
for some $n$ symbols $\mathbf{e}_1,\mathbf{e}_2,\ldots,\mathbf{e}_n,$ and define an algebra product on
$\cb{A}$ which on the basis elements $\mathbf{e}_{\underline{a}}$ is computed as follows:
\begin{gather}
\mathbf{e}_{\underline{a}}\mathbf{e}_{\underline{b}} = (-1)^{\sum_{i=1}^p a_ib_i} w_{\underline{a}}(h(\underline{b}))\mathbf{e}_{\underline{a} \oplus \underline{b}},
\label{eq:eaeb}
\end{gather}
for some $1\leq p \leq n.$ Then, together with this product, $\cb{A}$ becomes the Clifford algebra $C \kern -0.1em \ell_{p,q}$, where $q=n-p$, of a non-degenerate quadratic form~$Q$ of signature $(p,q)$. See
Lounesto \cite[Page 284]{lounesto} and his reference to~(\ref{eq:eaeb}) as the formula of
Brauer and Weyl from 1935~\cite{brauer}.
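To make the construction concrete, the following short Python sketch implements the Walsh functions, the inverse Gray code $h$ and the product~(\ref{eq:eaeb}), and checks that in $C \kern -0.1em \ell_{2,1}$ the generators square to $+1,+1,-1$, as dictated by the signature. The code and all names in it are ours, written only for illustration; they are not part of any package mentioned in this paper.
{\Fontv
\begin{lstlisting}[language = Python]
# A minimal sketch of the Brauer-Weyl product (eq:eaeb) on binary n-tuples.
def walsh(a, b):
    # w_a(b) = (-1)^{sum_i a_i b_i}
    return (-1) ** sum(x * y for x, y in zip(a, b))

def h(a):
    # inverse Gray code: h(a)_i = a_1 + ... + a_i mod 2
    s, out = 0, []
    for x in a:
        s = (s + x) % 2
        out.append(s)
    return tuple(out)

def oplus(a, b):
    # componentwise addition mod 2
    return tuple((x + y) % 2 for x, y in zip(a, b))

def cmul(a, b, p):
    # e_a e_b in Cl_{p,q}, q = n - p: returns (sign, resulting monomial)
    twist = (-1) ** sum(a[i] * b[i] for i in range(p))
    return twist * walsh(a, h(b)), oplus(a, b)

n, p = 3, 2                      # Cl_{2,1}
for i in range(n):
    e = tuple(1 if j == i else 0 for j in range(n))
    sign, mon = cmul(e, e, p)
    assert mon == (0,) * n
    print("e_%d^2 = %+d" % (i + 1, sign))   # prints +1, +1, -1
\end{lstlisting}
}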
\begin{remark}
Observe that if the scalar factor in front of $\mathbf{e}_{\underline{a} \oplus \underline{b}}$ in~(\ref{eq:eaeb}) were set to be identically equal to~$1$, then we would have $\mathbf{e}_{\underline{a}}\mathbf{e}_{\underline{b}}=\mathbf{e}_{\underline{b}}\mathbf{e}_{\underline{a}}$ for any
$\mathbf{e}_{\underline{a}},\mathbf{e}_{\underline{b}} \in \cb{A}.$ Thus, the algebra $\cb{A}$ would be isomorphic to the (abelian) group algebra $\mathbb{R}[G]$ where $G \cong (\mathbb{Z}_2)^n.$ That is, the scalar factor introduces a twist in the algebra product in $\cb{A}$ and so it makes~$\cb{A},$ hence the Clifford algebra $\clpq{p}{q},$ isomorphic to the twisted group algebra~$\mathbb{R}^t[(\mathbb{Z}_2)^n]$.
\end{remark}
Formula~(\ref{eq:eaeb}) is encoded as a procedure \texttt{cmulWalsh3} in \texttt{CLIFFORD}, a Maple package for computations with Clifford algebras~\cite{clifford,parallelizing}. It has the following pseudo-code.
{\Fontv
\begin{lstlisting}[language = GAP]
cmulWalsh3:=proc(eI::clibasmon,eJ::clibasmon,B1::{matrix,list(nonnegint)})
local a,b,ab,monab,Bsig,flag,i,dim_V_loc,ploc,qloc,_BSIGNATUREloc;
global dim_V,_BSIGNATURE,p,q;
options `Copyright (c) 2015-2016 by Rafal Ablamowicz and Bertfried Fauser.
All rights reserved.`;
if type(B1,list) then
ploc,qloc:=op(B1);
dim_V_loc:=ploc+qloc:
_BSIGNATUREloc:=[ploc,qloc]:
else
ploc,qloc:=p,q;        # this reads the global p, q
dim_V_loc:=dim_V:      # this reads the global dim_V
_BSIGNATUREloc:=[ploc,qloc]:
if not _BSIGNATURE=[ploc,qloc] then _BSIGNATURE:=[p,q] end if:
end if:
a:=convert(eI,clibasmon_to_binarytuple,dim_V_loc);
b:=convert(eJ,clibasmon_to_binarytuple,dim_V_loc);
ab:=oplus(a,b);
monab:=convert(ab,binarytuple_to_clibasmon);
return twist(a,b,_BSIGNATUREloc)*Walsh(a,hinversegGrayCode(b))*monab;
end proc:
\end{lstlisting}
}
Now let us take a real quadratic vector space $(V,Q)$ with a degenerate quadratic form $Q$ such that
$\dim V^{\perp} = r,$ while $Q$ restricted to the orthogonal complement of $V^{\perp}$ in $V$ has signature $(p,q)$ (so $\dim V = n = p + q + r$), and we choose a basis $\mathbf{e}_i,$ $1\leq i \leq n,$ such that
$Q(\mathbf{e}_i)=1$ for $1 \leq i \leq p,$ $Q(\mathbf{e}_i)=-1$ for $p+1 \leq i \leq p+q,$ and $Q(\mathbf{e}_i)=0$ for $p+q+1 \leq i \leq p+q+r$. We can now generate a universal Clifford algebra as the graded tensor product $\clpqr{p}{q}{r} \cong \clpq{p}{q} \hotimes \bigwedge V^{\perp}$ with a Clifford product obtained by modifying the above formula~(\ref{eq:eaeb}) as follows: we introduce an extra scalar factor in front of
$\mathbf{e}_{\underline{a} \oplus \underline{b}}$. This factor equals $1$ if the monomial elements
$\mathbf{e}_{\underline{a}}$ and $\mathbf{e}_{\underline{b}}$ do not share a common basis element $\mathbf{e}_i$ which squares to $0$ in $\clpqr{p}{q}{r}$, that is, one with $Q(\mathbf{e}_i)=0$, and it equals $0$ if they do.
The pseudo-code of such a modified procedure, called \texttt{cmulWalsh3pqr} and encoded in a new experimental package \texttt{eClifford} for computations in $C \kern -0.1em \ell_{p,q,r}$~\cite{eclifford}, is as follows.
{\Fontv
\begin{lstlisting}[language = GAP]
cmulWalsh3pqr:=proc(eI::eclibasmon,eJ::eclibasmon,B1::list(nonnegint))
local ploc,qloc,rloc,dim_V_loc,_BSIGNATUREloc,a,b,ab,monab,maxmaxindex,r_factor;
global twist,Walsh,hinversegGrayCode,oplus;
options `Copyright (c) 2015-2016 by Rafal Ablamowicz and Bertfried Fauser.
All rights reserved.`;
if nops(B1)=2 then
ploc,qloc:=op(B1);
rloc:=0;
elif nops(B1)=3 then
ploc,qloc,rloc:=op(B1);
else
error `three non-negative integers p,q,r are needed in the list entered as
the last argument but received \%1 instead`,B1
end if;
dim_V_loc:=ploc+qloc+rloc:
maxmaxindex:=max(op(eClifford:-eextract(eI)),op(eClifford:-eextract(eJ)));
if evalb(maxmaxindex>dim_V_loc) then
error `maximum index \%1 found in the arguments \%2 and \%3 is larger
than dim_V = \%4 derived from the last argument \%5`,
maxmaxindex,eI,eJ,dim_V_loc,B1
end if;
_BSIGNATUREloc:=[ploc,qloc]:
a:=convert(eI,eclibasmon_to_binarytuple,dim_V_loc);
b:=convert(eJ,eclibasmon_to_binarytuple,dim_V_loc);
if rloc=0 then
r_factor:=1
else
r_factor:=mul((1+(-1)^(a[i]*b[i]))/2,i=ploc+qloc+1..(ploc+qloc+rloc));
end if;
if r_factor=0 then return 0 else
ab:=oplus(a,b);
monab:=convert(ab,binarytuple_to_eclibasmon);
return twist(a,b,_BSIGNATUREloc)*Walsh(a,hinversegGrayCode(b))*monab;
end if;
end proc:
\end{lstlisting}
}
\noindent
In the above, the code lines 25-33 accommodate the additional factor called \texttt{r\_factor}, which equals $1$ or $0$ as indicated above\footnote{Note that such a factor can also be computed by an \texttt{XOR} operation~\cite{private}.}. In particular, the Clifford algebra $\clpqr{0}{0}{n} \cong \bigwedge V$, the exterior (Grassmann) algebra.
\section{Clifford Algebras $C \kern -0.1em \ell_{p,q}$ as Twisted Group Algebras}
In this last section we give a formal definition of a \textit{twisted group ring} (algebra) following
Passman~\cite[Section 2]{passman}, and briefly refer to the paper by Albuquerque and Majid~\cite{majid} in which the authors discuss twisting of a real group algebra of $(\mathbb{Z}_2)^n$ by using Hopf algebraic methods.
\begin{definition}[Passman \cite{passman}]
The \textit{twisted group ring} $k^t[G]$~\cite[Sect. 2]{passman}, where $k$ is a field, is an associative $k$-algebra with a basis $\{\bar{x} \mid x \in G\}$ and multiplication defined distributively for all $x,y \in G$ as
\begin{gather}
\bar{x} \bar{y} = \gamma(x,y)\, \overline{xy}, \qquad \gamma(x,y) \in k^{\times} = k \setminus \{0\},
\end{gather}
where the function $\gamma: G \times G \rightarrow k^{\times}$ satisfies
\begin{gather}
\gamma(x,y)\gamma(xy,z) = \gamma(y,z) \gamma(x,yz), \quad \forall x,y,z \in G \quad \mbox{(cocycle condition)}
\end{gather}
to assure associativity
$(\bar{x} \bar{y}) \, \bar{z} = \bar{x} \, (\bar{y} \, \bar{z})$ in $k^t[G]$ for any $x,y,z \in G.$
\end{definition}
\begin{lemma}[Passman \cite{passman}]
The following relations hold in $k^t[G]$.
\begin{itemize}
\item[(i)] $\gamma(1,1)^{-1}\overline{1}$ is the identity in $k^t[G]$;
\item[(ii)] $\bar{x}^{-1}= \gamma(x,x^{-1})\gamma(1,1)^{-1} \overline{x^{-1}}
= \gamma(x^{-1},x)\gamma(1,1)^{-1} \overline{x^{-1}},\, \forall x \in G$;
\item[(iii)] $(\bar{x}\bar{y})^{-1} = \bar{y}^{-1}\bar{x}^{-1},\, \forall x, y \in G$.
\end{itemize}
\label{lem:lemp}
\end{lemma}
\noindent
If $\gamma(1,1)=1$ in part (i) of the above lemma, then we call $\gamma$ \textit{normalized}, which can always be achieved by scaling. In part (ii), the inverse $\bar{x}^{-1}$ is the result of the action of the antipode on $\bar{x}$ in the Hopf algebra sense, or, it can be viewed as the (un-normalized) action of the transposition map $\ta{T}{\varepsilon}$ introduced in \cite{ablamowicz1,ablamowicz2,ablamowicz3} and mentioned in Section~\ref{sub:sectt}.
For a Hopf algebraic discussion of Clifford algebras $C \kern -0.1em \ell_{p,q}$ as twisted group
algebras $\mathbb{R}^t[(\mathbb{Z}_2)^n]$, where the twisting is accomplished via a $2$-cocycle $F$ which twists the group algebra $k[(\mathbb{Z}_2)^n]$ into a cotriangular Hopf algebra with a suitable cotriangular
structure~$\cb{R}$, see~\cite{albuquerque,downs} and references therein. Note that if $\gamma$ is trivial, then the twist is trivial and the twisted group algebra is just the group algebra $k[G]$;
if it is given by the \texttt{XOR} function on binary tuples, we get the Grassmann product (including a graded tensor product, or a graded switch); if $\gamma$ is the choice described by Lounesto
in~(\ref{eq:eaeb}), we get the Clifford algebra $\clpq{p}{q}$~\cite{private}.
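As a consistency check, one can also verify numerically that the scalar factor $\gamma(\underline{a}, \underline{b}) = (-1)^{\sum_{i=1}^p a_ib_i} w_{\underline{a}}(h(\underline{b}))$ of~(\ref{eq:eaeb}) satisfies the cocycle condition on all of $\cb{B}^n$, as it must, since the Clifford product is associative. The following Python sketch (ours, for illustration only) performs this check.
{\Fontv
\begin{lstlisting}[language = Python]
from itertools import product

def walsh(a, b):
    return (-1) ** sum(x * y for x, y in zip(a, b))

def h(a):
    # inverse Gray code
    s, out = 0, []
    for x in a:
        s = (s + x) % 2
        out.append(s)
    return tuple(out)

def oplus(a, b):
    return tuple((x + y) % 2 for x, y in zip(a, b))

def gamma(a, b, p):
    # the twisting cocycle of (eq:eaeb) for Cl_{p, n-p}
    return (-1) ** sum(a[i] * b[i] for i in range(p)) * walsh(a, h(b))

n, p = 3, 2
B = list(product((0, 1), repeat=n))
assert all(gamma(x, y, p) * gamma(oplus(x, y), z, p)
           == gamma(y, z, p) * gamma(x, oplus(y, z), p)
           for x in B for y in B for z in B)
print("cocycle condition verified on B^%d" % n)
\end{lstlisting}
}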
\section{Conclusions}
As stated in the Introduction, the main goal of this survey paper has been to collect and summarize properties of certain finite $2$-groups which appear in Clifford algebras~$\clpq{p}{q}$. On the one hand, these Salingaros-defined groups $\Gpq{p}{q}$ appear as subgroups of the group of invertible elements. These subgroups play an important role in relation to the set of orthogonal primitive idempotents, with the help of which one defines spinorial representations. It has been observed by Salingaros that these groups belong to five non-isomorphic families. On the other hand, one knows that all Clifford algebras $\clpq{p}{q}$ are classified into five different families of simple and semisimple algebras depending on the values of $(p,q)$ and $p+q$ (the Periodicity of Eight). Another connection with finite Salingaros groups appears via Chernov's observation that the algebras $\clpq{p}{q}$ can be viewed as images of group algebras, most likely of the groups $\Gpq{p}{q}$ modulo a suitable ideal generated by a central nontrivial idempotent in the group algebra. This shows that the theory of extra-special $2$-groups has a direct bearing on the structure of the Clifford algebras $\clpq{p}{q}$. Finally, we have observed how Clifford algebras can be obtained by twisting a group algebra of $(\mathbb{Z}_2)^n$, either by using the Walsh functions or, equivalently but in a mathematically more rigorous way, by using a $2$-cocycle and the formalism of cotriangular Hopf algebras \cite{downs}.
\section{Acknowledgments}
The author of this paper is grateful to Dr. habil. Bertfried Fauser for his remarks and comments, which have
helped improve this paper.
\section{Introduction and main results}\label{s:intro}
The RSK algorithm was introduced in \cite{knuth70} as a generalisation of the Robinson-Schensted (RS) algorithm introduced in \cite{robinson38,schensted61}.
It transforms a matrix to a pair of semi-standard Young tableaux.
For an introduction to the RS(K) algorithms and Young tableaux see e.g. \cite{fulton97,sagan00}.
The gRSK algorithm, a geometric lifting of the RSK algorithm obtained by replacing the max-plus algebra in its definition by the usual algebra, was introduced in \cite{kirillov01}.
There are several equivalent formulations of the (g)RSK algorithms.
The commonest definition of the RSK algorithm is based on inserting a row of the input matrix into a semi-standard Young tableau.
However, for the purpose of defining the gRSK, the insertion was reformulated as a map transforming a tableau row and an input row into a new tableau row and a row to insert into the next tableau row.
This was introduced in \cite{noumi-yamada04}, and henceforth we call it the Noumi-Yamada description.
It will be the first reformulation of the algorithms in this article, from which we derive all of our main results.
The symmetry properties state that the pair of output tableaux are swapped if the input matrix is transposed.
One way to prove this is to reformulate the RSK algorithms as growth diagrams.
The growth diagram was developed in \cite{fomin86,fomin95}; see also the exposition in \cite[Section 5.2]{sagan00}.
It is a rectangular lattice graph whose vertices record the growth of the shape of the output tableaux, and can be generated recursively by the local growth rule.
Much of the attention the (g)RSK algorithms receive these days comes from their relation to the directed last passage percolation (DLPP) and the directed polymer (DP).
Greene's theorem \cite{greene74} (see for example \cite{sagan00} for a modern exposition) characterises the shape of the output tableaux in terms of lengths of longest non-decreasing subsequences.
As an immediate consequence, the RSK algorithm can be viewed as transforming a matrix into a multilayer non-intersecting generalisation of the DLPP; specifically, the first row of the output tableaux corresponds precisely to the DLPP.
When randomness is introduced into the input matrix, this connection yields exact formulas for the distribution of DLPP in geometric and exponential environments \cite{johansson00}.
The geometric lifting of the DLPP is the partition function of the directed polymer (DP) where the solvable environment is that of the log-Gamma weights \cite{seppalainen12}.
And unsurprisingly the gRSK is related to the DP the same way as RSK is related to the DLPP.
This was used in \cite{corwin-oconnell-seppalainen-zygouras14} to obtain exact formulas for the distribution of the partition function of DP in a log-Gamma environment.
The DLPP and the DP can be defined locally using a growth rule similar to that of the (g)RSK. Moreover, in the solvable environments there are reversibility results for this local growth rule of the partition function, called the Burke property.
It is used to show the cube-root variance fluctuations of the partition functions \cite{balazs-cator-seppalainen06,seppalainen12}.
Also in these solvable models, the distribution of the shape of the tableaux is related to certain special functions.
In the RSK setting it is the Schur measure \cite{oconnell03a}, related to the Schur functions, and in the gRSK setting it is the Whittaker measure, related to the $\mathfrak{gl}_{\ell + 1}$-Whittaker functions \cite{corwin-oconnell-seppalainen-zygouras14}.
Results of this kind can often be obtained using a combination of Doob's $h$-transform and the Markov function theorem \cite{rogers-pitman81}.
In \cite{oconnell-seppalainen-zygouras14} a reformulation of the gRSK called the local moves was used to give a more direct treatment than the Markov function theorem to show the connection between the gRSK and the Whittaker functions.
The local moves can be generalised to take an array of Young diagram shape.
This idea can be found in the proof of the Two Polytope Theorem in \cite{pak01}.
\footnote{See also the historical remarks in Section 8 of \cite{pak01}. An exposition of this idea can be also found in \cite{hopkins14}.}
In \cite{nguyen-zygouras16} this idea was used to yield the joint laws of the partition functions of the log-Gamma polymer in the space-like direction.
Specifically it was used to formulate a geometric version of the multilayer polynuclear growth model (PNG) introduced in \cite{johansson03}, from which the joint law of the polymer partition functions at a fixed time could be written down.
One direction for generalisation of the (g)RSK algorithms is to interpolate using $q$-deformation.
The Macdonald polynomials were introduced in \cite{macdonald88}. See \cite{macdonald98} for a detailed introduction.
They are symmetric polynomials of two parameters $q$ and $t$.
We only consider $t = 0$, in which case they are also the $q$-Whittaker functions with some prefactors, as they are eigenfunctions of the $q$-deformed quantum Toda Hamiltonian \cite{gerasimov-lebedev-oblezin10}.
On the one hand the $q$-Whittaker functions interpolate between the Schur functions ($q = 0$) and the Whittaker functions ($q \to 1$ with proper scalings \cite{gerasimov-lebedev-oblezin12}).
On the other hand the similarity among the structures of the Macdonald polynomials, the Schur polynomials and the Whittaker functions makes the Macdonald processes and measures \cite{forrester-rains02,borodin-corwin14} possible.
This motivates the search for $q$-deformed RS(K) algorithms.
The $q$RS algorithms were introduced in \cite{oconnell-pei13} (column insertion version) and in \cite[Dynamics 3, $h = (1, 1, \dots, 1)$]{borodin-petrov13} (row insertion version).
In \cite{matveev-petrov15} several $q$-deformed RSK ($q$RSK) algorithms were introduced.
In all these $q$-deformations the algorithms transform inputs into random pairs of tableaux, rather than just one pair of tableaux.
These $q$-algorithms all have the desired property of transforming the input into various $q$-Whittaker processes.
In this article we work on the $q$RSK row insertion algorithm introduced in \cite[Section 6.1 and 6.2]{matveev-petrov15}.
It was shown in that paper that the $q$RSK algorithm transforms a matrix with $q$-geometric weights into the $q$-Whittaker process, and the push-forward measure of the shape of the output tableaux is the $q$-Whittaker measure.
We give the Noumi-Yamada description of this algorithm, from which we obtain a branching growth diagram construction similar to that in \cite{pei14}, and show that the algorithm is symmetric:
\begin{thm}\label{t:qsym}
Let $\phi_A(P, Q) = \mathbb P(q\text{RSK}(A) = (P, Q))$ be the probability of obtaining the tableau pair $(P, Q)$ after performing $q$RSK on matrix $A$, then
\begin{align*}
\phi_A(P, Q) = \phi_{A'}(Q, P)
\end{align*}
where $A'$ is the transpose of $A$.
\end{thm}
We also formulate a $q$-polymer model which corresponds to the first row of the output tableaux of the $q$RSK.
It interpolates between the DLPP ($q = 0$) and the DP ($q \to 1$ with proper scaling).
The Burke property also carries over to the $q$-setting naturally, with which one immediately obtains some asymptotic results for the $q$-polymer with stationary boundary conditions.
See Section \ref{s:qpolymer} for more details.
Also see Section \ref{s:qdef} for definition of $(x; q)_\infty$ that appears in the theorem.
\begin{thm}\label{t:lln}
Let $Z$ be the partition function of the $q$-polymer.
With stationary boundary conditions defined in Section \ref{s:qburke},
\begin{align}
\mathbb E Z(\ell, j) = \ell \gamma(\alpha) + j \gamma(\beta) \label{eq:qpolyexp}
\end{align}
where
\begin{align*}
\gamma(x) = x (\log E_q)'(x)
\end{align*}
where $E_q (x) = (x; q)_\infty^{-1}$.
Moreover, almost surely
\begin{align}\label{eq:lln}
\lim_{N \to \infty} {Z(\lfloor N x \rfloor, \lfloor N y \rfloor) \over N} = x \gamma(\alpha) + y \gamma(\beta).
\end{align}
\end{thm}
Finally we formulate a $q$-local move that agrees with the $q$RSK when taking a matrix input.
Like in \cite{hopkins14,nguyen-zygouras16}, we use the $q$-local moves to generalise the $q$RSK to take arrays on a Young diagram as the input, propose the corresponding $q$PNG model, and write down the joint distribution of the $q$-polymer partition functions in the space-like direction.
Like in \cite{oconnell-seppalainen-zygouras14,nguyen-zygouras16}, the basic operation of the $q$-local moves, called $\rho_{n,k}$, works on diagonal strips $(i, j)_{i - j = n - k}$ of the input.
In those two papers, when the gRSK is defined as a composition of the $\rho_{n, k}$, they are defined in two different ways, row-by-row or antidiagonal-by-antidiagonal.
In \cite{hopkins14}, $\rho_{n, k}$ (or more precisely the map $b_{n, k}$ in \cite[(3.5)]{oconnell-seppalainen-zygouras14}, see also \eqref{eq:t1}\eqref{eq:t2}) were referred to as ``toggles''.
It was shown there that the map called $\mathcal{RSK}$ can be taken to be any composition of the toggles that agrees with a growth sequence of the underlying Young diagram of the input array.
In this article, we generalise this to the $q$-setting.
By identifying the input pattern as an array on a Young diagram $\Lambda$, we show that the $q$RSK map $T_\Lambda$ can be taken to be any composition of the $\rho$'s that agrees with a growth sequence of $\Lambda$.
For details of definitions of $\rho_{n, k}$ and $T_\Lambda$ see Section \ref{s:qlocalmoves}.
We fit the input $(w_{i, j})_{(i, j) \in \Lambda}$ into an infinite array $A = (w_{i, j})_{i, j \ge 1} \in \mathbb N^{\mathbb N_+ \times \mathbb N_+}$ and define $T_\Lambda$ such that when acting on an infinite array like $A$ it only alters the topleft $\Lambda$ part of the array.
Let\footnote{See Section \ref{s:notations} for explanation of notations like $a : b$}
\begin{align*}
r_1 &= t_{\Lambda'_1, 1} \\
r_j &= \sum_{k = 1 : (j \wedge \Lambda'_j)} t_{\Lambda'_j - k + 1, j - k + 1} - \sum_{k = 1 : ((j - 1) \wedge \Lambda'_j)} t_{\Lambda'_j - k + 1, j - k}, \qquad j = 2 : \Lambda_1 \\
\hat r_1 &= t_{1, \Lambda_1} \\
\hat r_i &= \sum_{k = 1 : (i \wedge \Lambda_i)} t_{i - k + 1, \Lambda_i - k + 1} - \sum_{k = 1 : ((i - 1) \wedge \Lambda_i)} t_{i - k, \Lambda_i - k + 1} \qquad i = 2 : \Lambda'_1
\end{align*}
Given a $q$-geometrically distributed input array, we can write down the explicit formula of the push-forward measure of $T_\Lambda$.
In this article we let $(\hat\alpha_i)$ and $(\alpha_j)$ be parameters such that $\hat \alpha_i \alpha_j \in (0, 1)$ for all $i, j$.
Also, for an integer $n$, we denote by $(n)_q$ the $q$-Pochhammer symbol $(q; q)_n$ (see Section \ref{s:qdef}).
\begin{thm}\label{t:lmpush}
Given that the input pattern $(w_{i, j})_{(i, j)}$ has independent $q$-geometric weights
\begin{align*}
w_{i, j} \sim q\text{Geom}(\hat\alpha_i \alpha_j), \qquad \forall i, j
\end{align*}
the distribution of $T_\Lambda A(\Lambda)$ is
\begin{align*}
\mathbb P(T_\Lambda A(\Lambda) &= (t_{i, j})_{(i, j) \in \Lambda}) \\
&= \mu_{q, \Lambda}(t) := (t_{1 1})_q^{-1} {\prod_{(i, j) \in \Lambda: (i - 1, j - 1) \in \Lambda} (t_{i j} - t_{i - 1, j - 1})_q \over \prod_{(i, j) \in \Lambda: (i, j - 1) \in \Lambda} (t_{i j} - t_{i, j - 1})_q \prod_{(i, j) \in \Lambda: (i - 1, j) \in \Lambda} (t_{i j} - t_{i - 1, j})_q} \\
&\;\;\;\;\;\;\;\;\;\;\;\;\times \alpha^r \hat\alpha^{\hat r} \prod_{(i, j) \in \Lambda} (\hat \alpha_i \alpha_j; q)_\infty \mathbb I_{t \in D_\Lambda},
\end{align*}
where
\begin{align*}
D_\Lambda := \{t \in \mathbb N^\Lambda: t_{i - 1, j} \le t_{i, j} \forall \{(i, j), (i - 1, j)\} \subset \Lambda, t_{i, j - 1} \le t_{i, j} \forall \{(i, j), (i, j - 1)\} \subset \Lambda\}.
\end{align*}
\end{thm}
We define an outer corner of a Young diagram to be any cell without neighbours to the right or below itself.
More precisely, $(n, m)$ is an outer corner of $\lambda$ if $\lambda_n = m$ and $\lambda_{n + 1} < m$.
Given a Young diagram $\Lambda$ with outer corners $(n_1, m_1), (n_2, m_2), \dots, (n_p, m_p)$, summing over the non-outer-corner points we can show the multipoint distribution of the $q$-polymer:
\begin{cly}\label{c:qpdist}
Let $m_1 \le m_2 \le \dots \le m_p$ and $n_1 \le n_2 \le \dots \le n_p$,
and let $\Lambda$ be the Young diagram with outer corners $((n_i, m_{p - i + 1}))_{i = 1 : p}$.
The partition functions $(Z(n_1, m_p), Z(n_2, m_{p - 1}), \dots, Z(n_p, m_1))$ of the $q$-polymer in a $(\hat\alpha, \alpha)$-$q$-geometric environment have the following joint distribution:
\begin{align*}
\mathbb P(Z(n_1, m_p) = x_1, Z(n_2, m_{p - 1}) = x_2, \dots, Z(n_p, m_1) = x_p) = \sum_{t \in D_\Lambda, t_{n_i, m_{p - i + 1}} = x_i \forall i = 1 : p} \mu_{q, \Lambda} (t)
\end{align*}
\end{cly}
Furthermore, if we specify $m_i = n_i = i$ for $i = 1 : p$, that is, $\Lambda$ is a staircase Young diagram, then we may define a $q$PNG multilayer noncolliding process, as in \cite{johansson03,nguyen-zygouras16}.
By recognising $p$ as the time, we can write down the joint distribution of the partition functions of the $q$-polymer at time $p$.
\begin{cly}\label{c:qpng}
The partition functions of the $q$-polymer at time $p$ in the $(\hat\alpha, \alpha)$-$q$-geometric environment have the following joint distribution
\begin{align*}
\mathbb P(Z(1, p) &= x_1, Z(2, p - 1) = x_2, \dots, Z(p, 1) = x_p) \\
&= \prod_{i + j \le p + 1} (\hat \alpha_i \alpha_j; q)_\infty \sum_{t \in D_\Lambda, t_{i, p - i + 1} = x_i, \forall i = 1 : p} \left((t_{11})_q^{-1} {\prod_{i + j \le p - 1} (t_{i + 1, j + 1} - t_{i, j})_q \over \prod_{i + j \le p} \left( (t_{i + 1, j} - t_{i, j})_q (t_{i, j + 1} - t_{i, j})_q \right)}\right. \\
&\left.\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\times{\prod_{i + j = p + 1} (\hat \alpha_i \alpha_j)^{t_{i, j}} \over \prod_{i, j > 1, i + j = p + 2} (\hat \alpha_i \alpha_j)^{t_{i - 1, j - 1}}}\right)
\end{align*}
\end{cly}
If we restrict to $p = 1$, that is, $\Lambda$ is a rectangular Young diagram, we obtain the following (recall the definition of $r$ and $\hat r$ just before Theorem \ref{t:lmpush}):
\begin{cly}
Given a $q$-geometrically distributed matrix $(w_{i, j} \sim q\text{Geom}(\hat\alpha_i \alpha_j))_{i = 1 : n, j = 1 : m}$, the push-forward measure of $q$RSK applied to this matrix is
\begin{align*}
\mu_q (t) &= (t_{11})_q^{-1} {\prod_{i = 2 : n, j = 2 : m} (t_{i, j} - t_{i - 1, j - 1})_q \over \prod_{i = 1 : n, j = 2 : m}(t_{i, j} - t_{i, j - 1})_q \prod_{i = 2 : n, j = 1 : m} (t_{i, j} - t_{i - 1, j})_q} \\
&\times \alpha^r \hat\alpha^{\hat r} \prod_{i = 1 : n, j = 1 : m} (\hat\alpha_i \alpha_j; q)_\infty \mathbb I_{t \in D_\Lambda}
\end{align*}
\end{cly}
By summing over all $t_{i, j}$ with fixed diagonals $(t_{n, m}, t_{n - 1, m - 1}, \dots, t_{(n - m)^+ + 1, (m - n)^+ + 1}) = (\lambda_1, \dots, \lambda_{m \wedge n})$, we recover a result in \cite{matveev-petrov15}:
\begin{cly}\label{c:qwmeasure}
Given a $q$-geometrically distributed matrix $(w_{i, j} \sim q\text{Geom}(\hat\alpha_i \alpha_j))_{i = 1 : n, j = 1 : m}$, the shape of the output tableaux after applying $q$RSK to $(w_{i, j})$ is distributed according to the $q$-Whittaker measure:
\begin{align*}
\mathbb P((t_{n, m}, t_{n - 1, m - 1}, \dots, t_{(n - m)^+ + 1, (m - n)^+ + 1}) = (&\lambda_1, \dots, \lambda_{m \wedge n})) \\
&= P_\lambda(\alpha) Q_\lambda(\hat\alpha) \prod_{i = 1 : n, j = 1 : m} (\hat\alpha_i \alpha_j; q)_\infty,
\end{align*}
where $P_\lambda$ and $Q_\lambda$ are $(t = 0)$-Macdonald polynomials.
\end{cly}
The rest of the article is organised as follows.
In Section \ref{s:sym} we review some preliminaries on (g)RSK, $q$-deformations and the $q$-Whittaker functions. Then we give the Noumi-Yamada description and growth diagram formulations of the $q$RSK algorithm, with which we prove the symmetry property Theorem \ref{t:qsym}.
In Section \ref{s:qpolymer} we formulate the $q$-polymer, define and discuss the Burke relations, and prove the $q$-Burke property, with which we prove Theorem \ref{t:lln}.
In Section \ref{s:qlocalmoves} we formulate the $q$-local moves, show their relation to the $q$RSK, prove Theorem \ref{t:lmpush}, propose the $q$PNG, and discuss a measure on the matrix and its classical limit to a similar measure in \cite{oconnell-seppalainen-zygouras14}.
\subsection{Notations}\label{s:notations}
We list some notations we use in this article.
\begin{itemize}
\item $\mathbb N$ is the set of the nonnegative integers, and $\mathbb N_+$ the set of the positive integers.
\item $\mathbb I_A$ is the indicator function on the set $A$.
\item $a : b$ is $\{a, a + 1, \dots, b\}$
\item $[n]$ is $1 : n$
\item $i = a : b$ means $i \in \{a, a + 1, \dots, b\}$
\item $(\lambda_{1 : m})$ denotes $(\lambda_1, \lambda_2, \dots, \lambda_m)$.
\item $w_{n, 1 : k}$ is $(w_{n, 1}, w_{n, 2}, \dots, w_{n, k})$
\item $w_{1 : n, 1 : m}$ is a matrix $(w_{i, j})_{n \times m}$
\item $:=$ means (re)evaluation or definition. For example $u := u + a$ means we obtain a new $u$ whose value is the value of the old $u$ plus $a$.
\item For $a = (a_{1 : m})$, $b = (b_{1 : m})$, $a^b := \prod_{i = 1 : m} a_i^{b_i}$.
\item For an array $A = (w_{i, j})_{(i, j) \in \mathbb N_+^2}$ and a index set $I$, $A(I) := (w_{i, j})_{(i, j) \in I}$.
\item $\mathbf{e}_i$ is the vector with a $1$ at the $i$th entry and $0$ at all other entries: $\mathbf{e}_i = (0, 0, \dots, 0, 1, 0, 0,\dots)$.
\end{itemize}
\noindent\textbf{Acknowledgement.} The author would like to thank Ziliang Che for fruitful discussions.
The author would also like to acknowledge communication and discussions with Vu-Lan Nguyen, Jeremy Quastel, Neil O'Connell, Dan Romik, Konstantin Matveev, Timo Sepp\"al\"ainen, Alexei Borodin, Nikos Zygouras and Ivan Corwin.
Furthermore the author would like to thank an anonymous reviewer for reading the article and providing valuable feedbacks which results in much improvement of the article.
This work was supported by the Center of Mathematical Sciences and Applications at Harvard University.
\section{Noumi-Yamada description, growth diagrams and the symmetry property}\label{s:sym}
In this section we review the basics of the theory of Young tableaux and the Noumi-Yamada description of the (g)RSK. We also review some $q$-deformations and related probability distributions. Then we formulate the Noumi-Yamada description and growth diagram for the $q$RSK, and show how to use the latter to prove the symmetry property Theorem \ref{t:qsym}.
\subsection{A review of the RSK and gRSK algorithms}
A Young diagram $\lambda = (\lambda_1, \lambda_2, \dots, \lambda_m)$ is a nonincreasing nonnegative integer sequence.
One may represent a Young diagram as a collection of coordinates in $\mathbb N_+^2$.
More specifically, in this representation $\lambda = \{(i, j): \lambda_i \ge j\}$.
For example $\lambda = (4, 3, 1, 1)$ has the 2d coordinate representation $\{(1, 1), (1, 2), (1, 3), (1, 4), (2, 1), (2, 2), (2, 3), (3, 1), (4, 1)\}$, and it can be visualised as follows, where we labelled some coordinates:
\begin{center}
\includegraphics{figure-youngdiagram.pdf}
\end{center}
We use the two representations interchangeably, without further specification, when the context is unambiguous.
For example, an array restricted to $\lambda$ and denoted as $A(\lambda)$ uses the 2d coordinates representation.
A Gelfand-Tsetlin (GT) pattern is a triangular array $(\lambda^k_j)_{1 \le j \le k \le m}$ satisfying the interlacing constraints, that is
\begin{align*}
\lambda^k \prec \lambda^{k + 1},
\end{align*}
where $a \prec b$ means
\begin{align*}
b_1 \ge a_1 \ge b_2 \ge a_2 \ge \dots.
\end{align*}
The exact constraints of the GT pattern are thus
\begin{align*}
\lambda^{k + 1}_{j + 1} \le \lambda^k_j \le \lambda^{k + 1}_j \qquad \forall k \ge j \ge 1
\end{align*}
We refer to the indices of the GT pattern coordinates in the following way.
Given a coordinate $\lambda^k_j$, we call the superscript ($k$ here) the level, the subscript ($j$ here) the edge.
Later when we consider the time evolution of the GT patterns, we use an argument in the bracket to denote time.
Therefore $\lambda^k_j (\ell)$ is the coordinate at time $\ell$, $k$th level and $j$th edge.
We visualise a GT pattern, for example with 5 levels as follows, where we also annotate the interlacing relations:
\begin{center}
\includegraphics[scale=.3]{figure-gtpattern.pdf}
\end{center}
So the levels correspond to rows, and edges correspond to edges counted from the right in the picture.
Throughout this article we do not take powers of $\lambda_j$ so the superscript on $\lambda$ is always an index rather than a power.
The same applies to notations $a^j_k$ in Noumi-Yamada description of the $q$RSK, as well as the $t$ in the proof of Theorem \ref{t:qlocalmoves}.
A semi-standard Young tableau, which we simply refer to as a tableau, is a Young diagram-shaped array filled with positive integers that are non-decreasing along the rows and increasing along the columns.
The underlying Young diagram is called the shape of the tableau.
A tableau $T$ corresponds to a GT pattern $(\lambda^k_j)$ in the following way:
\begin{align*}
\lambda^k_j = \#\left\{\text{entries in row }j\text{ of the tableau that are no greater than }k \right\}
\end{align*}
For example
$\begin{array}{ccccc}
1&2&2&3&3\\2&3&4&\\4&&&
\end{array}$
is a tableau with shape $\lambda = (5, 3, 1)$ and GT pattern
\begin{align*}
(\lambda^1_1, \lambda^2_1, \lambda^2_2, \lambda^3_1, \lambda^3_2, \lambda^3_3, \lambda^4_1, \lambda^4_2, \lambda^4_3, \lambda^4_4) = (1, 3, 1, 5, 2, 0, 5, 3, 1, 0)
\end{align*}
In this article we work on the GT patterns only since it is easier to manipulate symbolically than tableaux.
We use the terms GT patterns and tableaux interchangeably hereafter.
Clearly the shape of a tableau $(\lambda^k_j)_{1 \le j \le k \le m}$ is the bottom row in the visualisation of the GT pattern $\lambda^m = (\lambda^m_1, \lambda^m_2, \dots, \lambda^m_m)$.
The RSK algorithm takes in a matrix $A = (w_{i j})_{n\times m}$ as the input and gives a pair of tableaux $(P, Q)$ as the output.
We call the output $P$-tableau the insertion tableau, and the $Q$-tableau the recording tableau.
When we mention the output tableau without specifying, it is by default the $P$-tableau, as most of the time we focus on this tableau.
We usually denote the corresponding GT pattern for the $P$- and $Q$- tableaux as respectively $(\lambda^k_j)$ and $(\mu^k_j)$.
The RSK algorithm is defined by the insertion of a row $(a^1, a^2, \dots, a^m) \in \mathbb N^m$ of nonnegative integers into a tableau $(\lambda^j_k)$ to obtain a new tableau $(\tilde\lambda^j_k)$.
We call this insertion operation the RSK insertion and postpone its exact definition to Definition \ref{d:nyrsk}.
When applying the RSK algorithm to a matrix $w_{1 : n, 1 : m}$, we start with an empty tableau $\lambda^k_j (0) \equiv 0$, and insert $w_{1, 1 : m}$ into $(\lambda^k_j (0))$ to obtain $(\lambda^k_j (1))$, then insert $w_{2, 1 : m}$ into $(\lambda^k_j (1))$ to obtain $(\lambda^k_j (2))$ and so on and so forth.
The output $P$-tableau is the GT pattern at time $n$: $\lambda^k_j = \lambda^k_j (n)$, and the $Q$-tableau is the GT pattern at level $m$: $\mu^\ell_j = \lambda^m_j (\ell)$.
For a traditional definition of the RSK insertion, see e.g. \cite{fulton97}.
The definition we give here is the Noumi-Yamada description.
\begin{dfn}[The Noumi-Yamada description of the RSK insertion]\label{d:nyrsk}
Suppose at time $\ell - 1$ we have a tableau $(\lambda^j_k) = (\lambda^j_k (\ell - 1))$ and want to RSK-insert row $w_{\ell, 1 : m}$ into it to obtain a new tableau $(\tilde\lambda^j_k) = (\lambda^j_k (\ell))$.
This is achieved by initialising $a^{1 : m}_1 = w_{\ell, 1 : m}$ and recursive application (first along the edges, starting from $1$ and incrementing, then along the levels, starting from the edge index and incrementing) of the following
\begin{align*}
\tilde\lambda^k_k &= \lambda^k_k + a^k_k\\
\tilde\lambda^j_k &= a^j_k + (\lambda^j_k \vee \tilde\lambda^{j - 1}_k), \qquad j > k \\
a^j_{k + 1} &= a^j_k + \lambda^j_k - \tilde\lambda^j_k + \tilde\lambda^{j - 1}_k - \lambda^{j - 1}_k
\end{align*}
\end{dfn}
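For concreteness, the following Python sketch (ours, for illustration only; all names are ad hoc) implements Definition \ref{d:nyrsk}, storing a GT pattern as a dictionary $(j, k) \mapsto \lambda^j_k$. The loop order, edges outer and levels inner, guarantees that $\tilde\lambda^{j - 1}_k$ and $a^j_k$ have already been computed when they are needed.
\begin{lstlisting}[language = Python]
def rsk_insert(lam, row):
    # Noumi-Yamada RSK insertion of `row` into the GT pattern `lam`;
    # lam maps (j, k) -> lambda^j_k for 1 <= k <= j <= m
    m = len(row)
    new = dict(lam)
    a = {(j, 1): row[j - 1] for j in range(1, m + 1)}
    for k in range(1, m + 1):            # edges
        for j in range(k, m + 1):        # levels
            if j == k:
                new[(k, k)] = lam[(k, k)] + a[(k, k)]
            else:
                new[(j, k)] = a[(j, k)] + max(lam[(j, k)], new[(j - 1, k)])
                a[(j, k + 1)] = (a[(j, k)] + lam[(j, k)] - new[(j, k)]
                                 + new[(j - 1, k)] - lam[(j - 1, k)])
    return new

# insert the rows of the matrix ((2,1),(0,3)) into an empty pattern
m = 2
lam = {(j, k): 0 for j in range(1, m + 1) for k in range(1, j + 1)}
for w in [(2, 1), (0, 3)]:
    lam = rsk_insert(lam, w)
print(lam)   # {(1, 1): 2, (2, 1): 6, (2, 2): 0}: the P-tableau as a GT pattern
\end{lstlisting}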
The Noumi-Yamada description does not rely on $w_{ij}$ being integers and hence extends the RSK algorithm to take real inputs.
Similarly one can define the Noumi-Yamada description for the gRSK algorithm, which is simply a geometric lifting of the RSK algorithm.
It also takes real inputs.
\begin{dfn}[The Noumi-Yamada description for the gRSK algorithm]
Suppose at time $\ell - 1$ we have a tableau $(z^j_k) = (z^j_k (\ell - 1))$ and want to gRSK-insert a row $w_{\ell, 1 : m}$ into it to obtain a new tableau $(\tilde z^j_k) = (z^j_k (\ell))$.
This is done by initialising $(a^i_1)_{i = 1 : m} = (e^{w_{\ell, i}})_{i = 1 : m}$ and the recursive application (in the same way as in the RSK insertion) of the following
\begin{align*}
\tilde z^k_k &= z^k_k a^k_k\\
\tilde z^j_k &= a^j_k (z^j_k + \tilde z^{j - 1}_k), \qquad j > k \\
a^j_{k + 1} &= a^j_k {z^j_k \tilde z^{j - 1}_k \over \tilde z^j_k z^{j - 1}_k}
\end{align*}
\end{dfn}
Before defining the $q$RSK algorithm, let us review some $q$-deformations.
\subsection{$q$-deformations}\label{s:qdef}
A good reference for the $q$-deformations is \cite{gasper-rahman04}. In this article we assume $0 \le q < 1$.
Define the $q$-Pochhammers and the $q$-binomial coefficients as
\begin{align*}
(\alpha; q)_k &=
\begin{cases}
\prod_{i = 0 : k - 1} (1 - \alpha q^i) & k > 0\\
1 & k = 0 \\
\prod_{i = 1 : - k} (1 - \alpha q^{-i})^{-1} & k < 0
\end{cases}\\
(k)_q &= (q; q)_k \\
{n \choose k}_q &= {(n)_q \over (k)_q (n - k)_q}
\end{align*}
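Numerically these objects are straightforward to evaluate. The following Python sketch (ours; it only covers $k \ge 0$) computes $(k)_q$ and ${n \choose k}_q$ and checks, for instance, the expansion ${4 \choose 2}_q = 1 + q + 2q^2 + q^3 + q^4$ and the classical limit $q \to 1$.
\begin{lstlisting}[language = Python]
def qpoch(a, q, k):
    # (a; q)_k for k >= 0
    p = 1.0
    for i in range(k):
        p *= 1 - a * q**i
    return p

def qint(k, q):
    # (k)_q = (q; q)_k
    return qpoch(q, q, k)

def qbinom(n, k, q):
    # Gaussian binomial coefficient (n choose k)_q
    if k < 0 or k > n:
        return 0.0
    return qint(n, q) / (qint(k, q) * qint(n - k, q))

q = 0.3
assert abs(qbinom(4, 2, q) - (1 + q + 2*q**2 + q**3 + q**4)) < 1e-12
assert abs(qbinom(5, 2, 1 - 1e-9) - 10) < 1e-4   # q -> 1: ordinary binomial
\end{lstlisting}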
We also define three $q$-deformed probability distributions.
\subsubsection{$q$-geometric distribution}
\begin{dfn}
Given $\alpha \in (0, 1)$, a random variable $X$ is said to be distributed according to the $q$-geometric distribution $q\text{Geom}(\alpha)$ if it has probability mass function (pmf)
\begin{align*}
f_X(k) = {\alpha^k \over (k)_q} (\alpha; q)_\infty, \qquad k = 0, 1, 2, \dots
\end{align*}
\end{dfn}
The first moment of the $q$-geometric distribution with parameter $\alpha$ is
\begin{align}
\sum_k {k \alpha^k \over (k)_q} (\alpha; q)_\infty = (\alpha; q)_\infty \alpha \partial_{\alpha} \sum {\alpha^k \over (k)_q} = \alpha (\log E_q)'(\alpha). \label{eq:qgeommoment}
\end{align}
where $E_q(\alpha) = (\alpha; q)_\infty^{-1}$ is a $q$-deformation of the exponential function (see for example (1.3.15) of \cite{gasper-rahman04}).
\subsubsection{$q$-binomial distribution}
There are several $q$-deformations of the binomial distribution.
The one that is used in \cite{matveev-petrov15} to construct the $q$RSK is also called the $q$-Hahn distribution.
It appeared in \cite{povolotsky13}.
Apart from the dependency on $q$, it has three parameters ($\xi, \eta, n$).
For $0 \le \eta \le \xi < 1$, and $n \in \mathbb N \cup \{\infty\}$, the pmf is
\begin{align*}
\phi_{q, \xi, \eta} (k | n) = \xi^k {(\eta / \xi; q)_k (\xi; q)_{n - k} \over (\eta; q)_n} {n \choose k}_q, \qquad k = 0 : n
\end{align*}
The fact that it is a probability distribution can be found in, for example \cite[Exercise 1.3]{gasper-rahman04}.
\subsubsection{$q$-hypergeometric distribution}
The $q$-hypergeometric distribution we consider here is defined as follows.
For $m_1, m_2, k \in \mathbb N$ with $m_1 + m_2 \ge k$, $X \sim q\text{Hyp}(m_1, m_2, k)$ if the pmf of $X$ is
\begin{align*}
f_X(\ell) = q^{(m_1 - \ell)(k - \ell)}{{m_1 \choose \ell}_q {m_2 \choose k - \ell}_q \over {m_1 + m_2 \choose k}_q}
\end{align*}
The corresponding $q$-Vandermonde identity
\begin{align*}
\sum_\ell q^{(m_1 - \ell) (k - \ell)} {m_1 \choose \ell}_q {m_2 \choose k - \ell}_q = {m_1 + m_2 \choose k}_q
\end{align*}
can be proved directly by writing $(1 + x) (1 + q x) \cdots (1 + q^{m_1 + m_2 - 1} x)$ in two different ways.
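Indeed, splitting the product as
\begin{align*}
\prod_{i = 0}^{m_1 + m_2 - 1} (1 + q^i x) = \prod_{i = 0}^{m_1 - 1} (1 + q^i x) \prod_{i = 0}^{m_2 - 1} (1 + q^{m_1 + i} x)
\end{align*}
and applying the $q$-binomial theorem $\prod_{i = 0}^{n - 1} (1 + q^i x) = \sum_k q^{k \choose 2} {n \choose k}_q x^k$ to both sides, the coefficient of $x^k$ gives
\begin{align*}
q^{k \choose 2} {m_1 + m_2 \choose k}_q = \sum_\ell q^{{\ell \choose 2} + {k - \ell \choose 2} + m_1 (k - \ell)} {m_1 \choose \ell}_q {m_2 \choose k - \ell}_q,
\end{align*}
and the identity follows from the elementary computation ${\ell \choose 2} + {k - \ell \choose 2} + m_1 (k - \ell) - {k \choose 2} = (m_1 - \ell)(k - \ell)$.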
As with the usual hypergeometric distribution, the support of $q\text{Hyp}(m_1, m_2, k)$ is
\begin{align}
0 \vee (k - m_2) \le \ell \le m_1 \wedge k \label{eq:qhypsupp}
\end{align}
When $m_2 = \infty$, the distribution is symmetric in $m_1$ and $k$:
\begin{align*}
f_{q\text{Hyp}(m_1, \infty, k)} (\ell) = f_{q\text{Hyp}(k, \infty, m_1)} (\ell) = q^{(m_1 - \ell)(k - \ell)} {(m_1)_q (k)_q \over (\ell)_q (m_1 - \ell)_q (k - \ell)_q}, \qquad 0 \le \ell \le m_1 \wedge k
\end{align*}
This distribution appeared in \cite{blomqvist52}.
When $k = 0$ or $m_1 = 0$, by \eqref{eq:qhypsupp} the distribution is supported on $\{0\}$:
\begin{align}
f_{q\text{Hyp}(0, m_2, k)} (s) = f_{q\text{Hyp}(m_1, m_2, 0)} (s) = \mathbb I_{s = 0}. \label{eq:qhypk=0}
\end{align}
The fact that the $q$Hyp is a probability distribution yields the following identities, where the second follows by taking $m_2 = \infty$:
\begin{align}
\sum_{s} q^{(m_1 - s)(k - s)} (m_1 - s)_q^{-1} (k - s)_q^{-1} (s)_q^{-1} (m_2 - k + s)_q^{-1} &= {(m_1 + m_2)_q \over (m_1)_q (m_2)_q (k)_q (m_1 + m_2 - k)_q}
\label{eq:qhyp}\\
\sum_{s} q^{(m_1 - s)(k - s)} (m_1 - s)_q^{-1} (k - s)_q^{-1} (s)_q^{-1} &= (m_1)_q^{-1} (k)_q^{-1} \label{eq:qhypinf}
\end{align}
\subsubsection{From the $q$-binomial distribution to the $q$-hypergeometric distribution}
The $q$-binomial distribution is related to the $q$-hypergeometric distribution in the following way:
\begin{lem}\label{l:qbinqhyp}
For nonnegative integers $a \le b \ge c$, let $X$ be a random variable distributed according to $\phi_{q^{-1}, q^a, q^b} (\cdot | c)$, then $c - X$ is distributed according to $q\text{Hyp}(c, b - c, a)$.
\end{lem}
\begin{proof}
By the definition of the $q$-hypergeometric distribution it suffices to show
\begin{align}
\phi_{q^{-1}, q^a, q^b}(s | c) = q^{s (s + a - c)} {b \choose a}_q^{-1} {c \choose s}_q {b - c \choose s + a - c}_q \label{eq:qbinhyp}
\end{align}
First we apply $(x; q^{-1})_n = (- 1)^n x^n q^{-{n \choose 2}} (x^{-1}; q)_n$ and ${n \choose k}_{q^{-1}} = q^{- k (n - k)} {n \choose k}_q$ to the left hand side to turn the $q^{-1}$-Pochhammers into $q$-Pochhammers.
The left hand side thus becomes
\begin{align*}
q^{(a - b)(c - s)} {(q^{a - b}; q)_s (q^{-a}; q)_{c - s} \over (q^{-b}; q)_c} {c \choose s}_q.
\end{align*}
Furthermore using $(q^{-n}; q)_k = {(n)_q \over (n - k)_q} (-1)^k q^{{k \choose 2} - n k}$ the above formula becomes the right hand side of \eqref{eq:qbinhyp}.
\end{proof}
\subsubsection{The ($t = 0$)-Macdonald polynomials and the $q$-Whittaker functions}
Let us define the $(t = 0)$-Macdonald polynomials.
For $x = (x_{1 : N})$ and $\lambda = (\lambda_{1 : \ell})$ with $\ell \le N$, we redefine $\lambda$ by padding $N - \ell$ zeros to the end of it:
\begin{align*}
\lambda := (\lambda_1, \lambda_2, \dots, \lambda_\ell, 0, 0, \dots, 0).
\end{align*}
Given a tableau $(\lambda^k_j)$, define its type $\text{ty}((\lambda^k_j))$ by
\begin{align*}
\text{ty}((\lambda^k_j))_i :=
\begin{cases}
\lambda^1_1 & i = 1 \\
\sum_{\ell = 1 : i} \lambda^i_\ell - \sum_{\ell = 1 : i - 1} \lambda^{i - 1}_\ell & i > 1
\end{cases}
\end{align*}
Then the $(t = 0)$-Macdonald polynomials of rank $N - 1$ indexed by $\lambda$ and the $q$-Whittaker function $\psi_x(\lambda)$ are defined as
\begin{align*}
P_\lambda (x) &= \sum_{(\lambda^k_j)_{1 \le j \le k \le N}, \lambda^{k - 1} \prec \lambda^k \forall k, \lambda^N = \lambda} x^{\text{ty}((\lambda^k_j))} \prod_{1 \le j < k \le N} {\lambda^k_j - \lambda^k_{j + 1} \choose \lambda^k_j - \lambda^{k - 1}_j}_q,\\
Q_\lambda (x) &= (\lambda_N)_q^{-1} P_\lambda(x) \prod_{i = 2 : N} (\lambda_i - \lambda_{i - 1})_q^{-1},\\
\psi_x (\lambda) &= (\lambda_N)_q Q_\lambda (x).
\end{align*}
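The sum over GT patterns defining $P_\lambda$ is easy to evaluate directly for small $N$. The following Python sketch (ours, for illustration only) enumerates the patterns and verifies, for example, that $P_{(2,0)}(x_1, x_2) = x_1^2 + (1 + q) x_1 x_2 + x_2^2$.
\begin{lstlisting}[language = Python]
from itertools import product

def qbinom(n, k, q):
    # Gaussian binomial (n choose k)_q
    if k < 0 or k > n:
        return 0.0
    num = den = 1.0
    for i in range(1, n + 1):
        num *= 1 - q**i
    for i in range(1, k + 1):
        den *= 1 - q**i
    for i in range(1, n - k + 1):
        den *= 1 - q**i
    return num / den

def gt_patterns(top):
    # all GT patterns (lists of rows, level 1 first) with bottom row `top`
    if len(top) == 1:
        yield [list(top)]
        return
    ranges = [range(top[i + 1], top[i] + 1) for i in range(len(top) - 1)]
    for row in product(*ranges):
        for rest in gt_patterns(list(row)):
            yield rest + [list(top)]

def P(lam, x, q):
    # (t = 0)-Macdonald polynomial P_lambda(x) via the GT-pattern sum
    N = len(x)
    top = list(lam) + [0] * (N - len(lam))
    total = 0.0
    for pat in gt_patterns(top):
        ty = [sum(pat[0])] + [sum(pat[i]) - sum(pat[i - 1]) for i in range(1, N)]
        w = 1.0
        for xi, ti in zip(x, ty):
            w *= xi ** ti
        for k in range(1, N):        # level k + 1, edges j + 1 = 1 : k
            for j in range(k):
                w *= qbinom(pat[k][j] - pat[k][j + 1], pat[k][j] - pat[k - 1][j], q)
        total += w
    return total

q, x1, x2 = 0.3, 0.7, 0.5
assert abs(P([2], (x1, x2), q) - (x1**2 + (1 + q)*x1*x2 + x2**2)) < 1e-12
\end{lstlisting}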
The $q$-Whittaker measure discussed in this article is the one induced by the Cauchy-Littlewood identity:
\begin{align*}
\mu_{q\text{-Whittaker}} (\lambda) = P_\lambda (\alpha) Q_\lambda (\hat\alpha) \prod_{i j} (\hat\alpha_i \alpha_j; q)_\infty.
\end{align*}
\subsubsection{Classical limits}
In this section we let $q = e^{- \epsilon}$ for small $\epsilon > 0$ and collect some results concerning the classical limits.
Let
\begin{align*}
A(\epsilon) = - {\pi^2 \over 6} \epsilon^{-1} - {1 \over 2} \log \epsilon + {1 \over 2} \log 2 \pi
\end{align*}
\begin{lem}\label{l:qclassicallimits} Let $q = e^{- \epsilon}$ and $m(\epsilon) = \epsilon^{-1} \log \epsilon^{-1}$, then
\begin{enumerate}
\item $(q^t; q)_\infty = \Gamma(t)^{- 1} \exp(A(\epsilon) + (1 - t) \log \epsilon + O(\epsilon))$. Specifically, $(q; q)_\infty = \exp(A(\epsilon) + O(\epsilon))$
\item For $\alpha \ge 1$,
$f_\alpha(y) := (\lfloor\alpha m(\epsilon) + \epsilon^{-1} y\rfloor)_q =
\begin{cases}
\exp(A(\epsilon) + e^{- y} + O(\epsilon)) & \alpha = 1 \\
\exp(A(\epsilon) + O(\epsilon)) & \alpha > 1
\end{cases}$
\item $\log (\lfloor \epsilon^{-1} y \rfloor)_q = \epsilon^{-1} \left( \text{Li}_2(e^{- y}) - {\pi^2 \over 6} \right) + o(\epsilon^{-1})$, where
\begin{align*}
\text{Li}_2(x) = \sum_{n \ge 1} {x^n \over n^2} = - \int_0^x {\log(1 - u) \over u} du
\end{align*}
is the dilogarithm function.
\end{enumerate}
\end{lem}
Item 1 can be found, for example as a special case of Theorem 3.2 in \cite{banerjee-wilkerson16}. Item 2 was proved as Lemma 3.1 of \cite{gerasimov-lebedev-oblezin12}.
Item 3 can be derived as follows:
\begin{align*}
\epsilon \log(\lfloor \epsilon^{-1} y \rfloor)_q = \epsilon \sum_{k = 1 : \lfloor \epsilon^{-1} y \rfloor} \log(1 - e^{-\epsilon k}) \approx \int_0^y \log (1 - e^{-t}) dt = \text{Li}_2(e^{- y}) - {\pi^2 \over 6}.
\end{align*}
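Item 3 can also be checked numerically; in the following quick Python sketch (ours) the dilogarithm is evaluated by truncating its defining series.
\begin{lstlisting}[language = Python]
import math

def log_qint(n, q):
    # log (n)_q = sum_{k = 1 : n} log(1 - q^k)
    return sum(math.log(1 - q**k) for k in range(1, n + 1))

def Li2(x, K=2000):
    return sum(x**k / k**2 for k in range(1, K + 1))

eps, y = 1e-4, 1.0
q = math.exp(-eps)
lhs = eps * log_qint(int(y / eps), q)
rhs = Li2(math.exp(-y)) - math.pi**2 / 6
print(lhs, rhs)   # both are close to -1.236 and agree as eps -> 0
\end{lstlisting}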
\subsection{The Noumi-Yamada description of the $q$RSK}
Now we can define a Noumi-Yamada description for the $q$RSK.
Throughout this article we adopt the convention that for any Young diagram $\lambda$, the $0$th edge is $\infty$: $\lambda_0 = \infty$.
\begin{thm}\label{t:noumiyamada}
The $q$RSK algorithm can be reformulated as the following Noumi-Yamada description.
Suppose at time $\ell - 1$ we have a tableau $(\lambda^j_k) = (\lambda^j_k (\ell - 1))$ and want to insert row $w_{\ell, 1 : m}$ into it to obtain a new tableau $(\tilde\lambda^j_k) = (\lambda^j_k (\ell))$.
We initialise $a^{1 : m}_1 = w_{\ell, 1 : m}$ and recursively apply the following
\begin{align}
\tilde\lambda^k_k &= \lambda^k_k + a^k_k \label{eq:leftboundary}\\
\tilde\lambda^j_k &= a^j_k + \lambda^j_k + \tilde\lambda^{j - 1}_k -\lambda^{j - 1}_k - q\text{Hyp}(\tilde\lambda^{j - 1}_k - \lambda^{j - 1}_k, \lambda^{j - 1}_{k - 1} - \tilde\lambda^{j - 1}_k, \lambda^j_k - \lambda^{j - 1}_k) \qquad j > k \notag\\
a^j_{k + 1} &= a^j_k + \lambda^j_k - \tilde\lambda^j_k + \tilde\lambda^{j - 1}_k - \lambda^{j - 1}_k \notag
\end{align}
\end{thm}
\begin{proof}
Let us recall the algorithm as described in \cite[Section 6.1 and 6.2]{matveev-petrov15}.
In natural language it works as follows.
Suppose we want to insert row $(a_{1 : m})$ into the tableau $(\lambda^j_k)_{1 \le k \le j \le m}$.
The top particle $\lambda^1_1$ receives a push $a_1$ from the input row and finishes its movement.
Recursively, when all the particles at level $j - 1$ finish moving, the increment of the $k$th particle splits into two parts $l^{j - 1}_k$ and $r^{j - 1}_k$, which contribute to the increments of the $(k + 1)$th and the $k$th particles at level $j$ respectively.
The right increment $r^{j - 1}_k$ is a $q$-binomially distributed random variable.
On top of that, the rightmost particle $\lambda^j_1$ of the GT pattern receives a push $a_j$ from the input row.
To be more precise we present a pseudocode description.
\begin{algorithm}[H]
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\Input{tableau $(\lambda^j_k)_{1 \le k \le j \le m}$, row $(a_{1 : m}) \in \mathbb N^m$.}
\Output{tableau $(\tilde\lambda^j_k)_{1 \le k \le j \le m}$.}
\BlankLine
Initialise $\tilde\lambda^1_1 := \lambda^1_1 + a_1$\;
\For {$j := 2 : m$}{
\For {$k := 1 : j$}{
$\tilde\lambda^j_k := \lambda^j_k$
}
$\tilde\lambda^j_1 := \tilde\lambda^j_1 + a_j$\;
\For {$k := 1 : j - 1$}{
sample $r^{j - 1}_k \sim \phi_{q^{-1}, q^{\lambda^j_k - \lambda^{j - 1}_k}, q^{\lambda^{j - 1}_{k - 1} - \lambda^{j - 1}_k}}(\cdot | \tilde \lambda^{j - 1}_k - \lambda^{j - 1}_k)$\;
$\tilde\lambda^j_k := \tilde\lambda^j_k + r^{j - 1}_k$\;
$l^{j - 1}_k := \tilde\lambda^{j - 1}_k - \lambda^{j - 1}_k - r^{j - 1}_k$\;
$\tilde\lambda^j_{k + 1} := \tilde\lambda^j_{k + 1} + l^{j - 1}_k$\;
}
}
\caption{$q$RSK}
\end{algorithm}
Next, by matching $a^j_{k + 1}$ with $l^{j - 1}_k$, and noting $\tilde\lambda^{j - 1}_k - \lambda^{j - 1}_k = l^{j - 1}_k + r^{j - 1}_k$ and $\tilde\lambda^j_k = \lambda^j_k + l^{j - 1}_{k - 1} + r^{j - 1}_k$, and rewriting the $q$-binomial distribution as the $q$-hypergeometric distribution using Lemma \ref{l:qbinqhyp} we arrive at the Noumi-Yamada description of the $q$RSK.
\end{proof}
In this article we write $a^j_k(n)$ in place of $a^j_k$ when the insertion is performed at time $n$, namely to transform $(\lambda^j_k(n - 1))$ into $(\lambda^j_k(n))$.
An alternative way of writing down the Noumi-Yamada description is as follows
\begin{equation}
\begin{aligned}
\tilde\lambda^k_k &= \lambda^k_k + a^k_k\\
\tilde\lambda^j_k &= a^j_k + \lambda^j_k + \tilde\lambda^{j - 1}_k -\lambda^{j - 1}_k - a^j_{k + 1} \qquad j > k\\
a^j_{k + 1} &\sim q\text{Hyp}(\tilde\lambda^{j - 1}_k - \lambda^{j - 1}_k, \lambda^{j - 1}_{k - 1} - \tilde\lambda^{j - 1}_k, \lambda^j_k - \lambda^{j - 1}_k)
\end{aligned}
\label{eq:altny}
\end{equation}
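For concreteness, here is a minimal computational sketch of one insertion step based on \eqref{eq:altny}; it is illustrative only and not part of the formal development. The function names and the inverse-cdf sampler are ours; the pmf of $q\text{Hyp}(m_1, m_2, k)$ is written in the parameters $(m_1, m_2, k)$ (cf. the display in the proof of Theorem \ref{t:qsym}), with the case $m_2 = \infty$ using its degenerate form.
\begin{verbatim}
import math
import random

def qint(n, q):
    # (n)_q = prod_{i = 1 : n} (1 - q^i), with (0)_q = 1
    out = 1.0
    for i in range(1, n + 1):
        out *= 1.0 - q**i
    return out

def qhyp_pmf(s, m1, m2, k, q):
    # pmf of qHyp(m1, m2, k) at s; m2 may be math.inf
    if m2 == math.inf:
        if not 0 <= s <= min(m1, k):
            return 0.0
        return (q**((m1 - s) * (k - s)) * qint(m1, q) * qint(k, q)
                / (qint(s, q) * qint(m1 - s, q) * qint(k - s, q)))
    if not max(0, k - m2) <= s <= min(m1, k):
        return 0.0
    return (q**((m1 - s) * (k - s)) * qint(m1, q) * qint(m2, q)
            * qint(k, q) * qint(m1 + m2 - k, q)
            / (qint(s, q) * qint(m1 - s, q) * qint(k - s, q)
               * qint(m2 - k + s, q) * qint(m1 + m2, q)))

def sample_qhyp(m1, m2, k, q):
    # inverse-cdf sampling on the finite support 0 : (m1 ^ k);
    # assumes the support is non-empty, as it is for interlacing inputs
    u, acc = random.random(), 0.0
    for s in range(min(m1, k) + 1):
        acc += qhyp_pmf(s, m1, m2, k, q)
        if u <= acc:
            return s
    return min(m1, k)

def qrsk_insert(lam, row, q):
    # one qRSK row insertion; lam[j - 1] = [lam^j_1, ..., lam^j_j]
    # with at least m = len(row) levels
    m = len(row)
    new = [level[:] for level in lam]
    for j in range(1, m + 1):
        a = row[j - 1]                                 # a^j_1 = w_{l, j}
        for k in range(1, j):                          # bulk: k < j
            d = new[j - 2][k - 1] - lam[j - 2][k - 1]  # tilde minus old, level j - 1
            m2 = math.inf if k == 1 else lam[j - 2][k - 2] - new[j - 2][k - 1]
            x = sample_qhyp(d, m2, lam[j - 1][k - 1] - lam[j - 2][k - 1], q)
            new[j - 1][k - 1] = a + lam[j - 1][k - 1] + d - x
            a = x                                      # a^j_{k + 1}
        new[j - 1][j - 1] = lam[j - 1][j - 1] + a      # left boundary rule
    return new
\end{verbatim}
Starting from the empty pattern \texttt{lam = [[0], [0, 0], ..., [0] * m]} and inserting the rows of the input matrix in order reproduces the dynamics of $(\lambda^j_k(n))$.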
\subsection{Properties of the $q$RSK}
As with the usual RSK, the $q$RSK preserves the interlacing constraints of the GT patterns along levels and time.
\begin{lem}[Lemma 6.2 of \cite{matveev-petrov15}]
For $\lambda^{j - 1} \prec \lambda^j$ and $\lambda^{j - 1} \prec \tilde\lambda^{j - 1}$, after $q$RSK-inserting a row, we have
\begin{align*}
\lambda^j \prec \tilde \lambda^j, \qquad \tilde\lambda^{j - 1} \prec \tilde\lambda^j.
\end{align*}
\end{lem}
\begin{proof}
By \eqref{eq:qhypsupp} and \eqref{eq:altny}
\begin{align}
a^j_k &\le \lambda^j_{k - 1} - \lambda^{j - 1}_{k - 1} \label{eq:l31} \\
a^j_k &\le \tilde\lambda^{j - 1}_{k - 1} - \lambda^{j - 1}_{k - 1} \label{eq:l32} \\
a^j_{k + 1} &\le (\tilde\lambda^{j - 1}_k \wedge \lambda^j_k ) - \lambda^{j - 1}_k \label{eq:l33}\\
a^j_{k + 1} &\ge \lambda^j_k - \lambda^{j - 1}_k - \lambda^{j - 1}_{k - 1} + \tilde\lambda^{j - 1}_k \label{eq:l34}
\end{align}
When $j = k$, by \eqref{eq:altny} and since $a^k_k \ge 0$ and $\tilde\lambda^{k - 1}_k = 0$,
\begin{align*}
\tilde\lambda^k_k = \lambda^k_k + a^k_k \ge \lambda^k_k \vee \tilde\lambda^{k - 1}_k
\end{align*}
When $j > k$, by \eqref{eq:altny}\eqref{eq:l33}
\begin{align*}
\tilde\lambda^j_k \ge a^j_k + (\lambda^j_k \vee \tilde\lambda^{j - 1}_k)
\end{align*}
Thus we have shown $\tilde\lambda^j_k \ge \lambda^j_k$ and $\tilde\lambda^j_k \ge \tilde\lambda^{j - 1}_k$.
By \eqref{eq:altny}\eqref{eq:l31}\eqref{eq:l34} we have
\begin{align*}
\tilde\lambda^j_k - \lambda^j_{k - 1} \le \lambda^j_k - \lambda^j_{k - 1} + \tilde\lambda^{j - 1}_k - \lambda^{j - 1}_k + \lambda^j_{k - 1} - \lambda^{j - 1}_{k - 1} - (\lambda^j_k - \lambda^{j - 1}_k - \lambda^{j - 1}_{k - 1} + \tilde\lambda^{j - 1}_k) = 0
\end{align*}
Similarly for $j > k$ by \eqref{eq:altny}\eqref{eq:l32}\eqref{eq:l34} we have
\begin{align*}
\tilde\lambda^j_k - \tilde\lambda^{j - 1}_{k - 1} \le \lambda^j_k + \tilde\lambda^{j - 1}_k - \lambda^{j - 1}_k + \tilde\lambda^{j - 1}_{k - 1} - \lambda^{j - 1}_{k - 1} - (\lambda^j_k - \lambda^{j - 1}_k - \lambda^{j - 1}_{k - 1} + \tilde\lambda^{j - 1}_k) = 0
\end{align*}
and for $j = k$ by \eqref{eq:altny}\eqref{eq:l32} and that $\lambda^{j - 1} \prec \lambda^j$ we have
\begin{align*}
\tilde\lambda^j_k - \tilde\lambda^{j - 1}_{k - 1} \le \lambda^j_k + \tilde\lambda^{j - 1}_{k - 1} - \lambda^{j - 1}_{k - 1} - \tilde\lambda^{j - 1}_{k - 1} = \lambda^j_k - \lambda^{j - 1}_{k - 1} \le 0.
\end{align*}
\end{proof}
As with the usual RSK, we set the initial tableau to be empty: $\lambda^j_k (0) \equiv 0, \forall j \ge k \ge 1$.
The following lemma shows that the insertion does not ``propagate'' to the $k$th edge before time $k$.
\begin{lem}\label{l:0}
Starting from the empty initial condition,
\begin{align*}
\lambda^j_k (n) = 0 \qquad \forall 0 \le n < k \le j
\end{align*}
\end{lem}
\begin{proof}
We show this by induction, with the empty initial condition as the base case.
Assume for any $n', j', k'$ such that $0 \le n' < k' \le j', n' \le n, j' \le j, k' \le k, (n', j', k') \neq (n, j, k)$ we have
\begin{align*}
\lambda^{j'}_{k'}(n') = 0.
\end{align*}
If $j > k > 1$, then
\begin{align*}
\lambda^j_k (n) &= \lambda^j_k (n - 1) + \lambda^{j - 1}_k (n) - \lambda^{j - 1}_k (n - 1) \\
&+ q\text{Hyp}(\lambda^{j - 1}_{k - 1} (n) - \lambda^{j - 1}_{k - 1} (n - 1), \lambda^{j - 1}_{k - 2}(n - 1) - \lambda^{j - 1}_{k - 1}(n), \lambda^j_{k - 1}(n - 1) - \lambda^{j - 1}_{k - 1} (n - 1)) \\
&- q\text{Hyp}(\lambda^{j - 1}_{k}(n) - \lambda^{j - 1}_k(n - 1), \lambda^{j - 1}_{k - 1} (n - 1) - \lambda^{j - 1}_k (n), \lambda^j_k(n - 1) - \lambda^{j - 1}_k(n - 1))\\
&= 0
\end{align*}
by the induction assumption and \eqref{eq:qhypk=0}.
The other cases ($j = k$ and $k = 1$) are similar and less complicated.
\end{proof}
The following lemma can be viewed as the boundary case ``dual'' to \eqref{eq:leftboundary}. This duality will become clear when defining the $q$-local moves.
\begin{lem}\label{l:upperboundary}
With empty initial condition, we have
\begin{align*}
\lambda^k_n (n) =
\begin{cases}
\lambda^{k - 1}_n (n) + a^k_n (n) & k > n \\
a^k_n (n) & k = n
\end{cases}
\end{align*}
\end{lem}
\begin{proof}
When $k > n$, by Theorem \ref{t:noumiyamada}, Lemma \ref{l:0} and \eqref{eq:qhypk=0}
\begin{align*}
\lambda^k_n(n) &= \lambda^{k - 1}_n (n) + a^k_n(n) + \lambda^k_n(n - 1) - \lambda^{k - 1}_n (n - 1)\\
&- q\text{Hyp}(\lambda^{k - 1}_n(n) - \lambda^{k - 1}_n (n - 1), \lambda^{k - 1}_{n - 1}(n) - \lambda^{k - 1}_n (n), \lambda^k_n (n - 1) - \lambda^{k - 1}_n (n - 1))\\
&= \lambda^{k - 1}_n (n) + a^k_n(n).
\end{align*}
When $k = n$, by Theorem \ref{t:noumiyamada} and Lemma \ref{l:0}
\begin{align*}
\lambda^k_n (n) = \lambda^k_n (n - 1) + a^k_n(n) = a^k_n (n)
\end{align*}
\end{proof}
The next lemma shows that $q$RSK, like the usual RSK, is weight-preserving.
\begin{lem}\label{l:wt}
Given the empty initial condition, let $(\lambda^k_j (n))$ be the output of $q$RSK taking a matrix $(w_{i, j})$. Then almost surely
\begin{align*}
\lambda^k_1 (n) + \lambda^k_2 (n) + \dots + \lambda^k_{k \wedge n}(n) = \sum_{i = 1 : n, j = 1 : k} w_{i, j}
\end{align*}
\end{lem}
\begin{proof}
We first show by induction that
\begin{align}
\sum_{j = 1 : k} \lambda^k_j (n) = \sum_{i = 1 : n, j = 1 : k} w_{i, j} \label{eq:wt}
\end{align}
When $n = 0$ the empty initial condition shows that the above formula is true for all $k \ge 1$.
By recursively applying \eqref{eq:leftboundary} and noting $a^1_1(n) = w_{n, 1}$, we see the above formula is true for $k = 1$ and all $n \ge 0$.
Assuming the above formula is true for $(k', n') \in \{(k - 1, n), (k, n - 1), (k - 1, n - 1)\}$, summing over $j = 1 : k$ in \eqref{eq:altny}, and by noting $a^k_1(n) = w_{n, k}$ one has
\begin{align*}
\sum_{j = 1 : k} \lambda^k_j (n) &= \sum_{j = 1 : k} \lambda^k_j (n - 1) + \sum_{j = 1 : k - 1} \lambda^{k - 1}_j (n) - \sum_{j = 1 : k - 1} \lambda^{k - 1}_j (n - 1) + w_{n, k}\\
&= \sum_{i = 1 : n - 1, j = 1 : k} w_{i, j} + \sum_{i = 1 : n, j = 1 : k - 1}w_{i, j} - \sum_{i = 1 : n - 1, j = 1 : k - 1} w_{i, j} + w_{n, k} = \sum_{i = 1 : n, j = 1 : k} w_{i, j}.
\end{align*}
Then using Lemma \ref{l:0} on \eqref{eq:wt} we arrive at the identity in the statement of the lemma.
\end{proof}
\subsection{The growth diagrams and the symmetry property}
The growth diagram was developed in \cite{fomin86,fomin95}; see, for example, \cite[Section 5.2]{sagan00} for an exposition. For the RSK algorithm it is an integer lattice $[n] \times [m]$, where each vertex is labelled by a Young diagram, and each cell is labelled by a number.
More specifically, the $(\ell, j)$-cell is labelled by $w_{\ell, j}$, the $(\ell, j)$-th entry of the input matrix, whereas the label of vertex $(\ell, j)$ is the Young diagram $\lambda^j_{1 : j} (\ell)$.
The local growth rule is a function $F_{\text{RSK}}(\lambda, \mu^1, \mu^2, x)$ such that
\begin{align*}
\lambda^j (\ell) = F_{\text{RSK}} (\lambda^{j - 1} (\ell - 1), \lambda^{j - 1} (\ell), \lambda^j (\ell - 1), w_{\ell, j})
\end{align*}
for all $j$ and $\ell$.
The local growth rule generates the whole diagram.
To see this, one may label the boundary vertices $(0, 0 : m)$ and $(0 : n, 0)$ with empty Young diagrams, and apply $F_\text{RSK}$ recursively.
By the definition of the $P$- and $Q$-tableaux, the labels of the top row and the labels of the rightmost column of the growth diagram characterise the $P$- and $Q$-tableaux respectively.
Therefore the symmetry property of the RSK algorithm is reduced to the symmetry property of the local rule:
\begin{align*}
F_{\text{RSK}} (\lambda, \mu^1, \mu^2, x) = F_{\text{RSK}} (\lambda, \mu^2, \mu^1, x).
\end{align*}
To see this, note that transposing the matrix amounts to transposing the lattice together with the cell labels.
By the symmetry property of the local rule, the growth rule is invariant under this transposition, therefore one can transpose the vertex labels as well.
As a result the $P$- and $Q$-tableaux are swapped.
This argument will be made more symbolic in the proof of Theorem \ref{t:qsym}.
\subsection{The symmetry property for the $q$RSK}
In the case where the algorithm itself is randomised, or weighted, the local rule branches, and the weights can be placed on the edges.
Examples are the column and the row $q$RS insertions defined in \cite{oconnell-pei13,borodin-petrov13}, whose symmetry properties were proved using this branching version of the growth diagram in \cite{pei14}.
In this section we prove the symmetry property for the $q$RSK in the same way.
\begin{proof}[Proof of Theorem \ref{t:qsym}]
As in the RSK and $q$RS cases, we first show that the local rule is symmetric.
That is, writing $\lambda^m(n) = F_{qRSK}(\lambda^{m - 1} (n - 1), \lambda^{m - 1} (n), \lambda^m (n - 1), w_{n, m})$ we show that $F_{qRSK}(\lambda, \mu^1, \mu^2, x)$ is symmetric in $\mu^1$ and $\mu^2$:
\begin{align*}
F_{q\text{RSK}}(\lambda, \mu^1, \mu^2, x) \overset{d}{=} F_{q\text{RSK}}(\lambda, \mu^2, \mu^1, x)
\end{align*}
In the rest of the proof we write $F = F_{qRSK}$.
As remarked before, we use the convention that for any Young diagram $\lambda$, $\lambda_0 = \infty$.
By Theorem \ref{t:noumiyamada} we can write
\begin{align}
F(\lambda, \mu^1, \mu^2, x) = (F_k(\lambda, \mu^1, \mu^2, x_k))_{k \ge 1} \label{eq:localrule}
\end{align}
where
\begin{align}
F_k(\lambda, \mu^1, \mu^2, x_k) = \mu^2_k + \mu^1_k - \lambda_k + x_k - x_{k + 1} \label{eq:localrule2}
\end{align}
where $x_1 = x$ and $x_{k + 1} \sim q\text{Hyp}(\mu^1_k - \lambda_k, \lambda_{k - 1} - \mu^1_k, \mu^2_k - \lambda_k)$ has pmf symmetric in $\mu^1$ and $\mu^2$:
\begin{align*}
f_{x_{k + 1}} (s) &= q^{(\mu^1_k - \lambda_k - s) (\mu^2_k - \lambda_k - s)}\\
&\times {(\mu^1_k - \lambda_k)_q (\lambda_{k - 1} - \mu^1_k)_q (\mu^2_k - \lambda_k)_q (\lambda_{k - 1} - \mu^2_k)_q \over (s)_q (\mu^1_k - \lambda_k - s)_q (\mu^2_k - \lambda_k - s)_q (\lambda_k + \lambda_{k - 1} - \mu^1_k - \mu^2_k + s)_q (\lambda_{k - 1} - \lambda_k)_q}.
\end{align*}
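In particular, both the power of $q$ and the product of $q$-Pochhammer symbols above are manifestly invariant under swapping $\mu^1$ and $\mu^2$, which proves the claimed symmetry of the pmf.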
Note that the local rule $F$ does not ``see'' either the level or the time of the insertion.
Therefore the Young diagrams have to be padded with infinitely many trailing $0$s.
This is why we let the edge index $k$ range from $1$ to $\infty$ in \eqref{eq:localrule}.
It is consistent with the Noumi-Yamada description in the boundary case $j = k$ and the ``null'' case $j < k$.
When $j = k$,
\begin{align*}
\tilde \lambda^k_k = a^k_k + \lambda^k_k + \tilde \lambda^{k - 1}_k - \lambda^{k - 1}_k - q\text{Hyp}(\tilde\lambda^{k - 1}_k - \lambda^{k - 1}_k, \lambda^{k - 1}_{k - 1} - \tilde\lambda^{k - 1}_k, \lambda^k_k - \lambda^{k - 1}_k) = \lambda^k_k + a^k_k
\end{align*}
due to $\tilde\lambda^{k - 1}_k = \lambda^{k - 1}_k = 0$ and \eqref{eq:qhypk=0}.
Similarly when $j < k$ all terms in the right hand side of \eqref{eq:localrule2} are $0$, so that $\tilde\lambda^j_k$ can stay $0$.
The rest follows the same argument as in the proof of \cite[Theorem 3]{pei14}.
Here we produce a less visual and more symbolic argument.
Let $\pi = ((0, m) = (j_0, k_0) \to (j_1, k_1) \to \dots \to (j_{m + n}, k_{m + n}) = (n, 0))$ be a down-right path from $(0, m)$ to $(n, 0)$, that is $(j_i, k_i) - (j_{i - 1}, k_{i - 1}) \in \{(0, -1), (1, 0)\}$.
Let $G$ be the area enclosed by $\pi$:
\begin{align*}
G = \{(j', k'): \exists i \text{ such that } 0 \le j' \le j_i, 0 \le k' \le k_i\}
\end{align*}
Let $A(G) = \{w_{i, j}: i, j \ge 1, (i, j) \in G\}$ be the weights in the $G$-area.
Denote by $L(A, \pi) = (\lambda^{k_i}(j_i))_i$ the Young diagrams labelled along $\pi$ with input $A$.
It suffices to show that for any $(A, \pi)$, $L$ satisfies the following symmetry property:
\begin{align}
L(A, \pi) \overset{d}{=} L(A', \pi')^r \label{eq:sympath}
\end{align}
where $A' := (w_{j, i})_{i, j \ge 1}$ is the transpose of $A$, $\pi' := ((k_{m + n}, j_{m + n}), (k_{m + n - 1}, j_{m + n - 1}), \dots, (k_0, j_0))$ is the transpose of $\pi$, and $\cdot^r$ is the reverse: $b^r := (b_\ell, b_{\ell - 1}, \dots, b_1)$ for a tuple $b = (b_1, b_2, \dots, b_\ell)$.
The symmetry property of the $q$RSK follows when one takes $\pi = \pi^* = ((0, m) \to (1, m) \to \dots \to (n, m) \to (n, m - 1) \to \dots \to (n, 0))$.
We show \eqref{eq:sympath} by induction.
When $\pi = ((0, m) \to (0, m - 1) \to \dots \to (0, 0) \to (1, 0) \to \dots \to (n, 0))$ it is true because by the boundary condition all the diagrams along $\pi$ are empty.
When \eqref{eq:sympath} is true for path $\pi$, assume $\pi \neq \pi^*$ (otherwise we are done), then there exists at least one $\ell$ such that $j_\ell + 1= j_{\ell + 1}$ and $k_\ell + 1 = k_{\ell - 1}$, that is $(j_{\ell - 1}, k_{\ell - 1}), (j_\ell, k_\ell), (j_{\ell + 1}, k_{\ell + 1})$ forms an L-shape.
Now one can apply the symmetry of the local growth rule $F$ to the cell containing this L-shape, to obtain \eqref{eq:sympath} for $\pi^+$, where $\pi^+$ has the same coordinates as $\pi$ except that $(j_\ell, k_\ell) := (j_\ell + 1, k_\ell + 1)$.
\end{proof}
\section{$q$-polymer}\label{s:qpolymer}
\subsection{From RSK algorithms to polymers}
For the RSK algorithm, due to Greene's theorem \cite{greene74}, the first edge of the output tableaux is the partition function of directed last passage percolation (DLPP) on the input matrix:
Let $(\lambda^j_k(\ell))$ be the output of the RSK algorithm taking matrix $(w_{i, j})_{n \times m}$, then
\begin{align*}
Z_0(\ell, j) := \lambda^j_1(\ell) = \max_{\pi: (1, 1) \to (\ell, j)} \sum_{(i, k) \in \pi} w_{i, k},
\end{align*}
where $\pi: (1, 1) \to (\ell, j)$ indicates $\pi$ is an upright path from $(1, 1)$ to $(\ell, j)$.
Locally, $Z_0$ satisfies the following recursive relation, which is what happens at the first edge in the Noumi-Yamada description:
\begin{align*}
Z_0 (\ell, j) = (Z_0(\ell, j - 1) \vee Z_0(\ell - 1, j)) + w_{\ell, j}.
\end{align*}
Similarly for the gRSK, the first edge corresponds to the partition functions of the directed polymer (DP) of the matrix:
\begin{align*}
Z_1(\ell, j) := \log z^j_1(\ell) = \log \left(\sum_{\pi: (1, 1) \to (\ell, j)} \prod_{(i, k) \in \pi} e^{w_{i, k}}\right)
\end{align*}
And locally we have
\begin{align*}
Z_1(\ell, j) = \log(e^{Z_1(\ell, j - 1)} + e^{Z_1(\ell - 1, j)}) + w_{\ell, j}.
\end{align*}
Because of this, we define the $q$-polymer by focusing on the first edge $Z(n, m) := \lambda^m_1(n)$.\footnote{Note that the first edge of the $q$RSK was regarded as an interacting particle system called the geometric $q$-pushTASEP in~\cite[Section 6.3]{matveev-petrov15}, which we will also consider in the next section.}
Then by the Noumi-Yamada description of the $q$RSK the $q$-polymer can be defined locally by
\begin{align*}
Z_q(1, 1) &= w_{1, 1}, \\
Z_q(n, 1) &= Z_q(n - 1, 1) + w_{n, 1}, \qquad n > 1\\
Z_q(1, m) &= Z_q(1, m - 1) + w_{1, m}, \qquad m > 1\\
Z_q(n, m) &= w_{n, m} + Z_q(n - 1, m) + Z_q(n, m - 1) - Z_q(n - 1, m - 1) - X' \\
X' &\sim q\text{Hyp}(Z_q(n, m - 1) - Z_q(n - 1, m - 1), \infty, Z_q(n - 1, m) - Z_q(n - 1, m - 1)),\; m, n > 1
\end{align*}
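As a quick consistency check: at $q = 0$ the distribution $q\text{Hyp}(m_1, m_2, k)$ degenerates to the point mass at $m_1 \wedge k$ (the exponent $(m_1 - s)(k - s)$ in its pmf must vanish), so the bulk recursion reduces to
\begin{align*}
Z_0(n, m) = w_{n, m} + Z_0(n - 1, m) + Z_0(n, m - 1) - Z_0(n - 1, m) \wedge Z_0(n, m - 1) = (Z_0(n - 1, m) \vee Z_0(n, m - 1)) + w_{n, m},
\end{align*}
recovering the local DLPP recursion above.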
It is not known whether a more global interpretation of $Z_q$ for $0 < q < 1$ exists, like the first definitions of $Z_0$ and $Z_1$ involving directed paths.
More generally, the full Greene's theorem interprets the sum of the first $k$ edges of a fixed level of the (g)RSK-output triangular patterns as similar statistics of $k$ directed non-intersecting paths, but the $q$-version of this theorem is also unknown, as is a sensible definition of a $q$-version of $k$-polymers.
But locally, the DLPP, DP and $q$-polymer models are very similar, as we shall see now.
\subsection{$q$-Burke property}\label{s:qburke}
Fix $\ell$ and $j$, let
\begin{align*}
U_q &= Z_q(\ell, j - 1) - Z_q(\ell - 1, j - 1) \\
V_q &= Z_q(\ell - 1, j) - Z_q(\ell - 1, j - 1) \\
X_q &= w_{\ell, j} \\
U_q' &= Z_q(\ell, j) - Z_q(\ell - 1, j) \\
V_q' &= Z_q(\ell, j) - Z_q(\ell, j - 1)
\end{align*}
\begin{lem}
For $0 \le q \le 1$ we have
\begin{align*}
U_q' - U_q = V_q' - V_q = X_q - X_q' \qquad (\text{B1}.q)
\end{align*}
where
\begin{align*}
X_q' = X_q'(U_q, V_q)
\begin{cases}
= U_0 \wedge V_0 & q = 0 \qquad (\text{B2}.0)\\
\sim q\text{Hyp}(U_q, \infty, V_q) & 0 < q < 1 \qquad (\text{B2}.q)\\
= - \log (e^{- U_1} + e^{- V_1}) & q = 1 \qquad (\text{B2}.1)
\end{cases}
\end{align*}
\end{lem}
\begin{proof}
Immediate from the Noumi-Yamada descriptions at the first edge.
\end{proof}
We call (B1.$q$) and (B2.$q$) the Burke relations. When $q = 0$ or $1$, the Burke relations define the RSK algorithms because the dynamics are the same along all edges of the GT patterns, whereas when $q \in (0, 1)$ the $q$RSK dynamics at the non-first edges differ from the Burke relation.
Also for $q = 0$ or $1$, when $U_q$, $V_q$ and $X_q$ are random with certain distributions, the Burke relations yield the Burke properties in the DLPP and DP cases.
Let us recall the Burke properties in these two cases.
For convenience we omit the subscripts.
\begin{prp}\label{p:burkegeom}
Let $(U, V, X, U', V', X')$ satisfy the Burke relations (B1.$q$) and (B2.$q$).
Suppose $(U, V, X)$ are independent random variables with one of the following distributions
\begin{itemize}
\item When $q = 0$
\begin{itemize}
\item Fix $0 < \alpha, \beta < 1$. Suppose $U \sim Geom(1 - \alpha)$, $V \sim Geom(1 - \beta)$ and $X \sim Geom(1 - \alpha \beta)$.
\item Or fix $\alpha, \beta > 0$. Suppose $U \sim Exp(\alpha)$, $V \sim Exp(\beta)$ and $X \sim Exp(\alpha + \beta)$.
\end{itemize}
\item When $q = 1$, fix $\alpha, \beta > 0$. Suppose $\exp(-U) \sim Gamma(\alpha)$, $\exp(-V) \sim Gamma(\beta)$ and $\exp(-X) \sim Gamma(\alpha + \beta)$.
\end{itemize}
Then in each of the above cases
\begin{align*}
(U', V', X') \overset{d}{=} (U, V, X).
\end{align*}
\end{prp}
The Burke property with geometric weights can be found in e.g. \cite[Lemma 2.3]{seppalainen09}, the one with exponential weights in \cite{balazs-cator-seppalainen06}, and the one with log-gamma weights in \cite{seppalainen12}.
The $q$-Burke property is similar.
\begin{prp}\label{t:qburke}
Let $(U, V, X, U', V', X')$ satisfy (B1.$q$) and (B2.$q$) with $0 < q < 1$. Let $0 < \alpha, \beta < 1$. Let $U, V$ and $X$ be independent random variables such that $U \sim q\text{Geom}(\alpha)$, $V \sim q\text{Geom}(\beta)$ and $X \sim q\text{Geom} (\alpha \beta)$.
Then $(U', V', X') \overset{d}{=} (U, V, X)$.
\end{prp}
\begin{proof} By the definitions of the $q$-geometric and the $q$-hypergeometric distributions,
\begin{align*}
\mathbb P(U' &= u, V' = v, X' = x) \\
&= \mathbb P(U + X = u + x, V + X = v + x, X' = x) \\
&= \sum_y \mathbb P(X = y, U = u + x - y, V = v + x - y, X' = x)\\
&= \sum_y {(\alpha\beta)^y \over (y)_q} (\alpha\beta; q)_\infty {\alpha^{u + x - y} \over (u + x - y)_q} (\alpha; q)_\infty {\beta^{v + x - y} \over (v + x - y)_q} (\beta; q)_\infty \\
&\;\;\;\;\;\;\;\times q^{(u - y)(v - y)} {(u + x - y)_q (v + x - y)_q \over (x)_q (u - y)_q (v - y)_q} \\
&= (\alpha\beta; q)_\infty (\alpha; q)_\infty (\beta; q)_\infty {(\alpha\beta)^x \over (x)_q} \alpha^{u} \beta^{v} \sum_y {q^{(u - y) (v - y)} \over (y)_q (u - y)_q (v - y)_q} \\
&= (\alpha\beta; q)_\infty (\alpha; q)_\infty (\beta; q)_\infty {(\alpha\beta)^x \over (x)_q} {\alpha^{u} \over (u)_q} {\beta^{v} \over (v)_q}
\end{align*}
where the last identity is due to \eqref{eq:qhypinf}.
\end{proof}
When $q = 0$ or $1$ the converse of Proposition \ref{p:burkegeom} is also true (see e.g. \cite{seppalainen12} for the $q = 1$ case).
That is, the Burke relation and the identification in law of the triplets imply the specific distributions (geometric, exponential and log-gamma) under reasonable assumptions, thanks to the characterisation results in~\cite{ferguson64,ferguson65,lukacs55}.
The converse of the $q$-Burke property is open.
The $q$-Burke property allows one to tackle the $q$-polymer on the $\mathbb N^2$ lattice (obtained by a simple shift of the model on the $\mathbb N_{>0}^2$ lattice in previous considerations) with the following boundary condition:
\begin{align*}
w_{0, 0} &= 0 \\
w_{i, 0} &\sim q\text{Geom}(\alpha), \qquad i \ge 1 \\
w_{0, j} &\sim q\text{Geom}(\beta), \qquad j \ge 1 \\
w_{i, j} &\sim q\text{Geom}(\alpha \beta) \qquad i, j \ge 1
\end{align*}
We call such a configuration a $q$-polymer with stationary boundary conditions.
Now we can show the strong law of large numbers of the partition functions.
\begin{proof}[Proof of Theorem \ref{t:lln}]
The proof is similar to the version for DLPP with geometric weights, see e.g. \cite[Theorem 4.12]{romik14}.
Let us consider the increment of $Z$ along the paths $(0, 0) \to (1, 0) \to \dots \to (\ell, 0) \to (\ell, 1) \to (\ell, 2) \to \dots \to (\ell, j)$.
Let
\begin{align*}
U(k) &= Z(k, 0) - Z(k - 1, 0), \qquad k = 1 : \ell \\
V(k') &= Z(\ell, k') - Z(\ell, k' - 1), \qquad k' = 1 : j
\end{align*}
The horizontal increments $U(1 : \ell) = w_{1 : \ell, 0}$ are i.i.d. random variables with distribution $q\text{Geom}(\alpha)$.
By using Proposition \ref{t:qburke} recursively, the vertical increments $V(1 : j)$ are i.i.d. random variables with distribution $q\text{Geom}(\beta)$.
Using \eqref{eq:qgeommoment} we obtain \eqref{eq:qpolyexp}, and with the usual strong law of large numbers we obtain \eqref{eq:lln}.
\end{proof}
In \cite[Section 6.3]{matveev-petrov15}, the dynamics of the first edge of the tableaux was formulated as an interacting particle system, called the geometric $q$-pushTASEP.
Therefore it has a natural correspondence with the $q$-polymer, where $Z(n, m) + m = \xi_m(n)$ is the location of the $m$th particle at time $n$.
Here we describe the geometric $q$-pushTASEP whose initial condition corresponds to the $q$-polymer with stationary boundary condition.
\begin{dfn}[Stationary geometric $q$-pushTASEP]
Let $(\xi_0 (n), \xi_1 (n), \dots)$ be the locations of the particles at time $n$, such that $\xi_0(n) \le \xi_1(n) \le \dots$ for all $n$.
Initially, $\xi_0(0) = 0$, $\xi_m(0) - \xi_{m - 1}(0) - 1 \sim q\text{Geom}(\beta)$.
That is, the $0$th particle is at $0$, and the gaps between consecutive particles are independently $q$-geometric distributed random variables with parameter $\beta$.
At time $n$, the $0$th particle jumps forward by a distance distributed according to the $q$-geometric distribution with parameter $\alpha$; sequentially, given that the $(m - 1)$th particle has jumped, the $m$th particle jumps forward by a distance which is the sum of a $q$-geometric random variable with parameter $\alpha \beta$ and a random variable $Y$ such that $\xi_{m - 1} (n) - \xi_{m - 1} (n - 1) - 1 - Y \sim q\text{Hyp}(\xi_{m - 1} (n) - \xi_{m - 1}(n - 1), \infty, \xi_m (n - 1) - \xi_{m - 1}(n - 1) - 1)$.
\end{dfn}
Thus via the translation of (the arguments in the proof of) Theorem \ref{t:lln} we have
\begin{cly}
Let $\xi_{0 : \infty}$ be the locations of the stationary geometric $q$-pushTASEP.
Then we have the following
\begin{enumerate}
\item For any $j \ge 0$, the $j$th particle performs a simple random walk with increments distributed according to $q\text{Geom}(\alpha)$.
\item At any time, the gaps between neighbouring particles are independently distributed according to $q\text{Geom}(\beta)$.
\item Almost surely, $\lim_{N \to \infty} {\xi_{\lfloor N y \rfloor} (\lfloor N x \rfloor)} / N = x \gamma(\alpha) + y (\gamma(\beta) + 1)$ for $x, y \ge 0$.
\end{enumerate}
\end{cly}
\subsection{Classical limit of the Burke relations}
It is natural to guess that in the classical limit the $q$-Burke relations (B1.$q$) and (B2.$q$) become the Burke relations (B1.1) and (B2.1) of the DP.
Here we give a heuristic argument justifying this statement.
The argument may be compared to that in the proof of \cite[Lemma 8.17]{matveev-petrov15}.
In the rest of this section, for convenience we omit the $\lfloor \cdot \rfloor$ when an integer is required.
Given $U, V$ we define
\begin{align*}
U_\epsilon &= m(\epsilon) + \epsilon^{-1} U \\
V_\epsilon &= m(\epsilon) + \epsilon^{-1} V
\end{align*}
and sample $X_\epsilon \sim q\text{Hyp}(U_\epsilon, \infty, V_\epsilon)$, then by Items 2 and 3 of Lemma \ref{l:qclassicallimits}
\begin{align*}
\mathbb P(X_\epsilon &= m(\epsilon) + \epsilon^{-1} x) = q^{\epsilon^{- 2} (U - x) (V - x)} {(m(\epsilon) + \epsilon^{-1} U)_q (m(\epsilon) + \epsilon^{-1} V)_q \over (m(\epsilon) + \epsilon^{-1} x)_q (\epsilon^{-1}(U - x))_q (\epsilon^{-1} (V - x))_q}\\
&= \exp( \epsilon^{-1}(- (U - x) (V - x) + {\pi^2 \over 6} - \text{Li}_2(e^{x - U}) - \text{Li}_2(e^{x - V})) + o(\epsilon^{-1})) \\
&=: \exp(\epsilon^{-1} f(x) + o(\epsilon^{-1}))
\end{align*}
Using the reflection property of the dilogarithm function
\begin{align*}
\text{Li}_2(z) + \text{Li}_2(1 - z) = {\pi^2 \over 6} - \log z \log (1 - z)
\end{align*}
we have
\begin{align*}
f(- \log (e^{- U} + e^{- V})) = 0.
\end{align*}
By taking derivatives of $f$ we also have
\begin{align*}
f'(- \log (e^{- U} + e^{- V})) = 0\\
f''(x) < 0
\end{align*}
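Explicitly, the derivatives are
\begin{align*}
f'(x) = (U - x) + (V - x) + \log(1 - e^{x - U}) + \log(1 - e^{x - V}), \qquad f''(x) = - 2 - {e^{x - U} \over 1 - e^{x - U}} - {e^{x - V} \over 1 - e^{x - V}},
\end{align*}
and evaluating $f'$ at $x = - \log(e^{- U} + e^{- V})$, where $e^{x - U} + e^{x - V} = 1$, gives $0$.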
Hence $f$ achieves unique maximum $0$ at $- \log (e^{- U} + e^{- V})$.
Now we can define $X(\epsilon)$ by the relation $X_\epsilon = m(\epsilon) + \epsilon^{-1} X(\epsilon)$ and obtain
\begin{align*}
X(\epsilon) \to X' = - \log (e^{- U} + e^{- V})
\end{align*}
and we have recovered (B2.1).
\section{$q$-local moves}\label{s:qlocalmoves}
In this section we define the $q$-local moves and prove Theorem \ref{t:lmpush}.
In a sense, the local moves are very fundamental building blocks, as they unify the PNG model and the RSK algorithm.
Let us define an object by adding to a 2 by 2 matrix a labelled edge connecting the $21$- and $22$-entries:
$
\begin{pmatrix}
a & & b \\ c & \underset{e}{\text{---}} & d
\end{pmatrix}
$
The $q$-deformation of the local moves consists of two maps:
\begin{align*}
l:
\begin{pmatrix}
a & & b \\ c & \underset{e}{\text{---}} & d
\end{pmatrix}
\mapsto
\begin{pmatrix}
a' & b \\ c & b + c + d - a - a'
\end{pmatrix};
\qquad
l':
\begin{pmatrix}
a & b \\ c & d
\end{pmatrix}
\mapsto
\begin{pmatrix}
a & & b \\ c & \underset{d - c}{\text{---}} & d
\end{pmatrix}
\end{align*}
where $a'$ is a random variable with $q$-hypergeometric distribution $q\text{Hyp}(c - a, e, b - a)$.
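For orientation, at $q = 0$ the distribution $q\text{Hyp}(c - a, e, b - a)$ degenerates (for the parameter ranges arising here) to the point mass at $(c - a) \wedge (b - a)$, so $l$ becomes the deterministic map sending the $11$- and $22$-entries to
\begin{align*}
a' = (b \wedge c) - a, \qquad b + c + d - a - a' = (b \vee c) + d,
\end{align*}
consistent with the $q = 0$ (DLPP) dynamics of Section \ref{s:qpolymer}.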
On the boundary we define
\begin{align}
l:
\begin{pmatrix}
& & \\ c & \underset{-}{\text{---}} & d
\end{pmatrix}
&\mapsto
\begin{pmatrix}
& \\ c & c + d
\end{pmatrix}; \label{eq:lmb1}\\
\begin{pmatrix}
& & b \\ & \underset{-}{\text{---}} & d
\end{pmatrix}
&\mapsto
\begin{pmatrix}
& b \\ & b + d
\end{pmatrix}; \label{eq:lmb2}\\
\begin{pmatrix}
&& \\ & \underset{-}{\text{---}} & d
\end{pmatrix}
&\mapsto
\begin{pmatrix}
& \\ & d
\end{pmatrix}. \label{eq:lmb3}
\end{align}
And
\begin{align*}
l':
\begin{pmatrix}
& \\ c & d
\end{pmatrix}
&\mapsto
\begin{pmatrix}
& & \\ c & \underset{-}{\text{---}} & d
\end{pmatrix};\\
\begin{pmatrix}
& b \\ & d
\end{pmatrix}
&\mapsto
\begin{pmatrix}
& & b \\ & \underset{-}{\text{---}} & d
\end{pmatrix}; \\
\begin{pmatrix}
& \\ & d
\end{pmatrix}
&\mapsto
\begin{pmatrix}
&& \\ & \underset{-}{\text{---}} & d
\end{pmatrix}.
\end{align*}
Given an array $A = (w_{i j})_{i, j \ge 1}$ with labelled horizontal edges connecting neighbouring entries in the same rows, let $l_{ij}$ and $l'_{ij}$ be $l$ and $l'$ acting on the $(i, j)$-th sub-2 by 2 matrix, namely $l_{i, j}$ (resp. $l'_{i, j}$) acts on $A$ by acting on
$ \begin{pmatrix} w_{i - 1, j - 1} & & w_{i - 1, j} \\ w_{i, j - 1} & \underset{e_{i j}}{\text{---}} & w_{i, j} \end{pmatrix} $
\Bigg(resp.
$ \begin{pmatrix} w_{i - 1, j - 1} & w_{i - 1, j} \\ w_{i, j - 1} & w_{i, j} \end{pmatrix} $
\Bigg)
and keeping other entries unchanged.
Similarly as in \cite{oconnell-seppalainen-zygouras14}, define $\rho_{i j}$ by
\begin{align*}
\rho_{i j} &= (l'_{(i - 1 - j)^+ + 1, (j - i + 1)^+ + 1} \circ \dots \circ l'_{i - 2, j - 1} \circ l'_{i - 1, j} )\\
&\qquad\qquad\circ (l_{(i - j)^+ + 1, (j - i)^+ + 1} \circ \dots \circ l_{i - 1, j - 1} \circ l_{i j})
\end{align*}
where for any integer $n$ we denote $(n)^+ := n \vee 0$ to be the positive part of $n$.
The operators $l'_{ij}$ are purely auxiliary, as they only serve to store differences like $t_{i, j} - t_{i, j - 1} = \lambda^{j - 1}_{k - 1} - \tilde\lambda^{j - 1}_{k}$ before $t_{i, j}$ is irrecoverably changed (see the proof of Theorem \ref{t:qlocalmoves} for more details).
Given an input array $(w_{i j})$, we initialise by labelling all the horizontal edges between $w_{i, j - 1}$ and $w_{i, j}$ with $e_{i, j} = \infty$.
For two partitions $\lambda$ and $\mu$, denote by $\lambda \nearrow \mu$ if $\lambda \prec \mu$ and $\mu = \lambda + \mathbf{e}_i$ for some $i$.
Let $\Lambda$ be a Young diagram of size $N$, and $\emptyset = \Lambda(0) \nearrow \Lambda(1) \nearrow \Lambda(2) \nearrow \dots \nearrow \Lambda(N) = \Lambda$ be a sequence of growing Young diagrams, which we call a growth sequence of $\Lambda$.
For $\lambda \nearrow \mu$, denote $\mu / \lambda$ as the coordinate of the box added to $\lambda$ to form $\mu$.
For example, if $\lambda = (4, 2, 1)$ and $\mu = (4, 3, 1)$ then $\mu / \lambda = (2, 3)$.
As an aside, it is well known that a growth sequence $\Lambda(0 : N)$ of $\Lambda$ corresponds to a standard Young tableau $T$ of shape $\Lambda$, where $T$ can be obtained by filling the box with coordinate ${\Lambda(i) / \Lambda(i - 1)}$ with $i$.
Now define
\begin{align*}
T_\Lambda = \rho_{\Lambda / \Lambda(N - 1)} \circ \dots \circ \rho_{\Lambda(2) / \Lambda(1)} \circ \rho_{\Lambda(1) / \emptyset}
\end{align*}
to be an operator acting on integer arrays on $\mathbb N_{>0}^2$.
It does not depend on the choice of the sequence as we shall see in the proof of Theorem \ref{t:qlocalmoves}, hence it is well defined.
Denote by $S(\Lambda)$ the boundary of $\Lambda$:
\begin{align*}
S(\Lambda) = \{(i, j) \in \Lambda: (i + 1, j + 1) \not\in \Lambda\}
\end{align*}
The set $S(\Lambda)$ determines a coordinate system of all cells in $\Lambda$.
To see this, for any $(i', j') \in \Lambda$, there exist a unique $(i, j) \in S(\Lambda)$ and a unique $k \ge 1$ such that $(i', j')$ and $(i, j)$ are on the same diagonal and their ``distance'' is $k - 1$:
\begin{align*}
i - i' = j - j' = k - 1
\end{align*}
In this case we call $(i, j, k)$ the $\Lambda$-coordinate of $(i', j')$.
In the following, for some big integers $\hat N$ and $\hat M$, let $I(\Lambda) := [\hat N] \times [\hat M]$ be a rectangular lattice covering $\Lambda$: $I(\Lambda) \supset \Lambda$.
\begin{thm}\label{t:qlocalmoves}
Let $(t_{ij}) = T_\Lambda A$.
For any $(i', j') \in \Lambda$ with $\Lambda$-coordinate $(i, j, k)$
\begin{align*}
t_{i', j'} = \lambda^j_k(i).
\end{align*}
where $(\lambda^j_k(i))$ is the output of $q\text{RSK}(A(I(\Lambda)))$. Note the above equality is an identity in joint distribution over all boxes $(i', j')$.
Specifically when $\Lambda = [n] \times [m]$ is a rectangular lattice, by specifying $I(\Lambda) = \Lambda$ the $P$- and $Q$-tableaux of $q\text{RSK}(A(\Lambda))$
\begin{align*}
\lambda^k_j &= t_{n - j + 1, k - j + 1}, \qquad j = 1 : k \wedge n, k = 1 : m \\
\mu^k_j &= t_{k - j + 1, m - j + 1}, \qquad k = 1 : n, j = 1 : k \wedge m
\end{align*}
form exactly the output matrix $T_\Lambda A(\Lambda)$, thus the local moves coincide with the $q$RSK algorithm taking the matrix $A(\Lambda)$ in this case.
\end{thm}
Here is an illustration, where we show the shape $\Lambda$, and $t_{i'j'}$ which corresponds to $\lambda^j_k(i)$.
\begin{center}
\includegraphics{figure-lambdaandt.pdf}
\end{center}
\begin{proof}
We prove it by induction.
When $\Lambda = (1)$, that is, it is a one-by-one matrix, applying the local move $T_\Lambda = \rho_{1, 1} = l_{1, 1}$ on $A$ we obtain the correct result $\lambda^1_1 (1) = w_{1, 1}$.
Let $\Theta \nearrow \Xi$ be two Young diagrams such that $\Xi / \Theta = (n, k)$.
We assume the theorem is true for $\Lambda = \Theta$ and want to show it holds for $\Lambda = \Xi$.
For all $(i, j)$ with $i - j \neq n - k$, on the one hand, the $\Theta$-coordinate and the $\Xi$-coordinate of $(i, j)$ coincide, and on the other hand,
\begin{align*}
T_\Xi A(i, j) = \rho_{n, k} T_\Theta A(i, j) = T_\Theta A (i, j)
\end{align*}
as $\rho_{n, k}$, by its definition, only alters the entries along the diagonal $i - j = n - k$.
Therefore it suffices to show
\begin{align*}
(\rho_{n, k} T_\Theta A) (n - \ell + 1, k - \ell + 1) = \lambda^k_\ell(n), \qquad \ell = 1 : n \wedge k.
\end{align*}
Once again we use an induction argument.
Denote for $\ell = 0 : n \wedge k$
\begin{align*}
t^\ell = l_{n - \ell + 1, k - \ell + 1} \circ l_{n - \ell, k - \ell} \circ \dots \circ l_{n - 1, k - 1} \circ l_{n, k} T_\Theta A.
\end{align*}
Then $t^0 = T_\Theta A$ and $t^{n \wedge k} = T_\Xi A$.
It suffices to show that for $\ell = 1 : n \wedge k - 1$
\begin{align*}
t^\ell_{n - i + 1, k - i + 1} =
\begin{cases}
\lambda^k_i(n) & 1 \le i \le \ell \\
a^k_{\ell + 1} (n) & i = \ell + 1 \\
\lambda^{k - 1}_{i - 1} (n - 1) & \ell + 2 \le i \le n \wedge k
\end{cases},
\end{align*}
and for $\ell = n \wedge k$, $t^\ell_{n - i + 1, k - i + 1} = \lambda^k_i(n)$ for all $i = 1 : n \wedge k$.
We consider the bulk case, that is when $n, k > 1$, as the boundary cases are similar and much easier.
For $\ell = 1$, when $l_{n, k}$ acts on $t^0$, by the Noumi-Yamada description it alters the submatrix (note that $w_{n, k} = a^k_1(n)$)
\begin{align*}
\begin{pmatrix}
t^0_{n - 1, k - 1} & & t^0_{n - 1, k} \\ t^0_{n, k - 1} & \underset{\infty}{\text{---}} & t^0_{n, k}
\end{pmatrix}
=
\begin{pmatrix}
\lambda^{k - 1}_1 (n - 1) & \lambda^{k}_1 (n - 1) \\ \lambda^{k - 1}_1 (n) & w_{n, k}
\end{pmatrix}
\end{align*}
into
\begin{align*}
\begin{pmatrix}
a^k_2(n) & \lambda^k_1 (n - 1) \\ \lambda^{k - 1}_1 (n) & \lambda^k_1(n)
\end{pmatrix}
\end{align*}
For $1 < \ell < n \wedge k$, given the induction assumption, $l_{n - \ell + 1, k - \ell + 1}$ acts on $t^{\ell - 1}$ by changing
\begin{align*}
\vmat{t^{\ell - 1}_{n - \ell, k - \ell}}{t^{\ell - 1}_{n - \ell, k - \ell + 1}}{t^{\ell - 1}_{n - \ell + 1, k - \ell}}{t^{\ell - 1}_{n - \ell + 1, k - \ell + 1}}{e}
= \vmat{\lambda^{k - 1}_\ell (n - 1)}{\lambda^k_\ell (n - 1)}{\lambda^{k - 1}_\ell (n)}{a^k_\ell(n)}{\lambda^{k - 1}_{\ell - 1} (n - 1) - \lambda^{k - 1}_\ell (n)}
\end{align*}
into
\begin{align*}
\begin{pmatrix}
a^k_{\ell + 1}(n) & \lambda^k_\ell (n - 1) \\ \lambda^{k - 1}_\ell(n) & \lambda^k_\ell (n)
\end{pmatrix}
\end{align*}
and that $l'_{n - \ell, k - \ell + 1}$ transforms the submatrix
\begin{align*}
\begin{pmatrix}
t^\ell_{n - \ell - 1, k - \ell} & t^{\ell}_{n - \ell - 1, k - \ell + 1} \\ t^\ell_{n - \ell, k - \ell} & t^\ell_{n - \ell, k - \ell + 1}
\end{pmatrix}
=
\begin{pmatrix}
\lambda^{k}_{\ell + 1} (n - 1) & \lambda^{k + 1}_{\ell + 1} (n - 1) \\ \lambda^{k}_{\ell + 1} (n) & \lambda^{k}_{\ell} (n - 1)
\end{pmatrix}
\end{align*}
into
\begin{align*}
\vmat{\lambda^{k}_{\ell + 1}(n - 1)}{\lambda^{k + 1}_{\ell + 1} (n - 1)}{\lambda^{k}_{\ell + 1} (n)}{\lambda^{k}_{\ell} (n - 1)}{\lambda^{k}_{\ell}(n - 1) - \lambda^{k}_{\ell + 1} (n)}
\end{align*}
which stores the correct argument for a possible future operation $\rho_{n, k + 1}$.
For $\ell = n \wedge k$, say $n > k$, then by the induction assumption and the definition of the local moves at the left boundary \eqref{eq:lmb2}, $l_{n - k + 1, 1}$ acts on $t^{k - 1}$ by changing
\begin{align*}
\begin{pmatrix}
& & t^{k - 1}_{n - k, 1} \\ & \underset{-}{\text{---}} & t^{k - 1}_{n - k + 1, 1}
\end{pmatrix}
=
\begin{pmatrix}
& & \lambda^{k}_{k}(n - 1) \\ & \underset{-}{\text{---}} & a^k_k(n)
\end{pmatrix}
\end{align*}
into
\begin{align*}
\begin{pmatrix}
& \lambda^k_k(n - 1) \\ & \lambda^k_k(n)
\end{pmatrix}.
\end{align*}
This is the boundary case \eqref{eq:leftboundary} of the Noumi-Yamada description.
Similarly when $n < k$ and $n = k$, the upper boundary and upper-left boundary cases \eqref{eq:lmb1}\eqref{eq:lmb3} correspond to Lemma \ref{l:upperboundary}.
\end{proof}
We also note a $q$-analogue of the map $b_{i,j}$ in \cite[(3.5)]{oconnell-seppalainen-zygouras14} (or the octahedron recurrence as in \cite{hopkins14}).
Applying $\rho_{n, k}$ to a tri-diagonal strip $(i, j)_{n - k - 1 \le i - j \le n - k + 1}$, in the bulk, that is, when $i - j = n - k$, $i, j > 1$ and $i < n$, we have
\begin{align}
t_{i, j} &:= t_{i - 1, j} + t_{i, j - 1} - t_{i - 1, j - 1} + q\text{Hyp}(t_{i + 1, j} - t_{i, j}, t_{i + 1, j + 1} - t_{i + 1, j}, t_{i, j + 1} - t_{i, j}) \notag \\
&\qquad\qquad- q\text{Hyp}(t_{i, j - 1} - t_{i - 1, j - 1}, t_{i, j} - t_{i, j - 1}, t_{i - 1, j} - t_{i - 1, j - 1}), \qquad i < n - 1 \label{eq:t1}\\
t_{i, j} &:= t_{i - 1, j} + t_{i, j - 1} - t_{i - 1, j - 1} + q\text{Hyp}(t_{i + 1, j} - t_{i, j}, \infty, t_{i, j + 1} - t_{i, j}) \notag\\
&\qquad\qquad- q\text{Hyp}(t_{i, j - 1} - t_{i - 1, j - 1}, t_{i, j} - t_{i, j - 1}, t_{i - 1, j} - t_{i - 1, j - 1}), \qquad i = n - 1 \label{eq:t2}
\end{align}
where all the $q$-hypergeometric random variables with distinct parameters are independent.
\subsection{The push-forward measure of the $q$-local moves}
In this section we prove Theorem \ref{t:lmpush}.
Before starting the proof, we show some illustrations to help with the readability.
Here is an illustration of the measure $\mu_{q, \Lambda}$ for $\Lambda = (5, 5, 3, 3, 1)$.
Some of the $t$-entries have been labelled.
We focus on the products of $q$-Pochhammers:
we use solid (resp. dashed) lines to indicate endpoints whose differences contribute to the $q$-Pochhammers in the denominator (resp. numerator).
For example, the special solid line on the top left corner connecting $0$ and $t_{11}$ corresponds to $(t_{11} - 0)_q^{-1}$ in the measure.
\begin{center}
\includegraphics{figure-measure.pdf}
\end{center}
The proof concerns the transformation by $\rho_{n, k}$ of the measure $\mu_{q, \Theta}$ into $\mu_{q, \Lambda}$, where $\Lambda / \Theta = (n, k)$.
Without loss of generality assume $n > k$.
Intuitively speaking, after cancellations of $q$-Pochhammers that are not affected during this transformation, it suffices to show that $\rho_{n, k}$ has the following illustrated effect:
\begin{center}
\includegraphics[scale=.7]{figure-rhonk.pdf}
\end{center}
where the $a_i$'s, $b_i$'s, $c_i$'s and $d_i$'s are aliases of the $t_{\ell, j}$'s on the tridiagonal area $\{(\ell, j): n - k - 1 \le \ell - j \le n - k + 1\}$; the precise definitions can be found in the proof.
Now let us turn to the complete proof.
It may be compared to that of \cite[Theorem 3.2]{oconnell-seppalainen-zygouras14}.
\begin{proof}[Proof of Theorem \ref{t:lmpush}]
We want to show
\begin{align*}
\sum_{(w_{i, j})_{(i, j) \in \Lambda}} \left(\prod_{(i, j) \in \Lambda} f_{q\text{Geom}(\hat\alpha_i \alpha_j)} (w_{i, j})\right) \mathbb P(T_\Lambda A = t) = \mu_{q, \Lambda} (t)
\end{align*}
First we can remove the matching product $\prod (\hat\alpha_i \alpha_j; q)_\infty$ from both sides.
By Lemma \ref{l:wt} and Theorem \ref{t:qlocalmoves} we have that almost surely
\begin{align*}
\sum_{i = 1 : \Lambda'_j} w_{i, j} =
\begin{cases}
(T_\Lambda A)_{\Lambda'_1, 1} & j = 1 \\
\sum_{k = 1 : j \wedge \Lambda'_j} (T_\Lambda A)_{\Lambda'_j - k + 1, j - k + 1} - \sum_{k = 1 : (j - 1) \wedge \Lambda'_j} (T_\Lambda A)_{\Lambda'_j - k + 1, j - k} & j > 1
\end{cases}\\
\sum_{j = 1 : \Lambda_i} w_{i, j} =
\begin{cases}
(T_\Lambda A)_{1, \Lambda_1} & i = 1 \\
\sum_{k = 1 : i \wedge \Lambda_i} (T_\Lambda A)_{i - k + 1, \Lambda_i - k + 1} - \sum_{k = 1 : (i - 1) \wedge \Lambda_i} (T_\Lambda A)_{i - k, \Lambda_i - k + 1} & i > 1
\end{cases}
\end{align*}
Therefore the powers of $\hat \alpha_i$ and $\alpha_j$ on both sides match; we can also remove them from the identity, leaving it sufficient to show
\begin{align}
\sum_{(w_{i, j})_{(i, j) \in \Lambda}} \left(\prod_{(i, j) \in \Lambda} (w_{i, j})_q^{-1}\right) \mathbb P(T_\Lambda A = t) = M_\Lambda (t) \label{eq:m1}
\end{align}
where
\begin{align*}
M_\Lambda (t) = (t_{1 1})_q^{-1} {\prod_{(i, j) \in \Lambda: (i - 1, j - 1) \in \Lambda} (t_{i j} - t_{i - 1, j - 1})_q \over \prod_{(i, j) \in \Lambda: (i, j - 1) \in \Lambda} (t_{i j} - t_{i, j - 1})_q \prod_{(i, j) \in \Lambda: (i - 1, j) \in \Lambda} (t_{i j} - t_{i - 1, j})_q}.
\end{align*}
Once again we show this by an induction argument.
When $\Lambda = (1)$ has just one coordinate, the left hand side of \eqref{eq:m1} becomes
\begin{align*}
\sum_{w_{11}} (w_{11})_q^{-1} \mathbb P(l_{11} A = t) = (t_{11})_q^{-1} = M_\Lambda(t)
\end{align*}
which is the right hand side of \eqref{eq:m1}.
Let $\Theta$ be a Young diagram such that $\Theta \nearrow \Lambda$.
Let $(n, k) = \Lambda / \Theta$.
Since $T_\Lambda = \rho_{n, k} \circ T_\Theta$, we can rewrite the left hand side of \eqref{eq:m1} as
\begin{align*}
\sum_{w_{n, k}} \sum_{(w_{i, j})_{(i, j) \in \Theta}} (w_{n, k})_q^{-1} \prod_{(i, j) \in \Theta} (w_{i, j})_q^{-1} \sum_{t': t'_{n, k} = w_{n, k}} \mathbb P(T_\Theta A = t') \mathbb P(\rho_{n, k} t' = t) \\
= \sum_{t'} (t'_{n, k})_q^{-1} M_\Theta (t') \mathbb P(\rho_{n, k} t' = t).
\end{align*}
where the last identity comes from the induction assumption.
So it suffices to show
\begin{align}
\sum_{t'} (t'_{n, k})_q^{-1} {M_\Theta (t') \over M_\Lambda (t)} \mathbb P(\rho_{n, k} t' = t) = 1. \label{eq:m4}
\end{align}
We assume $n > k$, as what follows can be adapted to the case $n < k$ due to the symmetry.
The proof when $n = k$ is similar with very minor changes.
For example, the right hand side of \eqref{eq:m4.5} will be the same except $(a_{k - 1} - b_k)_q$ and $(d_k - b_k)_q^{-1}$ are replaced by $(a_{k - 1})_q$ and $(d_k)_q^{-1}$ respectively due to the involvement of $(t'_{11})_q^{-1}$ and $(t_{11})_q^{-1}$.
We return to the proof where we assume $n > k$.
When $k = 1$,
\begin{align*}
(t'_{n, k})_q^{-1} {M_\Theta (t') \over M_\Lambda (t)} = (t'_{n, k})_q^{-1} (t_{n, k} - t_{n - 1, k})_q
\end{align*}
and due to the $q$-local moves on the boundary
\begin{align*}
\mathbb P(\rho_{n, k} t' = t) = \mathbb I_{t_{n, k} = t'_{n, k} + t'_{n - 1, k}, t_{i, j} = t'_{i, j} \forall (i, j) \neq (n, k)}
\end{align*}
and we arrive at \eqref{eq:m4}.
When $k > 1$, since $\rho_{n, k}$ only changes the coordinates $B = \{(n, k), (n - 1, k - 1), \dots, (n - k + 1, 1)\}$ of the matrix, the sum in \eqref{eq:m4} is over $(t'_{n - i + 1, k - i + 1})_{i = 1 : k}$.
By the structure of the products in $M_\Theta$ and $M_\Lambda$, we see that all the products outside of the diagonal strip near $i - j = n - k$ are cancelled out in $M_\Theta (t') / M_\Lambda (t)$.
More precisely, when $k > 1$, by denoting
\begin{align*}
a_i &= t_{n - i, k - i}, \qquad i = 0 : k - 1 \\
b_i &= t_{n - i, k - i + 1} = t'_{n - i, k - i + 1}, \qquad i = 1 : k \\
c_i &= t_{n - i + 1, k - i} = t'_{n - i + 1, k - i}, \qquad i = 1 : k - 1 \\
d_i &= t'_{n - i + 1, k - i + 1}, \qquad i = 1 : k
\end{align*}
we have
\begin{align}
(t'_{n, k})_q^{-1}{M_\Theta (t') \over M_\Lambda(t)} = (d_1)_q^{-1} (b_1 - d_2)_q^{-1} (c_1 - d_2)_q^{-1} (a_{k - 1} - b_k)_q (d_k - b_k)_q^{-1} {\prod_{i = 2 : k - 1} h(d_{i + 1}, b_i, c_i, d_i) \over \prod_{i = 1 : k - 1} h(a_i, b_i, c_i, a_{i - 1})}
\label{eq:m4.5}
\end{align}
where
\begin{align*}
h(a', b', c', d') = (d' - a')_q (b' - a')_q^{-1} (c' - a')_q^{-1} (d' - c')_q^{-1} (d' - b')_q^{-1}
\end{align*}
It remains to calculate $\mathbb P(\rho_{n, k} (t') = t)$.
This is the probability of mapping $(d_1, d_2, \dots, d_k)$ to $(a_0, a_1, \dots, a_{k - 1})$.
By the definition of the $q$-local moves (also see \eqref{eq:t1} and \eqref{eq:t2}) we have
\begin{align}
a_0 &= b_1 + c_1 - d_2 + d_1 - X_1 \label{eq:m5}\\
a_i &= b_{i + 1} + c_{i + 1} - d_{i + 2} + X_i - X_{i + 1}, \qquad i = 1 : k - 2 \label{eq:m6}\\
a_{k - 1} &= b_k + X_{k - 1} \label{eq:m7}
\end{align}
where
\begin{align*}
X_1 &\sim q\text{Hyp}(c_1 - d_2, \infty, b_1 - d_2) \\
X_i &\sim q\text{Hyp}(c_{i} - d_{i + 1}, d_{i} - c_{i} ,b_{i} - d_{i + 1}), \qquad i = 2 : k - 1
\end{align*}
By denoting $d_1 = X_0$, we can pin down the $X_i$'s in terms of the other variables.
\begin{align*}
X_i &= \sum_{j = i : k - 1} a_j - \sum_{j = i + 1 : k} b_j - \sum_{j = i + 1 : k - 1} c_j + \sum_{j = i + 2 : k} d_j, \qquad i = 0 : k - 1
\end{align*}
Therefore
\begin{align*}
\mathbb P(&\rho_{n, k} (t') = t) \\
&= f_{q\text{Hyp}(c_1 - d_2, \infty, b_1 - d_2)} (X_1) \prod_{i = 2 : k - 1} f_{q\text{Hyp}(c_{i} - d_{i + 1}, d_{i} - c_{i} ,b_{i} - d_{i + 1})} (X_i)
\end{align*}
We have the pmf's
\begin{align*}
&f_{q\text{Hyp}(c_1 - d_2, \infty, b_1 - d_2)} (X_1) = q^{(c_1 - d_2 - X_1) (b_1 - d_2 - X_1)} {\boxed{(c_1 - d_2)_q (b_1 - d_2)_q} \over (X_1)_q (c_1 - d_2 - X_1)_q (b_1 - d_2 - X_1)_q}\\
&f_{q\text{Hyp}(c_{i} - d_{i + 1}, d_{i} - c_{i} ,b_{i} - d_{i + 1})} (X_i) = q^{(c_{i} - d_{i + 1} - X_i) (b_{i} - d_{i + 1} - X_i)} \\
&\;\;\;\;\;\;\;\;\;\;\;\;\;\; \times \boxed{(c_{i} - d_{i + 1})_q (d_{i} - c_{i})_q (b_{i} - d_{i + 1})_q (d_{i} - b_{i})_q (d_{i} - d_{i + 1})_q^{-1}} \\
&\;\;\;\;\;\;\;\;\;\;\;\;\;\; \times (X_i)_q^{-1} (c_{i} - d_{i + 1} - X_i)_q^{-1} (b_{i} - d_{i + 1} - X_i)_q^{-1} (d_i + d_{i + 1} - b_i - c_i + X_i)_q^{-1}\\
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad i = 2 : k - 1
\end{align*}
The framed terms cancel out the $(c_1 - d_2)_q^{-1} (b_1 - d_2)_q^{-1} \prod_i h(d_{i + 1}, b_i, c_i, d_i)$ term in \eqref{eq:m4.5}.
By shifting the terms not involving $d_{2 : k}$ or $X_{0 : k - 1}$ from the left hand side of \eqref{eq:m4} to the right hand side we are left with showing
\begin{equation}
\begin{aligned}
\sum_{d_{2 : k}} &\prod_{i = 1 : k - 1} q^{(c_i - d_{i + 1} - X_i) (b_i - d_{i + 1} - X_i)} \prod_{i = 0 : k - 1} (X_i)_q^{-1} \\
&\times \prod_{i = 1 : k - 1} \left((c_i - d_{i + 1} - X_i)_q^{-1} (b_i - d_{i + 1} - X_i)_q^{-1}\right) \\
&\times \prod_{i = 2 : k - 1} (d_i + d_{i + 1} - b_i - c_i + X_i)_q^{-1} \times (d_k - b_k)_q^{-1}\\
&= (a_{k - 1} - b_k)_q^{-1} \prod_{i = 1 : k - 1} h(a_i, b_i, c_i, a_{i - 1})
\end{aligned}
\label{eq:m8}
\end{equation}
By the relation between $X_i$ and $X_{i + 1}$ in \eqref{eq:m5}\eqref{eq:m6} as well as the explicit form of $X_{k - 1}$ in \eqref{eq:m7}, we can write
\begin{align*}
(X_i)_q^{-1} &= (X_{i + 1} + a_i - b_{i + 1} - c_{i + 1} + d_{i + 2})_q^{-1} \qquad i = 0 : k - 2 \\
(d_i + d_{i + 1} - b_i - c_i + X_i)_q^{-1} &= (X_{i - 1} + d_i - a_{i - 1})_q^{-1}, \qquad i = 2 : k - 1\\
(d_k - b_k)_q^{-1} &= (X_{k - 1} + d_k - a_{k - 1})_q^{-1}
\end{align*}
Plugging these back into the left hand side of \eqref{eq:m8}, it becomes
\begin{align*}
(X_{k - 1})_q^{-1} \sum_{d_k} f_k(d_k) \sum_{d_{k - 1}} f_{k - 1}(d_{k - 1}) \sum_{d_{k - 2}} f_{k - 2}(d_{k - 2}) \dots \sum_{d_3} f_3(d_3) \sum_{d_2} f_2(d_2)
\end{align*}
where
\begin{align*}
f_{i } (d_{i }) = &q^{(c_{i - 1} - d_{i } - X_{i - 1}) (b_{i - 1} - d_{i } - X_{i - 1})} (X_{i - 1} + a_{i - 2} - b_{i - 1} - c_{i - 1} + d_{i })_q^{-1} \\
&\times (c_{i - 1} - d_{i} - X_{i - 1})_q^{-1} (b_{i - 1} - d_{i} - X_{i - 1})_q^{-1} (X_{i - 1} + d_{i } - a_{i - 1})_q^{-1}
\end{align*}
Note that $f_i (d_i)$ depends on $X_{i - 1}$, which in turn depends on $d_{i + 1 : k}$.
However, the sums remove the dependence on all the $d$'s.
More precisely, starting from the innermost sum $\sum_{d_2} f_2 (d_2)$,
by applying \eqref{eq:qhyp} with $(m_1, m_2, k, s) := (c_{1} - a_{1}, a_{0} - c_{1}, b_{1} - a_{1}, d_2 + X_{1} - a_{1})$, we have
\begin{align*}
\sum_{d_2} f_2 (d_2) = h(a_1, b_1, c_1, a_0)
\end{align*}
which has no dependency on the $d$'s.
So we can recursively apply \eqref{eq:qhyp} with $(m_1, m_2, k, s) := (c_{i - 1} - a_{i - 1}, a_{i - 2} - c_{i - 1}, b_{i - 1} - a_{i - 1}, d_i + X_{i - 1} - a_{i - 1})$, to obtain
\begin{align*}
\sum_{d_i} f_i (d_i) = h(a_{i - 1}, b_{i - 1}, c_{i - 1}, a_{i - 2}), \qquad i = 2 : k
\end{align*}
This leaves us with only $(a_{k - 1} - b_k)_q^{-1}$ on the right hand side of \eqref{eq:m8}.
Since on the left hand side $X_{k - 1} = a_{k - 1} - b_k$ we are done.
\end{proof}
\subsection{Joint distribution of $q$-polymer and polynuclear growth models}
By applying Theorem \ref{t:qlocalmoves} and Theorem \ref{t:lmpush} we obtain the joint distribution of $q$-polymer, Corollary \ref{c:qpdist}.
When $\Lambda = (p, p - 1, p - 2, \dots, 1)$ is a staircase Young diagram, the $q$-local moves define a $q$-version of the multilayer polynuclear growth model.
Recall that in \cite{johansson03} the PNG model concerns the height function $h^j_m(k)$ at time $m$, position $k$ and level $j$, where time, space and level are all discrete.
The level starts from $0$ onwards, where we call $h^0_m(k)$ the top level height function and abbreviate it as $h_m(k)$.
The height function is $0$ outside of a cone: $h^j_m(k) = 0$ for $|k| \ge m - 2 j$.
The initial condition is $h^j_0(0) = 0$ for all $j$, and the height functions grow as they are fed by the droplets over the time.
We denote the droplet at time $m$ and position $k$ by $d_m(k)$, which is also zero outside of the cone $|k| < m$ or when $k$ and $m$ have the same parity.
Later we will see that these are droplets for the top level, and there will be droplets for the non top levels as well.
Hence it is useful to denote $d^0_m(k) := d_m(k)$ and use $d^j_m(k)$ as notations for droplets at level $j$ in general.
The PNG model evolves as follows:
\begin{enumerate}
\item At time $1$, the droplet at position $0$ forms the height at the same location: $h_1(0) = d_1(0)$.
\item At time $2$, the height expands horizontally by $1$ in both directions ($h_2(-1) = h_2(0) = h_2(1) = h_1(0)$), and the droplets at the new positions ($\pm1$) add to the height function ($h_2(-1) := h_2(-1) + d_2(-1)$, $h_2(1) := h_2(1) + d_2(1)$). So the net effect is:
\begin{align*}
h_2(-1) = h_1(0) + d_2(-1), \qquad h_2(0) = h_1(0), \qquad h_2(1) = h_1(0) + d_2(1).
\end{align*}
\item At time $3$, the peak heights (namely the ones at positions $\pm1$) further expand horizontally by $1$ in both directions, and at positions $-2, -1, 1$ and $2$ the same events as at time $2$ happen:
\begin{align*}
h_3(-2) &= h_2(-1) + d_3(-2), \qquad h_3(2) = h_2(1) + d_3(2),\\
h_3(-1) &= h_2(-1), \qquad h_3(1) = h_2(1).
\end{align*}
However, at position $0$, the expansions from positions $-1$ and $1$ collide, in which case the maximum of the colliding heights remains on the top level, whose sum with the droplet $d_3(0)$ forms the new height, and the minimum becomes the droplet for the first level and forms the height at the first level:
\begin{align*}
h_3(0) = (h_2(-1) \vee h_2(1)) + d_3(0), \qquad h^1_3(0) = d^1_3(0) = h_2(-1) \wedge h_2(1).
\end{align*}
\item At any time, starting from the top level, the height function at each level expands both ways. And at any collision, the sum of the maximum of the colliding heights and the droplet becomes the new height, and the minimum becomes the droplet for the next level at the same position:
\begin{equation}
\begin{aligned}
h^j_m(k) &= (h^j_{m - 1} (k - 1) \vee h^j_{m - 1} (k + 1)) + d^j_m(k)\\
d^{j + 1}_m(k) &= h^j_m(k - 1) \wedge h^j_m(k + 1).
\end{aligned}
\label{eq:png}
\end{equation}
\end{enumerate}
Clearly, given that all droplets are sampled independently, the height functions form a Markov process, because their values at time $n$ only depend on their values at time $n - 1$ and the droplets at time $n$.
It is known that the PNG model obeys the same dynamics as the RSK algorithm acting on a staircase tableau.
More specifically, one may identify the top-level droplets for PNG with the input data for RSK:
\begin{align}
d_m(k) = w_{\lfloor {m - k \over 2} \rfloor, \lceil{m + k \over 2}\rceil } \label{eq:rskpng1}
\end{align}
Let $q = 0$, then by identifying
\begin{align}
h^j_m(k) &= t_{\lfloor {m - k \over 2} \rfloor - j, \lceil{m + k \over 2}\rceil - j} \label{eq:rskpng2}
\end{align}
where the $t_{n\ell}$'s are the output of local moves taking the staircase tableau $(w_{ij})_{i + j \le m + 1}$, one may recover the dynamics of the PNG model \eqref{eq:png} from the dynamics of the local moves.
Using the same correspondence, the gPNG model was defined using the gRSK dynamics, as per \cite{nguyen-zygouras16}.
Similarly we may define a $q$-version of the PNG model using the same correspondence \eqref{eq:rskpng1}\eqref{eq:rskpng2} for $0 < q < 1$.
With the same reasoning, one can say that the $q$PNG height functions are a Markov process.
The dynamics are somewhat more involved than those of the usual PNG model, but follow from a simple rewriting of the $q$RSK algorithm.
Here we show the dynamics of the top level height function:
\begin{align*}
h_m(k) &= h_{m - 1} (k - 1) + h_{m - 1} (k + 1) - h_{m - 1} (k) \\
&\qquad - q\text{Hyp}(h_{m - 1}(k - 1) - h_{m - 1}(k), \infty, h_{m - 1}(k + 1) - h_{m - 1}(k)) + d_m(k).
\end{align*}
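As a consistency check, at $q = 0$ the $q$-hypergeometric random variable above degenerates to the minimum of its first and third arguments, and the formula reduces to
\begin{align*}
h_m(k) = (h_{m - 1}(k - 1) \vee h_{m - 1}(k + 1)) + d_m(k),
\end{align*}
which is the top level of \eqref{eq:png}.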
It can be seen from this formula that, similarly to the usual PNG model, the height function $h_m(k)$ is a function of the heights at the neighbouring positions at the previous time, $h_{m - 1}(k - 1), h_{m - 1}(k), h_{m - 1}(k + 1)$, and the droplet $d_m(k)$.
In \cite{johansson03} the PNG model was used to show that asymptotically the partition functions of DLPP at the same time converge to the Airy process.
Here, by applying the $q$-local moves to the staircase Young diagram and using Theorem \ref{t:lmpush} and Theorem \ref{t:qlocalmoves}, we obtain a $q$-version, namely the joint distribution of partition functions of the $q$-polymer at a fixed time, in Corollary \ref{c:qpng} in Section \ref{s:intro}.
Our results on joint distributions of polymers, Corollaries \ref{c:qpdist} and \ref{c:qpng}, are $q$-versions of Theorems 2.8 and 3.5 in \cite{nguyen-zygouras16} respectively. To obtain anything more, such as a $q$-version of the two-point Laplace transform in Theorem 2.12 of that paper or of the central limit theorem for the one-point partition function in Theorem 1 of \cite{borodin-corwin-remenik13}, a natural question is whether one can obtain a $q$-Whittaker version of Corollary 1.8 (writing the one-point Laplace transform as a Fredholm determinant) in \cite{borodin-corwin-remenik13}, which is the main tool used to show those two results.
\subsection{Measures on the matrix}
In this section we restrict to $\Lambda = [n] \times [m]$. As a straightforward application of Theorem \ref{t:qlocalmoves} one can show the following result, from which Corollary \ref{c:qwmeasure} follows:
\begin{prp}
The distribution of the marginal variable $\lambda = (t_{n, m}, t_{n - 1, m - 1}, \dots, t_{(n - m)^+ + 1, (m - n)^+ + 1})$ of $(t_{ij})_{(i, j) \in \Lambda}$ is the $q$-Whittaker measure
\begin{align*}
\sum_{t_{i j}: i - j \neq n - m, 1 \le i \le n, 1 \le j \le m} \mu_q(t) = \mu_{q\text{-Whittaker}} (\lambda) = P_\lambda(\alpha) Q_\lambda(\hat\alpha) \prod_{i, j} (\hat\alpha_i \alpha_j; q)_\infty
\end{align*}
\end{prp}
Let $L$ be a measure on $\mathbb R^{n \times m}$ defined as follows
\begin{align*}
L(x) &= \exp(- e^{- x_{1 1}}) \prod_{i = 1 : n} \prod_{j = 2 : m} \exp(- e^{x_{i, j - 1} - x_{i j}}) \prod_{i = 2 : n} \prod_{j = 1 : m} \exp(- e^{x_{i - 1, j} - x_{i, j}}) \\
&\;\;\;\;\;\;\;\;\;\;\;\;\times \exp( - \sum \theta_i s_i) \exp(- \sum \hat\theta_i \hat s_i ) \prod_{i = 1 : n} \prod_{j = 1 : m} \Gamma(\hat\theta_i + \theta_j)^{-1}
\end{align*}
where
\begin{align*}
s_1 &= t_{n, 1} \\
s_i &= \sum_{j = 1 : n \wedge i} t_{n - j + 1, i - j + 1} - \sum_{j = 1 : n \wedge (i - 1)} t_{n - j + 1, i - j}, \qquad i = 2 : m\\
\hat s_1 &= t_{1, m} \\
\hat s_i &= \sum_{j = 1 : m \wedge i} t_{i - j + 1, m - j + 1} - \sum_{j = 1 : m \wedge (i - 1)} t_{i - j, m - j + 1}, \qquad i = 2 : n.
\end{align*}
This measure was introduced in \cite{oconnell-seppalainen-zygouras14} as the push-forward measure of local moves acting on a matrix with log-Gamma weights.
The next proposition demonstrates the classical limit from $\mu_q$ to $L$.
\begin{prp}
Let
\begin{align*}
q &= e^{- \epsilon} \\
t_{i j} &= (i + j - 1) m(\epsilon) + \epsilon^{- 1} x_{i j} \\
\hat\alpha_i &= e^{- \epsilon \hat\theta_i} \\
\alpha_j &= e^{- \epsilon \theta_j}
\end{align*}
Then
\begin{align*}
\lim_{\epsilon \downarrow 0} \epsilon^{- m n} \mu_q(t) = L(x).
\end{align*}
\end{prp}
\begin{proof}
This follows by plugging in Items 1 and 2 of Lemma \ref{l:qclassicallimits}.
\end{proof}
\section{Introduction}
Statistical mixtures of quantum states have quantum as well as classical
correlations. Detecting and quantifying nonclassicality (or quantumness) of a
multipartite quantum system is of special interest in quantum optics \cite
{horo09, dodo03,peri94}, quantum information \cite{niel00}, and even in
quantum biology \cite{lamb13,li12}. In particular, detecting and quantifying
quantum entanglement is increasingly important to experiments in quantum
information science \cite{guhn07,eise07,wund09} and it also plays a crucial
role in investigations in quantum statistical physics, e.g., in studying
phase transitions \cite{sahl15,oste02}. State coefficients of a pure quantum
state contain complete information about the correlations amongst the
subsystems. Local unitary invariance constraints allow us to construct
unitary invariant functions of state coefficients to quantify two-way,
three-way, four-way,..., N-way entanglement of an N-qubit state. Global
negativity \cite{zycz98,vida02} or equivalently concurrence \cite{hill97} of
a pure state of two qubits quantifies two-way correlations. Three tangle
\cite{coff00} is an entanglement monotone that quantifies three-way
entanglement of a three-qubit state. Four-way entanglement of a four-qubit
state is quantified by four-tangle \cite{shar13}, which for pure state is a
degree eight function of state coefficients. Three-tangle and four-tangle
for pure quantum states are moduli of specific local special unitary (LSUT)
invariants, that are polynomial functions of state coefficients.
Non-locality properties of an entangled state do not change under unitary
operations acting independently on each of its sub-systems. The idea of
describing entanglement by means of local-unitary invariants has been
explored in refs. \cite{gras98,luqu03,luqu06}. It is shown in ref. \cite
{luqu06} that given a basis of stochastic local operations and classical
communication (SLOCC) covariants of degree $d$ in variables, the scalar
products with respect to auxiliary variables, form a basis of the space of
local unitary (LUT) invariants. Squared modulus $|I|^{2}$ of a SLOCC
invariant is a LUT and LSUT invariant.
Using an intuitive approach coupled to classical theory of invariants it has
been shown in ref. \cite{shar16} that one can construct local unitary
invariants characterizing a pure state of N two-level quantum systems
(qubits) in terms of N-1 qubit invariants. A natural question is, can we
write entanglement of (N-1)-qubit marginals of an N-qubit pure state in
terms of invariants of the N-qubit pure state? In this letter, we focus on
four qubit states and their three-qubit marginals. Entanglement measures for
reduced states are, generally, based on the convex roof of a quantity
defined on pure states. In most of the cases they are not computable as
there are no efficient ways to calculate convex roofs. Although a closed
formula for the three-tangle \cite{coff00} of pure states is known, for
mixed states no such formula is available. Solutions to finding the convex
roof of three-tangle for special families of states are, however, available
in \cite{lohm06,elts08,elts12,elts14}. A method to compute entanglement
measures based on convex roof constructions is also given in ref. \cite
{toth15}. Recently, Osterloh \cite{oste16}, obtained three tangles of nine
classes of four qubit states \cite{vers02}, by finding a decomposition of
three qubit mixed state such that its entanglement lies on minimal
characteristic curve. We outline the formalism to construct three-tangle of
reduced state from relevant three-qubit polynomial invariants of the four
qubit pure state. Local unitary invariance of entanglement is the basic
principle underlying the construction. In the most general case, it involves
finding the roots of a quartic equation. The roots may be found analytically
or numerically. Closed formulae for upper bound on three tangles of nine
classes of four qubit states \cite{vers02}, are obtained and the results
compared with those of ref. \cite{regu14,regu16}. Our results offer tighter
constraints on three-way entanglement of a given qubit with the rest of the
system than those used in ref. \cite{regu14,regu16}.
We start by defining three tangle on pure and mixed states in section II,
outline the formalism in sections III and IV, discuss the upper bounds on
three tangles for nine classes of four-qubit states in section V and optimal
decomposition of a rank two mixed state in section VI. Section VII contains
the concluding remarks.
\section{Definition of Three Tangle}
Consider a three-qubit pure state
\begin{equation}
\left\vert \Psi _{3}\right\rangle
=\sum\limits_{i_{1},i_{2},i_{3}}a_{i_{1}i_{2}i_{3}}\left\vert
i_{1}i_{2}i_{3}\right\rangle ,\quad i_{m}=0,1
\end{equation}
where state coefficients $a_{i_{1}i_{2}i_{3}}$ are complex numbers and $
i_{m}\ $refers to computational basis state of qubit $A_{m}$, $\left(
m=1,2,3\right) $. Let qubit $A_{1}$ be the focus qubit. Using the notation
from ref. \cite{shar16}, we define $D_{\left( A_{3}\right)
_{i_{3}}}^{00}=a_{00i_{3}}a_{11i_{3}}-a_{10i_{3}}a_{01i_{3}},$ ($i_{3}=0,1$)
as determinant of a two-way negativity font and $
D^{00i_{3}}=a_{00i_{3}}a_{11i_{3}+1}-a_{10i_{3}}a_{01i_{3}+1}$, ($i_{3}=0,1$
) is determinant of a three-way negativity font. Three tangle of pure state $
\left\vert \Psi _{3}\right\rangle $ as defined in ref. \cite{coff00} is
equal to the modulus of a polynomial invariant of degree four that is $\tau
_{A_{1}|A_{2}|A_{3}}\left( \left\vert \Psi _{3}\right\rangle \right)
=4\left\vert I_{3,4}\right\vert $, where
\begin{equation}
I_{3,4}=\left( D^{000}+D^{001}\right) ^{2}-4D_{\left( A_{3}\right)
_{0}}^{00}D_{\left( A_{3}\right) _{1}}^{00}. \label{three_invariant}
\end{equation}
The entanglement measure $\tau _{A_{1}|A_{2}|A_{3}}\left( \left\vert \Psi
_{3}\right\rangle \right) $ is extended to a mixed state of three qubits via
convex roof extension, that is
\begin{equation}
\left[ \tau _{A_{1}|A_{2}|A_{3}}\left( \rho _{3}\right) \right] ^{\frac{1}{2}
}=\min_{\left\{ p_{i},\left\vert \phi ^{\left( i\right) }\right\rangle
\right\} }\sum\limits_{i}p_{i}\left[ \tau _{A_{1}|A_{2}|A_{3}}\left(
\left\vert \phi ^{\left( i\right) }\right\rangle \right) \right] ^{\frac{1}{2
}}, \label{mix3tangle}
\end{equation}
where minimization is taken over all possible decompositions $\left\{
p_{i},\left\vert \phi ^{\left( i\right) }\right\rangle \right\} $ of $\rho
_{3}$.
\section{Upper bound on three tangle of a rank two reduced state}
Purification of a rank two three-qubit mixed state is a four-qubit pure
state. In this section, we describe the construction of upper bound on three
tangle of marginal state of a four qubit pure state in terms of three-qubit
and four-qubit unitary invariants of the pure state. In the most general
case, it involves finding the roots of a quartic equation. The roots may be
found analytically or numerically. Analytical form of relevant three-qubit
and four-qubit invariants is to be found in Sec. V of ref. \cite{shar16}.
Upper bounds on three-qubit mixed state entanglement, used in ref. \cite
{regu14,regu16} \ in the context of a monogamy of quantum entanglement, had
been calculated by using an algorithm \cite{rodr14} to construct the
entanglement-minimizing decomposition for $\rho $. Our results offer tighter
constraints on three-way entanglement of a given qubit with the rest of the
system than those used in ref. \cite{regu14,regu16}.
Consider a general four qubit pure state, written as
\begin{equation}
\left\vert \Psi \right\rangle =\sum_{i_{1},i_{2},i_{3}}\left(
a_{i_{1}i_{2}i_{3}0}\left\vert i_{1}i_{2}i_{3}0\right\rangle
+a_{i_{1}i_{2}i_{3}1}\left\vert i_{1}i_{2}i_{3}1\right\rangle \right) ,\quad
\left( i_{m}=0,1\right) , \label{4state}
\end{equation}
where qubits are in the order ($A_{1}$, $A_{2}$, $A_{3}$, $A_{4}$)
corresponding to the state $\left\vert i_{1}i_{2}i_{3}i_{4}\right\rangle $.
Choosing qubit $A_{1}$ as the focus qubit, we identify $D_{\left(
A_{3}\right) _{i_{3}}\left( A_{4}\right)
_{i_{4}}}^{00}=a_{00i_{3}i_{4}}a_{11i_{3}i_{4}}-a_{10i_{3}i_{4}}a_{01i_{3}i_{4}}
$ (two-way), $D_{\left( A_{2}\right) _{i_{2}}\left( A_{4}\right)
_{i_{4}}}^{00}=a_{0i_{2}0i_{4}}a_{1i_{2}1i_{4}}-a_{1i_{2}0i_{4}}a_{0i_{2}1i_{4}}
$ (two-way), $D_{\left( A_{2}\right) _{i_{2}}\left( A_{3}\right)
_{i_{3}}}^{00}=a_{0i_{2}i_{3}0}a_{1i_{2}i_{3}1}-a_{1i_{2}i_{3}0}a_{0i_{2}i_{3}1}
$ (two-way), $D_{\left( A_{4}\right)
_{i_{4}}}^{00i_{3}}=a_{00i_{3}i_{4}}a_{11,i_{3}\oplus
1,i_{4}}-a_{10i_{3}i_{4}}a_{01,i_{3}\oplus 1,i_{4}}$ (three-way), $D_{\left(
A_{3}\right) _{i_{3}}}^{00i_{4}}=a_{00i_{3}i_{4}}a_{11i_{3},i_{4}\oplus
1}-a_{10i_{3}i_{4}}a_{01i_{3},i_{4}\oplus 1}$ (three-way), $D_{\left(
A_{2}\right) _{i_{2}}}^{00i_{4}}=a_{0i_{2}0i_{4}}a_{1i_{2}1i_{4}\oplus
1}-a_{1i_{2}0i_{4}}a_{0i_{2}1i_{4}\oplus 1}$ (three-way), and $
D^{00i_{3}i_{4}}=a_{00i_{3}i_{4}}a_{11,i_{3}\oplus 1,i_{4}\oplus
1}-a_{10i_{3}i_{4}}a_{01,i_{3}\oplus 1,i_{4}\oplus 1}$ (four-way) as the
determinants of negativity fonts. We shall also use the three and four qubit
invariants constructed in section V of ref. \cite{shar16}. Three-qubit
invariants of interest for the triple $A_{1}A_{2}A_{3}$ in state $\left\vert
\Psi \right\rangle $ comprise a set denoted by $\left\{ \left(
I_{3,4}\right) _{A_{4}}^{4-m,m}:m=0\text{ to }4\right\} $. The form of
elements of the set in terms of determinants of negativity fonts is given in
Appendix \ref{A1}. The elements in set $\left\{ \left( I_{3,4}\right)
_{A_{4}}^{4-m,m}:m=0\text{ to }4\right\} $ are invariant with respect to
local unitaries on qubits $A_{1}$, $A_{2}$, and $A_{3}$. Four-qubit
invariant that quantifies the sum of three-way and four-way correlations
\cite{shar16} of triple $A_{1}A_{2}A_{3},$ reads as
\begin{equation}
N_{4,8}^{A_{1}A_{2}A_{3}}=6\left\vert \left( I_{3,4}\right)
_{A_{4}}^{2,2}\right\vert ^{2}+4\left\vert \left( I_{3,4}\right)
_{A_{4}}^{3,1}\right\vert ^{2}+4\left\vert \left( I_{3,4}\right)
_{A_{4}}^{1,3}\right\vert ^{2}+\left\vert \left( I_{3,4}\right)
_{A_{4}}^{4,0}\right\vert ^{2}+\left\vert \left( I_{3,4}\right)
_{A_{4}}^{0,4}\right\vert ^{2},
\end{equation}
while degree eight invariant that detects genuine four-body entanglement of
a four-qubit state is given by
\begin{equation}
I_{4,8}=3\left( \left( I_{3,4}\right) _{A_{4}}^{2,2}\right) ^{2}-4\left(
I_{3,4}\right) _{A_{4}}^{3,1}\left( I_{3,4}\right) _{A_{4}}^{1,3}+\left(
I_{3,4}\right) _{A_{4}}^{4,0}\left( I_{3,4}\right) _{A_{4}}^{0,4}.
\label{i4inv}
\end{equation}
Invariant $I_{4,8}$ is written as a function of $A_{1}A_{2}A_{3}$ invariants
in Eq. (\ref{i4inv}). Being independent of the choice of focus qubit, $
I_{4,8}$ can be written in alternate forms. Four-tangle based on degree
eight invariant is defined \cite{shar13,shar16} as
\begin{equation*}
\tau _{4,8}=16\left\vert 12\left( I_{4,8}\right) \right\vert .
\end{equation*}
The set $\left\{ \left( I_{3,4}\right) _{A_{3}}^{4-m,m}:m=0\text{ to }4\right\} $
for the triple $A_{1}A_{2}A_{4}$ and $\left\{ \left( I_{3,4}\right)
_{A_{2}}^{4-m,m}:m=0\text{ to }4\right\} $ for qubits $A_{1}A_{3}A_{4}$ can
be constructed from two-qubit invariants of properly selected pair of qubits
\cite{shar16}. In the following sections, the subscript $4$ in $\left(
I_{3,4}\right) _{A_{3}}^{4-m,m}$ has been dropped, that is $\left(
I_{3,4}\right) _{A_{3}}^{4-m,m}=\left( I_{3}\right) _{A_{3}}^{4-m,m}$ .
\subsection{Unitary on fourth qubit}
To illustrate the method, we focus on three-qubit reduced state $\rho
=\sum_{i=0}^{1}p_{A_{4}}^{\left( i\right) }\left\vert \phi _{A_{4}}^{\left(
i\right) }\right\rangle \left\langle \phi _{A_{4}}^{\left( i\right)
}\right\vert $, obtained by tracing out qubit $A_{4}$\ from the state $
\left\vert \Psi \right\rangle $. Probability of finding qubit $A_{4}$ in
state $\left\vert i\right\rangle $ is denoted by $p_{A_{4}}^{\left( i\right)
}$. A unitary $U(x)=\frac{1}{\sqrt{1+\left\vert x\right\vert ^{2}}}\left[
\begin{array}{cc}
1 & -x^{\ast } \\
x & 1
\end{array}
\right] $, on qubit $A_{4}$ of state $\left\vert \Psi \right\rangle $
results in an $x$ dependent state
\begin{equation}
\left\vert \Psi \left( x\right) \right\rangle =\sum_{i_{4}=0}^{1}\left\vert
\Phi _{A_{4}}^{\left( i_{4}\right) }(x)\right\rangle \left\vert
i_{4}\right\rangle ,
\end{equation}
where $\left\vert \Phi _{A_{4}}^{\left( i_{4}\right) }(x)\right\rangle
=\sum_{i_{1},i_{2},i_{3}}a_{i_{1}i_{2}i_{3}i_{4}}(x)\left\vert
i_{1}i_{2}i_{3}\right\rangle $ is a subnormalized state. Reduced state
obtained by tracing out qubit $A_{4}$\ reads as
\begin{equation}
\rho (x)=\sum_{i=0}^{1}p_{A_{4}}^{\left( i\right) }(x)\left\vert \phi
_{A_{4}}^{\left( i\right) }(x)\right\rangle \left\langle \phi
_{A_{4}}^{\left( i\right) }(x)\right\vert ,
\end{equation}
where $\left\vert \phi _{A_{4}}^{\left( i_{4}\right) }(x)\right\rangle =
\frac{\left\vert \Phi _{A_{4}}^{\left( i_{4}\right) }(x)\right\rangle }{
\sqrt{p_{A_{4}}^{\left( i\right) }(x)}}$, is a normalized state, and $x$
dependent probabilities are defined as
\begin{equation}
p_{A_{4}}^{\left( 0\right) }(x)=\frac{p_{A_{4}}^{\left( 0\right)
}+\left\vert x\right\vert ^{2}p_{A_{4}}^{\left( 1\right) }}{1+\left\vert
x\right\vert ^{2}},\hspace{0.3in}p_{A_{4}}^{\left( 1\right) }(x)=\frac{
p_{A_{4}}^{\left( 1\right) }+\left\vert x\right\vert ^{2}p_{A_{4}}^{\left(
0\right) }}{1+\left\vert x\right\vert ^{2}}.
\end{equation}
One can verify that the $x$ dependent three-tangle
\begin{equation}
\tau _{A_{1}|A_{2}|A_{3}}\left( \left\vert \phi _{A_{4}}^{\left( 0\right)
}(x)\right\rangle \right) =\frac{4}{\left( p_{A_{4}}^{\left( 0\right)
}(x)\right) ^{2}}\left\vert \left( I_{3}\right) _{A_{4}}^{4,0}\left(
x\right) \right\vert ,
\end{equation}
and
\begin{equation}
\tau _{A_{1}|A_{2}|A_{3}}\left( \left\vert \phi _{A_{4}}^{\left( 1\right)
}(x)\right\rangle \right) =\frac{4}{\left( p_{A_{4}}^{\left( 1\right)
}(x)\right) ^{2}}\left\vert \left( I_{3}\right) _{A_{4}}^{0,4}\left(
x\right) \right\vert .
\end{equation}
Using the definition of three tangle for mixed states (Eq. (\ref{mix3tangle}
)), we can write
\begin{equation}
\left[ \tau _{A_{1}|A_{2}|A_{3}}\left( \rho _{3}\right) \right] ^{\frac{1}{2}
}\leq 2\min_{\left\{ x\right\} }\left( \left\vert \left( I_{3}\right)
_{A_{4}}^{4,0}\left( x\right) \right\vert ^{\frac{1}{2}}+\left\vert \left(
I_{3}\right) _{A_{4}}^{0,4}\left( x\right) \right\vert ^{\frac{1}{2}}\right)
. \label{ineq}
\end{equation}
Three qubit invariants $\left( I_{3}\right) _{A_{4}}^{4,0}\left( x\right) $
and $\left( I_{3}\right) _{A_{4}}^{0,4}(x)$ are related to elements of the
set $\{\left( I_{3}\right) _{A_{4}}^{4-m,m}:m=0,4$\} through
\begin{eqnarray}
\left( I_{3}\right) _{A_{4}}^{4,0}(x) &=&\frac{1}{\left( 1+\left\vert
x\right\vert ^{2}\right) ^{2}}\left[ \left( I_{3}\right)
_{A_{4}}^{4,0}-4x^{\ast }\left( I_{3}\right) _{A_{4}}^{3,1}\right. \notag
\\
&&\left. +6\left( x^{\ast }\right) ^{2}\left( I_{3}\right) ^{2,2}-4\left(
x^{\ast }\right) ^{3}\left( I_{3}\right) _{A_{4}}^{1,3}+\left( x^{\ast
}\right) ^{4}\left( I_{3}\right) _{A_{4}}^{0,4}\right] , \label{i40}
\end{eqnarray}
and
\begin{eqnarray}
\left( I_{3}\right) _{A_{4}}^{0,4}(x) &=&\frac{1}{\left( 1+\left\vert
x\right\vert ^{2}\right) ^{2}}\left[ \left( I_{3}\right)
_{A_{4}}^{0,4}+4x\left( I_{3}\right) _{A_{4}}^{1,3}\right. \notag \\
&&\left. +6x^{2}\left( I_{3}\right) _{A_{4}}^{2,2}+4x^{3}\left( I_{3}\right)
_{A_{4}}^{3,1}+\left( I_{3}\right) _{A_{4}}^{4,0}x^{4}\right] . \label{i04}
\end{eqnarray}
To obtain an upper bound on three tangle of mixed state, we find the value
of complex parameter $x_{1}$ such that $\left( I_{3}\right)
_{A_{4}}^{4,0}\left( x_{1}\right) =0$, and $x_{2}$ such that $\left(
I_{3}\right) _{A_{4}}^{0,4}\left( x_{2}\right) =0$. In the most general
case, that involves solving a quartic equation in the variable $x$ (obtained
from Eq. (\ref{i40}) or (\ref{i04})). When the state coefficients are
known, the resulting quartic may be solved numerically. However, for the
representatives of nine classes of four-qubit entanglement \cite{vers02}
analytic solutions are easily found. By definition the three tangle must
satisfy Eq. (\ref{ineq}); as such, three tangle satisfies the inequality
\begin{equation}
\left[ \tau _{A_{1}|A_{2}|A_{3}}\left( \rho _{3}\right) \right] \leq 4\min
\left( \left\vert \left( I_{3}\right) _{A_{4}}^{0,4}\left( x_{1}\right)
\right\vert ,\left\vert \left( I_{3}\right) _{A_{4}}^{4,0}\left(
x_{2}\right) \right\vert \right)
\end{equation}
giving us an upper bound on three tangle of the mixed state.
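When an analytic solution is inconvenient, the bound reduces to numerical root-finding for the two quartics. The following minimal sketch (our illustration, not from the original text; it assumes generic invariants so that the quartics are nondegenerate) computes the bound with \texttt{numpy}:
\begin{verbatim}
import numpy as np

def three_tangle_upper_bound(I40, I31, I22, I13, I04):
    # quartic in y = conj(x) from the bracket of Eq. (i40):
    #   I04 y^4 - 4 I13 y^3 + 6 I22 y^2 - 4 I31 y + I40 = 0
    roots_y = np.roots([I04, -4*I13, 6*I22, -4*I31, I40])
    # quartic in x from the bracket of Eq. (i04):
    #   I40 x^4 + 4 I31 x^3 + 6 I22 x^2 + 4 I13 x + I04 = 0
    roots_x = np.roots([I40, 4*I31, 6*I22, 4*I13, I04])

    def I04_of(x):   # (I_3)^{0,4}(x), Eq. (i04)
        return (I04 + 4*x*I13 + 6*x**2*I22 + 4*x**3*I31
                + x**4*I40) / (1 + abs(x)**2)**2

    def I40_of(x):   # (I_3)^{4,0}(x), Eq. (i40)
        y = np.conj(x)
        return (I40 - 4*y*I31 + 6*y**2*I22 - 4*y**3*I13
                + y**4*I04) / (1 + abs(x)**2)**2

    b1 = min(4*abs(I04_of(np.conj(y))) for y in roots_y)   # x_1 = conj(y)
    b2 = min(4*abs(I40_of(x)) for x in roots_x)            # x_2 = x
    return min(b1, b2)
\end{verbatim}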
\subsection{Unitary on three qubit state}
If normalized three-qubit states $\left\vert \phi _{A_{4}}^{\left( 0\right)
}\right\rangle $ and $\left\vert \phi _{A_{4}}^{\left( 1\right)
}\right\rangle $ are orthogonal to each other then a unitary on three qubit
state such that
\begin{equation}
\left\vert \phi _{A_{4}}^{\left( 0\right) }(y)\right\rangle =\frac{1}{\sqrt{
1+\left\vert y\right\vert ^{2}}}\left( \left\vert \phi _{A_{4}}^{\left(
0\right) }\right\rangle +y\left\vert \phi _{A_{4}}^{\left( 1\right)
}\right\rangle \right) ,
\end{equation}
and
\begin{equation}
\left\vert \phi _{A_{4}}^{\left( 1\right) }(y)\right\rangle =\frac{1}{\sqrt{
1+\left\vert y\right\vert ^{2}}}\left( \left\vert \phi _{A_{4}}^{\left(
1\right) }\right\rangle -y^{\ast }\left\vert \phi _{A_{4}}^{\left( 0\right)
}\right\rangle \right) ,
\end{equation}
results in a $y$ dependent four-qubit state
\begin{equation}
\left\vert \Psi \left( y\right) \right\rangle =\sum_{i_{4}=0}^{1}\sqrt{
p_{A_{4}}^{\left( i_{4}\right) }}\left\vert \phi _{A_{4}}^{\left(
i_{4}\right) }(y)\right\rangle \left\vert i_{4}\right\rangle ,
\end{equation}
such that three tangle may be defined as
\begin{equation}
\left[ \tau _{A_{1}|A_{2}|A_{3}}\left( \rho \right) \right] ^{\frac{1}{2}
}=2\min_{\left\{ y\right\} }\left( p_{A_{4}}^{\left( 0\right) }\left\vert
\left( I_{3}\right) _{A_{4}}^{4,0}\left( y\right) \right\vert ^{\frac{1}{2}
}+p_{A_{4}}^{\left( 1\right) }\left\vert \left( I_{3}\right)
_{A_{4}}^{0,4}\left( y\right) \right\vert ^{\frac{1}{2}}\right) .
\end{equation}
Recalling that three-qubit invariants, $\left( I_{3}\right) _{A_{4}}^{k-m,m},
$ are degree four functions of state coefficients, in this case
\begin{eqnarray}
\left( I_{3}\right) _{A_{4}}^{4,0}\left( y\right) &=&\frac{1}{\left(
1+\left\vert y\right\vert ^{2}\right) ^{2}}\left( \frac{\left( I_{3}\right)
_{A_{4}}^{4,0}}{\left( p_{A_{4}}^{0}\right) ^{2}}+4y\frac{\left(
I_{3}\right) _{A_{4}}^{3,1}}{\sqrt{\left( p_{A_{4}}^{0}\right)
^{3}p_{A_{4}}^{1}}}\right. \notag \\
&&\left. +6y^{2}\frac{\left( I_{3}\right) _{A_{4}}^{2,2}}{
p_{A_{4}}^{0}p_{A_{4}}^{1}}+4y^{3}\frac{\left( I_{3}\right) _{A_{4}}^{1,3}}{
\sqrt{p_{A_{4}}^{0}\left( p_{A_{4}}^{1}\right) ^{3}}}+y^{4}\frac{\left(
I_{3}\right) _{A_{4}}^{0,4}}{\left( p_{A_{4}}^{1}\right) ^{2}}\right) ,
\end{eqnarray}
and
\begin{eqnarray}
\left( I_{3}\right) _{A_{4}}^{0,4}\left( y\right) &=&\frac{1}{\left(
1+\left\vert y\right\vert ^{2}\right) ^{2}}\left( \frac{\left( I_{3}\right)
_{A_{4}}^{0,4}}{\left( p_{A_{4}}^{1}\right) ^{2}}-4y^{\ast }\frac{\left(
I_{3}\right) _{A_{4}}^{1,3}}{\sqrt{\left( p_{A_{4}}^{1}\right)
^{3}p_{A_{4}}^{0}}}\right. \notag \\
&&\left. +6\left( y^{\ast }\right) ^{2}\frac{\left( I_{3}\right)
_{A_{4}}^{2,2}}{p_{A_{4}}^{0}p_{A_{4}}^{1}}-4\left( y^{\ast }\right) ^{3}
\frac{\left( I_{3}\right) _{A_{4}}^{3,1}}{\sqrt{p_{A_{4}}^{1}\left(
p_{A_{4}}^{0}\right) ^{3}}}+\left( y^{\ast }\right) ^{4}\frac{\left(
I_{3}\right) _{A_{4}}^{4,0}}{\left( p_{A_{4}}^{0}\right) ^{2}}\right) .
\end{eqnarray}
We look for $\rho (y_{1})$ with $\left( I_{3}\right) _{A_{4}}^{4,0}\left(
y_{1}\right) =0$, and $\rho (y_{2})$ such that $\left( I_{3}\right)
_{A_{4}}^{0,4}\left( y_{2}\right) =0$. That gives us an upper bound on $
\left[ \tau _{A_{1}|A_{2}|A_{3}}\left( \rho \right) \right] ^{\frac{1}{2}}$
that is
\begin{equation}
\left[ \tau _{A_{1}|A_{2}|A_{3}}\left( \rho \right) \right] ^{\frac{1}{2}
}\leq \min \left( 2p_{A_{4}}^{0}\left\vert \left( I_{3}\right)
_{A_{4}}^{0,4}\left( y_{1}\right) \right\vert ^{\frac{1}{2}}\text{, }
2p_{A_{4}}^{1}\left\vert \left( I_{3}\right) _{A_{4}}^{4,0}\left(
y_{2}\right) \right\vert ^{\frac{1}{2}}\right) . \label{tauup2}
\end{equation}
If for a given state $p_{A_{4}}^{\left( 0\right) }=p_{A_{4}}^{\left(
1\right) }$, then the result obtained by a unitary on fourth qubit coincides
with that obtained by a unitary on the three-qubit pure states of the
decomposition of the mixed state. When $p_{A_{4}}^{\left( 0\right) }\neq
p_{A_{4}}^{\left( 1\right) }$, then minima found by unitary on fourth qubit
and those calculated by unitary on three qubit state must be compared to
find the correct bound on three-tangle for the mixed state. Our results
complement the upper bounds of ref. \cite{oste16} which possibly correspond
to those found by applying a unitary on a three qubit marginal state (Eq.
\ref{tauup2}).
\section{Three tangle and three-qubit invariants of six groups of states}
First of all we notice that since the difference $16\left(
N_{4,8}^{A_{i}A_{j}A_{k}}-2\left\vert I_{4,8}\right\vert \right) $ is a
measure of three-way correlations amongst qubits $A_{i}$, $A_{j}$, and $A_{k}
$ in pure state, the three tangle must satisfy the condition $\tau
_{A_{i}|A_{j}|A_{k}}\left( \rho \right) \leq 4\sqrt{
N_{4,8}^{A_{i}A_{j}A_{k}}-2\left\vert I_{4,8}\right\vert }$. Evaluation of
three-qubit invariants $\left\{ \left( I_{3}\right) _{A_{q}}^{4-m,m}:m=0
\text{ to }4\right\} $, where $A_{q}$ refers to the qubit which is traced
out, shows that three-qubit marginals of states representing nine classes of
four qubits belong to six groups. We use a unitary on the fourth qubit to express
upper bound on three tangle in terms of three-qubit invariants for the
following cases of interest:
(i) For a given triple of qubits $A_{i}A_{j}A_{k}$, $
N_{4,8}^{A_{i}A_{j}A_{k}}=2\left\vert I_{4,8}\right\vert $, therefore three
tangle, $\tau _{A_{i}|A_{j}|A_{k}}\left( \rho \right) =0$.
(ii) Only $\left( I_{3}\right) _{A_{q}}^{4,0}$ is non-zero, therefore, $
\left\vert I_{4,8}\right\vert =0$ and $\tau _{A_{i}|A_{j}|A_{k}}\left( \rho
\right) \leq 4\left\vert \left( I_{3}\right) _{A_{q}}^{4,0}\right\vert $.
(iii) Only $\left( I_{3}\right) _{A_{q}}^{0,4}$ is non-zero, therefore, $
\left\vert I_{4,8}\right\vert =0$ and $\tau _{A_{i}|A_{j}|A_{k}}\left( \rho
\right) \leq 4\left\vert \left( I_{3}\right) _{A_{q}}^{0,4}\right\vert $.
(iv) Non zero three-qubit invariants are $\left( I_{3}\right) _{A_{q}}^{4,0}$
and $\left( I_{3}\right) _{A_{q}}^{2,2},$ therefore
\begin{equation}
\left( I_{3}\right) _{A_{q}}^{4,0}(x)=\frac{1}{\left( 1+\left\vert
x\right\vert ^{2}\right) ^{2}}\left( \left( I_{3}\right)
_{A_{q}}^{4,0}+6\left( x^{\ast }\right) ^{2}\left( I_{3}\right)
_{A_{q}}^{2,2}\right) ,
\end{equation}
and
\begin{equation}
\left( I_{3}\right) _{A_{q}}^{0,4}(x)=\frac{x^{2}}{\left( 1+\left\vert
x\right\vert ^{2}\right) ^{2}}\left( 6\left( I_{3}\right)
_{A_{q}}^{2,2}+x^{2}\left( I_{3}\right) _{A_{q}}^{4,0}\right) .
\end{equation}
In this case three tangle satisfies the condition
\begin{equation}
\tau _{A_{i}|A_{j}|A_{k}}\left( \rho \right) \leq 4\left\vert \left(
I_{3}\right) _{A_{4}}^{4,0}\right\vert \frac{\left\vert \left\vert 6\left(
I_{3}\right) ^{2,2}\right\vert -\left\vert \left( I_{3}\right)
_{A_{4}}^{4,0}\right\vert \right\vert }{\left\vert 6\left( I_{3}\right)
^{2,2}\right\vert +\left\vert \left( I_{3}\right) _{A_{4}}^{4,0}\right\vert }
. \label{tauup4}
\end{equation}
(v) Non zero three-qubit invariants are $\left( I_{3}\right) _{A_{q}}^{0,4}$
and $\left( I_{3}\right) _{A_{q}}^{2,2}$, then to obtain three tangle we use
the relation
\begin{equation}
\left( I_{3}\right) _{A_{q}}^{4,0}(x)=\frac{\left( x^{\ast }\right) ^{2}}{
\left( 1+\left\vert x\right\vert ^{2}\right) ^{2}}\left( 6\left(
I_{3}\right) _{A_{q}}^{2,2}+\left( x^{\ast }\right) ^{2}\left( I_{3}\right)
_{A_{q}}^{0,4}\right) ,
\end{equation}
and
\begin{equation}
\left( I_{3}\right) _{A_{q}}^{0,4}(x)=\frac{1}{\left( 1+\left\vert
x\right\vert ^{2}\right) ^{2}}\left( \left( I_{3}\right)
_{A_{q}}^{0,4}+6x^{2}\left( I_{3}\right) _{A_{q}}^{2,2}\right) ,
\end{equation}
leading to the condition
\begin{equation}
\tau _{A_{i}|A_{j}|A_{k}}\left( \rho \right) \leq 4\left\vert \left(
I_{3}\right) _{A_{4}}^{0,4}\right\vert \frac{\left\vert \left\vert 6\left(
I_{3}\right) ^{2,2}\right\vert -\left\vert \left( I_{3}\right)
_{A_{4}}^{0,4}\right\vert \right\vert }{\left\vert 6\left( I_{3}\right)
^{2,2}\right\vert +\left\vert \left( I_{3}\right) _{A_{4}}^{0,4}\right\vert }
. \label{tauup5}
\end{equation}
(vi) The special case where only $\left( I_{3}\right) _{A_{q}}^{0,4}$ and $
\left( I_{3}\right) _{A_{q}}^{1,3}$ are non-zero such that
\begin{equation}
\left( I_{3}\right) _{A_{q}}^{4,0}(x)=\frac{\left( x^{\ast }\right) ^{3}}{
\left( 1+\left\vert x\right\vert ^{2}\right) ^{2}}\left( x^{\ast }\left(
I_{3}\right) _{A_{q}}^{0,4}-4\left( I_{3}\right) _{A_{q}}^{1,3}\right) ,
\end{equation}
\begin{equation}
\left( I_{3}\right) _{A_{q}}^{0,4}(x)=\frac{1}{\left( 1+\left\vert
x\right\vert ^{2}\right) ^{2}}\left( \left( I_{3}\right)
_{A_{q}}^{0,4}+4x\left( I_{3}\right) _{A_{q}}^{1,3}\right) ,
\end{equation}
therefore three tangle satisfies the inequality
\begin{equation}
\tau _{A_{i}|A_{j}|A_{k}}\left( \rho \right) \leq \frac{4\left\vert \left(
I_{3}\right) _{A_{q}}^{0,4}\right\vert ^{3}}{\left\vert 4\left( I_{3}\right)
_{A_{q}}^{1,3}\right\vert ^{2}+\left\vert \left( I_{3}\right)
_{A_{q}}^{0,4}\right\vert ^{2}}. \label{tauup6}
\end{equation}
\section{Three-tangles for nine classes of four-qubit entanglement}
In this section, we use the results from previous section to write down
upper bounds on three-tangles for representatives of nine classes of four
qubit states. Our results offer tighter constraints on total three-way
entanglement of a given qubit with the rest of the system than those used in
refs. \cite{regu14,regu16}.
\subsection{Class I}
Class one states are represented by
\begin{eqnarray}
\left\vert G_{abcd}^{(1)}\right\rangle &=&\frac{a+d}{2}\left( \left\vert
0000\right\rangle +\left\vert 1111\right\rangle \right) +\frac{a-d}{2}\left(
\left\vert 0011\right\rangle +\left\vert 1100\right\rangle \right) \notag \\
&&+\frac{b+c}{2}\left( \left\vert 0101\right\rangle +\left\vert
1010\right\rangle \right) +\frac{b-c}{2}\left( \left\vert 0110\right\rangle
+\left\vert 1001\right\rangle \right) .
\end{eqnarray}
For any partition $A_{i}A_{j}A_{k}$, $\left( I_{3}\right)
_{A_{l}}^{4,0}=\left( I_{3}\right) _{A_{l}}^{0,4}$, $\left( I_{3}\right)
_{A_{l}}^{2,2}\neq 0$, while $\left( I_{3}\right) _{A_{l}}^{3,1}=\left(
I_{3}\right) _{A_{l}}^{1,3}=0$. As a result, $N_{4,8}^{A_{i}A_{j}A_{k}}=2
\left\vert I_{4,8}\right\vert $, therefore three tangle, $\tau
_{A_{i}|A_{j}|A_{k}}\left( \rho \right) =0$.
\subsection{Class II}
For class two states three-tangle for all four three-qubit partitions has
the same value. Consider pure state three-qubit invariants for partition $
A_{1}A_{2}A_{3}$ in representative state
\begin{eqnarray}
\left\vert G_{adc}^{(2)}\right\rangle &=&\frac{a+d}{2}\left( \left\vert
0000\right\rangle +\left\vert 1111\right\rangle \right) +\frac{a-d}{2}\left(
\left\vert 0011\right\rangle +\left\vert 1100\right\rangle \right) \notag
\\
&&+c\left( \left\vert 0101\right\rangle +\left\vert 1010\right\rangle
\right) +\left\vert 0110\right\rangle .
\end{eqnarray}
Three-qubit invariants have values
\begin{equation}
\left( I_{3}\right) _{A_{4}}^{4,0}=\frac{c\left( a^{2}-d^{2}\right) }{\left(
\left\vert a\right\vert ^{2}+\left\vert d\right\vert ^{2}+2\left\vert
c\right\vert ^{2}+1\right) ^{2}},\left( I_{3}\right) _{A_{4}}^{2,2}=\frac{
\left( a^{2}-c^{2}\right) \left( d^{2}-c^{2}\right) }{6\left( \left\vert
a\right\vert ^{2}+\left\vert d\right\vert ^{2}+2\left\vert c\right\vert
^{2}+1\right) ^{2}},
\end{equation}
while $\left( I_{3}\right) _{A_{4}}^{3,1}=\left( I_{3}\right)
_{A_{4}}^{1,3}=\left( I_{3}\right) _{A_{4}}^{0,4}=0$. Consequently the sum
of three and four-way correlations is given by
\begin{equation}
16N_{4,8}^{A_{1}A_{2}A_{3}}=16\left( \left\vert \left( I_{3}\right)
_{A_{4}}^{4,0}\right\vert ^{2}+6\left\vert \left( I_{3}\right)
_{A_{4}}^{2,2}\right\vert ^{2}\right) .
\end{equation}
Using the result of Eq. (\ref{tauup4}) and the fact that $\tau
_{A_{i}|A_{j}|A_{k}}\left( \rho \right) =\tau _{A_{1}|A_{2}|A_{3}}\left(
\rho \right) $, we obtain
\begin{equation}
\tau _{A_{i}|A_{j}|A_{k}}\left( \rho \right) \leq \frac{4\left\vert c\left(
a^{2}-d^{2}\right) \right\vert }{\left( \left\vert a\right\vert
^{2}+\left\vert d\right\vert ^{2}+2\left\vert c\right\vert ^{2}+1\right) ^{2}
}\left( \frac{\left\vert \left\vert c\left( a^{2}-d^{2}\right) \right\vert
-\left\vert \left( a^{2}-c^{2}\right) \left( d^{2}-c^{2}\right) \right\vert
\right\vert }{\left\vert c\left( a^{2}-d^{2}\right) \right\vert +\left\vert
\left( a^{2}-c^{2}\right) \left( d^{2}-c^{2}\right) \right\vert }\right) .
\end{equation}
Correct bound calculated in \cite{regu16} for this class of states is $\frac{
4\left\vert c\left( a^{2}-d^{2}\right) \right\vert }{\left( \left\vert
a\right\vert ^{2}+\left\vert d\right\vert ^{2}+2\left\vert c\right\vert
^{2}+1\right) ^{2}}$.
\subsection{Class III}
Three tangle vanishes on reduced state obtained by tracing out qubit $A_{2}$
or $A_{4}$ from state
\begin{equation}
\left\vert G_{ab}^{(3)}\right\rangle =a\left( \left\vert 0000\right\rangle
+\left\vert 1111\right\rangle \right) +b\left( \left\vert 0101\right\rangle
+\left\vert 1010\right\rangle \right) +\left\vert 0011\right\rangle
+\left\vert 0110\right\rangle ,
\end{equation}
because $N_{4,8}^{A_{1}A_{3}A_{4}}=N_{4,8}^{A_{1}A_{2}A_{3}}=2\left\vert
I_{4,8}\right\vert $. On the other hand, if qubit $A_{3}$ is traced out then
non-zero three-qubit invariants $\left( I_{3}\right) _{A_{3}}^{0,4}=\frac{
-4ab}{\left( 2\left\vert a\right\vert ^{2}+2\left\vert b\right\vert
^{2}+2\right) ^{2}}$ and $\left( I_{3}\right) _{A_{3}}^{2,2}=\frac{2\left(
a^{2}-b^{2}\right) ^{2}}{3\left( 2\left\vert a\right\vert ^{2}+2\left\vert
b\right\vert ^{2}+2\right) ^{2}}$, determine the three tangle. Four qubit
invariant $\left\vert I_{4,8}\right\vert =3\left\vert \left( I_{3}\right)
_{A_{l}}^{2,2}\right\vert ^{2}$ and pure state three-way correlation are
found to be $16\left( N_{4,8}^{A_{1}A_{2}A_{4}}-2\left\vert
I_{4,8}\right\vert \right) =16\left\vert \left( I_{3}\right)
_{A_{3}}^{0,4}\right\vert ^{2}$. Using the result given in Eq. (\ref{tauup5}
), upper bound on three tangle for the partition $A_{1}A_{2}A_{4}$ reads as
\begin{equation}
\tau _{A_{1}|A_{2}|A_{4}}\left( \rho \right) \leq \frac{4\left\vert
ab\right\vert }{\left( \left\vert a\right\vert ^{2}+\left\vert b\right\vert
^{2}+1\right) ^{2}}\frac{\left\vert 4\left\vert ab\right\vert -\left\vert
a^{2}-b^{2}\right\vert ^{2}\right\vert }{\left( 4\left\vert ab\right\vert
+\left\vert a^{2}-b^{2}\right\vert ^{2}\right) }.
\end{equation}
Upper bound calculated in ref. \cite{regu14} is $\tau
_{A_{1}|A_{2}|A_{4}}\left( \rho \right) \leq \frac{4\left\vert ab\right\vert
}{\left( \left\vert a\right\vert ^{2}+\left\vert b\right\vert ^{2}+1\right)
^{2}}$. Our upper bound may also be compared with the convex roof for the
same state reported in \cite{oste16} to be
\begin{equation}
\left[ \tau _{A_{1}|A_{2}|A_{4}}\left( \rho \right) \right] ^{\frac{1}{2}
}=\max \left( 0,\left( \frac{2\sqrt{\left\vert ab\right\vert }}{\left(
\left\vert a\right\vert ^{2}+\left\vert b\right\vert ^{2}+1\right) }\right)
\frac{\left( 4\left\vert ab\right\vert -\left\vert a^{2}-b^{2}\right\vert
^{2}\right) }{4\left\vert ab\right\vert }\right) \text{.}
\end{equation}
\subsection{Class IV}
For entanglement class represented by
\begin{align}
\left\vert G_{ab}^{\left( 4\right) }\right\rangle & =a\left( \left\vert
0000\right\rangle +\left\vert 1111\right\rangle \right) +\frac{a+b}{2}\left(
\left\vert 1010\right\rangle +\left\vert 0101\right\rangle \right) \notag
\\
& +\frac{a-b}{2}\left( \left\vert 0110\right\rangle +\left\vert
1001\right\rangle \right) +\frac{i}{\sqrt{2}}\left( \left\vert
0010\right\rangle +\left\vert 0001\right\rangle +\left\vert
0111\right\rangle +\left\vert 1011\right\rangle \right)
\end{align}
all four reduced density matrices are found to have the same upper bound on
three-tangle. Taking up the case of qubits $A_{1}A_{2}A_{3}$, the only non-zero
pure-state three-qubit invariant is $\left( I_{3}\right) _{A_{4}}^{0,4}$.
Since $I_{4,8}=0$, three-tangle is equal to
\begin{equation}
\tau _{A_{1}|A_{2}|A_{3}}\left( \rho \right) =\sqrt{N_{4,8}^{A_{1}A_{2}A_{3}
}=4\left\vert \left( I_{3}\right) _{A_{4}}^{0,4}\right\vert . \label{t123}
\end{equation}
Substituting the value of $\left( I_{3}\right) _{A_{4}}^{0,4}$ in Eq. (\ref
{t123}) and using $\tau _{A_{1}|A_{2}|A_{3}}\left( \rho \right) =\tau
_{A_{i}|A_{j}|A_{k}}\left( \rho \right) $, we have the inequality
\begin{equation}
\tau _{A_{i}|A_{j}|A_{k}}\left( \rho \right) \leq \frac{2\left\vert
a^{2}-b^{2}\right\vert }{\left( 2+3\left\vert a\right\vert ^{2}+\left\vert
b\right\vert ^{2}\right) ^{2}}.
\end{equation}
This is the same as reported for this state in \cite{regu14}.
\subsection{Class V}
For the representative of entanglement class V, which reads as
\begin{align}
\left\vert G_{a}^{(5)}\right\rangle & =a\left( \left\vert 0000\right\rangle
+\left\vert 1111\right\rangle +\left\vert 0101\right\rangle +\left\vert
1010\right\rangle \right) \notag \\
& +i\left\vert 0001\right\rangle +\left\vert 0110\right\rangle -i\left\vert
1011\right\rangle ,
\end{align}
non-zero three-qubit invariants of interest are
\begin{equation}
\left( I_{3}\right) _{A_{4}}^{0,4}=\left( I_{3}\right) _{A_{2}}^{4,0}=\frac{
-4a^{2}}{\left( 3+4\left\vert a\right\vert ^{2}\right) ^{2}},
\end{equation}
and
\begin{equation}
\left( I_{3}\right) _{A_{3}}^{1,3}=\frac{-2ia^{2}}{\left( 3+4\left\vert
a\right\vert ^{2}\right) ^{2}};\quad \left( I_{3}\right) _{A_{3}}^{0,4}=
\frac{-1}{\left( 3+4\left\vert a\right\vert ^{2}\right) ^{2}}.
\end{equation}
Consequently, $\tau _{A_{1}|A_{2}|A_{3}}\left( \rho \right) =\tau
_{A_{1}|A_{3}|A_{4}}\left( \rho \right) $, such that
\begin{equation*}
\tau _{A_{1}|A_{2}|A_{3}}\left( \rho \right) \leq \frac{16\left\vert
a^{2}\right\vert }{\left( 3+4\left\vert a\right\vert ^{2}\right) ^{2}}.
\end{equation*}
This bound coincides with the results from ref. \cite{oste16} and ref. \cite
{regu14}.
For the marginal state obtained by tracing out qubit $A_{3}$, the values of $
\left( I_{3}\right) _{A_{3}}^{0,4}$ and $\left( I_{3}\right) _{A_{3}}^{1,3}$
are substituted in Eq. (\ref{tauup6}) to obtain
\begin{equation}
\tau _{A_{1}|A_{2}|A_{4}}\left( \rho \right) \leq \frac{4}{\left(
3+4\left\vert a\right\vert ^{2}\right) ^{2}}\left( \frac{1}{1+64\left\vert
a\right\vert ^{4}}\right) .
\end{equation}
In comparison, upper bound on three tangle calculated by Osterloh (Eq. 37 of
ref. \cite{oste16}) corresponds to
\begin{equation}
\tau _{A_{1}|A_{2}|A_{4}}\left( \rho \right) \leq \frac{4}{\left(
3+4a^{2}\right) ^{2}}\left( \frac{1+64\left\vert a\right\vert ^{2}}{\left(
1+64\left\vert a\right\vert ^{4}\right) ^{2}}\right) \text{.}
\end{equation}
\subsection{Classes VI, VII, VIII and IX}
Non-zero pure state three-qubit invariants for class six state
\begin{equation}
\left\vert G_{a}^{\left( 6\right) }\right\rangle =a\left( \left\vert
0000\right\rangle +\left\vert 1111\right\rangle \right) +\left\vert
0011\right\rangle +\left\vert 0101\right\rangle +\left\vert
0110\right\rangle ,
\end{equation}
are given by $\left( I_{3}\right) _{A_{4}}^{2,2}=\left( I_{3}\right)
_{A_{3}}^{2,2}=\left( I_{3}\right) _{A_{2}}^{2,2}=\frac{a^{4}}{6\left(
3+2a^{2}\right) ^{2}}$. Consequently, $\tau _{A_{i}|A_{j}|A_{k}}\left( \rho
\right) $ is zero on states of the entanglement type represented by state $
\left\vert G_{a}^{\left( 6\right) }\right\rangle $. States represented by
\begin{equation}
\left\vert G^{\left( 7\right) }\right\rangle =\left\vert 0000\right\rangle
+\left\vert 0101\right\rangle +\left\vert 1000\right\rangle +\left\vert
1110\right\rangle ,
\end{equation}
and
\begin{equation}
\left\vert G_{ab}^{\left( 8\right) }\right\rangle =\left\vert
0000\right\rangle +\left\vert 1011\right\rangle +\left\vert
1101\right\rangle +\left\vert 1110\right\rangle ,
\end{equation}
differ in the amount of two-way correlations. For both the states, $\tau
_{A_{1}|A_{j}|A_{k}}\left( \rho \right) =\frac{1}{4},\tau
_{A_{2}|A_{3}|A_{4}}\left( \rho \right) =0$, while state nine which reads as
\begin{equation}
\left\vert G_{ab}^{\left( 9\right) }\right\rangle =\left\vert
0000\right\rangle +\left\vert 0111\right\rangle ,
\end{equation}
obviously has $\tau _{A_{2}|A_{3}|A_{4}}\left( \rho \right) =\frac{1}{4}$.
\section{Upper bound on three tangle and optimal decomposition of a rank two
mixed state}
The procedure of section III may be used as an additional tool to find the
optimal decomposition $\{p_{i},\left\vert \phi _{i}\right\rangle \}$ that
realizes the minimum in the definition (Eq. (\ref{mix3tangle})) of three
tangle of a mixed three-qubit state $\rho _{3}=\sum_{i}p_{i}\left\vert \phi
_{i}\right\rangle \left\langle \phi _{i}\right\vert $. To write down the
equations corresponding to Eq. (\ref{i40}) and Eq. (\ref{i04}), one
calculates relevant three-qubit invariants of the purification of the state.
Minimization may require solving a quartic equation. If an analytical
solution is not available, then the roots of the equation may be found
numerically. Consider a mixture of three-qubit pure states,
\begin{equation}
\rho _{3}=p\left\vert \phi ^{\left( 0\right) }\right\rangle \left\langle
\phi ^{\left( 0\right) }\right\vert +\left( 1-p\right) \left\vert \phi
^{\left( 1\right) }\right\rangle \left\langle \phi ^{\left( 1\right)
}\right\vert , \label{ro123}
\end{equation}
where $\left\langle \phi _{1}\right. \left\vert \phi _{0}\right\rangle =0$.
Purification of the state can be written as
\begin{equation}
\left\vert \Psi _{4}\right\rangle =\sqrt{p}\left\vert \phi ^{\left( 0\right)
}\right\rangle \left\vert 0\right\rangle +\exp \left( i\theta \right) \sqrt{
1-p}\left\vert \phi ^{\left( 1\right) }\right\rangle \left\vert
1\right\rangle .
\end{equation}
Action of $U(x)=\frac{1}{\sqrt{1+\left\vert x\right\vert ^{2}}}\left[
\begin{array}{cc}
1 & -x^{\ast } \\
x & 1
\end{array}
\right] ,$ on fourth qubit of $\left\vert \Psi _{4}\right\rangle $ leads to
state
\begin{equation}
\left\vert \Psi _{4}\left( x\right) \right\rangle =\sqrt{p^{\left( 0\right)
}(x)}\left\vert \phi ^{\left( 0\right) }(x)\right\rangle \left\vert
0\right\rangle +\sqrt{p^{\left( 1\right) }(x)}\left\vert \phi ^{\left(
1\right) }(x)\right\rangle \left\vert 1\right\rangle \label{psix}
\end{equation}
where
\begin{equation}
\left\vert \phi ^{\left( 0\right) }(x)\right\rangle =\frac{\sqrt{p}
\left\vert \phi ^{\left( 0\right) }\right\rangle -x^{\ast }\exp \left(
i\theta \right) \sqrt{1-p}\left\vert \phi ^{\left( 1\right) }\right\rangle }{
\sqrt{p+\left\vert x\right\vert ^{2}\left( 1-p\right) }};p^{\left( 0\right)
}\left( x\right) =\frac{p+\left\vert x\right\vert ^{2}\left( 1-p\right) }{
1+\left\vert x\right\vert ^{2}},
\end{equation}
and
\begin{equation}
\left\vert \phi ^{\left( 1\right) }(x)\right\rangle =\frac{\exp \left(
i\theta \right) \sqrt{1-p}\left\vert \phi ^{\left( 1\right) }\right\rangle +x
\sqrt{p}\left\vert \phi ^{\left( 0\right) }\right\rangle }{\sqrt{p\left\vert
x\right\vert ^{2}+\left( 1-p\right) }};p^{\left( 1\right) }\left( x\right)
=1-p^{\left( 0\right) }\left( x\right) .
\end{equation}
The upper bound on three tangle of reduced state
\begin{equation*}
\rho (x)=\sum_{i=0}^{1}p^{\left( i\right) }(x)\left\vert \phi ^{\left(
i\right) }(x)\right\rangle \left\langle \phi ^{\left( i\right)
}(x)\right\vert ,
\end{equation*}
subject to the condition that $\rho (x)=\rho _{3}$, can be found by using
the procedure outlined in section III. However, proper analysis of three
tangles $\tau _{A_{1}|A_{2}|A_{3}}\left( \left\vert \phi ^{\left( 0\right)
}(x)\right\rangle \right) $ and $\tau _{A_{1}|A_{2}|A_{3}}\left( \left\vert
\phi ^{\left( 1\right) }(x)\right\rangle \right) $ along with the respective
vectors, further aids in improving on the upper bound.
To illustrate, we recover the results for a mixed state studied in ref. \cite
{lohm06}, which reads as
\begin{equation}
\rho _{3}=p\left\vert GHZ\right\rangle \left\langle GHZ\right\vert +\left(
1-p\right) \left\vert W\right\rangle \left\langle W\right\vert ,
\end{equation}
such that $\left\vert GHZ\right\rangle =\frac{1}{\sqrt{2}}\left( \left\vert
000\right\rangle +\left\vert 111\right\rangle \right) $ and $\left\vert
W\right\rangle =\frac{1}{\sqrt{3}}\left( \left\vert 100\right\rangle
+\left\vert 010\right\rangle +\left\vert 001\right\rangle \right) $. Values
of relevant three-qubit invariants for the purification
\begin{equation}
\left\vert \Psi \right\rangle =\sqrt{p}\left\vert GHZ\right\rangle
\left\vert 0\right\rangle +\exp \left( i\theta \right) \sqrt{\left(
1-p\right) }\left\vert W\right\rangle \left\vert 1\right\rangle ,
\end{equation}
are $\left( I_{3}\right) _{A_{4}}^{4,0}=\frac{p^{2}}{4}$ and $4\left(
I_{3}\right) _{A_{4}}^{1,3}=4\frac{\exp \left( i3\theta \right) \sqrt{
p\left( 1-p\right) ^{3}}}{3\sqrt{6}}$. State $\left\vert \Psi _{4}\left(
x\right) \right\rangle $ (Eq. (\ref{psix})) obtained after a unitary
transformation $U(x)$ on fourth qubit contains normalized three-qubit state
\begin{equation}
\left\vert \phi ^{\left( 0\right) }(x,\theta )\right\rangle =\frac{\sqrt{p}
\left\vert GHZ\right\rangle -x^{\ast }\exp \left( i\theta \right) \sqrt{1-p}
\left\vert W\right\rangle }{\sqrt{p+\left\vert x\right\vert ^{2}\left(
1-p\right) }},\quad p^{\left( 0\right) }(x)=\frac{p+\left\vert x\right\vert
^{2}\left( 1-p\right) }{1+\left\vert x\right\vert ^{2}},
\end{equation}
and
\begin{equation*}
\left\vert \phi ^{\left( 1\right) }(x,\theta )\right\rangle =\frac{\exp
\left( i\theta \right) \sqrt{1-p}\left\vert W\right\rangle +x\sqrt{p}
\left\vert GHZ\right\rangle }{\sqrt{\left( 1-p\right) +p\left\vert
x\right\vert ^{2}}};\quad p^{\left( 1\right) }(x)=1-p^{\left( 0\right) }(x),
\end{equation*}
such that the $x$ dependent three tangle is given by
\begin{equation*}
\left[ \tau _{A_{1}|A_{2}|A_{3}}\left( \rho \left( x\right) \right) \right]
^{\frac{1}{2}}=\sum_{i=0,1}p^{\left( i\right) }(x)\left[ \tau
_{A_{1}|A_{2}|A_{3}}\left( \left\vert \phi ^{\left( i\right) }(x,\theta
)\right\rangle \right) \right] ^{\frac{1}{2}}\text{.}
\end{equation*}
Three tangles of vectors in the superposition are
\begin{equation*}
\tau _{A_{1}|A_{2}|A_{3}}\left( \left\vert \phi ^{\left( 0\right) }(x,\theta
)\right\rangle \right) =\frac{4\left\vert \left( I_{3}\right)
_{A_{4}}^{4,0}-4\left( x^{\ast }\right) ^{3}\left( I_{3}\right)
_{A_{4}}^{1,3}\right\vert }{\left( p+\left( 1-p\right) \left\vert
x\right\vert ^{2}\right) ^{2}},
\end{equation*}
and
\begin{equation*}
\tau _{A_{1}|A_{2}|A_{3}}\left( \left\vert \phi ^{\left( 1\right) }(x,\theta
)\right\rangle \right) =\frac{4\left\vert x^{4}\left( I_{3}\right)
_{A_{4}}^{4,0}+4x\left( I_{3}\right) _{A_{4}}^{1,3}\right\vert }{\left(
\left\vert x\right\vert ^{2}p+\left( 1-p\right) \right) ^{2}}.
\end{equation*}
Substituting the values of $\left( I_{3}\right) _{A_{4}}^{4,0}$ and $4\left(
I_{3}\right) _{A_{4}}^{1,3}$ in $\tau _{A_{1}|A_{2}|A_{3}}\left( \left\vert
\phi ^{\left( 0\right) }(x,\theta )\right\rangle \right) $, we obtain
\begin{equation*}
\tau _{A_{1}|A_{2}|A_{3}}\left( \left\vert \phi ^{\left( 0\right) }(x,\theta
)\right\rangle \right) =\frac{4\left\vert \frac{p^{2}}{4}-4\left( x^{\ast
}\right) ^{3}\frac{\exp \left( i3\theta \right) \sqrt{p\left( 1-p\right) ^{3
}}{3\sqrt{6}}\right\vert }{\left( p+\left( 1-p\right) \left\vert
x\right\vert ^{2}\right) ^{2}}
\end{equation*}
which is a periodic function of $\theta $ with a period of $2\pi /3$. For
the choice $\theta _{n}=2\pi n/3$, $n=0$, $1$, $2$, three-tangle $\tau
_{A_{1}|A_{2}|A_{3}}\left( \left\vert \phi ^{\left( 0\right) }(x_{0},\theta
_{n})\right\rangle \right) $ becomes zero for $x_{0}=\left( \frac{3\times 2^{
\frac{5}{3}}p}{16\left( 1-p\right) }\right) ^{\frac{1}{2}}$. A closer
examination shows that for $p\leq 0.626851$ the value of $x_{0}$ lies within
the range $0\leq \left\vert x\right\vert \leq 1$, while for $p>0.626851$, $
\tau _{A_{1}|A_{2}|A_{3}}\left( \left\vert \phi ^{\left( 0\right) }(x,\theta
_{n})\right\rangle \right) >0$ being minimum at $x=1$. For $0\leq p\leq
0.626851$, three-tangle $\left[ \tau _{A_{1}|A_{2}|A_{3}}\left( \rho
_{3}\left( x_{0}\right) \right) \right] =0$, for the mixed state
\begin{equation*}
\rho _{3}\left( x_{0}\right) =\frac{1}{3}\sum_{n=0}^{2}\left\vert \phi
^{\left( 0\right) }(x_{0},\theta _{n})\right\rangle \left\langle \phi
^{\left( 0\right) }(x_{0},\theta _{n})\right\vert ,
\end{equation*}
where
\begin{equation*}
\left\vert \phi ^{\left( 0\right) }(x_{0},\theta _{n})\right\rangle =\frac{
4\left\vert GHZ\right\rangle -\exp \left( i\theta _{n}\right) \sqrt{3\times
2^{\frac{5}{3}}}\left\vert W\right\rangle }{4\sqrt{\left( 1+\frac{3}{8}2^{
\frac{2}{3}}\right) }}.
\end{equation*}
Hence the decomposition of $\rho _{3}$ of Eq. (\ref{ro123}), with $\left[
\tau _{A_{1}|A_{2}|A_{3}}\left( \rho _{3}\left( x_{0}\right) \right) \right]
=0$ can be written as
\begin{equation*}
\rho _{3}=p\left( 1+\frac{3}{8}2^{\frac{2}{3}}\right) \rho _{3}\left(
x_{0}\right) +\left( 1-p\left( 1+\frac{3}{8}2^{\frac{2}{3}}\right) \right)
\left\vert W\right\rangle \left\langle W\right\vert \text{.}
\end{equation*}
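As a quick numerical sanity check (our addition, not part of the original argument), one may verify that $\tau _{A_{1}|A_{2}|A_{3}}\left( \left\vert \phi ^{\left( 0\right) }(x_{0},\theta _{n})\right\rangle \right) $ indeed vanishes at the stated $x_{0}$; the function and variable names below are ours:
\begin{verbatim}
import numpy as np

def tau_phi0(x, theta, p):
    # three tangle of |phi^(0)(x, theta)>, using the invariants
    # (I_3)^{4,0} = p^2/4 and
    # 4 (I_3)^{1,3} = 4 exp(3 i theta) sqrt(p (1-p)^3) / (3 sqrt(6))
    I40 = p**2 / 4
    I13 = np.exp(3j*theta) * np.sqrt(p*(1 - p)**3) / (3*np.sqrt(6))
    return 4*abs(I40 - 4*np.conj(x)**3 * I13) / (p + (1 - p)*abs(x)**2)**2

p = 0.5
x0 = np.sqrt(3 * 2**(5/3) * p / (16*(1 - p)))  # within [0,1] since p < 0.626851
print(tau_phi0(x0, 0.0, p))                    # ~ 0 up to rounding error
\end{verbatim}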
Since for $p>0.626851,$ vectors $\left\vert \phi ^{\left( 0\right)
}(1,\theta _{n})\right\rangle $ have the lowest value of three-tangle, the upper
bound on three-tangle is given by
\begin{equation*}
\tau _{A_{1}|A_{2}|A_{3}}\left( \rho _{3}\left( 1\right) \right) =\left\vert
\frac{p^{2}}{4}-\frac{4\sqrt{p\left( 1-p\right) ^{3}}}{3\sqrt{6}}\right\vert
\text{,}
\end{equation*}
and the corresponding decomposition is
\begin{equation*}
\rho _{3}=\frac{1}{3}\sum_{n=0}^{2}\left\vert \phi ^{\left( 0\right)
}(1,\theta _{n})\right\rangle \left\langle \phi ^{\left( 0\right) }(1,\theta
_{n})\right\vert .
\end{equation*}
Similar arguments may be used to find the upper bound on three-tangle of an
arbitrary rank-two mixed state of three qubits.
\section{Concluding remarks}
For most of the four-qubit states, our bound on three-tangle of reduced
state is tighter than that used in ref. \cite{regu14}. A careful examination
shows that the upper bounds on three tangles listed in Table I of ref. \cite
{regu14} are given by $4\sqrt{N_{4,8}^{A_{i}A_{j}A_{k}}-\left\vert
2I_{4,8}\right\vert }$. On the other hand for states that correspond to
cases (iv), (v), and (vi) of section IV, three tangle satisfies
\begin{equation}
\tau _{A_{i}|A_{j}|A_{k}}\left( \rho \right) \leq 4F\sqrt{
N_{4,8}^{A_{i}A_{j}A_{k}}-\left\vert 2I_{4,8}\right\vert },\quad F\leq 1.
\end{equation}
In ref. \cite{oste16} unitary on three qubit states is used to obtain
minimal characteristic curves to construct convex roof of three tangle for
nine classes of four-qubit states. Comparing the upper bound obtained by
unitary on fourth qubit with results for tangle corresponding to minimal
characteristic curve in ref. \cite{oste16} it is noted that for states in
class II, class III, and class V our value is lower for certain ranges of
state parameters than that of ref. \cite{oste16}. For all other cases, the
result obtained is the same.
Correct understanding of the relation between pure state correlations and
entanglement of marginal states is crucial to discovering the form of
monogamy relations for multipartite entanglement. After examining the upper
bounds for nine classes of four-qubit states $\left\vert \Psi \right\rangle $
, we conclude that
\begin{eqnarray}
4\left\vert \left( I_{3}\right) _{A_{4}}^{4,0}\right\vert +4\left\vert
\left( I_{3}\right) _{A_{4}}^{0,4}\right\vert &\geq &\tau
_{A_{1}|A_{2}|A_{3}}^{up}\left( \rho _{123}\right) \geq \tau
_{A_{1}|A_{2}|A_{3}}\left( \rho _{123}\right) \text{,} \label{upthree} \\
4\left\vert \left( I_{3}\right) _{A_{3}}^{4,0}\right\vert +4\left\vert
\left( I_{3}\right) _{A_{3}}^{0,4}\right\vert &\geq &\tau
_{A_{1}|A_{2}|A_{4}}^{up}\left( \rho _{124}\right) \geq \tau
_{A_{1}|A_{2}|A_{4}}\left( \rho _{124}\right) \text{,}
\end{eqnarray}
and
\begin{equation}
4\left\vert \left( I_{3}\right) _{A_{2}}^{4,0}\right\vert +4\left\vert
\left( I_{3}\right) _{A_{2}}^{0,4}\right\vert \geq \tau
_{A_{1}|A_{3}|A_{4}}^{up}\left( \rho _{134}\right) \geq \tau
_{A_{1}|A_{3}|A_{4}}\left( \rho _{134}\right) \text{.}
\end{equation}
This result is used in \cite{shar17} to analytically write down the correct
monogamy inequality for four-qubit states.
To conclude, the method outlined in this letter can be used to obtain three
tangle of a rank-two three-qubit mixed state. Three-tangles are of interest
to establish the connection between condensed-matter physics and quantum
information \cite{amic08} as well as to better understand the connection
between quantum correlations in spin systems undergoing quantum phase
transitions \cite{werl10}.
Financial support from Universidade Estadual de Londrina PR, Brazil is
acknowledged.
\section{Introduction} In this paper we consider the \emph{massive Maxwell-Klein-Gordon} (MKG) equation on the Minkowski space $ \mathbb{R}^{d+1} $ endowed with the metric $ g=\text{diag}(1,-1,\dots,-1) $. Throughout the paper we assume $ d \geq 4 $.
This equation arises as the Euler-Lagrange equations for the Lagrangian
$$ \mathcal S[A_{\mu},\phi]= \iint_{\mathbb{R}^{d+1}} \frac{1}{2} D_{\alpha} \phi \overline{D^{\alpha} \phi} + \frac{1}{4}F_{\alpha \beta} F^{\alpha \beta} - \frac{1}{2} m^2 \vm{\phi}^2 \,\mathrm{d} x \,\mathrm{d} t $$
Here $ \phi : \mathbb{R}^{d+1} \to \mathbb{C} $ is a complex field, while $ A_{\alpha} : \mathbb{R}^{d+1} \to \mathbb{R} $ is a real $1$-form with curvature
$$ F_{\alpha \beta} \vcentcolon= \partial_{\alpha} A_{\beta}- \partial_{\beta} A_{\alpha}. $$
One defines the \emph{covariant derivatives} and the \emph{covariant Klein-Gordon operator} by
$$ D_{\alpha} \phi \vcentcolon= (\partial_{\alpha}+i A_{\alpha}) \phi, \qquad \Box_m^A \vcentcolon= D^{\alpha} D_{\alpha}+m^2
$$
A brief computation shows that the Euler-Lagrange equations take the form
\begin{equation} \label{MKG}
\left\{
\begin{aligned}
& \partial^{\beta} F_{\alpha \beta} = \mathfrak{I}(\phi \overline{D_{\alpha}\phi}), \\
& (D^{\alpha} D_{\alpha}+m^2) \phi = 0, \\
\end{aligned}
\right.
\end{equation}
The MKG system is considered the simplest classical field theory enjoying a nontrivial \emph{gauge invariance}. Indeed, for any potential function $ \chi $, replacing
\begin{equation} \label{gauge:inv} \phi \mapsto e^{i \chi} \phi, \quad A_{\alpha} \mapsto A_{\alpha}-\partial_{\alpha} \chi, \quad D_{\alpha} \mapsto e^{i \chi} D_{\alpha} e^{-i \chi}
\end{equation}
one obtains another solution to \eqref{MKG}. To remove this gauge ambiguity we will work with the \emph{Coulomb gauge}
\begin{equation} \label{Coulomb}
\text{div}_x A=\partial^j A_j=0
\end{equation}
where Roman indices are used in sums over the spatial components. The system is Lorentz invariant and admits a \emph{conserved energy}, which we will not use here.
Under \eqref{Coulomb}, denoting $ J_{\alpha}=-\mathfrak{I} (\phi \overline{D_{\alpha} \phi}) $, the MKG system \eqref{MKG} becomes
\begin{equation} \label{MKGCG}
\left\{
\begin{aligned}
& \Box_m^A \phi = 0 \\
& \Box A_i = \mathcal P_i J_x \\
& \Delta A_0=J_0, \ \Delta \partial_t A_0=\partial^i J_i
\end{aligned}
\right.
\end{equation}
where $ \mathcal P $ denotes the Leray projection onto divergence-free vector fields
\begin{equation} \label{Leray} \mathcal P_j A \vcentcolon= \Delta^{-1} \partial^k (\partial_k A_j-\partial_j A_k ).
\end{equation}
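On the Fourier side, \eqref{Leray} is multiplication by the matrix $ \delta_{jk} - \xi_j \xi_k / \vm{\xi}^2 $. As a minimal numerical illustration (ours, not part of the paper), the projection can be realized on a periodic box with the FFT:
\begin{verbatim}
import numpy as np

def leray_project(A):
    # Divergence-free projection of a vector field A of shape
    # (d, N_1, ..., N_d) on the periodic box:
    # (PA)^hat_j = A^hat_j - xi_j (xi . A^hat) / |xi|^2
    d = A.shape[0]
    axes = tuple(range(1, d + 1))
    Ah = np.fft.fftn(A, axes=axes)
    xi = np.array(np.meshgrid(*[np.fft.fftfreq(n) for n in A.shape[1:]],
                              indexing="ij"))
    xi2 = (xi**2).sum(axis=0)
    xi2[(0,)*d] = 1.0   # avoid 0/0; the zero mode is left unchanged
    div_hat = (xi * Ah).sum(axis=0)
    return np.real(np.fft.ifftn(Ah - xi * (div_hat / xi2), axes=axes))
\end{verbatim}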
When $ m=0 $ the equation is invariant under the \emph{scaling}
$$ \phi \mapsto \lambda \phi(\lambda t, \lambda x); \qquad A_{\alpha} \mapsto \lambda A_{\alpha}(\lambda t, \lambda x) $$
which implies that $ \sigma=\frac{d}{2}-1 $ is the \emph{critical regularity}. We shall refer to $ H^{\sigma}\times H^{\sigma-1} \times \dot{H}^{\sigma} \times \dot{H}^{\sigma-1} $ as the critical space for $ (A,\phi)[0] $ also when $ m\neq 0 $.
At this regularity, globally in time, the mass term $ m^2 \phi $ is not perturbative and must be considered as part of the operator $ \Box_m^A $.
The main result of this paper consists in extending the results in \cite{KST}, \cite{RT} to the case $ m\neq 0 $. For a more detailed statement, see Theorem \ref{thm:main-iter}.
\begin{theorem}[Critical small data global well-posedness and scattering] \label{thm:main}
Let $ d \geq 4 $ and $ \sigma=\frac{d}{2}-1$.
The MKG equation \eqref{MKGCG} is well-posed for small initial data on $\mathbb R^{1+d}$ with $m^2 > 0$, in the following sense: there exists a universal constant $\varepsilon > 0$ such that:
\begin{enumerate} [leftmargin=*]
\item Let $(\phi[0], A_{x}[0])$ be a smooth initial data set satisfying the Coulomb condition \eqref{Coulomb} and the smallness condition
\begin{equation} \label{eq:main:smalldata}
\nrm{\phi[0]}_{H^{\sigma}\times H^{\sigma-1}}
+ \nrm{A_{x}[0]}_{ \dot{H}^{\sigma} \times \dot{H}^{\sigma-1} } < \varepsilon.
\end{equation}
Then there exists a unique global smooth solution $(\phi, A)$ to the system \eqref{MKGCG} under the Coulomb gauge condition \eqref{Coulomb} on $\mathbb R^{1+d}$ with these data.
\item For any $T > 0$, the data-to-solution map $ (\phi[0], A_{x}[0]) \mapsto (\phi,\partial_t \phi, A_{x}, \partial_{t} A_{x})$ extends continuously to
\begin{align*}
H^{\sigma}\times H^{\sigma-1} \times \dot{H}^{\sigma} \times \dot{H}^{\sigma-1} (\mathbb R^{d}) \cap \set{\hbox{\eqref{eq:main:smalldata}}} \to C([-T, T]; H^{\sigma}\times H^{\sigma-1} \times \dot{H}^{\sigma} \times \dot{H}^{\sigma-1} (\mathbb R^{d})).
\end{align*}
\item The solution $(\phi, A)$ exhibits modified scattering as $t \to \pm \infty$: there exist a solution $(\phi^{\pm \infty}, A^{\pm \infty}_{j})$ to the linear system
\footnote{Here, $ A^{free} $ is the free solution $ \Box A^{free}=0 $ with $A^{free}_{x}[0] = A_{x}[0]$ and $ A^{free}_{0} = 0$.}
$$ \Box A_{j}^{\pm \infty} =0, \qquad \Box_m^{A^{free}} \phi=0, \qquad \text{such that} $$
\begin{equation*}
\nrm{(\phi - \phi^{\pm \infty})[t]}_{H^{\sigma}\times H^{\sigma-1}}
+ \nrm{(A_{j} - A^{\pm \infty}_{j})[t]}_{\dot{H}^{\sigma} \times \dot{H}^{\sigma-1} }
\to 0 \quad \hbox{ as } t \to \pm \infty.
\end{equation*}
\end{enumerate}
\end{theorem}
\
The case $ d=4 $ is the most difficult. When $ d \geq 5 $ the argument simplifies, in particular the spaces $ NE_C^{\pm}, \ PW_C^{\pm}, \ L^2 L^{4,2} $ discussed below are not needed. To fix notation, the reader is advised to set $ d=4, \ \sigma=1 $.
\begin{remark} The theorem is stated for Coulomb initial data. However, it can be applied to an arbitrary initial data satisfying \eqref{eq:main:smalldata} by performing a gauge transformation.
Given a real 1-form $A_{j}(0)$ on $\mathbb R^{d}$, one solves the Poisson equation
$$ \Delta \chi = \mathrm{div}_{x} A_{j}(0), \qquad \qquad \chi \in \dot{H}^{\frac{d}{2}} \cap \dot{W}^{1,d}(\mathbb R^{d}). $$
Then $\tilde{A}(0) = A(0) - \mathrm{d} \chi$ obeys the Coulomb condition \eqref{Coulomb}. For small $\varepsilon$, the small data condition \eqref{eq:main:smalldata} is preserved up to multiplication by a constant.
\end{remark}
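In the discrete picture of the sketch following \eqref{Leray}, this gauge adjustment is exactly $\tilde A(0) = \mathcal P A(0)$, since subtracting $\mathrm{d} \chi$ with $\Delta \chi = \mathrm{div}_{x} A(0)$ removes the curl-free part of $A(0)$. For instance (illustration only, reusing \texttt{leray\_project} from the sketch above):
\begin{verbatim}
# Coulomb gauge fixing of random initial data on a 4-d periodic box
A0 = np.random.randn(4, 16, 16, 16, 16)
A0_coulomb = leray_project(A0)
# every spatial Fourier mode of A0_coulomb now satisfies
# xi . A^hat = 0 up to rounding, i.e. div_x A = 0
\end{verbatim}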
In what follows we set $ m=1 $, noting that by rescaling, the statements for any $ m \neq 0 $ can be obtained. Notation-wise, we will write $ \Box_m $ rather than $ \Box_1 $.
\
\subsection{Previous work} Progress on the Maxwell-Klein-Gordon equation has occurred in conjunction with the Yang--Mills(-Higgs) equations. An early result was obtained by Eardley and Moncrief \cite{eardley1982}.
On $ \mathbb{R}^{2+1} $ and $ \mathbb{R}^{3+1} $ the MKG system is energy subcritical. Klainerman-Machedon \cite{KlMc} and Selberg-Tesfahun \cite{Selb} (in the Lorenz gauge) have proved global regularity as a consequence of local well-posedness at the energy regularity. Further results were obtained by Cuccagna \cite{Cucc}. Machedon-Sterbenz \cite{MachedonSterbenz} proved an essentially optimal local well-posedness result. In \cite{Keel2011573} in $ \mathbb{R}^{3+1} $, global well-posedness below the energy norm was considered.
On $ \mathbb{R}^{4+1} $, an almost optimal local well-posedness result was proved by Klainerman-Tataru \cite{KlaTat} for a model problem closely related to MKG and Yang-Mills. This was refined by Selberg \cite{SSel} for MKG and Sterbenz \cite{sterbenz2007global}.
At critical regularity all the existing results are for the massless case $ m=0 $. Rodnianski-Tao \cite{RT} proved global regularity for smooth and small critical Sobolev data in dimensions $ 6+1 $ and higher. This result was extended by Krieger-Sterbenz-Tataru \cite{KST} to $ \mathbb{R}^{4+1} $.
The small data $ \mathbb{R}^{4+1} $ energy critical massless result in \cite{KST} has been extended to large data global well-posedness by Oh-Tataru (\cite{OT2},\cite{OT3},\cite{OT1}) and independently by Krieger-L\"uhrmann \cite{KL}. Proving a similar large data result for the massive case remains an open problem.
Recent works on Yang-Mills include: \cite{KS}, \cite{KT}, \cite{OhYM}, \cite{oh2015}. We also mention the related $ \mathbb{R}^{4+1} $ Maxwell-Dirac result \cite{MD} at critical regularity.
\
\subsection{Main ideas}
\subsubsection*{Null structures in the Coulomb gauge.} Null structures arise in equations from mathematical physics, where they manifest through the vanishing of resonant interactions. A classical null form refers to a linear combination of
\begin{equation} \label{cl:nf}
\mathcal N_{ij}(\phi,\psi) =\partial_{i} \phi \partial_{j} \psi- \partial_{j} \phi \partial_{i} \psi, \qquad \mathcal N_{0}(\phi,\psi) =\partial_{\alpha} \phi \cdot \partial^{\alpha} \psi. \end{equation}
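To make the vanishing of resonant interactions concrete, one can test $ \mathcal N_{ij} $ on plane waves; the following computation is standard and recorded here only for illustration:
$$ \mathcal N_{ij}\big(e^{i x \cdot \xi}, e^{i x \cdot \eta}\big) = -\big(\xi_i \eta_j - \xi_j \eta_i\big)\, e^{i x \cdot (\xi+\eta)}, \qquad \vm{\xi_i \eta_j - \xi_j \eta_i} \lesssim \vm{\xi}\, \vm{\eta}\, \angle(\xi,\eta), $$
so the symbol degenerates precisely for parallel, i.e. resonant, frequencies.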
A key observation of Klainerman and Machedon in \cite{KlMc} was that quadratic nonlinearities of MKG consist of null forms of the type $ \mathcal N_{ij} $ (see \eqref{phi:nf:identity}, \eqref{ax:nf:identity}). They used this to prove global well-posedness at energy regularity on $\mathbb R^{1+3}$.
Furthermore, in the proof of essentially optimal local well-posedness of MKG on $\mathbb R^{1+3}$ by Machedon and Sterbenz \cite{MachedonSterbenz}, secondary trilinear null structures involving $ \mathcal N_{0}$ were identified in MKG after one iteration, see \eqref{Q:dec}, \eqref{Q:dec2}.
Both of these structures played an important role in \cite{KST}, and they do so here as well. However, special care must be taken since the null form $ \mathcal N_{0}$ is adapted to the wave equation while we work with Klein-Gordon waves; see subsections \ref{Mform} and \ref{M0form}.
\subsubsection*{Presence of non-perturbative nonlinearity.}
As usual in works on low regularity well-posedness, we take a paradifferential approach in treating the nonlinear terms, exploiting the fact that the high-high to low interactions are weaker and that terms where the derivative falls on low frequencies are weaker as well.
From this point of view, the worst interaction in MKG occurs in
$$ \sum_{k} A^{\alpha}_{<k-C} \partial_{\alpha} \bar{P}_k \phi. $$
At critical regularity this term is \emph{nonperturbative}, in the sense that even after utilizing all available null structure, it cannot be treated with multilinear estimates for the usual wave and Klein-Gordon equations. Instead, following the model set in \cite{RT} and \cite{KST}, this term must be viewed as a part of the underlying linear operator.
The presence of a nonperturbative term is characteristic of geometric wave equations; examples include Wave Maps, MKG, Yang--Mills, Maxwell--Dirac.
\subsubsection*{Parametrix construction for paradifferential covariant wave equation}\
A key breakthrough of Rodnianski and Tao \cite{RT} was proving Strichartz estimates for the covariant wave equation by introducing a microlocal parametrix construction, motivated by the gauge covariance of $\Box_{A} =D^{\alpha} D_{\alpha} $ under $ \eqref{gauge:inv} $, i.e., $e^{-i \Psi} \Box_{A'} (e^{i \Psi} \phi)= \Box_A \phi$.
The idea was to approximately conjugate (or renormalize) the modified d'Alembertian $ \Box+ 2i A_{<k-c} \cdot \nabla_x P_k $ to $ \Box $ by means of a carefully constructed pseudodifferential gauge transformation
$$ \Box_A^p \approx e^{i \Psi_{\pm}}(t,x,D) \Box e^{-i \Psi_{\pm}} (D,y,s). $$
These Strichartz estimates were sufficient to prove global regularity of the Maxwell-Klein-Gordon equation at small critical Sobolev data in dimensions $ d \geq 6 $.
A summary of the construction in \cite{KST} can be found in Section~\ref{Constr:phase} below.
A renormalization procedure has been also applied to the Yang-Mills equation at critical regularity \cite{KS}, \cite{KT}.
The above idea was also implemented for covariant Dirac operators in \cite{MD} in the context of the Maxwell-Dirac equation.
\subsubsection*{Function spaces}
In \cite{KST}, Krieger, Sterbenz and Tataru further advanced the parametrix idea in $ d=4 $, showing that it interacts well with the function spaces previously introduced for critical wave maps in \cite{Tat}, \cite{Tao2}. In particular, the resulting solution obeys bounds similar to those for ordinary waves, yielding control of an $X^{s, b}$ norm, null-frame norms and square summed angular-localized Strichartz norms.
In this paper we will follow their strategy, showing that both the spaces and the renormalization bounds generalize to the Klein-Gordon ($ m^2 > 0$) context.
Critical $ X^{s,\pm \frac{1}{2}} $ spaces, ``null'' energy spaces $ L^{\infty}_{t_{\omega,\lambda}} L^2_{x_{\omega,\lambda}} $ and Strichartz spaces $ L^{2}_{t_{\omega,\lambda}} L^{\infty}_{x_{\omega,\lambda}} $ in adapted frames were already developed for Klein-Gordon operators by Bejenaru and Herr \cite{BH1}, and we will use some of their ideas. The difficulty here consists in proving the bounds for renormalized solutions
$$ \vn{e^{-i \psi}(t,x,D)\phi}_{\bar{S}^1_k} \lesssim \vn{\phi[0]}_{H^1 \times L^2}+ \vn{\Box_m \phi}_{\bar{N}_k}. $$
We shall rely on $ TT^{*} $ and stationary phase arguments for both $ L^{\infty}_{t_{\omega,\lambda}} L^2_{x_{\omega,\lambda}} $ and $ L^{2}_{t_{\omega,\lambda}} L^{\infty}_{x_{\omega,\lambda}} $ bounds, as well for $ P_C L^2 L^{\infty} $, see Corollaries \ref{Cornullframe}, \ref{corPW} and \ref{Cor:L2Linf}.
However, at low frequency or at high frequencies with very small angle interactions, the adapted frame spaces do not work and we are confronted with logarithmic divergences. To overcome this we rely on Strichartz estimates in Lorentz spaces $ L^2 L^{4,2} $ and an embedding property of $ \Box^{-1} $ into $ L^1 L^{\infty} $.
Here we have been inspired by the paper \cite{ShSt} of Shatah and Struwe. The difference is that instead of inverting $ \Delta $ by a type of Sobolev embedding $ \vm{D_x}^{-1}: L^{d,1}_x \to L^{\infty}_x $, we have to invert $ \Box $ by
$$ 2^{\frac{1}{2}l} \sum_{k'} P_l^{\omega} Q_{k'+2l} P_{k'} \frac{1}{\Box} : L^1 L^{2,1} \to L^1 L^{\infty}; $$
see Proposition \ref{Box:Embedding}.
\section{Preliminaries}
In what follows we normalize the Lebesgue measure so that factors of $ \sqrt{2 \pi} $ do not appear in the Fourier transform formulas. We summarize the conventions that we use in the following table.
\bgroup
\def\arraystretch{1.5}
\begin{tabular}{ |p{3cm}||p{3cm}|p{3cm}|p{3cm}| }
\hline
& Klein-Gordon & Wave & Laplace\\
\hline
Functions & $ \qquad \phi $ & $ \qquad A_x $ & $ \qquad A_0 $ \\ \hline
Operator & $ \quad \Box_m=\Box+I $ & $ \qquad \Box $ & $ \qquad \Delta $ \\ \hline
Frequency & $ \bar{P}_k , \ k \geq 0 $ & $ P_{k'} ,\ k' \in \mathbb{Z} $ & $ P_{k'} , \ k' \in \mathbb{Z} $ \\
localization & $ \{ \jb{\xi} \simeq 2^k \} $ & $ \{ \vm{\xi} \simeq 2^{k'} \} $ & $ \{ \vm{\xi} \simeq 2^{k'} \} $
\\ \hline
Modulations & \qquad $ \bar{Q}^{\pm}_j $ & \qquad $ Q ^{\pm}_j $& \\
& $ \{ \vm{\tau \mp \jb{\xi}} \simeq 2^j \} $ & $ \{ \vm{\tau \mp \vm{\xi}} \simeq 2^j \} $ &
\\ \hline
Spaces & $ \bar{S}_k, \bar{N}_k, \quad \bar{S}^{\sigma}, \bar{N}^{\sigma-1} $ & $ S_{k'}, N_{k'}, \quad S^{\sigma}, N^{\sigma-1} $ & \qquad $ Y^{\sigma} $ \\
\hline
\end{tabular}
\egroup
\
\subsection{Notation}
We denote
$$ \jb{\xi}_k=(2^{-2k}+\vm{\xi}^2)^{\frac{1}{2}}, \qquad \jb{\xi}=(1+\vm{\xi}^2)^{\frac{1}{2}}. $$
We define $ A \prec B $ by $ A \leq B-C $, $ A \lesssim B $ by $ A \leq C B $ and $ A=B+O(1) $ by $ \vm{A-B} \leq C $, for some absolute constant $ C $. We say $ A \ll B $ when $ A \leq \eta B $ for a small constant $ 0<\eta<1 $ and $ A \simeq B $ when we have both $ A \lesssim B $ and $ B \lesssim A $.
\subsection{Frequency projections}
Let $ \chi $ be a smooth non-negative bump function supported on $ [2^{-2},2^2] $ which satisfies the partition of unity property
$$ \sum_{k' \in \mathbb{Z}} \chi \big( \vm{\xi}/2^{k'} \big)=1 $$
for $ \xi \neq 0 $. For $ k' \in \mathbb{Z},\ k \geq 0 $ we define the Littlewood-Paley operators $ P_{k'}, \bar{P}_k $ by
$$ \widehat{P_{k'} f} (\xi)=\chi \big( \vm{\xi}/2^{k'} \big) \hat{f} (\xi), \quad \bar{P}_0=\sum_{k' \leq 0} P_{k'}, \quad \bar{P}_k=P_{k}, \ \text{for} \ k \geq 1. $$
The modulation operators $ Q_j, Q_j^{\pm}, \ \bar{Q}_j, \bar{Q}_j^{\pm} $ are defined by
$$ \mathcal{F} (\bar{Q}_j^{\pm}f) (\tau,\xi)= \chi \big( \frac{\vm{\pm \tau-\jb{\xi}} }{2^j} \big) \mathcal{F}f (\tau,\xi), \quad \mathcal{F} (Q_j^{\pm}f) (\tau,\xi)= \chi \big( \frac{\vm{\pm \tau-\vm{\xi}} }{2^j} \big) \mathcal{F}f (\tau,\xi), $$
and $ Q_j=Q_j^{+}+Q_j^{-}, \ \bar{Q}_j=\bar{Q}_j^{+}+\bar{Q}_j^{-} $ for $ j \in \mathbb{Z} $, where $ \mathcal{F} $ denotes the space-time Fourier transform.
Given $ \ell \leq 0 $ we consider a collection of directions $ \omega $ on the unit sphere which is maximally $ 2^\ell $-separated. To each $ \omega $ we associate a smooth cutoff function $ m_{\omega} $ supported on a cap $ \kappa \subset \mathbb S^{d-1} $ of radius $ \simeq 2^\ell $ around $ \omega $, with the property that $ \sum_{\omega} m_{\omega}=1 $. We define $ P_\ell^{\omega} $ (or $ P_{\kappa} $) to be the spatial Fourier multiplier with symbol $ m_{\omega}(\xi/\vm{\xi}) $. In a similar vein, we consider rectangular boxes $ \mathcal{C}_{k'}(\ell') $ of dimensions $ 2^{k'} \times (2^{k'+\ell'})^{d-1} $, where the $ 2^{k'} $ side lies in the radial direction, which cover $\mathbb R^{d}$ and have finite overlap with each other. We then define $P_{\mathcal{C}_{k'}(\ell')}$ to be the associated smooth spatial frequency localization to $\mathcal C_{k'}(\ell')$. For convenience, we choose the blocks so that $P_{k} P_{\ell}^{\omega} = P_{\mathcal C_{k}(\ell)}$.
We will often abbreviate $ A_{k'} = P_{k'} f$ or $ \phi_k=\bar{P}_k \phi $. We will sometimes use the operators $ \tilde{P}_k, \tilde{Q}_{j/<j}, \tilde{P}^{\omega}_{\ell} $ with symbols given by bump functions which equal $ 1 $ on the support of the multipliers $ P_k, Q_{j/<j} $ and $ P^{\omega}_{\ell} $ respectively and which are adapted to an enlargement of these supports.
We call a multiplier disposable when its convolution kernel is a function (or measure) with bounded mass. Minkowski's inequality ensures that disposable operators are bounded on translation-invariant normed spaces. Examples include $ P_k, P_{\ell}^{\omega}, P_{\mathcal C} $.
When $ j \geq k+2\ell-C $ the operator $ P_k P_{\ell}^{\omega} Q_{j/<j} $ is disposable \cite[Lemma 6]{Tao2}. Similar considerations apply to $ Q^{\pm}_j, \bar{Q}_j, \bar{P}_k $, etc.
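For the reader's convenience, the one-line argument behind disposability is the following: if $ Tu = u \ast \mu $ with $ \vm{\mu}(\mathbb{R}^d) \lesssim 1 $ and $ X $ is a translation-invariant normed space, then by Minkowski's integral inequality
$$ \vn{Tu}_{X} = \Big\Vert \int u(\cdot - y) \,\mathrm{d} \mu(y) \Big\Vert_{X} \leq \int \vn{u(\cdot - y)}_{X} \,\mathrm{d} \vm{\mu}(y) = \vm{\mu}(\mathbb{R}^d)\, \vn{u}_{X} \lesssim \vn{u}_{X}. $$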
\subsection{Sector projections} For $ \omega \in \mathbb{S}^{d-1} $ and $ 0<\theta \lesssim 1 $ we define the sector projections $ \Pi^{\omega}_{>\theta} $ by
\begin{equation} \label{sect:proj1} \widehat{\Pi^{\omega}_{>\theta} u} (\xi)=\big(1- \eta(\angle(\xi,\omega) \theta^{-1}) \big) \big(1- \eta(\angle(\xi,-\omega) \theta^{-1}) \big) \hat{u}(\xi) \end{equation}
where $ \eta $ is a bump function on the unit scale. We define
\begin{equation} \label{sect:proj2} \Pi^{\omega}_{<\theta}=1-\Pi^{\omega}_{>\theta}, \qquad \Pi^{\omega}_{\theta}=\Pi^{\omega}_{>\theta/2}-\Pi^{\omega}_{>\theta}.
\end{equation}
\subsection{Adapted frames} Following \cite{BH1}, for $ \lambda >0 $ and $ \omega \in \mathbb{S}^{d-1} $ we define the frame
\begin{equation} \omega^{\lambda}=\frac{1}{\sqrt{1+\lambda^2}} (\pm \lambda,\omega), \quad \bar{\omega}^{\lambda}=\frac{1}{\sqrt{1+\lambda^2}} (\pm 1,-\lambda \omega), \quad \omega_i^{\perp} \in \{0\} \times \omega^{\perp} \label{frame} \end{equation}
and the coordinates in this frame
\begin{equation} t_{\omega}=(t,x) \cdot \omega^{\lambda}, \quad x^1_{\omega}=(t,x) \cdot \bar{\omega}^{\lambda}, \quad x_{\omega,i}'=x \cdot \omega_i^{\perp} \label{frame2} \end{equation}
When $ \lambda=1 $ one obtains the null coordinates as in \cite{Tat}, \cite{Tao2}.
For these frames we define the spaces $ L^{\infty}_{t_{\omega}} L^2_{x^1_{\omega}, x_{\omega}' } , L^{2}_{t_{\omega}} L^{\infty}_{x^1_{\omega}, x_{\omega}' } $ in the usual way, which we denote by $ L^{\infty}_{t_{\omega,\lambda}} L^2_{x_{\omega,\lambda}} , L^{2}_{t_{\omega,\lambda}} L^{\infty}_{x_{\omega,\lambda}} $ to emphasize the dependence on $ \lambda $.
\subsection{Pseudodifferential operators}
To implement the renormalization we will use pseudodifferential operators. For symbols $ a(x,\xi) : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{C} $ one defines the left quantization $ a(x,D) $ by
\begin{equation} \label{left:quant}
a(x,D)u=\int_{\mathbb{R}^d} e^{i x \cdot \xi} a(x,\xi) \hat{u}(\xi) \,\mathrm{d} \xi
\end{equation}
while the right quantization $ a(D,y) $ is defined by
\begin{equation} \label{right:quant}
a(D,y)u=\iint_{\mathbb{R}^d \times \mathbb{R}^d} e^{i (x-y) \cdot \xi} a(y,\xi) u(y) \,\mathrm{d} y \,\mathrm{d} \xi.
\end{equation}
Observe that $ a(x,D)^*=\bar{a}(D,y) $. We will only work with symbols which are compactly supported in $ \xi $.
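For completeness, the adjoint identity $ a(x,D)^*=\bar{a}(D,y) $ is the elementary computation
\begin{align*}
\langle u, \bar{a}(D,y) v \rangle &= \iiint u(x') e^{-i (x'-y) \cdot \xi} a(y,\xi) \overline{v(y)} \,\mathrm{d} x' \,\mathrm{d} y \,\mathrm{d} \xi \\
&= \iint e^{i y \cdot \xi} a(y,\xi) \hat{u}(\xi) \overline{v(y)} \,\mathrm{d} \xi \,\mathrm{d} y = \langle a(x,D) u, v \rangle,
\end{align*}
using the normalization of the measure fixed above.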
\subsection{Bilinear forms}
We say that the translation-invariant bilinear form $ \mathcal M(\phi^1,\phi^2) $ has symbol $ m(\xi_1,\xi_2) $ if
$$ \mathcal M(\phi^1,\phi^2) (x) = \int_{\mathbb{R}^d \times \mathbb{R}^d } e^{i x \cdot (\xi_1+\xi_2)} m(\xi_1,\xi_2) \hat{\phi}^1(\xi_1) \hat{\phi}^2(\xi_2) \,\mathrm{d} \xi_1 \,\mathrm{d} \xi_2. $$
We make the analogous definition for functions defined on $ \mathbb{R}^{1+d} $ and symbols $ m(\Xi^1,\Xi^2) $ where $ \Xi^i=(\tau_i,\xi_i) $.
\subsection{Stationary/non-stationary phase}
We will bound oscillatory integrals using the stationary and non-stationary phase methods. For proofs of these two propositions as stated here see \cite{hormander2003introduction}.
\begin{proposition}
\label{nonstationary}
Suppose $ K \subset \mathbb{R}^n $ is a compact set, $ X $ is an open neighborhood of $ K $ and $ N \geq 0 $. If $ u \in C_0^N(K), f \in C^{N+1}(X) $ and $ f $ is real valued, then
\begin{equation} \vm{ \int e^{i \lambda f(x)} u(x) \,\mathrm{d} x } \leq C \frac{1}{\lambda^N} \sum_{\vm{\alpha} \leq N} \sup \big( \vm{D^{\alpha} u}\, \vm{f'}^{\vm{\alpha}-2N} \big) , \qquad \lambda >0 \end{equation}
where $ C $ is bounded when $ f $ stays in a bounded set in $ C^{N+1}(X) $.
\end{proposition}
\begin{proposition}[Stationary phase]
\label{stationary}
Suppose $ K \subset \mathbb{R}^n $ is a compact set, $ X $ is an open neighborhood of $ K $ and $ k \geq 1 $. If $ u \in C_0^{2k}(K), f \in C^{3k+1}(X) $ and $ f $ is real valued, $ f'(x_0)=0, \ \det f''(x_0)\neq 0, \ f' \neq 0 $ in $ K \setminus \{ x_0 \} $, then for $ \lambda >0 $ we have
\begin{equation} \label{expansion} \vm{ \int e^{i \lambda f(x)} u(x) \,\mathrm{d} x - e^{i \lambda f(x_0)} \left( \frac{\det(\lambda f''(x_0))}{2 \pi i} \rpr^{-\frac{1}{2}} \sum_{j<k} \frac{1}{\lambda^j} L_j u } \leq C \frac{1}{\lambda ^k} \sum_{\vm{\alpha} \leq 2k} \sup \vm{D^{\alpha} u} \end{equation}
where $ C $ is bounded when $ f $ stays in a bounded set in $ C^{3k+1}(X) $ and $ \vm{x-x_0}/ \vm{f'(x)} $ has a uniform bound. $ L_j $ are differential operators of order $ 2j $ acting on $ u $ at $ x_0 $.
\end{proposition}
Moreover, one controls derivatives in $ \lambda $ (see \cite[Lemma 2.35]{NaSch}):
\begin{equation} \label{st-phase:est}
\vm{ \partial_{\lambda}^j \int e^{i \lambda [f(x)-f(x_0)]} u(x) \,\mathrm{d} x} \leq C \frac{1}{\lambda^{\frac{n}{2}+j}}, \qquad j \geq 1.
\end{equation}
\subsection{$ L^p $ estimates} We will frequently use Bernstein's inequality, which states that
$$ \vn{u}_{L_x^q} \lesssim \vm{V}^{\frac{1}{p}-\frac{1}{q}} \vn{u}_{L_x^p} $$
when $ \hat{u} $ is supported in a box of volume $ V $ and $ 1 \leq p \leq q \leq \infty $. In particular,
$$ \vn{P_k u} _{L_x^q} \lesssim 2^{dk(\frac{1}{p}-\frac{1}{q})} \vn{P_k u}_{L_x^p}, \qquad \vn{P_{\mathcal{C}_{k'}(\ell')} u} _{L_x^q} \lesssim 2^{(dk'+(d-1) \ell')(\frac{1}{p}-\frac{1}{q})} \vn{P_{\mathcal{C}_{k'}(\ell')} u}_{L_x^p}
$$
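As a sanity check, the first inequality follows by writing $ u = \check{m} \ast u $, where $ m $ is a bump function equal to $ 1 $ on the box, and applying Young's inequality: with $ 1+\frac{1}{q}=\frac{1}{r}+\frac{1}{p} $,
$$ \vn{u}_{L^q_x} \leq \vn{\check{m}}_{L^r_x} \vn{u}_{L^p_x}, \qquad \vn{\check{m}}_{L^r_x} \lesssim \vn{\check{m}}_{L^1_x}^{\frac{1}{r}} \vn{\check{m}}_{L^{\infty}_x}^{1-\frac{1}{r}} \lesssim \vm{V}^{1-\frac{1}{r}}=\vm{V}^{\frac{1}{p}-\frac{1}{q}}. $$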
For $ L^2 $ estimates we will rely on
\begin{lemma}[Schur's test] Let $ K: \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{C} $ and let $ T $ be the operator defined by
$$ Tf(x)=\int_{\mathbb{R}^n} K(x,y) f(y) \,\mathrm{d} y. $$
Suppose that
$$ \sup_{x} \int \vm{K(x,y)} \,\mathrm{d} y \leq M, \qquad \sup_{y} \int \vm{K(x,y)} \,\mathrm{d} x \leq M. $$
Then
$$ \vn{T}_{L^2(\mathbb{R}^n) \to L^2(\mathbb{R}^n)} \leq M. $$
\end{lemma}
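The proof is the standard Cauchy--Schwarz argument, which we recall for completeness: for each $ x $,
$$ \vm{Tf(x)}^2 \leq \Big( \int \vm{K(x,y)} \,\mathrm{d} y \Big) \Big( \int \vm{K(x,y)} \vm{f(y)}^2 \,\mathrm{d} y \Big) \leq M \int \vm{K(x,y)} \vm{f(y)}^2 \,\mathrm{d} y, $$
and integrating in $ x $ using the second kernel bound yields $ \vn{Tf}_{L^2}^2 \leq M^2 \vn{f}_{L^2}^2 $.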
\subsection{Frequency envelopes} Given $ 0<\delta_1 < 1 $, an admissible frequency envelope $ (c_k)_{k \geq 0} $ is defined to be a sequence such that $ c_{p}/c_k \leq C 2^{\delta_1 \vm{p-k}} $ for any $ k,p \geq 0 $. Given spaces $ \bar{X},\ X $ and sequences $ (c_k)_{k \geq 0} $, $ (\tilde{c}_k)_{k \in \mathbb{Z}} $ we define the $ \bar{X}_c, \ X_{\tilde{c}} $ norms by
\begin{equation} \label{fe:X} \vn{f}_{\bar{X}_c}=\sup_{k \geq 0} \frac{\vn{\bar{P}_k f}_{\bar{X}}}{c_k}, \qquad \vn{A}_{X_{\tilde{c}}}=\sup_{k \in \mathbb{Z}} \frac{\vn{P_k A}_{X}}{\tilde{c}_k}. \end{equation}
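A standard choice, recorded here as an example, is the minimal envelope of an initial data: given $ f $, set
$$ c_k = \sum_{p \geq 0} 2^{-\delta_1 \vm{k-p}} \vn{\bar{P}_p f}_{H^{\sigma}}, $$
which is admissible by the triangle inequality, satisfies $ \vn{f}_{H^{\sigma}_c} \leq 1 $, and obeys $ \vn{(c_k)}_{\ell^2} \lesssim \vn{f}_{H^{\sigma}} $ by Young's inequality for the convolution with $ 2^{-\delta_1 \vm{\cdot}} \in \ell^1 $.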
\
\section{Function spaces}
All the spaces that we will use are translation invariant.
\subsection{The Strichartz and $ X^{b} $-type spaces.}
We first define the admissible Strichartz norms for the $ d+1 $ dimensional wave equation. For any $ d \geq 4 $ and any $ k $ we set
$$ S_k^{Str,W} = \bigcap_{\frac{2}{q}+\frac{d-1}{r} \leq \frac{d-1}{2} } 2^{(\frac{d}{2}-\frac{1}{q}-\frac{d}{r})k} L^q L^r $$
with norm
\begin{equation} \label{Str:wave}
\vn{f}_{S_k^{Str,W}}=\sup_{\frac{2}{q}+\frac{d-1}{r} \leq \frac{d-1}{2} } 2^{-(\frac{d}{2}-\frac{1}{q}-\frac{d}{r})k} \vn{f}_{L^q L^r}
\end{equation}
Next we define the $ X^{\frac{1}{2}}_{\infty}, X^{-\frac{1}{2}}_{1} $ and $ \bar{X}^{\frac{1}{2}}_{\infty}, \bar{X}^{-\frac{1}{2}}_{1}$ spaces, which are logarithmic refinements of the usual $X^{s, b}$ space. Their dyadic norms are
\begin{align*}
& \nrm{F}_{X^{-\frac{1}{2}}_{1}} = \sum_{j \in \mathbb Z} 2^{-\frac{1}{2} j} \nrm{Q_{j} F}_{L^{2}_{t,x}}, \quad
& \nrm{A}_{X^{\frac{1}{2}}_{\infty}} = \sup_{j \in \mathbb Z} 2^{\frac{1}{2} j} \nrm{Q_{j} A}_{L^{2}_{t,x}} \\
& \nrm{F}_{\bar{X}^{\pm \frac{1}{2}}_{1}} = \sum_{j \in \mathbb Z} 2^{\pm \frac{1}{2} j} \nrm{\bar{Q}_{j} F}_{L^{2}_{t,x}}, \quad
& \nrm{\phi}_{\bar{X}^{\frac{1}{2}}_{\infty}} = \sup_{j \in \mathbb Z} 2^{\frac{1}{2} j} \nrm{\bar{Q}_{j} \phi}_{L^{2}_{t,x}}
\end{align*}
\subsection{The spaces for the nonlinearity}
For the nonlinearity, we define for $ k \geq 0 $ and $ k' \in \mathbb{Z} $
\begin{equation} \bar{N}_k=L^1L^2 + \bar{X}_1^{-\frac{1}{2}}, \qquad N_{k'}=L^1L^2 + X_1^{-\frac{1}{2}} \end{equation}
with norms
$$ \vn{F}_{\bar{N}_k}=\inf_{F=F_1+F_2} \vn{F_1}_{L^1L^2 }+\vn{F_2}_{\bar{X}_1^{-\frac{1}{2}}}, \qquad \vn{F}_{N_{k'}}=\inf_{F=F_1+F_2} \vn{F_1}_{L^1L^2 }+\vn{F_2}_{X_1^{-\frac{1}{2}}} $$
By duality we can identify $ \bar{N}_k^* $ with $ L^{\infty} L^2 \cap \bar{X}^{\frac{1}{2}}_{\infty} $.
For the nonlinearity of the scalar equation, respectively of the $ A_i $ equation, we define for $ s\in \mathbb{R} $
$$ \vn{F}_{\bar{N}^s}^2 =\sum_{k \geq 0} 2^{2sk} \vn{\bar{P}_k F }_{\bar{N}_k}^2 $$
$$ \vn{F}_{\ell^1 N^s}=\sum_{k' \in \mathbb{Z}} 2^{sk'} \vn{P_{k'} F}_{N_{k'}} ,\qquad \vn{F}_{N^s}^2=\sum_{k' \in \mathbb{Z} } 2^{2sk'} \vn{P_{k'} F}_{N_{k'}}^2. $$
\subsection{The iteration space for $ \phi $}
The solution of the scalar equation will be placed in the space $ \bar{S}^{\sigma} $ for $ \sigma=\frac{d-2}{2} $ where, for any $ s $ we define
$$ \vn{\phi}_{\bar{S}^s}^2=\vn{\bar{P}_0 (\phi,\partial_t\phi)}_{\bar{S}_0}^2 + \sum_{k \geq 1} 2^{2(s-1)k} \vn{\nabla_{x,t} \bar{P}_k \phi}_{\bar{S}_k}^2 + \vn{\Box_m \phi}_{L^2 H^{s-\frac{3}{2}}}^2 $$
where $ \bar{S}_k $ are defined below.
When $ d=4 $, in addition to \eqref{Str:wave}, we will also use the Klein-Gordon Strichartz norms below. In general, using these K-G Strichartz norms at high frequencies does not lead to optimal estimates. Therefore, we will only rely on them for low frequencies or when there is enough additional dyadic gain coming from null structures. We set
\begin{equation} \label{Str:KG}
\begin{aligned}
& \text{For } d=4: \qquad \bar{S}_k^{Str}=S_k^{Str,W} \cap 2^{\frac{3}{8}k} L^4 L^{\frac{8}{3}} \cap 2^{\frac{3}{4}k} L^2 L^4 \cap 2^{\frac{3}{4}k} L^2 L^{4,2} \\
& \text{For } d \geq 5: \qquad \bar{S}_k^{Str}=S_k^{Str,W}
\end{aligned} \end{equation}
Notice that we incorporate the Lorentz norms $ L^{4,2} $. See section \ref{sec:Emb} for more information.
For low frequencies $ \{ \vm{\xi} \lesssim 1 \} $ we define
\begin{equation} \label{Sbar0} \vn{\phi}_{\bar{S}_0}=\vn{\phi}_{\bar{S}^{Str}_0}+ \vn{\phi}_{\bar{X}_{\infty}^{\frac{1}{2}}}+ \sup_{\pm,k'<0} \vn{ \bar{Q}_{<k'}^{\pm} \phi}_{S_{box(k')}} \qquad (d \geq 4) \end{equation}
where
$$ \vn{ \phi}_{S_{box(k')}}^2=2^{-2\sigma k'} \sum_{\mathcal C=\mathcal C_{k'}} \vn{P_{\mathcal C} \phi}_{L^2 L^{\infty}}^2 $$
where $ (\mathcal C_{k'})_{k'} $ is a finitely overlapping collection of cubes of sides $ \simeq 2^{k'} $.
For higher frequencies we define the norms as follows. Let $ d \geq 4 $, $ k \geq 1 $ and
\begin{equation} \label{highfreqSp} \vn{\phi}_{\bar{S}_k}^2=\vn{\phi}_{\bar{S}^{Str}_k}^2+ \vn{\phi}_{\bar{X}_{\infty}^{\frac{1}{2}}}^2+ \sup_{\pm} \sup_{l<0} \sum_{\omega}\vn{P_l^{\omega} \bar{Q}^{\pm}_{<k+2l} \phi}_{\bar{S}_k^{\omega \pm}(l)}^2 \end{equation}
where, for $ d\geq 5 $ we define
$$
\vn{ \phi}_{\bar{S}_k^{\omega \pm}(l)}^2 =\vn{\phi}_{S_k^{Str}}^2 + \sup_{\substack{k' \leq k;-k \leq l' \leq 0 \\ k+2l \leq k'+l' \leq k+l }} \sum_{\mathcal C=\mathcal C_{k'}(l')} 2^{-(d-2)k'-(d-3)l'-k} \vn{P_{\mathcal C} \phi}_{L^2L^{\infty}}^2
$$
while for $ d=4 $ we set
\begin{equation*}
\begin{aligned}
\vn{ \phi}_{\bar{S}_k^{\omega \pm}(l)}^2 =\vn{\phi}_{S_k^{Str}}^2 &+ \sup_{\substack{k' \leq k;-k \leq l' \leq 0 \\ k+2l \leq k'+l' \leq k+l }} \sum_{\mathcal C=\mathcal C_{k'}(l')} \big( 2^{-2k'-k-l'} \vn{P_{\mathcal C} \phi}_{L^2L^{\infty}}^2 +\\
&\ + 2^{-3(k'+l')} \vn{P_{\mathcal C} \phi}_{PW_{\mathcal C}^{\pm}}^2 + \vn{ P_{\mathcal C} \phi}_{NE_{C}^{\pm}}^2 \big).
\end{aligned} \end{equation*}
where, for any $ \mathcal C=\mathcal C_{k'}(l') $
\begin{equation} \vn{\phi}_{NE_{\mathcal C}^{\pm}}= \sup_{\substack{\bar{\omega},\lambda=\lambda(p) \\ \angle(\bar{\omega},\pm \mathcal C) \gg 2^{-p},2^{-k}, 2^{l'+k'-k}}} \angle(\bar{\omega},\pm \mathcal C)\ \vn{\phi}_{L^{\infty}_{t_{\bar{\omega},\lambda}} L^2_{x_{\bar{\omega},\lambda}}}, \quad \lambda(p) \vcentcolon= \frac{1}{\sqrt{1+2^{-2p}}}
\label{NE:norm} \end{equation}
\begin{equation} \vn{\phi}_{PW_{\mathcal C}^{\pm}}= \inf_{\phi=\sum_i \phi^i} \sum_i \vn{\phi^i}_{L^{2}_{t_{\omega_i,\lambda}} L^{\infty}_{x_{\omega_i,\lambda}}}, \quad \pm \omega_i \in \mathcal C, \ \lambda=\frac{\vm{\xi_0}}{\jb{\xi_0}},\ \xi_0=\text{center}(\mathcal C)
\label{PW:norm} \end{equation}
The norms $ L^{\infty}_{t_{\bar{\omega},\lambda}} L^2_{x_{\bar{\omega},\lambda}} $ and $ L^{2}_{t_{\omega_i,\lambda}} L^{\infty}_{x_{\omega_i,\lambda}} $ are taken in the frames \eqref{frame}, \eqref{frame2}.
In other words, $ PW_{\mathcal C}^{\pm} $ is an atomic space whose atoms are functions $ \phi $ with $ \vn{\phi}_{L^{2}_{t_{\omega,\lambda}} L^{\infty}_{x_{\omega,\lambda}}} \leq 1 $ for some $ \omega \in \pm \mathcal C $, where $ \lambda $ depends on the location of $ \mathcal C=\mathcal C_{k'}(l') $.
\
The purpose of controlling the $ NE_{C}^{\pm} $ and $ PW_C^{\pm} $ norms lies in using the following type of bilinear $ L^2_{t,x} $ estimate, which was introduced in \cite{Tat} for the wave equation (see also \cite{Tao2}). A Klein-Gordon analogue was first developed in \cite{BH1}, which served as inspiration for our implementation.
\begin{proposition} \label{L2:nullFrames}
Let $ k, k_2 \geq 1 $, $ k'+C \leq k,k_2 $; $ l \in [-\min(k,k_2),C] $, and let $ \pm_1,\pm_2 $ be two signs. Let $ \mathcal C, \mathcal C' $ be boxes of size $ 2^{k'} \times (2^{k'+l})^3 $ located in $ \{ \vm{\xi} \simeq 2^{k} \} \subset \mathbb{R}^4 $ , resp. $ \{ \vm{\xi} \simeq 2^{k_2} \} \subset \mathbb{R}^4 $ such that
\begin{equation} \angle(\pm_1 \mathcal C, \pm_2 \mathcal C') \simeq 2^{l'} \gg \max(2^{-\min(k,k_2)}, 2^{l+k'-\min(k,k_2)}) \label{angSep} \end{equation}
Then we have
\begin{equation} \label{L2:nullFrames:est}
\vn{ \phi_k \cdot \varphi_{k_2} }_{L^2_{t,x}(\mathbb{R}^{4+1}) } \lesssim 2^{-l'} \vn{ \phi_k}_{NE_\mathcal C^{\pm_1}} \vn{ \varphi_{k_2}}_{PW_{\mathcal C'}^{\pm_2}}
\end{equation}
\end{proposition}
\begin{proof}
The condition \eqref{angSep} insures that $ \pm_1 \mathcal C $ and $ \pm_2 \mathcal C' $ are angularly separated and the angle between them is well-defined. Since $ PW $ is an atomic space, we may assume the second factor is an atom with $ \vn{\varphi_{k_2}}_{L^{2}_{t_{\omega,\lambda}} L^{\infty}_{x_{\omega,\lambda}}} \leq 1 $ for some $ \omega \in \pm_2 \mathcal C' $ and $ \lambda $ given by \eqref{PW:norm}.
We choose $ 2^p=\vm{\xi_0} \simeq 2^{k_2} $, so that $ \lambda=\lambda(p) $ from \eqref{NE:norm}; together with \eqref{angSep} this gives
$$ \vn{ \phi_k}_{L^{\infty}_{t_{\omega,\lambda}} L^2_{x_{\omega,\lambda}}} \lesssim 2^{-l'} \vn{\phi_k}_{NE_\mathcal C^{\pm_1}}. $$
Now \eqref{L2:nullFrames:est} follows from H\"older's inequality $ L^{\infty}_{t_{\omega,\lambda}} L^2_{x_{\omega,\lambda}} \times L^{2}_{t_{\omega,\lambda}} L^{\infty}_{x_{\omega,\lambda}} \to L^2_{t,x} $.
\end{proof}
\begin{remark} When $ \Box_m \phi_k=\Box_m \varphi_{k_2}=0 $ and $ \phi_k, \varphi_{k_2} $ have Fourier support in $ \mathcal C $, respectively $ \mathcal C'$ then one has
\begin{equation} \label{free:nf}
\vn{ \phi_k \cdot \varphi_{k_2} }_{L^2_{t,x}(\mathbb{R}^{4+1}) } \lesssim 2^{-l'} 2^{\frac{3}{2}(k'+l')} \vn{ \phi_k[0]}_{L^2 \times H^{-1}} \vn{ \varphi_{k_2}[0]}_{L^2 \times H^{-1}} \end{equation}
by convolution estimates (see e.g. \cite{Foschi2000}, \cite{TaoMultilinear}). Thus \eqref{L2:nullFrames:est} is meant as a more general substitute for \eqref{free:nf}.
\end{remark}
\subsection{The iteration space for $ A $} For any $ d \geq 4$ and $ k' \in \mathbb{Z} $ we define
$$ \vn{A}_{S_{k'}}^2=\vn{A}_{S^{Str,W}_{k'}}^2+ \vn{A}_{X_{\infty}^{\frac{1}{2}}}^2+ \sup_{\pm} \sup_{l<0} \sum_{\omega}\vn{P_l^{\omega} Q^{\pm}_{<k'+2l} A}_{S_{k'}^{\omega}(l)}^2 $$
where
$$
\vn{A}_{S_{k'}^{\omega}(l)}^2= 2^{-(d-1)k'-(d-3)l} \vn{A}_{L^2 L^{\infty}}^2+ \sup_{\substack{k'' \in [0,k'], l' \leq 0 \\ k''+l' \leq k'+l}} \sum_{C_{k''}(l')} 2^{-(d-2)k''-(d-3)l'-k'} \vn{P_{C_{k''}(l')} A}_{L^2L^{\infty}}^2.
$$
Now we define
$$ \vn{A}_{\ell^1 S^{\sigma}}=\sum_{k' \in \mathbb{Z}} 2^{(\sigma-1)k'} \big( \vn{\nabla_{t,x} P_{k'} A}_{S_{k'}}+ 2^{-\frac{1}{2}k'} \vn{\Box P_{k'} A}_{ L^2_{t,x} } \big), $$
$$ \vn{A}_{S^{\sigma}}^2=\sum_{k' \in \mathbb{Z}} 2^{2(\sigma-1)k'} \vn{\nabla_{t,x} P_{k'} A}_{S_{k'}}^2+\vn{\Box A}_{ L^2 \dot{H}^{\sigma-\frac{3}{2}}}^2. $$
For the elliptic variable we set
\begin{equation}
\vn{A_0}_{Y^{\sigma}} =\sum_{k' \in \mathbb{Z}} \vn{ \nabla_{x,t} P_{k'} A_0}_{L^{\infty} \dot{H}^{\sigma-1} \cap L^2 \dot{H}^{\sigma-\frac{1}{2}} }
\end{equation}
\subsection{The $ L^1 L^{\infty} $-type norms}
We set
\begin{equation} \label{Z:norm:def}
\vn{A}_Z =\sum_{k' \in \mathbb{Z}} \vn{P_{k'} A}_{Z_{k'}}, \qquad \vn{A}_{Z_{k'}}^2=\sup_{\pm} \sup_{\ell<C} \sum_{\omega} 2^{\ell} \vn{ P_{\ell}^{ \omega} Q_{k'+2\ell}^{\pm} A}_{L^1 L^{\infty}}^2
\end{equation}
and define $ Z^{ell}=\Box^{\frac{1}{2}} \Delta^{-\frac{1}{2}} Z $, or equivalently,
\begin{equation} \label{Z-ell:norm:def}
\vn{A_0}_{Z^{ell}} =\sum_{k' \in \mathbb{Z}} \vn{P_{k'} A_0}_{Z^{ell}_{k'}}, \quad \vn{A_0}_{Z_{k'}^{ell}}^2 \simeq\sup_{\pm, \ell<C} \sum_{\omega} 2^{-\ell} \vn{ P_{\ell}^{ \omega} Q_{k'+2\ell}^{\pm} A_0}_{L^1 L^{\infty}}^2
\end{equation}
We will use the embedding
\begin{equation} \label{ZZ:emb}
\big( \Box^{-1} \times \Delta^{-1} \big) P_{k'} : L^1 L^2 \times L^1 L^2 \to 2^{(\sigma-1)k'} Z_{k'} \times Z^{ell}_{k'}
\end{equation}
To see this, we note that both $ (2^{2k'}/\Delta) \tilde{P}_{k'} $ and $ (2^{2k'+2\ell}/\Box) \tilde{P}_{\ell}^{ \omega} \tilde{Q}_{k'+2\ell}^{\pm} \tilde{P}_{k'} $ are bounded. The latter has a symbol obeying the same bump function estimates as the symbol of $ P_{\ell}^{ \omega} Q_{k'+2\ell}^{\pm} P_{k'} $ on the rectangular region of size $ (2^{k'+\ell})^{d-1}\times 2^{k'+2\ell} \times 2^{k'} $ where it is supported. Then one uses Bernstein's inequality and an orthogonality argument.
Similarly one obtains,
\begin{equation} \label{Zell:emb}
\vn{\Delta^{-1} P_{k'}A_0}_{Z^{ell}_{k'}} \lesssim \sup_{\pm, \ell<C} 2^{\ell} 2^{(\sigma-1)k'} \vn{Q_{k'+2\ell}^{\pm} P_{k'}A_0}_{L^1 L^2}
\end{equation}
\subsection{Extra derivatives}
For $ X=S, N, Y, \dot{H} $ and $ \bar{X}= \bar{S}, \bar{N}, H $, for any $ s, \rho \in \mathbb{R} $ we have
$$ \vn{A}_{X^{s+\rho}} \simeq \vn{ \nabla_x^{\rho} A}_{X^s}, \qquad \vn{f}_{\bar{X}^{s+\rho}} \simeq\vn{ \jb{\nabla_x}^{\rho} f}_{\bar{X}^s}
$$
Similar definitions are made for their dyadic pieces, for instance
$$ \vn{\phi_k}_{\bar{S}^s_k} \simeq 2^{(s-1)k} \vn{(\jb{D_x},\partial_t) \phi_k}_{\bar{S}_k} .$$
\section{Embeddings} \label{sec:Emb}
\subsection{Lorentz spaces and $ \Box^{-1} $ embeddings} \
\
For functions $ f $ in the Lorentz space $ L^{p,q} $, by decomposing
$$ f=\sum f_m, \quad \text{where} \quad f_m(x) \vcentcolon= f(x) 1_{\{ \vm{f(x)}\in [2^m,2^{m+1}] \}} $$
we have the following equivalent norm (see \cite{Grafakos})
\begin{equation} \label{atomic:Lorentz}
\vn{f}_{L^{p,q}} \simeq \vn{ \vn{f_m}_{L^p(\mathbb{R}^d)}}_{\ell^q_m(\mathbb{Z})}.
\end{equation}
The Lorentz spaces also enjoy a H\"older-type inequality which is due to O'Neil \cite{Oneil}. We will need the following case
\begin{equation} \label{Lorentz:Holder}
\vn{\phi \psi}_{L^{2,1}} \lesssim \vn{\phi}_{L^{4,2}} \vn{\psi}_{L^{4,2}}
\end{equation}
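This is the instance $ (p_1,q_1)=(p_2,q_2)=(4,2) $ of O'Neil's general inequality
$$ \vn{\phi \psi}_{L^{p,q}} \lesssim \vn{\phi}_{L^{p_1,q_1}} \vn{\psi}_{L^{p_2,q_2}}, \qquad \frac{1}{p}=\frac{1}{p_1}+\frac{1}{p_2}, \quad \frac{1}{q} \leq \frac{1}{q_1}+\frac{1}{q_2}. $$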
For $ M \in \mathbb{Z} $ and $ l \leq 0 $ let
\begin{equation} \label{T:op}
T_l^{\omega}=\sum_{k' \leq M} P_l^{\omega} Q_{k'+2l}^{\pm} P_{k'} \frac{1}{\Box} \end{equation}
\begin{remark} We will use the $ T_l^{\omega} $ operators on $ \mathbb{R}^{4+1} $ to estimate parts of the potential $ A $ in $ L^1 L^{\infty} $, using the embedding \eqref{L1Linf:emb} together with Lorentz space Strichartz estimates $ L^2 L^{4,2} $ for $ \phi $ and \eqref{Lorentz:Holder}. We have been motivated by \cite{ShSt}, where $ A \approx \frac{\partial}{\Delta} (d u)^2 $, and where essentially a Sobolev-type embedding $ \frac{1}{\vm{D_x}}: L^{d,1}_x \to L^{\infty}_x(\mathbb{R}^d) $ is used.
When $ l=0 $ the symbol of the operator $ T_l^{\omega} $ makes it resemble $ \Delta^{-1}_x $.
The main point here will be that it is crucial to keep the $ k' $ summation inside the norm in order to overcome logarithmic divergences in \eqref{SmallAngleSmallMod}.
\end{remark}
\begin{proposition} \label{Box:Embedding}
On $ \mathbb{R}^{4+1} $ the following embeddings hold uniformly in $ l\leq 0 $ and $ M $:
\begin{align}
2^{\frac{1}{2}l} T_l^{\omega} &: L^2 L^{\frac{4}{3}} \to L^2 L^4, \label{L2L4:emb} \\
2^{\frac{1}{2}l} T_l^{\omega} &: L^1 L^{2,1} \to L^1 L^{\infty}. \label{L1Linf:emb}
\end{align}
\end{proposition}
\begin{proof}
\pfstep{Step~1}{\it Proof of \eqref{L2L4:emb}.} Apply an angular projection such that $ \tilde{P}_l^{\omega} P_l^{\omega}=P_l^{\omega} $. Now \eqref{L2L4:emb} follows by composing the following embeddings
\begin{align}
& 2^{-\frac{3}{4}l} P_l^{\omega} \vm{D_x}^{-1} :L^2 L^{\frac{4}{3}} \to L^2_{t,x} \label{sob:emb1}, \quad 2^{2l} \frac{\vm{D_x}^2}{\Box} \sum_{k' \leq M} Q_{k'+2l}^{\pm} P_{k'} :L^2_{t,x} \to L^2_{t,x} \\
& 2^{-\frac{3}{4}l} \tilde{P}_l^{\omega} \vm{D_x}^{-1} :L^2_{t,x} \to L^2 L^4. \label{sob:emb2}
\end{align}
When $ l=0 $, the first and third mappings follow from Sobolev embedding. For smaller $ l $ we make a change of variable that maps an angular cap of angle $ \simeq 2^l $ into one of angle $ \simeq 2^0 $, which reduces the bound to the case $ l=0 $.
The second mapping holds because the operator has a bounded multiplier.
\pfstep{Step~2}{\it Proof of \eqref{L1Linf:emb}.} Let $ k(t,x) $ be the kernel of $ 2^{\frac{1}{2}l} T_l^{\omega} $. It suffices to show
\begin{equation} \label{red:delta}
2^{\frac{1}{2}l} T_l^{\omega}[ \delta_0(t) \otimes \cdot \ ] : L^{2,1}_x \to L^1 L^{\infty}, \ \text{i.e.} \
\vn{ \int f(y) k(t, x-y) \,\mathrm{d} y}_{L^1_t L^{\infty}_x} \lesssim \vn{f}_{L^{2,1}_x}
\end{equation}
Indeed, assuming \eqref{red:delta}, denoting $ \phi_s(\cdot)=\phi(s,\cdot) $, we have
$$ \vn{2^{\frac{1}{2}l} T_l^{\omega} \phi}_{ L^1 L^{\infty}} \leq \int \vn{ \int \phi(s,y) k(t-s,x-y) \,\mathrm{d} y }_{L^1_t L^{\infty}_x} \,\mathrm{d} s \lesssim \int \vn{\phi_s}_{L^{2,1}} \,\mathrm{d} s $$
using the time translation-invariance in \eqref{red:delta}.
To prove \eqref{red:delta}, since $ q=1 $, by \eqref{atomic:Lorentz} we may assume that $f=f_m $, i.e. $ \vm{f(x)} \simeq 2^{m} $ for $ x \in E $ and $ f(x)=0 $ for $ x \notin E $. We normalize $ \vn{f}_{L^{2,1}} \simeq \vn{f}_{L^2_x}=1 $, which implies $ \vm{E} \simeq 2^{-2m} $. We have
\begin{equation} \label{set:kernel} \vn{ \int f(x-y) k(t,y) \,\mathrm{d} y}_{L^{\infty}_x} \lesssim 2^m \sup_{\vm{F} \simeq 2^{-2m}} \int_{F} \vm{k(t,y)} \,\mathrm{d} y
\end{equation}
For $ x_{\omega}=x \cdot \omega, \ x'_{\omega,i}=x \cdot \omega^{\perp}_i $, we will show
\begin{equation} \label{Emb:kernel} \vm{k(t,x)} \lesssim 2^{\frac{1}{2}l} \frac{2^{3l}}{(2^{2l} \vm{t}+\vm{x_{\omega}}+2^l \vm{x'_{\omega}})^3}.
\end{equation}
Assuming this, we integrate it on $ F $ and since the fraction is decreasing in $ \vm{x_{\omega}}, \vm{x'_{\omega}} $,
\begin{align*}
\text{RHS} \ \eqref{set:kernel} & \lesssim 2^m 2^{\frac{1}{2}l} \int_{[-R,R] \times (2^{-l}[-R,R])^3} \frac{2^{3l}}{(2^{2l} \vm{t}+\vm{x_{\omega}}+2^l \vm{x'_{\omega}})^3} \,\mathrm{d} x_{\omega} \,\mathrm{d} x'_{\omega} \\
& \lesssim 2^m 2^{\frac{1}{2}l} \int_{[-R,R]^4} \frac{1}{(2^{2l} \vm{t}+\vm{(x_{\omega},x'_{\omega})})^3} \,\mathrm{d} x_{\omega} \,\mathrm{d} x'_{\omega} \lesssim 2^m 2^{\frac{1}{2}l} \frac{R^4}{(2^{2l} \vm{t})^3+R^3}
\end{align*}
for $ R^4 \simeq 2^{3l} 2^{-2m} $. Integrating this bound in $ t $ we obtain \eqref{red:delta}.
\pfstep{Step~3}{\it Proof of \eqref{Emb:kernel}.} Let $ k_0(t,x) $ be the kernel of $ P_{0} Q_{2l}^{\pm} P_l^{\omega} \frac{1}{\Box} $. Then
\begin{equation} k(t,x)=2^{\frac{1}{2}l} \sum_{k' \leq M} 2^{3k'} k_0 \big(2^{k'}(t,x)\big). \label{emb:kernel:sum}
\end{equation}
Let $ (t_{\omega}, x^1_{\omega},x'_{\omega}) $ be the coordinates in the frame \eqref{frame}, \eqref{frame2} for $ \lambda=1 $. Then $ 2^{-3l} k_0(t_{\omega},2^{-2l} x^1_{\omega}, 2^{-l} x'_{\omega}) $ is a Schwartz function, being the Fourier transform of a bump function. Thus,
$$ \vm{k_0(t,x) } \lesssim \frac{2^{3l}}{\jb{ \vm{t_{\omega}}+2^{2l} \vm{x^1_{\omega}}+2^l \vm{x'_{\omega}}}^N} \lesssim \frac{2^{3l}}{\jb{2^{2l} \vm{t}+ \vm{x_{\omega}}+2^l \vm{x'_{\omega}}}^N}.$$
Using this and \eqref{emb:kernel:sum}, denoting $ S=2^{2l} \vm{t}+ \vm{x_{\omega}}+2^l \vm{x'_{\omega}} $, we have
$$ \vm{k(t,x)} \lesssim 2^{\frac{1}{2}l} 2^{3l} \big( \sum_{2^{k'} \leq S^{-1}} 2^{3k'} + \sum_{S^{-1} < 2^{k'} } 2^{-(N-3) k'} S^{-N} \big) \lesssim 2^{\frac{1}{2}l} 2^{3l} S^{-3} $$
obtaining \eqref{Emb:kernel}. \end{proof}
\subsection{Further properties}
\begin{lemma}[Sobolev-type embedding] \label{Sobolev_lemma}
Let $ p \geq q $. For any sign $ \pm $ we have
$$ \vn{\bar{Q}^{\pm}_j u}_{L^p L^2} \lesssim 2^{(\frac{1}{q}-\frac{1}{p})j} \vn{\bar{Q}^{\pm}_j u}_{L^q L^2} \lesssim 2^{(\frac{1}{q}-\frac{1}{p})j} \vn{u}_{L^q L^2}. $$
The same statement holds for $ Q_j^{\pm} $.
\end{lemma}
\begin{proof} We conjugate by the operator $ U $ defined by
$$ \mathcal{F}(U u) (\tau, \xi)=\mathcal{F} u (\tau \pm \jb{\xi},\xi), $$
which acts at each $ t $ as the unitary multiplier $ e^{\mp i t \jb{D}} $. Thus we have
$$ \bar{Q}^{\pm}_j u=U^{-1} \chi(\frac{D_t}{2^j}) U u. $$
This clearly implies the second inequality. For the first one we write
$$
\vn{\bar{Q}^{\pm}_j u}_{L^p L^2} \lesssim \vn{ \chi(\frac{D_t}{2^j}) U u}_{L^p L^2} \lesssim 2^{(\frac{1}{q}-\frac{1}{p})j} \vn{\chi(\frac{D_t}{2^j}) U u}_{L^q L^2} \lesssim 2^{(\frac{1}{q}-\frac{1}{p})j} \vn{\bar{Q}^{\pm}_j u}_{L^q L^2}.
$$
The same argument works for $ Q_j^{\pm} $, conjugating by $ e^{\mp i t \vm{D}} $ instead.
\end{proof}
Next we prove the embedding $ \bar{X}_1^{\frac{1}{2}} \subset \bar{S}_k $.
\begin{proposition} \label{Xembedding} For $ k \geq 0 $ and $ \phi $ with Fourier support in $ \{ \jb{\xi} \simeq 2^k \} $ we have
$$ \vn{\phi}_{\bar{S}_k} \lesssim \vn{\phi}_{\bar{X}_1^{\frac{1}{2}}} $$
\end{proposition}
\begin{proof} We may assume that $ \phi $ has Fourier support in $ \{ \vm{\tau-\jb{\xi}} \simeq 2^j,\ \tau \geq 0 \}$.
The bound clearly holds for the $ \bar{X}_{\infty}^{\frac{1}{2}} $ component of $ \bar{S}_k $. For the other norms we claim $ \vn{e^{it \jb{D}} u}_{\bar{S}_k} \lesssim \vn{u}_{L^2_x} $. Assuming this, we write $ \tau=\rho+\jb{\xi}$ in the inversion formula
$$ \phi(t)=\int e^{i t \tau+ ix \xi} \mathcal{F} \phi (\tau, \xi) \,\mathrm{d} \xi \,\mathrm{d} \tau=\int_{\vm{\rho} \simeq 2^j} e^{i t \rho} e^{i t \jb{D}} \phi_{\rho} \,\mathrm{d} \rho $$
for $ \hat{\phi_{\rho}}(\xi)=\mathcal{F} \phi (\rho+\jb{\xi}, \xi) $. Then by Minkowski and Cauchy-Schwarz inequalities
$$ \vn{\phi}_{\bar{S}_k} \lesssim \int_{\vm{\rho} \simeq 2^j} \vn{e^{i t \jb{D}} \phi_{\rho} }_{\bar{S}_k} \,\mathrm{d} \rho \lesssim \int_{\vm{\rho} \simeq 2^j} \vn{\phi_{\rho} }_{L^2_x} \,\mathrm{d} \rho \lesssim 2^{\frac{j}{2}} \vn{\phi}_{L^2_{t,x}} \simeq \vn{\phi}_{\bar{X}_1^{\frac{1}{2}}}. $$
By an orthogonality argument, for any $ l<0 $ it remains to establish
$$ e^{it \jb{D}} \bar{P}_k: L^2_x \to \bar{S}^{Str}_k, \qquad e^{it \jb{D}} \bar{P}_k P_l^{\omega}: L^2_x \to \bar{S}_k^{\omega \pm}(l) $$
The first mapping follows by taking $ \psi_{k,\pm}=0 $ in \eqref{reducedStr}. The second one follows similarly, by orthogonality and \eqref{red:L2Linf} for $ L^2 L^{\infty} $, \eqref{PWwaves} for $ PW_C^{\pm} $ and Corollary \ref{CorNE} for $ NE_C^{\pm} $. For $ k=0 $, the $ S_{box(k')} $ component follows similarly. \end{proof}
For iterating Maxwell's equation we will use the following proposition.
\begin{proposition} \label{A:solv}
For any $ A $ such that $ A[0]=0 $ one has
\begin{equation}
\vn{A}_{\ell^1 S^{\sigma}} \lesssim \vn{\Box A}_{\ell^1(N^{\sigma-1} \cap L^2 \dot{H}^{\sigma-\frac{3}{2}})}
\end{equation}
For any free solution $ A^{free} $, i.e. $ \Box A^{free}=0 $, one has $ \vn{A^{free}}_{S^{\sigma}} \simeq \vn{A^{free}[0]}_{\dot{H}^{\sigma} \times \dot{H}^{\sigma-1}}$. Thus, for any $ A $,
\begin{equation}
\vn{A}_{S^{\sigma}} \lesssim \vn{A[0]}_{\dot{H}^{\sigma} \times \dot{H}^{\sigma-1}} + \vn{\Box A}_{N^{\sigma-1} \cap L^2 \dot{H}^{\sigma-\frac{3}{2}}}
\end{equation}
In addition, for any $ A_0 $ one has
\begin{equation}
\vn{A_0}_{Y^{\sigma}} \lesssim \vn{\Delta A_0}_{\ell^1(L^{\infty} \dot{H}^{\sigma-2}\cap L^2 \dot{H}^{\sigma-\frac{3}{2}})}+\vn{\Delta \partial_t A_0}_{{\ell^1(L^{\infty} \dot{H}^{\sigma-3}\cap L^2 \dot{H}^{\sigma-\frac{5}{2}})}}
\end{equation}
\end{proposition}
\begin{proof} The $ A_0 $ bound follows easily from the definition of $ Y^{\sigma} $. The $ A $ bounds are reduced to
$$ \vn{\nabla_{t,x} A_{k'}}_{S_{k'}} \lesssim \vn{A_{k'}[0]}_{\dot{H}^1 \times L^2} + \vn{\Box A_{k'}}_{N_{k'}} $$
The $ X_{\infty}^{\frac{1}{2}} $ part follows easily from Lemma \ref{Sobolev_lemma}. Using the argument of Lemma \ref{waves} (with $ \psi=0 $), we reduce to showing
\begin{equation} e^{\pm it \vm{D}} P_{k'} : L^2_x \to S^{Str,W}, \qquad e^{\pm it \vm{D}} P_{k'} P_l^{\omega}: L^2_x \to S_{k'}^{\omega }(l)
\end{equation}
The first mapping represents well-known Strichartz estimates. By orthogonality, the second one follows from
$$ 2^{-\frac{d-1}{2}k'-\frac{d-3}{2}l} e^{\pm it \vm{D}} P_{k'} P_l^{\omega}: L^2_x \to L^2 L^{\infty}, $$
$$ 2^{-\frac{d-2}{2} k''-\frac{1}{2}k'-\frac{d-3}{2}l'} e^{\pm it \vm{D}} P_{k'} P_{C_{k''}(l')}: L^2_x \to L^2 L^{\infty}
$$
By a $ TT^* $ argument, these are reduced to the dispersive estimate \eqref{dispestt2}, as in Cor. \ref{Cor:L2Linf} (with $ \psi=0 $ and $ \vm{D} $ instead of $ \jb{D} $, which does not affect the proof).
\end{proof}
Finally, we have
\begin{proposition} \label{Nk:orthog}
Let $ k \geq 0 $ and $ \mathcal C_{k'}(l') $ be a finitely overlapping collection of boxes. We have
$$ \sum_{\mathcal C_{k'}(l')} \vn{P_{\mathcal C_{k'}(l')} F}_{\bar{N}_k}^2 \lesssim \vn{F}_{\bar{N}_k}^2
$$
\end{proposition}
\begin{proof} Since $ \bar{N}_k $ is an atomic space the property reduces to the corresponding inequalities for $ L^1 L^2 $ and $ L^2_{t,x} $, which are standard inequalities.
\end{proof}
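For instance, the $ L^1 L^2 $ case follows from Minkowski's inequality for $ \ell^2 $-valued functions together with almost orthogonality at each fixed time:
$$ \Big( \sum_{\mathcal C} \vn{P_{\mathcal C} F}_{L^1 L^2}^2 \Big)^{\frac{1}{2}} \leq \Big\Vert \Big( \sum_{\mathcal C} \vn{P_{\mathcal C} F(t)}_{L^2_x}^2 \Big)^{\frac{1}{2}} \Big\Vert_{L^1_t} \lesssim \vn{F}_{L^1 L^2}. $$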
\section{The parametrix theorem} \label{Sec_parametrix}
We define the paradifferential covariant Klein-Gordon operator
\begin{equation}
\Box_m^{p,A}= \Box+I-2i \sum_{k \geq 0} A^j_{<k-C} \partial_j \bar{P}_k
\end{equation}
where $ A=A^{free}=(A_1,\dots,A_d,0) $ is a real-valued 1-form defined on $ \mathbb{R}^{1+d} $, assumed to solve the free wave equation and to obey the Coulomb gauge condition
\begin{equation} \label{A:cond}
\Box A=0, \qquad \partial^j A_j=0.
\end{equation}
By the argument in Prop. \ref{A:solv} one may show
$$ \vn{\phi}_{\bar{S}^{\sigma}} \lesssim \vn{\phi[0]}_{H^{\sigma} \times H^{\sigma-1}} + \vn{\Box_m \phi}_{\bar{N}^{\sigma-1}}
$$
Following \cite{KST}, the goal is to generalize this inequality, showing that $ \Box_m $ can be replaced by $ \Box_m^{p,A} $.
Consider the problem
\begin{equation} \label{problem}
\left\{
\begin{array}{l}
\Box_m^{p,A} \phi=F \\
\phi[0]=(f,g)
\end{array}
\right.
\end{equation}
\begin{theorem} \label{main:parametrix} Let $ A $ be a real 1-form obeying \eqref{A:cond} on $ \mathbb{R}^{d+1} $ for $ d \geq 4 $. If $ \vn{A[0]}_{\dot{H}^{\sigma} \times \dot{H}^{\sigma-1}} $ is sufficiently small, then for any $ F \in \bar{N}^{\sigma-1} \cap L^2 H^{\sigma-\frac{3}{2}} $ and $ (f,g) \in H^{\sigma} \times H^{\sigma-1} $, the solution of \eqref{problem} exists globally in time and satisfies
\begin{equation} \label{en:est}
\vn{\phi}_{\bar{S}^{\sigma}} \lesssim \vn{(f,g)}_{H^{\sigma} \times H^{\sigma-1}} + \vn{F}_{\bar{N}^{\sigma-1} \cap L^2 H^{\sigma-\frac{3}{2}}}
\end{equation}
\end{theorem}
The proof of this theorem will reduce to its frequency localized approximate version.
\begin{theorem} \label{corethm}
Let $ A $ be a real 1-form obeying \eqref{A:cond} on $ \mathbb{R}^{d+1} $ for $ d \geq 4 $ and let $ k \geq 0 $. If $ \vn{A[0]}_{\dot{H}^{\sigma} \times \dot{H}^{\sigma-1}} $ is sufficiently small, then for any $ (f_k,g_k) $ with Fourier support in $ \{ \jb{\xi} \simeq 2^k \} $ and any $ F_k $ with Fourier support in $ \{ \jb{\xi} \simeq 2^k, \ \vm{\vm{\tau}- \jb{\xi} } \ll 2^k \} $ there exists a function $ \phi_k $ with Fourier support in $ \{ \jb{\xi} \simeq 2^k, \ \vm{\vm{\tau}- \jb{\xi} } \ll 2^k \} $ such that
\begin{align}
& \vn{ (\jb{D_x},\partial_t) \phi_k}_{\bar{S}_k} \lesssim \vn{(f_k,g_k)}_{H^1 \times L^2}+\vn{F_k}_{\bar{N}_k} =: M_k \label{core1} \\
& \vn{(\Box_m - 2i A^j_{<k-c} \partial_j) \phi_k -F_k}_{\bar{N}_k} \lesssim \varepsilon^{\frac{1}{2}} M_k \label{core2} \\
& \vn{(\phi_k(0)-f_k, \partial_t \phi_k(0)-g_k)}_{H^1 \times L^2} \lesssim \varepsilon^{\frac{1}{2}} M_k. \label{core3}
\end{align}
\end{theorem}
\
The approximate solution will be defined by $ 2 \phi_k=T^{+}+T^{-}+ S^{+}+S^{-} $ where
\begin{equation} \label{eq:renorm}
\begin{aligned}
& T^{\pm} \vcentcolon= e^{-i \psi^k_{\pm}}_{<k}(t,x,D) \frac{e^{\pm i t \jb{D}}}{i \jb{D}} e^{i \psi^k_{\pm}}_{<k}(D,y,0) (i \jb{D} f_k \pm g_k) \\
& S^{\pm} \vcentcolon= \pm e^{-i \psi^k_{\pm}}_{<k}(t,x,D) \frac{K^{\pm}}{i \jb{D}} e^{i \psi^k_{\pm}}_{<k}(D,y,s) F_k,
\end{aligned}
\end{equation}
The phase $ \psi^k_{\pm}(t,x,\xi) $ is defined in Section \ref{Constr:phase} and $ K^{\pm} F $ are the Duhamel terms
$$ K^{\pm} F(t)=u(t)=\int_0^t e^{\pm i (t-s) \jb{D}} F(s) \,\mathrm{d} s, \qquad (\partial_t \mp i \jb{D})u=F, \quad u(0)=0. $$
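Indeed, differentiating under the integral sign gives
$$ \partial_t K^{\pm}F(t) = F(t) \pm i \jb{D} \int_0^t e^{\pm i (t-s) \jb{D}} F(s) \,\mathrm{d} s = F(t) \pm i \jb{D}\, K^{\pm}F(t), $$
so $ u=K^{\pm}F $ solves the half-wave equation above with $ u(0)=0 $.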
To implement this one needs estimates for the operators $ e^{-i \psi^k_{\pm}}_{<k}(t,x,D) $ and their adjoints, adapted to the function spaces used in the iteration.
\begin{theorem} \label{Renormalization:thm} For any $ k \geq 0 $, the frequency localized renormalization operators have the following properties for any $ X \in \{ \bar{N}_k,L^2_x,\bar{N}^{*}_k \} $:
\begin{align}
\label{renbd} e_{<k}^{\pm' i \psi^k_{\pm}} (t,x,D) & : X \to X \\
\label{renbdt} 2^{-k} \partial_{t,x} e_{<k}^{\pm' i \psi^k_{\pm}} (t,x,D) & : X \to \varepsilon X \\
\label{renbd2} e_{<k}^{-i \psi^k_{\pm}} (t,x,D) e_{<k}^{i \psi^k_{\pm}} (D,y,s)-I & : X \to \varepsilon^{\frac{1}{2}} X
\end{align}
as well as
\begin{equation} \label{renbd3}
2^k \vn{e_{<k}^{-i \psi^k_{\pm}} (t,x,D) u_k}_{\bar{S}_k} \lesssim \vn{u_k}_{L^{\infty}(H^1 \times L^2)}+ \vn{\Box_m u_k}_{\bar{N}_k}
\end{equation}
\begin{equation} \label{conj}
\begin{aligned}
& \vn{e_{<k}^{-i \psi^k_{\pm}} (t,x,D) \Box_m u_k - \Box_{m}^{A_{<k}} e_{<k}^{-i \psi^k_{\pm}} (t,x,D) u_k}_{\bar{N}_{k}} \lesssim \\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \varepsilon \vn{ u_k}_{L^{\infty} H^1}+ \varepsilon 2^k \vn{(i\partial_t\pm \jb{D})u_k}_{\bar{N}_k}
\end{aligned} \end{equation}
\end{theorem}
Moreover, by \eqref{renbd2} and \eqref{spec_decomp2} one obtains
\begin{equation} \label{renbd4}
e^{-i \psi^k_{\pm}}_{<k}(t,x,D) \frac{1}{\jb{D}} e^{i \psi^k_{\pm}}_{<k}(D,y,s) - \frac{1}{\jb{D}} : X \to \varepsilon^{\frac{1}{2}} 2^{-k} X
\end{equation}
The proof of Theorem \ref{Renormalization:thm} is given in section \ref{sec:pf:thm:ren}, relying on the contents of sections \ref{Constr:phase}, \ref{sec:Osc-int}. Now we show how these mappings imply Theorems \ref{main:parametrix}, \ref{corethm}.
\begin{proof}[Proof of Theorem \ref{main:parametrix}]
\pfstep{Step~1} We first construct an approximate solution $ \phi^a=\phi^a[f,g,F] $ satisfying, for some $ \delta \in (0,1) $:
\begin{equation} \label{aprr1} \vn{\Box_m^{p,A} \phi^a-F}_{\bar{N}^{\sigma-1} \cap L^2 H^{\sigma-\frac{3}{2}}}+\vn{\phi^a[0]-(f,g)}_{H^{\sigma} \times H^{\sigma-1}} \leq \delta \big[ \vn{F}_{\bar{N}^{\sigma-1} \cap L^2 H^{\sigma-\frac{3}{2}}}+\vn{(f,g)}_{H^{\sigma} \times H^{\sigma-1}} \big]
\end{equation}
and
\begin{equation} \label{aprr2}
\vn{\phi^a}_{\bar{S}^{\sigma}} \lesssim \vn{F}_{\bar{N}^{\sigma-1} \cap L^2 H^{\sigma-\frac{3}{2}}}+\vn{(f,g)}_{H^{\sigma} \times H^{\sigma-1}}.
\end{equation}
We define $ \phi^a $ from its frequency-localized versions
$$ \phi^a \vcentcolon= \sum_{k \geq 0} \phi^a_k, \qquad \phi^a_k=\phi^1_k+\phi^2_k $$
which remain to be defined. We decompose $ \bar{P}_k F=\bar{Q}_{<k-6} \bar{P}_k F+ \bar{Q}_{>k-6}\bar{P}_k F $ and first define $ \phi^2_k $ by
$$
\mathcal F \phi^2_k(\tau,\xi) \vcentcolon= \frac{1}{-\tau^2+\vm{\xi}^2+1} \mathcal F(\bar{Q}_{>k-6}\bar{P}_k F) (\tau,\xi)
$$
so that $ \Box_m \phi^2_k=\bar{Q}_{>k-6}\bar{P}_k F $. We have
$$ \vn{ (\jb{D_x},\partial_t) \phi^2_k}_{\bar{S}_k } \lesssim \vn{\phi^2_k}_{L^{\infty}(H^1 \times L^2)}+\vn{\bar{Q}_{>k-6}\bar{P}_k F}_{\bar{N}_k} \lesssim \vn{\bar{P}_k F}_{\bar{N}_k}. $$
Then we apply Theorem \ref{corethm} to $ \bar{Q}_{<k-6} \bar{P}_k F $ and $ \bar{P}_k(f,g)-\phi^2_k[0] $ which defines the function $ \phi^1_k $. We are left with estimating
$$ \vn{A^j_{<k-C} \partial_j \phi^2_k}_{L^1 L^2 \cap L^2 H^{-\frac{1}{2}}} \lesssim \vn{A^j_{<k-C}}_{L^2 L^{\infty}} \vn{\nabla \phi^2_k}_{L^2_{t,x}\cap L^{\infty} H^{-\frac{1}{2}} } \lesssim \varepsilon \vn{\bar{P}_k F}_{\bar{N}_k} $$
and similarly, using also Lemma \ref{Sobolev_lemma},
\begin{align*}
2^{-\frac{1}{2}k} \vn{\Box_m \phi^1_k}_{L^2_{t,x}} & \lesssim \vn{\Box_{m}^{A_{<k}} \phi^1_k-\bar{Q}_{<k-6} \bar{P}_k F}_{\bar{N}_k}+\vn{\bar{Q}_{<k-6} \bar{P}_k F}_{\bar{N}_k}+ \vn{A^j_{<k-C} \partial_j \phi^1_k}_{L^2 H^{-\frac{1}{2}}} \\
& \lesssim \vn{\bar{P}_k F}_{\bar{N}_k} + \vn{\bar{P}_k(f,g)}_{H^1 \times L^2}
\end{align*}
The following error estimate, for $ k',k''=k+O(1) $, follows from \eqref{est:phi1:freqAx}, \eqref{est:phi6}:
$$
\vn{A^j_{k'} \partial_j \bar{P}_{k''} \phi^a_k}_{\bar{N}_k \cap L^2 H^{-\frac{1}{2}}} \lesssim \varepsilon \vn{\phi^a_k}_{\bar{S}^1_k}
$$
\pfstep{Step~2} Now we iterate the approximate solutions from Step 1 to construct an exact solution. We define $ \phi \vcentcolon= \lim \phi^{\leq n} $ where
$$ \phi^{\leq n} \vcentcolon= \phi^1 + \dots + \phi^n $$
and the $ \phi^n $ are defined inductively by $ \phi^1 \vcentcolon= \phi^a[f,g,F] $ and
$$ \phi^n \vcentcolon= \phi^a[(f,g)-\phi^{\leq n-1}[0],F-\Box_m^{p,A} \phi^{\leq n-1} ] $$
Normalizing $\vn{F}_{\bar{N}^{\sigma-1} \cap L^2 H^{\sigma-\frac{3}{2}}}+\vn{(f,g)}_{H^{\sigma} \times H^{\sigma-1}} =1 $ it follows by induction using \eqref{aprr1}, \eqref{aprr2} that
\begin{equation} \label{aprr3} \vn{\Box_m^{p,A} \phi^{\leq n}-F}_{\bar{N}^{\sigma-1} \cap L^2 H^{\sigma-\frac{3}{2}}}+\vn{\phi^{\leq n}[0]-(f,g)}_{H^{\sigma} \times H^{\sigma-1}} \leq \delta^n \end{equation}
and
\begin{equation} \label{aprr4}
\vn{\phi^{n}}_{\bar{S}^{\sigma}} \lesssim \delta^{n-1}.
\end{equation}
Thus $ \phi^{\leq n} $ is a Cauchy sequence in $ \bar{S}^{\sigma} $ and $ \phi $ is well-defined, satisfying \eqref{en:est}. Passing to the limit in \eqref{aprr3} we see that $ \phi $ solves \eqref{problem}.
\end{proof}
\begin{remark} \label{fe:par} The argument above also implies a frequency envelope version of \eqref{en:est}, which will be useful in proving continuous dependence on the initial data:
\begin{equation}
\vn{\phi}_{\bar{S}^{\sigma}_c} \lesssim \vn{(f,g)}_{H^{\sigma}_c \times H^{\sigma-1}_c} + \vn{F}_{(\bar{N}^{\sigma-1} \cap L^2 H^{\sigma-\frac{3}{2}})_c}
\end{equation}
\end{remark}
\begin{proof}[Proof of Theorem \ref{corethm}]
We define $ \phi_k $ by
$$ \phi_k=\frac{1}{2} \big( T^{+}+T^{-}+ S^{+}+S^{-} \big) $$
where $ T^{\pm}, S^{\pm} $ are defined by \eqref{eq:renorm}.
The bound \eqref{core1} follows from \eqref{renbd3} and \eqref{renbd}, where for $ \partial_t \phi_k $ we use the low modulation support of $ \phi_k $.
We turn to \eqref{core3} and write
$$ \phi_k(0)-f_k =\frac{1}{2i} \sum_{\pm} [ e^{-i \psi^k_{\pm}}_{<k}(0,x,D) \frac{1}{\jb{D}} e^{i \psi^k_{\pm}}_{<k}(D,y,0) - \frac{1}{\jb{D}} ] (i \jb{D} f_k \pm g_k) $$
\begin{align*} \partial_t \phi_k(0)-g_k = \frac{1}{2} \sum_{\pm} \biggr[ &
[ e^{-i \psi^k_{\pm}}_{<k}(0,x,D) e^{i \psi^k_{\pm}}_{<k}(D,y,0)-I ] (\pm i \jb{D} f_k + g_k) \\
&+ [\partial_t e^{-i \psi^k_{\pm}}_{<k}] (0,x,D) \frac{1}{i \jb{D}} e^{i \psi^k_{\pm}}_{<k}(D,y,0) (i \jb{D} f_k \pm g_k)\\
& \pm [ e^{-i \psi^k_{\pm}}_{<k}(0,x,D) \frac{1}{i \jb{D}} e^{i \psi^k_{\pm}}_{<k}(D,y,0)- \frac{1}{i \jb{D}} ] F_k(0) \biggr]
\end{align*}
These are estimated using \eqref{renbd4}, \eqref{renbd2}, \eqref{renbd}, respectively \eqref{renbd4}, together with
$$ \vn{F_k(0)}_{L^2_x} \lesssim \vn{F_k}_{L^{\infty}L^2} \lesssim 2^k \vn{F_k}_{\bar{N}_k} $$
which follows from Lemma \ref{Sobolev_lemma} considering the modulation assumption on $ F_k $.
Now we prove \eqref{core2}. We write
\begin{align}
\Box_{m}^{A_{<k}} \phi_k -F_k= & \sum_{\pm} \big[ [ \Box_{m}^{A_{<k}} e_{<k}^{-i \psi^k_{\pm}} (t,x,D) - e_{<k}^{-i \psi^k_{\pm}} (t,x,D) \Box_m ] \phi_{\pm} \label{cj:ln1} \\
& \pm \frac{1}{2} e^{-i \psi^k_{\pm}}_{<k}(t,x,D) \frac{\partial_t \pm i \jb{D}}{i \jb{D}} e^{i \psi^k_{\pm}}_{<k}(D,y,s) F_k \big] -F_k. \label{cj:ln2}
\end{align}
where
$$
\phi_{\pm} \vcentcolon= \frac{1}{2i \jb{D}} \big[ e^{\pm i t \jb{D}} e^{i \psi^k_{\pm}}_{<k}(D,y,0) (i \jb{D} f_k \pm g_k) \pm K^{\pm} e^{i \psi^k_{\pm}}_{<k}(D,y,s) F_k \big]
$$
Using \eqref{conj} we estimate
$$ \vn{\eqref{cj:ln1}}_{\bar{N}_k} \lesssim \sum_{\pm} \varepsilon [ \vn{e^{i \psi^k_{\pm}}_{<k}(D,y,0) (i \jb{D} f_k \pm g_k) }_{L^2}+ \vn{e^{i \psi^k_{\pm}}_{<k}(D,y,s) F_k }_{\bar{N}_k} ]
$$
and then we use \eqref{renbd}. Now we turn to \eqref{cj:ln2} and write
\begin{align}
\eqref{cj:ln2}= \sum_{\pm} & \frac{1}{2} \biggr[ [e^{-i \psi^k_{\pm}}_{<k}(t,x,D) e^{i \psi^k_{\pm}}_{<k}(D,y,s)-I] F_k \label{cj:ln3} \\
& \pm i^{-1} [ e^{-i \psi^k_{\pm}}_{<k}(t,x,D) \frac{1}{ \jb{D}} e^{i \psi^k_{\pm}}_{<k}(D,y,s)-\frac{1}{ \jb{D}}] \partial_t F_k \label{cj:ln4} \\
& \pm e^{-i \psi^k_{\pm}}_{<k}(t,x,D) \frac{1}{i \jb{D}} [\partial_t e^{i \psi^k_{\pm}}_{<k}](D,y,s) F_k \biggr]. \label{cj:ln5}
\end{align}
For \eqref{cj:ln3} we use \eqref{renbd2}, for \eqref{cj:ln4} we use \eqref{renbd4}, and for \eqref{cj:ln5} we use \eqref{renbd}, \eqref{renbdt}, all with $ X=\bar{N}_k $.
\end{proof}
\section{Statements of the main estimates} \label{Sec_statements}
To analyze the equation for $ A $ we introduce the main terms
\begin{equation} \label{A:bil:op}
\begin{aligned}
{\bf A}_{j} ( \phi^{1}, \phi^{2}) :=& - \Box^{-1} \mathcal P_{j} \mathfrak{I} (\phi^1 \nabla_x \bar{\phi^2}) , \\
{\bf A}_{0} ( \phi^{1}, \phi^{2}) :=& - \Delta^{-1} \mathfrak{I} (\phi^1 \partial_t \bar{\phi^2}).
\end{aligned}
\end{equation}
where $ \Box^{-1} f$ denotes the solution $\phi$ to the inhomogeneous wave equation $\Box \phi = f$ with $\phi[0] = 0$. Using the formula for $ \mathcal P_j $ one identifies the null structure (see \eqref{cl:nf})
\begin{equation} \label{ax:nf:identity}
\mathcal P_{j} (\phi^1 \nabla_x \phi^2)=\Delta^{-1} \nabla^i \mathcal N_{ij} (\phi^1,\phi^2).
\end{equation}
\begin{remark} \label{ax:skew-adj}
Note that \eqref{ax:nf:identity} shows that $ \mathcal P_{j} (\phi^1 \nabla_x \phi^2) $ is a skew adjoint bilinear form.
\end{remark}
\
\begin{proposition} \label{prop:ax:est} One has the following estimates:
\begin{align}
& \vn{\mathcal P_{j} (\phi^1 \nabla_x \phi^2)}_{\ell^1 N^{\sigma-1}} \lesssim \vn{\phi^1}_{\bar{S}^{\sigma}} \vn{\phi^2}_{\bar{S}^{\sigma}} \label{est:ax1} \\
& \vn{\phi^1 \nabla_{t,x} \phi^2}_{\ell^1 ( L^2 \dot{H}^{\sigma-\frac{3}{2}} \cap L^{\infty} \dot{H}^{\sigma-2} ) } \lesssim \vn{\phi^1}_{\bar{S}^{\sigma}} \vn{\phi^2}_{\bar{S}^{\sigma}} \label{est:a01} \\
& \vn{\phi^1 \phi^2 A}_{\ell^1 ( L^1 \dot{H}^{\sigma-1} \cap L^2 \dot{H}^{\sigma-\frac{3}{2}} \cap L^{\infty} \dot{H}^{\sigma-2} )} \lesssim \vn{\phi^1}_{\bar{S}^{\sigma}} \vn{\phi^2}_{\bar{S}^{\sigma}} \vn{A}_{S^{\sigma} \times Y^{\sigma}} \label{est:ax2}
\end{align}
\end{proposition}
\
Moving on to the $ \phi $ nonlinearity, when $ A_x $ is divergence free, we can write $ A_j=\mathcal P_j A $, which implies
\begin{equation} \label{phi:nf:identity}
A^i \partial_i \phi= \sum_{i,j} \mathcal N_{ij} \big( \nabla_i \Delta^{-1} A_j,\phi \big).
\end{equation}
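Indeed, writing out the sum over $ i,j $ and using the Coulomb condition $ \partial^j A_j=0 $,
$$ \sum_{i,j} \mathcal N_{ij} \big( \nabla_i \Delta^{-1} A_j,\phi \big) = \sum_{j} \big( \Delta \Delta^{-1} A_j \big) \partial_j \phi - \sum_{i} \nabla_i \Delta^{-1} \big( \partial^j A_j \big) \partial_i \phi = A^i \partial_i \phi. $$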
As discussed in the introduction, the most difficult interaction occurs when $A_{0}$ and $A_{x}$ have frequencies lower than $\phi$.
To isolate this part, we introduce the low-high paradifferential operators
\begin{align} \label{pi:op}
\pi[A] \phi &:= \sum_{k \geq 0} P_{< k-C} A_{\alpha} \, \partial^{\alpha} \bar{P}_{k} \phi.
\end{align}
Moreover, we define
\begin{align*} \mathcal H^{\ast}_{k'} L(A, \phi)
&= \sum_{j < k' + C_{2}} \bar{Q}_{< j} L(P_{k'} Q_{j} A, \bar{Q}_{< j} \phi), \\
\mathcal H^{\ast} L(A, \phi)
&= \sum_{\substack{ k' < k - C_{2} - 10 \\ k' \in \mathbb{Z}, \ k,\tilde{k} \geq 0 }} \bar{P}_{\tilde{k}} \mathcal H^{\ast}_{k'} L(A, \phi_{k}).
\end{align*}
With these notations, we have
\begin{proposition} \label{prop:phi:est} \
\begin{enumerate} [leftmargin=*]
\item For all $ \phi $ and $ A=(A_x,A_0) $ such that $ \partial_j A_j=0 $ one has the null form estimates:
\begin{align}
\vn{A_{\alpha} \partial^{\alpha} \phi- \pi[A] \phi }_{\bar{N}^{\sigma-1} } & \lesssim \vn{A}_{S^{\sigma} \times Y^{\sigma}} \vn{\phi}_{\bar{S}^{\sigma}} \label{est:phi1} \\
\vn{(I-\mathcal H^*) \pi[A] \phi}_{\bar{N}^{\sigma-1}} & \lesssim \vn{A}_{\ell^1 S^{\sigma} \times Y^{\sigma} } \vn{\phi}_{\bar{S}^{\sigma}} \label{est:phi2} \\
\vn{ \mathcal H^* \pi[A] \phi}_{\bar{N}^{\sigma-1}} & \lesssim \vn{A}_{Z \times Z^{ell}} \vn{\phi}_{\bar{S}^{\sigma}} \label{est:phi3}
\end{align}
\item For all $ \phi $ and $ A=(A_x,A_0) $ one has
\begin{align}
\vn{A^{\alpha} \partial_{\alpha} \phi}_{L^2 H^{\sigma-\frac{3}{2}}} & \lesssim \vn{A}_{S^{\sigma} \times Y^{\sigma}} \vn{\phi}_{\bar{S}^{\sigma}} \label{est:phi6} \\
\vn{\partial_t A_0 \phi }_{\bar{N}^{\sigma-1} \cap L^2 H^{\sigma-\frac{3}{2}}} & \lesssim \vn{A_0}_{Y^{\sigma}} \vn{\phi}_{\bar{S}^{\sigma}} \label{est:phi4} \\
\vn{A^1_{\alpha}A^2_{\alpha} \phi }_{\bar{N}^{\sigma-1} \cap L^2 H^{\sigma-\frac{3}{2}} } & \lesssim \vn{A^1}_{S^{\sigma} \times Y^{\sigma}} \vn{A^2}_{S^{\sigma} \times Y^{\sigma}} \vn{\phi}_{\bar{S}^{\sigma}} \label{est:phi5}.
\end{align}
\end{enumerate}
\end{proposition}
\
The following trilinear bound contains the more delicate estimates occurring in our system. It relies crucially on the cancelation discovered in \cite{MachedonSterbenz}, and to handle it we will need the norms $ L^{\infty}_{t_{\omega,\lambda}} L^2_{x_{\omega,\lambda}},\ L^{2}_{t_{\omega,\lambda}} L^{\infty}_{x_{\omega,\lambda}} $, the Lorentz norms $ L^1 L^{2,1}, \ L^2 L^{4,2} $, as well as the bilinear forms from section \ref{bil:forms:sec}. The proof is in Section \ref{Trilinear:section}.
\
\begin{proposition} \label{trilinear} For $ {\bf A} $ and $ \pi $ defined by \eqref{A:bil:op} and \eqref{pi:op} one has:
\begin{equation}
\vn{\pi[{\bf A}( \phi^{1}, \phi^{2}) ] \phi}_{\bar{N}^{\sigma-1} } \lesssim \vn{\phi^1}_{\bar{S}^{\sigma}} \vn{\phi^2}_{\bar{S}^{\sigma}} \vn{\phi}_{\bar{S}^{\sigma}} \label{est:trilin}
\end{equation}
\end{proposition}
\
The null form in \eqref{est:trilin} can be seen as follows (\cite{KST}, \cite{MachedonSterbenz}). Plugging in the Hodge projection $ \mathcal P=I- \nabla \Delta^{-1} \nabla $ and doing some computations (see the appendix of \cite{KST} for details) we may write
\begin{equation} \label{Q:dec}
{\bf A}^{\alpha}(\phi^1,\phi^2) \partial_{\alpha} \phi=(\mathcal{Q}_1+\mathcal{Q}_2+\mathcal{Q}_3)(\phi^1,\phi^2,\phi) \end{equation}
where
\begin{equation} \label{Q:dec2}
\begin{aligned}
\mathcal{Q}_1(\phi^1,\phi^2,\phi) :=& - \Box^{-1} \mathfrak{I} (\phi^1 \partial_{\alpha} \bar{\phi^2})\cdot \partial^{\alpha}\phi , \\
\mathcal{Q}_2(\phi^1,\phi^2,\phi) :=& \Delta^{-1} \Box^{-1} \partial_t \partial^{\alpha} \mathfrak{I} (\phi^1 \partial_{\alpha} \bar{\phi^2})\cdot \partial_{t}\phi , \\
\mathcal{Q}_3(\phi^1,\phi^2,\phi) :=& \Delta^{-1} \Box^{-1} \partial_{\alpha} \partial^i \mathfrak{I} (\phi^1 \partial_{i} \bar{\phi^2})\cdot \partial^{\alpha}\phi .
\end{aligned}
\end{equation}
We also define
\begin{align*}
\mathcal H_{k'} L (\phi, \psi)
= & \sum_{j < k' + C_{2}} P_{k'} Q_{j} L(\bar{Q}_{< j} \phi, \bar{Q}_{< j} \psi), \\
\mathcal H L(\phi, \psi)
= & \sum_{\substack{ k' < k_2 - C_{2} - 10 \\ k' \in \mathbb{Z}, \ k_1,k_2 \geq 0 }} \mathcal H_{k'} L( \bar{P}_{k_{1}} \phi, \bar{P}_{k_{2}} \psi).
\end{align*}
\
Before solving the system MKG, we give an example of how the estimates above combine with Theorem \ref{main:parametrix}: we solve the Cauchy problem for $ \Box_m^A \phi=F $ in the particular case $ A=A^{free} $, which will be useful below.
\begin{proposition} \label{cov:Afree}
Let $ A=A^{free} $ be a real 1-form obeying $ \Box A=0, \ \partial^j A_j=0,\ A_0=0 $. If $ \vn{A[0]}_{\dot{H}^{\sigma} \times \dot{H}^{\sigma-1}} $ is sufficiently small, then for any $ \phi[t_0] \in H^{\sigma} \times H^{\sigma-1} $ and any $ F \in \bar{N}^{\sigma-1} \cap L^2 H^{\sigma-\frac{3}{2}} $, the solution of $ \Box_m^A \phi=F $ with data $ \phi[t_0] $ satisfies:
\begin{equation} \label{en:est:free}
\vn{\phi}_{\bar{S}^{\sigma}} \lesssim \vn{\phi[t_0]}_{H^{\sigma} \times H^{\sigma-1}}+ \vn{F}_{\bar{N}^{\sigma-1} \cap L^2 H^{\sigma-\frac{3}{2}}}
\end{equation}
\end{proposition}
\begin{proof} We show that the mapping $ \psi \mapsto \phi $ given by $ \Box_m^{p,A} \phi=F+ \bar{\mathcal M}(A,\psi) $ with data $ \phi[t_0] $ at $ t=t_0 $ is a contraction on $ \bar{S}^{\sigma} $, where
$$ \bar{\mathcal M}(A,\psi)=2i ( A_{\alpha} \partial^{\alpha} \psi - \pi[A] \psi )- A^{\alpha} A_{\alpha} \psi
$$
is chosen so that $ \bar{\mathcal M}(A,\psi)=\Box_m^{p,A} \psi-\Box_m^A \psi $. Using \eqref{est:phi1}, \eqref{est:phi6}, \eqref{est:phi5}, noting that $ \vn{A}_{S^{\sigma} \times Y^{\sigma}} \lesssim \vn{A[0]}_{\dot{H}^{\sigma} \times \dot{H}^{\sigma-1}} \leq \varepsilon \ll 1 $ (since $ A_0=0 $) we obtain
$$ \vn{ \bar{\mathcal M}(A,\psi)}_{ {\bar{N}}^{\sigma-1} \cap L^2 H^{\sigma-\frac{3}{2}} } \lesssim \varepsilon \vn{\psi}_{\bar{S}^{\sigma} } $$
which, since $ \bar{\mathcal M}(A,\cdot) $ is linear in $ \psi $, shows together with Theorem \ref{main:parametrix} that the map is a contraction for $ \varepsilon $ small enough, proving the existence of $ \phi $. The same estimates imply \eqref{en:est:free}.
\end{proof}
\section{Proof of the main theorem}
Assuming the estimates in sections \ref{Sec_parametrix} and \ref{Sec_statements} we prove Theorem \ref{thm:main}.
For $ J_{\alpha}=-\mathfrak{I}(\phi \overline{D_{\alpha} \phi}) $, the MKG system is written as
\begin{equation} \label{MKG:CG} \tag{MKG}
\left\{
\begin{aligned}
\Box_{m}^{A} \phi & =0 \\
\Box A_i & =\mathcal{P}_i J_x \\
\Delta A_0 &=J_0
\end{aligned}
\right.
\end{equation}
We begin with a more detailed formulation of the main part of Theorem \ref{thm:main}. After proving it we proceed to the proofs of statements (2) and (3) of Theorem \ref{thm:main}.
\
\begin{theorem} \label{thm:main-iter}
There exists a universal constant $\varepsilon > 0$ such that
\begin{enumerate}[leftmargin=*]
\item For any initial data $\phi[0] \in H^{\sigma} \times H^{\sigma-1} $, $A_{x}[0] \in \dot{H}^{\sigma} \times \dot{H}^{\sigma-1} $ for \emph{MKG} satisfying the smallness condition \eqref{eq:main:smalldata} and \eqref{Coulomb}, there exists a unique global solution $(\phi, A_x,A_0) \in \bar{S}^{\sigma} \times S^{\sigma} \times Y^{\sigma} $ to \emph{MKG} with this data.
\item
For any admissible frequency envelope $ (c_k)_{k \geq 0} $ such that $ \vn{\bar{P}_k \phi[0]}_{H^{\sigma} \times H^{\sigma-1}} \leq c_k $, we have
\begin{equation} \label{eq:main-fe}
\vn{\bar{P}_k \phi}_{\bar{S}^{\sigma}} \lesssim c_k ,\quad \nrm{P_{k'} [ A_{x} - A_{x}^{free}]}_{S^{\sigma}} + \nrm{P_{k'} A_{0}}_{Y^{\sigma}} \lesssim \begin{cases} c_{k'}^2, & k' \geq 0 \\ 2^{\frac{k'}{2}} c_0^2, & k' \leq 0 \end{cases}.
\end{equation}
\item(Weak Lipschitz dependence) Let $(\phi',A') \in \bar{S}^{\sigma} \times S^{\sigma} \times Y^{\sigma} $ be another solution to \emph{MKG} with small initial data. Then, for $\delta \in (0, \delta_{1})$\footnote{$\delta_{1}$ is the admissible frequency envelope constant.} we have
\begin{equation} \label{eq:weak-lip} \vn{\phi-\phi'}_{\bar{S}^{\sigma-\delta}}+ \vn{A-A'}_{S^{\sigma-\delta} \times Y^{\sigma-\delta}} \lesssim \vn{(\phi-\phi')[0]}_{H^{\sigma-\delta} \times H^{\sigma-\delta-1}} + \vn{(A_x-A_x')[0]}_{\dot{H}^{\sigma-\delta} \times \dot{H}^{\sigma-\delta-1}}
\end{equation}
\item (Persistence of regularity) If $\phi[0] \in H^{N} \times H^{N-1} $, $A_{x}[0] \in \dot{H}^{N} \times \dot{H}^{N-1}$ $(N \geq \sigma)$, then $ (\phi,\partial_t \phi) \in C_{t}(\mathbb R; H^{N}\times H^{N-1})$, $\nabla_{t, x} A_{x} \in C_{t}(\mathbb R; \dot{H}^{N-1})$. In particular, if the data $(\phi[0], A_{x}[0])$ are smooth, then so is the solution $(\phi,A)$.
\end{enumerate}
\end{theorem}
Theorem \ref{thm:main-iter} is proved by an iteration argument as in \cite{KST}. The presence of the non-perturbative interaction with $ A^{free} $ precludes both the usual iteration procedure based on inverting $ \Box $ and the possibility of proving Lipschitz dependence in the full space $ \bar{S}^{\sigma} \times S^{\sigma} \times Y^{\sigma} $. Instead, we will rely on Theorem \ref{main:parametrix} which provides linear estimates for $ \Box_m^{p,A^{free}} $.
\begin{remark} \label{cov:eq-currents} When $ \phi $ solves a covariant equation $ \Box_m^{A} \phi=0 $ for some real 1-form $ A$, denoting the currents $ J_{\alpha}=-\mathfrak{I} (\phi \overline{D_{\alpha}^A \phi}) $, a simple computation shows
$ \partial^{\alpha} J_{\alpha}=0. $
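Indeed, with the conventions used here one computes $ \partial^{\alpha} J_{\alpha}=-\mathfrak{I} \big( D^{\alpha} \phi \overline{D_{\alpha} \phi}+\phi \overline{D^{\alpha} D_{\alpha} \phi} \big) $: the first term is real, and the covariant equation forces $ D^{\alpha} D_{\alpha} \phi $ to be a real multiple of $ \phi $, so the second term is real as well.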
\end{remark}
\
\subsection{Existence and uniqueness}
We first prove Statement~(1) of Theorem~\ref{thm:main-iter}.
\pfstep{Step~1} We set up a Picard iteration. For the zeroth iterate, we take $(\phi^{0},A_j^{0},A_0^0) = (0,A_j^{free},0) $ and for any $n \geq 0$ define $ J_{\alpha}^n=-\mathfrak{I}(\phi^n \overline{D_{\alpha}^{A_n} \phi^n}) $ and, recursively,
\begin{align}
& \Box_{m}^{A^n} \phi^{n+1} =0 \label{cov:iterate} \\
& \Box A_j^{n+1} =\mathcal{P}_j J_x^n \label{ax:iterate} \\
& \Delta A_0^{n+1} =J_0^n \label{a0:iterate}
\end{align}
with initial data $ (\phi[0], A_x[0]) $. Differentiating \eqref{a0:iterate} in time and using Remark \ref{cov:eq-currents}, we get
\begin{equation} \label{a0t:iterate}
\Delta \partial_t A_0^{n+1}=\partial^i J_i^n.
\end{equation}
Note that $ A_0^1=0 $. We claim that
\begin{equation} \label{first:iterate}
\vn{A_x^1}_{S^{\sigma}}=\vn{A_x^{free}}_{S^{\sigma}} \leq C_0 \vn{A_x[0]}_{\dot{H}^{\sigma} \times \dot{H}^{\sigma-1}} \leq C_0 \varepsilon, \qquad \vn{\phi^1}_{\bar{S}^{\sigma}} \leq C_0 \varepsilon
\end{equation}
where $A_{j}^{free}$ denotes the free wave development of $A_{j}[0] = (A_{j}, \partial_{t} A_{j})(0)$.
For $n \geq 1$, denoting $ A^m=(A_x^m,A_0^m) $ we make the induction hypothesis
\begin{equation} \label{eq:main-iter-ind}
\vn{\phi^m-\phi^{m-1}}_{\bar{S}^{\sigma}}+\vn{A^m-A^{m-1}}_{\ell^1 S^{\sigma} \times Y^{\sigma}} \leq (C_{\ast} \varepsilon)^{m}, \qquad m=2,\dots,n,
\end{equation}
for a universal constant $C_{\ast} > 0$. By summing this up and adding \eqref{first:iterate} we get
\begin{equation} \label{eq:sec-iter-ind}
\vn{\phi^m}_{\bar{S}^{\sigma}}+\vn{A^m_x-A^{free}_x}_{\ell^1 S^{\sigma}}+ \vn{A^m_x}_{S^{\sigma}} +\vn{A_0^m}_{Y^{\sigma}} \leq 2 C_0 \varepsilon, \quad m=1,\dots,n. \end{equation}
These estimates imply convergence of $ (\phi^n, A^n_x, A_0^n) $ in the topology of $ \bar{S}^{\sigma} \times S^{\sigma} \times Y^{\sigma} $ to a solution of MKG.
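Indeed, if $ C_{\ast} \varepsilon<1 $, the bound \eqref{eq:main-iter-ind} shows that the iterates form a Cauchy sequence: for $ n>n' $,
$$ \vn{\phi^{n}-\phi^{n'}}_{\bar{S}^{\sigma}}+\vn{A^{n}-A^{n'}}_{\ell^1 S^{\sigma} \times Y^{\sigma}} \leq \sum_{m=n'+1}^{n} (C_{\ast} \varepsilon)^{m} \lesssim (C_{\ast} \varepsilon)^{n'+1} \to 0 \qquad \text{as } n' \to \infty. $$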
\pfstep{Step~2} Notice that we can decompose
\begin{equation*}
\begin{aligned}
& A_{0}^{n+1} = {\bf A}_{0}(\phi^{n}, \phi^{n})+A_0^{R,n+1}, \qquad & A_0^{R,n+1} & \vcentcolon= - \Delta^{-1} (\vm{\phi^n}^2 A_0^n ) \\
& A_{j}^{n+1} = A_{j}^{free} + {\bf A}_{j}(\phi^{n}, \phi^{n}) + A_j^{R,n+1}, \qquad & A_j^{R,n+1} & \vcentcolon= -\Box^{-1} \mathcal{P}_j (\vm{\phi^n}^2 A_x^n )
\end{aligned}
\end{equation*}
for $ {\bf A}=( {\bf A}_{0}, {\bf A}_{j} ) $ defined in \eqref{A:bil:op}, and set $ A^{R,n}=(A_x^{R,n},A_0^{R,n}) $. To estimate $ A^{n+1}-A^n $ we write
\begin{equation} \label{eqA:dif}
\begin{aligned}
A^{n+1}-A^n &={\bf A}(\phi^{n}-\phi^{n-1}, \phi^{n})+{\bf A}(\phi^{n-1}, \phi^{n}-\phi^{n-1})+\big( A^{R,n+1}-A^{R,n} \big) \\
\partial_t A_{0}^{n+1}-\partial_t A_{0}^{n} &=\Delta^{-1} \nabla_x \mathfrak{I} \big( \phi^{n-1} \nabla_x \overline{\phi^{n-1}} -\phi^n \nabla_x \bar{\phi^{n}}+ i \vm{\phi^{n-1}}^2 A_x^{n-1}- i \vm{\phi^n}^2 A_x^n \big)
\end{aligned}
\end{equation}
The difference $ A^{n+1}-A^n $ is estimated in $ \ell^1 S^{\sigma} \times Y^{\sigma} $ using Proposition \ref{A:solv} and \eqref{est:ax1}-\eqref{est:ax2}, together with \eqref{eq:main-iter-ind}, \eqref{eq:sec-iter-ind}. With an appropriate choice of $C_{\ast}$ and $\varepsilon$, this ensures that the induction hypothesis \eqref{eq:main-iter-ind} for $ A $ remains valid with $ m=n+1 $.
Moreover, using \eqref{ZZ:emb} and \eqref{est:ax2} with \eqref{eq:main-iter-ind}, \eqref{eq:sec-iter-ind} we obtain
\begin{equation} \label{eq:aux:z}
\vn{A^{R,n}}_{(Z \cap \ell^1 S^{\sigma}) \times (Z_{ell} \cap Y^{\sigma})} \lesssim \varepsilon ,\quad \vn{A^{R,n}-A^{R,n-1} }_{(Z \cap \ell^1 S^{\sigma}) \times (Z_{ell} \cap Y^{\sigma})} \lesssim ( C_{\ast} \varepsilon )^{n+1}
\end{equation}
\pfstep{Step~3} In order to solve \eqref{cov:iterate}, we rewrite it as
$$ \Box_m^{p,A^{free}} \phi^{n+1}=\mathcal M(A^n,\phi^{n+1})$$
where
\begin{align*}
(2i)^{-1} \mathcal M(A^n,\phi)=\ & \big( A^n_{\alpha} \cdot \partial^{\alpha} \phi - \pi[A^n] \phi \big) + \pi[A^{R,n}] \phi \\
+ & \pi[{\bf A}(\phi^{n-1}, \phi^{n-1})] \phi -(2i)^{-1} \big( \partial_t A_0^n \phi+ A^{n,\alpha} A^n_{\alpha} \phi \big)
\end{align*}
We prove that the map $ \phi \mapsto \psi $ defined by $ \Box_m^{p,A^{free}} \psi=\mathcal M(A^n,\phi) $ is a contraction on $ \bar{S}^{\sigma} $. This follows from Theorem \ref{main:parametrix} together with
\begin{equation} \label{phi:contr}
\vn{\mathcal M(A^n,\phi) }_{\bar{N}^{\sigma-1} \cap L^2 H^{\sigma-\frac{3}{2}}} \lesssim \varepsilon \vn{ \phi}_{\bar{S}^{\sigma}}.
\end{equation}
which holds due to \eqref{est:phi1}-\eqref{est:phi5}, \eqref{est:trilin} since we have \eqref{eq:sec-iter-ind} and \eqref{eq:aux:z}.
Moreover, this argument also establishes \eqref{first:iterate} for $ \phi^1 $ since we are assuming $ A^{R,0}={\bf A}(\phi^{-1}, \phi^{-1})=0 $.
\pfstep{Step~4} To estimate $ \phi^{n+1}-\phi^n $ using Theorem \ref{main:parametrix}, in addition to applying \eqref{phi:contr} with $ \phi=\phi^{n+1}-\phi^n $, we also need
$$
\vn{\mathcal M(A^n,\phi^n)-\mathcal M(A^{n-1},\phi^n) }_{\bar{N}^{\sigma-1} \cap L^2 H^{\sigma-\frac{3}{2}}} \lesssim ( C_{\ast} \varepsilon )^{n} \vn{\phi^n}_{\bar{S}^{\sigma}}
$$
This follows by applying \eqref{est:phi1}, \eqref{est:phi4}, \eqref{est:phi5} with $ A=A^n-A^{n-1} $, then \eqref{est:phi2}, \eqref{est:phi3} with $ A=A^{R,n}-A^{R,n-1} $, and finally \eqref{est:trilin} with $ {\bf A}(\phi^{n-1}, \phi^{n-1}- \phi^{n-2}) $ and $ {\bf A}(\phi^{n-1}- \phi^{n-2},\phi^{n-2}) $. We use these together with \eqref{eq:main-iter-ind} and \eqref{eq:aux:z}. We conclude that, with appropriate $C_{\ast}$ and $\varepsilon$, the induction hypothesis \eqref{eq:main-iter-ind} remains valid with $ m=n+1 $ for $ \phi $ as well.
\pfstep{Step~5} To prove uniqueness, assume that $ (\phi,A) $ and $ (\phi',A') $ are two solutions with the same initial data. Then the same $ A^{free} $ is used in $ \Box_m^{p,A^{free}} $ for both $ \phi, \phi' $ and using the same estimates as above one obtains
$$ \vn{A-A'}_{\ell^1 S^{\sigma} \times Y^{\sigma}}+\vn{\phi-\phi'}_{\bar{S}^{\sigma}} \lesssim \varepsilon \big( \vn{A-A'}_{\ell^1 S^{\sigma} \times Y^{\sigma}}+\vn{\phi-\phi'}_{\bar{S}^{\sigma}} \big).
$$
Choosing $ \varepsilon $ small enough the uniqueness statement follows.
\subsection{The frequency envelope bounds \eqref{eq:main-fe}} The main observation here is that all estimates used in the proof of existence have a frequency envelope version. Using Remark \ref{fe:par} and $ \Box_m^{p,A^{free}} \phi=\mathcal M(A,\phi)$ we have
\begin{equation} \label{fee}
\vn{\phi}_{\bar{S}^{\sigma}_c} \lesssim \vn{\phi[0]}_{(H^{\sigma} \times H^{\sigma-1})_c} + \vn{\mathcal M(A,\phi)}_{(\bar{N}^{\sigma-1} \cap L^2 H^{\sigma-\frac{3}{2}})_c}
\end{equation}
By \eqref{est:phi1:freqA0}, \eqref{est:phi1:freqAx}, \eqref{est:phi2:freqA0}, \eqref{est:phi2:freqAx}, \eqref{est:phi3:freq}, \eqref{Q1:trilinear}, \eqref{Q2:bilest}, \eqref{Q3:est}, Lemma \ref{lemma:additional} and the proof of \eqref{est:phi6}-\eqref{est:phi5} we have
\begin{equation} \label{nonl:fe} \vn{\mathcal M(A,\phi)}_{(\bar{N}^{\sigma-1} \cap L^2 H^{\sigma-\frac{3}{2}})_c} \lesssim \big( \vn{A}_{S^{\sigma} \times Y^{\sigma}}+\vn{A^R}_{(Z \cap \ell^1 S^{\sigma}) \times (Z_{ell} \cap Y^{\sigma})} + \vn{\phi}_{\bar{S}^{\sigma}}^2 \big) \vn{\phi}_{\bar{S}^{\sigma}_c}
\end{equation}
The term in brackets is $ \lesssim \varepsilon $; thus from \eqref{fee} we obtain $ \vn{\phi}_{\bar{S}^{\sigma}_c} \lesssim \vn{\phi[0]}_{(H^{\sigma} \times H^{\sigma-1})_c} $, which implies $ \vn{\bar{P}_k \phi}_{\bar{S}^{\sigma}} \lesssim c_k $.
Now we turn to $ A $. We define $ \tilde{c}_{k'}=c_{k'}^2 $ for $ k' \geq 0 $ and $ \tilde{c}_{k'}=2^{\frac{k'}{2}} c_0^2 $ for $ k' \leq 0 $. One has
$$ \vn{A_x-A^{free}_x }_{S^{\sigma}_{\tilde{c}}}+ \vn{A_0}_{Y^{\sigma}_{\tilde{c}}} \lesssim \vn{\Box A_x}_{N^{\sigma-1}_{\tilde{c}} \cap L^2 \dot{H}^{\sigma-\frac{3}{2}}_{\tilde{c}}}+ \vn{\Delta A_0}_{\Delta Y^{\sigma}_{\tilde{c}}} \lesssim \vn{\phi}_{\bar{S}^{\sigma}_c}^2 \lesssim 1 $$
using \eqref{est:ax1:freq} and the proofs of \eqref{est:a01}, \eqref{est:ax2}. This concludes the proof of \eqref{eq:main-fe}.
\begin{remark} A consequence of \eqref{eq:main-fe} is that if we additionally assume $ (\phi[0],A_x[0]) \in H^s \times H^{s-1} \times \dot{H}^s \times \dot{H}^{s-1} $ for $ s\in (\sigma,\sigma+\delta_1) $ then we can deduce
\begin{equation} \label{hs:fe}
\vn{\phi}_{L^{\infty} (H^s \times H^{s-1})} + \vn{A}_{L^{\infty} (\dot{H}^s \times \dot{H}^{s-1})} \lesssim \vn{\phi[0]}_{H^s \times H^{s-1}} + \vn{A_x[0]}_{\dot{H}^s \times \dot{H}^{s-1}}
\end{equation}
Indeed, choosing the frequency envelope
\begin{equation} \label{ck:fe}
c_k=\sum_{k_1 \geq 0} 2^{-\delta_1 \vm{k-k_1}} \vn{\bar{P}_{k_1} \phi[0]}_{H^{\sigma} \times H^{\sigma-1}}, \qquad \vn{c_k}_{\ell^2(\mathbb{Z}_{+})} \simeq \vn{\phi[0]}_{H^{\sigma} \times H^{\sigma-1}}
\end{equation}
from \eqref{eq:main-fe} we obtain
$$ \vn{\phi}_{L^{\infty} (H^s \times H^{s-1})} \lesssim \vn{\jb{D}^{s-\sigma} \phi}_{\bar{S}^{\sigma}} \lesssim \vn{2^{k(s-\sigma)} c_k}_{\ell^2(\mathbb{Z}_{+})} \lesssim \vn{\phi[0]}_{H^s \times H^{s-1}} $$
and similarly with $ (A_x-A_x^{free},A_0) $; meanwhile $ \vn{A_x^{free}}_{L^{\infty} (\dot{H}^s \times \dot{H}^{s-1})} \lesssim \vn{A_x[0]}_{\dot{H}^s \times \dot{H}^{s-1}} $.
\end{remark}
\subsection{Weak Lipschitz dependence \eqref{eq:weak-lip}} Let $ \delta \phi=\phi-\phi' $ and $ \delta A=A-A' $. Similarly to the equations in \eqref{eqA:dif} we write
$$ \delta A ={\bf A}(\delta \phi, \phi)+{\bf A}(\phi', \delta \phi)+\big( A^{R}-A'^{R} \big)
$$
and similarly for $ \delta \partial_t A_0 $. Applying \eqref{est:ax1:freq} and the estimates in the proofs of \eqref{est:a01}, \eqref{est:ax2} we get
$$ \vn{\delta A}_{S^{\sigma-\delta} \times Y^{\sigma-\delta} } \lesssim \vn{\delta A_x [0]}_{\dot{H}^{\sigma-\delta} \times \dot{H}^{\sigma-\delta-1}}+ \varepsilon \vn{\delta \phi}_{\bar{S}^{\sigma-\delta}} + \varepsilon \vn{\delta A}_{S^{\sigma-\delta}\times Y^{\sigma-\delta} }.
$$
By Remark \ref{fe:par} we have
$$ \vn{ \delta \phi}_{\bar{S}^{\sigma-\delta}} \lesssim \vn{\delta \phi[0]}_{H^{\sigma-\delta} \times H^{\sigma-\delta-1}}+ \vn{\Box_m^{p,A^{free}} \delta \phi}_{\bar{N}^{\sigma-\delta-1}}.
$$
The equation for $ \delta \phi $ is
$$ \Box_m^{p,A^{free}} \delta \phi=\mathcal M(A, \delta \phi)+ \big( \mathcal M(A, \phi')-\mathcal M(A', \phi') \big) + 2i \sum_{k \geq 0} \delta A^{free}_{<k-C} \cdot \nabla_x \phi_k'
$$
By applying \eqref{nonl:fe} with an appropriate frequency envelope $ c $ we get
$$ \vn{\mathcal M(A, \delta \phi)}_{\bar{N}^{\sigma-\delta-1} \cap L^2 H^{\sigma-\frac{3}{2}-\delta}} \lesssim \varepsilon \vn{\delta \phi}_{\bar{S}^{\sigma-\delta}} $$
Similarly we obtain
$$ \vn{ \mathcal M(A, \phi')-\mathcal M(A', \phi') }_{\bar{N}^{\sigma-\delta-1} \cap L^2 H^{\sigma-\frac{3}{2}-\delta}} \lesssim \varepsilon \big( \vn{\delta A}_{S^{\sigma-\delta}\times Y^{\sigma-\delta} } + \vn{\delta \phi}_{\bar{S}^{\sigma-\delta}} \big) $$
Using \eqref{est:phi2:freqAx} (note that the $ \mathcal H^{\ast} $ term is $ 0 $ for $ A^{free} $) we get
$$ \vn{\sum_{k \geq 0} \delta A^{free}_{<k-C} \cdot \nabla_x \phi_k'}_{\bar{N}^{\sigma-\delta-1} \cap L^2 H^{\sigma-\frac{3}{2}-\delta}} \lesssim \vn{\delta A^{free}}_{S^{\sigma-\delta}} \vn{\phi'}_{\bar{S}^{\sigma}} \lesssim \varepsilon \vn{\delta A_x [0]}_{\dot{H}^{\sigma-\delta} \times \dot{H}^{\sigma-\delta-1}}
$$
This is the point where $ \delta>0 $ is used: it allows the $ \ell^2 $-summation over $ k'<k $ of $ \delta A^{free} $. Putting the above together we obtain
\begin{align*}
\vn{ \delta \phi}_{\bar{S}^{\sigma-\delta}}+\vn{\delta A}_{S^{\sigma-\delta} \times Y^{\sigma-\delta} } & \lesssim \vn{\delta \phi[0]}_{H^{\sigma-\delta} \times H^{\sigma-\delta-1}}+ \vn{\delta A_x [0]}_{\dot{H}^{\sigma-\delta} \times \dot{H}^{\sigma-\delta-1}} \\
& + \varepsilon \big( \vn{\delta \phi}_{\bar{S}^{\sigma-\delta}} + \vn{\delta A}_{S^{\sigma-\delta}\times Y^{\sigma-\delta} } \big).
\end{align*}
For $ \varepsilon $ small enough we obtain \eqref{eq:weak-lip}.
\subsection{Subcritical local well-posedness} \label{subcritical:} Here we review some local well-posedness facts that will be used in the proofs below.
Given $ s>\sigma $ we introduce the shorthand $ \mathcal H^{\sigma,s}=(\dot{H}^s \times \dot{H}^{s-1}) \cap (\dot{H}^{\sigma} \times \dot{H}^{\sigma-1}) $.
Note that for $ s>\frac{d}{2}+1 $, $ H^{s-1} $ becomes a Banach algebra of functions on $ \mathbb{R}^d $.
\begin{proposition} \label{subcritical:prop}
Let $ s>\frac{d}{2}+1 $. For any initial data $ \phi[0] \in H^s \times H^{s-1} $ and $ A_x[0] \in \mathcal H^{\sigma,s} $ there exists a unique local solution $ (\phi,A) $ to \emph{MKG} with these data in the space $ (\phi, \partial_t \phi; A,\partial_t A) \in C_t([0,T], H^s \times H^{s-1}; \mathcal H^{\sigma,s}) $ where $ T>0 $ depends continuously on $ \vn{\phi[0]}_{H^s \times H^{s-1}} $ and $ \vn{A_x[0]}_{\mathcal H^{\sigma,s}} $. The data-to-solution map in these spaces is Lipschitz continuous. Moreover, additional Sobolev regularity of the initial data is preserved by the solution.
\end{proposition}
We omit the proof, which proceeds by the usual Picard iteration (based on the d'Alembertian $ \Box $) and the algebra and multiplication properties of the spaces above. Here the mass term $ \phi $ can be treated perturbatively.
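To indicate the scheme: the iterates are controlled, schematically, by the energy inequality for $ \Box $ together with the algebra property,
$$ \vn{\phi[t]}_{H^s \times H^{s-1}} \lesssim \vn{\phi[0]}_{H^s \times H^{s-1}}+ \int_0^t \vn{N(t')}_{H^{s-1}} \, \mathrm{d} t', \qquad \vn{fg}_{H^{s-1}} \lesssim \vn{f}_{H^{s-1}} \vn{g}_{H^{s-1}}, $$
where $ N $ stands for the (at most cubic) nonlinearities of \emph{MKG}; this yields a contraction in the space above for $ T $ small enough, depending on the size of the data.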
We remark that a stronger subcritical result, namely almost optimal local well-posedness (i.e. for initial data in $ H^{1+\varepsilon}( \mathbb{R}^4) $), was proved in \cite{SSel}.
\subsection{Persistence of regularity} Now we sketch the proof of Statement (4) of Theorem \ref{thm:main-iter}. In view of Prop. \ref{subcritical:prop} it remains to check
\begin{equation} \label{higher:reg}
\vn{\nabla^N \phi}_{\bar{S}^{\sigma}}+ \vn{\nabla^N (A-A_x^{free})}_{\ell^1 S^{\sigma} \times Y^{\sigma}} \lesssim \vn{\phi[0]}_{H^{\sigma+N}\times H^{\sigma+N-1}} +\vn{A_x[0]}_{\dot{H}^{\sigma+N} \times \dot{H}^{\sigma+N-1}}.
\end{equation}
for $ N=1,2 $ whenever the RHS is finite. For brevity we will only consider $ N=1 $; the case $ N=2 $ can be treated similarly. We will already assume that $ \nabla \phi \in \bar{S}^{\sigma},\ \nabla A \in S^{\sigma} \times Y^{\sigma} $. This assumption can be bypassed by repeating the proof of \eqref{higher:reg} for each iterate in the proof of existence. We write
$$ \nabla (A_{x}-A_x^{free}) = {\bf A}_{x}( \nabla \phi, \phi) + {\bf A}_{x}(\phi, \nabla \phi)+\nabla A_x^{R}, \quad A_{x}^{R} = -\Box^{-1} \mathcal{P}_x (\vm{\phi}^2 A_x ) $$
Using the product rule we distribute the derivative on the terms inside $ A_x^{R} $. We also write the similar formula for $ \nabla A_0 $. From Prop. \ref{prop:ax:est} we get
\begin{equation} \label{hr1} \vn{\nabla (A-A_x^{free})}_{\ell^1 S^{\sigma} \times Y^{\sigma}} \lesssim \varepsilon ( \vn{\nabla \phi}_{\bar{S}^{\sigma}}+ \vn{\nabla A}_{S^{\sigma} \times Y^{\sigma}} )
\end{equation}
The equation for $ \nabla \phi $ is
$$ \Box_m^{p,A^{free}} \nabla \phi=\nabla \mathcal M(A, \phi) + 2i \sum_{k \geq 0} \nabla A^{free}_{<k-C} \cdot \nabla_x \phi_k
$$
Using the product rule on $ \nabla \mathcal M(A, \phi) $ and Prop. \ref{prop:ax:est}, \ref{prop:phi:est}, \ref{trilinear} we obtain
$$ \vn{ \nabla \mathcal M(A, \phi)}_{\bar{N}^{\sigma-1} \cap L^2 H^{\sigma-\frac{3}{2}}} \lesssim \varepsilon ( \vn{\nabla \phi}_{\bar{S}^{\sigma}}+ \vn{\nabla A}_{S^{\sigma} \times Y^{\sigma}} )
$$
Using \eqref{est:phi2:freqAx} (note that the $ \mathcal H^{\ast} $ term is $ 0 $ for $ A^{free} $) we get
$$ \vn{\sum_{k \geq 0} \nabla A^{free}_{<k-C} \cdot \nabla_x \phi_k}_{\bar{N}^{\sigma-1} \cap L^2 H^{\sigma-\frac{3}{2}}} \lesssim \varepsilon \vn{\nabla A_x [0]}_{\dot{H}^{\sigma} \times \dot{H}^{\sigma-1}}
$$
We bound $ \nabla \phi $ using Theorem \ref{main:parametrix} so that together with \eqref{hr1} we have
\begin{align*} \vn{\nabla \phi}_{\bar{S}^{\sigma}}+ \vn{\nabla (A-A_x^{free})}_{\ell^1 S^{\sigma} \times Y^{\sigma}} \lesssim &\vn{\nabla \phi[0]}_{H^{\sigma}\times H^{\sigma-1}} +\varepsilon \vn{\nabla A_x [0]}_{\dot{H}^{\sigma} \times \dot{H}^{\sigma-1}} \\
+& \varepsilon ( \vn{\nabla \phi}_{\bar{S}^{\sigma}}+ \vn{\nabla A}_{S^{\sigma} \times Y^{\sigma}} )
\end{align*}
Choosing $ \varepsilon $ small enough gives \eqref{higher:reg}.
\begin{remark} An alternative approach would be to use \eqref{hs:fe} for $ s\in (\sigma,\sigma+\delta_1) $ together with the almost optimal local well-posedness result in \cite{SSel} and its higher dimensional analogue.
\end{remark}
\subsection{Proof of continuous dependence on data} Now we prove statement (2) of Theorem \ref{thm:main} and that every solution obtained by Theorem \ref{thm:main-iter} can be approximated by smooth solutions.
Let $ \phi[0] \in H^{\sigma} \times H^{\sigma-1} , \ A_x[0] \in \dot{H}^{\sigma} \times \dot{H}^{\sigma-1} $ be an initial data set for MKG. For large $ m $ we denote $ \phi^{(m)}[0]=P_{\leq m} \phi[0],\ A_x^{(m)}[0]=P_{\leq m} A_x[0] $. Let $ \phi, A $ (resp. $\phi^{(m)}, A^{(m)} $) be the solutions with initial data $ \phi[0], A_x[0] $ (resp. $ \phi^{(m)}[0], A_x^{(m)}[0] $) given by Theorem \ref{thm:main-iter}.
\begin{lemma}[Approximation by smooth solutions] \label{apr:smoth:lm} Let $ (c_k) $ be an $ H^{\sigma} \times H^{\sigma-1} $ admissible frequency envelope for $ \phi[0] $, i.e. $ \vn{\bar{P}_k \phi[0] }_{H^{\sigma} \times H^{\sigma-1}} \leq c_k $. Then
$$ \vn{\phi-\phi^{(m)}}_{\bar{S}^{\sigma}}+\vn{A-A^{(m)}}_{S^{\sigma} \times Y^{\sigma}} \lesssim \big( \sum_{k>m} c_k^2 \big)^{1/2}+ \vn{P_{> m} A_x[0]}_{\dot{H}^{\sigma} \times \dot{H}^{\sigma-1}}.
$$
\end{lemma}
\begin{proof}
Clearly, $ c $ is also a frequency envelope for $ \phi^{(m)}[0] $. Applying the bound \eqref{eq:main-fe} to $ (\phi,A) $ and $ (\phi^{(m)}, A^{(m)}) $ separately, we obtain the estimate above for $ P_{>m}(\phi-\phi^{(m)}) $ and $ P_{>m}(A-A^{(m)}) $. For the terms $ P_{\leq m}(\phi-\phi^{(m)}) $ and $ P_{\leq m}(A-A^{(m)}) $ we use the weak Lipschitz dependence bound \eqref{eq:weak-lip}:
\begin{align*}
\vn{P_{\leq m}(\phi-\phi^{(m)})}_{\bar{S}^{\sigma}} & + \vn{P_{\leq m}(A-A^{(m)})}_{S^{\sigma} \times Y^{\sigma}} \\
& \lesssim 2^{\delta m} \vn{P_{\leq m}(\phi-\phi^{(m)})}_{\bar{S}^{\sigma-\delta}}+ 2^{\delta m} \vn{P_{\leq m}(A-A^{(m)})}_{S^{\sigma-\delta} \times Y^{\sigma-\delta}} \\
& \lesssim 2^{\delta m} \vn{P_{>m} \phi[0]}_{H^{\sigma-\delta} \times H^{\sigma-\delta-1}}+2^{\delta m} \vn{P_{>m} A_x [0]}_{\dot{H}^{\sigma-\delta} \times \dot{H}^{\sigma-\delta-1}}
\end{align*}
which concludes the proof of the lemma, since $ 2^{\delta m} \vn{P_{>m} \phi[0]}_{H^{\sigma-\delta} \times H^{\sigma-\delta-1}} \lesssim \vn{P_{>m} \phi[0]}_{H^{\sigma} \times H^{\sigma-1}} \lesssim \big( \sum_{k>m} c_k^2 \big)^{1/2} $, and similarly for the $ A_x[0] $ term.
\end{proof}
We continue with the proof of statement (2) of Theorem \ref{thm:main}. Let $ (\phi^n[0],A_x^n[0]) $ be a sequence of initial data sets converging to $ (\phi[0],A_x[0]) $ in $ H^{\sigma} \times H^{\sigma-1} \times \dot{H}^{\sigma} \times \dot{H}^{\sigma-1} $. For large $ n $ we denote by $ (\phi^n,A^n) $ the corresponding solutions given by Theorem \ref{thm:main-iter}. We also define the approximations $ (\phi^{n(m)},A^{n(m)}) $ as above.
For $ T>0 $ and $ \epsilon>0 $ we prove
\begin{equation} \vn{\phi-\phi^n}_{C_t([0,T]; H^{\sigma} \times H^{\sigma-1})}+\vn{A-A^n}_{C_t([0,T]; \dot{H}^{\sigma} \times \dot{H}^{\sigma-1})} <\epsilon
\end{equation}
for large $ n $. We apply Lemma \ref{apr:smoth:lm} with the frequency envelopes \eqref{ck:fe} and
$$ c_k^n=\sum_{k_1 \geq 0} 2^{-\delta_1 \vm{k-k_1}} \vn{\bar{P}_{k_1} \phi^n[0]}_{H^{\sigma} \times H^{\sigma-1}}. $$
Since $ \vn{\phi^n[0]-\phi[0]}_{H^{\sigma} \times H^{\sigma-1}}+\vn{A_x^n[0]-A_x[0]}_{\dot{H}^{\sigma} \times \dot{H}^{\sigma-1}} \to 0 $, there exists $ m $ such that
$$ \sum_{k >m} c_k^2 < \epsilon^6, \quad \sum_{k >m} (c_k^n)^2 < \epsilon^6, \quad \vn{P_{> m} A_x[0]}_{\dot{H}^{\sigma} \times \dot{H}^{\sigma-1}}<\epsilon^3, \quad \vn{P_{> m} A_x^n[0]}_{\dot{H}^{\sigma} \times \dot{H}^{\sigma-1}} < \epsilon^3
$$
for all $ n \geq n_{\epsilon} $. By Lemma \ref{apr:smoth:lm} we obtain
$$ \vn{\phi-\phi^{(m)}}_{\bar{S}^{\sigma}}+\vn{A-A^{(m)}}_{S^{\sigma} \times Y^{\sigma}} <\epsilon^2, \quad \vn{\phi^n-\phi^{n(m)}}_{\bar{S}^{\sigma}}+\vn{A^n-A^{n(m)}}_{S^{\sigma} \times Y^{\sigma}} < \epsilon^2
$$
and it remains to prove
$$
\vn{\phi^{(m)}-\phi^{n(m)}}_{C_t([0,T]; H^{\sigma} \times H^{\sigma-1})}+\vn{A^{(m)}-A^{n(m)}}_{C_t([0,T]; \dot{H}^{\sigma} \times \dot{H}^{\sigma-1})} <\frac{1}{2} \epsilon
$$
Now the solutions are smooth enough that we may apply Prop. \ref{subcritical:prop}. It is a simple matter to note that for large $ n $ the $ H^s $ norms of the differences remain small for all $ t \in [0,T] $. This concludes the proof.
\subsection{Proof of scattering} Here we discuss the proof of statement (3) of Theorem \ref{thm:main}. Without loss of generality we set $ \pm=+ $.
Let $ (\phi, A) $ be the solution with initial data $ (\phi[0], A_x[0]) $ given by Theorem \ref{thm:main-iter} and let $ A^{free} $ be the free wave development of $ A_x[0] $. We denote by $ S^{A^{free}}(t',t) $ the propagator from time $ t $ to $ t' $ for the covariant equation $ \Box_m^{A^{free}} \phi=0 $, given by Prop. \ref{cov:Afree}, which implies, for any $ t<t' $
$$ \vn{\phi[t']- S^{A^{free}}(t',t) \phi[t]}_{H^{\sigma} \times H^{\sigma-1}} \lesssim \vn{\Box_m^{A^{free}} \phi}_{(\bar{N}^{\sigma-1} \cap L^2 H^{\sigma-\frac{3}{2}})[t,\infty)}
$$
where the last norm is the time-interval-localized norm (see \cite[Proposition~3.3]{OT2}). Using the estimates from Prop. \ref{prop:phi:est} as in the proof of existence shows that the RHS is finite for, say, $ t=0 $, and that it vanishes as $ t \to \infty $. By the uniform boundedness of $ S^{A^{free}}(0,t) $ on $ H^{\sigma} \times H^{\sigma-1} $ (Prop. \ref{cov:Afree}) and the formula $ S^{A^{free}}(t'',t)=S^{A^{free}}(t'',t') S^{A^{free}}(t',t) $ it follows that, as $ t \to \infty $,
$$ \vn{S^{A^{free}}(0,t') \phi[t']- S^{A^{free}}(0,t) \phi[t]}_{H^{\sigma} \times H^{\sigma-1}} \lesssim \vn{\phi[t']- S^{A^{free}}(t',t) \phi[t]}_{H^{\sigma} \times H^{\sigma-1}} \to 0
$$
Therefore the limit $ \lim_{t \to \infty} S^{A^{free}}(0,t) \phi[t]=:\phi^{\infty}[0] $ exists in $ H^{\sigma} \times H^{\sigma-1} $ and $ \phi^{\infty}[0] $ is taken as the initial data for $ \phi^{\infty} $ in Theorem \ref{thm:main}.
The proof of scattering for $ A_x $ is similar; we omit the details.
\section{Core bilinear forms} \label{bil:forms:sec}
This section is devoted to the analysis of translation-invariant bilinear forms.
\subsection{The $ \mathcal M $ form} \label{Mform}
During the proof of the trilinear estimate, we will need to consider terms like
$$ P_{k'} Q_j \mathcal M( \bar{Q}_{<j} \phi^1_{k_1}, \bar{Q}_{<j} \phi^2_{k_2} ) $$
where
\begin{equation} \label{M:form} \mathcal M(\phi^1,\phi^2) \vcentcolon= \partial_{\alpha}( \phi^1\cdot \partial^{\alpha} \phi^2) \end{equation}
is a null-form adapted to the wave equation, while $ \phi^1_{k_1}, \phi^2_{k_2} $ are assumed to be high-frequency Klein-Gordon waves of low $ \bar{Q} $-modulation, with low frequency output.
To obtain effective bounds, we need to split
\begin{equation} \label{M:form:decom} \mathcal M=\mathcal R_0^{\pm}+\mathcal M_0-\mathcal N_0 \end{equation}
where, denoting $ \Xi^i=(\tau_i,\xi_i) $, the symbols of $ \mathcal M, \mathcal R_0^{\pm}, \mathcal M_0, \mathcal N_0 $ are
\begin{equation} m(\Xi^1,\Xi^2) =(\tau_1+\tau_2) \tau_2-(\xi_1+\xi_2) \cdot \xi_2, \label{m:form:symb} \end{equation}
and, respectively,
\begin{align}
r_0^{\pm}(\Xi^1,\Xi^2) & \vcentcolon= \tau_1(\tau_2 \pm \jb{\xi_2})+ (\jb{\xi_1} \mp \tau_1) \jb{\xi_2}+(\tau_2^2-\jb{\xi_2}^2),\label{r0:form:symb} \\
m_0(\Xi^1,\Xi^2) & \vcentcolon= 1+\vm{\xi_1} \vm{\xi_2}-\jb{\xi_1} \jb{\xi_2},\label{m0:form:symb} \\
n_0(\Xi^1,\Xi^2) & \vcentcolon= \vm{\xi_1} \vm{\xi_2}+\xi_1 \cdot \xi_2. \label{n0:form:symb}
\end{align}
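A direct computation confirms \eqref{M:form:decom} at the level of symbols: in $ r_0^{\pm}+m_0-n_0 $ the terms $ \pm \tau_1 \jb{\xi_2} $, $ \jb{\xi_1} \jb{\xi_2} $ and $ \vm{\xi_1} \vm{\xi_2} $ cancel, leaving
$$ \tau_1 \tau_2+\tau_2^2-\jb{\xi_2}^2+1-\xi_1 \cdot \xi_2 = (\tau_1+\tau_2) \tau_2-(\xi_1+\xi_2) \cdot \xi_2 = m(\Xi^1,\Xi^2), $$
using $ \jb{\xi_2}^2=1+\vm{\xi_2}^2 $.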
\subsection{The $ \mathcal M_0$ form} \label{M0form}
Let $ \mathcal M_0 (\phi^1,\phi^2) $ be the bilinear form with symbol
$$ m_0(\xi_1,\xi_2)=1+\vm{\xi_1} \vm{\xi_2}-\jb{\xi_1} \jb{\xi_2}. $$
Notice that this multiplier is a radial function in $ \xi_1 $ and $ \xi_2 $.
The following two statements are aimed at obtaining an exponential gain for $ \mathcal M_0 $ in the high $ \times $ high $ \to $ low frequency interactions.
\begin{lemma} \label{lemma:m:bound}
The following bounds hold:
\begin{align*}
\vm{m_0(\xi_1,\xi_2)} \leq & \frac{\vm{\xi_1+\xi_2}^2}{\jb{\xi_1}\jb{\xi_2}} \\
\vm{\partial_{\xi_i} m_0(\xi_1,\xi_2)} \leq & \frac{\vm{\xi_1+\xi_2}}{\jb{\xi_i}} \Big( \frac{1}{\jb{\xi_1}}+\frac{1}{\jb{\xi_2}} \Big), \qquad i=1,2 \\
\vm{\partial_{\xi_i}^{\alpha} m_0(\xi_1,\xi_2)} \lesssim & \frac{\jb{\xi_1}\jb{\xi_2}}{\jb{\xi_i}^{\vm{\alpha}+2}}, \qquad \qquad \quad \vm{\alpha} \geq 2, \ i=1,2.
\end{align*}
\end{lemma}
We defer the proof of this lemma until after the following proposition, which provides the exponential gain needed for estimate \eqref{Q3:est}.
\begin{proposition} \label{M0:form}
Let $ k \geq 0 $, $ k' \leq k-C $ and $ 1 \leq p,q_1,q_2 \leq \infty $ with $ p^{-1}=q_1^{-1}+q_2^{-1} $. Let $ \mathcal C_1, \mathcal C_2 $ be boxes of size $ \simeq (2^{k'})^d $ located so that $ \mathcal C_i \subset \{ \jb{\xi_i} \simeq 2^k \} $ and
$$ \mathcal C_1 + \mathcal C_2 \subset \{ \vm{\xi} \leq 2^{k'+2} \} $$
Then, for all functions $ \phi_1, \phi_2 $ with Fourier support in $ \mathcal C_1, \mathcal C_2 $ we have
\begin{equation} \label{eq:m-basic}
\vn{\mathcal M_0 (\phi_1,\phi_2)}_{L^p} \lesssim 2^{2(k'-k)} \vn{\phi_1}_{L^{q_1}} \vn{\phi_2}_{L^{q_2}}.
\end{equation}
\end{proposition}
\begin{proof}
We expand $m_0(\xi_1,\xi_2)$ as a rapidly decreasing sum of tensor products
\begin{equation} \label{eq:m-basic:dcmp}
m_0(\xi_1,\xi_2) = \sum_{{\bf j}, {\bf k} \in \mathbb Z^{d}} c_{{\bf j}, {\bf k}} \, a^1_{{\bf j}}(\xi_1) a^2_{{\bf k}}(\xi_2) \quad \hbox{ for } (\xi_1,\xi_2) \in \mathcal C_1 \times \mathcal C_2
\end{equation}
where, denoting $ \mu=2^{2(k'-k)} $, for any $n \geq 0$, $c_{{\bf j}, {\bf k}}$ obeys
\begin{equation} \label{eq:m-basic:c}
\abs{c_{{\bf j}, {\bf k}}} \lesssim_{n} \mu (1+\abs{{\bf j}} + \abs{{\bf k}})^{-n},
\end{equation}
and for some universal constant $n_{0} > 0$, the $a_{{\bf j}}^i$ satisfy
\begin{equation} \label{eq:m-basic:ab}
\nrm{a_{{\bf j}}^i(D)}_{L^{q} \to L^{q}} \lesssim (1+\abs{{\bf j}})^{n_{0}}, \qquad i=1,2. \end{equation}
Assuming \eqref{eq:m-basic:dcmp}--\eqref{eq:m-basic:ab}, the desired estimate \eqref{eq:m-basic} follows immediately. Indeed, \eqref{eq:m-basic:dcmp} implies that
\begin{equation*}
\mathcal M_0(\phi_{1}, \phi_{2}) = \sum_{{\bf j}, {\bf k} \in \mathbb Z^{d}} c_{{\bf j}, {\bf k}} \cdot a_{{\bf j}}^1(D) \phi_{1} \cdot a_{{\bf k}}^2(D) \phi_{2},
\end{equation*}
so \eqref{eq:m-basic} follows by applying H\"older's inequality and \eqref{eq:m-basic:ab}, then using \eqref{eq:m-basic:c} to sum up in ${\bf j}, {\bf k} \in \mathbb Z^{d}$.
Let the boxes $ \tilde{\mathcal C}_1, \tilde{\mathcal C}_2 $ be enlargements of $ \mathcal C_1, \mathcal C_2 $ of size $ \simeq (2^{k'})^d $ and let $ \chi_1, \chi_2 $ be bump functions adapted to these sets which are equal to $ 1 $ on $ \mathcal C_1 $, respectively $\mathcal C_2 $.
Then for $ (\xi_1,\xi_2) \in \mathcal C_1 \times \mathcal C_2 $, we have $ m_0(\xi_1,\xi_2)=m_0(\xi_1,\xi_2) \chi_1(\xi_1) \chi_2(\xi_2) $. Performing a Fourier series expansion of $m_0(\xi_1,\xi_2) \chi_1(\xi_1) \chi_2(\xi_2) $ by viewing $ \tilde{\mathcal C}_1\times \tilde{\mathcal C}_2$ as a torus, we may write
\begin{equation} \label{eq:m-basic:fs}
m_0(\xi_1,\xi_2) = \sum_{{\bf j}, {\bf k} \in \mathbb Z^{d}} c_{{\bf j}, {\bf k}} \, e^{2 \pi i {\bf j} \cdot \xi_1'/2^{k'+c}} e^{2 \pi i {\bf k} \cdot \xi_2'/2^{k'+c}} \quad \hbox{ for } (\xi_1,\xi_2) \in \mathcal C_1 \times \mathcal C_2,
\end{equation}
where $ \xi_i'=\xi_i-\xi_i^0 $ and $ \xi_i^0 $ is the center of $ \mathcal C_i $. Defining
\begin{equation*}
a_{{\bf j}}^i(\xi_i) = \chi_i(\xi_i) e^{2 \pi i {\bf j} \cdot \xi_i'/2^{k'+c}}, \qquad i=1,2,
\end{equation*}
we obtain the desired decomposition \eqref{eq:m-basic:dcmp} from \eqref{eq:m-basic:fs}.
To prove \eqref{eq:m-basic:c}, we use the Fourier inversion formula
\begin{equation*}
c_{{\bf j}, {\bf k}} = \frac{1}{\hbox{ Vol }( \tilde{\mathcal C}_1\times \tilde{\mathcal C}_2)} \int_{ \tilde{\mathcal C}_1\times \tilde{\mathcal C}_2} m_0(\xi_1^0+\xi_1',\xi_2^0+\xi_2') \chi_1 \chi_2 e^{-2 \pi i( {\bf j} \cdot \xi_1'+ {\bf k} \cdot \xi_2')/2^{k'+c}} \, \mathrm{d} \xi_1' \, \mathrm{d} \xi_2'.
\end{equation*}
By Lemma \ref{lemma:m:bound}, for $ (\xi_1,\xi_2) \in \mathcal C_1 \times \mathcal C_2 $, since $ \vm{\xi_1+\xi_2} \lesssim 2^{k'} $, for any $ \vm{\alpha} \geq 0 $ we have
$$ \vm{ (2^{k'} \partial_{\xi_i})^{\alpha} m_0(\xi_1,\xi_2)} \lesssim \mu, \qquad i=1,2 $$
Thus, integrating by parts in $ \xi_{1}' $ [resp. in $\xi_{2}' $], we obtain
\begin{equation*}
\abs{c_{{\bf j}, {\bf k}}} \lesssim_{n} \mu (1+\abs{{\bf j}})^{-n}, \qquad \abs{c_{{\bf j}, {\bf k}}} \lesssim_{n} \mu (1+\abs{{\bf k}})^{-n}, \quad n \geq 0.
\end{equation*}
These bounds imply \eqref{eq:m-basic:c}. Next, we have
$$ \vm{(2^{k'} \partial_{\xi_i})^{\alpha} a_{{\bf j}}^i(\xi_i) }\lesssim (1+\abs{{\bf j}})^{\vm{\alpha}}, \qquad \vm{\alpha} \geq 0, \ i=1,2 $$
This implies that the convolution kernel of $ a_{{\bf j}}^i(D_i) $ satisfies $\nrm{\check{a}^i_{{\bf j}}}_{L^{1}} \lesssim (1+\abs{{\bf j}})^{n_{0}} $ for $ n_0=d+1 $, which gives \eqref{eq:m-basic:ab}.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lemma:m:bound}] The bounds follow from elementary computations. Indeed,
$$ -m_0(\xi_1,\xi_2)=\frac{(\vm{\xi_1}-\vm{\xi_2} )^2}{1+\vm{\xi_1} \vm{\xi_2}+\jb{\xi_1} \jb{\xi_2}}\leq\frac{\vm{\xi_1+\xi_2}^2}{\jb{\xi_1}\jb{\xi_2}}. $$
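Here we used the identity $ \jb{\xi_1}^2 \jb{\xi_2}^2-(1+\vm{\xi_1} \vm{\xi_2})^2=(\vm{\xi_1}-\vm{\xi_2})^2 $ together with the reverse triangle inequality $ \big\vert \vm{\xi_1}-\vm{\xi_2} \big\vert = \big\vert \vm{\xi_1}-\vm{-\xi_2} \big\vert \leq \vm{\xi_1+\xi_2} $.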
Next, assume without loss of generality that $ i=1 $. Since $ m_0 $ is radial in $ \xi_1 $ it suffices to compute
$$ \partial_{\vm{\xi_1}} m_0(\xi_1,\xi_2)=\frac{1}{\jb{\xi_1}} \big( \jb{\xi_1} \vm{\xi_2}- \jb{\xi_2} \vm{\xi_1} \big)=\frac{1}{\jb{\xi_1}} \frac{\vm{\xi_2}^2-\vm{\xi_1}^2}{\jb{\xi_1} \vm{\xi_2}+\jb{\xi_2} \vm{\xi_1}} $$
which gives the desired bound.
Finally, the estimate for higher derivatives follows from $ \vm{ \partial_r^n \jb{r}} \lesssim \jb{r}^{-n-1} $ for $ n \geq 2 $, which is straightforward to prove by induction.
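More precisely, an induction shows that for $ n \geq 2 $ one has $ \partial_r^n \jb{r}=P_n(r) \jb{r}^{1-2n} $, where $ P_n $ is a polynomial of degree at most $ n-2 $: indeed $ P_2=1 $ and $ P_{n+1}=(1+r^2) P_n'+(1-2n) r P_n $, so that $ \vm{\partial_r^n \jb{r}} \lesssim \jb{r}^{(n-2)+(1-2n)}=\jb{r}^{-n-1} $.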
\end{proof}
\subsection{The $ \mathcal N_0$ and $ \tilde{\mathcal N}_0 $ forms}
We consider the bilinear form $ \tilde{\mathcal N}_0(\phi^1,\phi^2) $ on $ \mathbb{R}^{d+1} $ with symbol
\begin{equation} \label{nn0:eq} \tilde{n}(\Xi^1,\Xi^2)= \frac{1}{\vm{(\tau_1,\xi_1)}}\frac{1}{\vm{(\tau_2,\xi_2)}}(\tau_1 \tau_2- \xi_1 \cdot \xi_2) \end{equation}
and $ \mathcal N_0 (\phi^1,\phi^2) $ on $ \mathbb{R}^d $ with symbol
\begin{equation} \label{n0:eq} n_0(\xi_1,\xi_2)=\vm{\xi_1} \vm{\xi_2}+\xi_1 \cdot \xi_2. \end{equation}
\begin{proposition} \label{N0:form}
Let $ k_1, k_2 \in \mathbb{Z} $, $ l' \leq 0 $, and signs $ \pm_1, \pm_2 $. Let $ \kappa_1, \kappa_2 $ be spherical caps of angle $ \simeq 2^{l'} $ centered at $ \omega_1, \omega_2 $ such that $ \angle(\pm_1 \omega_1,\pm_2 \omega_2) \lesssim 2^{l'} $. Let $ X_1, X_2 $ be translation-invariant spaces and $ L $ be a translation-invariant bilinear operator. Suppose that
$$ \vn{L( \phi^1, \phi^2)}_X \lesssim C_{S_1,S_2} \vn{\phi^1}_{X_1} \vn{\phi^2}_{X_2} $$
holds for all $ \phi_1, \phi_2 $ which are Fourier-supported, respectively, in some subsets
$$ S_i \subset E_i \vcentcolon= \{ \vm{\xi_i} \simeq 2^{k_i}, \ \vm{\tau_i \mp_i \vm{\xi_i}} \lesssim 2^{k_i+2l'}, \ \frac{\xi_i}{\vm{\xi_i}} \in \kappa_i \} ,\qquad i=1,2. $$
Then one also has
\begin{equation} \label{eq:nn0:basic} \vn{L(\partial_{\alpha} \phi^1, \partial^{\alpha} \phi^2) }_X \lesssim 2^{2l'} C_{S_1,S_2} \vn{\nabla_{t,x} \phi^1}_{X_1} \vn{\nabla_{t,x} \phi^2}_{X_2} \end{equation}
for all such $ \phi_1, \phi_2 $.
\end{proposition}
\begin{corollary} \label{L2:NFnullFrames:cor} Under the conditions from Proposition \ref{L2:nullFrames}, for $ j \leq \min(k,k_2)+2l'-C$ one has
$$
\vn{ \partial^{\alpha} P_{\mathcal C} \bar{Q}^{\pm_1}_{<j} \phi_k \cdot \partial_{\alpha} P_{\mathcal C'} \bar{Q}^{\pm_2}_{<j} \varphi_{k_2} }_{L^2_{t,x}} \lesssim 2^{l'} \vn{ P_{\mathcal C} \bar{Q}^{\pm_1}_{<j}\nabla \phi_k}_{NE_\mathcal C^{\pm_1}} \vn{ P_{\mathcal C'} \bar{Q}^{\pm_2}_{<j}\nabla \varphi_{k_2}}_{PW_{\mathcal C'}^{\pm_2}}
$$
\end{corollary}
\begin{remark} \label{NF:remark}
One may of course formulate analogues of Prop. \ref{N0:form} also for multilinear forms, such as the trilinear expressions $ L(\phi^1, \partial_{\alpha} \phi^2, \partial^{\alpha} \phi^3) $ that occur in the proofs of \eqref{SmallAnglesLargeMod}, \eqref{SmallAngleSmallMod}, \eqref{SmallAngleRemainders}. Checking that the same argument applies for them is straightforward and is left to the reader.
\end{remark}
\begin{proof}[Proof of Prop. \ref{N0:form}] \pfstep{Step~1} Let $ \ell(\Xi^1,\Xi^2) $ be the multiplier symbol of $ L $. In \eqref{eq:nn0:basic} we have the operator with symbol $ \ell(\Xi^1,\Xi^2) \tilde{n}(\Xi^1,\Xi^2) $ applied to $ \vm{D_{t,x}} \phi^1, \vm{D_{t,x}} \phi^2$.
The idea is to perform a separation of variables in the form
\begin{equation} \label{eq:nf-basic:dcmp}
\tilde{n}(\Xi^1,\Xi^2) = \sum_{{\bf j}, {\bf k} \in \mathbb Z^{d}} c_{{\bf j}, {\bf k}} \, a_{{\bf j}}(\Xi^1) b_{{\bf k}}(\Xi^2) \quad \hbox{ for } (\Xi^1,\Xi^2) \in E_{1} \times E_{2}
\end{equation}
where for each $n \geq 0$ the coefficients obey
\begin{equation} \label{eq:nf-basic:c}
\abs{c_{{\bf j}, {\bf k}}} \lesssim_{n} 2^{2l'} (1+\abs{{\bf j}} + \abs{{\bf k}})^{-n},
\end{equation}
and for some universal constant $n_{0} > 0$, the operators $a_{{\bf j}}$ and $b_{{\bf k}}$ satisfy
\begin{equation} \label{eq:nf-basic:ab}
\nrm{a_{{\bf j}}(D_{t,x})}_{X_1 \to X_1} \lesssim (1+\abs{{\bf j}})^{n_{0}}, \quad \nrm{b_{{\bf k}}(D_{t,x}) }_{X_2 \to X_2} \lesssim (1+\abs{{\bf k}})^{n_{0}},
\end{equation}
From these, \eqref{eq:nn0:basic} follows immediately.
\newline
We perform a change of variables such that $ \tau_i^{\omega} $ is the coordinate along the (essentially null) radial direction, $ \tau_i^{\omega^{\perp}} $ is orthogonal to it, and $ \xi_i' $ are angular-type coordinates in the $ \xi $ hyperplane, so that $ \vm{\xi_i'} \simeq 2^{k_i} \theta_i $, where $ \theta_i $ is the angle between $ \xi_i $ and the center of $ \kappa_i $. We denote $ \tilde{\Xi_i}=( \tau_i^{\omega},\tau_i^{\omega^{\perp}}, \xi_i') $.
Denote by $ \tilde{E}_i $ an enlargement of $ E_i $, chosen to be a rectangular region of size $ \simeq 2^{k_i} \times 2^{k_i+2l'} \times (2^{k_i+l'})^{d-1} $ (consistently with the coordinates $ ( \tau_i^{\omega},\tau_i^{\omega^{\perp}}, \xi_i') $). Let $ \chi_i $ be a bump function adapted to $ \tilde{E}_i $, which is equal to $ 1 $ on $ E_i $.
\pfstep{Step~2} We claim the following bounds for $ (\Xi^1,\Xi^2) \in E_{1} \times E_{2} $:
\begin{align}
\vm{\tilde{n}(\Xi^1,\Xi^2)} & \lesssim 2^{2l'} \label{n:formbd:1} \\
\vm{ \partial_{\xi_i'} \tilde{n}(\Xi^1,\Xi^2)} & \lesssim 2^{-k_i} 2^{l'}, \qquad \quad \ i=1,2; \label{n:formbd:2} \\
\vm{ \partial_{\Xi_i}^{\alpha} \tilde{n}(\Xi^1,\Xi^2) }& \lesssim \vm{\Xi^i}^{-\vm{\alpha}} , \qquad \ i=1,2. \label{n:formbd:3}
\end{align}
Recall \eqref{nn0:eq}. We write
$$ \tau_1 \tau_2- \xi_1 \cdot \xi_2=(\tau_1 \mp_1 \vm{\xi_1} )\tau_2 \pm_1 \vm{\xi_1} (\tau_2 \mp_2 \vm{\xi_2}) \pm_1 \pm_2 \vm{\xi_1} \vm{\xi_2} \big(1-\cos\angle(\pm_1 \xi_1,\pm_2 \xi_2 ) \big) $$
which implies \eqref{n:formbd:1}, since on $ E_1 \times E_2 $ each term on the right-hand side is $ \lesssim 2^{k_1+k_2+2l'} $ (using $ 1-\cos \theta \simeq \theta^2 $) while $ \vm{\Xi^1} \vm{\Xi^2} \simeq 2^{k_1+k_2} $. It is easy to see that
$$ \vm{ \partial_{\xi_i'} \tilde{n}(\Xi^1,\Xi^2)} \lesssim 2^{-k_i} \sin \angle(\xi_1,\xi_2) $$
which implies \eqref{n:formbd:2}, while \eqref{n:formbd:3} follows from the fact that $ \tilde{n} $ is homogeneous in both $ \Xi^1,\Xi^2 $.
\pfstep{Step~3} Performing a Fourier series expansion of $ \tilde{n}(\tilde{\Xi_1},\tilde{\Xi_2}) \chi_1(\tilde{\Xi_1}) \chi_2(\tilde{\Xi_2}) $ by viewing $ \tilde{E}_1 \times \tilde{E}_2 $ as a torus, we may write
\begin{equation} \label{eq:nf-basic:fs}
\tilde{n}(\tilde{\Xi_1},\tilde{\Xi_2}) = \sum_{{\bf j}, {\bf k} \in \mathbb Z^{d}} c_{{\bf j}, {\bf k}} \, e^{2 \pi i {\bf j} \cdot D_{1} \tilde{\Xi_1}} e^{2 \pi i {\bf k} \cdot D_{2} \tilde{\Xi_2}} \quad \hbox{ for } (\tilde{\Xi_1},\tilde{\Xi_2}) \in E_{1} \times E_{2},
\end{equation}
where $D_{1}, D_{2}$ are diagonal matrices of the form
\begin{equation} D_{i} = \mathrm{diag} \, (O(2^{-k_i}), O(2^{-k_i-2l'}), O(2^{-k_i-l'}), \ldots, O(2^{-k_i-l'})).
\end{equation}
Defining
\begin{equation*}
a_{{\bf j}}(\Xi_1) = ( \chi_1(\tilde{\Xi_1}) e^{2 \pi i {\bf j} \cdot D_{1} \tilde{\Xi_1}})(\Xi_1), \quad
b_{{\bf k}}(\Xi_2) = (\chi_2(\tilde{\Xi_2}) e^{2 \pi i {\bf k} \cdot D_{2} \tilde{\Xi_2}})(\Xi_2),
\end{equation*}
we obtain the desired decomposition \eqref{eq:nf-basic:dcmp} from \eqref{eq:nf-basic:fs}.
To prove \eqref{eq:nf-basic:c}, we use the Fourier inversion formula
\begin{equation*}
c_{{\bf j}, {\bf k}} = \frac{1}{\hbox{Vol}(\tilde{E}_1 \times \tilde{E}_2)} \int_{\tilde{E}_1 \times \tilde{E}_2} \tilde{n}(\tilde{\Xi_1},\tilde{\Xi_2}) \chi_1(\tilde{\Xi_1}) \chi_2(\tilde{\Xi_2})e^{- 2 \pi i {\bf j} \cdot D_{1} \tilde{\Xi_1}} e^{- 2 \pi i {\bf k} \cdot D_{2} \tilde{\Xi_2}} \, \mathrm{d} \tilde{\Xi_1} \, \mathrm{d} \tilde{\Xi_2}.
\end{equation*}
Integrating by parts with respect to $ \tau_i^{\omega} $, using the homogeneity of $ \tilde{n} $ and \eqref{n:formbd:1}, we obtain
\begin{equation*}
\abs{c_{{\bf j}, {\bf k}}} \lesssim_{n} 2^{2l'} (1+\abs{{\bf j}_{1}})^{-n} \quad [\hbox{resp. } \abs{c_{{\bf j}, {\bf k}}} \lesssim_{n} 2^{2l'} (1+\abs{{\bf k}_{1}})^{-n}],
\end{equation*}
for any $n \geq 0$. On the other hand, for any $j = 2, \ldots, d+1$, integration by parts in $ \tau_i^{\omega^{\perp}} $ or in $ \xi_i' $ and using \eqref{n:formbd:1}-\eqref{n:formbd:3} yields
\begin{equation*}
\abs{c_{{\bf j}, {\bf k}}} \lesssim_{n} 2^{2l'} \abs{{\bf j}_{j}}^{-n} \quad [\hbox{resp. } \abs{c_{{\bf j}, {\bf k}}} \lesssim_{n} 2^{2l'} \abs{{\bf k}_{j}}^{-n}].
\end{equation*}
The preceding bounds imply \eqref{eq:nf-basic:c} as desired.
Finally, we need to establish \eqref{eq:nf-basic:ab}. We will describe the case of $a_{{\bf j}}(D)$. Consider the differential operators
$$ D_{\omega_1}=( 2^{k_1} \partial_{\tau_1^{\omega}}, 2^{k_1+2l'} \partial_{\tau_1^{\omega^{\perp}}}, 2^{k_1+l'} \partial_{\xi_1'} ) $$
For any multi-index $\alpha$, observe that
$$
\abs{D_{\omega_1}^{\alpha} (\chi_1(\tilde{\Xi_1}) e^{2 \pi i {\bf j} \cdot D_{1} \tilde{\Xi_1}})} \lesssim_{\alpha} (1+\abs{{\bf j}})^{\abs{\alpha}}.
$$
From this bound, it is straightforward to check that the convolution kernel of $a_{{\bf j}}(D)$ obeys $\nrm{\check{a}_{{\bf j}}}_{L^{1}} \lesssim (1+\abs{{\bf j}})^{n_{0}}$ for some universal constant $n_{0}$, which implies the bound \eqref{eq:nf-basic:ab} for $a_{{\bf j}}(D)$.
\end{proof}
\begin{proof}[Proof of Corollary \ref{L2:NFnullFrames:cor}] The corollary follows from Prop. \ref{N0:form} and Prop. \ref{L2:nullFrames}. Indeed, we apply Prop. \ref{N0:form} with $ k_1=k $ and the given $ k_2 $, taking $ C_{S_1,S_2}=2^{-l'} $ with
$$ S_1=\{ (\tau_1,\xi_1) \ | \ \xi_1 \in \mathcal C, \ \vm{\xi_1} \simeq 2^k, \ \vm{\tau_1 \mp_1 \jb{\xi_1}} \lesssim 2^j \} $$
and $ S_2 $ defined analogously. We check that $ S_i \subset E_i $. The condition \eqref{angSep} ensures that we can define $ \kappa_1,\kappa_2 $ appropriately. It remains to verify
$$ \vm{\tau_i \mp_i \vm{\xi_i}} \leq \vm{\tau_i \mp_i \jb{\xi_i}}+ \jb{\xi_i}-\vm{\xi_i} \lesssim 2^j+2^{-k_i} \lesssim 2^{k_i+2l'} $$
by the condition on $ j $ and \eqref{angSep}.
\end{proof}
If we replace $ \tau_i $ by $ \pm \vm{\xi_i} $ in \eqref{nn0:eq} we remove the time dependence in Prop. \ref{N0:form} and may formulate a spatial analogue for the bilinear form defined by $ \vm{\xi_1} \vm{\xi_2}\pm \xi_1 \cdot \xi_2 $. We consider the $ + $ case for $ \mathcal N_0(\phi_1,\phi_2) $ in \eqref{n0:eq}, which will be useful for high $ \times $ high $ \to $ low frequency interactions.
\begin{proposition} \label{n0:form:prop}
Let $ k \in \mathbb{Z},\ l \leq 0 $ and $ 1 \leq p,q_1,q_2 \leq \infty $ with $ p^{-1}=q_1^{-1}+q_2^{-1} $. Let $ \kappa_1, \kappa_2 $ be spherical caps of angle $ \simeq 2^l $ such that $ \angle( \kappa_1, -\kappa_2) \lesssim 2^l $.
Then, for all functions $ \phi_1, \phi_2 $ with Fourier support, respectively, in $ \{ \vm{\xi_i} \simeq 2^k, \ \xi_i/\vm{\xi_i} \in \kappa_i \}, \ i=1,2, $ we have
$$ \vn{\mathcal N_0 (\phi_1,\phi_2)}_{L^p} \lesssim 2^{2l+2k} \vn{\phi_1}_{L^{q_1}} \vn{\phi_2}_{L^{q_2}}.
$$
\end{proposition}
\begin{proof}
The proof is very similar to the proof of Prop. \ref{N0:form} and is omitted. The basic difference is that here one performs the Fourier series expansion on a $ \big( 2^{k} \times (2^{k+l})^{d-1} \big)^2 $-sized region in $ \mathbb{R}_{\xi}^{d} \times \mathbb{R}_{\xi}^{d} $ instead of $ \mathbb{R}_{\tau,\xi}^{d+1} \times \mathbb{R}_{\tau,\xi}^{d+1} $.
\end{proof}
\subsection{The $ \mathcal N_{ij} $ forms}
In this subsection we consider the null forms
\begin{equation} \label{Nijforms}
\mathcal N_{ij}(\phi,\varphi)=\partial_i \phi \partial_j \varphi- \partial_j \phi \partial_i \varphi. \end{equation}
We begin with a general result.
\begin{proposition} \label{ntht:form:prop}
Let $ N $ be a bilinear form with symbol $ n(\xi,\eta) $ assumed to be homogeneous of degree $ 0 $ in $ \xi,\eta $ and to obey
$$
\vm{n(\xi,\eta)} \leq A \, \angle(\xi,\eta).
$$
Let $ \omega_1, \omega_2 \subset \mathbb{S}^{d-1} $ be angular caps of radius $ r_i \leq 2^{-10}, \ i=1,2 $, and define $ \theta \vcentcolon= \max \{ \angle( \omega_1, \omega_2 ), r_1,r_2 \} $. Let $ 1 \leq p, q_1, q_2 \leq \infty $ be such that $ p^{-1}=q_1^{-1}+q_2^{-1} $. Let the functions $ f_1, f_2 $ be defined on $ \mathbb{R}^{d} $ with Fourier support in
$$ \{ \vm{\xi} \simeq 2^{k_i}, \ \frac{\xi}{\vm{\xi}} \in \omega_i \}, \qquad i=1,2. $$
Then we have
\begin{equation} \label{ntht:est}
\vn{N(f_1,f_2)}_{L^p} \lesssim \theta \vn{f_1}_{L^{q_1}} \vn{f_2}_{L^{q_2}}.
\end{equation}
\end{proposition}
\begin{proof}
This is a known proposition, whose proof is similar to the one of Prop. \ref{N0:form}. For a complete proof we refer to \cite[Prop. 7.8 and appendix of Section 7]{MD}.
\end{proof}
\begin{corollary} \label{Nij:form:prop}
Under the conditions of Prop. \ref{ntht:form:prop} we have
\begin{equation} \label{Nij:est}
\vn{\mathcal N_{ij}(f_1,f_2)}_{L^p} \lesssim \theta \vn{\nabla_x f_1}_{L^{q_1}} \vn{\nabla_x f_2}_{L^{q_2}}.
\end{equation}
\end{corollary}
\begin{proof}
This follows by writing $ \mathcal N_{ij}(f_1,f_2)=N(\vm{D}f_1,\vm{D} f_2) $ for $ N $ with symbol $ n(\xi,\eta)=\frac{\xi_i}{\vm{\xi}} \frac{\eta_j}{\vm{\eta}}- \frac{\xi_j}{\vm{\xi}} \frac{\eta_i}{\vm{\eta}} $ and applying \eqref{ntht:est}.
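Note that $ n $ is homogeneous of degree $ 0 $ and, with $ \hat{\xi}=\xi/\vm{\xi} $, we have $ \vm{n(\xi,\eta)} \leq \vm{\hat{\xi} \wedge \hat{\eta}}=\sin \angle(\xi,\eta) \leq \angle(\xi,\eta) $, so the hypotheses of Prop. \ref{ntht:form:prop} hold with $ A \simeq 1 $.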
\end{proof}
\subsection{The geometry of frequency interactions} \
Before we state the core $ \mathcal N_{ij} $ estimates that will be used to estimate the nonlinearity, we pause here to analyze the geometry of the frequencies of two hyperboloids interacting with a cone at low modulations.
The method for doing this is well known; see \cite[Section 13]{Tao2}, \cite[Lemma 6.5]{BH1}.
In what follows we denote by $ k_{\min}, k_{\max} $ the minimum and maximum of $ k_0,k_1,k_2 $, and similarly we consider $ j_{\min}, j_{\mathrm{med}}, j_{\max} $ for $ j_0, j_1, j_2 $.
\begin{lemma} \label{geom:cone} Let $ (k_0,k_1,k_2) \in \mathbb{Z} \times \mathbb{Z}_+ \times \mathbb{Z}_+ $, $ j_i \in \mathbb{Z} \ $ for $ i=0,1,2 $ and let $ \omega_i \subset \mathbb{S}^{d-1} $ be angular caps of radius $ r_i \ll 1 $. Let $ \phi^1, \phi^2 $ have Fourier support, respectively, in
$$ S_i=\{ \jb{\xi} \simeq 2^{k_i}, \ \frac{\xi}{\vm{\xi}} \in \omega_i, \ \vm{\tau-s_i \jb{\xi}} \simeq 2^{j_i} \}, \qquad i=1,2 $$
and let $ A $ have Fourier support in
$$ S_0=\{ \vm{\xi} \simeq 2^{k_0}, \ \frac{\xi}{\vm{\xi}} \in \omega_0, \ \vm{\tau-s_0 \vm{\xi} } \simeq 2^{j_0} \}, $$
for some signs $ s_0,s_1,s_2 $. Let $ L $ be translation-invariant and consider
\begin{equation} \label{expr}
\int A \cdot L(\phi^1,\phi^2) \,\mathrm{d} x \,\mathrm{d} t.
\end{equation}
\begin{enumerate} [leftmargin=*]
\item Suppose $ j_{\max} \leq k_{\min}+C_0 $. Then \eqref{expr} vanishes unless $$ j_{\max} \geq k_{\min}-2 \min(k_1,k_2)-C. $$
\item Suppose $ j_{\max} \leq k_{\min}+C_0 $ and define $ \ell \vcentcolon= \frac{1}{2}(j_{\max}-k_{\min})_{-} $.
Then \eqref{expr} vanishes unless $ 2^{\ell} \gtrsim 2^{-\min(k_1,k_2)} $ and
\begin{equation} \angle(s_i \omega_i, s_{i'} \omega_{i'} ) \lesssim 2^{\ell} 2^{k_{\min}-\min(k_i,k_{i'})} + \max (r_i,r_{i'}) \label{geom:ang} \end{equation}
for every pair $ i,i' \in \{ 0,1,2\} $.
\item If in addition we assume $ j_{\mathrm{med}} \leq j_{\max}-5 $, then in \eqref{geom:ang} we have $ \simeq $ instead of $ \lesssim $.
\item If $ j_{\mathrm{med}} \leq j_{\max}-5 $ then \eqref{expr} vanishes unless either $ j_{\max}=k_{\max}+O(1) $ or $ j_{\max} \leq k_{\min}+\frac{1}{2} C_0 $.
\end{enumerate}
\end{lemma}
\begin{proof} If \eqref{expr} does not vanish, there exist $ (\tau^i,\xi^i) \in S_i $, ($ i=0,1,2 $) such that $ \sum_i (\tau^i,\xi^i) =0 $. Consider
$$ H \vcentcolon= s_0 \vm{\xi_0} + s_1 \jb{\xi_1}+s_2 \jb{\xi_2}. $$
Using $ \sum_i \tau^i=0 $, note that
\begin{equation} \label{H:jmax}
\vm{H} = \vm{(s_0 \vm{\xi_0}-\tau^0) + (s_1 \jb{\xi_1}-\tau^1)+(s_2 \jb{\xi_2}-\tau^2) } \lesssim 2^{j_{\max}}. \end{equation}
When the signs $ s_i $ of the two highest frequencies are the same, we have $ \vm{H} \simeq 2^{k_{\max}} $. This implies $ j_{\max} \geq k_{\max}-C $ and with the assumption $ j_{\max} \leq k_{\min}+C_0 $ we deduce $ \vm{k_{\max}-k_{\min}} \leq C $ and $ \ell=O(1) $, in which case the statements are obvious.
Now suppose the high frequencies have opposite signs. By conjugation symmetry we may assume $ s_0=+ $. By swapping $ \phi^1 $ with $ \phi^2 $ if needed, we may assume $ s_2=- $ and that $ k_2 \neq k_{\min} $. We write
\begin{align*} H & =\vm{\xi_0} + s_1 \jb{\xi_1}- \jb{\xi_2}=\frac{(\vm{\xi_0} +s_1 \jb{\xi_1})^2-(1+\vm{\xi_0+\xi_1}^2)}{\vm{\xi_0} + s_1 \jb{\xi_1}+ \jb{\xi_2}} = \\
& =\frac{2 s_1 \vm{\xi_0} \jb{\xi_1}-2 \xi_0 \cdot \xi_1}{\vm{\xi_0} + s_1 \jb{\xi_1}+ \jb{\xi_2}}=\frac{2 s_1 \vm{\xi_0} \vm{\xi_1}-2 \xi_0 \cdot \xi_1}{\vm{\xi_0} + s_1 \jb{\xi_1}+ \jb{\xi_2}}+ \frac{2 s_1 \vm{\xi_0}}{\jb{\xi_1}+\vm{\xi_1}} \frac{1}{{\vm{\xi_0} + s_1 \jb{\xi_1}+ \jb{\xi_2}}} .
\end{align*}
where we have used $ \jb{\xi_1}-\vm{\xi_1}=(\jb{\xi_1}+\vm{\xi_1})^{-1} $.
If $ k_0=k_{\min} $ we are in the case $ (s_0,s_1,s_2)=(+,+,-) $. If $ k_0=k_{\max}+O(1) $, we are in the case $ k_1=k_{\min} $. Either way, we deduce
$$ \vm{H} \simeq 2^{k_{\min}} \angle(\xi^0,s_1 \xi^1)^2+ 2^{k_0-k_1-k_2} .$$
This and \eqref{H:jmax} prove Statements (1) and (2) for $ (i,i')=(0,1) $. The other pairs $ (i,i') $ are reduced to this case. Indeed, denote by $ \xi^l $ and $ \xi^h $ the low and high frequencies among $ \xi_0, \xi_1 $. By the law of sines we have
$$ \sin \angle(\xi^h, -\xi_2) =\frac{\vm{\xi^l}}{\vm{\xi_2}} \sin \angle(\xi^l, \xi^h) \lesssim 2^{\ell} 2^{k_{\min}-k_2} $$
which implies \eqref{geom:ang} in the high-high case. The remaining low-high case now follows from the previous two cases and the triangle inequality.
Statement (3) follows by noting that in the case $ j_{\mathrm{med}} \leq j_{\max}-5 $ we have $ \vm{H} \simeq 2^{j_{\max}} $. Similarly, for statement (4), since either $ \vm{H} \simeq 2^{k_{\max}} $ or $ \vm{H} \lesssim 2^{k_{\min}} $, the statement follows by choosing $ C_0 $ large enough.
\end{proof}
\begin{remark} \label{rk:geom:cone}
In the case $ k_{\min} \in \{k_i, k_{i'} \} $, Statement (3) can be rephrased as follows. If we denote $ 2^{\ell_0}= \angle(s_i \omega_i, s_{i'} \omega_{i'} ) $ and choose $ r_i,r_{i'} \ll 2^{\ell_0} $, then \eqref{expr} vanishes unless
$$ j_{\max}=k_{\min}+2 \ell_0+O(1).
$$
\end{remark}
\subsection{Core $ \mathcal N_{ij} $ and $ \mathcal L $ bilinear estimates} We now state the main bilinear estimates for $ \mathcal L $ and $ \mathcal N_{ij} $ when the inputs and the output have low modulation (i.e. less than the minimum frequency).
\begin{lemma} \label{lem:ellip}
Let $ (k_{0}, k_{1}, k_{2}) \in \mathbb{Z}\times \mathbb{Z}_+\times \mathbb{Z}_+$ be such that $\abs{k_{\mathrm{med}} - k_{\max}} \leq 5$. Let $\mathcal L$ be a translation invariant bilinear operator on $\mathbb R^{d}$ with bounded mass kernel. Then we have
\begin{align}
\nrm{P_{k_{0}} \mathcal L(\bar{P}_{k_{1}} f, \bar{P}_{k_{2}}g)}_{L^{2} L^{2}}
\lesssim & \nrm{f_{k_{1}}}_{L^{\infty} L^{2}} \Big( \sum_{\mathcal C_{k_{\min}}} \nrm{P_{\mathcal C_{k_{\min}}} g_{k_{2}}}_{L^{2} L^{\infty}}^{2} \Big)^{1/2}, \label{eq:ellip-0} \\
\nrm{P_{k_{0}} \mathcal L(\bar{P}_{k_{1}} f, \bar{P}_{k_{2}}g)}_{L^{1} L^{2}}
\lesssim & \nrm{f_{k_{1}}}_{L^{2} L^{2}} \Big( \sum_{\mathcal C_{k_{\min}}} \nrm{P_{\mathcal C_{k_{\min}}} g_{k_{2}}}_{L^{2} L^{\infty}}^{2} \Big)^{1/2}. \label{eq:ellip-1}
\end{align}
The same statement holds when $ (k_{0}, k_{1}, k_{2}) \in \mathbb{Z}_+ \times \mathbb{Z} \times \mathbb{Z}_+ $ or $ (k_{0}, k_{1}, k_{2}) \in \mathbb{Z}_+ \times \mathbb{Z}_+ \times \mathbb{Z} $ when we replace the LHS by $ \bar{P}_{k_{0}} \mathcal L(P_{k_{1}} f, \bar{P}_{k_{2}}g) $, respectively $ \bar{P}_{k_{0}} \mathcal L(\bar{P}_{k_{1}} f, P_{k_{2}}g) $.
\end{lemma}
\begin{proposition} \label{prop:no-nf}
Let $ (k_{0}, k_{1}, k_{2}) \in \mathbb Z_{+} \times \mathbb Z_{+} \times \mathbb Z $ be such that $\abs{k_{\max} - k_{\mathrm{med}}} \leq 5$ and $j \leq k_{\min} + C_{0}$. Define $\ell := \frac{1}{2} (j-k_{\min})_{-}$. Then, the following estimates hold:
\begin{equation} \label{eq:no-nf:est0}
\nrm{\bar{P}_{k_{0}} \bar{Q}_{j} ( \bar{Q}_{<j} f_{k_{1}} \cdot Q_{<j} g_{k_{2}})}_{L^{2}_{t,x}}
\lesssim \nrm{f_{k_{1}}}_{L^{\infty} L^{2}} \big( \sup_{\pm} \sum_{\mathcal C_{k_{\min}}(\ell)}\nrm{P_{\mathcal C_{k_{\min}}(\ell)} Q_{<j}^{\pm} g_{k_{2}}}_{L^{2} L^{\infty}}^{2} \big)^{1/2}
\end{equation}
\begin{equation} \label{eq:no-nf:est1}
\nrm{\bar{P}_{k_{0}} \bar{Q}_{<j} (\bar{Q}_{j} f_{k_{1}} \cdot Q_{<j} g_{k_{2}})}_{L^{1} L^{2}}
\lesssim \nrm{\bar{Q}_{j} f_{k_{1}}}_{L^{2}_{t,x}} \Big( \sup_{\pm} \sum_{\mathcal C_{k_{\min}}(\ell)}\nrm{P_{\mathcal C_{k_{\min}}(\ell)} Q_{<j}^{\pm} g_{k_{2}}}_{L^{2} L^{\infty}}^{2} \Big)^{1/2}
\end{equation}
The same statement holds when we replace $ (\bar{Q}_j,\bar{Q}_{<j},Q_{<j}) $ by $ (Q_j,\bar{Q}_{<j},\bar{Q}_{<j}) $ and for all similar variations.
\end{proposition}
\begin{proposition} \label{prop:nf}
Let $k_{0} \in \mathbb Z, \ k_{1}, k_{2} \geq 0, \ j \in \mathbb Z $ be such that $\abs{k_{\max} - k_{\mathrm{med}}} \leq 5$ and $j \leq k_{\min} + C_{0}$. Define $\ell := \frac{1}{2} (j-k_{\min})_{-}$ and let $ \mathcal N $ be any of the null forms $ \mathcal N_{m,r} $ from \eqref{Nijforms}. Then, the following estimates hold:
\begin{equation} \label{eq:nf:est0}
\begin{aligned}
& \hskip-2em
\nrm{P_{k_0} Q_{j} \mathcal N (\bar{Q}_{<j} f_{k_{1}}, \bar{Q}_{<j} g_{k_{2}})}_{L^{2} L^{2}} \\
\lesssim & 2^{\ell} 2^{k_{\min} +k_{\max}} \nrm{f_{k_{1}}}_{L^{\infty} L^{2}} \Big( \sup_{\pm}\sum_{\mathcal C_{k_{\min}}(\ell)}\nrm{P_{\mathcal C_{k_{\min}}(\ell)} \bar{Q}_{<j}^{\pm} g_{k_{2}}}_{L^{2} L^{\infty}}^{2} \Big)^{1/2}
\end{aligned}
\end{equation}
\begin{equation} \label{eq:nf:est1}
\begin{aligned}
& \hskip-2em
\nrm{P_{k_0} Q_{<j} \mathcal N (\bar{Q}_{j} f_{k_{1}}, \bar{Q}_{<j} g_{k_{2}})}_{L^{1} L^{2}} \\
\lesssim & 2^{\ell} 2^{k_{\min}+k_{\max} } \nrm{\bar{Q}_{j} f_{k_{1}}}_{L^{2} L^{2}} \Big(\sup_{\pm} \sum_{\mathcal C_{k_{\min}}(\ell)}\nrm{P_{\mathcal C_{k_{\min}}(\ell)} \bar{Q}_{<j}^{\pm} g_{k_{2}}}_{L^{2} L^{\infty}}^{2} \Big)^{1/2}
\end{aligned}
\end{equation}
The same statement holds in the case $ (k_0,k_1,k_2) \in \mathbb{Z}_{+} \times \mathbb{Z} \times \mathbb{Z}_{+} $ when we replace the LHS of \eqref{eq:nf:est0}, \eqref{eq:nf:est1} by $ \bar{P}_{k_0} \bar{Q}_{j} \mathcal N (Q_{<j} f_{k_{1}}, \bar{Q}_{<j} g_{k_{2}}) $ and $ \bar{P}_{k_0} \bar{Q}_{<j} \mathcal N ( Q_{j} f_{k_{1}}, \bar{Q}_{<j} g_{k_{2}}) $ respectively; or in the case $ (k_0,k_1,k_2) \in \mathbb{Z}_{+} \times \mathbb{Z}_{+} \times \mathbb{Z} $ when we replace the LHS of \eqref{eq:nf:est0}, \eqref{eq:nf:est1} by $ \bar{P}_{k_0} \bar{Q}_{j} \mathcal N (\bar{Q}_{<j} f_{k_{1}}, Q_{<j} g_{k_{2}}) $ and $ \bar{P}_{k_0} \bar{Q}_{<j} \mathcal N ( \bar{Q}_{j} f_{k_{1}}, Q_{<j} g_{k_{2}}) $ respectively.
\end{proposition}
\begin{proof}[Proof of Lemma \ref{lem:ellip}, Prop. \ref{prop:no-nf}, Prop. \ref{prop:nf}] The idea of these estimates is taken from \cite{KST}. For a proof of these bounds as stated here, see \cite[Section 7.5]{MD}. We can invoke that proof since we have Corollary \ref{Nij:form:prop} and Lemma \ref{geom:cone}.
\end{proof}
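\begin{remark}
For later use it is helpful to keep in mind the mechanism behind these bounds (we only sketch it here; the cited proof makes this precise). For \eqref{eq:ellip-0}, at each fixed time $ t $ one decomposes $ g_{k_2}=\sum_{\mathcal C_{k_{\min}}} P_{\mathcal C_{k_{\min}}} g_{k_2} $; since $ \mathcal L $ has bounded mass kernel and only an essentially diagonal portion of $ f_{k_1} $ interacts with each cube after applying $ P_{k_0} $, Cauchy-Schwarz and orthogonality in $ L^2_x $ give
$$ \vn{P_{k_0} \mathcal L (f_{k_1}, g_{k_2})(t)}_{L^2_x} \lesssim \vn{f_{k_1}(t)}_{L^2_x} \big( \sum_{\mathcal C_{k_{\min}}} \vn{P_{\mathcal C_{k_{\min}}} g_{k_2}(t)}_{L^{\infty}_x}^2 \big)^{\frac{1}{2}}, $$
and \eqref{eq:ellip-0} follows by taking the $ L^2_t $ norm. The bound \eqref{eq:ellip-1} is obtained in the same way, using H\"older in time $ L^2_t \times L^2_t \to L^1_t $.
\end{remark}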
Finally, we record the following identity.
\begin{lemma}[Commutator identity] \label{comm_id} We can write
$$ P_{<k}(fg)=f P_{<k}g+L(\nabla_x f, 2^{-k} g) $$
where $ L $ is a translation-invariant bilinear operator with integrable kernel.
\end{lemma}
\begin{proof}
See \cite[Lemma 2]{Tao2}.
\end{proof}
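\begin{remark}
For orientation, here is the short computation behind Lemma \ref{comm_id} (a sketch; writing $ P_{<k}=m(D/2^k) $, so that its kernel is $ 2^{dk}\check{m}(2^k \cdot) $):
$$ P_{<k}(fg)(x)-f(x) P_{<k}g(x)= \int \big( f(x-y)-f(x) \big) g(x-y)\, 2^{dk} \check{m}(2^{k}y) \, \mathrm{d} y, $$
and, inserting $ f(x-y)-f(x)=-\int_0^1 y \cdot \nabla_x f(x-sy)\, \mathrm{d} s $ and writing $ y\, 2^{dk}\check{m}(2^k y) = 2^{-k} \big( (2^k y)\, 2^{dk}\check{m}(2^k y) \big) $, the right-hand side takes the form $ L(\nabla_x f, 2^{-k} g) $ with integrable kernel.
\end{remark}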
\section{Bilinear estimates}
The proofs in this section and the next one are based on the Littlewood-Paley trichotomy, which states that $ P_{k_0}(P_{k_1} f_1 P_{k_2} f_2) $ vanishes unless $ \vm{k_{\mathrm{med}}-k_{\max}} \leq 5 $, where $ k_{\mathrm{med}}, k_{\max} $ denote the median and the maximum of $ \{ k_0,k_1, k_2 \} $.
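Indeed, since $ \operatorname{supp} \widehat{fg} \subset \operatorname{supp} \widehat{f}+\operatorname{supp} \widehat{g} $, if for instance $ k_1 \leq k_2-5 $ then
$$ \operatorname{supp}\, \widehat{P_{k_1} f_1 \, P_{k_2} f_2} \subset \{ 2^{k_2-2} \leq \vm{\xi} \leq 2^{k_2+2} \}, $$
which forces $ k_0=k_2+O(1) $; the remaining cases are symmetric.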
Most of the arguments in this section originate in \cite{KST}. However, we have tried to give a thorough exposition in order to justify that the arguments carry over when two of the inputs/output correspond to Klein-Gordon waves.
\subsection{Additional bilinear estimates}
Before we begin the proofs we state some additional bilinear estimates that will be used in the proof of the trilinear estimate in the next section.
We separate the high-high and low-high parts of $ {\bf A}_{0} $
\begin{equation} \label{a0:decomp}
\begin{aligned}
{\bf A}_{0} ( \phi^{1}, \phi^{2}) & ={\bf A}_{0}^{LH} ( \phi^{1}, \phi^{2})+ {\bf A}_{0}^{HH} ( \phi^{1}, \phi^{2}) \\
\text{where} \qquad \qquad \qquad {\bf A}_{0}^{HH} ( \phi^{1}, \phi^{2}) & = \sum_{\substack{k_{0}, k_{1}, k_{2} \\ k_{0} < k_{2} - C_{2} - 5}} P_{k_{0}} {\bf A}_{0}( \bar{P}_{k_{1}} \phi^1, \bar{P}_{k_{2}} \phi^2).
\end{aligned}
\end{equation}
\begin{lemma} \label{lemma:additional} With the decomposition above, one has:
\begin{align}
\vn{ \pi[(0,A_0)] \phi}_{\bar{N}^{\sigma-1} } & \lesssim \vn{A_0}_{\ell^1 L^1 L^{\infty}} \vn{\phi}_{\bar{S}^{\sigma}}. \label{est:phi7} \\
\vn{{\bf A}_{0}^{LH} ( \phi^{1}, \phi^{2})}_{\ell^1 L^1 L^{\infty}} & \lesssim \vn{\phi^1}_{\bar{S}^{\sigma}} \vn{\phi^2}_{\bar{S}^{\sigma}} \label{A0:lh} \\
\vn{({\bf A}_{x},{\bf A}_{0}^{HH}) ( \phi^{1}, \phi^{2})}_{\ell^1 S^{\sigma} \times Y^{\sigma}} & \lesssim \vn{\phi^1}_{\bar{S}^{\sigma}} \vn{\phi^2}_{\bar{S}^{\sigma}} \label{Ax:A0:hh} \\
\vn{(I - \mathcal H) ({\bf A}_{x},{\bf A}_{0}^{HH})( \phi^{1}, \phi^{2})}_{Z \times Z_{ell}}
& \lesssim \nrm{\phi^{1}}_{\bar{S}^{\sigma}} \nrm{\phi^{2}}_{\bar{S}^{\sigma}} \label{eq:axr-Z}.
\end{align}
For $ d \geq 5 $ one also has:
\begin{equation} \vn{ ({\bf A}_{x},{\bf A}_{0}^{HH})( \phi^{1}, \phi^{2})}_{Z \times Z_{ell}}
\lesssim \nrm{\phi^{1}}_{\bar{S}^{\sigma}} \nrm{\phi^{2}}_{\bar{S}^{\sigma}}
\label{AHH:highdim} \end{equation}
\end{lemma}
\begin{proof}
By doing dyadic decompositions, \eqref{est:phi7} follows trivially from H\"older's inequality $ L^1 L^{\infty} \times L^{\infty} L^2 \to L^1 L^2 $. The bound \eqref{A0:lh} follows from
$$ \vn{P_{k'} ( \phi^{1}_{k_1} \partial_t \phi^{2}_{k_2})}_{ L^1 L^{\infty}}
\lesssim \nrm{\phi^{1}_{k_1}}_{L^2 L^{\infty}} \nrm{\phi^{2}_{k_2}}_{L^2 L^{\infty}}. $$
The bound \eqref{Ax:A0:hh} follows from Prop. \ref{prop:ax:est} and from the proof of \eqref{est:a01}.
The proofs of estimates \eqref{eq:axr-Z}, \eqref{AHH:highdim} are longer and are deferred to the end of this section.
\end{proof}
\subsection{Dyadic norms} For ease of reference in the arguments below, we collect here the norms that we control. Recall that we denote
$$ \vn{A_x}_{S^s_{k'}} = 2^{(s-1)k'} \vn{\nabla_{t,x} A}_{S_{k'}}, \qquad \vn{\phi_k}_{\bar{S}^s_k} = 2^{(s-1)k} \vn{ (\jb{D_x},\partial_t) \phi_k}_{\bar{S}_k} $$
For $ k' \in \mathbb{Z} $ and $ k \geq 0 $ we have:
\begin{align}
\label{fe-LinfL2}
& \vn{\nabla_{t,x} P_{k'} A_x}_{L^{\infty}L^2} \lesssim \vn{P_{k'} A_x}_{S^1_{k'}} ,
& \vn{(\jb{D_x},\partial_t) \phi_k}_{L^{\infty}L^2} \lesssim \vn{\phi_{k}}_{\bar{S}^1_{k}}
\\
\label{fe-L2}
& \nrm{Q_{j} P_{k'} A_x}_{L^{2}_{t,x}}
\lesssim 2^{-\frac{1}{2} j} \vn{P_{k'} A_x}_{S_{k'}} ,
& \nrm{\bar{Q}_{j} \phi_{k}}_{L^{2}_{t,x}}
\lesssim 2^{-\frac{1}{2} j} \vn{\phi_{k}}_{\bar{S}_{k}}
\\
\label{fe-L2Linf}
& \vn{P_{k'} A_x}_{L^2 L^{\infty}} \lesssim 2^{\frac{1}{2}k'} \vn{P_{k'} A_x}_{S^{\sigma}_{k'}},
& \vn{\phi_k}_{L^2 L^{\infty}} \lesssim 2^{\frac{1}{2}k} \vn{\phi_{k}}_{\bar{S}^{\sigma}_{k}}
\end{align}
For any $k' \leq k$ and $ l' \in [-k,C] $, $ j=k'+2l' $ and any $ \pm $:
\begin{equation} \label{fe-L2Linfty}
\begin{aligned}
\Big(\sum_{\mathcal C= \mathcal C_{k'}(l')} \nrm{P_{\mathcal C} \bar{Q}^{\pm}_{<j} \phi_{k}}_{L^{2} L^{\infty}}^{2} \Big)^{1/2}
\lesssim & 2^{\frac{1}{2} l'} 2^{\sigma(k'-k)} 2^{\frac{1}{2} k} \vn{\phi_{k}}_{\bar{S}^{\sigma}_{k}}, \\
\Big(\sum_{\mathcal C=\mathcal C_{k'}(0)} \nrm{P_{\mathcal C} \phi_{k}}_{L^{2} L^{\infty}}^{2} \Big)^{1/2}
\lesssim & 2^{\sigma(k'-k)} 2^{\frac{1}{2} k} \vn{\phi_{k}}_{\bar{S}^{\sigma}_{k}}.
\end{aligned}
\end{equation}
The former follows by choosing $ k+2l=k'+2l' $ in \eqref{highfreqSp}. When $ k=0 $ it suffices to consider $ l'=0 $. The latter inequality holds for $ \bar{Q}_{<k'} \phi_k $, while for $ \bar{Q}_{\geq k'} \phi_k $ it follows from \eqref{fe-L2}, orthogonality and Bernstein's inequality (with $ l'=0 $)
\begin{equation} \label{Brns} P_{C_{k'}(l')} L^2_x \subset 2^{\frac{d}{2}k'+\frac{d-1}{2} l'} L^{\infty}_x \end{equation}
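Here \eqref{Brns} is simply Bernstein's inequality adapted to the box: since $ C_{k'}(l') $ has dimensions $ \simeq 2^{k'} \times (2^{k'+l'})^{d-1} $, its volume is $ \simeq 2^{dk'+(d-1)l'} $, whence
$$ \vn{P_{C_{k'}(l')} u}_{L^{\infty}_x} \lesssim \vm{C_{k'}(l')}^{\frac{1}{2}} \vn{u}_{L^2_x} \simeq 2^{\frac{d}{2}k'+\frac{d-1}{2} l'} \vn{u}_{L^2_x}. $$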
Using \eqref{Brns} we also obtain, when $ d=4, \ \sigma=1$,
\begin{equation}
\begin{aligned} \label{fe-sqX}
\big(\sum_{C_{k'}(l')} \vn{P_{C_{k'}(l')} (\partial_t \mp i \jb{D}) \bar{Q}_{<j}^{\pm} \phi_k}_{L^2 L^{\infty}}^2 \big)^{\frac{1}{2}} & \lesssim 2^{\frac{3}{2} l'} 2^{2k'} 2^{\frac{1}{2}j} \vn{\phi_k}_{\bar{X}^{\frac{1}{2}}_{\infty}} \\
& \lesssim 2^{\frac{3}{2} l'} 2^{2k'} 2^{\frac{1}{2}j} 2^{-k} \vn{\phi_{k}}_{\bar{S}^1_{k}}.
\end{aligned}
\end{equation}
For any $k' \leq k''$ and $ l' \leq 0 $, $ j=k'+2l' $ and any $ \pm $ we have
\begin{equation} \label{A:fe-L2Linfty}
\begin{aligned}
\Big(\sum_{\mathcal C=\mathcal C_{k'}(l')} \nrm{P_{\mathcal C} Q^{\pm}_{<j} A_{k''}}_{L^{2} L^{\infty}}^{2} \Big)^{1/2}
\lesssim & 2^{\frac{1}{2} l'} 2^{\sigma(k'-k'')} 2^{\frac{1}{2} k''} \vn{P_{k''} A_x}_{S^{\sigma}_{k''}}, \\
\Big(\sum_{\mathcal C=\mathcal C_{k'}(0)} \nrm{P_{\mathcal C} A_{k''}}_{L^{2} L^{\infty}}^{2} \Big)^{1/2}
\lesssim & 2^{\sigma(k'-k'')} 2^{\frac{1}{2} k''} \vn{P_{k''} A_x}_{S^{\sigma}_{k''}}.
\end{aligned}
\end{equation}
For $ A_0 $ we have the following bounds
\begin{equation} \label{A0:fe-LinfL2}
\vn{\nabla_{t,x} P_{k'} A_0}_{L^{\infty} L^2} \lesssim \vn{P_{k'}A_0}_{Y^1}.
\end{equation}
Since we control $ \partial_t A_0 $, for $ j \geq k' $ we have both
\begin{equation} \label{A0:fe-L2}
\vn{P_{k'} A_0}_{L^2_{t,x}} \lesssim 2^{-(\sigma+\frac{1}{2})k'} \vn{P_{k'}A_0}_{Y^{\sigma}}, \qquad \vn{Q_j P_{k'} A_0}_{L^2_{t,x}} \lesssim 2^{-j} 2^{-(\sigma-\frac{1}{2})k'} \vn{P_{k'}A_0}_{Y^{\sigma}}
\end{equation}
and for $ j=k'+2l'$, using \eqref{Brns} and orthogonality, we have
\begin{equation} \label{A0:fe-L2Linfty}
\Big(\sum_{\mathcal C=\mathcal C_{k'}(l')} \nrm{P_{\mathcal C} ( Q^{\pm}_{<j}) A^0_{k'}}_{L^{2} L^{\infty}}^{2} \Big)^{1/2}
\lesssim 2^{\frac{d}{2}k'+\frac{d-1}{2} l'} \vn{A^0_{k'}}_{L^2_{t,x}} \lesssim 2^{\frac{1}{2} k'+\frac{3}{2}l'} \vn{P_{k'}A_0}_{Y^{\sigma}}
\end{equation}
In particular,
\begin{equation} \label{A0:fe-L2Linf}
\vn{P_{k'} A_0}_{L^2 L^{\infty}} \lesssim 2^{\frac{1}{2} k'} \vn{P_{k'}A_0}_{Y^{\sigma}}.
\end{equation}
Now we turn to the proofs of Prop. \ref{prop:ax:est}, \ref{prop:phi:est}.
\
\subsection{Proof of \eqref{est:ax1}}
This follows from proving, for $ k' \in \mathbb{Z} $, $ k_1,k_2 \geq 0 $:
\begin{equation} \label{est:ax1:freq}
\vn{P_{k'} \mathcal P_{j} (\phi^1_{k_1} \nabla_x \phi^2_{k_2})}_{N_{k'}^{\sigma-1} } \lesssim 2^{\frac{1}{2}(k_{\min} - k_{\max})} \vn{\phi^1_{k_1}}_{\bar{S}^{\sigma}_{k_1}} \vn{\phi^2_{k_2}}_{\bar{S}^{\sigma}_{k_2}}.
\end{equation}
Note that the factor $ 2^{\frac{1}{2}(k_{\min} - k_{\max})} $ provides the $ \ell^1 $ summation in \eqref{est:ax1}. Here $ k_{\min}, k_{\max} $ are taken from the set $ \{ k',k_1,k_2\} $.
We first treat the high modulation contribution. Since $ \mathcal P_{j} (\phi^1 \nabla_x \phi^2) $ is skew adjoint (see Remark \ref{ax:skew-adj}), in the low-high case ($ 2^{k'}\simeq 2^{k_{\max}} $) we may assume $ k_2=k_{\min} $ (i.e. the derivative falls on the lower frequency). By Lemma~\ref{lem:ellip} we have
\begin{align}
& \nrm{P_{k'} \mathcal P_{j} (\bar{Q}_{\geq k_{\min} } \phi_{k_{1}}^1 \nabla_x \phi^2_{k_{2}})}_{L^{1} L^{2}}
\lesssim \nonumber \\
& \qquad \qquad \qquad \qquad \quad \vn{\bar{Q}_{\geq k_{\min} } \phi_{k_{1}}^1}_{L^2_{t,x}} \big( \sum_{\mathcal C_{k_{\min}}} \vn{P_{\mathcal C_{k_{\min}}} \nabla_x \phi_{k_{2}}^2}_{L^2 L^{\infty}}^2 \big)^{\frac{1}{2}} , \label{highmod:1} \\
& \nrm{P_{k'} \mathcal P_{j} (\bar{Q}_{< k_{\min} } \phi_{k_{1}}^1 \nabla_x \bar{Q}_{\geq k_{\min} } \phi_{k_{2}}^2)}_{L^{1} L^{2}}
\lesssim \nonumber \\
& \qquad \qquad \qquad \qquad \quad \big( \sum_{\mathcal C_{k_{\min}}} \vn{P_{\mathcal C_{k_{\min}}} \bar{Q}_{< k_{\min}} \phi_{k_{1}}^1}_{L^2 L^{\infty}}^2 \big)^{\frac{1}{2}} \vn{\bar{Q}_{\geq k_{\min} } \nabla_x \phi_{k_{2}}^2}_{L^2_{t,x}} \label{highmod:2} \\
& \nrm{P_{k'} Q_{\geq k_{\min}} \mathcal P_{j} (\bar{Q}_{< k_{\min}} \phi_{k_{1}}^1 \nabla_x \bar{Q}_{< k_{\min}} \phi_{k_{2}}^2)}_{L^2_{t,x}}
\lesssim \nonumber \\
& \qquad \qquad \qquad \quad \big( \sum_{\mathcal C_{k_{min}}} \vn{P_{\mathcal C_{k_{min}}} \bar{Q}_{< k_{\min}} \phi_{k_{1}}^1}_{L^2 L^{\infty}}^2 \big)^{\frac{1}{2}} \vn{ \bar{Q}_{< k_{\min}} \nabla_x \phi_{k_{2}}^2}_{L^{\infty}L^2} . \label{highmod:3}
\end{align}
Using \eqref{fe-L2}, \eqref{fe-L2Linfty} for \eqref{highmod:1}, using \eqref{fe-L2Linfty}, \eqref{fe-L2} for \eqref{highmod:2}, and using \eqref{fe-L2Linfty}, \eqref{fe-LinfL2} and the $ X_1^{-1/2} $ norm for \eqref{highmod:3}, we see that these terms are acceptable.
We continue with the low modulation term
$$ P_{k'} Q_{< k_{\min} } \mathcal P_{j} (\bar{Q}_{< k_{\min} } \phi_{k_{1}}^1 \nabla_x \bar{Q}_{< k_{\min} } \phi_{k_{2}}^2), $$
which, using \eqref{ax:nf:identity} and summing according to the highest modulation, we decompose into sums of
\begin{align}
I_{0} =& \sum_{j < k_{\min}} P_{k'} Q_{j} \Delta^{-1} \nabla^l \mathcal N_{lm} (\bar{Q}_{< j} \phi_{k_{1}}^1, \bar{Q}_{< j} \phi_{k_{2}}^2), \label{I:0} \\
I_{1} =& \sum_{j < k_{\min}}P_{k'} Q_{\leq j}\Delta^{-1} \nabla^l \mathcal N_{lm}(\bar{Q}_{j} \phi_{k_{1}}^1, \bar{Q}_{< j}\phi_{k_{2}}^2), \label{I:1} \\
I_{2} =& \sum_{j < k_{\min}} P_{k'} Q_{\leq j}\Delta^{-1} \nabla^l \mathcal N_{lm} ( \bar{Q}_{\leq j} \phi_{k_{1}}^1, \bar{Q}_{ j} \phi_{k_{2}}^2). \label{I:2}
\end{align}
for which we have
\begin{equation} \label{low:mod:Aeq}
\vn{ \vm{D}^{\sigma-1} I_0}_{X_1^{-1/2}}+\vn{I_1}_{L^1 \dot{H}^{\sigma-1}} +\vn{I_2}_{L^1 \dot{H}^{\sigma-1} } \lesssim 2^{\frac{1}{2}(k_{\min} - k_{\max})} \vn{\phi^1_{k_1}}_{\bar{S}^{\sigma}_{k_1}} \vn{\phi^2_{k_2}}_{\bar{S}^{\sigma}_{k_2}}.
\end{equation}
These are estimated by Proposition \ref{prop:nf} and \eqref{fe-LinfL2}, \eqref{fe-L2}, \eqref{fe-L2Linfty}, which concludes the proof of \eqref{est:ax1:freq}.
\subsection{Proof of \eqref{est:phi1}.}
We separate $ A_0 \partial_t \phi $ and $ A^j \partial_j \phi $. Since we subtract $ \pi[A]\phi $, this effectively eliminates low-high interactions in the Littlewood-Paley trichotomy. Thus for $ k, k_0 \geq 0 $, $ k' \geq k-C $ it suffices to prove
\begin{equation} \label{est:phi1:freqA0}
\vn{\bar{P}_{k_0} \big( A^0_{k'} \partial_t \phi_k \big)}_{L^1 H^{\sigma-1}} \lesssim 2^{k_{\min}-k_{\max}} \vn{A_{k'}^0}_{L^2 \dot{H}^{\sigma+\frac{1}{2}}} \vn{\partial_t \phi_k}_{\bar{S}^{\sigma-1}_{k}},
\end{equation}
\begin{equation} \label{est:phi1:freqAx}
\vn{\bar{P}_{k_0} \big( A^j_{k'} \partial_j \phi_k \big)}_{\bar{N}_{k_0}^{\sigma-1} } \lesssim 2^{\frac{1}{2}(k_{\min}-k_{\max})} \vn{A_{k'}}_{S^{\sigma}_{k'}} \vn{\phi_k}_{\bar{S}^{\sigma}_{k}}.
\end{equation}
The bound \eqref{est:phi1:freqA0} follows immediately from \eqref{eq:ellip-1}; see the sketch below.
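For orientation, here is the instance of \eqref{eq:ellip-1} being used (its $ (k_{0}, k_{1}, k_{2}) \in \mathbb{Z}_+ \times \mathbb{Z} \times \mathbb{Z}_+ $ variant, with $ \mathcal L(f,g)=fg $):
$$ \vn{\bar{P}_{k_0} \big( A^0_{k'} \partial_t \phi_k \big)}_{L^1 L^2} \lesssim \vn{A^0_{k'}}_{L^2_{t,x}} \big( \sum_{\mathcal C_{k_{\min}}} \vn{P_{\mathcal C_{k_{\min}}} \partial_t \phi_k}_{L^2 L^{\infty}}^2 \big)^{\frac{1}{2}}. $$
One then inserts $ \vn{A^0_{k'}}_{L^2_{t,x}} \simeq 2^{-(\sigma+\frac{1}{2})k'} \vn{A^0_{k'}}_{L^2 \dot{H}^{\sigma+\frac{1}{2}}} $ and the square function bound \eqref{fe-L2Linfty} (applied to $ \partial_t \phi_k $), and the exponent bookkeeping produces the off-diagonal factor $ 2^{k_{\min}-k_{\max}} $. Now we turn to \eqref{est:phi1:freqAx}.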
We first treat the high modulation contribution. By Lemma~\ref{lem:ellip} we have
\begin{align*}
& \nrm{\bar{P}_{k_0} \bar{Q}_{\geq k_{\min} } \big( A^j_{k'} \partial_j \phi_k \big)}_{L^2_{t,x}}
\lesssim \vn{A_{k'}}_{L^{\infty}L^2} \big( \sum_{\mathcal C_{k_{\min}}} \vn{P_{\mathcal C_{k_{\min}}} \nabla_x \phi_{k}}_{L^2 L^{\infty}}^2 \big)^{\frac{1}{2}} , \\
& \nrm{\bar{P}_{k_0} \bar{Q}_{< k_{\min} } (Q_{\geq k_{\min} } A^j_{k'} \partial_j \phi_k )}_{L^{1} L^{2}}
\lesssim \\
& \qquad \qquad \qquad \qquad \qquad \qquad \vn{Q_{\geq k_{\min} } A_{k'}}_{L^2_{t,x}} \big( \sum_{\mathcal C_{k_{\min}}} \vn{P_{\mathcal C_{k_{\min}}} \nabla_x \phi_{k}}_{L^2 L^{\infty}}^2 \big)^{\frac{1}{2}}, \\
& \nrm{\bar{P}_{k_0} \bar{Q}_{< k_{\min} } (Q_{< k_{\min} } A^j_{k'} \partial_j \bar{Q}_{\geq k_{\min}}\phi_k) }_{L^{1} L^{2}}
\lesssim \\
& \qquad \qquad \qquad \qquad \quad \big( \sum_{\mathcal C_{k_{\min}}} \vn{P_{\mathcal C_{k_{\min}}} Q_{< k_{\min}} A_{k'}}_{L^2 L^{\infty}}^2 \big)^{\frac{1}{2}} \vn{\bar{Q}_{\geq k_{\min} } \nabla_x \phi_{k}}_{L^2_{t,x}}.
\end{align*}
Using \eqref{fe-LinfL2}, \eqref{fe-L2Linfty} and the $ \bar{X}_1^{-1/2} $ norm for the first term, \eqref{fe-L2}, \eqref{fe-L2Linf} for the second, and \eqref{A:fe-L2Linfty}, \eqref{fe-L2} for the third, we see that these terms are acceptable.
We continue with the low modulation term
$$ \bar{P}_{k_0} \bar{Q}_{< k_{\min} } \big(Q_{< k_{\min} } A^j_{k'} \partial_j \bar{Q}_{< k_{\min}}\phi_k \big) $$
which, using \eqref{phi:nf:identity} and summing according to the highest modulation, we decompose into sums of
\begin{align}
I_{0} =& \sum_{j < k_{\min}} \bar{P}_{k_0} \bar{Q}_{j} \mathcal N_{lm} (\Delta^{-1} \nabla^l Q_{< j} A^m_{k'}, \bar{Q}_{< j} \phi_{k}), \label{I:zero} \\
I_{1} =& \sum_{j < k_{\min}}\bar{P}_{k_0} \bar{Q}_{\leq j} \mathcal N_{lm}(\Delta^{-1} \nabla^l Q_{j} A^m_{k'}, \bar{Q}_{< j}\phi_{k}),\\
I_{2} =& \sum_{j < k_{\min}} \bar{P}_{k_0} \bar{Q}_{\leq j} \mathcal N_{lm} (\Delta^{-1} \nabla^l Q_{\leq j} A^m_{k'}, \bar{Q}_{ j} \phi_{k}). \label{I:two}
\end{align}
These are estimated using Proposition \ref{prop:nf}. We use \eqref{eq:nf:est0} with \eqref{fe-LinfL2} and \eqref{fe-L2Linfty} to estimate $ I_0 $ in $ \bar{X}_1^{-1/2} $. For $ I_1 $ we use \eqref{eq:nf:est1} with \eqref{fe-L2} and \eqref{fe-L2Linfty}, while for $ I_2 $ we use \eqref{eq:nf:est1} with \eqref{A:fe-L2Linfty} and \eqref{fe-L2}. This concludes the proof of \eqref{est:phi1}.
\subsection{Proof of \eqref{est:phi2}.} We separate $ A_0 \partial_t \phi $ and $ A^j \partial_j \phi $. This case corresponds to low-high interactions in the Littlewood-Paley trichotomy. Thus for $ k, k_0 \geq 0 $, $ k' \leq k-C $ (and $ \vm{k-k_0} \leq 5 $) it suffices to prove
\begin{equation} \label{est:phi2:freqA0}
\vn{\bar{P}_{k_0} \big( A^0_{k'} \partial_t \phi_k \big)- \bar{P}_{k_0} \mathcal H^{\ast}_{k'} \big( A_0 \partial_t \phi_k \big) }_{\bar{N}_{k_0}} \lesssim \vn{P_{k'} A_{0}}_{Y^{\sigma}} \vn{\phi_k}_{\bar{S}^1_{k}}
\end{equation}
\begin{equation} \label{est:phi2:freqAx}
\vn{\bar{P}_{k_0} \big( A^j_{k'} \partial_j \phi_k \big)- \bar{P}_{k_0} \mathcal H^{\ast}_{k'} \big( A^j \partial_j \phi_k \big) }_{\bar{N}_{k_0}} \lesssim \vn{P_{k'} A_{x}}_{S^{\sigma}_{k'}} \vn{\phi_k}_{\bar{S}^1_{k}}.
\end{equation}
Notice that the lack of an exponential gain of type $ 2^{\frac{1}{2}(k_{\min}-k_{\max})} $ (as in \eqref{est:phi1:freqA0}, \eqref{est:phi1:freqAx}) is responsible for the need for $ \ell^1 $ summation in the norm on the RHS of \eqref{est:phi2}.
We first treat the high modulation contribution, where $ A $ denotes either $ A^0 $ or $ A^j $. For any $ j \geq k'+C_2 $, by H\"older's inequality
\begin{equation} \label{aaph:eq}
\begin{aligned}
& \vn{\bar{P}_{k_0} \bar{Q}_{\geq j-5} \big( Q_j A_{k'} \partial \phi_k \big)}_{\bar{X}_1^{-1/2}} \lesssim 2^{-\frac{1}{2}j} \vn{A_{k'}}_{L^2 L^{\infty}} \vn{\nabla \phi_k}_{L^{\infty} L^2} \\
& \vn{\bar{P}_{k_0} \bar{Q}_{< j-5} \big( Q_j A_{k'} \bar{Q}_{\geq j-5}\partial \phi_k \big)}_{L^1 L^2 } \lesssim \vn{A_{k'}}_{L^2 L^{\infty}} \vn{\bar{Q}_{\geq j-5} \nabla \phi_k}_{L^2_{t,x}}
\end{aligned}
\end{equation}
Using \eqref{A0:fe-L2Linf}, \eqref{fe-L2Linf}, \eqref{fe-LinfL2}, \eqref{fe-L2} and summing over $ j \geq k'+C_2 $, it follows that $ \bar{P}_{k_0} \big( Q_{\geq k'+C_2} A_{k'} \partial \phi_k \big) $ is acceptable, except for
$$ T=\sum_{j \geq k'+C_2} \bar{P}_{k_0} \bar{Q}_{< j-5} \big( Q_j A_{k'} \bar{Q}_{<j-5}\partial \phi_k \big) $$
By applying Lemma \ref{geom:cone} (here we choose $ C_2 > \frac{1}{2} C_0 $) we see that the summand vanishes unless $ j=k_{\max}+O(1) $. Then, by Lemma~\ref{lem:ellip} we have
$$ \vn{T}_{L^1 L^2} \lesssim \sum_{j=k+O(1)} \vn{ Q_j A_{k'}}_{L^2_{t,x}} \big( \sum_{\mathcal C_{k'}} \vn{P_{\mathcal C_{k'}} \nabla \phi_k}^2_{L^2 L^{\infty}} \big)^{\frac{1}{2}} $$
which is acceptable by \eqref{A0:fe-L2}, \eqref{fe-L2}, \eqref{fe-L2Linfty}.
The terms
$$ \bar{P}_{k_0} \bar{Q}_{\geq k'+C_2} \big( Q_{< k'+C_2} A_{k'} \partial \phi_k \big), \qquad \bar{P}_{k_0} \bar{Q}_{< k'+C_2} \big( Q_{< k'+C_2} A_{k'} \bar{Q}_{\geq k'+C_2} \partial \phi_k \big) $$
are treated in the same way as \eqref{aaph:eq}. We omit the details.
We continue with the low modulation terms. Since we are subtracting $ \mathcal H^{\ast} $ we consider
$$ I= \sum_{j < k'+C_2} \bar{P}_{k_0} \bar{Q}_{j} ( Q_{< j} A^0_{k'} \cdot \partial_t \bar{Q}_{< j} \phi_{k}) $$
$$ J=\sum_{j < k'+C_2} \bar{P}_{k_0} \bar{Q}_{\leq j} (Q_{\leq j} A^0_{k'} \cdot \partial_t \bar{Q}_{ j} \phi_{k}) $$
and prove
$$ \vn{I}_{\bar{X}_1^{-1/2}} + \vn{J}_{L^1 L^2} \lesssim \vn{P_{k'} A_{0}}_{Y^{\sigma}} \vn{\phi_k}_{\bar{S}^1_{k}} $$
by using \eqref{eq:no-nf:est0} with \eqref{fe-LinfL2} and \eqref{A0:fe-L2Linfty} for $ I $, and \eqref{eq:no-nf:est1} with \eqref{fe-L2} and \eqref{A0:fe-L2Linfty} for $ J $.
It remains to show that for $ I_0, I_2 $ from \eqref{I:zero} and \eqref{I:two} (with summation over $ j<k'+C_2 $) we have
$$ \vn{I_0}_{\bar{X}_1^{-1/2}} + \vn{I_{2}}_{L^1 L^2} \lesssim \vn{A_{k'}}_{S^{\sigma}_{k'}} \vn{\phi_k}_{\bar{S}^1_{k}}. $$
These follow from \eqref{eq:nf:est0} with \eqref{A:fe-L2Linfty}, \eqref{fe-LinfL2} and from \eqref{eq:nf:est1} with \eqref{A:fe-L2Linfty}, \eqref{fe-L2}, respectively.
\subsection{Proof of \eqref{est:phi3}.} This estimate follows from the next bound, for $ k' <k-5 $
\begin{equation} \label{est:phi3:freq}
\vn{\mathcal H^{\ast}_{k'} \big( A^{\alpha} \partial_{\alpha} \phi_k \big) }_{L^1 L^2} \lesssim \vn{A_{k'}}_{Z_{k'} \times Z^{ell}_{k'}} \vn{\phi_k}_{\bar{S}^1_{k}}.
\end{equation}
To prove \eqref{est:phi3:freq}, let $ \ell=\frac{1}{2}(j-k')_{-} \geq -k-C $ and separate $ A_0 \partial_t \phi $ from $ A^j \partial_j \phi $. We use \eqref{phi:nf:identity} and denote by $ \mathcal N(A,\phi)$ one of $ A^0 \partial_t \phi $ or $ \mathcal N_{lm} (\Delta^{-1} \nabla^l A^m, \phi) $. We expand
$$ \mathcal H^{\ast}_{k'} \mathcal N \big( A, \phi_k \big)=\sum_{j<k'+C_2^{\ast}} \sum_{\omega_1,\omega_2} \bar{Q}_{<j} \mathcal N \big(P_{\ell}^{\omega_1} Q_j A_{k'},P_{\ell}^{\omega_2} \bar{Q}_{<j} \phi_k \big) $$
Splitting $ \bar{Q}_{<j}=\bar{Q}_{<j}^{+}+\bar{Q}_{<j}^{-}, \ Q_j=Q_j^++Q_j^-, $ and applying Lemma \ref{geom:cone} we see that the summand vanishes unless $ \vm{\angle(\omega_1,\pm \omega_2 )} \lesssim 2^{\ell} $.
For $ \mathcal N=\mathcal N_{lm} (\Delta^{-1} \nabla^l A^m, \phi) $ and $ s_1,s \in \{+,-\} $, by Corollary \ref{Nij:form:prop} we have
\begin{equation*} \vn{\bar{Q}_{<j} \mathcal N \big(P_{\ell}^{\omega_1} Q_j^{s_1} A_{k'},P_{\ell}^{\omega_2} \bar{Q}_{<j}^s \phi_k \big)}_{L^1 L^2} \lesssim 2^{\ell} \vn{P_{\ell}^{\omega_1} Q_j^{s_1} A_{k'}}_{L^1 L^{\infty}} \vn{P_{\ell}^{\omega_2} \bar{Q}_{<j}^s \nabla \phi_k}_{L^{\infty} L^2} \end{equation*}
For $ \mathcal N=A^0 \partial_t \phi $ we have the same inequality but without the $ 2^{\ell} $ factor. This is compensated by the fact that the $ Z^{ell} $ norm is larger. Indeed, we have
$$ Z^{ell}=\Box^{\frac{1}{2}} \Delta^{-\frac{1}{2}} Z \quad \text{and} \quad
2^{\ell} \vn{ P_{\ell}^{ \omega_1} Q_j^{s_1} A^0_{k'}}_{L^1 L^{\infty}} \simeq \vn{\Box^{\frac{1}{2}} \Delta^{-\frac{1}{2}} P_{\ell}^{ \omega_1} Q_j^{s_1} A^0_{k'}}_{L^1 L^{\infty}}. $$
Note that for fixed $ \omega_1 $ there is only a (uniformly) bounded number of $ \omega_2 $ such that the product is non-vanishing. Therefore, by Cauchy-Schwarz,
$$ \vn{\mathcal H^{\ast}_{k'} \big( A^{\alpha} \partial_{\alpha} \phi_k \big) }_{L^1 L^2} \lesssim \sum_{\ell \leq 0} 2^{\frac{1}{2} \ell} \vn{A_{k'}}_{Z_{k'} \times Z^{ell}_{k'}} \big( \sup_{\pm} \sum_{\omega_2} \vn{P_{\ell}^{\omega_2} \bar{Q}_{<j}^{\pm} \nabla \phi_k}_{L^{\infty} L^2}^2 \big)^{\frac{1}{2}} $$
which implies \eqref{est:phi3:freq}.
\subsection{Proof of \eqref{est:a01}, \eqref{est:phi6} and the $ L^2 H^{\sigma-\frac{3}{2}} $ part of \eqref{est:phi4} } One proceeds by dyadic decompositions. The $ L^2_{t,x} $-type estimates follow easily by H\"older's inequality $ L^{\infty}L^2 \times L^2 L^{\infty} \to L^2_{t,x} $ in the low-high/high-low cases and by Lemma \ref{lem:ellip} (eq. \eqref{eq:ellip-0}) in the high-high to low case. One uses the norms \eqref{fe-LinfL2}, \eqref{A0:fe-LinfL2}, \eqref{fe-L2Linf}, \eqref{A0:fe-L2Linf}, \eqref{fe-L2Linfty}, \eqref{A:fe-L2Linfty}, \eqref{A0:fe-L2Linfty}.
The $ L^{\infty} L^2 $ estimate follows by H\"older's inequality ($ L^{\infty} L^{\infty} \times L^{\infty}L^2 \to L^{\infty}L^2 $ or $ L^{\infty}L^2 \times L^{\infty}L^2 \to L^{\infty}L^1 $) and Bernstein's inequality ($ P_k L^2_x \to 2^{\frac{d}{2}k} L^{\infty}_x $ or $ P_k L^1_x \to 2^{\frac{d}{2}k} L^2_x $), depending on which frequency (input or output) is the lowest.
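For instance, in the high-high case ($ k_0=k_{\min} $) one applies H\"older $ L^{\infty}L^2 \times L^{\infty}L^2 \to L^{\infty}L^1 $ followed by Bernstein, while in the low-high case ($ k_1=k_{\min} $) one applies Bernstein on the low-frequency input followed by H\"older; schematically,
$$ \vn{P_{k_0}(f_{k_1} g_{k_2})}_{L^{\infty}L^2} \lesssim 2^{\frac{d}{2}k_0} \vn{f_{k_1}}_{L^{\infty}L^2} \vn{g_{k_2}}_{L^{\infty}L^2}, \qquad \vn{P_{k_0}(f_{k_1} g_{k_2})}_{L^{\infty}L^2} \lesssim 2^{\frac{d}{2}k_1} \vn{f_{k_1}}_{L^{\infty}L^2} \vn{g_{k_2}}_{L^{\infty}L^2}. $$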
\subsection{Proof of \eqref{est:phi4} for $ \bar{N} $.} Suppose $ k,k_2 \geq 0 $, $ k_1 \in \mathbb{Z} $. Let $ r_0 $ be the endpoint Strichartz exponent (i.e. $ \frac{d-1}{r_0}=\sigma-\frac{1}{2} $). By H\" older's inequality and using Bernstein's inequality for the lowest frequency (input or output) we obtain
\begin{equation} \label{est:cubic:auxx}
\vn{\bar{P}_k \big( P_{k_1}A \cdot \phi_{k_2} \big)}_{L^1 H^{\sigma-1} } \lesssim 2^{\frac{d}{r_0}(k-\max{k_i})} 2^{-\frac{1}{r_0} \vm{k_1-k_2}} \vn{P_{k_1}A}_{L^2 \dot{H}^{\sigma-\frac{1}{2}}} \vn{\phi_{k_2}}_{L^2 W^{r_0,\rho }}
\end{equation}
With $ A=\partial_t A_0 $ and $ \vn{\phi_{k_2}}_{L^2 W^{r_0,\rho }} \lesssim \vn{\phi_{k_2}}_{\bar{S}^{\sigma}_{k_2}} $, upon summation we obtain \eqref{est:phi4}.
\subsection{Proof of \eqref{est:phi5} and \eqref{est:ax2} } We first prove the $ L^1 L^2 $ part. For \eqref{est:phi5} we consider $ k, k_2 \geq 0 $, $ k_1,k_3,k_4 \in \mathbb{Z} $. We apply \eqref{est:cubic:auxx} with $ A=A^1_{\alpha} A^2_{\alpha} $ together with
$$
\vn{P_{k_1}(P_{k_3} A^1_{\alpha} P_{k_4} A^2_{\alpha})}_{L^2 \dot{H}^{\sigma- \frac{1}{2}}} \lesssim 2^{\frac{1}{2}(k_{\min}-k_{\max})} \vn{A_{\max\{k_3,k_4\}}}_{L^{\infty} \dot{H}^{\sigma}} \vn{A_{\min\{k_3,k_4\}}}_{L^2 \dot{W}^{\infty,-\frac{1}{2}} }.
$$
By summing we obtain \eqref{est:phi5}. The same argument is used for $ L^1 L^2 $ of \eqref{est:ax2}.
To prove the $ L^2 \dot{H}^{\sigma-\frac{3}{2}} $ and $ L^2 H^{\sigma-\frac{3}{2}} $ estimates we write
$$ \vn{P_{k}\big(P_{k_1} (fg) P_{k_2} h \big)}_{L^2 \dot{H}^{\sigma-\frac{3}{2}} } \lesssim 2^{\frac{1}{2}(k-\max{k_i})} 2^{-\frac{1}{2} \vm{k_1-k_2}} \vn{P_{k_1} (fg)}_{L^{\infty} \dot{H}^{\sigma-1 }} \vn{P_{k_2} h}_{L^2 W^{r_0,\rho }} $$
and use $ L^{\infty} \dot{H}^{\sigma} \times L^{\infty} \dot{H}^{\sigma} \to L^{\infty} \dot{H}^{\sigma-1} $ by H\"older and Sobolev embedding.
The $ \ell^1 L^{\infty} \dot{H}^{\sigma-2} $ part of \eqref{est:ax2} is similarly a consequence of the H\"older and Bernstein inequalities.
\subsection{Proof of \eqref{eq:axr-Z}}
Recall that $ \mathcal H $ subtracts terms only for high-high interactions. For $ k' \leq k_2-C_2-10 $ we claim
\begin{equation} \label{Z:norm1}
\vn{(P_{k'}-\mathcal H_{k'}) {\bf A} ( \phi^{1}_{k_1}, \phi^{2}_{k_2})}_{Z_{k'} \times Z^{ell}_{k'}}
\lesssim 2^{\frac{1}{2}(k'-k_{2})} \nrm{\phi^{1}_{k_1}}_{\bar{S}^{\sigma}_{k_1}} \nrm{\phi^{2}_{k_2}}_{\bar{S}^{\sigma}_{k_2}}.
\end{equation}
while for the low-high interactions, i.e. $ k' \geq k_2-C_2-10 $, we claim
\begin{equation} \label{Z:norm2}
\vn{P_{k'} {\bf A}_x ( \phi^{1}_{k_1}, \phi^{2}_{k_2})}_{Z_{k'}}
\lesssim 2^{-\frac{1}{2} \vm{k_{1}-k_{2}}} \nrm{\phi^{1}_{k_1}}_{\bar{S}^{\sigma}_{k_1}} \nrm{\phi^{2}_{k_2}}_{\bar{S}^{\sigma}_{k_2}}.
\end{equation}
Clearly, \eqref{Z:norm1} and \eqref{Z:norm2} imply \eqref{eq:axr-Z}.
First we recall that
$$ (\Box {\bf A}_{x} , \Delta {\bf A}_{0}) ( \phi^{1}, \phi^{2})=-\mathfrak{I}( \mathcal P_{x }(\phi^1 \nabla_x \bar{\phi^2}), \phi^1 \partial_t \bar{\phi^2}) $$
and the embedding from \eqref{ZZ:emb}
$$
\big( \Box^{-1} \times \Delta^{-1} \big) P_{k'} : L^1 L^2 \times L^1 L^2 \to 2^{(\sigma-1)k'} Z_{k'} \times Z^{ell}_{k'}
$$
\pfstep{Step~1}{\it Proof of \eqref{Z:norm1}.} The terms
$$ P_{k'} {\bf A} ( \bar{Q}_{\geq k'+C} \phi^{1}_{k_1}, \phi^{2}_{k_2}), \qquad P_{k'} {\bf A} ( \bar{Q}_{\leq k'+C} \phi^{1}_{k_1}, \bar{Q}_{\geq k'+C} \phi^{2}_{k_2}) $$
are estimated using \eqref{highmod:1}, \eqref{highmod:2} and \eqref{ZZ:emb}. For $ {\bf A}_0 $ we note that \eqref{highmod:1}, \eqref{highmod:2} still hold with $ \mathcal P_j $ replaced by $ \mathcal L $\footnote{ $ \mathcal L $ denotes any translation invariant bilinear form with bounded mass kernel.} and $ \nabla_x $ replaced by $ \partial_t $.
Recall that the $ Z $ norms restrict modulation to $ Q_{\leq k'+C} $. Thus it remains to consider
$$ (P_{k'} Q_{\leq k'+C}-\mathcal H_{k'}) {\bf A} ( \bar{Q}_{\leq k'+C} \phi^{1}_{k_1}, \bar{Q}_{\leq k'+C} \phi^{2}_{k_2})
$$
For $ {\bf A}_x $, using \eqref{ax:nf:identity}, we need to treat $ \Box^{-1} I_1 $, $ \Box^{-1} I_2 $ as defined in \eqref{I:0}-\eqref{I:2} (the $ \Box^{-1} I_0 $ term is subtracted by $ \mathcal H_{k'} $). These are estimated using \eqref{low:mod:Aeq} and \eqref{ZZ:emb}.
We turn to $ {\bf A}_0 $. By switching the roles of $ \phi^1, \phi^2 $ if needed, it remains to consider
$$ J_j= P_{k'} Q_{\leq j} {\bf A}_0 ( \bar{Q}_{j} \phi^{1}_{k_1}, \bar{Q}_{\leq j} \phi^{2}_{k_2}), \qquad \qquad j \leq k'+C. $$
Using \eqref{Zell:emb} we obtain
\begin{align*} \vn{J_j}_{Z_{k'}^{ell}} & \lesssim \sum_{\pm; j' \leq j} 2^{\frac{1}{2}(j'-k')} \vn{P_{k'} Q^{\pm}_{j'} (\bar{Q}_{j} \phi^{1}_{k_1} \cdot \partial_t \bar{Q}_{\leq j} \phi^{2}_{k_2}) }_{L^1 \dot{H}^{\sigma-1} } \\
& \lesssim 2^{\frac{1}{2}(j-k')} 2^{\frac{1}{2} (k'-k_2)} \nrm{\phi^{1}_{k_1}}_{\bar{S}^{\sigma}_{k_1}} \nrm{\phi^{2}_{k_2}}_{\bar{S}^{\sigma}_{k_2}}
\end{align*}
For the last inequality we have used Prop. \ref{prop:no-nf} together with \eqref{fe-L2} and \eqref{fe-L2Linfty}.
Summing in $ j \leq k'+C $ completes the proof of \eqref{Z:norm1}.
\pfstep{Step~2}{\it Proof of \eqref{Z:norm2}.} Due to skew-adjointness (see Remark \ref{ax:skew-adj}), we may assume that $ k_2=k_{\min}+O(1) $. The terms
$$ P_{k'} {\bf A}_x ( \bar{Q}_{\geq k_2-c} \phi^{1}_{k_1}, \phi^{2}_{k_2}), \qquad P_{k'} {\bf A}_x ( \bar{Q}_{\prec k_2} \phi^{1}_{k_1}, \bar{Q}_{\geq k_2-c} \phi^{2}_{k_2}) $$
are estimated using \eqref{highmod:1}, \eqref{highmod:2} and \eqref{ZZ:emb}.
Note that the $ Z $ norm restricts modulations to $ Q_{\leq k'+C} $. Thus it remains to consider
\begin{equation} \label{mod:Z:terms}
P_{k'} Q_j {\bf A}_x ( \bar{Q}_{\prec k_2} \phi^{1}_{k_1}, \bar{Q}_{\prec k_2} \phi^{2}_{k_2})
\end{equation}
for $ j \leq k'+C $. When $ j \geq k_2+C $, by Lemma \ref{geom:cone} the term vanishes unless $ j=k'+O(1) $. In this case
\begin{align*} & \vn{P_{k'} Q_{j} {\bf A}_x ( \bar{Q}_{\prec k_2} \phi^{1}_{k_1}, \bar{Q}_{\prec k_2} \phi^{2}_{k_2})}_{Z_{k'}} \lesssim 2^{-2k'} \vn{\bar{Q}_{\prec k_2} \phi^1_{k_1} \nabla_x \bar{Q}_{\prec k_2} \phi^2_{k_2}}_{L^1 L^{\infty}} \\
& \lesssim 2^{k_2-2k'} \vn{\bar{Q}_{\leq k_1} \phi^1_{k_1}}_{L^2 L^{\infty}} \vn{\bar{Q}_{\prec k_2} \phi^2_{k_2}}_{L^2 L^{\infty}}+ 2^{k_2} \vn{\bar{Q}_{[k_2-c,k_1]} \phi^1_{k_1}}_{L^2 H^{\sigma-1} } \vn{\bar{Q}_{\prec k_2} \phi^2_{k_2}}_{L^2 L^{\infty}}
\end{align*}
which is estimated using \eqref{fe-L2Linf} and \eqref{fe-L2}.
It remains to consider \eqref{mod:Z:terms} for $ j<k_2+C $. Using \eqref{ax:nf:identity} we decompose into sums of $ \Box^{-1} I_i $, $ i=0,1,2 $, as defined in \eqref{I:0}-\eqref{I:2} (for $ k_2-C<j<k_2+C $ with $ \bar{Q}$ indices slightly adjusted). Then $ \Box^{-1} I_1 $ and $ \Box^{-1} I_2 $ are estimated using \eqref{low:mod:Aeq} and \eqref{ZZ:emb}.
Now we consider $ \Box^{-1} I_0 $. Define $ \ell \vcentcolon= \frac{1}{2}(j-k_2)_{-} \geq \ell' \vcentcolon= \frac{1}{2}(j-k')_{-} $ and for $ s=\pm $ we decompose
$$ P_{k'} Q_{j}^s \mathcal N_{lm} (\bar{Q}_{< j} \phi_{k_{1}}^1, \bar{Q}_{< j} \phi_{k_{2}}^2)=\sum_{s_2, \omega_i} P_{\ell'}^{\omega_0} P_{k'} Q_{j}^s \mathcal N_{lm} (P_{\ell'}^{\omega_1} \bar{Q}_{< j}^s \phi_{k_{1}}^1, P_{\ell}^{\omega_2}\bar{Q}_{< j}^{s_2} \phi_{k_{2}}^2)
$$
By Lemma \ref{geom:cone}, the summand on the RHS vanishes unless
\begin{align*}
&\ \ \vm{\angle(\omega_0,\omega_1)} \ \ \lesssim 2^{\ell} 2^{k_2-k'}+2^{\ell'} \lesssim 2^{\ell'} \\
& \vm{\angle(s \omega_0,s_2 \omega_2)} \lesssim 2^{\ell}+\max(2^{\ell'},2^{\ell}) \lesssim 2^{\ell}.
\end{align*}
Note that $ P_{\ell'}^{\omega_0} P_{k'} Q_{j}^s $ and $ 2^{2\ell'+2k'} \Box^{-1} P_{\ell'}^{\omega_0} P_{k'} Q_{j}^s $ are disposable. Corollary \ref{Nij:form:prop} implies
\begin{equation} \label{Z:nf:bd}
\vn{ \mathcal N_{lm} (P_{\ell'}^{\omega_1} \bar{Q}_{< j}^s \phi_{k_{1}}^1, P_{\ell}^{\omega_2}\bar{Q}_{< j}^{s_2} \phi_{k_{2}}^2) }_{L^1 L^{\infty}} \lesssim 2^{\ell} \vn{P_{\ell'}^{\omega_1} \bar{Q}_{< j}^s \nabla \phi_{k_{1}}^1}_{L^2 L^{\infty}} \vn{P_{\ell}^{\omega_2}\bar{Q}_{< j}^{s_2} \nabla \phi_{k_{2}}^2}_{L^2 L^{\infty}}
\end{equation}
For a fixed $\omega_{0}$ [resp. $\omega_{1}$], there is only a (uniformly) bounded number of $\omega_{1}, \omega_{2}$ [resp. $\omega_{0}, \omega_{2}$] such that the summand is nonzero. Summing first in $\omega_{2}$ (finitely many terms), then performing the (essentially diagonal) summation in $\omega_{0}, \omega_{1}$, we obtain
$$ \big( \sum_{\omega_0} \text{LHS} \eqref{Z:nf:bd}^2 \big)^\frac{1}{2} \lesssim
2^{\ell} \big( \sum_{\omega_1} \vn{P_{\ell'}^{\omega_1} \bar{Q}_{< j}^s \nabla \phi_{k_{1}}^1}_{L^2 L^{\infty}}^2 \big)^{\frac{1}{2}} \sup_{\omega_2} \vn{P_{\ell}^{\omega_2}\bar{Q}_{< j}^{s_2} \nabla \phi_{k_{2}}^2}_{L^2 L^{\infty}}
$$
Keeping track of derivatives and dyadic factors, recalling \eqref{Z:norm:def} and using \eqref{fe-L2Linfty} for $ \phi^1, \phi^2 $, we obtain
$$ \vn{\Box^{-1} I_0}_{Z_{k'}} \lesssim \sum_{j<k_2} 2^{\frac{1}{4}(j-k_2)} 2^{k_2-k'} \nrm{\phi^{1}_{k_1}}_{\bar{S}^{\sigma}_{k_1}} \nrm{\phi^{2}_{k_2}}_{\bar{S}^{\sigma}_{k_2}} $$
This completes the proof of \eqref{Z:norm2}.
\subsection{Proof of \eqref{AHH:highdim}}
The low-high part of the estimate for $ {\bf A}_x ( \phi^{1}, \phi^{2}) $ follows from \eqref{Z:norm2}. For the high-high parts of both $ {\bf A}_x ( \phi^{1}, \phi^{2}) $ and $ {\bf A}_0 ( \phi^{1}, \phi^{2}) $ we fix the frequency and use \eqref{ZZ:emb}, H\"older's inequality $ L^2 L^4 \times L^2 L^4 \to L^1 L^2 $ together with $ L^2 L^4 $ Strichartz inequalities. We gain the factor $ 2^{\frac{d-4}{2} (k_{\min}-k_{\max})} $ which suffices to do the summation in the present case $ d \geq 5 $.
\section{Proof of the trilinear estimate \eqref{est:trilin}} \label{Trilinear:section}
This section is devoted to the proof of Proposition \ref{trilinear}.
\subsection{Proof of Proposition \ref{trilinear}} Our goal is to prove
$$ \vn{\pi[{\bf A}( \phi^{1}, \phi^{2}) ] \phi}_{\bar{N}^{\sigma-1} } \lesssim \vn{\phi^1}_{\bar{S}^{\sigma} } \vn{\phi^2}_{\bar{S}^{\sigma}} \vn{\phi}_{\bar{S}^{\sigma}} $$
First we note that (recalling definition \eqref{a0:decomp}) \eqref{est:phi7} together with \eqref{A0:lh} implies
$$ \vn{\pi[0,{\bf A}_{0}^{LH}( \phi^{1}, \phi^{2}) ] \phi}_{\bar{N}^{\sigma-1} } \lesssim \vn{\phi^1}_{\bar{S}^{\sigma} } \vn{\phi^2}_{\bar{S}^{\sigma}} \vn{\phi}_{\bar{S}^{\sigma}} $$
Secondly, \eqref{Ax:A0:hh} and \eqref{est:phi2} imply
$$ \vn{(I-\mathcal H^{\ast}) \pi[({\bf A}_{x},{\bf A}_{0}^{HH})( \phi^{1}, \phi^{2}) ] \phi}_{\bar{N}^{\sigma-1} } \lesssim \vn{\phi^1}_{\bar{S}^{\sigma} } \vn{\phi^2}_{\bar{S}^{\sigma}} \vn{\phi}_{\bar{S}^{\sigma}} $$
For $ d \geq 5 $, \eqref{est:phi3} and \eqref{AHH:highdim} imply
$$
\vn{\mathcal H^{\ast} \pi[({\bf A}_{x},{\bf A}_{0}^{HH})( \phi^{1}, \phi^{2}) ] \phi}_{\bar{N}^{\sigma-1} } \lesssim \vn{\phi^1}_{\bar{S}^{\sigma} } \vn{\phi^2}_{\bar{S}^{\sigma}} \vn{\phi}_{\bar{S}^{\sigma}}
$$
which concludes the proof in the case $ d \geq 5 $. In the remainder of this section we assume $ d=4 $, $ \sigma=1 $.
Next we use \eqref{est:phi3} together with \eqref{eq:axr-Z} and obtain
$$
\vn{\mathcal H^{\ast} \pi[(I - \mathcal H)({\bf A}_{x},{\bf A}_{0}^{HH})( \phi^{1}, \phi^{2}) ] \phi}_{\bar{N}} \lesssim \vn{\phi^1}_{\bar{S}^1} \vn{\phi^2}_{\bar{S}^1} \vn{\phi}_{\bar{S}^1}
$$
Since $ \mathcal H {\bf A}_{0}= \mathcal H {\bf A}_{0}^{HH} $ it remains to consider $ \mathcal H^{\ast} \pi[\mathcal H {\bf A} ] \phi $ which we write using \eqref{Q:dec}, \eqref{Q:dec2} as
$$ \mathcal H^{\ast} \pi[\mathcal H {\bf A}( \phi^{1}, \phi^{2}) ] \phi =\mathcal{Q}^1+\mathcal{Q}^2+\mathcal{Q}^3 $$
where
\begin{align*}
\mathcal{Q}^1 :=& \mathcal H^{\ast} ( \Box^{-1} \mathcal H \mathfrak{I} (\phi^1 \partial_{\alpha} \bar{\phi^2})\cdot \partial^{\alpha}\phi) , \\
\mathcal{Q}^2 :=& - \mathcal H^{\ast} ( \mathcal H \Delta^{-1} \Box^{-1} \partial_t \partial^{\alpha} \mathfrak{I} (\phi^1 \partial_{\alpha} \bar{\phi^2})\cdot \partial_{t}\phi ), \\
\mathcal{Q}^3 :=& - \mathcal H^{\ast} ( \mathcal H \Delta^{-1} \Box^{-1} \partial_{\alpha} \partial^i \mathfrak{I} (\phi^1 \partial_{i} \bar{\phi^2})\cdot \partial^{\alpha}\phi ).
\end{align*}
and it remains to prove
\begin{equation} \label{eq:tri-Q}
\vn{\mathcal{Q}^i(\phi^1,\phi^2,\phi)}_{\bar{N}} \lesssim \vn{\phi^1}_{\bar{S}^1} \vn{\phi^2}_{\bar{S}^1} \vn{\phi}_{\bar{S}^1}
, \qquad i=1,2,3; \ (d=4).
\end{equation}
\
\subsection{Proof of \eqref{eq:tri-Q} for $ \mathcal{Q}^1 $}
Fix $ k, k_1, k_2 \geq 0 $ and let $ k_{\min}=\min(k,k_1,k_2) \geq 0 $. The estimate follows from
\begin{equation} \label{Q1:trilinear}
\vn{ \sum_{k' < k_{\min}-C} \sum_{j<k'+C} \mathcal{Q}^1_{j,k'} (\phi^1_{k_1},\phi^2_{k_2},\phi_k)}_{N_k} \lesssim \vn{\phi^1_{k_1}}_{\bar{S}^1_{k_1}} \vn{\phi^2_{k_2}}_{\bar{S}^1_{k_2}} \vn{\phi_{k}}_{\bar{S}^1_{k}}
\end{equation}
by summing in $ k_1=k_2+O(1) $, where
$$ \mathcal{Q}^1_{j,k'} (\phi^1_{k_1},\phi^2_{k_2},\phi_k)= \bar{Q}_{<j}[P_{k'}Q_j \Box^{-1} (\bar{Q}_{<j}\phi^1_{k_1} \partial_{\alpha} \bar{Q}_{<j} \phi^2_{k_2}) \cdot \partial^{\alpha} \bar{Q}_{<j} \phi_k]. $$
Define $ l \in [-k_{\min}, C] $ by $ j=k'+2l $ which implies $ \angle(\phi_k, P_{k'}A), \angle(\phi^2_{k_2}, P_{k'}A) \lesssim 2^l $. When $ k_{\min}=0 $ we may set $ l=0 $ and similarly for $ l_0 $ below.
In proving \eqref{Q1:trilinear}, we make the normalization
\begin{equation} \label{normalz}
\vn{\phi^1_{k_1}}_{\bar{S}_{k_1}^1}=1, \qquad \vn{\phi^2_{k_2}}_{\bar{S}^1_{k_2}}=1,\qquad \vn{\phi_{k}}_{\bar{S}^1_{k}}=1.
\end{equation}
Since we have a null form between $ \phi^2 $ and $ \phi $ we use a bilinear partition of unity based on their angular separation:
$$ \mathcal{Q}^1_{j,k'}=\sum_{l_0+C<l'<l} \sum_{\substack{\omega_1,\omega_2 \in \Gamma(l') \\ \angle(\omega_1,\omega_2) \simeq 2^{l'} }} \mathcal{Q}^1_{j,k'} (\phi^1_{k_1},P_{l'}^{\omega_2} \phi^2_{k_2},P_{l'}^{\omega_1} \phi_k)+ \sum_{\substack{\omega_1,\omega_2 \in \Gamma(l_0) \\ \angle(\omega_1,\omega_2) \lesssim 2^{l_0} } } \mathcal{Q}^1_{j,k'} (\phi^1_{k_1},P_{l_0}^{\omega_2} \phi^2_{k_2},P_{l_0}^{\omega_1} \phi_k)
$$
where $ l_0 \vcentcolon= \max(-k_{\min},l+k'-k_{\min},\frac{1}{2}(j-k_{\min})) $ and the angle $ \angle(\omega_1,\omega_2) $ is taken $ \mod \pi $. Notice that the sums in $ \omega_1,\omega_2 $ are essentially diagonal. In each summand, we may insert $ P_l^{[\omega_1]} $ in front of $ P_{k'}Q_j \Box^{-1} $, where $ P_l^{[\omega_1]} $ is uniquely determined (up to $ O(1)$) by $ \omega_1 $ (or $ \omega_2 $).
For the first sum, when $ k_{\min}>0 $, we will prove that for any $ l' \in [l_0+C,l] $
\begin{equation} \label{NullFramesEstimate}
\sum_{\omega_1,\omega_2} \vn{\mathcal{Q}^1_{j,k'} (\phi^1_{k_1},P_{l'}^{\omega_2} \phi^2_{k_2},P_{l'}^{\omega_1} \phi_k)}_{L^1L^2} \lesssim 2^{\frac{1}{4}(l'+l)} 2^{\frac{1}{2}(k'-k_2)}
\end{equation}
by employing the null-frame estimate in Corollary \ref{L2:NFnullFrames:cor}, which takes advantage of the angular separation. Summing in $ l',j,k' $ we obtain part of \eqref{Q1:trilinear}.
At small angles, however, one does not control the null-frame norms for Klein-Gordon waves, and the null form gives only a limited gain. We consider two cases.
For $ j \geq -k_{\min} $ we sum the following in $ j,k' $
\begin{equation} \label{SmallAnglesLargeMod}
\sum_{\omega_1,\omega_2} \vn{ \mathcal{Q}^1_{j,k'} (\phi^1_{k_1},P_{l_0}^{\omega_2} \phi^2_{k_2},P_{l_0}^{\omega_1} \phi_k) }_{L^1L^2} \lesssim 2^l 2^{k'-k_{\min}}
\end{equation}
When $ j \leq -k_{\min} $ (thus $ k' \leq -k_{\min}-2l $ and $ l_0=-k_{\min} $) the operator $ P_{k'}Q_j \Box^{-1} $ becomes more singular and we encounter a logarithmic divergence if we try to sum $ k', j $ outside the norm in \eqref{Q1:trilinear}. We proceed as follows. We write
$$ \mathcal{Q}^1_{j,k'} (\phi^1_{k_1},P_{l_0}^{\omega_2} \phi^2_{k_2},P_{l_0}^{\omega_1} \phi_k)= \bar{Q}_{<j}[P_l^{[\omega_1]} P_{k'}Q_j \Box^{-1} (\bar{Q}_{<j}\phi^1_{k_1} \partial_{\alpha} \bar{Q}_{<j} P_{l_0}^{\omega_2} \phi^2_{k_2}) \cdot \partial^{\alpha} \bar{Q}_{<j}P_{l_0}^{\omega_1} \phi_k] $$
We define
$$ \tilde{\mathcal{Q}}^{\omega_i}_{j,k'}=P_l^{[\omega_1]} P_{k'}Q_j \Box^{-1} (\phi^1_{k_1} \partial_{\alpha} \bar{Q}_{<k_{2}+2l_0} P_{l_0}^{\omega_2} \phi^2_{k_2}) \cdot \partial^{\alpha} \bar{Q}_{<k+2l_0}P_{l_0}^{\omega_1} \phi_k $$
and we shall prove, using the embeddings in Prop. \ref{Box:Embedding}, that for any $ l \in [-k_{\min}, C] $
\begin{equation} \label{SmallAngleSmallMod}
\sum_{\omega_1,\omega_2} \vn{\sum_{k' \leq -k_{\min}-2l } \tilde{\mathcal{Q}}^{\omega_i}_{k'+2l,k'}}_{L^1L^2} \lesssim 2^{-\frac{1}{2}(l+k_{\min})} ,
\end{equation}
which sums up (in $ l $) towards the rest of \eqref{Q1:trilinear} except for the remainders
$$ \tilde{\mathcal{Q}}^{\omega_i}_{j,k'}- \mathcal{Q}^1_{j,k'} (\phi^1_{k_1},P_{l_0}^{\omega_2} \phi^2_{k_2},P_{l_0}^{\omega_1} \phi_k)=\mathcal{R}^{1,\omega_i}_{j,k'}+\mathcal{R}^{2,\omega_i}_{j,k'}+\mathcal{R}^{3,\omega_i}_{j,k'}+\mathcal{R}^{4,\omega_i}_{j,k'} $$
for which we have
\begin{equation} \label{SmallAngleRemainders}
\sum_{\omega_1,\omega_2} \vn{\mathcal{R}^{i,\omega}_{j,k'}}_{N_k} \lesssim 2^{\frac{l}{2}} 2^{\frac{1}{2}(k'-k_2)} , \qquad i=1,\dots,4
\end{equation}
where
\begin{align*}
\mathcal{R}^{1,\omega_i}_{j,k'} & := \bar{Q}_{>j}[ P_l^{[\omega_1]} P_{k'}Q_j \Box^{-1} (\bar{Q}_{<j}\phi^1_{k_1} \partial_{\alpha} \bar{Q}_{<j} P_{l_0}^{\omega_2} \phi^2_{k_2}) \cdot \partial^{\alpha} \bar{Q}_{<j}P_{l_0}^{\omega_1} \phi_k] , \\
\mathcal{R}^{2,\omega_i}_{j,k'} & := P_l^{[\omega_1]} P_{k'}Q_j \Box^{-1} ( \bar{Q}_{<j}\phi^1_{k_1} \partial_{\alpha} \bar{Q}_{<j} P_{l_0}^{\omega_2} \phi^2_{k_2}) \cdot \partial^{\alpha} \bar{Q}_{[j,k-2k_{\min}]}P_{l_0}^{\omega_1} \phi_k , \\
\mathcal{R}^{3,\omega_i}_{j,k'} & := P_l^{[\omega_1]} P_{k'}Q_j \Box^{-1} ( \bar{Q}_{>j}\phi^1_{k_1} \partial_{\alpha} \bar{Q}_{<j} P_{l_0}^{\omega_2} \phi^2_{k_2}) \cdot \partial^{\alpha} \bar{Q}_{<k-2k_{\min}}P_{l_0}^{\omega_1} \phi_k , \\
\mathcal{R}^{4,\omega_i}_{j,k'} & := P_l^{[\omega_1]} P_{k'}Q_j \Box^{-1} ( \phi^1_{k_1} \partial_{\alpha} \bar{Q}_{[j,k_2-2k_{\min}]} P_{l_0}^{\omega_2} \phi^2_{k_2}) \cdot \partial^{\alpha} \bar{Q}_{<k-2k_{\min}}P_{l_0}^{\omega_1} \phi_k. \\
\end{align*}
Summing in $ j,k' $ we obtain the rest of \eqref{Q1:trilinear}.
\subsection{Proof of \eqref{NullFramesEstimate} and \eqref{SmallAnglesLargeMod}}
We are in the case $ k_1=k_2+O(1) $, $ k=\tilde{k}+O(1) $, $ k'+C<k_{\min}=\min(k,k_1,k_2) > 0 $, $ j=k'+2l $. We prove \footnote{Notice that this case does not occur when $ k_{\min}=0 $.}
\begin{multline*}
| \langle P_l^{[\omega_1]} P_{k'}Q_j \Box^{-1} (\bar{Q}_{<j}\phi^1_{k_1} \partial_{\alpha} \bar{Q}_{<j} P_{l'/l_0}^{\omega_2} \phi^2_{k_2}) , \tilde{P}_{k'} \tilde{Q}_j ( \partial^{\alpha} \bar{Q}_{<j} P_{l'/l_0}^{\omega_1} \phi_k \cdot \bar{Q}_{<j} \psi_{\tilde{k}} ) \rangle | \\
\lesssim M_{\omega_1,\omega_2} \vn{\psi_{\tilde{k}}}_{L^{\infty} L^2}
\end{multline*}
where $ M_{\omega_1,\omega_2} $ will be defined below.
The two products above are summed over diametrically opposed boxes $ \pm \mathcal C $ [resp. $ \pm \mathcal C' $ ] of size $ \simeq 2^{k'} \times (2^{k'+l})^3 $ included in the angular caps $ \mathcal C_{l'/l_0}^{\omega_2} $ [resp. $\mathcal C_{l'/l_0}^{\omega_1}$] where $ P_{l'/l_0}^{\omega_2} $ [resp. $ P_{l'/l_0}^{\omega_1} $] are supported (Lemma \ref{geom:cone}).
Note that $ 2^{j+k'} P_l^{[\omega_1]} P_{k'}Q_j \Box^{-1} $ acts by convolution with an integrable kernel. By a simple argument based on translation-invariance we may dispose of this operator (after first performing the $ \mathcal C, \mathcal C' $ summation).
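Indeed, since $ j \leq k'+O(1) $, on the Fourier support of $ P_l^{[\omega_1]} P_{k'}Q_j $ one has
$$ \vm{\tau^2-\vm{\xi}^2} = \big| \vm{\tau}-\vm{\xi} \big| \, \big( \vm{\tau}+\vm{\xi} \big) \simeq 2^{j} 2^{k'}, $$
so the symbol of $ 2^{j+k'} P_l^{[\omega_1]} P_{k'}Q_j \Box^{-1} $ is bounded and smooth at the scales of its support, which gives the integrable kernel.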
{\bf Step~1:~Proof of \eqref{NullFramesEstimate}}
In this case the null form gains $ 2^{2l'} $. Under the normalization \eqref{normalz}, it suffices to show
\begin{equation} \label{NullFramesEstimateDual}
2^{-j-k'} | \langle (\bar{Q}_{<j}\phi^1_{k_1} \partial_{\alpha} \bar{Q}_{<j}^{\pm \pm'} P_{l'}^{\omega_2} \phi^2_{k_2}) , ( \partial^{\alpha} \bar{Q}_{<j}^{\pm} P_{l'}^{\omega_1} \phi_k \cdot \bar{Q}_{<j} \psi_{\tilde{k}} ) \rangle | \lesssim M_{\omega_1,\omega_2} \vn{\psi_{\tilde{k}}}_{L^{\infty} L^2} \end{equation}
\begin{equation} \label{SumedNullFramesEstimateDual}
\sum_{\omega_1,\omega_2} M_{\omega_1,\omega_2} \lesssim 2^{\frac{1}{4}(l'+l)} 2^{\frac{1}{2}(k'-k_2)} \end{equation}
where $ \angle(\omega_1, \pm' \omega_2 ) \simeq 2^{l'} $. We write $ 2^{j+k'} \text{LHS} \eqref{NullFramesEstimateDual} \lesssim $
\begin{align*}
& \int \sum_{\substack{ \mathcal C \subset \mathcal C_{l'}^{\omega_2} \\ \mathcal C' \subset \mathcal C_{l'}^{\omega_1} }} \vn{P_{-\mathcal C} \bar{Q}_{<j}\phi^1_{k_1}}_{L^{\infty}_x} \vn{\partial_{\alpha} P_{\mathcal C} \bar{Q}_{<j}^{\pm \pm'}\phi^2_{k_2} \partial^{\alpha} P_{\mathcal C'} \bar{Q}_{<j}^{\pm} \phi_{k}}_{L^2_x} \vn{P_{-\mathcal C'} \bar{Q}_{<j} \psi_{\tilde{k}} }_{L^2_x} (t) \,\mathrm{d} t \lesssim \\
& \int (\sum_{\mathcal C \subset \mathcal C_{l'}^{\omega_2}} \vn{P_{-\mathcal C} \bar{Q}_{<j}\phi^1_{k_1}}_{L^{\infty}_x}^2)^{\frac{1}{2}} ( \sum_{\substack{ \mathcal C \subset \mathcal C_{l'}^{\omega_2} \\ \mathcal C' \subset \mathcal C_{l'}^{\omega_1} }}\vn{\partial_{\alpha} P_{\mathcal C} \bar{Q}_{<j}^{\pm \pm'} \phi^2_{k_2} \partial^{\alpha} P_{\mathcal C'} \bar{Q}_{<j}^{\pm} \phi_{k}}_{L^2_x}^2 )^{\frac{1}{2}} \vn{ \bar{Q}_{<j} \psi_{\tilde{k}} }_{L^2_x} (t) \,\mathrm{d} t \\
& \lesssim (\sum_{\mathcal C \subset \mathcal C_{l'}^{\omega_2}} \vn{P_{-\mathcal C} \bar{Q}_{<j}\phi^1_{k_1}}_{L^2 L^{\infty}}^2)^{\frac{1}{2}} \ \mathcal{I}_{\omega_1,\omega_2}(l') \vn{ \bar{Q}_{<j} \psi_{\tilde{k}} }_{L^{\infty} L^2}
\end{align*}
where, using Corollary \ref{L2:NFnullFrames:cor},
\begin{align*} \mathcal{I}_{\omega_1,\omega_2}(l')^2 & \vcentcolon= \sum_{\mathcal C \subset \mathcal C_{l'}^{\omega_2}} \sum_{\mathcal C' \subset \mathcal C_{l'}^{\omega_1}} \vn{\partial_{\alpha} P_{\mathcal C} \bar{Q}_{<j}^{\pm \pm'} \phi^2_{k_2} \cdot \partial^{\alpha} P_{\mathcal C'} \bar{Q}_{<j}^{\pm} \phi_{k}}_{L^2_{t,x}}^2 \\
& \lesssim 2^{l'} ( \sum_{\mathcal C \subset \mathcal C_{l'}^{\omega_2}} \vn{P_{\mathcal C} \bar{Q}_{<j}^{\pm \pm'} \nabla_{t,x}\phi^2_{k_2}}_{PW_\mathcal C^{{\pm \pm'}}}^2 ) ( \sum_{\mathcal C' \subset \mathcal C_{l'}^{\omega_1}} \vn{ P_{\mathcal C'} \bar{Q}_{<j}^{\pm} \nabla_{t,x} \phi_{k}}_{NE_{\mathcal C'}^{\pm}}^2).
\end{align*}
Thus in \eqref{NullFramesEstimateDual} we may take
$$ M_{\omega_1,\omega_2}=2^{-j-k'} (\sum_{\mathcal C \subset \mathcal C_{l'}^{\omega_2}} \vn{P_{-\mathcal C} \bar{Q}_{<j}\phi^1_{k_1}}_{L^2 L^{\infty}}^2)^{\frac{1}{2}} \ \mathcal{I}_{\omega_1,\omega_2}(l') $$
and, summing in $ \omega_2 $ (the $ \omega_1$ sum is redundant) using Cauchy-Schwarz, we have
$$ \sum_{\omega_1,\omega_2} M_{\omega_1,\omega_2} \lesssim 2^{-2l-2k'} \cdot 2^{l'} \cdot 2^{\frac{1}{2}l}2^{k'}2^{-\frac{1}{2}k_1} \cdot 2^{\frac{3}{2}(k'+l)} $$
which implies \eqref{SumedNullFramesEstimateDual}.
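Indeed, using $ j=k'+2l $ and $ k_1=k_2+O(1) $, the exponents collect to
$$ (-2l-2k')+l'+\big( \tfrac{1}{2}l+k'-\tfrac{1}{2}k_1 \big) +\tfrac{3}{2}(k'+l) = l'+\tfrac{1}{2}(k'-k_1), $$
and $ 2^{l'} \lesssim 2^{\frac{1}{4}(l'+l)} $ since $ l' \leq l \leq C $.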
{\bf Step~2:~Proof of \eqref{SmallAnglesLargeMod}} Here $ j \geq-k_{\min} $.
In this case the null form gains $ 2^{j-k_{\min}} $ and
$$ 2^{l_0}=\max( 2^{-k_{\min}},2^l 2^{k'-k_{\min}}, 2^{\frac{1}{2}(j-k_{\min})}) \leq 2^l $$
(each term is $ \leq 2^l $ since $ l \geq -k_{\min} $, $ k'<k_{\min} $ and $ j=k'+2l $).
By Prop. \ref{N0:form} and Remark \ref{NF:remark} it suffices to prove, under \eqref{normalz}
\begin{multline} \label{SmallAnglesLargeModDual}
2^{j-k_{\min}} | \langle P_l^{[\omega_1]} P_{k'}Q_j \Box^{-1} (\bar{Q}_{<j}\phi^1_{k_1} \nabla_{t,x} \bar{Q}_{<j} P_{l_0}^{\omega_2} \phi^2_{k_2}) , \tilde{P}_{k'} \tilde{Q}_j ( \nabla_{t,x} \bar{Q}_{<j}P_{l_0}^{\omega_1} \phi_k \cdot \bar{Q}_{<j} \psi_{\tilde{k}} ) \rangle | \\
\lesssim M_{\omega_1,\omega_2} \vn{\psi_{\tilde{k}}}_{L^{\infty} L^2}, \qquad \sum_{\omega_1,\omega_2} M_{\omega_1,\omega_2} \lesssim 2^l 2^{k'-k_{\min}}.
\end{multline}
We have
\begin{align*} \text{LHS} \ \eqref{SmallAnglesLargeModDual} \lesssim & 2^{-k_{\min}-k'} 2^{k_2} \sum_{\mathcal C \subset \mathcal C_{l_0}^{\omega_2}} \vn{P_{\mathcal C} \bar{Q}_{<j}\phi^1_{k_1}}_{L^2 L^{\infty}} \vn{P_{-\mathcal C} \bar{Q}_{<j}\phi^2_{k_2}}_{L^2 L^{\infty}} \\
& \times \sup_{t} \sum_{\mathcal C' \subset \mathcal C_{l_0}^{\omega_1}} \vn{P_{\mathcal C'} \bar{Q}_{<j} \nabla \phi_{k}(t)}_{L^2_x} \vn{P_{-\mathcal C'} \bar{Q}_{<j} \psi_{\tilde{k}} (t) }_{L^2_x} \lesssim M_{\omega_1,\omega_2} \vn{\psi_{\tilde{k}}}_{L^{\infty} L^2}
\end{align*}
where, for each $ t $, we have used Cauchy-Schwarz and orthogonality, and where
$$ M_{\omega_1,\omega_2}= 2^{-k_{\min}-k'} 2^{k_2} ( \sum_{\mathcal C \subset \mathcal C_{l_0}^{\omega_2}} \vn{P_{\mathcal C} \bar{Q}_{<j}\phi^1_{k_1}}_{L^2 L^{\infty}} ^2)^{\frac{1}{2}} ( \sum_{\mathcal C \subset \mathcal C_{l_0}^{\omega_2}} \vn{P_{-\mathcal C} \bar{Q}_{<j}\phi^2_{k_2}}_{L^2 L^{\infty}}^2)^{\frac{1}{2}} \vn{\nabla \phi_{k}}_{L^{\infty} L^2}
$$
Summing in $ \omega_2 $ (the $ \omega_1$ sum is redundant) using Cauchy-Schwarz and \eqref{fe-L2Linfty}, \eqref{fe-LinfL2}, we get \eqref{SmallAnglesLargeModDual}.
\subsection{Proof of \eqref{SmallAngleSmallMod} } Recall that $ l \in [-k_{\min},C], \ k_{\min}=\min(k,k_1,k_2) \geq 0 $ and $ k_1=k_2+O(1) $ are fixed. We are in the case $ k'+2l=j \leq -k_{\min} $, thus $ l_0=-k_{\min} $, i.e. $ \angle (\omega_1,\omega_2) \lesssim 2^{-k_{\min}} $.
By Prop. \ref{N0:form} and Remark \ref{NF:remark} the null form gains $ 2^{-2k_{\min}} $. We can apply that proposition because $ \bar{Q}_{<k_i-2k_{\min}}=Q_{<k_i-2k_{\min}+C} \bar{Q}_{<k_i-2k_{\min}} $.
We will apply Prop. \ref{Box:Embedding}. For $ M=-k_{\min}-2l $, we write
\begin{equation} \label{interm:sum}
\sum_{k' \leq M} \tilde{\mathcal{Q}}^{\omega_i}_{k'+2l,k'}=\sum_{\pm} T_{l}^{[\omega_1]} ( \phi^1_{k_1} \partial_{\alpha} \bar{Q}_{<k_{2}+2l_0} P_{l_0}^{\omega_2} \phi^2_{k_2}) \cdot \partial^{\alpha} \bar{Q}_{<k+2l_0}P_{l_0}^{\omega_1} \phi_k
\end{equation}
We consider two cases.
{\bf Case~1:~ $ k_{\min}=k_2+O(1) $.} The null form gains $ 2^{-2k_2} $. We have
$$ \vn{\eqref{interm:sum}}_{L^1L^2} \lesssim 2^{-2k_2} \vn{T_{l}^{[\omega_1]} (\phi^1_{k_1} \bar{Q}_{<k_{2}+2l_0} P_{l_0}^{\omega_2} \nabla\phi^2_{k_2})}_{L^1 L^{\infty}} \vn{P_{l_0}^{\omega_1} \bar{Q}_{<k+2l_0} \nabla \phi_k}_{L^{\infty} L^2}. $$
Using \eqref{L1Linf:emb} and \eqref{Lorentz:Holder} we have
$$ \vn{T_{l}^{[\omega_1]} (\phi^1_{k_1} \bar{Q}_{<k_{2}+2l_0} P_{l_0}^{\omega_2} \nabla\phi^2_{k_2})}_{L^1 L^{\infty}} \lesssim 2^{-\frac{1}{2}l} 2^{k_2} \vn{\phi^1_{k_1}}_{L^2 L^{4,2}} \vn{P_{l_0}^{\omega_2}\bar{Q}_{<k_{2}+2l_0} \phi^2_{k_2}}_{L^2 L^{4,2}} $$
Summing (diagonally) in $ \omega_1, \omega_2 $ we obtain \eqref{SmallAngleSmallMod} since $ \vn{\phi^1_{k_1}}_{L^2 L^{4,2}} \lesssim 2^{-\frac{1}{4}k_1} \vn{\phi^1_{k_1}}_{\bar{S}^1_{k_1}} $,
$$
\big( \sum_{\omega} \vn{P_{l_0}^{\omega}\bar{Q}_{<k_{2}+2l_0} \phi^2_{k_2}}_{L^2 L^{4,2}}^2 \big) ^{\frac{1}{2}} \lesssim 2^{-\frac{1}{4}k_2} \vn{\phi^2_{k_2}}_{\bar{S}^1_{k_2}} $$
\begin{equation} \label{fe:sum:LinfL2}
\big( \sum_{\omega} \vn{P_{l_0}^{\omega} \bar{Q}_{<k+2l_0} \nabla_{t,x} \phi_k}_{L^{\infty} L^2}^2 \big) ^{\frac{1}{2}} \lesssim \vn{\phi_{k}}_{\bar{S}^1_{k}} .
\end{equation}
{\bf Case~2:~ $ k_{\min}=k $.} Now the null form gains $ 2^{-2k} $, so we can put $ \phi_k $ in $ L^2 L^4 $.
$$ \vn{\eqref{interm:sum}}_{L^1L^2} \lesssim 2^{-2k} \vn{T_{l}^{[\omega_1]} (\phi^1_{k_1} \bar{Q}_{<k_{2}+2l_0} P_{l_0}^{\omega_2} \nabla\phi^2_{k_2})}_{L^2 L^4} \vn{P_{l_0}^{\omega_1} \bar{Q}_{<k+2l_0} \nabla \phi_k}_{L^2 L^4}. $$
Using \eqref{L2L4:emb} and H\"older's inequality we have
$$ \vn{T_{l}^{[\omega_1]} (\phi^1_{k_1} \bar{Q}_{<k_{2}+2l_0} P_{l_0}^{\omega_2} \nabla\phi^2_{k_2})}_{L^2 L^4} \lesssim 2^{-\frac{1}{2}l} 2^{k_2} \vn{\phi^1_{k_1}}_{L^4 L^{\frac{8}{3}}} \vn{P_{l_0}^{\omega_2}\bar{Q}_{<k_{2}+2l_0} \phi^2_{k_2}}_{L^4 L^{\frac{8}{3}}} $$
Summing (diagonally) in $ \omega_1, \omega_2 $ we obtain \eqref{SmallAngleSmallMod} since $ \vn{\phi^1_{k_1}}_{L^4 L^{\frac{8}{3}}} \lesssim 2^{-\frac{5}{8}k_1} \vn{\phi^1_{k_1}}_{\bar{S}^1_{k_1}} $,
$$
\big( \sum_{\omega} \vn{P_{l_0}^{\omega}\bar{Q}_{<k_{2}+2l_0} \phi^2_{k_2}}_{L^4 L^{\frac{8}{3}}}^2 \big) ^{\frac{1}{2}} \lesssim 2^{-\frac{5}{8}k_2} \vn{\phi^2_{k_2}}_{\bar{S}^1_{k_2}}
$$
$$ \big( \sum_{\omega} \vn{P_{l_0}^{\omega} \bar{Q}_{<k+2l_0} \nabla_{t,x} \phi_k}_{L^{2} L^4}^2 \big) ^{\frac{1}{2}} \lesssim 2^{\frac{3}{4}k} \vn{\phi_{k}}_{\bar{S}^1_{k}}.
$$
\subsection{Proof of \eqref{SmallAngleRemainders}} By Prop. \ref{N0:form} and Remark \ref{NF:remark} the null form gains $ 2^{-2k_{\min}} $.
{\bf Step~1: $\mathcal{R}^{1} $ and $\mathcal{R}^{2} $.} Denoting
$$ h^{\omega_i}=P_l^{[\omega_1]} P_{k'}Q_j \Box^{-1} ( \bar{Q}_{<j}\phi^1_{k_1} \partial_{\alpha} \bar{Q}_{<j} P_{l_0}^{\omega_2} \phi^2_{k_2}), $$
we estimate using Bernstein and Prop. \ref{prop:no-nf}
$$ \vn{h^{\omega_i}}_{L^2 L^{\infty}} \lesssim 2^{2k'+\frac{3}{2}l} \vn{h^{\omega_i}}_{L^2_{t,x}} \lesssim 2^{-j-k'} 2^{2k'+\frac{3}{2}l} \big(\sum_{\mathcal C} \vn{P_{\mathcal C} \bar{Q}_{<j} \phi^1_{k_1}}_{L^2 L^{\infty}}^2 \big)^{\frac{1}{2}} \vn{P_{l_0}^{\omega_2} \bar{Q}_{<j} \nabla \phi^2_{k_2}}_{L^{\infty} L^2} $$
where $ \mathcal C=C_{k'}(l) $. Using the $ \bar{X}^{-\frac{1}{2}}_1 $ norm, we have
$$ \vn{\mathcal{R}^{1,\omega_i}_{j,k'}}_{N_k} \lesssim 2^{-\frac{j}{2}} 2^{-2k_{\min}} \vn{h^{\omega_i}}_{L^2 L^{\infty}} \vn{P_{l_0}^{\omega_1} \bar{Q}_{<j}\nabla \phi_k}_{L^{\infty} L^2} $$
$$ \vn{\mathcal{R}^{2,\omega_i}_{j,k'}}_{L^1 L^2} \lesssim 2^{-2k_{\min}} \vn{h^{\omega_i}}_{L^2 L^{\infty}} \vn{P_{l_0}^{\omega_1} \bar{Q}_{[j,k-2k_{\min}]} \nabla \phi_k}_{L^2_{t,x}} $$
Summing in $ \omega_1,\omega_2 $, we obtain \eqref{SmallAngleRemainders} for $ \mathcal{R}^{1} $, $\mathcal{R}^{2} $ by using \eqref{fe-L2Linfty} for $ \phi^1 $ and \eqref{fe:sum:LinfL2} for $ \phi^2 $ (first introducing $ \bar{Q}_{<k_2+2l_0} $ and discarding $ \bar{Q}_{<j} $), together with \eqref{fe:sum:LinfL2} (resp. \eqref{fe-L2}) for $ \phi $. We also use $ 2^{-k_{\min}} \lesssim 2^l $.
{\bf Step~2:~ $\mathcal{R}^{3} $ and $\mathcal{R}^{4} $.} We denote
\begin{align*} h^{\omega_{1,2}}_3 &= P_l^{[\omega_1]} P_{k'}Q_j \Box^{-1} ( \bar{Q}_{>j}\phi^1_{k_1} \partial_{\alpha} \bar{Q}_{<j} P_{l_0}^{\omega_2} \phi^2_{k_2}) , \\
h^{\omega_{1,2}}_4 &=P_l^{[\omega_1]} P_{k'}Q_j \Box^{-1} ( \phi^1_{k_1} \partial_{\alpha} \bar{Q}_{[j,k_2-2k_{\min}]} P_{l_0}^{\omega_2} \phi^2_{k_2}).
\end{align*}
For $ i=3,4 $ we have
$$ \vn{\mathcal{R}^{i,\omega}_{j,k'}}_{L^1 L^2} \lesssim 2^{-2k_{\min}} \vn{h^{\omega_{1,2}}_{i}}_{L^1 L^{\infty}} \vn{P_{l_0}^{\omega_1} \bar{Q}_{<k-2k_{\min}} \nabla \phi_k}_{L^{\infty} L^2}
$$
We estimate using Prop. \ref{prop:no-nf}
$$ \vn{h^{\omega_{1,2}}_3}_{L^1 L^{\infty}} \lesssim 2^{2k'+\frac{3}{2}l} \vn{h^{\omega_{1,2}}_3}_{L^1 L^2} \lesssim 2^{-j-k'} 2^{2k'+\frac{3}{2}l} \vn{\bar{Q}_{>j} \phi^1_{k_1}}_{L^2_{t,x}} \big(\sum_{\mathcal C} \vn{P_{\mathcal C} \bar{Q}_{<j} \nabla \phi^2_{k_2}}_{L^2 L^{\infty}}^2 \big)^{\frac{1}{2}} $$
where $ \mathcal C=C_{k'}(0) $. Reversing the roles of $ \phi^1, \phi^2 $, one estimates $\vn{h^{\omega_{1,2}}_4}_{L^1 L^{\infty}} $ in the same way.
Summing in $ \omega_1,\omega_2 $, we obtain \eqref{SmallAngleRemainders} for $ \mathcal{R}^{3} $, $\mathcal{R}^{4} $ by using \eqref{fe:sum:LinfL2} for $ \phi $ and \eqref{fe-L2}, \eqref{fe-L2Linfty} for $ \phi^1, \phi^2 $. We also use $ 2^{-k_{\min}} \lesssim 2^l $.
\subsection{Proof of \eqref{eq:tri-Q} for $ \mathcal{Q}^2 $} Estimating in $ L^1 L^2$ we use H\"older's inequality with $ \vn{\partial_t \phi_k}_{L^{\infty}L^2} \lesssim \vn{\phi_{k}}_{\bar{S}^1_{k}} $ and
\begin{equation} \label{Q2:bilest} \vn{\Delta^{-1} \Box^{-1} \partial_t Q_{j} P_{k'} \partial_{\alpha} ( \bar{Q}_{<j}\phi^1_{k_1} \cdot \partial^{\alpha} \bar{Q}_{<j} \phi^2_{k_2})}_{L^1 L^{\infty}} \lesssim 2^l 2^{\frac{1}{2}(k'-k_1)} \vn{\phi^1_{k_1}}_{\bar{S}^1_{k_1}} \vn{\phi^2_{k_2}}_{\bar{S}^1_{k_2}} \end{equation}
The $ \mathcal{Q}^2 $ part of \eqref{eq:tri-Q} follows by summing this in $ k', j$, where
\begin{equation} \label{Q23:cond}
k_1 =k_2+O(1),\quad k'+C<\min(k_1,k), \quad j <k'+C, \quad l \vcentcolon= \frac{1}{2}(j-k')_{-} \geq -\min(k_1,k)
\end{equation}
To prove \eqref{Q2:bilest}, first note that the product is summed over diametrically opposed boxes $ \mathcal C_1,\mathcal C_2 $ of size $ \simeq 2^{k'} \times (2^{k'+l})^3 $ (Lemma \ref{geom:cone}). Each term in the sum forces a localization $ P_l^{\omega} $ in front of $ Q_{j} P_{k'} $; note that $ 2^{j+k'} P_l^{\omega} Q_j P_{k'}\Box^{-1} $ is disposable.
Now recall, for \eqref{M:form}, the decomposition \eqref{M:form:decom}-\eqref{n0:form:symb}. By Prop. \ref{n0:form:prop}, and the fact that here $ \angle(\mathcal C_1,-\mathcal C_2) \lesssim 2^{l+k'-k_1} $, we have
$$ \vn{\Delta^{-1} \Box^{-1} \partial_t Q_{j} P_{k'} \mathcal N_0 ( \bar{Q}_{<j}\phi^1_{k_1},\bar{Q}_{<j} \phi^2_{k_2})}_{L^1 L^{\infty}} \lesssim 2^{-j-2k'} \times (2^{2l+2k'} ) \times $$
$$ \times \big(\sum_{\mathcal C_1} \vn{P_{\mathcal C_1} \bar{Q}_{<j} \phi^1_{k_1}}_{L^2 L^{\infty}}^2 \big)^{\frac{1}{2}} \big(\sum_{\mathcal C_2} \vn{P_{\mathcal C_2} \bar{Q}_{<j} \phi^2_{k_2}}_{L^2 L^{\infty}}^2 \big)^{\frac{1}{2}}
$$
The same holds true for $ \mathcal M_0 $, since now, by Prop. \ref{M0:form}, we gain $ 2^{2k'-2k_1} \lesssim 2^{2k'+2l} $. Using \eqref{fe-L2Linfty} we obtain \eqref{Q2:bilest} for $ \mathcal N_0, \mathcal M_0 $. We turn to $ \mathcal R_0^{\pm} $ and write
$$ 2^{k_2} \vn{\Delta^{-1} \Box^{-1} \partial_t Q_{j} P_{k'} \big( (\partial_t \mp i \jb{D}) \bar{Q}_{<j}^{\pm} \phi^1_{k_1} \cdot \bar{Q}_{<j}^{\mp} \phi^2_{k_2} \big)}_{L^1 L^{\infty}} \lesssim 2^{k_2} \cdot 2^{-j-2k'} \times $$
$$ \times \big(\sum_{\mathcal C_1} \vn{P_{\mathcal C_1} (\partial_t \mp i \jb{D}) \bar{Q}_{<j}^{\pm} \phi^1_{k_1}}_{L^2 L^{\infty}}^2 \big)^{\frac{1}{2}} \big(\sum_{\mathcal C_2} \vn{P_{\mathcal C_2} \bar{Q}_{<j}^{\mp} \phi^2_{k_2}}_{L^2 L^{\infty}}^2 \big)^{\frac{1}{2}}
$$
Then we use \eqref{fe-sqX}, \eqref{fe-L2Linfty} to obtain $ \mathcal R_0^{\pm} $-part of \eqref{Q2:bilest}. The other parts of $ \mathcal R_0^{\pm} $ follow by reversing the roles of $ \phi^1,\phi^2 $.
\subsection{Proof of \eqref{eq:tri-Q} for $ \mathcal{Q}^3 $} Let $ k_1, k_2, k, k', j, l $ be as in \eqref{Q23:cond} and $ \tilde{k}=k+O(1), \ k_{\min} \vcentcolon= \min(k_1,k_2,k), \ k'=k''+O(1)<k_{\min} -C, \ j =j'+O(1)<k'+C. $ We prove
\begin{equation}
\begin{aligned} \label{Q3:est}
| \langle \frac{\partial Q_{j'} P_{k'}}{\Delta \Box} ( \bar{Q}_{<j'}\phi^1_{k_1} \cdot \partial \bar{Q}_{<j'} \phi^2_{k_2}) , Q_j P_{k''} \partial_{\alpha}(\partial^{\alpha} \bar{Q}_{<j} \phi_k \cdot \bar{Q}_{<j} \psi_{\tilde{k}} )\rangle | \lesssim \\
\lesssim 2^{\frac{1}{2}l} 2^{\frac{1}{2}(k'-k_{\min})} \vn{\phi^1_{k_1}}_{\bar{S}^1_{k_1}} \vn{\phi^2_{k_2}}_{\bar{S}^1_{k_2}} \vn{\phi_{k}}_{\bar{S}^1_{k}} \vn{\psi_{\tilde{k}}}_{N_k^{*}}
\end{aligned}
\end{equation}
which, by duality, implies \eqref{eq:tri-Q} for $ \mathcal{Q}^3 $. Like for $ \mathcal{Q}^2 $, we sum over diametrically opposed boxes $ \pm \mathcal C $ of size $ \simeq 2^{k'} \times (2^{k'+l})^3 $ and introduce $ P_l^{\omega} $ to bound $ \Box^{-1} $.
First, using Prop. \ref{prop:no-nf} and \eqref{fe-LinfL2}, \eqref{fe-L2Linfty}, we estimate
$$ \vn{\Delta^{-1} \Box^{-1} \partial Q_{j'} P_{k'} ( \bar{Q}_{<j'}\phi^1_{k_1} \cdot \partial \bar{Q}_{<j'} \phi^2_{k_2})}_{L^2_{t,x}} \lesssim 2^{-j-2k'} (2^{\frac{1}{2}l} 2^{k'} 2^{-\frac{1}{2}k_1}) \vn{\phi^1_{k_1}}_{\bar{S}^1_{k_1}} \vn{\phi^2_{k_2}}_{\bar{S}^1_{k_2}} $$
For the second product, we recall the decomposition \eqref{M:form}-\eqref{n0:form:symb}.
By Prop. \ref {n0:form:prop} and orthogonality, using the fact that $ \angle(\phi,\psi) \lesssim 2^{l+k'-k} $, we have
$$
\vn{Q_j P_{k''} \mathcal N_0 (\bar{Q}_{<j} \phi_k,\bar{Q}_{<j} \psi_{\tilde{k}} ) }_{L^2_{t,x}} \lesssim 2^{2l+2k'} \big(\sum_{\mathcal C} \vn{P_{\mathcal C} \bar{Q}_{<j} \phi_k}_{L^2 L^{\infty}}^2 \big)^{\frac{1}{2}} \vn{\psi_{\tilde{k}}}_{L^{\infty} L^2}
$$
The same holds true for $ \mathcal M_0 $, since now, by Prop. \ref{M0:form}, we gain $ 2^{2k'-2k} \lesssim 2^{2k'+2l} $.
For $ \mathcal R_0^\pm $ we prove
\begin{align*}
& \vn{Q_j P_{k''} \big( (\partial_t \mp i \jb{D}) \bar{Q}_{<j}^{\pm} \phi_k \cdot \bar{Q}_{<j}^{\mp} \psi_{\tilde{k}} \big) }_{L^2_{t,x}} \lesssim \big(\sum_{\mathcal C} \vn{P_{\mathcal C} (\partial_t \mp i \jb{D}) \bar{Q}_{<j}^{\pm} \phi_k}_{L^2 L^{\infty}}^2 \big)^{\frac{1}{2}} \vn{\psi_{\tilde{k}}}_{L^{\infty} L^2}, \\
& 2^k \vn{Q_j P_{k''} (\bar{Q}_{<j}^{\pm} \phi_k \cdot (\partial_t \pm i \jb{D}) \bar{Q}_{<j}^{\mp} \psi_{\tilde{k}} ) }_{L^2_{t,x}} \lesssim 2^k \vn{\bar{Q}_{<j}^{\pm} \phi_k}_{L^{\infty} L^2} \times \\
& \qquad \qquad \times \big(\sum_{\mathcal C} \vn{P_{\mathcal C} (\partial_t \pm i \jb{D}) \bar{Q}_{<j}^{\mp} \psi_{\tilde{k}}}_{L^2 L^{\infty}}^2 \big)^{\frac{1}{2}} \lesssim 2^k 2^{2k'+\frac{3}{2}l} 2^{\frac{1}{2}j} \vn{\phi_k}_{L^{\infty} L^2} \vn{\psi_{\tilde{k}}}_{\bar{X}^{\frac{1}{2}}_{\infty}}.
\end{align*}
where we have used Bernstein's inequality and orthogonality.
Putting all of the above together, using \eqref{fe-L2Linfty}, \eqref{fe-sqX} and \eqref{fe-LinfL2}, we obtain \eqref{Q3:est}.
\section{The construction and properties of the phase} \label{Constr:phase}
\subsection{Motivation}
We begin by recalling some heuristic considerations that motivated the construction in \cite{RT}, and which extend to the massive case.
Suppose one is interested in solving the equation
\begin{equation} \label{eq:cva}
\Box_m^A \phi=0, \qquad \Box_m^A\vcentcolon= D^{\alpha} D_{\alpha}+I
\end{equation}
where $ D_{\alpha}\phi=(\partial_{\alpha}+iA_{\alpha}) \phi $ and $ \Box A=0 $. After solving \eqref{eq:cva}, one can also obtain solutions to the inhomogeneous equation $ \Box_m^A \phi=F $ by Duhamel's formula. The equation \eqref{eq:cva} enjoys the following gauge invariance. For any real function $ \psi $, replacing
$$ \phi \mapsto e^{i \psi} \phi, \quad A_{\alpha} \mapsto A_{\alpha}-\partial_{\alpha} \psi, \quad D_{\alpha} \mapsto e^{i \psi} D_{\alpha} e^{-i \psi}
$$
we obtain another solution. To make use of this, one expects that, by choosing $ \psi $ appropriately ($ \nabla \psi \approx A $), one could reduce \eqref{eq:cva} to something closer to the free wave equation $ \Box \phi \approx 0 $.
However, this is not in general possible since $ A $ is not a conservative vector field. Instead, one makes the construction microlocally and for each dyadic frequency separately. Taking $ e^{i x \cdot \xi} $ as initial data, considering $ \phi=e^{-i \psi_{\pm}(t,x)} e^{\pm i t \jb{\xi}+i x \cdot \xi} $ we compute
$$ \Box_m^A \phi=2 \left( \pm \jb{\xi} \partial_t \psi_{\pm} - \xi \cdot \nabla \psi_{\pm} + A \cdot \xi \rpr \phi+ \big( -i \Box \psi_{\pm} + (\partial_t \psi_{\pm})^2- \vm{\nabla \psi_{\pm}}^2 - A \cdot \nabla \psi_{\pm} \big) \phi $$
The second bracket is expected to be an error term, while for the first one wants to define $ \psi_{\pm} $ so as to obtain as much cancellation as possible, while avoiding making $ \psi_{\pm} $ too singular. Defining
$$ L_{\pm}=\pm \partial_t + \frac{\xi}{\jb{\xi}} \cdot \nabla_x, \qquad \text{one has} $$
\begin{equation} \label{nullvf} -L_{+} L_{-}=\Box + \Delta_{\omega^{\perp}}+\frac{1}{\jb{\xi}^2} (\omega \cdot \nabla_x) ^2, \quad \omega=\frac{\xi}{\vm{\xi}}. \end{equation}
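For the reader's convenience, \eqref{nullvf} may be checked directly: with the convention $ \Box=\partial_t^2-\Delta_x $ (so that $ e^{\pm i t \jb{\xi}+i x \cdot \xi} $ solve \eqref{eq:cva} with $ A=0 $), the mixed terms in the product cancel and
$$ -L_{+} L_{-}=\partial_t^2- \frac{\vm{\xi}^2}{\jb{\xi}^2} (\omega \cdot \nabla_x)^2, \qquad \Delta_x=\Delta_{\omega^{\perp}}+(\omega \cdot \nabla_x)^2, \qquad 1-\frac{\vm{\xi}^2}{\jb{\xi}^2}=\frac{1}{\jb{\xi}^2}. $$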
We would like to have $ L_{\mp} \psi_{\pm}=A \cdot \xi / \jb{\xi} $; thus, applying $ L_{\pm} $ and neglecting $ \Box $ in \eqref{nullvf} (since $ \Box A=0 $), one obtains, for fixed $ \xi $:
\begin{equation} \psi_{\pm}(t,x)=\frac{-1}{\Delta_{\omega^{\perp}}+\frac{1}{\jb{\xi}^2} (\omega \cdot \nabla_x) ^2} L_{\pm} \left( A(t,x) \cdot \frac{\xi}{\jb{\xi}} \rpr. \label{appr:defn} \end{equation}
Taking general initial data $ \int e^{i x \cdot \xi} h_{\pm}(\xi) \,\mathrm{d} \xi $, using linearity, one obtains the approximate solutions
$$ \phi_{\pm}(t,x)=\int e^{-i \psi_{\pm}(t,x,\xi)} e^{\pm i t \jb{\xi}} e^{i x \cdot \xi} h_{\pm}(\xi) \,\mathrm{d} \xi.
$$
Thus, the renormalization is done through the pseudodifferential operators $ e^{-i \psi_{\pm}}(t,x,D) $.
In what follows, $ \xi $ will be restricted to dyadic frequencies $ \vm{\xi} \simeq 2^k $ or $ \vm{\xi} \lesssim 1 $, while $ A(t,x) $ (and thus $ \psi $ too) will be localized to strictly lower frequencies $ \ll 2^k $. When $ \vm{\xi} \lesssim 1 $, the denominator in \eqref{appr:defn} is essentially $ \Delta_x $. If $ \xi $ is a high frequency, the dominant part of the inverted operator is $ \Delta_{\omega^{\perp}}^{-1} $, and the construction needs to be refined to remove the singularity; see the next subsection for precise definitions.
For more details motivating the construction see \cite[sec. 7,8]{RT}.
\
The construction in \cite{KST} slightly differs from the one in \cite{RT} in that they further localize the exponentials in the $ (t,x) $-frequencies $ \big( e^{-i \psi_{\pm}(t,x,\xi)} \big)_{<k-c} $.
By Taylor expansion one can see that these constructions are essentially equivalent. Indeed, since
$$ e^{i \psi_{<k-c}(t,x,\xi)} =1+i \psi_{<k-c}(t,x,\xi)+O\big(\psi_{<k-c}^2(t,x,\xi) \big) $$
we see that they differ only by higher order terms, which are negligible due to the smallness assumption on $ A $. Here, following \cite{KST}, it will be technically convenient to do this localization.
We denote by
$$ e^{\pm i \psi_{\pm}^k}_{<h} (t,x,D), \qquad e^{\pm i \psi_{\pm}^k}_{<h} (D,s,y) $$
the left and right quantizations of the symbol $ e^{\pm i \psi_{\pm}^k}_{<h}(t,x,\xi) $, where the $ <h $ subscript denotes $ (t,x) $-frequency localization to frequencies $ \leq h-C $, pointwise in $ \xi $. Thus
\begin{equation} \label{averaging} e^{\pm i \psi_{\pm}^k}_{<h}(t,x,\xi)=\int_{\mathbb{R}^{d+1}} e^{\pm i T_{(s,y)} \psi_{\pm}^k(t,x,\xi)} m_h(s,y) \,\mathrm{d} s \,\mathrm{d} y \end{equation}
where $ T_{(s,y)} \psi(t,x,\xi)=\psi(t+s,x+y,\xi) $ and $ m_h=2^{(d+1)h} m(2^h \cdot) $ for a bump function $ m(s,y) $. By averaging arguments such as Lemmas \ref{loczsymb}, \ref{lemma:locz:symb}, estimates for $ e^{-i \psi_{\pm}}(t,x,D) $ will automatically transfer to $ e^{-i \psi_{\pm}}_{<k}(t,x,D) $.
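For concreteness, and consistently with the kernel formulas used in Section \ref{sec:Osc-int} below, the left and right quantizations of a symbol $ a(t,x,\xi) $ act (up to normalizing constants, which we omit) by
$$ a(t,x,D) u(t,x)= \int a(t,x,\xi) e^{i x \cdot \xi} \hat{u}(t,\xi) \,\mathrm{d} \xi, \qquad a(D,s,y) u(s,x)= \iint e^{i (x-y) \cdot \xi} a(\xi,s,y) u(s,y) \,\mathrm{d} y \,\mathrm{d} \xi, $$
so that the right quantization of $ \bar{a} $ is the formal adjoint of the left quantization of $ a $.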
\
\subsection{The construction of the phase}\
\
We recall that $ A $ is real-valued, solves the free wave equation $ \Box A=0 $, and satisfies the Coulomb gauge condition $ \nabla_x \cdot A=0 $.
For $ k=0 $ we define
\begin{equation} \label{defn1} \begin{aligned}
\psi_{\pm}^0(t,x,\xi) & \vcentcolon= \sum_{j<-C} \psi^{0}_{j,\pm}(t,x,\xi), \qquad \text{where} \\
\psi^{0}_{j,\pm}(t,x,\xi) & \vcentcolon= \frac{-L_{\pm}}{\Delta_{\omega^{\perp}}+\frac{1}{\jb{\xi}^2}(\omega \cdot \nabla_x) ^2} \left( P_jA (t,x) \cdot \frac{\xi}{\jb{\xi}} \rpr
\end{aligned} \end{equation}
For $ k\geq 1$ we define
\begin{equation} \label{defn2} \psi^{k} _{\pm}(t,x,\xi) \vcentcolon= \frac{-1}{\Delta_{\omega^{\perp}}+\frac{1}{\jb{\xi}^2} (\omega \cdot \nabla_x) ^2} L_{\pm} \sum_{k_1 <k-c} \left( \Pi^{\omega}_{> \delta(k_1-k)} P_{k_1} A \cdot \frac{\xi}{\jb{\xi}} \rpr \end{equation}
It will be convenient to rescale the angular pieces that define $ \psi^k_{\pm} $ to $ \vm{\xi} \simeq 1 $:
\begin{equation} \label{phase_piece} \psi^{k}_{j,\theta,\pm} (t,x,2^k \xi) \vcentcolon= \frac{-L_{\pm,k}}{\Delta_{\omega^{\perp}}+2^{-2k} \frac{1}{\jb{\xi}_k^2} (\omega \cdot \nabla_x) ^2} \left( \Pi^{\omega}_{\theta} P_{j} A \cdot \omega \frac{\vm{\xi}}{\jb{\xi}_k} \rpr \end{equation}
for $ 2^{\delta(j-k)}< \theta < c $ and $ j<k-c $, where
$$ L_{\pm,k}= \pm \partial_t + \frac{\vm{\xi}}{\jb{\xi}_k} \omega \cdot \nabla_x, \qquad \omega=\frac{\xi}{\vm{\xi}}, \qquad \jb{\xi}_k=\sqrt{2^{-2k}+\vm{\xi}^2}. $$
Note that $ \Pi^{\omega}_{\theta}, \Pi^{\omega}_{>\theta} $ defined in \eqref{sect:proj1}, \eqref{sect:proj2} behave like Littlewood-Paley projections in the space $ \omega^{\perp} $.
\begin{remark} It will be important to keep in mind that $ \psi^k_{\pm} $ is real-valued, since it is defined by applying real and even Fourier multipliers to the real function $ A $.
\end{remark}
\begin{remark}\label{rk:Col:nf} Due to the Coulomb condition $ \nabla_x \cdot A=0 $ the expression in \eqref{defn2} acts like a null form, leading to an angular gain. Indeed, a simple computation shows
$$ \vm{\widehat{\Pi^{\omega}_{\theta} A} (\eta) \cdot \omega} \lesssim \theta \vm{\widehat{\Pi^{\omega}_{\theta} A} (\eta)}, $$
which implies $ \vn{\Pi^{\omega}_{\theta} A \cdot \omega}_{L^2_x} \lesssim \theta \vn{\Pi^{\omega}_{\theta} A}_{L^2_x} $.
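In more detail: the Coulomb condition gives $ \eta \cdot \widehat{A}(\eta)=0 $, while on the support of $ \widehat{\Pi^{\omega}_{\theta} A} $ one has $ \sin \angle(\eta,\omega) \lesssim \theta $; hence
$$ \widehat{\Pi^{\omega}_{\theta} A} (\eta) \cdot \omega=\widehat{\Pi^{\omega}_{\theta} A} (\eta) \cdot \Big( \omega - \frac{\eta}{\vm{\eta}} \big( \tfrac{\eta}{\vm{\eta}} \cdot \omega \big) \Big), \qquad \Big\vert \omega - \frac{\eta}{\vm{\eta}} \big( \tfrac{\eta}{\vm{\eta}} \cdot \omega \big) \Big\vert=\sin \angle(\eta,\omega) \lesssim \theta. $$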
\end{remark}
\
We denote by
$$ \varphi_{\xi}(\eta)=\vm{\eta}^2_{\omega^{\perp}}+ 2^{-2k} \jb{\xi}_k^{-2} (\omega \cdot \eta)^2 $$
the Fourier multiplier of the operator $ \Delta_{\omega^{\perp}}+2^{-2k} \frac{1}{\jb{\xi}_k^2} (\omega \cdot\nabla_x)^2 $. We have the following bounds on $ \varphi_{\xi}(\eta) $:
\begin{lemma} \label{lemmadermultiplier}Let $ k \geq 1 $. For any $ \eta $ and $ \xi=\vm{\xi} \omega $ such that $ \angle (\xi, \eta) \simeq \theta $ and $ \vm{\eta} \simeq 2^j $ we have
\begin{align}
\label{dermult}
\vm{ (\theta \nabla_{\omega})^{\alpha} \frac{1}{\varphi_{\xi}(\eta)}} & \leq \frac{C_{\alpha}}{(2^j \theta)^2+2^{2j-2k}}
\\
\label{dermultiplier}
\vm{ \partial_{\vm{\xi}}^l (\theta \nabla_{\omega})^{\alpha} \frac{1}{\varphi_{\xi}(\eta)}} & \leq \frac{C_{\alpha,l}}{(2^j \theta)^2+2^{2j-2k}} \cdot \frac{2^{-2k}}{ \theta^2+2^{-2k}}, \qquad l\geq 1.
\end{align}
\end{lemma}
\begin{remark} \label{rkdermultiplier} Suppose we want to estimate $\partial_{\vm{\xi}}^{l} ( \theta \partial_{\omega})^{\alpha} \psi^{k}_{j,\theta,\pm}(t_0, \cdot,2^k \xi) $ in $ L^2_x $.
By lemma \ref{lemmadermultiplier} and the Coulomb condition (remark \ref{rk:Col:nf}), the following multiplier applied to $ A(t_0) $
$$ \partial_{\vm{\xi}}^{l} ( \theta \partial_{\omega})^{\alpha} \frac{-L_{\pm,k}}{\Delta_{\omega^{\perp}}+2^{-2k} \frac{1}{\jb{\xi}_k^2} (\omega \cdot \nabla_x) ^2} \left( \Pi^{\omega}_{\theta} P_{j} ( \ ) \cdot \omega \frac{\vm{\xi}}{\jb{\xi}_k} \rpr $$
may be replaced by
$$ \frac{2^{-j} \theta}{\theta^2+2^{-2k}} \Pi^{\omega,\alpha}_{\theta} \tilde{P}_{j} \quad (\text{if}\ l=0), \qquad \frac{2^{-2k}2^{-j} \theta}{(\theta^2+2^{-2k})^2} \Pi^{\omega,\alpha,l}_{\theta} \tilde{P}_{j} \quad (\text{if}\ l\geq 1), $$
for the purpose of obtaining an upper bound for the $ L^2_x $ norm, where $ \Pi^{\omega,\alpha,l}_{\theta} $ and $ \tilde{P}_{j} $ obey the same type of localization properties and symbol estimates as $ \Pi^{\omega}_{\theta} $ and $ P_{j} $.
\end{remark}
\begin{proof}
For $ \alpha=0, \ l=0 $ the bound is clear since
\begin{equation} \label{indbd0} \varphi_{\xi}(\eta) \simeq (2^j \theta)^2+2^{2j-2k} . \end{equation}
For $ N\geq 1$ we prove the lemma by induction on $ N=l+\vm{\alpha} $. We focus on the case $ l\geq 1 $ since the proof of \eqref{dermult} is entirely similar. Suppose the claim holds for all $ l', \alpha' $ such that $ 0 \leq l'+\vm{\alpha'} \leq N-1 $. Applying the product rule to $ 1=\varphi_{\xi}(\eta) \frac{1}{\varphi_{\xi}(\eta)} $ we obtain
$$ \varphi_{\xi}(\eta) \cdot \partial_{\vm{\xi}}^l (\theta \nabla_{\omega})^{\alpha} \frac{1}{\varphi_{\xi}(\eta)}=\sum C^{\alpha',\beta'}_{\alpha'',\beta''} \cdot \partial_{\vm{\xi}}^{l'} (\theta \nabla_{\omega})^{\alpha'} \varphi_{\xi}(\eta)\cdot \partial_{\vm{\xi}}^{l''} (\theta \nabla_{\omega})^{\alpha''} \frac{1}{\varphi_{\xi}(\eta)} $$
where we sum over $ l'+l''=l, \ \alpha'+\alpha''=\alpha, \ l''+\vm{\alpha''} \leq N-1$. Given the induction hypothesis and \eqref{indbd0}, for the terms in the sum it suffices to show
\begin{align}
\label{indbd1} \vm{\partial_{\vm{\xi}}^{l'} (\theta \nabla_{\omega})^{\alpha'} \varphi_{\xi}(\eta)} & \lesssim 2^{2j-2k} \qquad \hbox{for} \ l' \geq 1, \\
\label{indbd2} \vm{\partial_{\vm{\xi}}^{l'} (\theta \nabla_{\omega})^{\alpha'} \varphi_{\xi}(\eta)} & \lesssim (2^j \theta)^2 \qquad \hbox{for } \ l'=0,\ \vm{\alpha'} \geq 1\ (l''=l\geq 1)
\end{align}
We write
$$ \varphi_{\xi}(\eta)=C_{\eta}- (\omega \cdot \eta)^2(1-2^{-2k} \jb{\xi}_k^{-2} ), \qquad C_{\eta}=\vm{\eta}^2. $$
Since $ C_{\eta} $ does not depend on $ \xi $ and $ \vm{ \omega \cdot \eta} \lesssim 2^j $, for $ l'\geq 1 $ we obtain \eqref{indbd1}.
Now suppose $ l'=0 $ and thus $ \vm{\alpha'} \geq 1 $. Observe that $ \partial_{\omega} (\omega \cdot \eta)^2 \simeq 2^{2j} \theta $ and thus for all $ \vm{\alpha'} \geq 1 $ we have $ (\theta \nabla_{\omega})^{\alpha'} (\omega \cdot \eta)^2 \lesssim 2^{2j} \theta^2 $, which implies \eqref{indbd2}.
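For the last observation: $ \nabla_{\omega} (\omega \cdot \eta)=\eta-\omega(\omega \cdot \eta) $ is the component of $ \eta $ orthogonal to $ \omega $, so that
$$ \vm{\nabla_{\omega} (\omega \cdot \eta)}=\vm{\eta} \sin \angle(\xi,\eta) \simeq 2^j \theta, $$
while each further $ \nabla_{\omega} $ derivative costs at most a factor $ \vm{\eta} \simeq 2^j $; one weighted derivative thus contributes $ \theta \cdot 2^j \cdot 2^j \theta=2^{2j} \theta^2 $, and further ones do not worsen the bound.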
\end{proof}
The following proposition will be used in stationary phase arguments.
\begin{proposition} \label{phasederProp} For $ k \geq 0 $, $ \vm{\xi} \simeq 1 $, denoting $ T= \vm{t-s}+\vm{x-y} $ we have:
\begin{align}
\label{phasediff} \vm{ \psi_{\pm}^k(t,x,2^k \xi) - \psi_{\pm}^k(s,y, 2^k \xi) } & \lesssim \varepsilon \log(1+ 2^k T ) \\
\label{phaseder} \vm{ \partial_{\omega}^{\alpha} \left( \psi_{\pm}^k(t,x,2^k \xi) - \psi_{\pm}^k(s,y, 2^k \xi) \rpr } & \lesssim \varepsilon (1+ 2^k T )^{ (\vm{\alpha}- \frac{1}{2}) \delta} , \quad 1\leq \vm{\alpha} \leq \delta^{-1} \\
\label{phaseder2}
\vm{ \partial_{\vm{\xi}}^l \partial_{\omega}^{\alpha} \left( \psi_{\pm}^k(t,x,2^k \xi) - \psi_{\pm}^k(s,y, 2^k \xi) \rpr } & \lesssim \varepsilon 2^{-2k} (1+ 2^k T )^{ (\vm{\alpha}+ \frac{3}{2}) \delta}, \quad l \geq 1, (\vm{\alpha}+ \frac{3}{2}) \delta <1
\end{align}
\end{proposition}
\begin{proof}
Using $ \vn{\vm{\nabla}^{\sigma} A }_{L^{\infty}L^2} \lesssim \varepsilon $, Bernstein's inequality $ P_j \Pi_{\theta}^{\omega} : L^2_x \to ( 2^{dj} \theta^{d-1})^{\frac 1 2} L^{\infty}_x $ and the null form (Remark \ref{rk:Col:nf}), for $ k \geq 1 $ we obtain
$$ \vm{ \psi^{k}_{j,\theta,\pm} (t,x,2^k \xi) } \lesssim \varepsilon (2^{dj} \theta^{d-1})^{\frac 1 2} \theta \frac{2^j 2^{-\sigma j}}{(2^j \theta)^2+2^{2j-2k}} \lesssim \varepsilon \theta^{\frac{1}{2}}$$
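For bookkeeping purposes: the powers of $ 2^j $ cancel precisely because $ \sigma=\frac{d}{2}-1 $ (namely $ \frac{d}{2}j+j-\sigma j-2j=0 $), while the resulting power of $ \theta $ is $ \frac{d-1}{2}+1-2=\frac{d-3}{2} \geq \frac{1}{2} $ for $ d \geq 4 $.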
Thus, for both $ k=0 $ and $ k \geq 1 $, one has
$$ \vm{\psi_{j,\pm}^k(t,x,2^k \xi)} \lesssim \varepsilon, \qquad \vm{\nabla_{x,t} \psi_{j,\pm}^{k} (t,x,2^k \xi) } \lesssim 2^{j} \varepsilon $$
We sum the last bound for $ j \leq j_0 $ and the previous one for $ j_0 <j \leq k-c $:
$$ \vm{ \psi_{\pm}^k(t,x,2^k \xi) - \psi_{\pm}^k(s,y, 2^k \xi) } \lesssim \varepsilon \left( 2^{j_0} T + (k-j_0) \rpr $$
Choosing $ k-j_0=\log_2(2^k T)-O(1) $ we obtain \eqref{phasediff}.
For the proof of \eqref{phaseder} and \eqref{phaseder2} we use Remark \ref{rkdermultiplier}. Since their proofs are similar we only write the details for \eqref{phaseder2}. First suppose $ k \geq 1 $.
From Bernstein's inequality and Remark \ref{rkdermultiplier} we obtain
\begin{align*} & \vm{ \partial_{\vm{\xi}}^l \partial_{\omega}^{\alpha} \psi^{k}_{j,\theta,\pm} (t,x,\xi) } \lesssim (2^{dj} \theta^{d-1})^{\frac 1 2} \theta \frac{1}{(2^j \theta)^2+2^{2j-2k}} \frac{2^{-2k}}{\theta^2} \theta^{-\vm{\alpha}} \lesssim 2^{-2k} \varepsilon \theta^{-\frac{3}{2}-\vm{\alpha}} \\
& \vm{\nabla_{x,t} \partial_{\vm{\xi}}^l \partial_{\omega}^{\alpha} \psi^{k}_{j,\theta,\pm} (t,x,\xi) } \lesssim 2^{j} 2^{-2k} \varepsilon \theta^{-\frac{3}{2}-\vm{\alpha}}.
\end{align*}
We sum over $ 2^{\delta(j-k)}< \theta < c $. Summing one bound for $ j \leq j_0 $ and the other for $ j_0 <j \leq k-c $ we obtain
$$ \vm{ \partial_{\vm{\xi}}^l \partial_{\omega}^{\alpha} \left( \psi_{\pm}^k(t,x,2^k \xi) - \psi_{\pm}^k(s,y, 2^k \xi) \rpr } \lesssim \varepsilon 2^{-2k} \left( 2^{j_0} T 2^{-(\frac{3}{2}+\vm{\alpha})\delta(j_0-k)}+2^{-(\frac{3}{2}+\vm{\alpha})\delta(j_0-k)} \rpr $$
Choosing $ 2^{-j_0} \sim T $, we obtain \eqref{phaseder2}. When $ k=0 $ the same numerology, without the $ \theta $ factors, again gives \eqref{phaseder2}.
\end{proof}
\subsection{Decomposable estimates} \
The decomposable calculus was introduced in \cite{RT}. The formulation we use here is similar to that of \cite{KST}, modified to allow for non-homogeneous symbols.
For $ k=0 $ we define
$$ \vn{F}_{D_0(L^q L^r)} = \sum_{\alpha \leq 10d} \sup_{\vm{\xi} \leq C} \vn{ \partial_{\xi}^{\alpha} F(\cdot,\cdot,\xi)}_{L^q L^r}. $$
For $ k \geq 1 $ we define
\begin{equation} \label{decomposable_def} \vn{F^{\theta}}_{D_k^{\theta} (L^q L^r)}^2 = \sum_{\phi} \sum_{\alpha_1, \vm{\alpha} \leq 10d} \sup_{\vm{\xi} \sim 2^k} \vn{\chi^{\theta}_{\phi}(\xi) (2^k \partial_{\vm{\xi}})^{\alpha_1} (2^k \theta \nabla_{\omega})^{\alpha} F^{\theta}(\cdot,\cdot,\xi)}_{L^q L^r}^2, \end{equation}
where $ \chi^{\theta}_{\phi}(\xi) $ denote cutoff functions to sectors centered at $ {\phi} $ of angle $ \lesssim \theta $ and $ {\phi} $ is summed over a finitely overlapping collection of such sectors.
The symbol $ F(t,x,\xi) $ is in $ D_k(L^q L^r) $ if we can decompose $ F=\sum F^{\theta} $ such that
$$ \sum_{\theta} \vn{F^{\theta}}_{D_k^{\theta} (L^q L^r)} < \infty $$
and we define $ \vn{F}_{D_k(L^q L^r)} $ to be the infimum of such sums.
\begin{lemma} \label{decomp_lemma}
Suppose that for $ i=1,\dots,N $ the symbols $ a(t,x,\xi), F_i(t,x,\xi) $ satisfy $ \vn{F_i}_{D_k(L^{q_i} L^{r_i})} \lesssim 1$ and
$$ \sup_{t \in \mathbb{R}} \vn{a(t,x,D)}_{L^2_x \to L^2_x} \lesssim 1. $$
The symbol of $ T $ is defined to be
$$ a(t,x,\xi) \prod_{i=1}^N F_i(t,x,\xi) . $$
Then, whenever $ q, \tilde{q}, r, r_i \in [1,\infty], \ q_i \geq 2 $ are such that
$$ \frac{1}{q}=\frac{1}{\tilde{q}}+ \sum_{i=1}^N \frac{1}{q_i}, \qquad \frac{1}{r}=\frac{1}{2}+\sum_{i=1}^N \frac{1}{r_i}, $$
we have
$$ T(t,x,D) \bar{P}_k : L^{\tilde{q}} L^2 \to L^{q} L^{r}. $$
By duality, when $ r=2, \ r_i=\infty $, the same mapping holds for $ T(D,s,y) $.
\end{lemma}
\begin{proof}[Proof for $ k=0$]
For each $ i=1,\dots,N $ we decompose into Fourier series
$$ F_i(t,x,\xi)=\sum_{j \in \mathbb{Z}^d} d_{i,j} (t,x) e_j(\xi), \qquad e_j(\xi)=e^{i j \cdot \xi} $$
on a box $ [-C/2,C/2]^d $.
From the Fourier inversion formula and integration by parts we obtain
$$ \jb{j}^{M} \vn{d_{i,j}}_{L^{q_i} L^{r_i}} \lesssim \vn{F_i}_{D_0(L^{q_i} L^{r_i})} \lesssim 1 $$
for some large $ M $. Then
$$ Tu(t,x)= \sum_{j_1, \dots j_N \in \mathbb{Z}^d} \prod_{i=1}^N d_{i,j_i}(t,x) \int e^{i x \cdot \xi} e^{i (j_1+\dots +j_N) \cdot \xi} a(t,x,\xi) \hat{u} (t,\xi) \,\mathrm{d} \xi $$
From H\" older's inequality we obtain
\begin{align*}
\vn{Tu}_{L^q L^r} & \leq \sum_{j_1, \dots j_N \in \mathbb{Z}^d} \prod_{i=1}^N \vn{d_{i,j_i}}_{L^{q_i} L^{r_i}} \vn{a(t,x,D) e^{i (j_1+\dots +j_N)D} u}_{L^{\tilde{q}} L^2} \lesssim \\
& \lesssim \sum_{j_1, \dots j_N \in \mathbb{Z}^d} \prod_{i=1}^N \jb{j_i}^{-M} \vn{u}_{L^{\tilde{q}} L^2} \lesssim \vn{u}_{L^{\tilde{q}} L^2},
\end{align*}
which proves the claim.
\end{proof}
\begin{proof}[Proof for $ k \geq1 $]
We present the proof for the case $ N=2 $; it is straightforward to check that the argument works for any $ N $. From the decompositions $ F_i=\sum_{\theta_i} F^{\theta_i}_i $ and the definition of $ D_k(L^q L^r) $ we see that it suffices to restrict attention to the operator $ T $ with symbol
$$ a(t,x,\xi) F_1(t,x,\xi) F_2(t,x,\xi) $$
in the case $ F_i=F^{\theta_i}_i, \ i=1,2 $ and to prove
\begin{equation} \label{symbmapping}
\vn{T(t,x,D) \tilde{P}_k }_{ L^{\tilde{q}} L^2 \to L^{q} L^{r}} \lesssim \vn{F_1}_{D^{\theta_1}_k(L^{q_1} L^{r_1})} \vn{F_2}_{D^{\theta_2}_k(L^{q_2} L^{r_2})}. \end{equation}
For $ i=1,2 $ we decompose
$$ F_i=\sum_{T_i} F_i^{T_i}, \qquad F_i^{T_i} (t,x,\xi) \vcentcolon= \varphi_{\theta_i}^{T_i}(\xi) F_i(t,x,\xi) $$
where $ \varphi_{\theta_i}^{T_i}(\xi) $ are cutoff functions adapted to sectors $ T_i $ of angle $ \theta_i $, and $ T_i $ is summed over a finitely overlapping collection of such sectors. We also consider the bump functions $ \chi_{T_i}(\xi) $ which equal $ 1 $ on the supports of $ \varphi_{\theta_i}^{T_i}(\xi) $ and are adapted to some enlargements of the sectors $ T_i $. We expand each component as a Fourier series
$$ F_i^{T_i} (t,x,\xi)= \sum_{j \in \mathbb{Z}^d} d_{i,j}^{T_i} (t,x) e^{T_i}_{\theta_i,j}(\xi), \qquad e^{T_i}_{\theta_i,j}(\xi)=\exp ( i 2^{-k} j \cdot (\vm{\xi}, \tilde{\omega} \theta_i^{-1})/C ) $$
on the tube $ T_i=\{ \vm{\xi}\sim 2^k, \angle(\xi, \phi_i )\lesssim \theta_i \} $ where $ \xi=\vm{\xi} \omega $ in polar coordinates so that $ \omega $ is parametrized by $ \tilde{\omega} \in \mathbb{R}^{d-1} $ such that $ \vm{\tilde{\omega}} \lesssim \theta_i $.
Integrating by parts in the Fourier inversion formula for $ d_{i,j}^{T_i} (t,x) $ we obtain
$$ \jb{j}^M \vn{d_{i,j}^{T_i} (t)}_{L^{r_i}} \lesssim \sum_{\alpha_1, \vm{\alpha} \leq 10d} \sup_{\vm{\xi} \sim 2^k} \vn{ (2^k \partial_{\vm{\xi}})^{\alpha_1} (2^k \theta_i \nabla_{\omega})^{\alpha} F_i^{T_i}(t,\cdot,\xi)}_{L^{r_i}} $$
and since $ q_i \geq 2 $ we have
\begin{equation} \label{decaydec}
\vn{d_{i,j}^{T_i}}_{L^{q_i}_t l^2_{T_i} L^{r_i}} \lesssim \jb{j}^{-M} \vn{F_i}_{D^{\theta_i}_k(L^{q_i} L^{r_i})} \end{equation}
Since for $ \xi \in T_i $ we have $ F_i^{T_i}=F_i^{T_i} \chi_{T_i}(\xi) $, we can write
$$ T u (t,x)=\sum_{T_1,T_2} \sum_{j_1,j_2} d_{1,j_1}^{T_1} (t,x) d_{2,j_2}^{T_2} (t,x) \int e^{i x \cdot \xi} a(t,x,\xi) e^{T_1}_{\theta_1,j_1} e^{T_2}_{\theta_2,j_2} \chi_{T_1} \chi_{T_2} \tilde{\chi}(\xi/2^k) \hat{u} (t,\xi) \,\mathrm{d} \xi. $$
Thus
\begin{align*}
\vn{Tu(t)}_{L^r_x} & \lesssim \sum_{j_1,j_2 \in \mathbb{Z}^d} \sum_{T_1,T_2} \vn{d_{1,j_1}^{T_1} (t)}_{L_x^{r_1}} \vn{d_{2,j_2}^{T_2} (t)}_{L_x^{r_2}} \vn{\chi_{T_1} \chi_{T_2} \hat{u} (t)}_{L^2} \ \\
& \lesssim \sum_{j_1,j_2 \in \mathbb{Z}^d} \vn{d_{1,j_1}^{T_1} (t)}_{l^2_{T_1} L_x^{r_1}} \sum_{T_2} \vn{d_{2,j_2}^{T_2} (t)}_{L_x^{r_2}} \vn{\chi_{T_2} \hat{u} (t)}_{L^2} \ \\
& \lesssim \sum_{j_1,j_2 \in \mathbb{Z}^d} \vn{d_{1,j_1}^{T_1} (t)}_{l^2_{T_1} L_x^{r_1}} \vn{d_{2,j_2}^{T_2} (t)}_{l^2_{T_2} L_x^{r_2 }} \vn{\hat{u} (t)}_{L^2}
\end{align*}
Applying H\" older's inequality and \eqref{decaydec} we obtain
$$ \vn{Tu}_{L^q L^r} \lesssim \vn{u}_{L^{\tilde{q}} L^2} \sum_{j_1,j_2 \in \mathbb{Z}^d} \jb{j_1}^{-M}\jb{j_2}^{-M} \vn{F_1}_{D^{\theta_1}_k(L^{q_1} L^{r_1})} \vn{F_2}_{D^{\theta_2}_k(L^{q_2} L^{r_2})} $$
and the sums in $ j_1, j_2 $ converge, giving \eqref{symbmapping}.
\end{proof}
\subsection{Decomposable estimates for the phase} \
\
Now we apply the decomposable calculus to the phases $ \psi^k_{\pm}(t,x,\xi) $.
\begin{lemma} Let $ q \geq 2, \ \frac{2}{q}+\frac{d-1}{r} \leq \frac{d-1}{2} $. For $ k \geq 1$ we have
\begin{align}
\label{decomp1} \vn{(\psi^{k}_{j,\theta,\pm},2^{-j}\nabla_{t,x} \psi^{k}_{j,\theta,\pm} ) }_{D_k^{\theta}(L^q L^r)} & \lesssim \varepsilon 2^{-(\frac{1}{q}+\frac{d}{r})j} \frac{\theta^{\frac{d+1}{2}-(\frac{2}{q}+\frac{d-1}{r})}}{\theta^2+2^{-2k}} . \\
\label{decomp6} \vn{P_{\theta}^{\omega} A_{k_1} (t,x) \cdot \omega}_{D_k^{\theta}(L^2L^{\infty})} & \lesssim \theta^{\frac{3}{2}} 2^{\frac{k_1}{2}} \varepsilon.
\end{align}
For $ k=0 $ we have
\begin{equation} \label{decomp0}
\vn{(\psi^{0}_{j,\pm},2^{-j}\nabla_{t,x} \psi^{0}_{j,\pm} ) }_{D_0(L^q L^r)} \lesssim \varepsilon 2^{-(\frac{1}{q}+\frac{d}{r})j}.
\end{equation}
\end{lemma}
\begin{proof} Suppose $ k \geq 1 $.
Without loss of generality, we will focus on $ \psi^{k}_{j,\theta,\pm} $, since exactly the same estimates hold for $ 2^{-j}\nabla_{t,x} \psi^{k}_{j,\theta,\pm} $. In light of the definition \eqref{phase_piece}, for any $ \xi=\vm{\xi} \omega $ and any $ \alpha_1, \vm{\alpha} \leq 10d $, the derivatives $ \partial_{\vm{\xi}}^{\alpha_1} ( \theta \partial_{\omega})^{\alpha}\psi^{k}_{j,\theta,\pm} $ are localized to a sector of angle $ O(\theta) $ in the $ (t,x) $-frequencies and they solve the free wave equation
$$ \Box_{t,x} \partial_{\vm{\xi}}^{\alpha_1} ( \theta \partial_{\omega})^{\alpha} \psi^{k}_{j,\theta,\pm}(t,x,2^k \xi) =0 $$
Let $ r_0 $ be defined by $ \frac{2}{q}+\frac{d-1}{r_0}=\frac{d-1}{2} $. The Bernstein and Strichartz inequalities imply
$$ \vn{\partial_{\vm{\xi}}^{\alpha_1} ( \theta \partial_{\omega})^{\alpha} \psi^{k}_{j,\theta,\pm}(\cdot,2^k \xi)}_{L^q L^r} \lesssim \theta^{(d-1)(\frac{1}{r_0}-\frac{1}{r})} 2^{d(\frac{1}{r_0}-\frac{1}{r})j} \vn{\partial_{\vm{\xi}}^{\alpha_1} ( \theta \partial_{\omega})^{\alpha} \psi^{k}_{j,\theta,\pm}(\cdot,2^k \xi)}_{L^q L^{r_0}} $$
\begin{equation} \label{Ltwo_estim} \lesssim 2^{(1-\frac{1}{q}-\frac{d}{r})j} \theta^{(d-1)(\frac{1}{r_0}-\frac{1}{r})} \vn{\partial_{\vm{\xi}}^{\alpha_1} ( \theta \partial_{\omega})^{\alpha} \psi^{k}_{j,\theta,\pm}(2^k \xi)[0]}_{\dot{H}^{\sigma} \times \dot{H}^{\sigma-1} } \end{equation}
By Remark \ref{rkdermultiplier} (which uses the null form) we deduce
$$ \eqref{Ltwo_estim} \lesssim 2^{-(\frac{1}{q}+\frac{d}{r})j} \frac{\theta^{\frac{d+1}{2}-(\frac{2}{q}+\frac{d-1}{r})}}{\theta^2+2^{-2k}} \vn{\Pi^{\omega,\alpha}_{\theta} \tilde{P}_{j} A[0]}_{\dot{H}^{\sigma} \times \dot{H}^{\sigma-1}}. $$
By putting together this estimate, the definition \eqref{decomposable_def}, the finite overlap of the sectors, and the orthogonality property, we obtain
$$ \vn{\psi^{k}_{j,\theta,\pm}}_{D_k^{\theta}(L^q L^r)} \lesssim 2^{-(\frac{1}{q}+\frac{d}{r})j} \frac{\theta^{\frac{d+1}{2}-(\frac{2}{q}+\frac{d-1}{r})}}{\theta^2+2^{-2k}} \vn{\tilde{P}_{j} A[0]}_{\dot{H}^{\sigma} \times \dot{H}^{\sigma-1}}, $$
which proves the claim, since $ \vn{\tilde{P}_{j} A[0]}_{\dot{H}^{\sigma} \times \dot{H}^{\sigma-1}} \lesssim \varepsilon $.
The same argument applies to \eqref{decomp6}. The only difference is that one uses the angular-localized Strichartz inequality $ \vn{P_{\theta}^{\omega} A_{j}}_{L^2 L^{\infty}} \lesssim \theta^{\frac{d-3}{2}} 2^{\frac{j}{2}} \vn{A_{j}}_{\dot{H}^{\sigma} \times \dot{H}^{\sigma-1}} $, which holds for free waves, in addition to the null form which gives an extra $ \theta $.
When $ k=0 $ the same argument goes through without angular projections and with no factors of $ \theta $ in \eqref{Ltwo_estim}.
\end{proof}
\begin{remark} As a consequence of the above we also obtain
\begin{equation} \label{decomp5} \vn{\nabla_{\xi} (\Pi^{\omega}_{\leq \delta(k_1-k)} A_{k_1}(t,x) \cdot \xi) }_{D_k^1 (L^2 L^{\infty})} \lesssim 2^{-10d \delta (k_1-k)} 2^{\frac{k_1}{2}} \varepsilon.
\end{equation}
\end{remark}
\begin{corollary} For $ k \geq 0 $ we have
\begin{equation} \label{decomp2} \vn{(\psi^{k}_{j,\pm}, 2^{-j} \nabla_{t,x} \psi^{k}_{j,\pm} ) }_{D_k(L^q L^{\infty})} \lesssim 2^{-\frac{j}{q}} \varepsilon, \qquad q>4
\end{equation}
\begin{equation} \label{decomp3}\vn{\nabla_{t,x} \psi^{k}_{\pm}}_{D_k(L^2 L^{\infty})} \lesssim 2^{\frac{k}{2}} \varepsilon
\end{equation}
\begin{equation} \label{decomp4} \vn{\nabla_{t,x} \psi^{k}_{\pm}}_{D_k(L^{\infty} L^{\infty})} \lesssim 2^{k} \varepsilon.
\end{equation}
\end{corollary}
\begin{proof} The bound \eqref{decomp4} follows by summing over \eqref{decomp2}. For $ k=0 $, \eqref{decomp2} and \eqref{decomp3} follow from \eqref{decomp0}.
Now assume $ k \geq 1$. The condition $ q>4 $ makes the power of $ \theta $ positive in \eqref{decomp1} for any $ d \geq 4 $. Thus \eqref{decomp2} follows by summing in $ \theta $. For \eqref{decomp3}, summing in $ \theta $ gives the factor $ 2^{\frac{\delta}{2}(k-j)} $, which is overcome by the extra factor of $ 2^{j} $ when summing in $ j<k $.
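Explicitly, for \eqref{decomp2}: with $ r=\infty $ the $ \theta $-factor in \eqref{decomp1} obeys
$$ \frac{\theta^{\frac{d+1}{2}-\frac{2}{q}}}{\theta^2+2^{-2k}} \leq \theta^{\frac{d-3}{2}-\frac{2}{q}}, \qquad \frac{d-3}{2}-\frac{2}{q} \geq \frac{1}{2}-\frac{2}{q}>0 \quad (d \geq 4, \ q>4), $$
so the dyadic sum over $ \theta<c $ converges geometrically.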
\end{proof}
\subsection{Further properties}
\begin{lemma}
Let $a(x,\xi)$ and $b(x,\xi)$ be smooth symbols. Then one has
\begin{equation}
\vn{a^rb^r- (ab)^r}_{L^r(L^2)
\to L^q(L^2)} \lesssim \vn{(\nabla_x a)^r}_{L^r(L^2)\to L^{p_1}(L^2)}
\vn{\nabla_\xi b}_{D^1_k L^{p_2}(L^\infty)} \label{spec_decomp1}
\end{equation}
\begin{equation}
\vn{a^l b^l- (ab)^l}_{L^r(L^2)
\to L^q(L^2)} \lesssim \vn{\nabla_\xi a}_{D_k^1L^{p_2}(L^\infty)} \vn{(\nabla_x b)^l}_{L^r(L^2)\to L^{p_1}(L^2)} \label{spec_decomp3}
\end{equation}
where $q^{-1}=\sum p_i^{-1}$. Furthermore, if $b=b(\xi)$ is a smooth multiplier supported on $ \{ \jb{\xi} \simeq 2^k \} $, then for any two translation
invariant spaces $X,Y$ one has:
\begin{equation}
\vn{a^rb^r- (ab)^r}_{X\to Y} \lesssim 2^{-k}
\vn{(\nabla_x a)^r}_{X\to Y}
\ . \label{spec_decomp2}
\end{equation}
\end{lemma}
\begin{proof}
See \cite[Lemma 7.2]{KST}.
\end{proof}
\begin{lemma} \label{loczsymb} Let $ X, Y $ be translation-invariant spaces of functions on $ \mathbb{R}^{d+1} $ and consider the symbol $ a(t,x,\xi) $ such that
$$ a(t,x,D) : X \to Y. $$
Then the $ (t,x)$-frequency localized symbol $ a_{<h} (t,x,\xi) $ also satisfies
$$ a_{<h} (t,x,D): X \to Y .$$
\end{lemma}
\begin{proof} We write
$$ a_{<h} (t,x,D)u=\int m_h(s,y) T_{(s,y)} a(t,x,D) T_{-(s,y)} u \,\mathrm{d} s \,\mathrm{d} y $$
where $ m_h $ is as in \eqref{averaging} and $ T_{(s,y)} $ denotes translation by $ (s,y) $. Now the claim follows from Minkowski's inequality and the $ T_{\pm(s,y)} $-invariance of $ X, Y $.
\end{proof}
\section{Oscillatory integral estimates} \label{sec:Osc-int} \
In this section we prove estimates for the oscillatory integrals that arise as kernels of the $ TT^* $ operators used in the proofs of the mappings \eqref{renbd3}, \eqref{renbd}, \eqref{renbdt}, \eqref{renbd2}. These bounds are based on stationary and non-stationary phase arguments (see Prop. \ref{nonstationary} and \ref{stationary}).
\subsection{Rapid decay away from the cone}
We consider
\begin{equation} \label{kernel:a} K_k^a(t,x,s,y)= \int e^{- i \psi_{\pm}^k(t,x,\xi)} a\big( \frac{\xi}{2^k} \big) e^{\pm i (t-s) \jb{\xi}+i (x-y) \xi} e^{+ i \psi_{\pm}^k(s,y,\xi)} \,\mathrm{d} \xi \end{equation}
where $ a(\xi) $ is a bump function supported on $ \{ \vm{\xi} \simeq 1 \} $ for $ k\geq 1 $ and on $ \{ \vm{\xi} \lesssim 1 \} $ for $ k=0 $.
\begin{proposition} \label{stat_phase_a}
For $ k \geq 0 $ and any $ N \geq 0 $, we have
\begin{equation} \vm{K_k^a(t,x,s,y)} \lesssim 2^{dk} \frac{1}{\jb{2^k ( \vm{t-s}-\vm{x-y})}^N} \end{equation}
whenever $ 2^k \vm{ \vm{t-s}-\vm{x-y}} \gg 2^{-k} \vm{t-s}$.
Moreover, the implicit constant is bounded when $ a(\xi) $ is bounded in $ C^N(\vm{\xi}\lesssim 1) $.
\end{proposition}
\begin{proof} We first assume $ k\geq 1 $. Suppose without loss of generality that $ t-s \geq 0 $, and $ \pm=+ $. Denoting $ \lambda=\vm{ \vm{t-s}-\vm{x-y}} $ it suffices to consider $ 2^k \lambda \gg 1 $. By a change of variables we write
$$ K_k^a= 2^{dk} \int_{\vm{\xi} \simeq 1} e^{i 2^k (\phi_k+\varphi_k)(t,x,s,y,\xi)} a(\xi) \,\mathrm{d} \xi $$
where
$$ \phi_k(t,x,s,y,\xi)=(t-s) \jb{\xi}_k+(x-y) \cdot \xi $$
$$ \varphi_k(t,x,s,y,\xi)=-(\psi_{\pm}^k(t,x,2^k \xi)-\psi_{\pm}^k(s,y,2^k \xi))/2^k. $$
By Prop. \ref{phasederProp}, noting that $ T=\vm{t-s}+\vm{x-y} \lesssim 2^{2k} \lambda $, we have
$$ \vm{ \nabla \varphi_k} \lesssim \varepsilon (2^k T)^{3\delta}/2^k \lesssim \varepsilon \lambda $$
Furthermore,
$$ \nabla \phi_k=(t-s) \frac{\xi}{\jb{\xi}_k}+(x-y) $$
If $ \vm{x-y} \geq 2 \vm{t-s} $ or $ \vm{t-s} \geq 2 \vm{x-y} $, by non-stationary phase, we easily estimate $ \vm{ K_k^a}\lesssim 2^{dk} \jb{2^k T}^{-N} $.
Now we assume $ \vm{t-s} \simeq \vm{x-y} \gg 2^{-k} $. On the region $ \angle(-\xi,x-y)> 10^{-3} $ we have $ \vm{ \nabla \phi_k} \gtrsim \vm{t-s} $, thus by a smooth cutoff and non-stationary phase, that component of the integral is $ \lesssim 2^{dk} \jb{2^k T}^{-N} $. Now we can assume $ a(\xi) $ is supported on the region $ \angle(-\xi,x-y) \leq 10^{-2} $.
If $ \vm{ \nabla \phi_k} \geq \lambda/4 $ on that region, we get the bound $ 2^{dk} \jb{2^k \lambda }^{-N} $. We claim this is always the case. Suppose, for contradiction, that there exists $ \xi $ such that $ \vm{ \nabla \phi_k} \leq \lambda/4 $. Writing $ \xi=(\xi_1,\xi') $ in coordinates where $ \xi_1 $ is the component in the direction of $ x-y $ and $ \xi' $ is orthogonal to it, this implies $ (t-s) \frac{\vm{\xi'}}{\jb{\xi}_k} \leq \lambda/4 $, and moreover
$$ \nabla \phi_k \cdot \frac{x-y}{\vm{x-y}}=(t-s)\frac{\xi_1}{\jb{\xi}_k} + \vm{x-y}=\pm \lambda+ (t-s)(1+\frac{\xi_1}{\jb{\xi}_k} ) $$
Thus $ \xi_1 \leq 0 $ and using that $ \lambda \gg 2^{-2k} (t-s) $ we have
$$ (t-s)(1+\frac{\xi_1}{\jb{\xi}_k})=\frac{t-s}{\jb{\xi}_k}\frac{2^{-2k}+\vm{\xi'}^2}{\jb{\xi}_k+\vm{\xi_1}}<2^{-2k}(t-s)+ \frac{1}{4} \lambda \leq \frac{1}{2} \lambda $$
which implies $ \vm{ \nabla \phi_k} \geq \lambda/2 $, a contradiction. This concludes the case $ k \geq 1 $.
When $ k=0$ we have $ \vm{x-y} \gg \vm{t-s} $. For the corresponding phase we have $ \vm{\nabla \phi_0} \geq \frac{1}{2} \vm{x-y} $ and thus we get the factor $ \jb{x-y}^{-N} $.
\end{proof}
\
\subsection{\bf Dispersive estimates} \
\
Dispersive estimates for the Klein-Gordon equation are treated in, e.g., \cite[Section 2.5]{NaSch} and \cite{BH1}. The situation here is made slightly more complicated by the presence of the $ e^{\pm i\psi} $ transformations; to account for them we use Prop. \ref{phasederProp}.
Let
$$ K^k \vcentcolon= \int e^{- i \psi_{\pm}^k (t/2^k,x/2^k,2^k\xi)+i\psi_{\pm}^k(s/2^k,y/2^k,2^k\xi) } e^{\pm i (t-s) \jb{\xi}_k} e^{i (x-y) \xi} a(\xi) \,\mathrm{d} \xi $$
where $ a(\xi) $ is a bump function supported on $ \{ \vm{\xi} \simeq 1 \} $ for $ k \geq 1 $ and on $ \{ \jb{\xi} \simeq 1 \} $ for $ k=0 $.
\begin{proposition}
For any $ k \geq 0 $ one has the inequalities
\begin{numcases}{\vm{K^k(t,x;s,y)} \lesssim }
\frac{1}{\jb{t-s}^{\frac{d-1}{2}}} \label{dispestt1} \\
\frac{2^k}{\jb{t-s}^{d/2}} \label{dispestt12}
\end{numcases}
\end{proposition}
\begin{proof}
\pfstep{Step~1} We first prove \eqref{dispestt1} for $ k \geq 1 $. We assume $ \vm{t-s} \simeq \vm{x-y} \gg 1 $ and that $ a(\xi) $ is supported on the region $ \angle(\mp \xi,x-y) \leq 10^{-2} $, since in the other cases the phase is non-stationary and we obtain the bound $ \jb{t-s}^{-N} $ from the proof of Prop. \ref{stat_phase_a}. We denote
$$ \varphi(t,x,s,y,\xi)=- \psi_{\pm}^k (t/2^k,x/2^k,2^k\xi)+\psi_{\pm}^k(s/2^k,y/2^k,2^k\xi) $$
and write
$$ (x-y) \cdot \xi \pm \vm{x-y} \vm{\xi}=\pm 2 \vm{x-y} \vm{\xi} \sin^2(\theta/2) $$
where $ \theta=\angle(\mp \xi,x-y) $. We write $ \xi=(\xi_1,\xi') $ in polar coordinates, where $ \xi_1=\vm{\xi} $ is the radial component. Then
\begin{equation} \qquad \quad \ K^k=\int_{\xi_1 \simeq 1} \xi_1^3e^{\pm i (t-s) \jb{\xi_1}_k \mp i \vm{x-y} \xi_1 } \Omega(\xi_1) \,\mathrm{d} \xi_1 \label{int_pol} \end{equation}
$$ \text{where} \quad \Omega(\xi_1)= \int e^{\pm i \vm{x-y} 2 \xi_1 \sin^2(\theta/2)} a(\xi_1,\xi') e^{i \varphi} \,\mathrm{d} S(\xi')
$$
For each $ \xi_1 $ we bound
\begin{equation} \label{stph:omg1} \vm{\Omega(\xi_1)} \lesssim \vm{x-y}^{-\frac{d-1}{2}} \end{equation}
as a stationary-phase estimate (see Prop. \ref{stationary}). When derivatives fall on $ e^{i \varphi} $ we get factors of $ \varepsilon \vm{x-y}^{\delta} $ by \eqref{phaseder}; however, these are compensated by the extra factors $ \vm{x-y}^{-1} $ from the expansion \eqref{expansion}. Integrating in $ \xi_1 $ we obtain \eqref{dispestt1}.
Furthermore, using \eqref{st-phase:est} we obtain
\begin{equation} \label{stph:omg2}
\vm{\partial_{\xi_1}^j \Omega(\xi_1)} \lesssim \vm{x-y}^{-\frac{d-1}{2}} \langle 2^{-2k} \vm{x-y}^{4 \delta} \rangle \qquad j=1,2.
\end{equation}
The term $ \langle 2^{-2k} \vm{x-y}^{4 \delta} \rangle $ occurs by \eqref{phaseder2} when $ \partial_{\xi_1} $ derivatives fall on $ e^{i \varphi} $.
\pfstep{Step~2} Now we prove \eqref{dispestt12}.
First we consider $ k = 0 $ and $ \vm{t-s} \gg 1 $. When $ \vm{t-s} \leq c \vm{x-y} $ the phase is non-stationary and we obtain $ \jb{t-s}^{-N} $. Otherwise, we consider the phase $ \jb{\xi}+\frac{x-y}{\vm{t-s}}\cdot \xi $ and get the bound $ \jb{t-s}^{-d/2} $ as a stationary-phase estimate using Prop. \ref{phasederProp}.
Now we take $ k \geq 1 $ under the assumptions from Step 1. We may also assume $ \vm{t-s} \gg 2^{2k} $ (otherwise \eqref{dispestt12} follows from \eqref{dispestt1}).
In \eqref{int_pol} we have the phase $ 2^{-2k} \vm{t-s} f(\xi_1) $ where
$$ f(\xi_1)=2^{2k} \Big( \jb{\xi_1}_k- \frac{\vm{x-y}}{\vm{t-s}} \xi_1 \Big), \quad f'(\xi_1)=2^{2k} \Big( \frac{\xi_1}{\jb{\xi_1}_k}- \frac{\vm{x-y}}{\vm{t-s}} \Big), \quad \vm{f''(\xi_1)} \simeq 1,
$$
and $ \vm{f^{(m)}(\xi_1)} \lesssim 1 $ for $ m \geq 3 $. Using stationary phase in $ \xi_1 $ (Prop. \ref{stationary}/\ref{nonstationary}) one has
$$ \vm{K^k} \lesssim \frac{1}{\vm{2^{-2k} \vm{t-s}}^{\frac{1}{2}}} \sup \vm{\Omega} + \frac{1}{2^{-2k}\vm{t-s}} \sup_{j \leq 2} \vm{\partial_{\xi_1}^j \Omega},
$$
which, together with \eqref{stph:omg1}, \eqref{stph:omg2}, implies \eqref{dispestt12}.
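For the main term, since $ \vm{t-s} \simeq \vm{x-y} $, the two factors combine to
$$ \frac{1}{(2^{-2k} \vm{t-s})^{\frac{1}{2}}} \cdot \frac{1}{\vm{x-y}^{\frac{d-1}{2}}} \simeq \frac{2^k}{\vm{t-s}^{d/2}}, $$
while the contribution of the second term is controlled similarly, using $ \vm{t-s} \gg 2^{2k} $ and the smallness of $ \delta $.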
\end{proof}
Now we consider more localized estimates.
Let $ \mathcal C $ be a box of size $\simeq 2^{k'} \times ( 2^{k' +l'} )^{d-1} $ located in an annulus $ \{ \jb{\xi} \simeq 2^k \} $ for $ k \geq 0 $.
Suppose $ a_{\mathcal C} $ is a bump function adapted to $ \mathcal C $ and define
\begin{equation} K^{k',l'}(t,x;s,y) \vcentcolon= \int e^{- i \psi_{\pm}^k (t,x,\xi)+i\psi_{\pm}^k(s,y,\xi) } e^{\pm i (t-s) \jb{\xi}} e^{i (x-y) \xi} a_{\mathcal C}(\xi) \,\mathrm{d} \xi. \label{oscInteg} \end{equation}
\begin{proposition}
Let $ k \geq 0, \ k' \leq k $ and $ -k \leq l' \leq 0 $. Then, we have
\begin{equation} \label{dispestt2}
\vm{K^{k',l'}(t,x;s,y) } \lesssim 2^{dk'+(d-1)l'} \frac{1}{\jb{2^{2(k'+l')-k} (t-s)}^{\frac{d-1}{2}}}
\end{equation}
\end{proposition}
\begin{proof}
We assume $ 2^{2(k'+l')-k} \vm{t-s} \gg 1 $ (otherwise we bound the integrand by absolute values on $ \mathcal C $) and $ \vm{t-s} \simeq \vm{x-y} $ (otherwise the phase is non-stationary). Let $ k \geq 1 $. By a change of variables we rescale to $ \vm{\xi} \simeq 1 $ and write $ K^{k',l'} $ as $ 2^{dk} $ times the integral \eqref{int_pol} applied at $ 2^k (t,x;s,y) $, with $ a(\cdot) $ supported on a box of size $ 2^{k'-k} \times ( 2^{k' +l'-k} )^{d-1} $. As before, for each $ \xi_1 $ we bound the inner integral $ \Omega(\xi_1) $ by $ (2^k \vm{t-s})^{-\frac{d-1}{2}} $ by stationary phase. Integrating in $ \xi_1 $ over a radius of size $ 2^{k'-k} $ we get $ 2^{dk} 2^{k'-k} (2^k \vm{t-s})^{-\frac{d-1}{2}} $, which gives \eqref{dispestt2}. When $ k=0, \ l'=O(1) $ the estimate is straightforward.
\end{proof}
\begin{corollary} \label{Cor:L2Linf} Let $ k \geq 0, \ k' \leq k $ and $ -k \leq l' \leq 0 $. Then
\begin{equation} e^{-i \psi_{\pm}^k} (t,x,D) e^{\pm i t \jb{D}} P_{C_{k'}(l')} :L^2_x \to 2^{\frac{k}{2}+\frac{d-2}{2}k'+\frac{d-3}{2}l' }L^2 L^{\infty} \label{TT*:L2Linf}
\end{equation}
\end{corollary}
\begin{proof}
By a $ TT^* $ argument this follows from
$$ 2^{-k-(d-2)k'-(d-3)l'} e^{- i \psi_{\pm}^k} (t,x,D) e^{\pm i (t-s) \jb{D}} P_{C_{k'}(l')}^2 e^{i \psi_{\pm}^k} (D,s,y) : L^2L^1 \to L^2 L^{\infty} $$
We use \eqref{dispestt2} to bound the kernel of this operator; the mapping follows since $ 2^{2k'+2l'-k} \jb{2^{2k'+2l'-k} \vm{r}}^{-(d-1)/2} $ has $ L^1_r L^{\infty}_x $ norm $ \lesssim 1 $.
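Indeed, writing $ \mu \vcentcolon= 2k'+2l'-k $, for $ d \geq 4 $ we have
$$ \int_{\mathbb{R}} \frac{2^{\mu}}{\jb{2^{\mu} r}^{\frac{d-1}{2}}} \,\mathrm{d} r=\int_{\mathbb{R}} \frac{\,\mathrm{d} s}{\jb{s}^{\frac{d-1}{2}}} \lesssim 1, \qquad \text{since } \frac{d-1}{2} \geq \frac{3}{2}>1. $$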
\end{proof}
\
\subsection{\bf The PW decay bound, $ d=4 $}\
\
Let $ \mathcal C $ be a box of size $\simeq 2^{k'} \times ( 2^{k'-k} )^3 $ with center $ \xi_0 $ located in an annulus $ \{ \vm{\xi} \sim 2^k \} \subset \mathbb{R}^4 $. We consider the decay of the integral $ K^{k',-k} $ defined in \eqref{oscInteg}, in the frame \eqref{frame}, \eqref{frame2}, where $ \omega $ is the direction of $ \xi_0 $ and $ \lambda=\frac{\vm{\xi_0}}{\jb{\xi_0}} $.
This type of bound is similar to the one used by Bejenaru and Herr \cite[Prop. 2.3]{BH1} to establish null-frame $ L^2_{t_{\omega,\lambda}} L^{\infty}_{x_{\omega,\lambda}} $ Strichartz estimates, an idea we also follow in this paper.
\begin{proposition} When $ \vm{t_{\omega}-s_{\omega}} \gg 2^{k'-3k} \vm{t-s} $, we have
\begin{equation} \label{PWdecay}
\vm{K^{k',-k}(t,x;s,y)} \lesssim 2^{4k'-3k} \frac{1}{\jb{2^{k'} (t_{\omega}-s_{\omega})}^2}
\end{equation}
\end{proposition}
\begin{proof} Denoting $ T=\vm{t-s}+\vm{x-y} $, we clearly have $ \vm{t_{\omega}-s_{\omega}} \leq T $. In the cases when $ \vm{t-s} \geq 2\vm{x-y} $ or $\vm{x-y} \geq 2 \vm{t-s} $ from integrating by parts radially we obtain the decay $ \jb{2^{k'}T}^{-N} 2^{4k'-3k} $.
Now suppose $ \vm{t-s} \simeq \vm{x-y} $, $ \pm=+ $ and let
$$ \phi(\xi)=(t-s) \jb{\xi}+(x-y) \cdot \xi, \qquad \nabla \phi=(t-s) \frac{\xi}{\jb{\xi}}+x-y. $$
For $ \xi \in \mathcal C $ we have $ \frac{\vm{\xi}}{\jb{\xi}}=\lambda+O(2^{k'-3k}) $ and
$$ \frac{\xi}{\vm{\xi}}=\omega+\sum_i O(2^{k'-2k}) \omega_i + O(2^{2(k'-2k)}), \qquad \omega_i \in \omega^{\perp}. $$
Therefore
$$ \omega \cdot \nabla \phi = (t_{\omega}-s_{\omega}) \sqrt{1+\lambda^2} + O(2^{k'-3k} \vm{t-s}). $$
Due to the assumption, the phase is non-stationary: $ \vm{ \omega \cdot \nabla \phi} \gtrsim \vm{t_{\omega}-s_{\omega}} $, and we obtain \eqref{PWdecay} by integrating by parts with $ \partial_{\omega}=\omega \cdot \nabla $.
When derivatives fall on $ e^{- i \psi_{\pm}^k (t,x,\xi)+ i \psi_{\pm}^k(s,y,\xi) } $ we get extra factors of $ 2^{k'-k} (2^k T)^{\delta} $ from Prop. \ref{phasederProp}. However, we compensate for these factors by writing the integral in polar coordinates, similarly to \eqref{int_pol}, and using stationary phase for the inner integral as in the proofs of \eqref{dispestt2}, \eqref{dispestt1}; this gives an extra $ (2^{2k'-3k}T)^{-3/2} $, which suffices. \end{proof}
\begin{corollary} \label{corPW} For $ k\geq 1 $ let $ \xi_0 $ be the center of the box $ C_{k'}(-k) $, $ \lambda=\frac{\vm{\xi_0}}{\jb{\xi_0}} $ and $ \omega=\frac{\xi_0}{\vm{\xi_0}} $. Then
$$ 2^{-\frac{3}{2}(k'-k)} e^{- i \psi_{\pm}^k} (t,x,D) e^{i t \jb{D}} P_k P_{C_{k'}(-k)} : L^2_x \to L^2_{t_{\omega,
\lambda}} L^{\infty}_{x_{\omega,\lambda}} $$
\end{corollary}
\begin{proof}
By a $ TT^* $ argument this follows from the mapping
$$ 2^{-3(k'-k)} e^{- i \psi_{\pm}^k} (t,x,D) e^{ i (t-s) \jb{D}} P_k^2 a_{\mathcal C}(D) e^{i \psi_{\pm}^k} (D,s,y) :L^2_{t_{\omega,\lambda}} L^1_{x_{\omega,\lambda}} \to L^2_{t_{\omega,\lambda}} L^{\infty}_{x_{\omega,\lambda}} $$
which holds since the kernel of this operator is bounded by $ 2^{k'} \jb{2^{k'} (t_{\omega}-s_{\omega})}^{-3/2} \in L^1_{t_{\omega}-s_{\omega}} L^{\infty} $. When $ \vm{t_{\omega}-s_{\omega}} \gg 2^{k'-3k} \vm{t-s} $ this follows from \eqref{PWdecay}, while for $ \vm{t_{\omega}-s_{\omega}} \lesssim 2^{k'-3k} \vm{t-s} $ it follows from \eqref{dispestt2} with $ l'=-k $.
\end{proof}
\
\subsection{\bf The null frame decay bound, $ d=4 $} \
\
Let $ \bar{\omega} \in \mathbb{S}^3 $ and let $ \kappa_l $ be a spherical cap of angle $ 2^l $ such that $ \angle(\kappa_l,\pm \bar{\omega}) \simeq 2^{l} $.
Let $ \lambda=\frac{1}{\sqrt{1+2^{-2p}}} $, which together with $ \bar{\omega} $ defines the frame \eqref{frame} and the coordinates in this frame
$$ t_{\bar{\omega}}=(t,x) \cdot \bar{\omega}^{\lambda}, \quad x^1_{\bar{\omega}}=(t,x) \cdot \bar{\bar{\omega}}^{\lambda}, \quad x_{\bar{\omega},i}'=x \cdot \bar{\omega}_i^{\perp} $$
Suppose $ a_l(\xi) $ is a smooth function adapted to $ \{ \vm{\xi} \simeq 2^k, \ \frac{\xi}{\vm{\xi}} \in \kappa_l \}$ and consider
$$ K_l^{a_l}(t,x;s,y) \vcentcolon= \int e^{- i \psi_{\pm}^k (t,x,\xi)+ i \psi_{\pm}^k(s,y,\xi) } e^{\pm i (t-s) \jb{\xi}} e^{i (x-y) \xi} a_l(\xi) \,\mathrm{d} \xi. $$
\begin{proposition}
Suppose $ \max(2^{-p},2^{-k}) \ll 2^{l} \simeq \angle(\kappa_l,\pm \bar{\omega}). $ Then, we have
\begin{equation} \label{NEdecay} \vm{ K_l^{a_l}(t,x;s,y) } \lesssim 2^{4k+3l} \frac{1}{\jb{2^{k+2l} \vm{t-s}}^N} \frac{1}{\jb{2^{k+l} \vm{x'_{\bar{\omega}}-y'_{\bar{\omega}}}}^N} \jb{2^k \vm{t_{\bar{\omega}}-s_{\bar{\omega}}}}^{2N} \end{equation}
Moreover, the implicit constant depends only on $ 2N+1 $ derivatives of $ a_l $.
\end{proposition}
\begin{proof}
We prove that the phase is non-stationary due to the angular separation.
Suppose $ \pm=+ $ and let
$$ \phi(\xi)=(t-s) \jb{\xi}+(x-y) \cdot \xi, \qquad \nabla \phi=(t-s) \frac{\xi}{\jb{\xi}}+x-y. $$
Choosing the right $ \bar{\omega}_i^{\perp} $ we obtain
$$ \nabla \phi \cdot \bar{\omega}_i^{\perp} \simeq 2^{l} (t-s)+\vm{x'_{\bar{\omega}}-y'_{\bar{\omega}}}. $$
When $ 2^{l} \vm{t-s} \ll \vm{x'_{\bar{\omega}}-y'_{\bar{\omega}}} $ we obtain
$ \vm{K_l^{a_l}} \lesssim 2^{4k+3l} \jb{2^{k+l} \vm{x'_{\bar{\omega}}-y'_{\bar{\omega}}}}^{-2N} $, which implies \eqref{NEdecay}. Similarly when $ \vm{x'_{\bar{\omega}}-y'_{\bar{\omega}}} \ll 2^{l} \vm{t-s} $ we get $ \vm{K_l^{a_l}} \lesssim 2^{4k+3l} \jb{2^{k+l} 2^{l} \vm{t-s}}^{-2N} $ which also suffices.
We use Prop \ref{phasederProp} to control the contribution of $ \psi_{\pm}^k (t,x,\xi)-\psi_{\pm}^k(s,y,\xi) $.
Now assume $ 2^{l} \vm{t-s} \simeq \vm{x'_{\bar{\omega}}-y'_{\bar{\omega}}} $. When $ ( 2^{2l} \vm{t-s}\simeq ) \ 2^{l} \vm{x'_{\bar{\omega}}-y'_{\bar{\omega}}} \lesssim \vm{t_{\bar{\omega}}-s_{\bar{\omega}}} $ estimating the integrand by absolute values we get $ \vm{ K_l^{a_l}} \lesssim 2^{4k+3l} $, which suffices in this case.
Now we assume $ \vm{t_{\bar{\omega}}-s_{\bar{\omega}}} \ll 2^{2l} \vm{t-s} \simeq 2^l \vm{x'_{\bar{\omega}}-y'_{\bar{\omega}}} $.
Since $(x-y) \cdot \bar{\omega}=- \lambda (t-s)+(t_{\bar{\omega}}-s_{\bar{\omega}}) \sqrt{1+\lambda^2} $, we have
$$ \nabla \phi \cdot \bar{\omega}=(t-s) \left( \frac{\vm{\xi}}{\jb{\xi}} \frac{\xi}{\vm{\xi}} \cdot \bar{\omega} - \lambda \rpr+(t_{\bar{\omega}}-s_{\bar{\omega}}) \sqrt{1+\lambda^2} $$
We estimate
$$ \frac{\vm{\xi}}{\jb{\xi}}-1 \simeq -2^{-2k}, \quad \frac{\xi}{\vm{\xi}} \cdot \bar{\omega} -1\simeq -2^{2l}, \quad \lambda-1 \simeq -2^{-2p} $$
Since $ 2^{-2k} $ and $ 2^{-2p} $ are $ \ll 2^{2l} $ by hypothesis, the $ 2^{2l} $ term dominates, so
$$ \vm{\nabla \phi \cdot \bar{\omega}} \gtrsim 2^{2l} \vm{t-s}, $$
which implies \eqref{NEdecay} as a non-stationary phase estimate.
\end{proof}
Now we consider frequency localized symbols and look at the $ TT^{*} $ operator
\begin{equation} \label{ttstarop}
e^{- i \psi_{\pm}^k}_{<k} (t,x,D) e^{\pm i (t-s) \jb{D}} a_l(D) e^{i \psi_{\pm}^k}_{<k} (D,s,y) \end{equation}
from $ L^2(\Sigma) \to L^2(\Sigma) $, where $ \Sigma=(\bar{\omega}^{\lambda})^{\perp} $ with kernel
$$ K_{<k}^l(t,x;s,y) \vcentcolon= \int e^{- i \psi_{\pm}^k(t,x,\xi)}_{<k} e^{\pm i (t-s) \jb{\xi}} e^{i (x-y) \xi} a_l(\xi) e^{i \psi_{\pm}^k(s,y,\xi)}_{<k} \,\mathrm{d} \xi $$
for $ (t,x;s,y) \in \Sigma \times \Sigma $, i.e. $ t_{\bar{\omega}}=s_{\bar{\omega}}=0 $.
\begin{proposition} \label{Propnullekernel}
Suppose $ \max(2^{-p},2^{-k}) \ll 2^{l} \simeq \angle(\kappa_l,\pm \bar{\omega}). $ Then,
\begin{equation} \label{nullekernelbd} \vm{ K_{<k}^l(t,x;s,y) } \lesssim 2^{4k+3l} \frac{1}{\jb{2^{k+2l} \vm{x^1_{\bar{\omega}}-y^1_{\bar{\omega}}}}^N} \frac{1}{\jb{2^{k+l} \vm{x'_{\bar{\omega}}-y'_{\bar{\omega}}}}^N} \end{equation}
holds when $ \lambda (t-s)+(x-y) \cdot \bar{\omega}=0 $.
\end{proposition}
\begin{corollary} \label{Cornullframe}
Suppose $ \max(2^{-p},2^{-k}) \ll 2^{l} \simeq \angle(\kappa_l,\pm \bar{\omega}) $. Then
\begin{equation} 2^l e^{- i \psi_{\pm}^k}_{<k} (t,x,D) e^{\pm i t \jb{D}} P_k P_{\kappa_l} : L^2_x \to L^{\infty}_{t_{\bar{\omega}, \lambda}} L^2_{x_{\bar{\omega},\lambda}}. \end{equation}
\end{corollary}
\begin{corollary} \label{CorNE}
Let $ \mathcal C=\mathcal C_{k'}(l') $. Then
\begin{equation} e^{- i \psi_{\pm}^k}_{<k} (t,x,D) e^{\pm i t \jb{D}} P_k P_{\mathcal C} : L^2_x \to NE_{\mathcal C}^{\pm}. \end{equation}
\end{corollary}
\begin{proof}[Proof of Prop. \ref{Propnullekernel}] We average using \eqref{averaging} to write $ K_{<k}^l(t,x;s,y) $ as
$$ \iint e^{- i T_z \psi_{\pm}^k(t,x,\xi)} e^{+ i T_w\psi_{\pm}^k(s,y,\xi)} a_l(\xi) e^{\pm i (t-s) \jb{\xi}} e^{i (x-y) \xi} \,\mathrm{d} \xi m_k(z) m_k(w) \,\mathrm{d} z \,\mathrm{d} w= $$
\begin{equation} =\int T_z T_w K^{a(z,w)}_l (t,x;s,y) m_k(z) m_k(w) \,\mathrm{d} z \,\mathrm{d} w \label{intermint1} \end{equation}
where $ a(z,w)(\xi)=e^{-i (z-w)\cdot(\pm \jb{\xi},\xi)} a_l(\xi) $. Since $ t_{\bar{\omega}}=s_{\bar{\omega}}=0 $ using \eqref{NEdecay} we obtain
\begin{align*}
& \vm{T_z T_w K^{a(z,w)}_l (t,x;s,y)} \lesssim \jb{2^k (\vm{z}+\vm{w})}^{2N+1} 2^{4k+3l} \times \\
& \quad \times \jb{2^{k+2l} \vm{t-s+z_1-w_1}}^{-N} \jb{2^{k+l} \vm{x'_{\bar{\omega}}-y'_{\bar{\omega}}+z'_{\bar{\omega}}-w'_{\bar{\omega}} }}^{-N} \jb{2^k \vm{z-w}}^{2N}
\end{align*}
We obtain \eqref{nullekernelbd} from the integral \eqref{intermint1} using the rapid decay
$$ \jb{2^k (\vm{z}+\vm{w})}^{2N+1} \jb{2^k \vm{z-w}}^{2N} m_k(z) m_k(w) \lesssim \jb{2^k \vm{z}}^{-N_2} \jb{2^k \vm{w}}^{-N_2}. $$
for any $ N_2 $, and by repeatedly applying
$$ \int_{\mathbb{R}} \frac{1}{\jb{\alpha \vm{a-r}}^{N}} \frac{2^k}{\jb{2^k \vm{r} }^{N_2}} \,\mathrm{d} r \lesssim \frac{1}{\jb{\alpha \vm{a}}^{N}} $$
for $ \alpha \leq 2^k $ and $ N_2 $ large enough. Note that here $ \vm{t-s} \simeq \vm{x^1_{\bar{\omega}}-y^1_{\bar{\omega}}} $.
\end{proof}
\begin{proof}[Proof of Corollary \ref{Cornullframe}] By translation invariance, it suffices to prove that the operator is bounded from $ L^2_x \to L^2(\Sigma) $. By a $ TT^* $ argument this follows if we prove $ 2^{2l} \times $ \eqref{ttstarop} $ : L^2(\Sigma) \to L^2(\Sigma) $, for which we use Schur's test. Indeed, the kernel of this operator is $ 2^{2l} K_{<k}^l(t,x;s,y) $ on $ \Sigma \times \Sigma $, which is integrable on $ \Sigma $ by \eqref{nullekernelbd}.
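The exponent count in Schur's test: integrating \eqref{nullekernelbd} over $ \Sigma $ gains a factor $ 2^{-(k+2l)} $ from the $ x^1_{\bar{\omega}} $ variable and $ 2^{-3(k+l)} $ from the $ x'_{\bar{\omega}} $ variables, so
$$ 2^{2l} \int_{\Sigma} \vm{K_{<k}^l(t,x;s,y)} \,\mathrm{d} (s,y) \lesssim 2^{2l} \cdot 2^{4k+3l} \cdot 2^{-(k+2l)} \cdot 2^{-3(k+l)}=1. $$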
\end{proof}
\begin{proof}[Proof of Corollary \ref{CorNE}]
Recall definition \eqref{NE:norm}. For any $ \bar{\omega} $, $ \lambda=\lambda(p) $ such that $ \angle(\bar{\omega},\pm \mathcal C) \gg \max(2^{-p},2^{-k}, 2^{l'+k'-k}) $ we may define $ 2^l \simeq \angle(\bar{\omega},\pm \mathcal C) $ and $ \kappa_l \supset \mathcal C $ so that Corollary \ref{Cornullframe} applies.
\end{proof}
\
\section{Proof of Theorem \ref{Renormalization:thm}} \label{sec:pf:thm:ren}
\subsection{Proof of the fixed time $ L^2_x $ estimates \eqref{renbd}, \eqref{renbdt}, \eqref{renbd2}} \
\
The following proposition establishes the $ L^2_x $ part of \eqref{renbd}, \eqref{renbdt}.
\begin{proposition} \label{fixedtimeprop} For any $ k \geq 0 $, the mappings
\begin{equation} \label{fixedtime1} e^{\pm i \psi_{\pm}^k} (t_0,x,D) \bar{P}_k : L^2_x \to L^2_x \end{equation}
\begin{equation} \label{fixedtime2} e_{<h}^{\pm i \psi_{\pm}^k} (t_0,x,D) \bar{P}_k : L^2_x \to L^2_x \end{equation}
\begin{equation} \label{fixedtime3} \nabla_{t,x} e_{<h}^{\pm i \psi_{\pm}^k} (t_0,x,D) \bar{P}_k : L^2_x \to \varepsilon 2^k L^2_x \end{equation}
hold for any fixed $ t_0 $, with constants uniform in $ h $ and $ t_0 \in \mathbb{R} $. By duality, the same mappings hold for the right quantizations.
\end{proposition}
\begin{proof}
\pfstep{Step~1}
First we prove \eqref{fixedtime1} by considering the $ TT^* $ operator
$$ e^{\pm i \psi_{\pm}^k} (t_0,x,D) \bar{P}_k^2 e^{\mp i \psi_{\pm}^k} (D,t_0,y) $$
with kernel $ K_k^a(t_0,x,t_0,y) $ defined by \eqref{kernel:a}.
Due to the $ (x,y) $ symmetry and Schur's test it suffices to show
$$ \sup_{x} \int \vm{K_k^a(t_0,x,t_0,y)} \,\mathrm{d} y \lesssim 1. $$
This follows from Prop. \ref{stat_phase_a}.
\pfstep{Step~2} Now we prove \eqref{fixedtime2} using \eqref{averaging} and \eqref{fixedtime1}. For $ u \in L^2_x $ we write
$$ e^{\pm i \psi_{\pm}^k}_{<h}(t_0,x,D) \bar{P}_k u =\int_{\mathbb{R}^{d+1}} m_h(s,y) e^{\pm i \psi_{\pm}^k}(t_0+s,x+y,D) [ \bar{P}_k u_{y} ] \,\mathrm{d} s \,\mathrm{d} y $$
where $ \hat{u}_{y}(\xi)=e^{-i y \cdot \xi} \hat{u}(\xi) $.
By Minkowski's inequality, \eqref{fixedtime1} for $ t_0+s $, translation invariance of $ L^2_x $, and the bound $ \vn{u_{y}}_{L^2_x} \leq \vn{u}_{L^2_x} $ we obtain
\begin{align*} \vn{e^{\pm i \psi_{\pm}^k}_{<h}(t_0,x,D) \bar{P}_k u }_{L^2_x} & \lesssim \int_{\mathbb{R}^{d+1}} m_h(s,y) \vn{e^{\pm i \psi_{\pm}^k}(t_0+s,\cdot,D) [ \bar{P}_k u_{y} ]}_{L^2_x} \,\mathrm{d} s \,\mathrm{d} y \\
& \lesssim \int_{\mathbb{R}^{d+1}} m_h(s,y) \vn{ \bar{P}_k u_{y}}_{L^2_x} \,\mathrm{d} s \,\mathrm{d} y \lesssim \vn{u}_{L^2_x}.
\end{align*}
\pfstep{Step~3} Since we have
$$ \nabla_{t,x} e^{\pm i \psi_{\pm}^k(t,x,\xi)} =\pm i \nabla_{t,x}\psi_{\pm}^k (t,x,\xi)e^{\pm i \psi_{\pm}^k(t,x,\xi)} $$
using \eqref{decomp4}, \eqref{fixedtime1} and Lemma \ref{decomp_lemma} we obtain
$$ \vn{ \nabla_{t,x} e^{\pm i \psi_{\pm}^k}(t,x,D)P_k}_{L^1L^2 \to L^1 L^2} \lesssim \varepsilon 2^k $$
Applying this to $ \phi(t,x)=\delta_{t_0}(t) \otimes u(x) $ (or rather to an approximation of the identity $ \eta_{\varepsilon} $ converging to $ \delta_{t_0} $ in $ t $) we obtain
$$ \nabla_{t,x} e^{\pm i \psi_{\pm}^k} (t_0,x,D) P_k : L^2_x \to \varepsilon 2^k L^2_x $$
for any $ t_0 $. By averaging this estimate as in Step 2 we obtain \eqref{fixedtime3}.
\end{proof}
\begin{remark}
The same argument also shows
\begin{equation} \label{fixedtime1b} e^{\pm i \psi^{k}_{<k'',\pm}} (t_0,x,D) P_k : L^2_x \to L^2_x
\end{equation}
\end{remark}
\
Now we turn to the proof of \eqref{renbd2}.
\begin{proposition} \label{fixedtime4} Let $ k \geq 0 $. For any $ t_0 $ we have
$$ e_{<k}^{-i \psi_{\pm}^k} (t_0,x,D) e_{<k}^{i \psi_{\pm}^k} (D,t_0,y)-I : \bar{P}_k L^2_{x} \to \varepsilon^{\frac{1}{2}} L^2_{x} $$
\end{proposition}
\begin{proof}
\pfstep{Step~1} First, let us note that
$$ e_{<k}^{-i \psi_{\pm}^k} (t_0,x,D) [ e_{<k}^{i \psi_{\pm}^k} (D,t_0,y)\bar{P}_k- \bar{P}_k e_{<k}^{i \psi_{\pm}^k} (D,t_0,y)] : L^2_{x} \to \varepsilon L^2_{x} $$
This follows from \eqref{fixedtime2} and from \eqref{spec_decomp2}, \eqref{fixedtime3}.
The kernel of $ e_{<k}^{-i \psi_{\pm}^k} (t_0,x,D) \bar{P}_k e_{<k}^{i \psi_{\pm}^k} (D,t_0,y) $ is
$$ K_{<k}(x,y)= \int e^{- i \psi_{\pm}^k}_{<k} (t_0,x,\xi) a(\xi /2^k) e^{i (x-y) \xi} e^{+ i \psi_{\pm}^k}_{<k} (t_0,y,\xi) \,\mathrm{d} \xi $$
while the kernel of $ \bar{P}_k $ is $ 2^{dk} \check{a}(2^k(x-y)) $. Thus, by Schur's test it remains to prove
\begin{equation} \label{kerSchur} \sup_x \int \vm{K_{<k}(x,y)-2^{dk} \check{a}(2^k(x-y))} \,\mathrm{d} y \lesssim \varepsilon^{\frac{1}{2}}. \end{equation}
\pfstep{Step~2} For large $ \vm{x-y} $ we will use
\begin{equation} \label{krbd1} 2^{dk} \vm{\check{a}(2^k(x-y))},\ \vm{K_{<k}(x,y)} \lesssim \frac{2^{dk}}{(1+2^k \vm{x-y})^{2d+1}}.
\end{equation}
The bound for $ \check{a} $ is obvious. Recalling \eqref{averaging} we write $ K_{<k}(x,y) $ as
$$ \iint e^{- i T_z \psi_{\pm}^k(t_0,x,\xi)} e^{+ i T_w\psi_{\pm}^k(t_0,y,\xi)} a(\xi /2^k) e^{i (x-y) \xi} \,\mathrm{d} \xi m_k(z) m_k(w) \,\mathrm{d} z \,\mathrm{d} w= $$
\begin{equation} =\int T_z T_w K^{a(z,w)}_k (t_0,x,t_0,y) m_k(z) m_k(w) \,\mathrm{d} z \,\mathrm{d} w \label{intermint} \end{equation}
where $ z=(t,z'),\ w=(s,w') $, $ a(z,w)(\xi)=e^{-i2^k (z-w)\cdot(\pm \jb{\xi}_k,\xi)} a(\xi) $ and
$$ K^{a}_k (t,x,s,y)=\int e^{- i \psi_{\pm}^k} (t,x,\xi) a(\xi /2^k) e^{\pm i (t-s) \jb{\xi}+i (x-y) \xi} e^{+ i \psi_{\pm}^k}(s,y,\xi) \,\mathrm{d} \xi $$
From Prop. \ref{stat_phase_a}, on the region $ \vm{ \vm{t-s}-\vm{x-y+z'-w'}} \gg 2^{-2k} \vm{t-s} $ we have
$$ \vm{T_z T_w K^{a(z,w)}_k (t_0,x,t_0,y)} \lesssim \jb{2^k (\vm{z}+\vm{w})}^N \frac{2^{dk}}{\jb{2^k ( \vm{t-s}-\vm{x-y+z'-w'})}^N} $$
Over this region, the integral \eqref{intermint} obeys the upper bound in \eqref{krbd1}. This can be seen by repeatedly applying
$$ \int_{\mathbb{R}} \frac{1}{(1+2^k \vm{r-a})^N} \frac{2^k}{(1+2^k r)^{N_1}} \,\mathrm{d} r \lesssim \frac{1}{(1+2^k a)^{N-1}}, \quad N_1 \geq 2 N $$
and for any $ N_2 $
$$ \jb{2^k (\vm{z}+\vm{w})}^N m_k(z) m_k(w) \lesssim \jb{2^k \vm{z}}^{-N_2} \jb{2^k \vm{w}}^{-N_2}. $$
On the region $ \vm{ \vm{t-s}-\vm{x-y+z'-w'}} \lesssim 2^{-2k} \vm{t-s} $, we use the term $ \jb{2^k (t-s)}^{-N} $ from the rapid decay of $ m_k(z), m_k(w) $ and bound
$$ \frac{1}{\jb{2^k (t-s)}^{N}} \lesssim \frac{1}{\jb{2^k( \vm{t-s}-\vm{x-y+z'-w'})}^N}, \quad \vm{T_z T_w K^{a(z,w)}_k} \lesssim 2^{dk} $$
which imply the upper bound in \eqref{krbd1} as before.
\pfstep{Step~3} The kernel of $ e_{<k}^{-i \psi_{\pm}^k} (t_0,x,D) \bar{P}_k e_{<k}^{i \psi_{\pm}^k} (D,t_0,y)-\bar{P}_k $ obeys the bound
\begin{equation} \label{krbd2} \vm{K_{<k}(x,y)-2^{dk} \check{a}(2^k(x-y))} \lesssim \varepsilon 2^{dk} (3+2^k \vm{x-y}). \end{equation}
Indeed, we write $ K_{<k}(x,y)-2^{dk} \check{a}(2^k(x-y)) $ as
$$ 2^{dk} \iint ( e^{- i T_z \psi_{\pm}^k(t_0,x,2^k \xi)+ i T_w\psi_{\pm}^k(t_0,y,2^k \xi)}-1) a(\xi) e^{i 2^k (x-y) \xi} \,\mathrm{d} \xi m_k(z) m_k(w) \,\mathrm{d} z \,\mathrm{d} w $$
and by \eqref{phasediff}, we bound
\begin{align*} \vm{ e^{- i T_z \psi_{\pm}^k(t_0,x,2^k \xi)+ i T_w\psi_{\pm}^k(t_0,y,2^k \xi)}-1} & \lesssim \varepsilon \log(1+2^k (\vm{x-y}+ \vm{z}+\vm{w})) \\
& \lesssim \varepsilon [1+2^k (\vm{x-y}+ \vm{z}+\vm{w})].
\end{align*}
Bounding by absolute values and integrating in $ z $ and $ w $ we obtain \eqref{krbd2}.
\pfstep{Step~4} Now we prove \eqref{kerSchur}. We integrate \eqref{krbd2} on $ \{ y \ | \ \vm{x-y} \leq R \} $ and integrate \eqref{krbd1} on the complement of this set, for $ (2^k R)^{d+1} \simeq \varepsilon^{-\frac{1}{2}} $. We obtain
$$ \text{LHS} \ \eqref{kerSchur} \lesssim \varepsilon (2^kR)^{d+1}+ \frac{1}{(2^k R)^{d+1}} \lesssim \varepsilon^{\frac{1}{2}}. $$
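For the reader's convenience, the two integrals behind this bound evaluate, up to constants, as
$$ \int_{\vm{x-y} \leq R} \varepsilon 2^{dk} (3+2^k \vm{x-y}) \,\mathrm{d} y \lesssim \varepsilon (2^k R)^{d+1}, \qquad \int_{\vm{x-y} > R} \frac{2^{dk}}{(1+2^k \vm{x-y})^{2d+1}} \,\mathrm{d} y \lesssim \frac{1}{(2^k R)^{d+1}}, $$
and the choice $ (2^k R)^{d+1} \simeq \varepsilon^{-\frac{1}{2}} $ balances the two terms.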
\end{proof}
\subsection{Proof of the $ \bar{N}_k $, $ \bar{N}_{k} ^* $ estimates \eqref{renbd}, \eqref{renbdt}, \eqref{renbd2}} \
\
In the proof we will need the following lemma.
\begin{lemma} \label{Nspacelemma}
For $ k \geq 0 $, $ k \geq k' \geq j-O(1) $ and for both quantizations, we have:
\begin{equation} \label{Nlemmabd} 2^{j/2} \vn{\bar{Q}_{j} e^{\pm i \psi_{\pm}^{k}}_{k'} \bar{P}_k G}_{L^2_{t,x}} \lesssim \varepsilon 2^{\delta(j-k')} \vn{G}_{\bar{N}_k^{*}}, \end{equation}
and thus, by duality
\begin{equation} \label{Nlemmaest} 2^{\frac{j}{2}} \vn{\bar{P}_k e^{\pm i \psi_{\pm}^{k}}_{k'} \bar{Q}_{j} F_k}_{\bar{N}_k} \lesssim \varepsilon 2^{\delta(j-k')} \vn{F_k}_{L^2_{t,x}} \end{equation}
\end{lemma}
\begin{corollary} \label{coremb}
For $k \geq 0,\ l\leq 0 $ we have
$$ \bar{Q}_{<k+2l} ( e^{- i \psi_{\pm}^{k}}_{< k}-e^{- i \psi_{\pm}^{k}}_{< k+2l})(t,x,D) \bar{P}_k : \bar{N}_k^{*} \to \bar{X}^{1/2}_1 $$
\end{corollary}
\begin{proof}
This follows by summing over \eqref{Nlemmabd}.
\end{proof}
The proof of this lemma is somewhat lengthy and is deferred to the end of this section. The following proposition completes the proofs of \eqref{renbd}, \eqref{renbdt}, \eqref{renbd2}.
\begin{proposition}\label{Nspacemapprop} For any $ k \geq 0 $, denoting $ \psi=\psi^{k}_{\pm} $, one has:
\begin{equation} \label{Nspacemap1} e_{< k}^{\pm i \psi} (t,x,D), e_{< k}^{\pm i \psi} (D,s,y) : \bar{N}_{k} \to \bar{N}_{k} \end{equation}
\begin{equation} \label{Nspacemap2} \partial_{t,x} e_{< k}^{\pm i \psi} (t,x,D), \partial_{t,x} e_{< k}^{\pm i \psi} (D,s,y) : \bar{N}_{k} \to \varepsilon 2^k \bar{N}_{k} \end{equation}
\begin{equation} \label{Nspacemap3} e_{< k}^{-i \psi} (t,x,D) e_{< k}^{i \psi} (D,s,y)-I : \bar{N}_{k} \to \varepsilon^{\frac{1}{2}} \bar{N}_{k} \end{equation}
By duality, the same mappings hold for $ \bar{N}_{k} ^* $ in place of $ \bar{N}_{k} $.
\end{proposition}
\begin{proof}
\pfstep{Step~1} Since $ \bar{N}_{k} $ is defined as an atomic space, it suffices to prove \eqref{Nspacemap1}, \eqref{Nspacemap2} applied to $ F $ when $ F $ is an $ L^1 L^2 $-atom ($ \vn{F}_{L^1 L^2} \leq 1 $) or when $ \bar{Q}_j F $ is an $ \bar{X}_{1}^{-\frac{1}{2}} $-atom ($ 2^{-\frac{j}{2}} \vn{ \bar{Q}_j F}_{L^2_{t,x}} \leq 1 $). The first case follows from integrating the pointwise-in-$t$ bounds \eqref{fixedtime2}, \eqref{fixedtime3}
$$ \vn{ (e_{< k}^{\pm i \psi}, \varepsilon^{-1}2^{-k} \nabla e_{< k}^{\pm i \psi})F_k(t)}_{L^2_x} \lesssim \vn{F_k(t)}_{L^2_x} $$
for both the left and right quantizations.
Now consider the second case. We split
$$ e_{< k}^{\pm i \psi}=e_{< \min(j,k)}^{\pm i \psi}+( e_{< k}^{\pm i \psi}-e_{< \min(j,k)}^{\pm i \psi}). $$
Note that $ e_{< \min(j,k)}^{\pm i \psi} \bar{Q}_j F= \tilde{\bar{Q}}_j e_{< \min(j,k)}^{\pm i \psi} \bar{Q}_j F $ and thus the bound
$$ \vn{e_{< \min(j,k)}^{\pm i \psi} \bar{Q}_j F}_{\bar{N}_{k} } \lesssim 2^{-\frac{j}{2}} \vn{e_{< \min(j,k)}^{\pm i \psi} \bar{Q}_j F}_{L^2_{t,x}} \lesssim 2^{-\frac{j}{2}} \vn{\bar{Q}_j F}_{L^2_{t,x}} $$
follows from integrating \eqref{fixedtime2}. The same argument applies to $ \nabla e_{< \min(j,k)}^{\pm i \psi} $ using \eqref{fixedtime3}.
The remaining estimate, for $ j \leq k $
\begin{equation} \label{intermbd} \vn{(e_{< k}^{\pm i \psi}-e_{< j}^{\pm i \psi}) \bar{Q}_j F_k}_{\bar{N}_{k} } \lesssim \varepsilon 2^{-\frac{j}{2}} \vn{\bar{Q}_j F_k}_{L^2_{t,x}} \end{equation}
follows by summing \eqref{Nlemmaest} in $ k'$. Note that \eqref{intermbd} remains true with $ e^{\pm i \psi} $ replaced by $ 2^{-k} \nabla e^{\pm i \psi} $, because \eqref{Nlemmaest} remains true, which concludes the proof of \eqref{Nspacemap2}. To see this, one writes $ 2^{-k'} \nabla e^{\pm i \psi}_{k'}=L e^{\pm i \psi}_{k'} $ where $ L $ is disposable and uses translation invariance and \eqref{Nlemmaest}.
\pfstep{Step~2} To prove \eqref{Nspacemap3}, since that operator is self-adjoint, we prove that it is bounded from $ \bar{N}_{k} ^* \to \varepsilon^{\frac{1}{2}} \bar{N}_{k} ^* $, where $ \bar{N}_{k} ^* \simeq L^{\infty}L^2 \cap \bar{X}_{\infty}^{\frac{1}{2}} $. The $ L^{\infty}L^2 $ mapping follows from Prop. \ref{fixedtime4}, so it remains to prove
$$ 2^{\frac{j}{2}} \vn{\bar{Q}_j \tilde{P}_k [ e_{< k}^{-i \psi} (t,x,D) e_{< k}^{i \psi} (D,s,y)-I ] F_k}_{L^2_{t,x}} \lesssim \varepsilon^{\frac{1}{2}} \vn{F_k}_{L^{\infty}L^2 \cap \bar{X}_{\infty}^{\frac{1}{2}}} $$
For $ \bar{Q}_{>j-c} F_k $ we can discard $ \bar{Q}_j \tilde{P}_k $ and since $ \vn{\bar{Q}_{>j-c} F_k}_{L^2_{t,x}} \lesssim 2^{-j/2} \vn{F_k}_{\bar{X}_{\infty}^{\frac{1}{2}}} $, the bound follows for this component from Prop. \ref{fixedtime4} by integration.
For $ \bar{Q}_{\leq j-c} F_k $ the claim follows by adding the following
$$ 2^{\frac{j}{2}} \bar{Q}_j \tilde{P}_k [ e_{< k}^{-i \psi} -e_{< j}^{-i \psi} ](t,x,D) e_{< k}^{i \psi}(D,s,y) \bar{Q}_{\leq j-c} : \bar{N}_{k} ^* \to \varepsilon L^2_{t,x} $$
$$ 2^{\frac{j}{2}} \bar{Q}_j \tilde{P}_k e_{< j}^{-i \psi}(t,x,D) [ e_{< k}^{i \psi} -e_{< j}^{i \psi} ](D,s,y) \bar{Q}_{\leq j-c} : \bar{N}_{k} ^* \to \varepsilon L^2_{t,x} $$
since
$$ \bar{Q}_j \tilde{P}_k I \bar{Q}_{\leq j-c}=0, \qquad \bar{Q}_j \tilde{P}_k e_{< j}^{-i \psi} e_{< j}^{i \psi} \bar{Q}_{\leq j-c}=0. $$
These mappings follow from \eqref{Nlemmabd}, from \eqref{Nspacemap1} for $ \bar{N}_k^* $, and from Prop. \ref{fixedtime4}, after writing $ \bar{Q}_j \tilde{P}_k e_{< j}^{-i \psi}=\bar{Q}_j \tilde{P}_k e_{< j}^{-i \psi} \bar{Q}_{[j-5,j+5]} $.
\end{proof}
\
\subsection{Proof of the conjugation bound \eqref{conj}} \
\
In general, for pseudodifferential operators one has the composition property $ a(x,D) b(x,D)=c(x,D) $ where, in an asymptotic sense
$$ c(x,\xi) \sim \sum_{\alpha} \frac{1}{\alpha !} \partial_{\xi}^{\alpha} a(x,\xi) D_x^{\alpha} b(x,\xi).
$$
In the present case this formula will be exact, as seen by differentiating under the integral in \eqref{left:quant}.
By definition \eqref{left:quant}, the symbol of $ e_{< k}^{-i \psi_{\pm}^k} (t,x,D) \Box_m $ is
\begin{equation} e_{< k}^{-i \psi_{\pm}^k(t,x,\xi)} ( \partial_t^2+ \vm{\xi}^2+1). \label{symb:01} \end{equation}
By differentiating \eqref{left:quant}, we see that the symbol of $ \Box_m e_{< k}^{-i \psi_{\pm}^k} (t,x,D) $ is
\begin{equation} e_{< k}^{-i \psi_{\pm}^k} ( \partial_t^2+ \vm{\xi}^2+1) + \Box e_{< k}^{-i \psi_{\pm}^k} + 2 \left( \partial_t e_{< k}^{-i \psi_{\pm}^k} \partial_t - i (\nabla e_{< k}^{-i \psi_{\pm}^k}) \cdot \xi \rpr \label{symb:02} \end{equation}
while the symbol of the operator $ 2i ( A_{< k} \cdot \nabla) e_{< k}^{-i \psi_{\pm}^k} (t,x,D) $ is
\begin{equation} -2 e_{< k}^{-i \psi_{\pm}^k (t,x,\xi)} A_{< k}(t,x) \cdot \xi+ 2i \nabla e_{< k}^{-i \psi_{\pm}^k (t,x,\xi)} \cdot A_{< k}(t,x) \label{symb:03} \end{equation}
Now, the inequality \eqref{conj} follows from the following proposition.
\begin{proposition}
Denoting $ \psi= \psi_{\pm}^k $, we can decompose
$$ e_{< k}^{-i \psi_{\pm}^k} (t,x,D) \Box_m - \Box_{m}^{A_{< k}} e_{< k}^{-i \psi_{\pm}^k} (t,x,D)=\sum_{i=0}^{4} F_i(t,x,D) $$
where
\begin{align*}
F_0(t,x,\xi) :=& 2 \left[ ((\pm \jb{\xi} \partial_t -\xi \cdot \nabla ) \psi(t,x,\xi) + A_{< k}(t,x) \cdot \xi ) e^{-i \psi(t,x,\xi)} \right]_{< k} \\
F_1(t,x,\xi) :=& - \Box e_{< k}^{-i \psi(t,x,\xi)}\\
F_2(t,x,\xi) :=& 2i \nabla e_{< k}^{-i \psi(t,x,\xi)} \cdot A_{< k}(t,x) \\
F_3(t,x,\xi) :=& 2i^{-1} \partial_t e_{< k}^{-i \psi(t,x,\xi)} (i \partial_t \pm \jb{\xi}) \\
F_4(t,x,\xi) :=& 2 \left[ \left( A_{< k}(t,x) e^{-i \psi(t,x,\xi)} \rpr_{< k} - A_{< k}(t,x) e_{< k}^{-i \psi(t,x,\xi)} \right] \cdot \xi
\end{align*}
and for all $ 0 \leq i \leq 4 $ we have
\begin{equation} \label{cjglm} \vn{F_i(t,x,D) u_k}_{\bar{N}_{k}} \lesssim \varepsilon \vn{u_k}_{L^{\infty} H^1}+ \varepsilon 2^k \vn{(i \partial_t \pm \jb{D}) u_k}_{\bar{N}_k} \end{equation}
\end{proposition}
\begin{proof} The decomposition follows from \eqref{symb:01}-\eqref{symb:03} and basic manipulations. We proceed to the proof of \eqref{cjglm}. We will make use of the bound
$$ \vn{u_k}_{\bar{N}_{k}^*} \lesssim \vn{u_k}_{L^{\infty} L^2}+ \vn{(i \partial_t \pm \jb{D}) u_k}_{\bar{N}_k}, $$
for which we refer to the proof of Lemma \ref{waves}. Recall that we identify $ \bar{N}_k^* \simeq L^{\infty} L^2 \cap \bar{X}^{\frac{1}{2}}_{\infty} $.
\pfstep{Step~1}[The main term $F_0$] Recall the identity \eqref{nullvf} and the definitions \eqref{defn1}, \eqref{defn2}. For $ k=0 $ the term $ F_0(t,x,\xi) $ vanishes. Now assume $ k \geq 1 $ and write
$$ F_0(t,x,\xi)=2 ( ( \sum_{k_1<k-c} \Pi^{\omega}_{\leq \delta(k_1-k)} A_{k_1} \cdot \xi )e^{-i \psi(t,x,\xi)} )_{< k} = 2 F'(t,x,\xi)_{< k} $$
where
$$ F'(t,x,\xi)= a(t,x,\xi) e^{-i \psi(t,x,\xi)}_{< k}, \qquad a(t,x,\xi) \vcentcolon= \sum_{k_1<k-c} \Pi^{\omega}_{\leq \delta(k_1-k)} A_{k_1}(t,x) \cdot \xi $$
By \eqref{spec_decomp3} we have
$$ \vn{ F'(t,x,D)- a(t,x,D) e^{-i \psi}_{< k}(t,x,D)}_{L^{\infty}L^2 \to L^1 L^2} \lesssim \vn{\nabla_\xi a}_{D_k^1L^{2}(L^\infty)} \vn{\nabla_x e^{-i \psi}_{< k}(t,x,D)}_{L^{\infty} (L^2)\to L^{2}(L^2)}
$$
By Lemma \ref{decomp_lemma}, \eqref{decomp3} and Lemma \ref{loczsymb} we have
$$ \vn{(\nabla_x \psi e^{-i \psi})_{< k} (t,x,D)}_{L^{\infty} (L^2)\to L^{2}(L^2)} \lesssim \vn{\nabla_{x} \psi^{<k}_{\pm}}_{D_k(L^2 L^{\infty})} \lesssim 2^{\frac{k}{2}} \varepsilon$$
Summing over \eqref{decomp5}, we get $ \vn{\nabla_\xi a}_{D_k^1L^{2}(L^\infty)} \lesssim 2^{\frac{k}{2}} \varepsilon $. Thus
$$ F'(t,x,D)- a(t,x,D) e^{-i \psi}_{< k}(t,x,D) : \bar{N}_{k}^* \to 2^k \varepsilon \bar{N}_k $$
and it remains to prove
$$ a(t,x,D) e^{-i \psi}_{< k}(t,x,D) : \bar{N}_{k}^* \to 2^k \varepsilon \bar{N}_k $$
By Proposition \ref{Nspacemapprop}, $ e^{-i \psi}_{< k}(t,x,D) $ is bounded on $ \bar{N}_{k}^* $.
Assume $ -k \leq \delta (k_1-k) $ (the case $ \delta (k_1-k) \leq -k $ is analogous). We decompose
$$ a(t,x,\xi)=\sum_{k_1 <k-c} \sum_{\theta\in[2^{-k},2^{\delta (k_1-k)}]} a_{k_1}^{\theta} (t,x,\xi), $$
$$ a_{k_1}^{2^{-k}} (t,x,\xi) \vcentcolon= \Pi^{\omega}_{\leq 2^{-k}} A_{k_1}(t,x) \cdot \xi, \qquad a_{k_1}^{\theta} (t,x,\xi) \vcentcolon= \Pi^{\omega}_{\theta} A_{k_1}(t,x) \cdot \xi \quad ( \theta > 2^{-k}), $$
and it remains to prove
$$ \vn{ a_{k_1}^{\theta} (t,x,D) v_k}_{\bar{N}_k} \lesssim \theta^{\frac{1}{2}} \varepsilon 2^k \vn{v_k}_{\bar{N}_k^*}. $$
for all $ \theta=2^l, \ l \geq -k $. First, using \eqref{decomp6} we have
$$ \vn{a_{k_1}^{\theta} (t,x,D) \bar{Q}_{>k_1+ 2l-c} v_k}_{L^1 L^2} \lesssim \vn{a_{k_1}^{\theta}}_{D_k L^2 L^{\infty}} \vn{\bar{Q}_{>k_1+ 2l-c} v_k}_{L^2_{t,x}} \lesssim \theta^{\frac{1}{2}} \varepsilon 2^k \vn{v_k}_{\bar{N}_k^*} $$
Then, denoting $ f(t,x)=a_{k_1}^{\theta} (t,x,D) \bar{Q}_{<k_1+ 2l-c} v_k $ we have
$$ \vn{ f}_{L^2_{t,x}} \lesssim \vn{a_{k_1}^{\theta}}_{D_k L^2 L^{\infty}} \vn{\bar{Q}_{<k_1+ 2l-c} v_k}_{L^{\infty} L^2} \lesssim 2^{\frac{3}{2}l}2^{\frac{k_1}{2}} \varepsilon 2^k \vn{v_k}_{\bar{N}_k^*} $$
For each $ \xi $, the term $ \bar{Q}_j[ \Pi^{\omega}_{\theta} A_{k_1}(t,x) \xi e^{i x \xi} \bar{Q}_{<k_1+ 2l-c} \hat{v_k}(t,\xi)] $ is non-zero only for $ j=k_1+2l+O(1) $ (by Remark \ref{rk:geom:cone} of Lemma \ref{geom:cone}). Thus,
$$ \vn{f}_{\bar{N}_k} \leq \vn{f}_{\bar{X}_1^{-1/2}} \lesssim \sum_{ j=k_1+2l+O(1)} \vn{\bar{Q}_j f}_{L^2_{t,x}} 2^{-\frac{j}{2}} \lesssim \theta^{\frac{1}{2}} \varepsilon 2^k \vn{v_k}_{\bar{N}_k^*} $$
\pfstep{Step~2} [The terms $ F_1 $ and $ F_2 $] Since $ \Box_{t,x} \psi(t,x,\xi)=0 $ we have
\begin{align*}
F_1(t,x,\xi)= &\left[ ( \vm{\partial_t \psi (t,x,\xi)}^2 - \vm{\nabla \psi (t,x,\xi)}^2 ) e^{-i \psi (t,x,\xi)} \right]_{< k}, \\
F_2(t,x,\xi)=& 2i A^j_{< k}(t,x) \left( \partial_j \psi (t,x,\xi) e^{-i \psi (t,x,\xi)} \rpr_{< k}
\end{align*}
By Lemma \ref{decomp_lemma} and \eqref{decomp3} we have
\begin{align*}
(\partial_{j} \psi e^{-i \psi}) (t,x,D) & : L^{\infty} L^2 \to \varepsilon 2^{\frac{k}{2}} L^2 L^2 \\
( \vm{\partial_{\alpha} \psi}^2 e^{-i \psi}) (t,x,D) & : L^{\infty} L^2 \to \varepsilon^2 2^k L^1 L^2
\end{align*}
By Lemma \ref{loczsymb} the same mappings hold for the $ < k $ localized symbols, which proves \eqref{cjglm} for $ F_1 $, while for $ F_2 $ we further apply H\"older's inequality together with $ \vn{A_{< k}}_{L^2 L^{\infty}} \lesssim 2^{k/2} \varepsilon $.
\pfstep{Step~3} [The term $F_3$] The bound follows by using \eqref{Nspacemap2} to dispose of
$$ 2^{-k} \varepsilon^{-1} \partial_t e_{< k}^{-i \psi}(t,x,D). $$
\pfstep{Step~4}[The term $ F_4$] Using Lemma \ref{comm_id} we write
$$ F_4(t,x,\xi)=2^{-k} \xi_j L( \nabla_{t,x}A^j_{< k}(t,x), e^{-i \psi (t,x,\xi)}) $$
As in Lemma \ref{loczsymb}, by translation invariance it suffices to prove
$$ 2^{-k} \vn{ \nabla_{t,x}A^j_{< k} e^{-i \psi} (t,x,D) \partial_j u_k}_{\bar{N}_k} \lesssim \varepsilon 2^k \vn{e^{-i \psi} (t,x,D) u_k}_{\bar{N}_k^*} \lesssim \varepsilon 2^k \vn{ u_k}_{\bar{N}_k^*} $$
which follows from \eqref{est:phi2:freqAx} (observe that the $ \mathcal H_{k'}^* $ term vanishes when $ \Box A^j=0 $, in which case the $ \bar{N}_{k'}^* $ norm of $ \phi $ suffices; one uses the derivative on $ A^j $ to carry out the $ k' $ summation).
\end{proof}
\subsection{Proof of the $ \bar{S}_k $ bound \eqref{renbd3}} \
\
We begin by stating a simple lemma that provides bounds for localized symbols.
\begin{lemma} \label{lemma:locz:symb} Let $ X $ be a translation-invariant space of functions defined on $ \mathbb{R}^{d+1} $. Let $ P $ be a bounded Fourier multiplier. Suppose we have the bounded map
\begin{equation} \label{symb:X} e^{-i \psi}(t,x,D) e^{\pm i t \jb{D}} P: L^2_x \to X. \end{equation}
Then, uniformly in $ h $, we also have the bounded map for localized symbols:
\begin{equation} \label{locz:X} e^{-i \psi}_{<h}(t,x,D) e^{\pm i t \jb{D}} P: L^2_x \to X. \end{equation}
\end{lemma}
\begin{proof} Recalling \eqref{averaging}, for $ u_0 \in L^2_x $ we write
$$ e^{- i \psi}_{<h}(t,x,D) e^{\pm i t \jb{D}} P u_0 =\int_{\mathbb{R}^{d+1}} m_h(s,y) e^{-i \psi}(t+s,x+y,D) e^{\pm i(t+s) \jb{D}} P u_{s,y} \,\mathrm{d} s \,\mathrm{d} y $$
where $ \hat{u}_{s,y}(\xi)=e^{\mp i s \jb{\xi}} e^{-i y \xi} \hat{u}_0(\xi) $.
By Minkowski's inequality, translation invariance of $ X $, \eqref{symb:X} and the bound $ \vn{u_{s,y}}_{L^2_x} \leq \vn{u_0}_{L^2_x} $ we obtain \eqref{locz:X}.
\end{proof}
We will apply this lemma for $ X $ taking the various norms that define $ \bar{S}_k $.
The next lemma will be used to reduce estimates to the case of free waves.
\begin{lemma} \label{waves}
Let $ k \geq 0 $ and $ X $ be a space of functions on $ \mathbb{R}^{d+1} $ with Fourier support in $ \{ \jb{\xi} \simeq 2^k \} $ (or a subset of it, such as a $ 2^{k'} \times ( 2^{k' +l'} )^3 $ box) such that
$$ \vn{e^{i t \sigma} f}_X \lesssim \vn{f}_X, \quad \forall \sigma \in \mathbb{R} $$
$$ \vn{1_{t>s} f}_X \lesssim \vn{f}_X, \quad \forall s \in \mathbb{R} $$
\begin{equation} \label{freesolren} \vn{e^{-i \psi}_{<h} (t,x,D) e^{\pm i t \jb{D}} u_0}_X \lesssim C_1 \vn{u_0}_{L^2} . \end{equation}
hold for all $ f, u_0 $ and both signs $ \pm $. Then, we have
\begin{equation} \label{est:waves}
2^k \vn{e^{-i \psi}_{<h} (t,x,D) u}_X \lesssim C_1( \vn{u[0]}_{H^1 \times L^2}+ \vn{\Box_m u}_{\bar{N}_k} ) \end{equation}
If we only assume that \eqref{freesolren} holds for one of the signs $ \pm $, then \eqref{est:waves} still holds for functions $ u $ with Fourier support in $ \{\pm \tau \geq 0 \} $.
\end{lemma}
\begin{proof} We decompose $ \Box_m u=F^1+F^2 $ such that $ \vn{\Box_m u}_{\bar{N}_k} \simeq \vn{F^1}_{L^1 L^2}+\vn{F^2}_{\bar{X}_1^{-\frac{1}{2}}} $. By \eqref{freesolren} we can subtract free solutions from $ u $ and so we may assume that $ u[0]=(0,0) $. We may also assume that $ F^2 $ is modulation-localized to $ \vm{\tau-\jb{\xi}} \simeq 2^j,\ \tau \geq 0 $. We define $ v=\frac{1}{\Box_m}F^2 $ and write $ u=u^1+u^2 $ where $ u^1 $ is the Duhamel term
$$ u^1(t)=\int_{\mathbb{R}} \frac{\sin( (t-s) \jb{D})}{\jb{D}} 1_{t>s} F^1(s) \,\mathrm{d} s - \sum_{\pm} \pm e^{\pm it \jb{D}} \int_{-\infty}^0 e^{\mp is \jb{D}} \frac{F^1(s)}{2i\jb{D}} \,\mathrm{d} s $$
$$ \text{and} \qquad u^2=v-e^{it \jb{D}} w^1-e^{-i t \jb{D}} w^2 $$
so that $ \Box_m u^2 =0 $ and $ w^1,w^2 $ are chosen such that $ u^2[0]=(0,0) $.
For the second part of $ u^1 $ we use \eqref{freesolren} together with
$$ \vn{\int_{-\infty}^0 e^{\mp is \jb{D}} \frac{F^1(s)}{2i\jb{D}} \,\mathrm{d} s}_{L^2} \leq \int_{-\infty}^0 \vn{e^{\mp is \jb{D}} \frac{F^1(s)}{2i\jb{D}}}_{L^2} \,\mathrm{d} s \lesssim 2^{-k} \vn{F^1(s)}_{L^1 L^2}. $$
For the first part of $ u^1 $ we again write $ \sin( (t-s) \jb{D}) $ in terms of $ e^{\pm i (t-s) \jb{D}} $, and
$$ \vn{e^{-i \psi_{k,\pm}}_{<h} \int_{\mathbb{R}} \frac{e^{\pm i (t-s) \jb{D}}}{\jb{D}} 1_{t>s} F^1(s) \,\mathrm{d} s}_{X} \leq \int_{\mathbb{R}} \vn{1_{t>s} e^{-i \psi_{k,\pm}}_{<h} e^{\pm i (t-s) \jb{D}} \frac{F^1(s)}{\jb{D}} }_X \,\mathrm{d} s $$
$$ \lesssim 2^{-k} C_1 \int_{\mathbb{R}} \vn{e^{\mp is \jb{D}} F^1(s)}_{L^2} \,\mathrm{d} s \leq 2^{-k} C_1 \vn{F^1(s)}_{L^1 L^2}. $$
Now we turn to $ u^2 $. For $ w^1,w^2 $ we use \eqref{freesolren} and, using Lemma \ref{Sobolev_lemma},
$$ \vn{w^i}_{L^2} \lesssim \vn{(v,\frac{\partial_t}{\jb{D}}v)}_{L^{\infty}L^2} \lesssim 2^{\frac{j}{2}} \vn{(\frac{1}{\Box_m},\frac{i\partial_t- \jb{D}}{\jb{D} \Box_m}) F^2}_{L^2_{t,x}} \lesssim 2^{\frac{-j}{2}}2^{-k} \vn{F^2}_{L^2_{t,x}} $$
Next, we write $ \tau=\rho+\jb{\xi}$ in the Fourier inversion formula
$$ v(t)=\int e^{i t \tau+ ix \xi} \mathcal{F}v (\tau, \xi) \,\mathrm{d} \xi \,\mathrm{d} \tau=\int_{\vm{\rho} \simeq 2^j} e^{i t \rho} e^{i t \jb{D}} \phi_{\rho} \,\mathrm{d} \rho $$
for $ \hat{\phi_{\rho}}(\xi)=\mathcal{F}v (\rho+\jb{\xi}, \xi) $. Then
$$ \vn{e^{-i \psi_{k,\pm}}_{<h} v}_{X} \lesssim \int_{\vm{\rho} \simeq 2^j} \vn{ e^{i t \rho} e^{-i \psi_{k,\pm}}_{<h}(t,x,D) e^{i t \jb{D}} \phi_{\rho} }_X \,\mathrm{d} \rho \lesssim C_1 \int_{\vm{\rho} \simeq 2^j} \vn{\phi_{\rho} }_{L^2_x} \,\mathrm{d} \rho $$
By Cauchy-Schwarz we bound this by $ 2^{\frac{j}{2}} C_1 \vn{v}_{L^2_{t,x}} \lesssim 2^{\frac{-j}{2}}2^{-k} C_1 \vn{F^2}_{L^2_{t,x}} $.
If we only assume that \eqref{freesolren} holds for one of the signs $ \pm $, then we have the following variant
$$ \vn{e^{-i \psi}_{<h} (t,x,D) u}_X \lesssim C_1 ( \vn{u(0)}_{L^2}+\vn{(i \partial_t \pm \jb{D})u}_{\bar{N}_k}) $$
Now the Duhamel term is expressed in terms of one of the $ e^{\pm i t\jb{D}} $. For functions with Fourier support in $ \{\pm \tau \geq 0 \} $ we have $ \vn{(i \partial_t \pm \jb{D})u}_{\bar{N}_k} \simeq 2^{-k} \vn{\Box_m u}_{\bar{N}_k} $. \end{proof}
Now we are ready to begin the proof of \eqref{renbd3}. We will implicitly use Prop. \ref{Nk:orthog}.
For brevity, we drop the $ k $ and $ \pm $ subscripts and denote $ \psi=\psi^{k}_{\pm} $.
\subsubsection{\bf The Strichartz norms}
By Lemma \ref{waves}, the bound for $ \bar{S}_k^{Str} $ reduces to
\begin{lemma}
For all $ k \geq 0 $ we have
$$ \vn{e^{-i \psi}_{<k} (t,x,D) e^{\pm i t \jb{D}} v_k}_{\bar{S}_k^{Str}} \lesssim \vn{v_k}_{L^2_x}
$$
\end{lemma}
\begin{proof} Using Lemma \ref{lemma:locz:symb} this bound follows from
\begin{equation} e^{-i \psi}(t,x,D) e^{\pm i t \jb{D}} : \bar{P}_k L^2_x \to \bar{S}_k^{Str}. \label{reducedStr} \end{equation}
We use the result of Keel--Tao on Strichartz estimates from \cite{Keel_endpointstrichartz}. As noticed in that paper (see sec. 6 and the end of sec. 5; see also \cite[sec. 5]{ShSt}), the $ L^2 L^r $ estimate also holds with $ L^r $ replaced by the Lorentz space $ L^{r,2} $. We need this only when $ d=4 $ for the $ L^2 L^{4,2} $ norm in \eqref{Str:KG}.
By change of variable, we rescale at frequency $ 2^0 $:
$$ U(t) \vcentcolon= e^{- i \psi(\cdot/2^k,\cdot/2^k, 2^k \cdot)} (t,x,D) e^{\pm i t \jb{D}_k}$$
The $ L^2_x \to L^2_x $ bound follows from Prop. \ref{fixedtimeprop}. The $ L^1 \to L^{\infty} $ bound for $ U(t) U(s)^* $ follows from \eqref{dispestt1} for $ S_k^{Str,W} $ and from \eqref{dispestt12} for the other $ \bar{S}_k^{Str} $ norms in \eqref{Str:KG} when $ d=4$.
\end{proof}
\subsubsection{\bf The $ \bar{X}_{\infty}^{\frac{1}{2}} $ norms.} For any $ j \in \mathbb{Z} $ we show
\begin{equation}
2^{\frac{1}{2}j} \vn{\bar{Q}_j e^{-i \psi}_{< k}(t,x,D) \phi_k}_{L^2_{t,x}} \lesssim \vn{\phi_k}_{L^{\infty}(H^1 \times L^2)} + \vn{\Box_m \phi_k}_{\bar{N}_k}.
\end{equation}
We separate
$$ e^{-i \psi}_{< k}=e^{-i \psi}_{< \min(j,k)}+ \sum_{k'+C \in [j,k]} e^{-i \psi}_{k'} $$
For the first term we write
$$ \bar{Q}_j e^{-i \psi}_{< \min(j,k)} \phi_k= \bar{Q}_j e^{-i \psi}_{< \min(j,k)} \bar{Q}_{[j-1,j+1]} \phi_k $$
Then we discard $ \bar{Q}_j e^{-i \psi}_{< \min(j,k)} $ and the estimate becomes trivial. The second term follows by summing over \eqref{Nlemmabd}.
\subsubsection{\bf The $ S_{box(k')} $ norms in \eqref{Sbar0}, $ k =0 $} For $ k'<0 $ we prove
$$ 2^{-\sigma k'} ( \sum_{ C_{k'}} \vn{\bar{Q}_{<k'}^{\pm} P_{C_{k'}} e_{<0}^{-i \psi_{\pm}^0} (t,x,D) \phi}_{L^2 L^{\infty}}^2)^{1/2} \lesssim \vn{ (\phi,\partial_t \phi)(0)}_{L^2_x}+\vn{ \Box_m \phi}_{\bar{N}_0} $$
We may assume $ \phi $ is Fourier supported in $ \pm \tau \geq 0 $. We split
$$ e^{-i \psi_{\pm}^0}_{<0}= (e^{-i \psi_{\pm}^0}_{<0}-e^{-i \psi_{\pm}^0}_{<k'})+ e^{-i \psi_{\pm}^0}_{<k'} $$
The estimate for the first term follows from Prop. \ref{Xembedding} and Cor. \ref{coremb}. For the second term we write
$$ P_{C_{k'}} e^{-i \psi_{\pm}^0}_{<k'}=P_{C_{k'}} e^{-i \psi_{\pm}^0}_{<k'} \tilde{P}_{C_{k'}}. $$
Then we can discard $ \bar{Q}_{<k'}^{\pm} P_{C_{k'}} $ and prove
$$ 2^{-\sigma k'} \vn{ e^{-i \psi_{\pm}^0}_{<k'} (t,x,D) \tilde{P}_{C_{k'}} \phi }_{L^2 L^{\infty}} \lesssim \vn{\tilde{P}_{C_{k'}} (\phi,\partial_t \phi)(0)}_{L^2_x}+\vn{\tilde{P}_{C_{k'}} \Box_m \phi}_{\bar{N}_0} $$
By Lemma \ref{waves}, this reduces to
$$ 2^{-\sigma k'} e^{-i \psi_{\pm}^0}_{<k'}(t,x,D) e^{\pm i t \jb{D}} \tilde{P}_{C_{k'}} \bar{P}_{0} :L^2_x \to L^2 L^{\infty}
$$
which follows from Corollary \ref{Cor:L2Linf} using Lemma \ref{lemma:locz:symb}.
\subsubsection{\bf The square summed $ \bar{S}_k^{\omega \pm}(l) $ norms, $ k \geq 1 $, first part} For any fixed $ l<0 $ we split
$$ e^{-i \psi}_{<k}= (e^{-i \psi}_{<k}-e^{-i \psi}_{<k+2l})+ e^{-i \psi}_{<k+2l}. $$
Here we treat the first term, while the second one is considered below. The bound
$$ 2^k ( \sum_{\omega} \vn{P_l^{\omega} \bar{Q}_{<k+2l}^{\pm} (e^{-i \psi}_{<k}-e^{-i \psi}_{<k+2l}) \phi}_{\bar{S}_k^{\omega,\pm}(l)}^2 )^{\frac{1}{2}} \lesssim \vn{ \nabla_{t,x} \phi(0)}_{L^2_x}+\vn{\Box_m \phi}_{\bar{N}_k} $$
follows from Prop. \ref{Xembedding} and Cor. \ref{coremb}.
\subsubsection{\bf The square-summed $ L^2 L^{\infty} $ and $ \bar{S}_k^{Str} $ norms, $ k \geq 1 $}
Let $ l<0 $. It remains to consider $ e^{-i \psi}_{<k+2l} $. We fix $ \omega $; the estimate we need boils down to square-summing the following over $ \omega $, after taking the supremum over $ k'\leq k, \ l'<0 $ with $ k+2l \leq k'+l' \leq k+l $:
$$ 2^{-\frac{k}{2}-\frac{d-2}{2}k'-\frac{d-3}{2}l' } ( \sum_{\mathcal C=\mathcal C_{k'}(l')} \vn{P_{\mathcal C} P_l^{\omega} \bar{Q}_{<k+2l}^{\pm} e^{-i \psi}_{<k+2l} \phi}_{L^2L^{\infty}}^2 )^{\frac{1}{2}} \lesssim \vn{\tilde{P}_l^{\omega} \nabla_{t,x} \phi(0)}_{L^2_x}+\vn{\tilde{P}_l^{\omega} \Box_m \phi}_{\bar{N}_k} $$
Fix $ \mathcal C=\mathcal C_{k'}(l') $. Since $ k+2l \leq k'+l' $, one can write
\begin{equation} P_{\mathcal C} e^{-i \psi}_{<k+2l}(t,x,D) \phi= P_{\mathcal C} e^{-i \psi}_{<k+2l}(t,x,D) \tilde{P}_{\mathcal C} \phi. \label{loczcomuting} \end{equation}
Then one can discard $ P_{\mathcal C} P_l^{\omega} \bar{Q}_{<k+2l} $ and prove
$$ 2^{-\frac{k}{2}-\frac{d-2}{2}k'-\frac{d-3}{2}l' } \vn{e^{-i \psi}_{<k+2l}(t,x,D) \tilde{P}_{\mathcal C} \phi }_{L^2 L^{\infty}}\lesssim \vn{\tilde{P}_{\mathcal C} \nabla_{t,x} \phi(0)}_{L^2_x}+ \vn{\tilde{P}_{\mathcal C} \Box_m \phi}_{\bar{N}_k} $$
By Lemma \ref{waves}, this reduces to
\begin{equation} e^{-i \psi}_{<k+2l}(t,x,D) e^{\pm i t \jb{D}} \tilde{P}_{\mathcal C} :L^2_x \to 2^{\frac{k}{2}+\frac{d-2}{2}k'+\frac{d-3}{2}l' } L^2 L^{\infty} \label{red:L2Linf}
\end{equation}
which follows by Lemma \ref{lemma:locz:symb} from Corollary \ref{Cor:L2Linf}.
The same argument applies to $ \bar{S}_k^{Str} $ except that one uses \eqref{reducedStr} and Lemma \ref{lemma:locz:symb} instead of \eqref{red:L2Linf}.
\subsubsection{\bf The PW norms ($ d=4, k \geq 1$)} We fix $ l, -k \leq l', k', \omega, \mathcal C=\mathcal C_{k'}(l') $ as before and use \eqref{loczcomuting}. We discard $ P_{\mathcal C} P_l^{\omega} \bar{Q}^{\pm}_{<k+2l} $ and prove
$$ 2^{-\frac{3}{2}(k'+l')+k} \vn{e^{-i \psi}_{<k+2l}(t,x,D) \bar{Q}^{\pm}_{<k+2l} \tilde{P}_{\mathcal C} \phi}_{PW^{\pm}_{\mathcal C}} \lesssim \vn{\tilde{P}_{\mathcal C} \nabla_{t,x} \phi(0)}_{L^2_x}+ \vn{\tilde{P}_{\mathcal C} \Box_m \phi}_{\bar{N}_k}
$$
Let us assume $ \pm=+ $. By Lemma \ref{waves}, we reduce to
\begin{equation} \label{PWwaves} \vn{e^{-i \psi}_{<k+2l}(t,x,D) e^{it \jb{D}} \tilde{P}_{\mathcal C} u_k}_{PW^{\pm}_{\mathcal C} } \lesssim 2^{\frac{3}{2}(k'+l')} \vn{\tilde{P}_{\mathcal C} u_k}_{L^2_x} \end{equation}
From Corollary \ref{corPW} and Lemma \ref{lemma:locz:symb} we deduce that
$$ 2^{-\frac{3}{2}(k'-k)} e^{- i \psi}_{<k+2l} (t,x,D) e^{i t \jb{D}} P_k P_{C_{k'}^i(-k)} : L^2_x \to L^2_{t_{\omega_i, \lambda}} L^{\infty}_{x_{\omega_i,\lambda}} $$
holds for $ C_{k'}^i(-k) \subset \mathcal C $ where $ \omega_i $ is the direction of the center of $ C_{k'}^i(-k) $.
We can cover $ \mathcal C=\mathcal C_{k'}(l') $ by roughly $ 2^{3(l'+k)} $ boxes of size $ 2^{k'} \times ( 2^{k'-k} )^3 $:
$$ \mathcal C=\cup_{i=1}^{O(2^{3(l'+k)})} C_{k'}^i(-k). $$
Notice that $ \lambda $ can be chosen the same for all $ i $. By the definition of $ PW_{\mathcal C}^{\pm} $ \eqref{PW:norm}
$$ \text{LHS } \eqref{PWwaves} \leq \sum_i \vn{ e^{-i \psi}_{<k+2l}(t,x,D) e^{it \jb{D}} P_{C_{k'}^i(-k)} u_k}_{L^2_{t_{\omega_i,\lambda}} L^{\infty}_{x_{\omega_i,\lambda}} } \lesssim $$
$$ \lesssim 2^{\frac{3}{2}(k'-k)} \sum_i \vn{\tilde{P}_{C_{k'}^i(-k)} u_k}_{L^2_x} \lesssim 2^{\frac{3}{2}(k'-k)} 2^{\frac{3}{2}(l'+k)} ( \sum_i \vn{\tilde{P}_{C_{k'}^i(-k)} u_k}_{L^2_x}^2)^{\frac{1}{2}} \lesssim 2^{\frac{3}{2}(k'+l')} \vn{\tilde{P}_{\mathcal C} u_k}_{L^2_x} $$
where we have used Cauchy-Schwarz and orthogonality. This proves \eqref{PWwaves}.
\subsubsection{\bf The NE norms ($ d=4, k \geq 1$)} We fix $ l, -k \leq l', k', \mathcal C=\mathcal C_{k'}(l') $ as before and use \eqref{loczcomuting}. We prove
$$ 2^k \vn{P_{\mathcal C} P_l^{\omega} \bar{Q}^{\pm}_{<k+2l} e^{-i \psi}_{<k+2l} \bar{Q}^{\pm}_{<k+2l} \tilde{P}_{\mathcal C} \phi}_{NE^{\pm}_{\mathcal C}} \lesssim \vn{\tilde{P}_{\mathcal C} \nabla_{t,x} \phi(0)}_{L^2_x}+ \vn{\tilde{P}_{\mathcal C} \Box_m \phi}_{\bar{N}_k}
$$
Now we split again $ e^{-i \psi}_{<k+2l}=(e^{-i \psi}_{<k+2l}- e^{-i \psi}_{<k})+ e^{-i \psi}_{<k} $. The first term
$$ P_{\mathcal C} P_l^{\omega} \bar{Q}^{\pm}_{<k+2l} (e^{-i \psi}_{<k+2l}- e^{-i \psi}_{<k}) \bar{Q}_{<k+2l}^{\pm}\tilde{P}_{\mathcal C} \phi $$
is estimated by appropriately applying Prop. \ref{Xembedding} and Cor. \ref{coremb}.
For the second term we may discard $ P_{\mathcal C} P_l^{\omega} \bar{Q}^{\pm}_{<k+2l} $ and prove
$$ 2^k \vn{e^{-i \psi}_{<k} \bar{Q}^{\pm}_{<k+2l} \tilde{P}_{\mathcal C} \phi }_{NE^{\pm}_{\mathcal C} }\lesssim \vn{\tilde{P}_{\mathcal C} \nabla_{t,x} \phi(0)}_{L^2_x}+ \vn{\tilde{P}_{\mathcal C} \Box_m \phi}_{\bar{N}_k}. $$
This is reduced by Lemma \ref{waves} to
$$ e^{-i \psi}_{<k}(t,x,D) e^{\pm it \jb{D}} \tilde{P}_{\mathcal C} : L^2_x \to NE^{\pm}_{\mathcal C}, $$
which follows from Corollary \ref{CorNE}.
\subsection{Proof of Lemma \ref{Nspacelemma}} \
\
We follow the method from \cite{KST} based on Moser-type estimates. The more difficult case is $ d=4 $ and the argument can be simplified for $ d \geq 5 $. In the proof we will need the following lemmas.
\begin{lemma}
Let $ 1 \leq q \leq p \leq \infty $ and $ k \geq 0 $. Then for all $ j-O(1) \leq k' \leq k $ we have
\begin{align}
\label{oneestim1} \vn{e^{\pm i \psi^{k}_{<j,\pm}}_{k'} (t,x,D) \bar{P}_k}_{L^p L^2 \to L^q L^2} & \lesssim \varepsilon 2^{(\frac{1}{p}-\frac{1}{q})j} 2^{2(j-k')} \\
\label{oneestim2} \vn{e^{\pm i \psi^{k}_{\pm}}_{k'} (t,x,D) \bar{P}_k}_{L^2 L^2 \to L^{\frac{10}{7}} L^2} & \lesssim \varepsilon 2^{-\frac{1}{5}k'}
\end{align}
By duality, the same bounds hold for the right quantization.
\end{lemma}
\begin{remark} \label{rk:heuristic}
To motivate the proof, we note that applying the $ k' \, (>j) $ localization in the power series
$$ e^{i \psi_{<j}(t,x,\xi)} =1+i \psi_{<j}(t,x,\xi)+O\big(\psi_{<j}^2(t,x,\xi) \big) $$
makes the affine term $ 1+i\psi_{<j} $ vanish. For the higher-order terms H\"older's inequality applies (in the form of the decomposability Lemma \ref{decomp_lemma}).
\end{remark}
\begin{proof}
To prove \eqref{oneestim1}, let $ L_{k'} $ be a disposable multiplier in the $ (t,x) $-frequencies such that
$$ e^{\pm i \psi^{k}_{<j,\pm}}_{k'} = 2^{-2k'} L_{k'} \Delta_{t,x} e^{\pm i \psi^{k}_{<j,\pm}}=2^{-2k'} L_{k'}(- \vm{\partial_{t,x} \psi^{k}_{<j,\pm}}^2\pm i \Delta_{t,x} \psi^{k}_{<j,\pm} )e^{\pm i \psi^{k}_{<j,\pm}}. $$
We may dispose of $ L_{k'} $ by translation invariance. Then \eqref{oneestim1} follows from \eqref{decomp2}.
To prove \eqref{oneestim2} we write
$$ e^{\pm i \psi^{k}_{\pm}}_{k'}=e^{\pm i \psi^{k}_{<k'-C,\pm}}_{k'} \pm i \int_{[k'-C,k-C]} \left( \psi^{k}_{\pm,l} e^{\pm i \psi^{k}_{<l,\pm}} \rpr_{k'} \,\mathrm{d} l $$
For the first term we use \eqref{oneestim1}. For the second term, we have
$$ \vn{ \psi^{k}_{\pm,l} e^{\pm i \psi^{k}_{<l,\pm}}(t,x,D)}_{L^2 L^2 \to L^{\frac{10}{7}} L^2} \lesssim \varepsilon 2^{-\frac{l}{5}} $$
by Lemma \ref{decomp_lemma}, \eqref{decomp2} and \eqref{fixedtime1b}, from which \eqref{oneestim2} follows.
\end{proof}
\begin{lemma} For $k \geq 0,\ k \geq k' \geq j-O(1) $, $ j-C\leq l' \leq l \leq k-C $ and for both quantizations, we have:
\begin{align} \label{lemastep3}
\vn{\bar{Q}_j [ (\psi_{k'}^k e^{\pm i \psi^{k}_{< j,\pm}}_{< j})_{k'} \bar{Q}_{\prec j} G_k]}_{L^2_{t,x}} 2^{\frac{j}{2}} & \lesssim \varepsilon 2^{\frac{1}{4}(j-k')} \vn{G}_{L^{\infty} L^2} \\
\label{lemalstep}
\vn{\bar{Q}_j [ (\psi_{l}^k \psi_{l'}^k e^{\pm i \psi^{k}_{< j,\pm}}_{< j})_{k'} \bar{Q}_{\prec j} G_k] }_{L^2_{t,x}} 2^{\frac{j}{2}} & \lesssim \varepsilon^2 2^{\frac{1}{12}(j-l)} 2^{\frac{1}{6}(j-l')} \vn{G}_{L^{\infty} L^2} .
\end{align}
\end{lemma}
\begin{proof} \pfstep{Step~1} By translation invariance we may discard the outer $ k' $ localization. By Lemma \ref{geom:cone} we deduce that in \eqref{lemastep3} the contribution of $ \psi^{k,\pm}_{k',\theta} $ (which define $ \psi^{k}_{k',\pm} $ in \eqref{phase_piece}) is zero unless $ \theta \gtrsim 2^{\frac{1}{2}(j-k')} $ and $ j \geq k'-2k-C $. For these terms, from \eqref{decomp1} we get
$$ 2^{\frac{j}{2}} \sum_{\theta \gtrsim 2^{\frac{1}{2}(j-k')}} \vn{\psi^{k,\pm}_{k',\theta}}_{D_{k}^{\theta}(L^2 L^{\infty})} \lesssim \varepsilon 2^{\frac{1}{4}(j-k')} $$
from which \eqref{lemastep3} follows by Lemma \ref{decomp_lemma}. When $ k=0 $ no angular localizations are needed.
\pfstep{Step~2} Now we prove \eqref{lemalstep}. First we consider the case $ l'+c \leq l=k'+O(1) $ and define $ \theta_0 \vcentcolon= 2^{\frac{1}{2}(j-l)} $. By appropriately applying Lemma \ref{geom:cone} we deduce that the terms that contribute to \eqref{lemalstep} are $ \psi_{l}^k \psi_{l',\theta'}^k $ for $ \theta' \gtrsim \theta_0 $ and $ \psi_{l,\theta}^k \psi_{l',\theta'}^k $ for $ \theta' \ll \theta_0, \ \theta \gtrsim \theta_0 $. We use \eqref{decomp1} with $ q=3 $ for the large angle terms and with $ q=6 $ for the other term, obtaining
\begin{equation} \label{psi:angl} \sum_{ \theta' \gtrsim \theta_0 } \vn{\psi_{l}^k \psi_{l',\theta'}^k e^{i\psi} }_{L^{\infty} L^2 \to L^2_{t,x}}+ \sum_{\substack{ \theta' \ll \theta_0 \\ \theta \gtrsim \theta_0 }} \vn{\psi_{l,\theta}^k \psi_{l',\theta'}^k e^{i\psi}}_{L^{\infty} L^2 \to L^2_{t,x}} \lesssim \varepsilon^2 2^{-\frac{j}{2}} 2^{\frac{1}{12}(j-l)} 2^{\frac{1}{6}(j-l')}
\end{equation}
In the high-high case $ l=l'+O(1) \geq k' $ the same argument applies with $ \theta_0 \vcentcolon= 2^{\frac{1}{2}(j-k')}2^{k'-l} $ and \eqref{psi:angl} also follows in this case.
\end{proof}
\begin{proof}[Proof of Lemma \ref{Nspacelemma}]
For brevity, we suppress the $ k $ superscript and write $ \psi $ to denote $ \psi_{\pm}^{k} $.
\pfstep{Step~1} [The contribution of $ \bar{Q}_{>j-c}G_k $]
We use Lemma \ref{Sobolev_lemma} and \eqref{oneestim2}
$$ \vn{\bar{Q}_{j} e^{\pm i \psi}_{k'} \bar{Q}_{>j-c} G_k}_{L^2_{t,x}} \lesssim 2^{\frac{j}{5}} \vn{e^{\pm i \psi}_{k'} \bar{Q}_{>j-c} G_k}_{ L^{\frac{10}{7}} L^2} \lesssim \varepsilon 2^{\frac{j-k'}{5}} \vn{\bar{Q}_{>j-c} G_k}_{L^2_{t,x}}. $$
and the last norm is $ \lesssim 2^{-j/2} \vn{G}_{\bar{N}_k^*} $.
\pfstep{Step~2} [The contribution of $ \bar{Q}_{\prec j}G_k $]
Motivated by Remark \ref{rk:heuristic}, by iterating the fundamental theorem of calculus, we decompose the symbol
$$ e^{\pm i \psi}(t,x,\xi)=\mathcal{T}_0 \pm i \mathcal{T}_1 - \mathcal{T}_2 \mp i\mathcal{T}_3 $$
where $ \mathcal{T}_0=e^{ i \psi_{<j}} $ and
\begin{align*}
\mathcal{T}_1 &=\int_{[j-C,k-C]} \psi_{l} e^{ i \psi_{<j}} \,\mathrm{d} l ,\qquad \mathcal{T}_2 =\iint_{j-C \leq l' \leq l \leq k-C} \psi_{l} \psi_{l'} e^{ i \psi_{<j}} \,\mathrm{d} l \,\mathrm{d} l' \\
\mathcal{T}_3 &=\iiint_{j-C \leq l'' \leq l' \leq l \leq k-C} \psi_{l} \psi_{l'} \psi_{l''} e^{ i \psi_{<l''}} \,\mathrm{d} l \,\mathrm{d} l' \,\mathrm{d} l''
\end{align*}
The term $ \mathcal{T}_0 $ is estimated by \eqref{oneestim1}:
$$ \vn{(\mathcal{T}_0 )_{k'}(t,x,D) \bar{Q}_{\leq k} G_k}_{L^2_{t,x}} \lesssim \varepsilon 2^{\frac{-j}{2}} 2^{2 (j-k')} \vn{\bar{Q}_{\leq k} G_k}_{L^{\infty} L^2} $$
Next, we split $ \mathcal{T}_1= \mathcal{T}_1^1+\mathcal{T}_1^2 $ where
$$ \mathcal{T}_1^1=\int_{[j-C,k-C]} \psi_{l} e^{ i \psi_{<j}}_{<j} \,\mathrm{d} l, \qquad \mathcal{T}_1^2= \int_{[j-C,k-C]} \psi_{l} e^{ i \psi_{<j}}_{>j-c/2} \,\mathrm{d} l
$$
By applying the $ k' $ localization, the integral defining $ \mathcal{T}_1^1 $ vanishes unless $ l=k'+O(1) $, in which case we may apply \eqref{lemastep3}.
To estimate $ \mathcal{T}_1^2 $ we use Lemma \ref{decomp_lemma} with \eqref{decomp2} for $ q=6 $ and \eqref{oneestim1} with $ L^{\infty} L^2 \to L^{3} L^2 $.
We turn to $ \mathcal{T}_2 $ and separate $ e^{ i \psi_{<j}}=e^{ i \psi_{<j}}_{<j}+e^{ i \psi_{<j}}_{>j-c/2} $ as before. For the first component we use \eqref{lemalstep}. For the second, we use \eqref{decomp2} with $ q=6 $ for $ \psi_{l}, \psi_{l'} $ and \eqref{oneestim1} with $ L^{\infty} L^2 \to L^{6} L^2 $ obtaining:
\begin{align*}
& 2^{\frac{1}{2}j} \vn{\psi_{l} \psi_{l'} e^{ i \psi_{<j}}_{>j-c/2}}_{L^{\infty}L^2 \to L^2 L^2} \lesssim \varepsilon 2^{\frac{1}{6} (j-l)} 2^{\frac{1}{6} (j-l')} \qquad \qquad \text{for} \ l>k'-c \\
& 2^{\frac{1}{2}j} \vn{\psi_{l} \psi_{l'} e^{ i \psi_{<j}}_{k'} }_{L^{\infty}L^2 \to L^2 L^2} \lesssim \varepsilon 2^{\frac{1}{6} (j-l)} 2^{\frac{1}{6} (j-l')} 2^{2(j-k')} \qquad \text{for} \ l<k'-c.
\end{align*}
For $ \mathcal{T}_3 $ we use \eqref{decomp2} for $ q=6 $. When $ l<k'-C $ we use \eqref{oneestim1} with $ p=q=\infty $ and it remains to integrate
$$ 2^{\frac{1}{2}j} \vn{ \psi_{l} \psi_{l'} \psi_{l''} e^{ i \psi_{<l''}}_{k'} }_{L^{\infty}L^2 \to L^2 L^2} \lesssim \varepsilon 2^{\frac{1}{2}j} 2^{-\frac{1}{6}l} 2^{-\frac{1}{6}l'} 2^{-\frac{1}{6}l''} 2^{2(l''-k')}.
$$
On $ l \geq k'-C $ it suffices to integrate
$$ 2^{\frac{1}{2}j} \vn{ \psi_{l} \psi_{l'} \psi_{l''} e^{ i \psi_{<l''}} }_{L^{\infty}L^2 \to L^2 L^2} \lesssim \varepsilon 2^{\frac{1}{2}j} 2^{-\frac{1}{6}l} 2^{-\frac{1}{6}l'} 2^{-\frac{1}{6}l''}.
$$
\end{proof}
\nocite{*}
\bibliographystyle{abbrv}
\section{Introduction}
\label{sec:introduction}
Entropy is an information theoretic measurement of uncertainty that has found many applications in machine learning.
For example, it can be used to incentivize exploration in reinforcement learning (RL) \citep{HaarnojaTAL17/icml, HaarnojaZAL18/icml}; prevent mode-collapse of generative adversarial networks (GANs) \citep{BalajiHCF19/icml, Dieng2019pregan}; and calibrate the uncertainty of the variational distribution in approximate Bayesian inference.
However, it is in general intractable to compute the entropy of an arbitrary random variable.
In most applications, one does not actually care about the value of the entropy itself, but rather about how to manipulate and control this quantity as part of the optimization objective.
In light of this, we propose to approximately estimate the gradient of entropy so as to maximize or minimize the entropy of a data sampler.
More concretely, we approximate the gradient of the log probability density function of the data sampler.
This is sufficient since the gradient of its entropy can be shown to be the expected value of the \textit{path derivative}~\citep{roeder2017sticking}.
We can then plug in a gradient approximator to enable stochastic backpropagation.
We propose to use the \emph{denoising autoencoder} (DAE, \citet{VincentLBM08/icml}) to approximate the gradient of the log density function, which is also known as \emph{denoising score matching} \citep{vincent2011connection}.
It has been shown that the optimal reconstruction function of the DAE converges to the gradient of the log density as the noise level $\sigma$ approaches zero \citep{AlainB14/jmlr}.
In fact, such an approach has been successfully applied to recover the gradient field of the density function of high-dimensional data such as natural images \citep{song2019generative}, which convincingly shows DAEs can accurately approximate the gradient.
However, in the case of entropy maximization (or minimization),
the non-stationarity of the sampler's distribution poses a problem for optimization.
On the one hand, the log density gradient is recovered only asymptotically as $\sigma\rightarrow0$.
On the other hand, the training signal vanishes while a smaller noise perturbation is applied, which makes it hard to reduce the approximation error due to suboptimal optimization.
The fact that the sampler's distribution is changing makes it even harder to select a noise level that is sufficiently small.
Our work aims at resolving this no-win situation.
In this work, we propose the \emph{amortized residual denoising autoencoder} (AR-DAE), which is a conditional DAE of a residual form that takes in $\sigma$ as input.
We condition the DAE on $\sigma=0$ at inference time to approximate the log density gradient while sampling non-zero $\sigma$ at training, which allows us to train with $\sigma$ sampled from a distribution that covers a wide range of values.
If AR-DAE is optimal, we expect to continuously generalize to $\sigma=0$ to recover the log density gradient, which can be used as an unbiased estimate of the entropy gradient.
We perform ablation studies on the approximation error using a DAE, and show that our method provides significantly more accurate approximation than the baselines.
Finally, we apply our method to improve distribution-free inference for variational autoencoders \citep{kingma2013auto,rezende2014stochastic} and soft actor-critic \citep{HaarnojaZAL18/icml} for continuous control problems in reinforcement learning.
As these tasks are non-stationary, amortized (conditional), and highly structured, this demonstrates that AR-DAE can robustly and accurately approximate the log density gradient of non-trivial distributions given a limited computational budget.
\section{Approximate entropy gradient estimation}
\subsection{Background on tractability of entropy}
\label{para:implicit-density-models}
An \emph{implicit density model} is characterized by a data generation process \citep{%
Mohamed2016learning}.
The simplest form of an implicit density model contains a prior random variable $z\sim p(z)$, and a generator function $g:z\mapsto x$.
The likelihood of a particular realization of $x$ is \emph{implied} by the pushforward of $p(z)$ through the mapping $g$.
Unlike an \emph{explicit density model},
an implicit density model does not require a carefully designed parameterization for the density to be explicitly defined,
allowing it to approximate an arbitrary data generation process more easily. %
This comes at a price, though, since the density function of the implicit model cannot be easily computed,
which makes it hard to approximate its entropy using Monte Carlo methods.
\subsection{Denoising entropy gradient estimator}
\begin{figure}%
\centering
\begin{subfigure}[b]{0.5\textwidth}
\centering
\scalebox{0.80}{
\input{figs/explicit.tikz}
}
\caption{}
\end{subfigure}
\vspace*{-0.2cm}
\hspace*{0.1cm}
\begin{subfigure}[b]{0.5\textwidth}
\centering
\scalebox{0.80}{
\input{figs/implicit.tikz}
}
\caption{}
\end{subfigure}
\vspace*{-0.2cm}
\caption{
(a) Entropy gradient wrt parameters of an invertible generator function.
(b) Approximate entropy gradient using the proposed method.
}
\label{fig:explicit-vs-implicit}
\vspace*{-0.3cm}
\end{figure}
Let $z$ and $g$ be defined as above, and let $\theta$ be the parameters of the mapping $g$ (denoted $g_\theta$).
Most of the time, we are interested in maximizing (or minimizing) the entropy of the implicit distribution of $x=g_\theta(z)$.
For example, when the mapping $g$ is a bijection, the density of $x=g_\theta(z)$ can be decomposed using the change-of-variable density formula, so controlling the entropy of $x$ amounts to controlling the log-determinant of the Jacobian of $g_\theta$ \citep{RezendeM15/icml}, as illustrated in Figure~\ref{fig:explicit-vs-implicit}-(a).
This allows us to estimate both the entropy and its gradient.
However, for an iterative optimization algorithm such as (stochastic) gradient descent, which is commonly employed in machine learning, it is sufficient to compute the gradient of the entropy rather than the entropy itself. %
Following \citet{roeder2017sticking}, we can rewrite the entropy of $x$ by changing the variable and neglecting the score function term, which is $0$ in expectation, to get
\eq{
\label{eq:entropy-gradient}
\nabla_{\theta} H(p_{g}(x)) = -\mathbb{E}_{z} \left[ [\nabla_x \log p_{g}(x)|_{x=g_{\theta}(z)}]^{\intercal} \mathbf{J}_{\theta}g_{\theta}(z) \right] %
,
}
where $\mathbf{J}_{\theta}g_{\theta}(z)$ is the Jacobian matrix of the random sample $x=g_{\theta}(z)$ wrt to the sampler's parameters $\theta$.
See Appendix \ref{sec:appendix-entropy-gradient} for the detailed derivation.
We emphasize that this formulation is more general as it does not require $g$ to be bijective.
Equation (\ref{eq:entropy-gradient}) tells us that we can obtain an unbiased estimate of the entropy by drawing a sample of the integrand, which is the \emph{path derivative} of $z$.
The integrand requires evaluating the sample $x=g_\theta(z)$ under the gradient of its log density $\nabla_x \log p_g(x)$.
As $\log p_g(x)$ is usually intractable or simply not available, we directly approximate its gradient using a black box function.
As long as we can provide a good enough approximation to the gradient of the log density and treat it as the incoming gradient in the backward differentiation (see Figure \ref{fig:explicit-vs-implicit}-(b)), the resulting estimation of the entropy gradient is approximately unbiased.
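To make this concrete, the following PyTorch-style sketch (our own illustration; the generator \texttt{g}, the score approximator \texttt{f\_approx}, and the tensor shapes are placeholder assumptions, not code from this work) implements the idea by detaching the approximate score, so that ordinary backpropagation through the sample reproduces the path derivative in Equation (\ref{eq:entropy-gradient}):
\begin{verbatim}
import torch

def entropy_surrogate(g, f_approx, z):
    """Surrogate whose gradient wrt the parameters of g estimates
    -grad_theta H(p_g); minimizing it therefore maximizes entropy."""
    x = g(z)                      # pathwise sample; differentiable in theta
    score = f_approx(x).detach()  # approximates grad_x log p_g(x); held fixed
    return (score * x).sum(dim=1).mean()

# usage: z = torch.randn(batch, latent_dim)
#        entropy_surrogate(g, f_approx, z).backward()
\end{verbatim}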
In this work, we propose to approximate the gradient of the log density using a denoising autoencoder (DAE, \citet{VincentLBM08/icml}).
A DAE is trained by minimizing the reconstruction loss $d$ of an autoencoder $r$ with a randomly perturbed input
\begin{align*}
\mathcal{L}_{\tt{DAE}}(r)=\mathbb{E}[d(x, r(x+\epsilon))],
\end{align*}
where the expectation is taken over the random perturbation $\epsilon$ and data $x$.
\citet{AlainB14/jmlr} showed that if $d$ is the L2 loss and $\epsilon$ is a centered isotropic Gaussian random variable with variance $\sigma^2$, then under some mild regularity condition on $\log p_g$ the optimal reconstruction function satisfies
$$r^*(x)=x+\sigma^2\nabla_x \log p_g(x) + o(\sigma^2),$$
as $\sigma^2\rightarrow0$.
That is, for sufficiently small $\sigma$, we can approximate the gradient of the log density using the black box function $f_r(x):=\frac{r(x)-x}{\sigma^2}$ assuming $r\approx r^*$.
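For concreteness, here is a minimal sketch of DAE training and of the induced gradient approximator $f_r$ (assuming a simple MLP reconstruction network; the architecture and all names are our own choices, not prescribed by the method):
\begin{verbatim}
import torch
import torch.nn as nn

def make_dae(dim, hidden=256):
    # reconstruction function r: a plain MLP (hypothetical architecture)
    return nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, hidden), nn.ReLU(),
                         nn.Linear(hidden, dim))

def dae_loss(r, x, sigma):
    # L2 reconstruction of x from its Gaussian-perturbed version
    eps = sigma * torch.randn_like(x)
    return ((x - r(x + eps)) ** 2).sum(dim=1).mean()

def f_r(r, x, sigma):
    # approximate grad_x log p_g(x); numerically fragile for small sigma
    return (r(x) - x) / sigma ** 2
\end{verbatim}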
\section{Error analysis of $\nabla_x\log p_g(x)\approx f_r(x)$}
Naively using $f_r(x)$ to estimate the gradient of the entropy is problematic.
First of all, the division form of $f_r$ can lead to numerical instability and magnify the error of approximation.
This is because when the noise perturbation $\sigma$ is small, $r(x)$ will be very close to $x$ and thus both the numerator and the denominator of $f_r$ are close to zero.
Second, using the triangle inequality, we can decompose the error of the approximation $\nabla_x\log p_g(x)\approx f_r(x)$ into
\begin{align*}
||\nabla_x &\log p_g(x) - f_r(x)|| \leq \\ &\!\!\qquad\underbrace{||\nabla_x\log p_g(x) - f_{r^*}(x)||}_{\textit{asymp error}} \;\,+\;\, ||f_{r^*}(x) - f_{r}(x) ||.
\end{align*}
The first error is incurred by using the optimal DAE to approximate $\nabla_x\log p_g(x)$, which vanishes when $\sigma\rightarrow 0$.
We refer to it as the \emph{asymptotic error}.
The second term is the difference between the optimal DAE and the ``current'' reconstruction function.
Since we use a parametric family of functions (denoted by $\mathcal{F}$) to approximate $f_{r^*}$, it can be further bounded by
\begin{align*}
||f_{r^*}&(x) - f_{r}(x) || \leq \\
&\!\!\qquad\qquad\underbrace{||f_{r^*}(x) - f_{r_{\mathcal{F}}^*}(x) ||}_{\textit{param error}} \;\,+\;\, \underbrace{||f_{r_{\mathcal{F}}^*}(x) - f_{r}(x) ||}_{\textit{optim error}},
\end{align*}
where ${r_{\mathcal{F}}^*}:=\argmin_{r\in\mathcal{F}} \cL_{\tt{DAE}}(r)$ is the optimal reconstruction function within the family $\mathcal{F}$.
The first term measures how closely the family of functions $\mathcal{F}$ approximates the optimal DAE, and is referred to as the \emph{parameterization error}.
The second term reflects the suboptimality in optimizing $r$.
It can be significant especially when the distribution of $x$ is non-stationary, in which case $r$ needs to be constantly adapted.
We refer to this last error term as the \emph{optimization error}.
As we use a neural network to parameterize $r$, the parameterization error can be reduced by increasing the capacity of the network.
The optimization error is subject to the %
variance of the noise $\sigma^2$ (relative to the distribution of $x$), as it affects the magnitude of the gradient signal $\mathbb{E}[\nabla||r(x+\epsilon)-x||^2]$.
This will make it hard to design a fixed training procedure for $r$, as different values of $\sigma$ require different optimization specifications to tackle the optimization error.
\section{Achieving asymptotic optimality}
In this section, we propose
the \textbf{amortized residual DAE} (AR-DAE),
an improved method to approximate $\nabla_x\log p_g(x)$ that is designed to resolve the numerical instability issue and reduce the error of approximation.
\subsection{Amortized residual DAE}
\label{subsec:ardae}
AR-DAE (denoted $f_{ar}$) is a DAE of residual form conditioned on the magnitude of the injected noise,
minimizing the following optimization objective.
\begin{align}
\label{eq:ardae}
\cL_{\tt{ar}}\lbp f_{ar} \rbp
= \eE_{\substack{
x \sim p(x) \\
u \sim N(0, I) \\
\sigma \sim N(0, \delta^2) \\
}}
\lbs \lbV u + \sigma f_{ar}(x + \sigma u; \sigma) \rbV^2 \rbs
.
\end{align}
This objective involves three modifications to the regular training and parameterization of a DAE: \emph{residual connection}, \emph{loss rescaling}, and \emph{scale conditioning} for amortization.
\para{Residual form}
First,
we consider a residual form of DAE
(up to a scaling factor):
let
$r(x)=\sigma^2 f_{ar}(x)+x$,
then
$\nabla_x \log p_g(x)$ is approximately equal to
$$\frac{r(x)-x}{\sigma^2} = \frac{\sigma^2 f_{ar}(x)+x -x}{\sigma^2} = f_{ar}(x) \textrm{ .}$$
That is, this reparameterization allows $f_{ar}$ to directly approximate the gradient, avoiding the division that can cause numerical instability.
The residual form also has an obvious benefit of a higher capacity, as it allows the network to represent an identity mapping more easily, which is especially important when the reconstruction function is close to an identity map for small values of $\sigma$ \cite{HeZRS16/cvpr}.%
\para{Loss rescaling} To prevent the gradient signal from vanishing to $0$ too fast when $\sigma$ is arbitrarily small,
we rescale the loss $\cL_{\tt{DAE}}$ by a factor of $1/\sigma$, and since we can decouple the noise level from the isotropic Gaussian noise into $\epsilon=\sigma u$ for standard Gaussian $u$, the rescaled loss can be written as $\mathbb{E}[||\sigma f_{ar}(x+\sigma u)+u||^2]$.
We summarize the properties of the optimal DAE of the rescaled residual form in the following propositions:
\begin{restatable}{prop}{optimalres}
\label{prop:optimal_res}
Let $x$ and $u$ be distributed by $p(x)$ and $\cN(0,I)$. For $\sigma\neq0$, the minimizer of the functional $\mathbb{E}_{x,u}[||u+\sigma f(x+\sigma u)||^2]$
is almost everywhere determined by
$$f^*(x;\sigma) = \frac{-\eE_u[p(x-\sigma u)u]}{\sigma \eE_u[p(x-\sigma u)]} \textrm{ .}$$
Furthermore, if $p(x)$ and its gradient are both bounded, $f^*$ is continuous wrt $\sigma$ for all $\sigma\in\mathbb{R}\setminus0$ and $\lim_{\sigma\rightarrow0}f^*(x;\sigma) = \nabla_x \log p_g(x)$.
\end{restatable}
The above proposition studies the asymptotic behaviour of the optimal $f^*_{ar}$ as $\sigma\rightarrow0$.
Below, we show that under the same condition, $f^*_{ar}$ approaches the gradient of the log density function of a Gaussian distribution centered at the expected value of $x\sim p(x)$ as $\sigma$ is arbitrarily large.
\begin{restatable}{prop}{reslargesigma}
\label{prop:res_large_sigma}
$\lim_{\sigma\rightarrow\infty}\frac{f^*(x;\sigma)}{\nabla_x \log\cN(x;\mathbb{E}_p[X],\sigma^2I)} \rightarrow1$.
\end{restatable}
\begin{figure}
\centering
\includegraphics[width=0.23\textwidth]{figs/resdae_large_sigma.png}
\hfill
\includegraphics[width=0.23\textwidth]{figs/resdae_small_sigma.png}
\caption{Residual DAE trained with a large (left) vs small (right) $\sigma$ value. Red cross indicates the mean of the swissroll.
The arrows indicate the approximate gradient directions.}
\label{fig:quivers}
\vspace*{-0.3cm}
\end{figure}
\para{Scale conditioning} Intuitively, with larger $\sigma$ values, the perturbed data $x+\sigma u$ will more likely be ``off-manifold'', which makes it easy for the reconstruction function to point back to where most of the probability mass of the distribution of $x$ resides.
Indeed, as Proposition~\ref{prop:res_large_sigma} predicts, with larger $\sigma$ the optimal $f_{ar}^*$ tends to point to the expected value $\mathbb{E}_p[X]$, which is shown in Figure \ref{fig:quivers}-left.
With smaller values of $\sigma$, training $f_{ar}$ becomes harder, as one has to predict the vector $-u$ from $x+\sigma u$ (i.e. treating $x$ as noise and trying to recover $u$).
Formally, the training signal ($\Delta$) has a decaying rate of $\mathcal{O}(\sigma^2)$ for small $\sigma$ values, because
\begin{align*}%
\mathbb{E}_u[\Delta] &:= \mathbb{E}_u[\nabla||u+\sigma f(x+\sigma u)||^2] \nonumber \\&= 2\sigma^2 \nabla \left(\tr(\nabla_x f(x)) + \frac{1}{2}||f(x)||^2 \right) + o(\sigma^2),
\end{align*}%
where the first term is proportional to the stochastic gradient of the \emph{implicit score matching} \citep{hyvarinen2005estimation}.
That is, with smaller $\sigma$ values, minimizing the rescaled loss is equivalent to score matching, up to a diminishing scaling factor.
Moreover, the variance of the gradient signal $\mathrm{Var}(\Delta)$ also has a quadratic rate $\mathcal{O}(\sigma^2)$, giving rise to a decreasing signal-to-noise ratio (SNR) $\mathbb{E}[\Delta]/\sqrt{\mathrm{Var}(\Delta)}=\mathcal{O}(\sigma)$, which is an obstacle for stochastic optimization \citep{shalev2017failures}.
See Appendix \ref{app:snr} for the SNR analysis.
In order to leverage the asymptotic optimality of the gradient approximation as $\sigma\rightarrow0$ (Figure \ref{fig:quivers}-right),
we propose to train multiple (essentially infinitely many) models with different $\sigma$'s at the same time, hoping to leverage the benefit of training a large-$\sigma$ model while training a model with a smaller $\sigma$.
More concretely, we condition $ f_{ar} $ on the scaling factor $ \sigma $, so that $ f_{ar} $ can ``generalize'' to the limiting behaviour of $ f^*_{ar} $ as $ \sigma\rightarrow0 $ to reduce the asymptotic error.
Note that we cannot simply take $\sigma$ to be zero, since setting $\sigma=0$ would result in either learning an identity function for a regular DAE or learning an arbitrary function for the rescaled residual DAE (as the square loss would be independent of the gradient approximator).
The scale-conditional gradient approximator $f_{ar}(x;\sigma)$ will be used to approximate $\nabla_x \log p_g(x)$ by setting $\sigma=0$ during inference, while $\sigma$ is never zero at training.
This can be done by considering a distribution of $\sigma$, which places zero probability to the event $\{\sigma=0\}$; e.g. a uniform density between $[0,\delta]$ for some $\delta>0$.
The issue of having a non-negative support for the distribution of $\sigma$ is that we need to rely on $f_{ar}$ to extrapolate to $0$, but neural networks usually perform poorly at extrapolation.
This can be resolved by having a symmetric distribution such as a centered Gaussian with variance $\delta^2$ or a uniform density on $[-\delta,\delta]$; owing to the symmetry of the noise distribution $N(u;0,I)$, we can mirror the scale across zero without changing the loss:
\begingroup\makeatletter\def\f@size{9}\check@mathfonts
\def\maketag@@@#1{\hbox{\m@th\large\normalfont#1}}%
\begin{align*}
\mathbb{E}_u
\lbs \lbV u + \sigma f(x+\sigma u) \rbV^2 \rbs \nn
&= \eE \lbs \lbV (-u) + \sigma f(x+\sigma (-u)) \rbV^2 \rbs \nn \\
&= \eE \lbs \lbV u + (-\sigma) f(x+(-\sigma)u) \rbV^2 \rbs \nn.%
\end{align*}\endgroup
Furthermore, Proposition \ref{prop:optimal_res} implies a good approximation to $f_{ar}^*(x,\sigma')$ would be close to $f_{ar}^*(x,\sigma)$ if $\sigma'$ is sufficiently close to $\sigma$.
We suspect this might help to reduce the optimization error of AR-DAE, since the continuity of both $f_{ar}$ and $f_{ar}^*$ implies that
$f_{ar}(x,\sigma)$ only needs to refine $f_{ar}(x,\sigma')$ slightly if the latter already approximates the curvature of $f_{ar}^*(x,\sigma')$ well enough.
Then by varying different $\sigma$ values, the conditional DAE is essentially interpolating between the gradient field of the log density function of interest and that of a Gaussian with the same expected value.
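For concreteness, a minimal sketch of the training objective \eqref{eq:ardae} follows (the conditional network \texttt{f\_ar} and the batch conventions are placeholder assumptions of ours):
\begin{verbatim}
import torch

def ardae_loss(f_ar, x, delta):
    # residual form + loss rescaling + conditioning on the signed scale
    u = torch.randn_like(x)                                     # u ~ N(0, I)
    sigma = delta * torch.randn(x.size(0), 1, device=x.device)  # N(0, delta^2)
    out = f_ar(x + sigma * u, sigma)   # f_ar takes sigma as an extra input
    return ((u + sigma * out) ** 2).sum(dim=1).mean()

# at inference, approximate the score by conditioning on sigma = 0:
# score = f_ar(x, torch.zeros(x.size(0), 1, device=x.device))
\end{verbatim}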
\subsection{Approximation error}
\label{sec:approx_err_ardae}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{figs/1d/l2e_gt_vs_r_legend.png}
\vspace*{-0.2cm}
\includegraphics[width=0.4\textwidth]{figs/1d/l2e_gt_vs_r.png}
\vspace*{-0.4cm}
\caption{
Approximating log density gradient of 1-D MoG.
\emph{AR-DAE}: the approximation error of the proposed method.
$f^*_{ar}$: the optimal DAE. %
\emph{resDAE}: a DAE of residual form (as well as loss rescaling).
\emph{regDAE}: a regular DAE.
}
\label{fig:dae-1d-err}
\vspace*{-0.3cm}
\end{figure}
To study the approximation error with different variants of the proposed method, we consider a 1-dimensional mixture of Gaussians (MoG) with two equally weighted Gaussians centered at 2 and -2, and with a standard deviation of 0.5, as this simple distribution has a non-linear gradient function and an analytical form of the optimal gradient approximator $f^*$.
See Appendix \ref{app:error_analysis} for the formula and an illustration of the approximation $f^*$ for different $\sigma$ values.
We let $p$ be the density function of the MoG just described.
For a given gradient approximator $f$, we estimate the expected error
$\mathbb{E}_p[|\nabla_x \log p(x) - f(x)|]$ using 1000 i.i.d. samples of $x\sim p$.
The results are presented in Figure \ref{fig:dae-1d-err}.
The curve of the expected error of the optimal $f_{ar}^*$ shows that the asymptotic error indeed shrinks to $0$ as $\sigma\rightarrow0$; it serves as a theoretical lower bound on the overall approximation error.
Our ablation proceeds in two increments.
First, modifying the regular DAE (\emph{regDAE}) to be of the residual form (with loss rescaling, \emph{resDAE}) largely reduces the parameterization error and optimization error combined, as we use the same architecture for the reconstruction function of \emph{regDAE} and for the residual function of \emph{resDAE}.
We also experiment with annealing the $\sigma$ values (as opposed to training each model individually): we take the model trained with a larger $\sigma$ to initialize the network that will be trained with a slightly smaller $\sigma$.
Annealing significantly reduces the error and thus validates the continuity of the optimal $f_{ar}^*$.
All four curves have a jump in the error when $\sigma$ gets sufficiently small, indicating the difficulty of optimization when the training signal diminishes.
This leads us to our second increment: amortization of training (i.e. AR-DAE).
We see that not only does the error of AR-DAE decrease and transition more smoothly as $\sigma$ gets closer to $0$, but it also significantly outperforms the optimal $f_{ar}^*$ for large $\sigma$'s.
We hypothesize this is due to the choice of the distribution over $\sigma$; $\cN(0,\delta^2)$ concentrates around $0$, which biases the training of $f_{ar}$ to focus more on smaller values of $\sigma$.
\section{Related Work}
Denoising autoencoders were originally introduced to learn useful representations for deep networks by \citet{VincentLBM08/icml, vincent2010stacked}.
It was later noticed by \citet{vincent2011connection} that the loss function of the residual form of the DAE is equal (up to a constant) to the expected quadratic error $||f-\nabla_x\log p_\sigma||^2$, where $p_\sigma(x') = \int p(x){\mathcal{N}}(x';x,\sigma^2 I)dx$ is the marginal distribution of the perturbed data; the author refers to this as \emph{denoising score matching}.
Minimizing an expected quadratic error of this form is in general known as \emph{score matching} \citep{hyvarinen2005estimation}, where $\nabla_x \log p$ is referred to as the score\footnote{This is not to be confused with the score (or informant) in statistics, which is the gradient of the log-likelihood function wrt the parameters.} of the density $p$.
It is now clear that when we convolve the data distribution with a smaller amount of noise, the residual function $f$ tends to approximate $\nabla_x\log p(x)$ better.
This is formalized by \citet{AlainB14/jmlr} as the limiting case of the optimal DAE.
\citet{saremi2018deep, saremi2019neural} propose to use the residual and gradient parameterizations to train a deep energy model with denoising score matching.
As a reformulation of score matching, instead of explicitly minimizing the expected square error of the score, the original work of \citet{hyvarinen2005estimation} proposes \emph{implicit score matching}, which minimizes
\begin{align}
\mathbb{E}_p\left[\frac{1}{2}||f(x)||^2 + \tr(\nabla_x f(x))\right]
.
\label{eq:ism}
\end{align}
\citet{SongGSE19} propose a stochastic algorithm called \emph{sliced score matching} to estimate the trace of the Jacobian, which reduces the computational cost from $\mathcal{O}(d_x^2)$ to $\mathcal{O}(d_x)$ (where $d_x$ is the dimensionality of $x$).
It was later noted by the same author that the computational cost of the sliced score matching is still much higher than that of the denoising score matching \citep{song2019generative}.
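
For reference, a minimal sketch of the sliced estimator of the trace term is given below; it assumes a score network \texttt{f} mapping $\mathbb{R}^{d_x}$ to $\mathbb{R}^{d_x}$ and uses a single random projection per sample, which is what brings the cost down to $\mathcal{O}(d_x)$.
\begin{verbatim}
import torch

def sliced_score_matching_loss(f, x):
    # E_p[0.5 * ||f(x)||^2 + tr(grad_x f(x))], with the trace estimated by
    # a random projection: E_v[v^T (df/dx) v] = tr(df/dx) for v ~ N(0, I).
    x = x.clone().requires_grad_(True)
    fx = f(x)
    v = torch.randn_like(fx)
    vjp = torch.autograd.grad((fx * v).sum(), x, create_graph=True)[0]
    trace_est = (vjp * v).sum(dim=1)           # v^T (df/dx) v, per sample
    return (0.5 * (fx ** 2).sum(dim=1) + trace_est).mean()
\end{verbatim}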
Most similar to our work are \citet{song2019generative} and \citet{bigdeli2020learning}.
\citet{song2019generative} propose to learn the score function of a data distribution, and propose to sample from the corresponding distribution of the learned score function using Langevin dynamics.
They also propose a conditional DAE trained with a sequence of $\sigma$'s in decreasing order, and anneal the potential energy for the Langevin dynamics accordingly to tackle the mixing problem of the Markov chain.
\citet{bigdeli2020learning} propose to match the score function of the data distribution and that of an implicit sampler.
As the resulting algorithm amounts to minimizing the reverse KL divergence, their proposal can be seen as a combination of \citet{song2019generative} and our work.
Implicit density models are commonly seen in the context of likelihood-free inference \citep{MeschederNG17/icml, Tran2017hierarchical, li2017approximate, Huszar2019implicit}.
Statistics of an implicit distribution are usually intractable, but there has been an increasing interest in approximately estimating the gradient of the statistics, such as the entropy \citep{LiT18/iclr,shi2018spectral} and the mutual information \citep{wen2020mutual}.
\begin{figure}%
\centering
\begin{subfigure}[b]{0.09\textwidth}
\parbox{0.1\textwidth}{\subcaption*{\textbf{1}}}%
\hspace{0.5em}%
\parbox{0.1\textwidth}{\includegraphics[width=10\linewidth]{figs/fit/fit-po1-data.png}}
\parbox{0.1\textwidth}{\subcaption*{\textbf{2}}}%
\hspace{0.5em}%
\parbox{0.1\textwidth}{
\includegraphics[width=10\linewidth]{figs/fit/fit-po2-data.png}
}
\parbox{0.1\textwidth}{\subcaption*{\textbf{3}}}%
\hspace{0.5em}%
\parbox{0.1\textwidth}{
\includegraphics[width=10\linewidth]{figs/fit/fit-po3-data.png}
}
\parbox{0.1\textwidth}{\subcaption*{\textbf{4}}}%
\hspace{0.5em}%
\parbox{0.1\textwidth}{
\includegraphics[width=10\linewidth]{figs/fit/fit-po4-data.png}
}
\vspace*{-0.05cm}
\caption*{\small $\quad\,\, \frac{1}{Z}e^{-U(x)}$}
\vspace*{-0.25cm}
\caption*{\scriptsize }
\end{subfigure}
\hspace{1em}%
\begin{subfigure}[b]{0.09\textwidth}
\includegraphics[width=\linewidth]{figs/fit/po1-aux-mnh3.png}
\includegraphics[width=\linewidth]{figs/fit/po2-aux-mnh3.png}
\includegraphics[width=\linewidth]{figs/fit/po3-aux-mnh3.png}
\includegraphics[width=\linewidth]{figs/fit/po4-aux-mnh3.png}
\vspace*{-0.05cm}
\caption*{\small Aux}
\vspace*{-0.2cm}
\caption*{\scriptsize hierarchical}
\end{subfigure}
\begin{subfigure}[b]{0.09\textwidth}
\includegraphics[width=\linewidth]{figs/fit/po1-ardae-lvm4-nd5-nstd10-mnh3.png}
\includegraphics[width=\linewidth]{figs/fit/po2-ardae-lvm4-nd5-nstd10-mnh3.png}
\includegraphics[width=\linewidth]{figs/fit/po3-ardae-lvm4-nd5-nstd10-mnh3.png}
\includegraphics[width=\linewidth]{figs/fit/po4-ardae-lvm4-nd5-nstd10-mnh3.png}
\vspace*{-0.05cm}
\caption*{\small AR-DAE}
\vspace*{-0.2cm}
\caption*{\scriptsize hierarchical}
\end{subfigure}
\begin{subfigure}[b]{0.09\textwidth}
\includegraphics[width=\linewidth]{figs/fit/po1-ardae-im-nd5-nstd10-mnh3.png}
\includegraphics[width=\linewidth]{figs/fit/po2-ardae-im-nd5-nstd10-mnh3.png}
\includegraphics[width=\linewidth]{figs/fit/po3-ardae-im-nd5-nstd10-mnh3.png}
\includegraphics[width=\linewidth]{figs/fit/po4-ardae-im-nd5-nstd10-mnh3.png}
\vspace*{-0.05cm}
\caption*{\small AR-DAE}
\vspace*{-0.2cm}
\caption*{\scriptsize implicit}
\end{subfigure}
\vspace*{-0.3cm}
\caption{
Fitting energy functions.
First column: target energy functions.
Second column: auxiliary variational method for hierarchical model.
Third column: hierarchical model trained with AR-DAE.
Last column: implicit model trained with AR-DAE.
}
\label{fig:result-unnorm-density-estimation}
\vspace*{-0.5cm}
\end{figure}
\section{More Analyses and Experiments}
\begin{figure*}
\centering
\includegraphics[width=1.0\textwidth]{figs/dae_mog_vae.png}
\vspace{-0.5cm}
\caption{
Density estimation with VAE on $5 \times 5$ grid MoG. 1st column: data sampled from the MoG. 2nd column:
VAE+DAE in residual form trained with a large $\sigma$ value.
3rd column: VAE+DAE in residual form trained with a small $\sigma$ value. %
4th column: VAE+AR-DAE. %
Last column: averaged log variance of $z$ throughout training.
}
\label{fig:dae_mog_vae}
\vspace*{-0.3cm}
\end{figure*}
\subsection{Energy function fitting}
\label{sec:energy-function-fitting}
In Section \ref{sec:approx_err_ardae}, we have analyzed the error of approximating the gradient of the log-density function in the context of a fixed distribution.
In reality, we usually optimize the distribution iteratively, and once the distribution is updated, the gradient approximator also needs to be updated accordingly to constantly provide accurate gradient signal.
In this section, we use the proposed entropy gradient estimator to train an implicit sampler to match the density of some unnormalized energy functions.
Concretely,
we would like to approximately sample from a target density which can be written as $p_{\tt{target}}(x) \propto \exp(-U(x))$, where $U(x)$ is an energy function.
We train a neural sampler $g$ by minimizing the reverse Kullback-Leibler (KL) divergence
\eq{
\label{eq:reverse-kl}
&\kld(p_g(x) || p_{\tt{target}}(x)) \nn\\
&= - H(p_g(x)) + \eE_{x \sim p_g(x)} \lbs -\log p_{\tt{target}}(x) \rbs
,
}
where $p_g$ is the density induced by $g$.
We use the target energy functions $U$ proposed in \citet{RezendeM15/icml} (see Table \ref{tab:target-energy-functions} for the formulas).
The corresponding density functions are illustrated at the first column of Figure \ref{fig:result-unnorm-density-estimation}.
We consider two sampling procedures for $g$.
The first one has a hierarchical structure: let $z$ be distributed as
$\cN(0,I)$, and let $x$ be sampled from the conditional $p_g(x|z):=\cN(\mu_g(z),\sigma_g^2(z))$, where $\mu_g$ and $\log\sigma_g$ are parameterized by neural networks.
The resulting marginal density has the form $p_g(x) = \int p_g(x|z) p_g(z) dz$, which is computationally intractable due to the marginalization over $z$.
We compare against the variational method proposed by \citet{AgakovB04a/iconip}, which lower-bounds the entropy by
\eq{
\label{eq:ent-lower-bound-aux}
H(p_g(x)) \ge - \eE_{x,z \sim p_g(x|z) p(z)} \lbs \log \f{ p_g(x|z) p(z) }{ h(z|x) } \rbs %
.%
}
Plugging (\ref{eq:ent-lower-bound-aux}) into (\ref{eq:reverse-kl}) gives us an upper bound on the KL divergence.
We train $p_g$ and $h$ jointly by minimizing this upper bound\footnote{The normalizing constant of the target density does not affect the gradient $\nabla_x\log p_{\tt{target}}(x) = -\nabla_x U(x)$.} as a baseline.
The second sampling procedure has an implicit density: we first sample $z\sim \cN(0,I)$ and pass it through the generator $x=g(z)$.
We estimate the gradient of the negentropy of both the hierarchical and implicit models by following the approximate gradient of the log density, $f_{ar}\approx\nabla_x\log p_g$.
The experimental details can be found in Appendix \ref{appendix:energy-fitting}.
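
To make one training step concrete, the sketch below realizes the entropy gradient of Equation (\ref{eq:entropy-gradient}) by differentiating a surrogate in which the approximate score is detached; the handles \texttt{g}, \texttt{f\_ar}, \texttt{U}, and \texttt{opt} are hypothetical.
\begin{verbatim}
import torch

def sampler_step(g, f_ar, U, opt, n=1024, d_z=10):
    z = torch.randn(n, d_z)
    x = g(z)
    # Approximate grad_x log p_g(x) at sigma = 0; hold it constant wrt the sampler.
    score = f_ar(x, torch.zeros(n, 1)).detach()
    neg_entropy = (score * x).sum(dim=1).mean()  # surrogate whose gradient is -dH/dtheta
    cross_entropy = U(x).mean()                  # E[-log p_target] up to log Z
    loss = neg_entropy + cross_entropy           # reverse KL up to a constant
    opt.zero_grad()
    loss.backward()
    opt.step()
\end{verbatim}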
As shown in Figure \ref{fig:result-unnorm-density-estimation}, the density learned by the auxiliary method sometimes fails to fully capture the target density.
In this experiment, we anneal the weighting of the cross-entropy term from $0.01$ to $1$, which biases the sampler toward high entropy during the early stage of training; the well-known mode-seeking behavior of reverse KL minimization should therefore be largely mitigated.
This suggests that the imperfection of the density trained with the auxiliary method is a result of the looseness of the variational lower bound on entropy, which leads to an inaccurate estimate of the gradient.
On the other hand, the same hierarchical model and the implicit model trained with AR-DAE both exhibit much higher fidelity.
This suggests that our method can provide an accurate gradient signal even when the sampler's distribution $p_g$ is constantly being updated.\footnote{We update $f_{ar}$ 5 times per update of $p_g$ to generate this figure; we also include results with fewer updates of $f_{ar}$ in Appendix \ref{appendix:energy-fitting}.}
\afterpage{
\begin{table}[!t]%
\centering
\small
\begin{tabular}{lccc}
\toprule
& \multicolumn{3}{c}{$\log p(x)$} \\
\midrule
& \multicolumn{1}{c}{\it {MLP}} & \multicolumn{1}{c}{\it {Conv}} & \multicolumn{1}{c}{\it {ResConv}} \\
Gaussian$^\dagger$ & -85.0 & -81.9 & - \\
HVI aux$^\dagger$ & -83.8 & -81.6 & - \\
AVB$^\dagger$ & -83.7 & -81.7 & - \\
\midrule
Gaussian & -84.40 & -81.82 & -80.75 \\
HVI aux & -84.63 & -81.87 & -80.80 \\
HVI AR-DAE (ours) & \textbf{-83.42} & -81.46 & -80.45 \\
IVI AR-DAE (ours) & -83.62 & \textbf{-81.26} & \textbf{-79.18} \\
\bottomrule
\end{tabular}%
\caption{
Dynamically binarized MNIST. %
$^\dagger$Results taken from \citet{MeschederNG17/icml}.
}
\label{tab:vae-result-dbmnist}
\end{table}
\begin{table}[!t]%
\centering
\small
\begin{tabular}{lc}
\toprule
Model & $\log p(x)$ \\ \midrule
{\small(Models with a trained prior)}\\
VLAE {\scriptsize\citep{chen2016variational}} & \textbf{-79.03} \\
PixelHVAE + VampPrior {\scriptsize\citep{tomczak2018vae}}
& -79.78 \\
\midrule
{\small(Models without a trained prior)}\\
VAE IAF {\scriptsize\citep{KingmaSJCCSW16/nips}} & -79.88 \\
VAE NAF {\scriptsize\citep{HuangKLC18/icml}} & -79.86 \\
Diagonal Gaussian & -81.43 \\
IVI AR-DAE (ours) & \textbf{-79.61} \\
\bottomrule
\end{tabular}
\caption{
Statically binarized MNIST.
}
\label{tab:vae-results-sbmnist}
\end{table}
\begin{figure}[!t]%
\centering
\includegraphics[width=\linewidth]{figs/vae/gen_sbmnist_2x16.png}
\vspace*{-.2cm}
\caption{
Generated samples (the mean value of the decoder) of the IVI AR-DAE trained on statically binarized MNIST.
}
\label{fig:result-generated-samples-ivae}
\vspace*{-.2cm}
\end{figure}
}
\subsection{Variational autoencoder}%
\label{sec:vae}
In the previous section, we have demonstrated that AR-DAE can robustly approximate the gradient of the log density function that is constantly changing and getting closer to some target distribution.
In this section, we move on to a more challenging application: likelihood-free inference for variational autoencoders (VAE, \citet{kingma2013auto, RezendeMW14/icml}).
Let $p(z)$ be %
the standard normal.
We assume the data is generated by $x\sim p(x|z)$ which is parameterized by a deep neural network.
To estimate the parameters, we maximize the marginal likelihood $\int p(x|z)p(z) dz$ of the data $x$, sampled from some data distribution $p_{\tt{data}}(x)$.
Since the marginal likelihood is usually intractable, the standard approach is to maximize the \emph{evidence lower bound} (ELBO):
\eq{
\label{eq:elbo}
\log p(x) \ge \eE_{z \sim q(z|x)} \lbs \log p(x, z) - \log q(z|x) \rbs
,
}
where $q(z|x)$ is an amortized variational posterior distribution.
The ELBO allows us to jointly optimize $p(x|z)$ and $q(z|x)$ with a unified objective.
Note that the equality holds \emph{iff} $q(z|x)=p(z|x)$, which motivates using more flexible families of variational posteriors.
Note that this application is more challenging for two reasons: the target distribution $p(z|x)$ is constantly changing, and it is conditional on the data $x$.
Similar to \citet{MeschederNG17/icml}, we parameterize an implicit $q(z|x)$ with a conditional sampler $z=g(\epsilon,x)$, $\epsilon\sim\mathcal{N}(0,I)$.
We use AR-DAE to approximate $\nabla_z \log q(z|x)$ and
estimate the entropy gradient to update the encoder while maximizing the ELBO. %
To train AR-DAE, instead of fixing the prior variance $\delta^2$ in $\cL_{ar}$, we adaptively choose $\delta$ for different data points.
See Appendix \ref{appendix:vae} for a detailed description of the algorithm and heuristics we use.
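
A minimal sketch of the resulting encoder objective is given below; \texttt{g}, \texttt{f\_ar}, and \texttt{log\_joint} are assumed handles, and the surrogate is constructed so that its gradient matches Equation (\ref{eq:elbo-grad-ardae-approx}) in Appendix \ref{appendix:vae}.
\begin{verbatim}
import torch

def encoder_surrogate_elbo(g, f_ar, log_joint, x, d_eps=10):
    eps = torch.randn(x.size(0), d_eps)
    z = g(eps, x)                                # z ~ q(z|x), implicit reparameterization
    # Approximate grad_z log q(z|x) at sigma = 0 and detach it.
    score_q = f_ar(z, x, torch.zeros(x.size(0), 1)).detach()
    # Maximizing this wrt the encoder parameters follows
    # E[(grad_z log p(x,z) - grad_z log q(z|x))^T J_phi g(eps, x)].
    return (log_joint(x, z) - (score_q * z).sum(dim=1)).mean()
\end{verbatim}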
\para{Toy dataset}
To demonstrate the difficulties in inference, %
we train a VAE with a 2-D latent space on a mixture of 25 Gaussians.
See Appendix \ref{app:subsec:vae-experiments} for the experimental details.
In Figure \ref{fig:dae_mog_vae}, we see that
if a fixed $\sigma$ is chosen to be too large
for the residual DAE, the DAE
tends to underestimate the gradient of the entropy,
so the variational posteriors collapse to point masses. %
If $\sigma$ is too small,
the DAE
manages to maintain a non-degenerate variational posterior, %
but the inaccurate gradient approximation
results in a non-smooth encoder and
poor generation quality. On the contrary, the same model trained with AR-DAE has a very smooth encoder that maps the data into a Gaussian-shaped, %
aggregated posterior %
and approximates the data distribution accurately.
\para{MNIST}
We first demonstrate the robustness of our method on different choices of architectures for VAE:
(1) a one-hidden-layer fully-connected network (denoted by {\it MLP}),
(2) a convolutional network (denoted by {\it Conv}), %
and (3) a larger convolutional network with residual connections (denoted by {\it ResConv}) from \cite{HuangKLC18/icml}.
The first two architectures are taken from \citet{MeschederNG17/icml} for a direct comparison with the adversarially trained implicit variational posteriors (AVB).
We also implement a diagonal Gaussian baseline and the auxiliary hierarchical method (HVI aux; \citealp{maaloe2016auxiliary}).
We apply AR-DAE to estimate the entropy gradient of the hierarchical posterior and the implicit posterior (denoted by HVI AR-DAE and IVI AR-DAE, respectively).
As shown in Table \ref{tab:vae-result-dbmnist}, %
AR-DAE consistently improves the quality of inference in comparison to the auxiliary variational method and AVB, %
which is reflected by the better likelihood estimates.
We then compare our method with state-of-the-art VAEs evaluated on the statically binarized MNIST dataset \citep{larochelle2011neural}. %
We use the implicit distribution with the {\it ResConv} architecture following the previous ablation.
As shown in Table \ref{tab:vae-results-sbmnist}, the VAE trained with AR-DAE demonstrates state-of-the-art performance among models with a fixed prior.
Generated samples are presented in Figure \ref{fig:result-generated-samples-ivae}.
\subsection{Entropy-regularized reinforcement learning}
\begin{figure*}[h]%
\centering
\vspace*{-0.3cm}
\begin{subfigure}[b]{0.38\textwidth}
\centering
\hspace*{0.4cm}\includegraphics[width=\textwidth]{figs/rl/all_main_legend.png}
\end{subfigure}
\vspace*{-0.2cm}
\begin{subfigure}[b]{\textwidth}
\centering
\hspace*{-0.3cm}\includegraphics[width=\textwidth]{figs/rl/all_main_smoothed.png}
\end{subfigure}
\vspace*{-0.3cm}
\caption{
Continuous control in reinforcement learning. SAC: soft actor-critic with diagonal Gaussian. SAC-NF: soft actor-critic with normalizing flows. SAC-AR-DAE: soft actor-critic with implicit distribution trained with AR-DAE. %
The shaded area indicates the standard error with 5 runs.
}
\label{fig:results-rl}
\vspace{-0.2cm}
\end{figure*}
We now apply AR-DAE to approximate entropy gradient in the context of reinforcement learning (RL). %
We use the
\emph{soft actor-critic} (SAC, \citet{HaarnojaZAL18/icml}), %
a state-of-the-art off-policy %
algorithm for continuous control that is designed to encourage exploration by regularizing the entropy of the policy. %
We train the policy $\pi(a|s)$ to minimize the following objective:
\eqst{
\cL(\pi) = \eE_{s \sim \cD} \lbs
\kld \lbp
\pi(a|s) \middle\Vert \f{\exp \lbp Q(s, a) \rbp}{Z(s)}
\rbp
\rbs
,
}
where $\cD$ is a replay buffer of the past experience of the agent, %
$Q$ is a ``soft'' state-action value function that approximates the entropy-regularized expected return of the policy, and $Z(s) = \int_{a} \exp( Q(s, a) ) da$ is the normalizing constant of the Gibbs distribution.~A complete description of the SAC algorithm can be found in Appendix \ref{appendix:subsec:sac}. %
We compare with the original SAC that uses a diagonal Gaussian distribution as policy and a normalizing flow-based policy proposed by \citet{MazoureDDHP19}. %
We parameterize an implicit policy and use AR-DAE to approximate $\nabla_a \log \pi(a|s)$ to estimate~
\eq{
&\nabla_{\phi} \cL(\pi) \nn\\
&= \eE_{
\substack{s \sim \cD \\
a \sim \pi}}
\lbs \lbs\nabla_{a} \log \pi_{\phi} (a|s) - \nabla_{a} Q(s, a) \rbs^{\intercal} \mathbf{J}_{\phi}g_{\phi}(\epsilon, s) \rbs
,
}
where $\pi(a|s)$ is implicitly induced by $a=g_{\phi}(\epsilon,s)$ with $\epsilon\sim\mathcal{N}(0,I)$.~We parameterize $f_{ar}$ as the gradient of a scalar function $F_{ar}$, so that $F_{ar}$ can be interpreted as the unnormalized log-density of the policy, which will be used to update the soft Q-network. %
We run our experiments on six continuous control environments from the
OpenAI gym benchmark suite \cite{openaigym} and Rllab \cite{duan2016benchmarking}. %
The experimental details can be found in Appendix \ref{app:sac-ar-dae}.
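
For reference, the actor update can be sketched as follows, with the entropy coefficient absorbed into $Q$ for brevity; \texttt{g}, \texttt{f\_ar}, and \texttt{Q} are assumed handles rather than our exact implementation.
\begin{verbatim}
import torch

def actor_loss(g, f_ar, Q, states, d_eps=10):
    eps = torch.randn(states.size(0), d_eps)
    a = g(eps, states)                           # a ~ pi(.|s), implicit policy
    # Approximate grad_a log pi(a|s) at sigma = 0; detach it from the actor graph.
    score = f_ar(a, states, torch.zeros(states.size(0), 1)).detach()
    # Differentiating this wrt phi yields
    # E[(grad_a log pi(a|s) - grad_a Q(s,a))^T J_phi g(eps, s)].
    return ((score * a).sum(dim=1) - Q(states, a)).mean()
\end{verbatim}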
The results are presented in Table \ref{tab:sac-result-max-main} and Figure \ref{fig:results-rl}. We see that SAC-AR-DAE with an implicit policy improves the performance over SAC-NF.~This also shows that the approximate gradient signal of AR-DAE is stable and accurate even for reinforcement learning. The extended results for a full comparison of the methods are provided in Tables \ref{tab:sac-result-max-full} and \ref{tab:sac-result-auc-full}.
\begin{table}[!t]
\centering
\small
\resizebox{0.48\textwidth}{!}{%
\begin{tabular}{lccc}
\toprule
&\multicolumn{1}{c}{SAC}&\multicolumn{1}{c}{SAC-NF}&\multicolumn{1}{c}{SAC-AR-DAE} \\
\midrule
HalfCheetah-v2 & 9695 $\pm$ 879 & 9325 $\pm$ 775 & \textbf{10907 $\pm$ 664} \\
Ant-v2 & 5345 $\pm$ 553 & 4861 $\pm$ 1091 & \textbf{6190 $\pm$ 128} \\
Hopper-v2 & \textbf{3563 $\pm$ 119} & 3521 $\pm$ 129 & 3556 $\pm$ 127 \\
Walker-v2 & 4612 $\pm$ 249 & 4760 $\pm$ 624 & \textbf{4793 $\pm$ 395} \\
Humanoid-v2 & 5965 $\pm$ 179 & 5467 $\pm$ 44 & \textbf{6275 $\pm$ 202} \\
Humanoid (rllab) & 6099 $\pm$ 8071 & 3442 $\pm$ 3736 & \textbf{10739 $\pm$ 10335} \\
\bottomrule
\end{tabular}%
}
\caption{Maximum average return. $\pm$ corresponds to one standard deviation over five random seeds.}
\label{tab:sac-result-max-main}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.23\textwidth]{figs/max_ent_emd.png}
\hfill
\includegraphics[width=0.23\textwidth]{figs/max_ent_emd_diff.png}
\vspace*{-0.3cm}
\caption{Maximum entropy principle experiment. Left: estimated EMD of (red) the implicit distribution trained with AR-DAE and (green) the IAF. Right: the estimated EMD of the implicit distribution minus that of the IAF.}
\label{fig:max_ent}
\vspace*{-.4cm}
\end{figure}
\subsection{Maximum entropy modeling}
As a last application, we apply AR-DAE to solve the constrained optimization problem of the \emph{maximum entropy principle}.
Let $m\in\mathbb{R}^{10}$ be a random vector
and $B\in\mathbb{R}^{10\times10}$ be a random matrix
with $m_i$ and $B_{ij}$ drawn i.i.d. from $\cN(0,1)$.
It is a standard result that among the class of real-valued random vectors $x\in\mathbb{R}^{10}$ satisfying the constraints $\mathbb{E}[x]=m$ and $\mathrm{Var}(x)=B^\top B$, $x\sim\cN(m,B^\top B)$ has the maximal entropy.
Similar to \citet{loaiza2017maximum}, we solve this constrained optimization problem but with an implicit distribution.
We use the \emph{penalty method} and increasingly penalize the model for violating the constraints.
Concretely, let $\tilde{m}$ and $\tilde{C}$ be the sample mean and sample covariance matrix, respectively, estimated with a batch size of $128$.
We minimize the modified objective $-H(p_\theta(x)) + \lambda\sum_{j\in\{1,2\}} c_j^2 $,
where $c_1=||\tilde{m} - m||_2$ and $c_2=||\tilde{C}-B^\top B||_F$, with increasing weighting $\lambda$ on the penalty.
We estimate the entropy gradient using AR-DAE, and compare against the inverse autoregressive flows (IAF, \citet{KingmaSJCCSW16/nips}).
At the end of training, we %
estimate the earth mover's distance (EMD) between the learned distribution and $\cN(m,B^\top B)$.
We repeat the experiment 256 times and report the histogram of EMD in Figure \ref{fig:max_ent}.
We see that most of the time the implicit model trained with AR-DAE has a smaller EMD, indicating that the extra flexibility of arbitrary parameterization allows it to satisfy the geometry of the constraints more easily. %
We leave some more interesting applications suggested in \citet{loaiza2017maximum} for future work.
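
A sketch of the penalized objective for one batch is given below, assuming the generator \texttt{g} and the gradient approximator \texttt{f\_ar} follow the conventions of the previous sections.
\begin{verbatim}
import torch

def maxent_penalty_loss(g, f_ar, m, B, lam, n=128, d=10):
    z = torch.randn(n, d)
    x = g(z)
    score = f_ar(x, torch.zeros(n, 1)).detach()
    neg_entropy = (score * x).sum(dim=1).mean()  # surrogate for -H(p_theta)
    m_hat = x.mean(dim=0)                        # sample mean
    xc = x - m_hat
    C_hat = xc.t() @ xc / (n - 1)                # sample covariance
    c1_sq = ((m_hat - m) ** 2).sum()             # ||m_hat - m||_2^2
    c2_sq = ((C_hat - B.t() @ B) ** 2).sum()     # ||C_hat - B^T B||_F^2
    return neg_entropy + lam * (c1_sq + c2_sq)
\end{verbatim}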
\section{Conclusion}
We propose AR-DAE to estimate the entropy gradient of an arbitrarily parameterized data generator.
We identify the difficulties in approximating the log density gradient with a DAE, and demonstrate the proposed method significantly reduces the approximation error.
In theory, AR-DAE approximates the zero-noise limit of the optimal DAE, which is an unbiased estimator of the entropy gradient.
We apply our method to a suite of tasks %
and empirically validate that AR-DAE provides accurate and reliable gradient signal to maximize entropy.
\section*{Acknowledgments}
We would like to thank Guillaume Alain for an insightful discussion on denoising autoencoders. Special thanks to everyone who provided feedback and advice during discussions, in particular Bogdan Mazoure and Thang Doan for sharing the code for the RL experiments, and Joseph Paul Cohen for helping optimize the allocation of computational resources. We thank CIFAR, NSERC and PROMPT for their support of this work. %
\bibliographystyle{icml2020}
\section{Gradient of the entropy with respect to density functions}
\label{sec:appendix-entropy-gradient}
Consider a probability density function $p_g(x)$.~We assume $p_g$ is the pushforward of some prior distribution $p(z)$ by a mapping $g_{\theta}: z \mapsto x$. Our goal is to compute the gradient of the entropy of $p_g$ wrt the parameter $\theta$.
Following \citet{roeder2017sticking}, we show that the entropy gradient can be rewritten as Equation (\ref{eq:entropy-gradient}).
\begin{proof}
By the law of the unconscious statistician$^*$ (LOTUS, Theorem 1.6.9 of \citet{durrett2019probability}), we have
\eqst{
\nabla_{\theta} H(p_{g}(x)) %
&= \nabla_\theta \eE_{x \sim p_{g}(x)} \left[ -\log p_{g}(x) \right] \nn\\
&\overset{*}{=} \nabla_\theta \eE_{z \sim p(z)} \left[ -\log p_{g}(g_{\theta}(z)) \right] \nn\\
&= -\nabla_\theta \int p(z) \log p_{g}(g_{\theta}(z)) dz \nn\\
&= \cancel{-\int p(z) \nabla_{\theta} \log p_{g}(x) |_{x=g_{\theta}(z)} dz}
- \int p(z) [\nabla_x \log p_{g}(x)|_{x=g_{\theta}(z)}]^{\intercal} \mathbf{J}_{\theta}g_{\theta}(z) dz \nn\\
&= -\eE_{z \sim p(z)} \left[ [\nabla_x \log p_{g}(x)|_{x=g_{\theta}(z)}]^{\intercal} \mathbf{J}_{\theta}g_{\theta}(z) \right] %
.%
}
where the crossed-out term vanishes due to the following identity:
\eqst{
\eE_{z \sim p(z)} \left[ \nabla_{\theta} \log p_{g}(x) \middle\vert_{x=g_{\theta}(z)} \right]
&= \eE_{x \sim p_g(x)} \left[ \nabla_{\theta} \log p_{g}(x) \right] %
= \int p_g(x) \nabla_{\theta} \log p_{g}(x) dx \nn\\
&=\int \cancel{ p_g(x) } \f{1}{\cancel{p_g(x)}} \nabla_{\theta} p_{g}(x) dx %
= \nabla_{\theta} \int p_{g}(x) dx = \nabla_{\theta} 1 = 0 %
.
}
\end{proof}
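
As a sanity check of this identity, consider the scalar pushforward $g_\theta(z)=\theta z$ with $z\sim\cN(0,1)$, so that $p_g=\cN(0,\theta^2)$ and $\nabla_\theta H(p_g) = 1/\theta$ in closed form; the sketch below verifies the identity numerically using the exact score $\nabla_x\log p_g(x)=-x/\theta^2$.
\begin{verbatim}
import torch

theta = torch.tensor(2.0, requires_grad=True)
z = torch.randn(100000)
x = theta * z                           # pushforward: x ~ N(0, theta^2)
score = (-x / theta ** 2).detach()      # exact grad_x log p_g(x), held constant
surrogate = -(score * x).mean()         # -E[score * g_theta(z)]
surrogate.backward()
print(theta.grad.item())                # approximately 1/theta = 0.5
\end{verbatim}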
\section{Properties of residual DAE}
\optimalres*
\begin{proof}
For simplicity, when the absolute value and power are both applied to a vector-valued variable, they are applied elementwise.
The characterization of the optimal function $f^*$ can be derived by following \citet{AlainB14/jmlr}.
For the second part, the symmetry of the distribution of $u$ implies
\begin{align*}
f^*(x;\sigma)
&= \frac{-\eE_u[p(x-\sigma u)u]}{\sigma \eE_u[p(x-\sigma u)]}\\
&= \frac{\eE_u[p(x+\sigma u)u]}{\sigma \eE_u[p(x+\sigma u)]} = f^*(x; -\sigma),
\end{align*}
so we only need to show $f^*$ is continuous for $\sigma>0$.
Since $p$ is bounded,
by the {dominated convergence theorem} (DOM), both
$\eE_u[p(x-\sigma u)u]$ and $\eE_u[p(x-\sigma u)]$ are continuous for $\sigma>0$, and so is $f^*(x;\sigma)$.
Lastly, an application of L'Hôpital's rule gives
\begin{align*}
\lim_{\sigma\rightarrow0} f^*(x;\sigma)
= \lim_{\sigma\rightarrow0} \frac{\frac{d}{d\sigma}\eE_u[p(x+\sigma u)u]}{\frac{d}{d\sigma} \sigma\eE_u[p(x+\sigma u)]},
\intertext{which by another application of DOM (since gradient of $p$ is bounded) is equal to}
\lim_{\sigma\rightarrow0} \frac{\eE[\nabla p(x+\sigma u)^\top uu]}{\eE[p(x+\sigma u)] + \sigma \eE[\nabla p(x+\sigma u)^\top u]}
.
\end{align*}
Applying DOM a final time gives
$$\lim_{\sigma\rightarrow0}f^*(x;\sigma) = \frac{\nabla p(x)\odot\mathbb{E}[u^2]}{p(x)} = \nabla \log p(x).$$
\end{proof}
\reslargesigma*
\begin{proof}
We rewrite the optimal gradient approximator as
$$f^*(x;\sigma)=-\frac{1}{\sigma^2}\int \frac{\cN(u;0,I)p(x-\sigma u)}{\int \cN(u';0,I)p(x-\sigma u') du'} \cdot \sigma u\, du .$$
Changing the variables $\epsilon=\sigma u$ and $\epsilon'=\sigma u'$ gives
$$-\frac{1}{\sigma^2}\int \frac{\cN(\epsilon/\sigma;0,I)p(x-\epsilon)}{\int \cN(\epsilon'/\sigma;0,I)p(x-\epsilon') d\epsilon'} \cdot \epsilon\, d\epsilon ,$$
which can be written as $-\frac{1}{\sigma^2}\mathbb{E}_{q(\epsilon)}[\epsilon]$, where $q(\epsilon)\propto \cN(\epsilon/\sigma;0,I)p(x-\epsilon)$ is the change-of-variable density.
By DOM (applied to the numerator and denominator separately, since the standard Gaussian density is bounded), $\mathbb{E}_q[\epsilon]\rightarrow\int p(x-\epsilon) \epsilon\, d\epsilon$
as $\sigma\rightarrow\infty$.
The latter integral is equal to $x-\mathbb{E}_{p}[X]$ (which can be seen by substituting $y=x-\epsilon$), so $f^*(x;\sigma)$ behaves like $\frac{1}{\sigma^2}\lbp\mathbb{E}_p[X]-x\rbp$ for large $\sigma$.
\end{proof}
\section{Signal-to-noise ratio analysis on DAE's gradient}
\label{app:snr}
Fixing $x$ and $u$, the gradient of the L2 loss can be written as
\begin{align*}
\Delta := \nabla || u + \sigma f(x+\sigma u)||^2
&= \nabla \left(\sum_{i}( u_i + \sigma f_i(x+\sigma u) )^2\right)
= \sum_{i} \nabla ( u_i + \sigma f_i(x+\sigma u) )^2
,
\end{align*}
where $i$ iterates over the entries of the vectors $u$ and $f$, and $\nabla$ denotes the gradient wrt the parameters of $f$.
We further expand the gradient of the summand via chain rule, which yields
\begin{align*}
\nabla ( u_i + \sigma f_i(x+\sigma u))^2
&= 2\sigma (u_i+\sigma f_i(x+\sigma u)) \nabla f_i(x+\sigma u) \\
&= 2\sigma \left(u_i\nabla
\underbrace{f_i(x+\sigma u)}_{A} + \sigma
\underbrace{f_i(x+\sigma u)}_{B}\nabla
\underbrace{f_i(x+\sigma u)}_{C}\right)
.
\end{align*}
Taylor's theorem with the mean-value form of the remainder allows us to approximate $f_i(x+\sigma u)$ by $f_i(x)$ when $\sigma$ is small:
\begin{align}
f_i(x+\sigma u)
&= f_i(x) + \sigma \nabla_x f_i(\hat{x})^\top u \label{eq:zeroth_order_approx} \\
&= f_i(x) + \sigma \nabla_x f_i(x)^\top u + \frac{\sigma^2}{2} u^\top \grad_x^2 f_i(\tilde{x}) u
,
\label{eq:first_order_approx}
\end{align}
where $\nabla_x$ denotes the gradient wrt the input of $f$, and $\hat{x}$ and $\tilde{x}$ are points lying on the line interval connecting $x$ and $x+\sigma u$.
Plugging (\ref{eq:first_order_approx}) into $A$ and (\ref{eq:zeroth_order_approx}) into $B$ and $C$ gives
\begin{align*}
&2\sigma\left(
u_i \nabla\left(f_i(x) + \sigma \nabla_x f_i(x)^\top u + \frac{\sigma^2}{2} u^\top \grad_x^2 f_i(\tilde{x}) u\right)
+ \sigma
\left(f_i(x) + \sigma \nabla_x f_i(\hat{x})^\top u \right)
\nabla\left(f_i(x) + \sigma \nabla_x f_i(\hat{x})^\top u \right)
\right) \\
&=
2\sigma u_i \nabla f_i(x) +
2\sigma^2 u_i \nabla \nabla_x f_i(x)^\top u +
\sigma^3 u_i \nabla u^\top \grad_x^2 f_i(\tilde{x}) u \\
&\quad +
2\sigma^2 f_i(x)\nabla f_i(x) +
2\sigma^3 f_i(x) \nabla\nabla_x f_i(\hat{x})^\top u +
2\sigma^3 \nabla_x \left(f_i(\hat{x})^\top u\right) \nabla f_i(x) +
2\sigma^4 \nabla_x \left(f_i(\hat{x})^\top u\right) \nabla\nabla_x f_i(\hat{x})^\top u
.
\end{align*}
Under some regularity conditions (DOM-style assumptions),
marginalizing out $u$ and taking $\sigma$ to be arbitrarily small yield
\begin{align*}
\mathbb{E}_u[ \Delta ]
&= \sum_{i} 2\sigma^2 \nabla \frac{\partial}{\partial x_i} f_i(x) + 2\sigma^2 f_i(x)\nabla f_i(x) + o(\sigma^2) \\
&= 2\sigma^2 \nabla\left(\tr(\nabla_x f(x)) + \frac{1}{2} ||f(x)||^2\right) + o(\sigma^2)
.
\end{align*}
In fact, we note that the leading term is $2\sigma^2$ times the gradient of the implicit score matching objective (Theorem 1, \citet{hyvarinen2005estimation}), but it vanishes at a rate of $\mathcal{O}(\sigma^2)$ as $\sigma\rightarrow0$.
For the second moment,
similarly,
$$\mathbb{E}_u[\Delta \Delta^\top] =
4\sigma^2 \sum_i \nabla f_i(x) \nabla f_i(x)^\top + o(\sigma^2).$$
As a result,
$$\frac{\mathbb{E}[\Delta]}{\sqrt{\mathrm{Var}(\Delta)}}
=\frac{\mathbb{E}[\Delta]}{\sqrt{
\mathbb{E}[\Delta\Delta^\top] - \mathbb{E}[\Delta]\mathbb{E}[\Delta]^\top
}}
=\frac{\mathcal{O}(\sigma^2)}{\sqrt{\mathcal{O}(\sigma^2) - \mathcal{O}(\sigma^4)}}
= \mathcal{O}(\sigma)
.
$$
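
This scaling can be checked empirically by estimating the per-parameter mean and standard deviation of $\Delta$ over draws of $u$; a sketch with an arbitrary network \texttt{f} follows.
\begin{verbatim}
import torch

def grad_snr(f, x, sigma, n=1000):
    # Monte-Carlo estimate of E[Delta] / std(Delta), elementwise over parameters.
    grads = []
    for _ in range(n):
        u = torch.randn_like(x)
        loss = ((u + sigma * f(x + sigma * u)) ** 2).sum()
        g = torch.autograd.grad(loss, list(f.parameters()))
        grads.append(torch.cat([gi.flatten() for gi in g]))
    G = torch.stack(grads)
    return G.mean(dim=0) / G.std(dim=0)  # expected to scale as O(sigma)
\end{verbatim}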
\section{Experiment: Error analysis}
\subsection{Main experiments}
\label{app:error_analysis}
\begin{figure}[h]
\centering
\includegraphics[width=0.75\textwidth]{figs/1d/analytic_dae_mog.png}
\vspace*{-0.5cm}
\caption{Left: Density function of mixture of Gaussians.
Right: gradient of the log density function (dotdash line) and gradient approximations using optimal DAE with different $\sigma$ values (solid lines). }
\label{fig:1d-mog-data-exapmles}
\end{figure}
\para{Dataset and optimal gradient approximator}
As we have described in Section \ref{sec:approx_err_ardae}, we use a mixture of two Gaussians to analyze the approximation error (see Figure \ref{fig:1d-mog-data-exapmles}, left). Formally, we define $p(x) = 0.5 \cN(x; 2, 0.25) + 0.5 \cN(x; -2, 0.25)$.
For notational convenience, we let $p_1$ and $p_2$ be the density functions of these two Gaussians, respectively. %
We obtain $\nabla_x \log p(x)$ by differentiating $\log p(x)$ wrt $x$ using an auto-differentiation library such as PyTorch \citep{paszke2017automatic}.
With some elementary calculation, we can expand the formula of the optimal gradient approximator $f^*$ as
\eqst{
f^*(x;\sigma)
= \frac{-\eE_u[p(x-\sigma u)u]}{\sigma \eE_u[p(x-\sigma u)]}
= \frac{-\sum_{i=1}^{2} S'_i \mu'_i}{ \sigma \sum_{i=1}^{2} S'_i }
,
}
where, for $i \in \{1, 2\}$ with $\mu_1 = -2$ and $\mu_2 = 2$,
\eqst{
S'_i = \frac{1}{\sqrt{2 \pi (0.5^2/\sigma^2 + 1)}} \exp \lbp -\frac{\lbp (x-\mu_i)/\sigma \rbp^2}{2(0.5^2/\sigma^2 + 1)} \rbp
, \qquad
\mu'_i = \frac{(x-\mu_i)/\sigma}{0.5^2/\sigma^2 + 1}
, \qquad
\sigma'^2_i = \frac{0.5^2/\sigma^2}{0.5^2/\sigma^2 + 1}
.
}
\begin{proof} The numerator $\eE_u[p(x-\sigma u)u]$ can be rewritten as follows:
\eqst{
\eE_u[p(x-\sigma u)u]
&= \int \lbp 0.5 p_1(x-\sigma u) + 0.5 p_2(x-\sigma u) \rbp p(u) u \,du %
= \f{0.5}{\sigma} \sum_{i=1}^{2} S'_i \int \cN(u; \mu'_i, \sigma'^2_i) u \,du %
= \f{0.5}{\sigma} \sum_{i=1}^{2} S'_i \mu'_i %
,
}
with $S'_i$, $\mu'_i$, and $\sigma'^2_i$ as defined above.
The second equality comes from the fact that $p_1$, $p_2$, and $p(u)$ are all normal densities: rescaling gives $p_i(x-\sigma u) = \f{1}{\sigma} \cN \lbp u; (x-\mu_i)/\sigma, 0.5^2/\sigma^2 \rbp$, and multiplying two Gaussian densities in $u$ yields
\eqst{
p_i(x-\sigma u)p(u)
= \f{1}{\sigma} \cN \lbp u; (x-\mu_i)/\sigma, 0.5^2/\sigma^2 \rbp \cN(u; 0, 1) %
= \f{1}{\sigma} S'_i \cN(u; \mu'_i, \sigma'^2_i) %
.
}
Similarly, we can rewrite the denominator as $\eE_u[p(x-\sigma u)] = \f{0.5}{\sigma} \sum_{i=1}^{2} S'_i$.
\end{proof}
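
These expressions can be cross-checked numerically: the sketch below evaluates $f^*$ directly from its defining expectation by Monte Carlo and compares it with the exact score of the MoG obtained by auto-differentiation.
\begin{verbatim}
import torch

def log_p(x):
    # log of 0.5 * N(x; 2, 0.5^2) + 0.5 * N(x; -2, 0.5^2)
    comp = torch.stack([
        torch.distributions.Normal(2.0, 0.5).log_prob(x),
        torch.distributions.Normal(-2.0, 0.5).log_prob(x),
    ])
    return torch.logsumexp(comp, dim=0) - torch.log(torch.tensor(2.0))

def f_star(x, sigma, n=200000):
    # f*(x; sigma) = -E_u[p(x - sigma u) u] / (sigma E_u[p(x - sigma u)])
    u = torch.randn(n)
    w = torch.exp(log_p(x - sigma * u))
    return -(w * u).mean() / (sigma * w.mean())

x = torch.tensor(1.0, requires_grad=True)
log_p(x).backward()
print(x.grad.item(), f_star(x.detach(), 0.05).item())  # nearly identical
\end{verbatim}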
\para{Experiments}
For AR-DAE, we indirectly parameterize it as the gradient of some scalar function (which can be thought of as an unnormalized log-density function); \emph{i.e.} we define a scalar function and use its gradient wrt the input vector. The same trick has also been employed in recent work by \citet{saremi2018deep, saremi2019neural}. We use a network with the following configuration\footnote{$[d_{\tt{input}}, d_{\tt{output}}]$ denotes a fully-connected layer whose input and output feature sizes are $d_{\tt{input}}$ and $d_{\tt{output}}$, respectively.}: $[2+1, 256]$ + $[256, 256] \times 2$ + $[256, 1]$, with the {\tt Softplus} activation function. We use the same network architecture for {\it resDAE}, except that it does not condition on $\sigma$. For {\it regDAE}, the network is set to reconstruct the input.
All models are trained for 10k iterations with a minibatch size of 256.
We use the Adam optimizer for all models, with the default $\beta_1=0.9$ and $\beta_2=0.999$.
For all models, the learning rate is initially set to 0.001 and is reduced by half every 1k iterations during training.
For {\it regDAE} and {\it resDAE}, we train models individually for every $\sigma$ value in Figure \ref{fig:dae-1d-err}.
For {\it regDAE\textsubscript{\it annealed}} and {\it resDAE\textsubscript{\it annealed}}, we anneal $\sigma$ from 1 to the target value.%
For AR-DAE, $\delta$ is set to 0.05 and we sample 10 $\sigma$'s from $N(0, \delta^2)$ for each iteration.
We train all models five times and present the mean and its standard error in the figures.
\subsection{Symmetrizing the distribution of $\sigma$}
In Section \ref{subsec:ardae}, we argue that neural networks are not suitable for extrapolation (as opposed to interpolation), which motivates the use of a symmetric prior over $\sigma$.
To contrast the difference, %
we sample $\sigma \sim N(0, \delta^2)$ and compare two different types of $\sigma$-conditioning: (1) conditioning on $\sigma$, and (2) conditioning on $|\sigma|$. We use the same experiment settings as in the previous section, %
but we use a hypernetwork \citep{HaDL17/iclr} that takes $\sigma$ (resp. $|\sigma|$) as input and outputs the parameters of AR-DAE, to force AR-DAE to be more dependent on the value of $\sigma$ (resp. $|\sigma|$). The results are shown in Figure \ref{fig:dae-1d-err-intra-vs-extra}.
We see that the two conditioning methods result in two distinct approximation behaviors. First, when AR-DAE only observes positive values, it fails to extrapolate to $\sigma$ values close to 0. When a symmetric $\sigma$ distribution is used, the approximation error of AR-DAE varies more smoothly. Second, we notice that the symmetric $\sigma$ distribution biases $f_{ar}$ to focus more on small $\sigma$ values. %
Finally, the asymmetric distribution helps AR-DAE reduce the approximation error for some $\sigma$'s. We speculate that AR-DAE with the asymmetric $\sigma$ distribution is twice as likely to observe small $\sigma$ values during training, which improves the approximation. In general, we observe that the stability of the approximation is important for our applications, in which AR-DAE needs to adapt constantly in the face of non-stationary distributions. %
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{figs/1d/l2e_gt_vs_r_intra_vs_extra_10_legend.png}
\vspace*{-0.2cm}
\includegraphics[width=0.5\textwidth]{figs/1d/l2e_gt_vs_r_intra_vs_extra_10.png}
\vspace*{-0.4cm}
\caption{Comparison of two $\sigma$-conditioning methods to approximate the log density gradient of the 1-D MoG. AR-DAE: conditioning on $\sigma$. AR-DAE ($|\sigma|$): conditioning on $|\sigma|$. $\sigma$ is sampled from $N(0, \delta^2)$ for all experiments.
}
\label{fig:dae-1d-err-intra-vs-extra}
\end{figure}
\section{Experiment: Energy Fitting}
\label{appendix:energy-fitting}
\begin{table*}[!h]
\centering
\begin{tabular}{l}
\toprule
\textbf{Potential} $U(\bz)$ \\
\midrule
\textbf{1}: $\f{1}{2} \lbp \f{\lbV \bz \rbV - 2}{0.4}\rbp^2
- \ln \lbp
e^{-\f{1}{2} \lbs \f{z_1-2}{0.6} \rbs^2}
+ e^{-\f{1}{2} \lbs \f{z_1+2}{0.6} \rbs^2}
\rbp$ \\
\textbf{2}: $\f{1}{2} \lbp \f{z_2 - w_1(\bz)}{0.4}\rbp^2$ \\
\textbf{3}: $
- \ln \lbp
e^{-\f{1}{2} \lbs \f{z_2 - w_1(\bz)}{0.35} \rbs^2}
+ e^{-\f{1}{2} \lbs \f{z_2 - w_1(\bz) + w_2(\bz)}{0.35} \rbs^2}
\rbp$ \\
\textbf{4}: $
- \ln \lbp
e^{-\f{1}{2} \lbs \f{z_2 - w_1(\bz)}{0.4} \rbs^2}
+ e^{-\f{1}{2} \lbs \f{z_2 - w_1(\bz) + w_3(\bz)}{0.35} \rbs^2}
\rbp$ \\
\midrule
where $w_1(\bz) = \sin \lbp \f{2 \pi z_1}{4} \rbp$,
$w_2(\bz) = 3 e^{-\f{1}{2} \lbs \f{z_1 - 1}{0.6} \rbs^2}$,
$w_3(\bz) = 3 \sigma \lbp \f{z_1 - 1}{0.3}\rbp$,
$\sigma(x) = \f{1}{1+e^{-x}}$.
\\
\bottomrule
\end{tabular}
\caption{
The target energy functions introduced in \citet{RezendeM15/icml}.
}
\label{tab:target-energy-functions}
\end{table*}
\subsection{Main experiments}
Parametric densities trained by minimizing the reverse KL divergence tend to avoid ``false positives'', a well-known issue referred to as the zero-forcing property \citep{minka2005divergence}. %
To deal with this issue, we minimize a modified objective:
\eq{
\label{eq:reverse-kl-alpha}
{\kld}_{\alpha}(p_g(x) || p_{\tt{target}}(x)) = - H(p_g(x)) - \alpha \eE_{x \sim p_g(x)} \lbs \log p_{\tt{target}}(x) \rbs
,
}
where $\alpha$ is annealed from a small value to 1.0 throughout training.
This slight modification of the objective function ``convexifies'' the loss landscape and makes it easier for the parametric densities to search for the lower energy regions. For AR-DAE training, we use Equation (\ref{eq:ardae}) with a fixed prior variance $\delta = 0.1$.
For all experiments, we use a three-hidden-layer MLP for both the hierarchical and the implicit distributions. More specifically, the generator network for the hierarchical distribution has the following configuration: $[d_z, 256]$ + $[256, 256] \times 2$ + $[256, 2] \times 2$, where $d_z$ indicates the dimension of the prior distribution $p(z)$ and is set to 2. The last two layers produce the mean and log-variance\footnote{diagonal elements of the covariance matrix in log-scale} of the conditional distribution $p_g(x|z)$. For the auxiliary variational method, the same network architecture is used for $h(z|x)$ in Equation (\ref{eq:ent-lower-bound-aux}). When we train the hierarchical distribution with AR-DAE, we additionally clamp the log-variance to be higher than $-4$. The generator of the implicit distribution is defined similarly: $[d_z, 256]$ + $[256, 256] \times 2$ + $[256, 2]$, but with $d_z$ set to 10. The {\tt ReLU} activation function is used for all but the final output layer.
For AR-DAE, we directly parameterize the residual function $f_{ar}$. We use the following network architecture: $[2, 256]$ + $[256, 256] \times 2$ + $[256, 2]$. {\tt Softplus} activation function is used.
Each model is trained for 100,000 iterations with a minibatch size of 1024. We update AR-DAE $N_d$ times per generator update. For the main results, we set $N_d = 5$.
We use the Adam optimizer for both the generator and AR-DAE, where $\beta_1=0.5$ and $\beta_2=0.999$.
The learning rate for the generator is initially set to 0.001 and is reduced by 0.5 for every 5000 iterations during training. AR-DAE's learning rate is set to 0.001.
To generate the figure, we draw 1M samples from each model to fill up 256 equal-width bins of the 2D histogram.
\subsection{Effect of the number of updates ($N_d$) of the gradient approximator}
In addition to the main results, we also analyze how the number of updates of AR-DAE per generator update affects the quality of the generator. We use the same implicit generator and AR-DAE described in the main paper, but vary $N_d$ from 1 to 5. The result is illustrated in Figure \ref{fig:result-unnorm-density-estimation-num-updates}. In principle, the more often we update AR-DAE, the more accurate (or up-to-date) the gradient approximation will be. This is corroborated by the improved quality of the trained generator.
\begin{figure}[!h]%
\centering
\begin{subfigure}[b]{0.0896\textwidth}
\centering
\begin{subfigure}[b]{\textwidth} %
\parbox{0.1\textwidth}{\subcaption*{\textbf{1}}}%
\hspace{0.5em}%
\parbox{0.1\textwidth}{\includegraphics[width=10\linewidth]{figs/fit/fit-po1-data.png}}
\parbox{0.1\textwidth}{\subcaption*{\textbf{2}}}%
\hspace{0.5em}%
\parbox{0.1\textwidth}{
\includegraphics[width=10\linewidth]{figs/fit/fit-po2-data.png}
}
\parbox{0.1\textwidth}{\subcaption*{\textbf{3}}}%
\hspace{0.5em}%
\parbox{0.1\textwidth}{
\includegraphics[width=10\linewidth]{figs/fit/fit-po3-data.png}
}
\parbox{0.1\textwidth}{\subcaption*{\textbf{4}}}%
\hspace{0.5em}%
\parbox{0.1\textwidth}{
\includegraphics[width=10\linewidth]{figs/fit/fit-po4-data.png}
}
\vspace*{-0.05cm}
\caption*{\small $\quad\,\, \frac{1}{Z}e^{-U(x)}$}
\vspace*{-0.25cm}
\caption*{\scriptsize }
\end{subfigure}
\vspace*{-0.35cm}
\caption{}
\end{subfigure}
\hspace{3em}
\begin{subfigure}[b]{0.28\textwidth}
\centering
\begin{subfigure}[b]{0.32\textwidth} %
\includegraphics[width=\linewidth]{figs/fit/po1-ardae-im-nd1-nstd10-mnh3.png}
\includegraphics[width=\linewidth]{figs/fit/po2-ardae-im-nd1-nstd10-mnh3.png}
\includegraphics[width=\linewidth]{figs/fit/po3-ardae-im-nd1-nstd10-mnh3.png}
\includegraphics[width=\linewidth]{figs/fit/po4-ardae-im-nd1-nstd10-mnh3.png}
\vspace*{-0.05cm}
\caption*{\small $N_d=1$}
\vspace*{-0.2cm}
\caption*{\scriptsize}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth} %
\includegraphics[width=\linewidth]{figs/fit/po1-ardae-im-nd2-nstd10-mnh3.png}
\includegraphics[width=\linewidth]{figs/fit/po2-ardae-im-nd2-nstd10-mnh3.png}
\includegraphics[width=\linewidth]{figs/fit/po3-ardae-im-nd2-nstd10-mnh3.png}
\includegraphics[width=\linewidth]{figs/fit/po4-ardae-im-nd2-nstd10-mnh3.png}
\vspace*{-0.05cm}
\caption*{\small $N_d=2$}
\vspace*{-0.2cm}
\caption*{\scriptsize}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth} %
\includegraphics[width=\linewidth]{figs/fit/po1-ardae-im-nd5-nstd10-mnh3.png}
\includegraphics[width=\linewidth]{figs/fit/po2-ardae-im-nd5-nstd10-mnh3.png}
\includegraphics[width=\linewidth]{figs/fit/po3-ardae-im-nd5-nstd10-mnh3.png}
\includegraphics[width=\linewidth]{figs/fit/po4-ardae-im-nd5-nstd10-mnh3.png}
\vspace*{-0.05cm}
\caption*{\small $N_d=5$}
\vspace*{-0.2cm}
\caption*{\scriptsize}
\end{subfigure}
\vspace*{-0.35cm}
\caption{}
\end{subfigure}
\hspace{2em}
\begin{subfigure}[b]{0.28\textwidth}
\centering
\begin{subfigure}[b]{0.32\textwidth} %
\includegraphics[width=\linewidth]{figs/fit/po1-ardae-im-nd5-nstd10-mnh3-nz2.png}
\includegraphics[width=\linewidth]{figs/fit/po2-ardae-im-nd5-nstd10-mnh3-nz2.png}
\includegraphics[width=\linewidth]{figs/fit/po3-ardae-im-nd5-nstd10-mnh3-nz2.png}
\includegraphics[width=\linewidth]{figs/fit/po4-ardae-im-nd5-nstd10-mnh3-nz2.png}
\vspace*{-0.05cm}
\caption*{\small $d_z=2$}
\vspace*{-0.2cm}
\caption*{\scriptsize}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth} %
\includegraphics[width=\linewidth]{figs/fit/po1-ardae-im-nd5-nstd10-mnh3-nz3.png}
\includegraphics[width=\linewidth]{figs/fit/po2-ardae-im-nd5-nstd10-mnh3-nz3.png}
\includegraphics[width=\linewidth]{figs/fit/po3-ardae-im-nd5-nstd10-mnh3-nz3.png}
\includegraphics[width=\linewidth]{figs/fit/po4-ardae-im-nd5-nstd10-mnh3-nz3.png}
\vspace*{-0.05cm}
\caption*{\small $d_z=3$}
\vspace*{-0.2cm}
\caption*{\scriptsize}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth} %
\includegraphics[width=\linewidth]{figs/fit/po1-ardae-im-nd5-nstd10-mnh3.png}
\includegraphics[width=\linewidth]{figs/fit/po2-ardae-im-nd5-nstd10-mnh3.png}
\includegraphics[width=\linewidth]{figs/fit/po3-ardae-im-nd5-nstd10-mnh3.png}
\includegraphics[width=\linewidth]{figs/fit/po4-ardae-im-nd5-nstd10-mnh3.png}
\vspace*{-0.05cm}
\caption*{\small $d_z=10$}
\vspace*{-0.2cm}
\caption*{\scriptsize}
\end{subfigure}
\vspace*{-0.35cm}
\caption{}
\end{subfigure}
\vspace*{-0.4cm}
\caption{
Fitting energy functions with implicit model using AR-DAE.
(a) Target energy functions.
(b) Varying number of AR-DAE updates per model update. %
(c) Varying the dimensionality of the noise source $d_z$.
}
\label{fig:result-unnorm-density-estimation-num-updates}
\end{figure}
\subsection{Effect of the noise dimension of implicit model}
In this section, we study the effect of varying the dimensionality of the noise source of the implicit distribution.
We use the same experiment settings as in the previous section. %
In Figure \ref{fig:result-unnorm-density-estimation-num-updates} (right panel), we see that the generator has a degenerate distribution when $d_z=2$, and the degeneracy can be remedied by increasing $d_z$. %
\section{Experiment: variational autoencoders}
\label{appendix:vae}
\subsection{VAE with the entropy gradient approximator}
Let $p_{\omega}(x|z)$ be the conditional likelihood function parameterized by $\omega$ and $p(z)$ be the prior distribution.
We let $p(z)$ be the standard normal. As described in Section \ref{sec:vae}, we would like to maximize the ELBO (denoted as $\cL_{ELBO}$) by jointly training $p_{\omega}$ and the amortized variational posterior $q_{\phi}(z|x)$. Similar to Appendix \ref{sec:appendix-entropy-gradient}, the posterior $q_{\phi}(z|x)$ can be induced by a mapping $g_{\phi}: \epsilon, x \mapsto z$ with a prior $q(\epsilon)$ that does not depend on the parameter $\phi$. %
The gradient of $\cL_{ELBO}$ wrt the parameters of the posterior can be written as,
\eq{
\label{eq:elbo-grad-ardae}
\nabla_{\phi} \cL_{ELBO}(q) = \eE_{
\substack{z \sim q_{\phi}(z|x) \\
x \sim p_{\tt{data}}(x)}}
\lbs \lbs
\nabla_{z} \log p_{\omega}(x, z) - \nabla_{z} \log q_{\phi}(z|x)
\rbs^{\intercal} \mathbf{J}_{\phi}g_{\phi}(\epsilon, x) \rbs
.
}
We plug in AR-DAE to approximate the gradient of the log-density, and draw a Monte-Carlo sample of the following quantity to estimate the gradient of the ELBO
\eq{
\label{eq:elbo-grad-ardae-approx}
\hat{\nabla}_{\phi} \cL_{ELBO}(q) \doteq \eE_{
\substack{z \sim q_{\phi}(z|x) \\
x \sim p_{\tt{data}}(x)}}
\lbs \lbs
\nabla_{z} \log p_{\omega}(x, z) - f_{ar,\theta}(z;x, \sigma)|_{\sigma=0}
\rbs^{\intercal} \mathbf{J}_{\phi}g_{\phi}(\epsilon, x) \rbs
.
}
\subsection{AR-DAE}
\label{app:subsec:ardae-implementations}
To approximate $\nabla_{z} \log q_{\phi}(z|x)$, we condition AR-DAE on both the input $x$ as well as the noise scale $\sigma$. %
We also adaptively choose the prior variance $\delta^2$ for different data points instead of fixing it to be a single value.
In addition, we make the following observations.
(1) The posteriors $q_{\phi}$ are usually not centered, but the entropy gradient approximator only needs to model the dispersion of the distribution. %
(2) The variance of the approximate posterior can be very small during training, which %
might pose a challenge for optimization.
To remedy these, we modify the input of AR-DAE to be $\tilde{z} \doteq s (z - b(x))$, where $s$ is a scaling factor and $b(x)$ is a pseudo mean. Ideally, we would like to set $b(x)$ to be $\eE_{q(z|x)}[z]$. %
Instead, we let
$b(x) \doteq g(0, x)$, as $0$ is the mode/mean of the noise source. %
The induced distribution of $\tilde{z}$ will be denoted by $q_{\phi}(\tilde{z}|x)$. By the change-of-variable density formula, we have %
$\nabla_{z} \log q(z|x) = s \nabla_{\tilde{z}} \log q(\tilde{z}|x)$.
This allows us to train AR-DAE with a better-conditioned distribution and the original gradient can be recovered by rescaling.
In summary, we optimize the following objective:
\eq{
\label{eq:ardae++}
\cL_{\tt{ar}}\lbp f_{ar} \rbp
= \eE_{\substack{
x \sim p(x) \\
\tilde{z} \sim q(\tilde{z}|x) \\
u \sim N(0, I) \\
\sigma|x \sim N(0, \delta(x)^2) \\
}}
\lbs \lbV u + \sigma f_{ar}(\tilde{z} + \sigma u; x, \sigma) \rbV^2 \rbs
,
}
where $\delta(x) \doteq \delta_{\tt{scale}} S_{z|x}$ and $S_{z|x}$ is the sample standard deviation of $z$ given $x$. We use $n_z$ samples per data point to estimate $S_{z|x}$. $\delta_{\tt{scale}}$ is chosen as a hyperparameter.
In the experiments, we either directly parameterize the residual function of AR-DAE or indirectly parameterize it as the gradient of some scalar function. We parameterize $f_{ar}(\tilde{z}; x, \sigma)$ as a multi-layer perceptron (MLP). The latent $z$ and the input $x$ are encoded separately and then concatenated with $\sigma$ (denoted by ``mlp-concat'' in Table \ref{table:vae-hyperparameters}). The MLP encoders have $m_{\tt{enc}}$ hidden layers. The concatenated representation is fed into a fully-connected neural network with $m_{\tt{fc}}$ hidden layers. Instead of encoding the input $x$ directly, we either use a hidden representation of the variational posterior $q$ or $b(x)$. We use $d_h$ hidden units for all MLPs. We stress that the learning signal from $\cL_{\tt{ar}}\lbp f_{ar} \rbp$ is not backpropagated to the posterior.
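
The standardization and the adaptive noise scale are summarized by the sketch below, where the scaling factor $s$ is folded into the network, \texttt{x\_enc} denotes whichever encoding of $x$ is fed to AR-DAE, and shapes and names are illustrative only.
\begin{verbatim}
import torch

def ardae_vae_loss(f_ar, z, b_x, x_enc, delta_scale=0.1):
    # z: (n_z, B, d_z) posterior samples; b_x: (B, d_z) pseudo mean g(0, x).
    z_tilde = z - b_x                            # center the latents
    # Adaptive prior scale: delta(x) = delta_scale * sample std of z given x.
    delta = delta_scale * z.std(dim=0).mean(dim=-1, keepdim=True)  # (B, 1)
    sigma = delta * torch.randn(z.size(0), *delta.shape)           # (n_z, B, 1)
    u = torch.randn_like(z_tilde)
    # f_ar is assumed to broadcast x_enc over the n_z sample dimension.
    pred = f_ar(z_tilde + sigma * u, x_enc, sigma)
    return ((u + sigma * pred) ** 2).sum(dim=-1).mean()
\end{verbatim}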
\begin{algorithm}[ht]
\caption{VAE AR-DAE}
\label{alg:vae_ardae}
\begin{algorithmic}
\STATE {\bfseries Input:}
Dataset $\cD$;
mini-batch size $n_{\tt{data}}$;
sample size $n_{z}$;%
prior variance $\delta^2$;
learning rates $\alpha_{\theta}$ and $\alpha_{\phi, \omega}$ \\
\STATE Initialize encoder and decoder $p_\omega(x|z)$ and $q_\phi(z|x)$ %
\STATE Initialize AR-DAE $f_{ar, \theta}(z|x)$
\REPEAT
\STATE Draw $n_{\tt{data}}$ datapoints from $\cD$
\FOR{$k = 1 \dots N_d$}
\STATE Draw $n_{z}$ latents per datapoint from $z \sim q_{\phi}(z|x)$ %
\STATE $\delta_i \gets \delta_{\tt{scale}} S_{z|x_i} \textrm{ for } i=1,\dots, n_{\tt{data}}$
\STATE Draw $n_{\sigma}$ samples of $\sigma_i$ per $z$ from $\sigma_i \sim N(0,\delta_i^2)$ %
\STATE Draw $n_{\tt{data}}n_{z}n_{\sigma}$ samples of $u$ from $u \sim N(0, I)$
\STATE Update $\theta$ using gradient $\nabla_{\theta} \cL_{\tt{ar}}$ with learning rate $\alpha_\theta$
\ENDFOR
\STATE $z \sim q_{\phi}(z|x)$
\STATE Update $\omega$ using gradient $\nabla_{\omega} \mathcal{L}_{ELBO}$ with learning rate $\alpha_{\phi, \omega}$
\STATE Update $\phi$ using gradient $\hat{\nabla}_{\phi} \mathcal{L}_{ELBO}$ with learning rate $\alpha_{\phi, \omega}$, whose entropy gradient is approximated using $f_{ar, \theta}(z|x)$.
\UNTIL{some stopping criterion is met}
\end{algorithmic}
\end{algorithm}
\subsection{Experiments}
We summarize the architecture details and hyperparameters in Tables~\ref{table:vae-networks} and \ref{table:vae-hyperparameters}, respectively.
\label{app:subsec:vae-experiments}
\para{Mixture of Gaussian experiment}
For the MoG experiment, we use 25 Gaussians centered on an evenly spaced 5 by 5 grid on $[-4, 4] \times [-4, 4]$ with a variance of $0.1$.
Each model is trained for 16 epochs: approximately 4000 updates with a minibatch size of 512.
For all experiments, we use a two-hidden-layer MLP to parameterize the conditional diagonal Gaussian $p(x|z)$. For the implicit posterior $q$, the input $x$ and the $d_{\epsilon}$-dimensional noise are separately encoded with one fully-connected layer, and then the concatenation of their features will be fed into a two-hidden-layer MLP to generate the 2-dimensional latent $z$.
The size of the noise source $\epsilon$ in the implicit posterior, \emph{i.e.} $d_{\epsilon}$, is set to 10.
\para{MNIST}
We first describe the details of the network architectures and then continue to explain training settings. For the {\it MLP} experiments, we use a one-hidden-layer MLP for the diagonal Gaussian decoder $p(x|z)$. %
For the diagonal Gaussian posterior $q(z|x)$, aka the vanilla VAE, the input $x$ is fed into a fully-connected layer, and the resulting feature is used to predict the mean and the diagonal of the covariance matrix of the multivariate Gaussian distribution. For the hierarchical posterior, both $q(z_0|x)$ and $q(z|z_0,x)$ are one-hidden-layer MLPs with diagonal Gaussian outputs, similar to the vanilla VAE. For the implicit posterior, the input is first encoded and then concatenated with noise before being fed into another MLP to generate $z$.
For {\it Conv}, the decoder starts with a one-fully connected layer followed by three deconvolutional layers. The encoder has three convolutional layers and is modified depending on the types of the variational posteriors, similar to {\it MLP}. For {\it ResConv}, five convolutional or deconvolutional layers with residual connection are used for the encoder and the decoder respectively. %
Following \citet{maaloe2016auxiliary, ranganath2016hierarchical}, we maximize the following lower bound to train the hierarchical variational posterior with an auxiliary variable (HVI aux):
\eqst{
\log p(x)
\ge \eE_{z \sim q(z|x)} \lbs \log p(x, z) - \log q(z|x) \rbs
\ge \eE_{\substack{
z_0 \sim q(z_0|x) \\
z \sim q(z|z_0, x) \\
}} \lbs \log p(x, z) - \log q(z_0|x) - \log q(z|z_0, x) + \log h(z_0 | z, x) \rbs
.
}
For the dynamically binarized MNIST dataset, we adopt the experiment settings of \citet{MeschederNG17/icml}. The MNIST data consists of 50k train, 10k validation, and 10k test images. In addition to the original training images, 5k randomly selected validation images are added to the training set.
Early stopping is performed based on the evaluation on the remaining 5k validation data points. The maximum number of iterations for the training is set to 4M.
For the statically binarized MNIST dataset, we use the original data split. Early stopping as well as hyperparameter search are performed based on the estimated log marginal probability on the validation set. We retrain the model with the selected hyperparameters with the same number of updates on the combination of the train+valid sets, and report the test set likelihood. We also apply polyak averaging \citep{polyak1992acceleration}.
We evaluate $\log p(x)$ of the learned models using importance sampling \cite{BurdaGS15/iclr} (with $n_{\textrm{eval}}$ samples). For the baseline methods, we use the learned posteriors as proposal distributions to estimate the log probability.
When a posterior is trained with AR-DAE, we first draw $n_{\textrm{eval}}$ $z$'s from the posterior given the input $x$, and then use the sample mean and covariance matrix to construct a multivariate Gaussian distribution. We then use this Gaussian distribution as the proposal.
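
A sketch of this evaluation protocol is given below; \texttt{log\_joint} computes $\log p(x,z)$, and the Gaussian proposal is fitted to \texttt{z\_samples} drawn from the trained posterior (a small jitter keeps the covariance well-conditioned).
\begin{verbatim}
import torch

def log_px_importance(x, log_joint, z_samples, n_eval=1000):
    # Fit a Gaussian proposal to posterior samples.
    mu = z_samples.mean(dim=0)
    cov = torch.cov(z_samples.t()) + 1e-5 * torch.eye(z_samples.size(-1))
    proposal = torch.distributions.MultivariateNormal(mu, cov)
    z = proposal.sample((n_eval,))
    log_w = log_joint(x, z) - proposal.log_prob(z)  # log importance weights
    return torch.logsumexp(log_w, dim=0) - torch.log(torch.tensor(float(n_eval)))
\end{verbatim}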
\section{Experiment: entropy-regularized reinforcement learning}
\label{appendix:reinforcement-learning}
\subsection{Soft actor-critic}
\label{appendix:subsec:sac}
\para{Notation} We consider an infinite-horizon Markov decision process (MDP) defined as a tuple $\lbp \States, \Actions, \Rewards, \penv, \gamma \rbp$ \citep{sutton1998introduction},
where $\States$, $\Actions$, $\Rewards$ are the spaces of state, action and reward, respectively, $\penv(s_{t+1}|s_t,a_t)$ and $\penv(s_0)$ represent the transition probability and the initial state distribution,
$r(s_t,a_t)$ is a bounded reward function, and $\gamma$ is a discount factor. We write $\tau$ as a trajectory resulting from interacting with the environment under some policy $\pi(a_t|s_t)$. %
Entropy-regularized reinforcement learning \citep{ziebart2010modeling} seeks a policy $\pi(a_t|s_t)$ that maximizes the following objective:
\eq{
\label{eq:ent-regul-rl}
\cL(\pi) = \eE_{\tau \sim \pi, \penv} \lbs \sum_{t=0}^{\infty} \gamma^{t} \lbp r(s_t, a_t) + \alpha H(\pi(\cdot | s_t)) \rbp \rbs
,
}
where $\alpha$ is an entropy regularization coefficient.
We define a soft state value function $V^{\pi}$ and a soft Q-function $Q^{\pi}$ as follows,
\begin{gather*}
V^{\pi}(s) = \eE_{\tau \sim \pi, \penv} \lbs \sum_{t=0}^{\infty} \gamma^{t} \lbp r(s_t, a_t) + \alpha H(\pi(\cdot | s_t)) \rbp \middle\vert s_0 = s \rbs \\
Q^{\pi}(s, a) = \eE_{\tau \sim \pi, \penv} \lbs r(s_0, a_0) + \sum_{t=1}^{\infty} \gamma^{t} \lbp r(s_t, a_t) + \alpha H(\pi(\cdot | s_t)) \rbp \middle\vert s_0 = s, a_0 = a \rbs
.
\end{gather*}
By using these definitions, we can rewrite $V^{\pi}$ and $Q^{\pi}$ as
$V^{\pi}(s) = \eE_{a \sim \pi} \lbs Q^{\pi}(s, a) \rbs + \alpha H(\pi(\cdot|s))$ and
$Q^{\pi}(s, a) = r(s, a) + \gamma \eE_{s' \sim \penv} \lbs V^{\pi}(s') \rbs$.
\para{Soft actor-critic}
One way to maximize (\ref{eq:ent-regul-rl}) is to minimize the following KL divergence,
\eqst{
\pi_{\tt{new}} = \argmin_{\pi} \kld \lbp \pi(\cdot | s_t) \middle\Vert \f{\exp \lbp Q^{\pi_{\tt{old}}}(s_t, \cdot) \rbp}{Z^{\pi_{\tt{old}}}(s_t)} \rbp %
,
}
where $Z^{\pi_{\tt{old}}}(s_t)$ is the normalizing constant $\int \exp \lbp Q^{\pi_{\tt{old}}}(s_t, a) \rbp da$. \citet{HaarnojaZAL18/icml} show that, for a finite state space, the entropy-regularized expected return is non-decreasing if the policy is updated by the above rule. In practice, however, we do not have access to the value functions, %
so \citet{HaarnojaZAL18/icml} propose
to update the policy by first approximating $Q^{\pi_{\tt{old}}}$ and $V^{\pi_{\tt{old}}}$ by some
parametric functions
$Q_{\omega}$ and $V_{\nu}$,
and training the policy by minimizing
\eqst{
\cL(\pi) = \eE_{s_t \sim \cD} \lbs
\kld \lbp
\pi(a_t|s_t) \middle\Vert \f{\exp \lbp Q_{\omega}(s_t, \cdot) \rbp}{Z_{\omega}(s_t)}
\rbp
\rbs
,
}
where $\cD$ is a replay buffer that stores all the %
past experience.
The soft Q-function and the soft state value function are trained by minimizing the following objectives,
\begin{gather*}
\cL(V_{\nu}) = \eE_{s_t \sim \cD} \lbs \f{1}{2} \lbp
V_{\nu}(s_t) - \eE_{\small a_t \sim \pi} \lbs Q_{\omega}(s_t, a_t) - \alpha \log \pi(a_t|s_t) \rbs \rbp^2 \rbs \nn\\
\cL(Q_{\omega}) = \eE_{s_t, a_t \sim \cD} \lbs \f{1}{2} \lbp
Q_{\omega}(s_t, a_t) - \hat{Q}(s_t, a_t)
\rbp^2 \rbs
,
\end{gather*}
where $\hat{Q}(s_t, a_t) \doteq r(s_t, a_t) + \gamma \eE_{s_{t+1} \sim \penv} [V_{\bar{\nu}}(s_{t+1})]$ and $V_{\bar{\nu}}$ is a target value network. For the target value network, SAC follows \citet{MnihKSRVBGRFOPB15/nature}: $V_{\bar{\nu}}$ is defined as a polyak-averaged model \citep{polyak1992acceleration} of $V_{\nu}$. Note that the target for $V_{\nu}$ is estimated from $Q_{\omega}$ via a one-sample Monte Carlo estimate, \emph{i.e.} $V_{\nu}(s_t) \doteq Q_{\omega}(s_t, a_t) - \alpha \log \pi(a_t|s_t)$ where $a_t \sim \pi(a_t| s_t)$. Moreover, we follow the common practice of using clipped double Q-functions \citep{hasselt2010double, FujimotoHM18/icml} in our implementations.
\subsection{SAC-AR-DAE and its implementations}
\label{app:sac-ar-dae}
\para{Main algorithm}
Our goal is to train an arbitrarily parameterized policy within the SAC framework. We apply AR-DAE to approximate the training signal for the policy. Similar to the implicit posterior distributions in the VAE experiments, the policy consists of a simple tractable noise distribution $\pi(\epsilon)$ and a mapping $g_{\phi}: \epsilon, s \mapsto a$. The gradient of $\cL(\pi)$ wrt the policy parameters can be written as
\eqst{
\nabla_{\phi} \cL(\pi) = \eE_{
\substack{s_t \sim \cD \\
\epsilon \sim \pi}}
\lbs \lbs \nabla_{a} \log \pi_{\phi} (a|s_t)|_{a=g_{\phi}(\epsilon, s_t)} - \nabla_{a} Q_{\omega}(s_t, a)|_{a=g_{\phi}(\epsilon, s_t)} \rbs^{\intercal} \mathbf{J}_{\phi}g_{\phi}(\epsilon, s_t) \rbs
.
}
Let $f_{ar,\theta}$ be the AR-DAE which approximates $\nabla_{a} \log \pi_{\phi} (a|s)$, trained using Equation (\ref{eq:ardae++}). Specifically for the SAC experiment, AR-DAE is indirectly parameterized as the gradient of an unnormalized log-density function $\psi_{ar, \theta}: a, s, \sigma \mapsto \eR$, as in
\eqst{
f_{ar,\theta}(a;s, \sigma) \doteq \nabla_{a} \psi_{ar, \theta}(a;s, \sigma)
.
}
As a result, $\log \pi(a|s)$ can also be approximated by using $\psi_{ar, \theta}$: $\log \pi(a|s) \approx \psi_{ar, \theta}(a;s, \sigma)|_{\sigma=0} - \log Z_{\theta}(s)$, where $Z_{\theta}(s) = \int \exp \lbp \psi_{ar, \theta}(a;s, \sigma)|_{\sigma=0} \rbp da$.
Using AR-DAE, we can modify the objective function $\cL(V_{\nu})$ to be
\eqst{
\hat{\cL}(V_{\nu}) = \eE_{s_t \sim \cD} \lbs \f{1}{2} \lbp
V_{\nu}(s_t) - \eE_{\small a_t \sim \pi} \lbs Q_{\omega}(s_t, a_t) -
\psi_{ar, \theta}(a_t;s_t, \sigma)|_{\sigma=0} \rbs
- \log Z_{\theta}(s_t)
\rbp^2 \rbs
.
}
The same applies to $\cL(Q_{\omega})$.
We also use the polyak-averaged target value network and one-sample Monte-Carlo estimate as done in SAC. Finally, the gradient signal for the policy can be approximated using AR-DAE:
\eqst{
\hat{\nabla}_{\phi} \cL(\pi) \doteq \eE_{
\substack{s_t \sim \cD \\
\epsilon \sim \pi}}
\lbs \lbs
f_{ar,\theta}(g_{\phi}(\epsilon, s_t);s_t, \sigma)|_{\sigma=0} - \nabla_{a} Q_{\omega}(s_t, a)|_{a=g_{\phi}(\epsilon, s_t)} \rbs^{\intercal} \mathbf{J}_{\phi}g_{\phi}(\epsilon, s_t) \rbs
.
}
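For clarity, a minimal PyTorch sketch of this update (ours; \texttt{g\_phi}, \texttt{f\_ar}, and \texttt{q\_net} are placeholder modules, and the AR-DAE output is treated as a constant wrt $\phi$):
\begin{verbatim}
import torch

def policy_grad_step(states, g_phi, f_ar, q_net, optimizer, d_eps=10):
    optimizer.zero_grad()
    eps = torch.randn(states.shape[0], d_eps)
    actions = g_phi(eps, states)              # reparameterized actions
    with torch.no_grad():
        score = f_ar(actions, states)         # approx. grad_a log pi(a|s)
    q = q_net(states, actions).sum()
    dq_da = torch.autograd.grad(q, actions, retain_graph=True)[0]
    # Vector-Jacobian product: backprop (score - dQ/da) through g_phi.
    actions.backward(score - dq_da)
    optimizer.step()
\end{verbatim}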
We summarize all the details in Algorithm \ref{alg:sac_ardae}.
\begin{algorithm}%
\caption{SAC-AR-DAE}
\label{alg:sac_ardae}
\begin{algorithmic}
\STATE {\bfseries Input:} Mini-batch size $n_{\tt{data}}$; replay buffer $\cD$; number of epochs $T$;
learning rates $\alpha_{\theta},\alpha_{\phi},\alpha_{\omega}, \alpha_{\nu}$\\
\STATE Initialize value function $V_\nu(s)$, critic $Q_\omega(s,a)$, policy $\pi_{\phi}(a|s)$, and AR-DAE $f_{ar, \theta}(a|s)$
\STATE Initialize replay buffer $\cD \gets \varnothing$
\FOR{$\text{epoch}=1,...,T$}
\STATE Initialize a state from $s_0 \sim \penv(s_0)$ %
\FOR{$t = 0 \dots $}
\STATE $a_t \sim \pi_{\phi}(\cdot|s_t)$
\STATE $(r_t, s_{t+1}) \sim \penv(\cdot|s_t,a_t)$ %
\STATE $\cD \gets \cD \cup \{(s_t,a_t,r_t,s_{t+1})\}$%
\FOR{each learning step}
\STATE Draw $n_{\tt{data}}$ tuples $(s_t,a_t,r_t,s_{t+1})$ from $\cD$
\FOR{$k = 0 \dots N_d$}
\STATE Draw $n_{a}$ actions per state from $a \sim \pi_{\phi}(a|s)$ %
\STATE $\delta_i \gets \delta_{\tt{scale}} S_{a|s_i} \textrm{ for } i=1,\dots, n_{\tt{data}}$
\STATE Draw $n_{\sigma}$ samples of $\sigma_i$ per action from $\sigma_i \sim N(0,\delta_i^2)$ %
\STATE Draw $n_{\tt{data}}n_{a}n_{\sigma}$ samples $u \sim N(0, I)$
\STATE Update $\theta$ using gradient $\nabla_{\theta} \mathcal{L}_{f_{ar}}$ with learning rate $\alpha_\theta$
\ENDFOR
\STATE Update $\nu$ using gradient $\nabla_{\nu} \hat{\cL}_{V}$ with learning rate $\alpha_\nu$ %
\STATE Update $\omega$ using gradient $\nabla_{\omega} \hat{\cL}_{Q}$ with learning rate $\alpha_\omega$ %
\STATE Update $\phi$ using gradient $\hat{\nabla}_{\phi} \mathcal{L}_{\pi}$ which is approximated with $f_{ar, \theta}(a|s)$
\STATE $\bar{\nu} \gets \tau \nu + (1-\tau)\bar{\nu}$ %
\ENDFOR
\ENDFOR
\ENDFOR
\end{algorithmic}
\end{algorithm}
\para{Bounded action space}
The action space of all of our environments is an open cube $(-1, 1)^{d_a}$,
where $d_a$ is the dimensionality of the action.
To implement the policy, we apply the hyperbolic tangent function. That is, $a := \tanh(g_{\phi}(\epsilon, s_t))$, where the output of $g_{\phi}$ (denoted as $\tilde{a}$) is in $(-\infty, \infty)$. %
Let $\tilde{a}_{i}$ be the $i$-th element of $\tilde{a}$.
By the change of variable formula, $\log \pi (a|s) = \log \pi(\tilde{a}|s) - \sum_{i=1}^{d_a} \log (1- \tanh^2(\tilde{a}_{i}))$.
In our experiments, we train AR-DAE on the pre-$\tanh$ action $\tilde{a}$. This implies that AR-DAE approximates $\nabla_{\tilde{a}} \log \pi(\tilde{a}|s)$.
We correct the change of volume induced by the tanh using %
\eqst{
\nabla_{\tilde{a}} \log \pi(a|s) = \nabla_{\tilde{a}} \log \pi(\tilde{a}|s) + 2\tanh(\tilde{a})
.
}
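The $2\tanh(\tilde{a})$ term follows from $\frac{d}{d\tilde{a}_i}\left[-\log(1-\tanh^2(\tilde{a}_i))\right] = 2\tanh(\tilde{a}_i)$, which can be sanity-checked numerically; a small NumPy check of ours:
\begin{verbatim}
import numpy as np

def neg_log_volume(a_tilde):
    # -sum_i log(1 - tanh(a_i)^2), the volume-correction term.
    return -np.sum(np.log(1.0 - np.tanh(a_tilde) ** 2))

a, h = np.random.randn(4), 1e-6
num_grad = np.array([
    (neg_log_volume(a + h * e) - neg_log_volume(a - h * e)) / (2 * h)
    for e in np.eye(4)])
assert np.allclose(num_grad, 2 * np.tanh(a), atol=1e-4)
\end{verbatim}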
To sum up,
the update of the policy follows the approximated gradient
\eqst{
\hat{\nabla}_{\phi} \cL(\pi) \doteq \eE_{
\substack{s_t \sim \cD \\
\epsilon \sim \pi}}
\lbs \lbs
f_{ar,\theta}(g_{\phi}(\epsilon, s_t);s_t, \sigma)|_{\sigma=0} + 2\tanh(g_{\phi}(\epsilon, s_t)) - \nabla_{\tilde{a}} Q_{\omega}(s_t, \tanh(\tilde{a}))|_{\tilde{a}=g_{\phi}(\epsilon, s_t)} \rbs^{\intercal} \mathbf{J}_{\phi}g_{\phi}(\epsilon, s_t) \rbs
.
}
\para{Estimating normalizing constant}
In order to train SAC-AR-DAE in practice, efficient computation of $\log Z_{\theta}(s)$ is required.
We propose to estimate the normalizing constant \citep{geyer1991reweighting} using importance sampling. %
Let $h(a|s)$ be the proposal distribution.
We compute the following (using the log-sum-exp trick to ensure numerical stability)
\eqst{
\log Z_{\theta}(s)
&= \log \int \exp \lbp \psi_{ar, \theta}(a;s, \sigma)|_{\sigma=0} \rbp da \nn\\
&= \log \eE_{a \sim h} \lbs \exp\lbp\psi_{ar, \theta}(a;s, \sigma)|_{\sigma=0} - \log h(a|s) \rbp \rbs \nn\\
&\approx \log \f{1}{N_Z}\sum_{j}^{N_Z} \lbs \exp\lbp\psi_{ar, \theta}(a_j;s, \sigma)|_{\sigma=0} - \log h(a_j|s) - A \rbp \rbs + A %
,
}
where $a_j$ is the $j$-th action sample from $h$ and $A := \max_{j} \lbp \psi_{ar, \theta}(a_j;s, \sigma)|_{\sigma=0} - \log h(a_j|s) \rbp$.
For the proposal distribution, we use $h(a|s) \doteq N(\mu(s), c I)$, where $\mu(s) \doteq g_{\phi}(\epsilon, s)|_{\epsilon=0}$ is the action generated from zero noise and $c$ is a constant set such that $\log c = -1$.
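A minimal NumPy sketch of this estimator (ours; \texttt{psi}, \texttt{proposal\_sample}, and \texttt{proposal\_logp} are placeholders for $\psi_{ar,\theta}(\cdot;s,\sigma)|_{\sigma=0}$ and the proposal $h$):
\begin{verbatim}
import numpy as np

def estimate_log_Z(psi, states, proposal_sample, proposal_logp, n_Z=100):
    log_Z = []
    for s in states:
        a = proposal_sample(s, n_Z)               # (n_Z, d_a)
        log_w = psi(a, s) - proposal_logp(a, s)   # (n_Z,)
        A = log_w.max()                           # log-sum-exp trick
        log_Z.append(A + np.log(np.mean(np.exp(log_w - A))))
    return np.array(log_Z)
\end{verbatim}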
\para{Target value calibration}
In order to train the Q-function more efficiently, we calibrate its target values. Training the policy only requires estimating the gradient of the Q-function wrt the action, not the value of the Q-function itself. This means that while optimizing $Q_{\omega}$ (and $V_{\nu}$), we can subtract some constant from the true target to center it.
In our experiment, this calibration is applied when we use one-sample Monte-Carlo estimate and the polyak-averaged Q-network $Q_{\bar{\omega}}$. That is, $\cL(Q_{\omega})$ can be rewritten as,
\eqst{
\cL(Q_{\omega}) = \eE_{\substack{s_t, a_t, s_{t+1} \sim \cD\\ a_{t+1} \sim \pi}} \lbs \f{1}{2} \lbp
Q_{\omega}(s_t, a_t) + B -
r(s_t, a_t) - \gamma \lbp Q_{\bar{\omega}}(s_{t+1}, a_{t+1}) - \alpha \log \pi(a_{t+1}|s_{t+1})
\rbp
\rbp^2 \rbs
,
}
where $B$ is a running average of the expected value of $\gamma \alpha \log \pi(a|s)$ throughout training. %
\para{Jacobian clamping}
In addition, %
we found that the implicit policies can potentially collapse to point masses. %
To mitigate this, we regularize the implicit distributions by controlling the Jacobian matrix of the policy wrt the noise source as in \citet{odena2018generator, kumar2020regularized}, aka \emph{Jacobian clamping}.
The goal is to ensure that all singular values of the Jacobian matrix of the pushforward mapping are larger than some constant.
In our experiments, we follow the implementation of \citet{kumar2020regularized}: (1) the singular values of the Jacobian matrix are estimated stochastically at each noise sample, with the Jacobian approximated by finite differences, and (2) the penalty method \citep{bertsekas2016nonlinear} is used to enforce the constraint. The resulting regularization term is
\begin{align*}
\mathcal{L}_{\textrm{reg}}(\pi) = \eE_{
\substack{s_t \sim \cD \\
\epsilon \sim \pi \\
v \sim N(0, I)}} \left[
\min\left(
\frac{\Vert g_{\phi}(\epsilon + \xi v, s_t) - g_{\phi}(\epsilon, s_t) \Vert^{2}_{2}}{\xi^2 \Vert v\Vert^2}
- \eta, 0 \right)^2 \right],
\end{align*}
where $\eta, \xi > 0$, and $n_{\tt{perturb}}$ perturbation vectors $v$ are sampled. We then update the policy $\pi$ with $\hat{\nabla}_{\phi} \cL(\pi) + \lambda \nabla_{\phi} \mathcal{L}_{\textrm{reg}}(\pi)$, where $\lambda$ is increased throughout training. We set $\lambda = 1+i^{\nu}/1000$ at the $i$-th iteration and $\nu \in [1.1, 1.3]$. %
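A PyTorch sketch of this penalty (ours; \texttt{g\_phi} is a placeholder for the policy network):
\begin{verbatim}
import torch

def jacobian_clamping_penalty(g_phi, states, d_eps,
                              xi=0.01, eta=0.1, n_perturb=10):
    penalties = []
    for _ in range(n_perturb):
        eps = torch.randn(states.shape[0], d_eps)
        v = torch.randn_like(eps)
        diff = g_phi(eps + xi * v, states) - g_phi(eps, states)
        ratio = diff.pow(2).sum(dim=1) / (xi ** 2 * v.pow(2).sum(dim=1))
        # Penalize only directions whose squared gain is below eta.
        penalties.append(torch.clamp(ratio - eta, max=0.0).pow(2))
    return torch.stack(penalties).mean()
\end{verbatim}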
\begin{figure}%
\centering
\vspace*{-0.3cm}
\begin{subfigure}[b]{0.588\textwidth}
\centering
\hspace*{0.4cm}\includegraphics[width=\textwidth]{figs/rl/all_ablation_legend.png}
\end{subfigure}
\vspace*{-0.2cm}
\begin{subfigure}[b]{\textwidth}
\centering
\hspace*{-0.3cm}\includegraphics[width=\textwidth]{figs/rl/all_ablation_smoothed.png}
\end{subfigure}
\vspace*{-0.3cm}
\caption{
Additional results on SAC-AR-DAE: ablation of the Jacobian clamping regularization on implicit policy distributions, compared with the baseline methods.
}
\label{fig:results-rl-extended}
\end{figure}
\subsection{Experiments}
For the SAC-AR-DAE experiments,
aside from the common practice for SAC,
we follow the experiment settings from \citet{MazoureDDHP19}
and
sample from a uniform policy for a fixed number of initial interactions (denoted as {\it warm-up}). We also adopt the same network architecture for the Q-network, discount factor $\gamma$, entropy regularization coefficient $\alpha$, and target smoothing coefficient $\tau$.
For AR-DAE, we use the same network architecture as in the VAE experiments. We also rescale the unbounded action $\tilde{a}$ by a factor $s$ for better conditioning. The details of the hyperparameters are described in Table \ref{table:rl-hyperparameters}.
We run five experiments for each environment without fixing the random seed.~For every 10k steps of environment interaction, the average return of the policy is evaluated with 10 independent runs. For visual clarity, the learning curves are smoothed by a second-order polynomial filter with a window size of 7 \cite{savitzky1964smoothing}. For each method, we evaluate the maximum average return: we take the maximum of the average return for each experiment and average these maxima over the five random seeds. We also report the `normalized average return', approximately the area under the learning curve: we take the numerical mean of the average returns over iterates. We run SAC and SAC-NF with the hyperparameters reported in \citet{MazoureDDHP19}. %
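The smoothing step corresponds to a standard Savitzky--Golay filter; a one-line sketch (ours) of how it can be applied:
\begin{verbatim}
import numpy as np
from scipy.signal import savgol_filter

avg_returns = np.random.rand(100)   # stand-in for evaluated returns
smoothed = savgol_filter(avg_returns, window_length=7, polyorder=2)
\end{verbatim}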
\subsection{Additional Experiments}
In addition to the main results in Figure \ref{fig:results-rl} and Table \ref{tab:sac-result-max-main}, we also compare the effect of the Jacobian clamping regularization on the implicit policy distribution in SAC-AR-DAE. In each environment, the same hyperparameters are used for all SAC-AR-DAE variants except for the regularization. Our results are presented in Figure \ref{fig:results-rl-extended} and Tables \ref{tab:sac-result-max-full} and \ref{tab:sac-result-auc-full}.
The results show that
the Jacobian clamping regularization improves the performance of SAC-AR-DAE in general, especially for {\it Humanoid-rllab}. %
In {\it Humanoid-rllab}, we observe that the implicit policy degenerates to point masses without Jacobian clamping, potentially due to the error of AR-DAE. Jacobian clamping helps to avoid such degenerate distributions, allowing the policy to benefit from AR-DAE-based entropy gradients.
\begin{table}[H]
\centering
\small
\begin{tabular}{lcccc}
\toprule
& \multicolumn{1}{c}{SAC} & \multicolumn{1}{c}{SAC-NF} & \multicolumn{1}{c}{SAC-AR-DAE} & \multicolumn{1}{c}{SAC-AR-DAE (w/o jc)} \\
\midrule
HalfCheetah-v2 & 9695 $\pm$ 879 & 9325 $\pm$ 775 & \textbf{10907 $\pm$ 664} & 10677 $\pm$ 374 \\
Ant-v2 & 5345 $\pm$ 553 & 4861 $\pm$ 1091 & \textbf{6190 $\pm$ 128} & 6097 $\pm$ 140 \\
Hopper-v2 & 3563 $\pm$ 119 & 3521 $\pm$ 129 & 3556 $\pm$ 127 & \textbf{3634 $\pm$ 45} \\
Walker-v2 & 4612 $\pm$ 249 & 4760 $\pm$ 624 & 4793 $\pm$ 395 & \textbf{4843 $\pm$ 521} \\
Humanoid-v2 & 5965 $\pm$ 179 & 5467 $\pm$ 44 & \textbf{6275 $\pm$ 202} & 6268 $\pm$ 77 \\
Humanoid (rllab) & 6099 $\pm$ 8071 & 3442 $\pm$ 3736 & \textbf{10739 $\pm$ 10335} & 761 $\pm$ 413 \\
\bottomrule
\end{tabular}%
\caption{Maximum average return. $\pm$ corresponds to one standard deviation over five random seeds.}
\label{tab:sac-result-max-full}
\end{table}
\begin{table}[H]
\centering
\small
\begin{tabular}{lcccc}
\toprule
& \multicolumn{1}{c}{SAC} & \multicolumn{1}{c}{SAC-NF} & \multicolumn{1}{c}{SAC-AR-DAE} & \multicolumn{1}{c}{SAC-AR-DAE (w/o jc)} \\
\midrule
HalfCheetah-v2 & 8089 $\pm$ 567 & 7529 $\pm$ 596 & 8493 $\pm$ 602 & \textbf{8636 $\pm$ 307} \\
Ant-v2 & 3280 $\pm$ 553 & 3440 $\pm$ 656 & \textbf{4335 $\pm$ 241} & 4015 $\pm$ 363 \\
Hopper-v2 & 2442 $\pm$ 426 & 2480 $\pm$ 587 & 2631 $\pm$ 160 & \textbf{2734 $\pm$ 194} \\
Walker-v2 & 3023 $\pm$ 271 & \textbf{3317 $\pm$ 455} & 3036 $\pm$ 271 & 3094 $\pm$ 209 \\
Humanoid-v2 & 3471 $\pm$ 505 & 3447 $\pm$ 260 & \textbf{4215 $\pm$ 170} & 3808 $\pm$ 137 \\
Humanoid (rllab) & 664 $\pm$ 321 & 814 $\pm$ 630 & \textbf{2021 $\pm$ 1710} & 332 $\pm$ 136 \\
\bottomrule
\end{tabular}%
\caption{Normalized average return. $\pm$ corresponds to one standard deviation over five random seeds.%
}
\label{tab:sac-result-auc-full}
\end{table}
\section{Improved techniques for training AR-DAE and implicit models}
\label{app:heuristics}
In order to improve and stabilize the training of both the generator and AR-DAE,
we explore multiple heuristics.
\subsection{AR-DAE}
\para{Activation function}
During preliminary experiments, we observed that \emph{smooth activation functions} are crucial for parameterizing AR-DAE as well as the residual form of the regular DAE. We notice that ReLU gives less reliable log-probability gradients in low-density regions.
\para{Number of samples and updates}
In the VAE and RL experiments, it is important to keep AR-DAE up-to-date with the generator (i.e. posterior and policy). %
As discussed in Appendix \ref{appendix:energy-fitting}, we found that increasing the number of AR-DAE updates helps substantially.
Additionally, %
we notice that increasing $n_{z}$ is more helpful than increasing $n_{\tt{data}}$ when the product $n_{\tt{data}}n_{z}$ is fixed.
\para{Scaling-up and zero-centering data}
To avoid using a small learning rate for AR-DAE in the face of sharp distributions with small variance, %
we choose to scale up the input of AR-DAE.
As discussed in Appendix \ref{app:subsec:ardae-implementations}, we also zero-center the latent samples (or action samples) to train AR-DAE. This allows AR-DAE to focus more on modeling the dispersion of the distribution rather than %
where most of the probability mass resides.
\subsection{Implicit distributions}
\para{Noise source dimensionality}
We note that implicit density models %
can potentially be degenerate and thus may not admit a density function.
For example, in Appendix \ref{appendix:energy-fitting} we show that increasing the dimensionality of the noise source improves the quality of the implicit distributions. %
\para{Jacobian clamping}
Besides increasing the noise source dimensionality, we can apply Jacobian clamping to prevent implicit posteriors from collapsing to point masses. As pointed out in Appendix \ref{app:sac-ar-dae}, we observe that this regularization technique can prevent degenerate distributions in practice, as it regularizes the mapping at least locally when its Jacobian is close to singular.
\newpage
\null
\vfill
\begin{table}[!h]
\scriptsize
\centering
\begin{tabular}{cccccc}
\toprule
& \multirow{2}{*}{ $p(x|z)$ } & \multicolumn{4}{c}{ $q(z|x)$ } \\
& & Common & Gaussian & HVI & implicit \\
\midrule
\begin{tabular}{@{}c@{}}
{\it MLP} \\
\tt{toy}
\end{tabular} & \begin{tabular}{@{}c@{}}
$\lbs 2, 256 \rbs$ \\
$\lbs 256, 256 \rbs \times 2$ \\
$\lbs 256, d_x\rbs \times 2$
\end{tabular} & $\lbs d_x, 256\rbs$ & - & - & \begin{tabular}{@{}c@{}}
$\lbs 256+d_{\epsilon}, 256\rbs$ \\
$\lbs 256, 256\rbs$ \\
$\lbs 256, d_z\rbs$
\end{tabular} \\
\cmidrule(l){1-6}
\begin{tabular}{@{}c@{}}
{\it MLP} \\
\tt{dbmnist}
\end{tabular} & \begin{tabular}{@{}c@{}}
$\lbs d_z, 300 \rbs$ \\
$\lbs 300, d_x\rbs$
\end{tabular} & $\lbs d_x, 300\rbs$ & \begin{tabular}{@{}c@{}}
$\lbs 300, d_z \rbs \times 2$
\end{tabular} & \begin{tabular}{@{}c@{}}
$\lbs 300, d_z \rbs$ $\times$ 2 \\
(or $\lbs 300, d_{z_0}\rbs$ $\times$ 2)
\end{tabular} & \begin{tabular}{@{}c@{}}
$\lbs 300+d_{\epsilon}, 300\rbs$ \\
$\lbs 300, d_z\rbs$
\end{tabular} \\
\cmidrule(l){1-6}
\begin{tabular}{@{}c@{}}
{\it Conv} \\
\tt{dbmnist}
\end{tabular} & \begin{tabular}{@{}c@{}}
$\lbs d_z, 300 \rbs$ \\
$\lbs 300, 512 \rbs$ \\
$\lbs 32, 32, 5 \times 5, 2, 2, \textrm{deconv} \rbs$ \\
$\lbs 32, 16, 5 \times 5, 2, 2, \textrm{deconv} \rbs$ \\
$\lbs 16, 1, 5 \times 5, 2, 2, \textrm{deconv} \rbs$
\end{tabular} & \begin{tabular}{@{}c@{}}
$\lbs 1, 16, 5 \times 5, 2, 2 \rbs$ \\
$\lbs 16, 32, 5 \times 5, 2, 2 \rbs$ \\
$\lbs 32, 32, 5 \times 5, 2, 2 \rbs$
\end{tabular} & \begin{tabular}{@{}c@{}}
$\lbs 512, 800 \rbs$ \\
$\lbs 800, d_z \rbs \times 2$
\end{tabular} & \begin{tabular}{@{}c@{}}
$\lbs 512, 800 \rbs$ \\
$\lbs 800, d_z \rbs \times 2$ \\
(or $\lbs 800, d_{z_0} \rbs \times 2$)
\end{tabular} & \begin{tabular}{@{}c@{}}
$\lbs 512+d_{\epsilon}, 800 \rbs$ \\
$\lbs 800, d_z \rbs$
\end{tabular} \\
\cmidrule(l){1-6}
\begin{tabular}{@{}c@{}}
{\it ResConv} \\
\tt{dbmnist} \\
(or \tt{sbmnist})
\end{tabular} & \begin{tabular}{@{}c@{}}
$\lbs d_z, 450 \rbs$ \\
$\lbs 450, 512 \rbs$ \\
$\lbs \textrm{upscale by 2} \rbs$ \\
$\lbs 32, 32, 3 \times 3, 1, 1, \textrm{res} \rbs$ \\
$\lbs 32, 32, 3 \times 3, 1, 1, \textrm{res} \rbs$ \\
$\lbs \textrm{upscale by 2} \rbs$ \\
$\lbs 32, 16, 3 \times 3, 1, 1, \textrm{res} \rbs$ \\
$\lbs 16, 16, 3 \times 3, 1, 1, \textrm{res} \rbs$ \\
$\lbs \textrm{upscale by 2} \rbs$ \\
$\lbs 16, 1, 3 \times 3, 1, 1, \textrm{res} \rbs$
\end{tabular} & \begin{tabular}{@{}c@{}}
$\lbs 1, 16, 3 \times 3, 2, 1, \textrm{res} \rbs$ \\
$\lbs 16, 16, 3 \times 3, 1, 1, \textrm{res} \rbs$ \\
$\lbs 16, 32, 3 \times 3, 2, 1, \textrm{res} \rbs$ \\
$\lbs 32, 32, 3 \times 3, 1, 1, \textrm{res} \rbs$ \\
$\lbs 32, 32, 3 \times 3, 2, 1, \textrm{res} \rbs$ \\
$\lbs 512, 450, \textrm{res} \rbs$ \\
\end{tabular} & \begin{tabular}{@{}c@{}}
$\lbs 450, d_z \rbs \times 2$
\end{tabular} & \begin{tabular}{@{}c@{}}
$\lbs 450, 450 \rbs$ \\
$\lbs 450, d_z \rbs \times 2$ \\
(or $\lbs 450, d_{z_0} \rbs \times 2)$
\end{tabular} & \begin{tabular}{@{}c@{}}
$\lbs 450+d_{\epsilon}, 450, \textrm{res} \rbs$ \\
$\lbs 450, d_z, \textrm{res} \rbs$
\end{tabular} \\
\bottomrule
\end{tabular}
\caption{Network architectures for the VAE experiments. Fully-connected layers are characterized by [input size, output size], and convolutional layers by [input channel size, output channel size, kernel size, stride, padding]. ``res'' indicates a skip connection, aka a residual layer \citep{HeZRS16/cvpr}. Deconvolutional layers are marked as ``deconv''.}
\label{table:vae-networks}
\end{table}
\vfill
\begin{table}[!h]
\scriptsize
\centering
\begin{tabular}{cccccccc}
\toprule
& & & \multicolumn{2}{c}{ {\it MLP} } & {\it Conv} & \multicolumn{2}{c}{ {\it ResConv} } \\
& & & \tt{toy} & \tt{dbmnist} & \tt{dbmnist} & \tt{dbmnist} & \tt{sbmnist} \\
\midrule
\multirow{15}{*}{ AR-DAE } & \multirow{6}{*}{ model } & parameterization & gradient & gradient & gradient & residual & residual \\
& & network & mlp-concat & mlp-concat & mlp-concat & mlp-concat & mlp-concat \\
& & $m_{\tt{fc}}$ & 3 & 5 & 5 & 5 & 5 \\
& & $m_{\tt{enc}}$ & 3 & 5 & 5 & 5 & 5 \\
& & activation & softplus & softplus & softplus & softplus & softplus \\
& & $d_h$ & 256 & 256 & 256 & 512 & 512 \\
& & $s$ & 10000 & 10000 & 10000 & 100 & 100 \\
\cmidrule{2-8}
& \multirow{7}{*}{ learning } & $n_{z}$ & 256 & 625 & 256 & 625 & 625 \\
& & $n_{\tt{data}}$ & 512 & 128 & 128 & 128 & 128 \\
& & $n_{\sigma}$ & 1 & 1 & 1 & 1 & 1 \\
& & $N_d$ & 1 & \{1,2\} & \{1,2\} & 2 & 2 \\
& & $\delta_{\tt{scale}}$ & 0.1 & \{0.1, 0.2, 0.3\} & \{0.1, 0.2, 0.3\} & \{0.1, 0.2, 0.3\} & \{0.1, 0.2, 0.3\} \\
& & optimizer & rmsprop, 0.5 & rmsprop, 0.5 & rmsprop, 0.9 & rmsprop, 0.9 & rmsprop, 0.9 \\
& & learning rate $\alpha_{\theta}$ & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 \\
\midrule
\multirow{8}{*}{ Encoder/decoder } & \multirow{3}{*}{ model } & network & mlp & mlp & conv & resconv & resconv \\
& & $d_z$ & 2 & 32 & 32 & 32 & 32 \\
& & $d_{z_0}$ or $d_{\epsilon}$ & 10 & 100 & 100 & 100 & 100 \\
\cmidrule{2-8}
& \multirow{5}{*}{ learning } & $n_{\tt{data}}$ & 512 & 128 & 128 & 128 & 128 \\
& & optimizer & adam, 0.5, 0.999 & adam, 0.5, 0.999 & adam, 0.5, 0.999 & adam, 0.9, 0.999 & adam, 0.9, 0.999 \\
& & learning rate $\alpha_{\phi, \omega}$ & 0.0001 & 0.0001 & 0.0001 & \{0.001, 0.0001\} & \{0.001, 0.0001\} \\
& & $\beta$-annealing & no & no & no & \{no, 50000\} & \{no, 50000\} \\
& & re-train with $\tt{train}$+$\tt{val}$ & no & no & no & no & yes \\
\midrule
\multirow{3}{*}{ Evaluation } & & polyak (decay) & - & no & no & no & 0.998 \\
& & polyak (start iteration) & - & no & no & no & \{0, 1000, 5000, 10000\} \\
& & $n_{\textrm{eval}}$ & - & 40000 & 40000 & 20000 & 20000 \\
\bottomrule
\end{tabular}
\caption{Hyperparameters for the VAE experiments. $\tt{toy}$ is the 25 Gaussian dataset. $\tt{dbmnist}$ and $\tt{sbmnist}$ are dynamically and statically binarized MNIST, respectively.}
\label{table:vae-hyperparameters}
\end{table}
\vfill
\newpage
\null
\vfill
\begin{table}[!h]
\scriptsize
\centering
\begin{tabular}{ccccccccc}
\toprule
& & & HalfCheetah-v2 & Ant-v2 & Hopper-v2 & Walker-v2 & Humanoid-v2 & Humanoid (rllab) \\
\midrule
\multirow{14}{*}{AR-DAE} & \multirow{7}{*}{model} & parameterization & gradient & gradient & gradient & gradient & gradient & gradient \\
& & network & mlp & mlp & mlp & mlp & mlp & mlp \\
& & $m_{\tt{fc}}$ & 5 & 5 & 5 & 5 & 5 & 5 \\
& & $m_{\tt{enc}}$ & 5 & 5 & 0 & 1 & 1 & 1 \\
& & activation & elu & elu & elu & elu & elu & elu \\
& & $d_h$ & 256 & 256 & 256 & 256 & 256 & 256 \\
& & $s$ & 10000 & 10000 & 10000 & 10000 & 10000 & 10000 \\
\cmidrule{2-9}
& \multirow{7}{*}{learning} & $n_{a, \tt{dae}}$ & 128 & 64 & 128 & 128 & 64 & 64 \\
& & $n_{\tt{data}}$ & 256 & 256 & 256 & 256 & 256 & 256 \\
& & $n_{\sigma}$ & 1 & 1 & 1 & 1 & 4 & 4 \\
& & $N_d$ & 1 & 1 & 1 & 1 & 1 & 1 \\
& & $\delta_{\tt{scale}}$ & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 \\
& & optimizer & adam, 0.9, 0.999 & adam, 0.9, 0.999 & adam, 0.9, 0.999 & adam, 0.9, 0.999 & adam, 0.9, 0.999 & adam, 0.9, 0.999 \\
& & learning rate $\alpha_{\theta}$ & 0.0003 & 0.0003 & 0.0003 & 0.0003 & 0.0003 & 0.0003 \\
\midrule
\multirow{10}{*}{policy} & \multirow{6}{*}{model} & network & mlp & mlp & mlp & mlp & mlp & mlp \\
& & $m_{\tt{fc}}$ & 1 & 1 & 1 & 2 & 2 & 2 \\
& & $m_{\tt{enc}}$ & 1 & 1 & 2 & 1 & 3 & 3 \\
& & activation & elu & elu & elu & elu & elu & elu \\
& & $d_h$ & 256 & 256 & 256 & 256 & 64 & 64 \\
& & $d_{\epsilon}$ & 10 & 10 & 10 & 10 & 32 & 100 \\
\cmidrule{2-9}
& \multirow{4}{*}{learning} & $n_{\tt{perturb}}$ & 10 & 10 & 10 & 10 & 10 & 10 \\
& & optimizer & adam, 0.9, 0.999 & adam, 0.9, 0.999 & adam, 0.9, 0.999 & adam, 0.9, 0.999 & adam, 0.9, 0.999 & adam, 0.9, 0.999 \\
& & $\xi, \eta, \nu$ & 0.01, 0.1, 1.1 & 0.01, 0.01, 1.1 & 0.01, 0.01, 1.1 & 0.01, 0.01, 1.1 & 0.01, 0.1, 1.3 & 0.01, 0.1, 1.3 \\
& & learning rate $\alpha_{\phi}$ & 0.0003 & 0.0003 & 0.0003 & 0.0003 & 0.0003 & 0.0003 \\
\midrule
\multirow{6}{*}{Q-network} & \multirow{4}{*}{model} & network & mlp & mlp & mlp & mlp & mlp & mlp \\
& & $m_{\tt{fc}}$ & 2 & 2 & 2 & 2 & 2 & 2 \\
& & activation & relu & relu & relu & relu & relu & relu \\
& & $d_h$ & 256 & 256 & 256 & 256 & 256 & 256 \\
\cmidrule{2-9}
& \multirow{2}{*}{learning} & optimizer & adam, 0.9, 0.999 & adam, 0.9, 0.999 & adam, 0.9, 0.999 & adam, 0.9, 0.999 & adam, 0.9, 0.999 & adam, 0.9, 0.999 \\
& & learning rate $\alpha_{\omega}$ & 0.0003 & 0.0003 & 0.0003 & 0.0003 & 0.0003 & 0.0003 \\
\midrule
\multirow{6}{*}{general} & & $\alpha$ & 0.05 & 0.05 & 0.05 & 0.05 & 0.05 & 0.05 \\
& & $\tau$ & 0.005 & 0.005 & 0.005 & 0.005 & 0.005 & 0.005 \\
& & $\gamma$ & 0.99 & 0.99 & 0.99 & 0.99 & 0.99 & 0.99 \\
& & $n_Z$ & 100 & 10 & 100 & 100 & 10 & 10 \\
& & target calibration & no & no & yes & no & no & no \\
& & warm-up & 5000 & 10000 & 10000 & 10000 & 10000 & 10000 \\
\bottomrule
\end{tabular}
\caption{Hyperparameters for RL experiments.}
\label{table:rl-hyperparameters}
\end{table}
\vfill
\section{Conclusions}
\label{sec:conclusion}
We proposed an approach to adversarial defense based on randomized smoothing, which shows state-of-the-art results for white-box attacks, namely PGD, on CIFAR-10 and CIFAR-100, even with a small number of iterations. We also confirmed the efficiency of our defense against black-box attacks, by successfully defending against adversarial examples transferred from different models.
Our method offers a practical trade-off between the inference time and model performance, and can be incorporated into any adversarial defense. We showed that adversarial training of a smoothed classifier is a non-trivial task and studied several approaches to it. In addition, we investigated a family of attacks that take smoothing into account against smoothed classifiers. By incorporating those attacks into adversarial training, we were able to train classifiers with higher performance in smoothed settings.
Complexity-wise, however, when taking into account the computations required for smoothing during training, the proposed attacks are comparable in cost to, but not cheaper than, other known attacks.
Finally, we show that our method can exploit both implicit and explicit regularization, which emphasizes the importance of incorporating implicit regularization, explicit regularization and smoothed inference together into adversarial defenses.
\section{Introduction}
\label{sec:intro}
Deep neural networks (DNNs) are showing spectacular performance in a variety of computer vision tasks, but at the same time are susceptible to adversarial examples -- small perturbations that alter the output of the network \cite{goodfellow2014explaining, szegedy2013intriguing}. Since the initial discovery of this phenomenon in 2013, increasingly stronger defenses
\cite{goodfellow2014explaining,madry2018towards,xie2019feature,sarkar2019enforcing,khoury2019adversarial,zhang2019defending,rakin2018parametric,DBLP:conf/icml/ZhangYJXGJ19} and counterattacks \cite{goodfellow2014explaining, carlini2017towards,athalye2018obfuscated,madry2018towards,rony2019decoupling, papernot2017practical,chen2017zoo,li2019nattack} were proposed in the literature. Adversarial attacks have also been shown to occur in tasks beyond image classification where they were first discovered: in real-life object recognition \cite{brown2017adversarial,xu2019evading,athalye2017synthesizing},
object detection \cite{wang2019daedalus}, natural language processing \cite{gao2018black, chaturvedi2019exploring,jin2019bert}, reinforcement learning \cite{gleave2019adversarial}, speech-to-text \cite{carlini2018audio}, and point cloud classification \cite{xiang2019generating}, just to mention a few.
Moreover, the adversarial attacks can be used to improve the performance of the DNNs on unperturbed data \cite{xie2019adversarial,gong2020maxup,sun2020robust}.
Understanding the root cause of adversarial examples, how they are created and how we can detect and prevent such attacks, is at the center of many research works.
\citet{gilmer2018adversarial} argued that adversarial examples are an inevitable property of high-dimensional data manifolds rather than a weakness of specific models.
In view of this, the true goal of an adversarial defense is not to get rid of adversarial examples, but rather to make their search hard.
Current defense methods are based on either implicit or explicit regularization. Explicit regularization methods aim to increase the performance under adversarial attack by directly incorporating a suitable term into the loss of the network during training,
usually by incorporating adversarial examples generated from the training data into the training process.
In contrast, implicit regularization methods that do not change the objective, such as variational dropout \cite{kingma2015variational}, seek to train the network to be robust against any perturbations without taking into account adversarial examples.
In particular, adding randomness to the network can be especially successful \cite{liu2018towards,bietti2018kernel,rakin2018parametric}, since information acquired from previous runs cannot be directly applied to a current run. Another way to make use of randomness to improve classifier robustness is randomized smoothing \cite{cohen2019certified}: averaging of the outputs of the classifier over some random distribution centered in the input data point.
The effects of these three approaches (explicit regularization, implicit regularization, and smoothing) do not necessarily line up with or contradict each other. Thus, one could use a combination of the three, when devising adversarial defenses.
Previous works that discuss randomized smoothing do so exclusively in the context of certified robustness \cite{cohen2019certified,salman2019provably}. In contrast, we consider smoothing as a viable method to increase both the performance and adversarial robustness of the model. We show this effect on top of adversarial regularization methods -- both implicit \cite{rakin2018parametric,our2020cpni} and explicit \cite{DBLP:conf/icml/ZhangYJXGJ19}.
We discuss several smoothed inference methods, and ways to optimize a pre-trained adversarial model to improve the accuracy of the smoothed classifier.
\paragraph{Contributions.} This paper makes the following contributions.
Firstly, we study the effect of randomized smoothing on empirical accuracy of the classifiers, both on perturbed and clean data. We show that even for a small amount of samples, accuracy increases in both cases. In addition, since the performance grows with the sample size, smoothing introduces a trade-off between inference time complexity and accuracy.
Secondly, we show that the smoothing can be applied along with any adversarial defense, and that it is especially efficient for methods based on implicit regularization by noise injection, such as PNI \cite{rakin2018parametric}.
Lastly, we propose a new family of attacks based on smoothing and demonstrate their advantage for adversarial training over conventional PGD.
The rest of the paper is organized as follows: \cref{sec:related} reviews the related work, \cref{sec:local} describes our proposed method, \cref{sec:global} describes integration of the method with adversarial training, \cref{sec:exp} provides the experimental results, and \cref{sec:conclusion} concludes the paper.
\section{Defense by random noise injection }
To get some intuition on the effect of noise injection on adversarial attacks, we perform an analysis of a simple classification model -- a support vector machine (SVM) $f(\vb{x}) = \vb{w} \vdot \vb{x} + b$ with the parameters $\vb{w}$ and $b$. Specifically, we consider the formulation of SVM under adversarial attacks with zero-mean random noise injected into the input. We assume that the attacker is aware of this noise, but is limited to observing the effect of a single realization thereof.
We start from the expectation of the SVM objective on a single input sample $\vb{x}_i$ with ground truth label $y_i$:
\begin{align}
f(\vb{x}_i) = \mathbb{E}_{\vb{\eta}} \max_{\vb{\delta}} \mathrm{ReLU}\qty\bigg[1-y_i (\vb{w} \vdot \qty(\vb{x}_i + \vb{\eta}_i - \vb{\delta}_i) + b)], \label{eq:svm_expect}
\end{align}
where $\vb{\eta}_i$ is the injected noise and $\vb{\delta}_i$ is the adversarial noise.
Denoting $\vb{\delta}'_i = \vb{\delta}_i - \vb{\eta}_i$, and using the result of \citet{xu2009robustness}, we write \cref{eq:svm_expect} as
\begin{align}
f(\vb{x}_i) = \mathbb{E}_{\vb{\eta}} \max_{\vb{\delta}} \vb{w} \vdot \vb{\delta}'_i + \mathrm{ReLU}\qty\bigg[1-y_i (\vb{w} \vdot \vb{x}_i + b)]
\end{align}
Since the expectation of $\vb{\eta}$ is $0$, $\mathbb{E}_{\vb{\eta}} \qty[\vb{w}\vdot \vb{\eta}_i ]= 0$, leading to
\begin{align}
f(\vb{x}_i) =\max_{\vb{\delta}} \vb{w} \vdot \vb{\delta}_i + \mathrm{ReLU}\qty\bigg[1-y_i (\vb{w} \vdot \vb{x}_i + b)],
\end{align}
which is nothing but the SVM objective without the injected noise.
Thus, the expectation of
an adversarial attack
will not change due to the noise injection, and it is unclear whether the attacker can devise a better strategy to breach the defense effect provided by the noise injection.
The effect of noise injection is similar to that of random gradient obfuscation. Let $\vb{x}$ be some input point, $\vb{\eta}$ a realization of the random perturbation, $\vb{\delta}$ the adversarial attack that would have been chosen for $\vb{x}$, and $\vb{\delta}'$ the adversarial attack corresponding to $\vb{x} + \vb{\eta}$. Since we add noise to the input in each forward iteration, the adversary computes $\vb{\delta}'$ instead of $\vb{\delta}$, which has some random distribution around the true adversarial direction. Denoting by $\Pi_a$ the projection on the direction chosen, by $\Pi_\perp$ the projection on the space orthogonal to this direction, and by $\norm{\vb{\delta}}_p \le \epsilon$ the $L_p$-bound on the attack strength, yields
\begin{align}
\Pi_\perp(\vb{\delta}') &= \Pi_\perp(\vb{\eta}) \equiv \vb{\eta_0}\\
\norm{\Pi_a\qty(\vb{\delta}')}_p &= \qty(\epsilon^p - \norm{\vb{\eta_0}}_p^p)^{\nicefrac{1}{p}}
\end{align}
For $p=\infty$, the second term equals $\epsilon$. The first term, however, is a random variable that moves $\vb{\delta}'$ farther away from the adversarial direction and, therefore, decreases the probability of a successful adversarial attack. This effect accumulates when the adversary computes the gradients multiple times (such as in PGD).
\section{Additional results}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{figures/results8_legend_merged.pdf}
\caption{Accuracy as a function of injected noise strength with $M=8$ for all base models and all smoothing methods on clean data (left) and under PGD attack (right).}
\label{fig:all_8}
\end{figure*}
\begin{figure*}
\centering
\begin{subfigure}{0.49\linewidth}
\includegraphics[width=\linewidth]{figures/pgd56_clean_legend.pdf}
\subcaption{}
\label{fig:pgd56_van}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\linewidth}
\includegraphics[width=\linewidth]{figures/pgd56_adv.pdf}
\subcaption{}
\label{fig:pgd56_pgd}
\end{subfigure}
\caption{Accuracy as a function of injected noise strength with different numbers of samples for a CNI-W $(k=56)$ fine-tuned model with prediction smoothing on clean data (left) and under PGD attack (right). }
\label{fig:pgd56_model}
\end{figure*}
\begin{figure}
\includegraphics[width=\linewidth]{figures/km_comparison.pdf}
\caption{
Accuracy of CNI-W model with prediction smoothing under PGD, EPGD and $SmoothAdv_{PGD}$ attacks with different iteration numbers ($k$).
We used $M=1,8,512$ for smoothing, with a fixed noise standard deviation $\sigma=0.24$. EPGD and $SmoothAdv_{PGD}$ attacks averaged the gradients over $M_{\text{backward}}=8$ samples in each attack iteration.
}
\label{fig:km}
\end{figure}
\cref{tab:Nattack_wideresnet} presents an additional evaluation of our method on a black-box attack (NAttack \cite{li2019nattack}) compared with prior art. Our method substantially outperforms the prior art.
In addition, we evaluated the impact of noise injection levels on a PGD-56 pre-trained model for different numbers of samples. From \cref{fig:pgd56_model} we conclude that clean and adversarial accuracy reach their nominal values for almost vanishing noise levels.
\cref{fig:all_8} demonstrates that there is a single optimal value of noise strength for each base model, independent of the smoothing method.
In all the figures we see a single maximum, which is fixed for each of the base models and unrelated to the smoothing method. This indicates that for each such model, there should be an optimal standard deviation of the injected noise.
In \cref{fig:km} we compare the effectiveness of the PGD, EPGD and $SmoothAdv_{PGD}$ attacks against their computational complexity $k\times M$.
PGD attacks are the most effective for lower $k\times M$, but converge to the same value as EPGD attacks without smoothing. In contrast, $SmoothAdv_{PGD}$ attacks both with and without smoothing are less effective for all values of $k\times M$ and converge to a higher accuracy in all cases. This could indicate that EPGD makes better use of the spatial information encoded in the multiple samples, so much so that it produces an attack comparable to PGD with $\nicefrac{1}{8}$ of the required iterations.
Moreover, smoothing is more effective for EPGD and $SmoothAdv_{PGD}$, since both are using smoothing while constructing the attack. The accuracy under smoothing is better and converges to higher values for all attacks. This aids in validating the effectiveness of our method.
We evaluated our best-performing model against PGD attacks of different strengths $\epsilon$ to study how the defense transfers to attacks of different strengths. Results are shown in \cref{fig:eps_ablation}.
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{figures/epsilon_pgd.pdf}
\caption{Accuracy of the CNI-W model with prediction smoothing over 8 iterations under the PGD attack with different attack radii. Standard deviation is smaller than line width.}
\label{fig:eps_ablation}
\end{figure*}
\begin{table}
\centering
\caption{
Results of NAttack black-box attacks on our method applied on PNI-W and CNI-W with prediction smoothing, (a) ResNet-20 on CIFAR-10. (b) Wide-ResNet-34 on CIFAR-10.
Our Smooth PNI-W and CNI-W methods use $M=1,4,8$, $\sigma=0.24$. $^\dagger$ denotes our results based on code provided by the authors or our re-implementation.
}
\centering
\begin{subfigure}{0.495\linewidth}
\begin{tabular}{@{}cc@{}}
\toprule
\textbf{Method} & \textbf{Accuracy, \%} \\
\midrule
Adv. training \cite{madry2018towards}$^\dagger$ & $33.11$ \\
Smooth PNI-W (our, $m=1$) &$47.17$\\
Smooth PNI-W (our, $m=4$) &$50.29$\\
Smooth PNI-W (our, $m=8$) &$50.78$\\
Smooth CNI-W (our, $m=1$) & $48.83$ \\
Smooth CNI-W (our, $m=4$) & $50.98$ \\
Smooth CNI-W (our, $m=8$) & $\mathbf{51.56}$ \\
\bottomrule
\end{tabular}
\subcaption{}
\label{tab:Nattack_resnet}
\end{subfigure}
\hfill
\begin{subfigure}{0.495\linewidth}
\begin{tabular}{@{}cc@{}}
\toprule
\textbf{Method} & \textbf{Accuracy, \%} \\
\midrule
Adv. training \cite{madry2018towards}$^\dagger$ & $43.75$ \\
Smooth PNI-W (our, $m=1$) &$\mathbf{56.25}$\\
Smooth PNI-W (our, $m=4$) &$55.06$\\
MART \cite{Wang2020Improving}$^+$ &$47.02$ \\
\bottomrule
\end{tabular}
\subcaption{}
\end{subfigure}
\label{tab:Nattack_wideresnet}
\end{table}
\section{Related work}
\label{sec:related}
In this section, we briefly review the notions of white and black box attacks and describe the existing approaches for adversarial defense and certified defense.
\paragraph{Adversarial attacks.}
Adversarial attacks were first proposed by \citet{szegedy2013intriguing}, who noted that it was possible to use the gradients of a neural network to discover small perturbations of the input that drastically change its output. Moreover, it is usually possible to change the prediction to a particular class, i.e., perform a \emph{targeted} attack. It is common to divide adversarial attacks into two classes: \emph{white box} attacks, which have access to the internals of the model (in particular, its gradients); and \emph{black box} attacks, which have access only to the output of the model for a given input.
\paragraph{White box attacks.}
One of the first practical white box adversarial attacks is the fast gradient sign method (FGSM) \cite{goodfellow2014explaining}, which
utilizes the (normalized) sign of the gradient as an adversarial perturbation:
\begin{align}
\hat{\vb{x}} = \vb{x} + \epsilon \cdot \mathrm{sign}(\grad_{\vb{x}}\mathcal{L}),
\end{align}
where $\vb{x}$ and $\hat{\vb{x}}$ denote the clean and perturbed inputs, respectively, $\mathcal{L}$ is the loss function of the network, which the attacker tries to maximize, and $\epsilon$ is the desired attack strength.
\citet{madry2018towards} proposed using iterative optimization -- specifically, projected gradient ascent -- to find stronger adversarial examples:
\begin{align}
\hat{\vb{x}}^k = \Pi_{B(\vb{x} , \epsilon)} \qty[\hat{\vb{x}}^{k-1} + \alpha \cdot \mathrm{sign}(\grad_{\vb{x}}\mathcal{L})].
\end{align}
The projection operator $\Pi$ restricts the perturbed input to be in some vicinity $B(\vb{x} , \epsilon)$ of the unperturbed input. The iterations are initialized with $\hat{\vb{x}}^0=\vb{x}$.
This attack, referred to as PGD in the literature, is one of the most powerful attacks known to date. \citet{gowal2019alternative} further improved it by running a targeted PGD over a set of classes and choosing the best performing example, showing a notable improvement in different settings.
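For reference, a minimal PyTorch sketch of the $L_\infty$ PGD attack described above (ours; pixel values are assumed to lie in $[0,1]$):
\begin{verbatim}
import torch

def pgd_attack(model, loss_fn, x, y, eps=8/255, alpha=2/255, k=7):
    x_adv = x.clone().detach()
    for _ in range(k):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()            # ascent step
            x_adv = x + torch.clamp(x_adv - x, -eps, eps)  # projection
            x_adv = torch.clamp(x_adv, 0.0, 1.0).detach()
    return x_adv
\end{verbatim}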
C\&W \cite{carlini2017towards} is a family of attacks using other norm constraints, in particular, the $L_0$, $L_2$ and $L_\infty$ norms. They solve a minimization problem
\begin{align}
\min &\norm{\vb{\delta}} + c \mathcal{L}(\vb{x} + \vb{\delta}).
\end{align}
In contrast to FGSM and PGD, which have a strict bound on the attack norm, the C\&W attack can work in unbounded settings, seeking a minimal perturbation that achieves the misclassification.
\citet{rony2019decoupling} proposed to improve this method via decoupling direction and norm (DDN) optimization, motivated by the fact that finding the adversarial example in a predefined region is a simpler task. The attack iteratively changes the norm depending on the success of a previous step:
\begin{align}
\hat{\vb{x}}^k &= \Pi_{B(\vb{x}, \epsilon_k)} \qty[\hat{\vb{x}}^{k-1} + \alpha \cdot\grad_{\vb{x}}\mathcal{L}]\\
\epsilon_k &= \qty(1 + s\cdot \gamma) \epsilon_{k-1},
\end{align}
where $s= -1$ if $\hat{\vb{x}}^{k-1}$ is misclassified and $s=1$ otherwise.
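The radius schedule alone can be written compactly; a sketch (ours, with $\gamma$ left as a free hyperparameter):
\begin{verbatim}
def ddn_radius_update(eps_prev, misclassified, gamma=0.05):
    # Shrink the ball after a successful (misclassifying) step,
    # grow it otherwise; gamma is a free step factor.
    s = -1.0 if misclassified else 1.0
    return (1.0 + s * gamma) * eps_prev
\end{verbatim}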
\paragraph{Black box attacks.}
The simplest way to attack a model $\mathcal{F}$ without accessing its gradients is to train a substitute model $\mathcal{F}'$ to predict the outputs of $\mathcal{F}$ \cite{papernot2017practical} and then use its gradients to apply any of the available white box attacks. \citet{liu2016delving} extended this idea to transferring adversarial examples from one model (or an ensemble of models) to another, not necessarily distilled from one another.
Other works proposed alternative methods for estimating the gradient. ZOO \cite{chen2017zoo} made a numerical estimation, NATTACK \cite{li2019nattack} uses natural evolution strategies \cite{wierstra2008natural}, and BayesOpt \cite{anonymous2020bayesopt} employed Gaussian processes. A detailed review of these strategies is beyond the scope of this paper.
\paragraph{Adversarial defenses.}
\citet{szegedy2013intriguing} proposed to generate adversarial examples during training and use them for training adversarially robust models, optimizing the loss
\begin{align}
\mathcal{L}_{\text{adv}}(\vb{x},y) = (1-w) \cdot \mathcal{L}_{CE}(f(\vb{x}),y) + w \cdot \mathcal{L}_{CE}(f(\vu{x}),y), \label{eq:adv_loss}
\end{align}
where $\mathcal{L}_{CE}$ is the cross-entropy loss, $f$ is the classifier, $\vb{x}$ is a training instance with the label $y$, $\vu{x}$ is the corresponding adversarial example, and $w$ is a hyperparameter usually set to $w=0.5$.
This method is particularly convenient if the generation of adversarial examples is fast \cite{goodfellow2014explaining}. When combined with stronger attacks, it provides a powerful baseline for adversarial defenses \cite{madry2018towards}, and is often utilized as part of defense procedures.
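In code, the mixed objective amounts to a weighted sum of two cross-entropy terms; a PyTorch sketch (ours; \texttt{x\_adv} can come from any attack, e.g. the PGD sketch above):
\begin{verbatim}
import torch.nn.functional as F

def adversarial_loss(model, x, x_adv, y, w=0.5):
    return ((1 - w) * F.cross_entropy(model(x), y)
            + w * F.cross_entropy(model(x_adv), y))
\end{verbatim}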
Many works proposed improvements over regular adversarial training by applying stronger attacks during the adversarial training phase. For example, \citet{khoury2019adversarial} proposed to use Voronoi cells instead of $\epsilon$-balls as a possible space for adversarial examples in the training phase.
\citet{liu2019training} added adversarial noise to all the activations, not only to the input.
\citet{jiang2018learning} proposed using a learning-to-learn framework, training an additional DNN to generate adversarial examples, which is used to adversarially train the classifier, resembling GAN training. \citet{balaji2019instance} heuristically updated the per-image attack strength $\epsilon_i$, decreasing it if the attack succeeded and increasing otherwise. \citet{pmlr-v97-wang19i} proposed to gradually increase the strength of the attacks based on a first-order stationary condition for constrained optimization.
Randomization of the neural network can be a very powerful adversarial defense since, even if provided access to gradients, the attacker does not have access to the network, but rather some randomly perturbed version thereof. One of the first works involving randomization \cite{zheng2016improving} proposed to improve robustness by
reducing the distance between two samples differing by a normally distributed variable with a small variance. \citet{zhang2019defending} proposed to add normal noise to the input, which was shown to reduce the Kullback-Leibler divergence between the clean and the adversarially-perturbed inputs.
TRADES \cite{DBLP:conf/icml/ZhangYJXGJ19} uses a batch of randomly perturbed inputs to better cover the neighbourhood of the point. Together with replacing $\mathcal{L}_{CE}(f(\vu{x}),y)$ in \cref{eq:adv_loss} by $\mathcal{L}_{CE}(f(\vu{x}),f(\vb{x}))$, this provided a significant improvement over standard adversarial training. MART \cite{Wang2020Improving} further improves the defense performance by differentiating the misclassified and correctly classified examples.
Another random-based defense based on adversarial training is parametric noise injection (PNI) \cite{rakin2018parametric}. PNI improves network robustness by injecting Gaussian noise into parameters (weights or activations) with learned variance.
\citet{our2020cpni} extended the idea of PNI by proposing to inject low-rank multivariate Gaussian noise instead of independent noise.
\citet{athalye2018obfuscated} observed that many defenses do not improve robustness of the defended network but rather obfuscate gradients, making gradient-based optimization methods less effective. They identified common properties of obfuscated gradients and organized them in a checklist. In addition, they proposed techniques to overcome common instances of obfuscated gradients: in particular, approximating non-differentiable functions with a differentiable substitute and using averaging on the randomized ones.
\emph{Randomized smoothing}
\cite{cohen2019certified} is a method for increasing the robustness of a classifier by averaging its outputs over some random distribution in the input space centered at the input sample. \citet{cohen2019certified} has shown that randomized smoothing is useful for certification of the classifier \cite{wong2018provable} --
that is, proving its performance under norm-bounded input perturbations. In particular, they have shown a tight bound on $L_2$ certification using randomized smoothing with normal distribution.
Consequently, \citet{salman2019provably} used the smoothed classifier to generate stronger ``smoothed'' adversarial attacks, and utilized them for adversarial training. Such training allows the generation of a more accurate base classifier, and as a result, improves certified robustness properties.
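As a reference point for the smoothed inference used later in the paper, a minimal PyTorch sketch of prediction smoothing (ours): class probabilities are averaged over $m$ Gaussian-perturbed copies of the input.
\begin{verbatim}
import torch

def smoothed_predict(model, x, sigma=0.24, m=8):
    probs = 0.0
    with torch.no_grad():
        for _ in range(m):
            noisy = x + sigma * torch.randn_like(x)
            probs = probs + torch.softmax(model(noisy), dim=1)
    return (probs / m).argmax(dim=1)
\end{verbatim}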
\section{Experiments}
\label{sec:exp}
To demonstrate the effectiveness of the proposed method, we evaluated the proposed adversarial defenses on CIFAR-10 and CIFAR-100 under white and black box attacks for ResNet20 and Wide-ResNet34-10. Finally, we performed an extensive ablation study of our method.
\paragraph{Experimental settings.}
We studied the performance of each of the proposed smoothing methods (prediction, soft prediction, and weighted smoothing) over different base models. We first considered implicit regularization methods and combined them with the adversarial training proposed by \citet{madry2018towards}. We trained five such models: adversarially-trained CNI-W (CNI with noise added to weights), and four additional models obtained by fine-tuning the base model with the training methods described in \cref{sec:global}. One of those models is CNI-I+W (CNI with noise added to weights and input), while the remaining ones were obtained by using different attacks during the fine-tuning. We used EPGD attacks, $\textsc{SmoothAdv}_{\mathrm{PGD}}$ attacks, or PGD with a large number of iterations $k$. For EPGD attacks, we selected models based on clean validation set performance: one with the best validation accuracy (labelled \emph{EPGD-converged}) and another with the worst validation accuracy (\emph{EPGD-diverged}). The motivation for considering the latter derives from the fact that PGD and EPGD are highly similar in nature, to the point that PGD converges to EPGD with adversarial training. Therefore, training until convergence may result in a model very similar to CNI-W.
For the PGD attack we used $k=56$ to get computational complexity similar to EPGD. Since this attack is too strong, the base model achieved significantly worse performance on both clean and adversarial data. Nevertheless, after the application of smoothing, this model showed the highest performance on adversarial data
with a somewhat lower accuracy on clean data. Secondly, we used TRADES (model denoted CNI-W+T) to combine both implicit and explicit regularization in a single model, expecting to improve on each of those separately.
For CNI-I+W, we fine-tuned the CNI-W model based on the noise maximizing performance, $\sigma = 0.24$.
For $\textsc{SmoothAdv}_{\mathrm{PGD}}$, we used the method described in \cite{salman2019provably}. We report a number of the results for each base model and smoothing method in \cref{tab:results_wb_cf}, and a comparison of different setups in \cref{fig:CPNI_model,fig:EPGD100_Gauss_model}.
From \cref{fig:EPGD100_Gauss_model} we conclude that the \emph{EPGD-converged} and CNI-W base models result in similar performance for all setups.
For ResNet-20 on CIFAR-10, we used the CNI-W base model \cite{our2020cpni} trained for 400 epochs and chose the model with the highest performance on a validation set.
The obtained model was fine-tuned with EPGD adversarial training, CNI-I+W training or CNI-W with a higher $k$ for up to $200$ epochs. For Wide-ResNet34-10 on CIFAR-100, we used the PNI-W model \cite{rakin2018parametric} trained for 100 epochs and chose the model with the highest performance on a clean validation set. For Wide-ResNet34-10 on CIFAR-10, we used the same training regime, additionally using the PNI-W+Trades model that was obtained by training a PNI-W architecture with the TRADES method instead of adversarial training.
\subsection{Comparison to other adversarial defenses}
\begin{table}
\centering
\caption{
Comparison of our method to prior art on CIFAR-10 on ResNet-20, under PGD attack with the number of iterations $k=7$ and the $\ell_\infty$ radius of attack $\epsilon = \nicefrac{8}{255}$. $^\dagger$ denotes our results based on code provided by the authors or our re-implementation.
Smooth CNI-W uses $k=7$ and $M=512$ with $\sigma=0.24$ for the regular setting and $\sigma=0.00$ for the no-noise setting during inference. For the high $k$ setting, $k=56$ was used for adversarial training and $M=512$, $\sigma=0.01$ during inference.
}
\centering
\begin{tabular}{@{}lcc@{}}
\toprule
\multirow{2}{*}{\textbf{Method}}&\multicolumn{2}{c}{\textbf{Accuracy, mean$\bm{\pm}$std\%} } \\
\cmidrule(r){2-3}
& \textbf{Clean} & \textbf{PGD} \\
\midrule
Adversarial training \cite{madry2018towards}$^\dagger$ & $83.84$ & $39.14\pm0.05$ \\
PNI \cite{rakin2018parametric} & $82.84\pm0.22$ & $46.11\pm0.43$ \\
TRADES ($\nicefrac{1}{\lambda} = 1$) \cite{DBLP:conf/icml/ZhangYJXGJ19}$^\dagger$ & 75.62 & 47.18 \\
TRADES ($\nicefrac{1}{\lambda} = 6$) \cite{DBLP:conf/icml/ZhangYJXGJ19}$^\dagger$ & 75.47 & 47.75 \\
CNI \cite{our2020cpni} & $78.48\pm0.41$ & $48.84\pm0.55$ \\
Smoothed CNI-W (ours) & $81.44\pm 0.06$ & $55.92\pm0.22$ \\
Smoothed CNI-W (no noise, ours) & $\mathbf{84.63\pm0.05}$ &$52.8\pm0.2$ \\
Smoothed CNI-W (high $k$, ours) & $78.10\pm0.12$ & $\mathbf{60.06\pm0.29}$ \\
\bottomrule
\end{tabular}
\label{tab:comp_cifar10}
\end{table}
\begin{table}
\centering
\caption{
Comparison of our method to prior art on CIFAR-10 (a) and CIFAR-100 (b) on Wide-ResNet-34, under PGD attack with $k=10$ and $\epsilon = \nicefrac{8}{255}$. $^\dagger$ denotes our results based on code provided by the authors or our re-implementation. $^+$ denotes our results based on the checkpoint provided by the authors. For CIFAR-10 smooth PNI-W+T and PNI-W, we use $k=10$ and $M=64$ with $\sigma=0.17$ for the regular setting and $\sigma=0.03$ for the low noise setting during inference. For CIFAR-100, we use $k=10$ and $M=16$ with $\sigma=0.05$.
}
\centering
\begin{subfigure}{0.495\linewidth}
\begin{tabular}{@{}lcc@{}}
\toprule
\multirow{2}{*}{\textbf{Method}}&\multicolumn{2}{c}{\textbf{Accuracy, \%} } \\
\cmidrule(r){2-3}
& \textbf{Clean} & \textbf{PGD} \\
\midrule
Adv. training \cite{madry2018towards}$^\dagger$ & $89.68$ & $46.61$ \\
PNI \cite{rakin2018parametric}$^\dagger$ & $89.26$ & $52.9$ \\
IAAT \cite{balaji2019instance} &$\mathbf{91.3}$ & $48.53$ \\
CSAT \cite{sarkar2019enforcing} & $87.65$ & $54.77$ \\
TRADES \cite{DBLP:conf/icml/ZhangYJXGJ19}$^+$ & $84.92$ &$56.5$ \\
MART \cite{Wang2020Improving}$^+$ &$83.62$ &$\mathbf{57.3}$ \\
Smooth PNI-W (ours) & $89.27$ & $54.73$ \\
Smooth PNI-W+T & \multirow{2}{*}{${85.71}$} & \multirow{2}{*}{$56.82$} \\
($\nicefrac{1}{\lambda} = 6$, ours) & & \\
\bottomrule
\end{tabular}
\subcaption{}
\label{tab:comp_cifar10_wide}
\end{subfigure}
\hfill
\begin{subfigure}{0.495\linewidth}
\begin{tabular}{@{}lcc@{}}
\toprule
\multirow{2}{*}{\textbf{Method}}&\multicolumn{2}{c}{\textbf{Accuracy, \%} } \\
\cmidrule(r){2-3}
& \textbf{Clean} & \textbf{PGD} \\
\midrule
IAAT \cite{balaji2019instance} & $\mathbf{68.1}$ & $26.17$ \\
Adv. training \cite{madry2018towards}$^\dagger$ & $65.8$ & $26.15$ \\
PNI \cite{rakin2018parametric}$^\dagger$ & $61.72$ & $27.33$ \\
Smooth PNI-W (ours) & $ 66.30 $ & $\mathbf{29.68}$ \\
\bottomrule
\end{tabular}
\subcaption{}
\label{tab:comp_cifar100}
\end{subfigure}
\end{table}
For ResNet-20 on CIFAR-10, we compared our best-performing instance of the defense (smoothed CNI-W fine-tuned with a high $k$) to the current state-of-the-art. As summarized in \cref{tab:comp_cifar10}, in terms of adversarial accuracy, we outperformed the best existing method by 11.7\%. In addition, we presented a configuration of smoothed classifiers that achieved the highest accuracy on clean data, i.e., CNI-W without noise. \cref{tab:comp_cifar10_wide} presents the results of our method for Wide-ResNet-34 on CIFAR-10 against PGD with $k=10$ steps.
Our smoothed PNI-W+T achieves adversarial accuracy comparable to the previous state-of-the-art. For Wide-ResNet-34 on CIFAR-100, our smoothed PNI-W substantially outperforms prior art in terms of adversarial accuracy, as shown in \cref{tab:comp_cifar100}.
\begin{table}
\centering
\caption{
Results of black-box attacks on our method applied to PNI-W and CNI-W with prediction smoothing: (a) transferable attacks with $\sigma=0.02$;
(b) NAttack. Our Smooth PNI-W and CNI-W methods use $M=1,4,8$ and $\sigma=0.24$. $^\dagger$ denotes our results based on code provided by the authors or our re-implementation. In both cases we present results for ResNet-20 on CIFAR-10.
}
\centering
\begin{subfigure}{0.47\linewidth}
\begin{tabular}{@{}ccc@{}}
\toprule
\multirow{2}{*}{\textbf{Iterations}} &\multicolumn{2}{c}{\textbf{Accuracy, mean$\bm{\pm}$std }} \\
\cmidrule(r){2-3}
& \textbf{PGD} & \textbf{PGD-s} \\
\midrule
4 & $58.01\pm0.35$ & $59.35\pm0.15$ \\
8 &$58.49\pm0.10$ & $60.23\pm0.14$ \\
\bottomrule
\end{tabular}
\subcaption{}
\label{tab:results_bb_cf}
\end{subfigure}
\hfill
\begin{subfigure}{0.52\linewidth}
\begin{tabular}{@{}cc@{}}
\toprule
\textbf{Method} & \textbf{Accuracy, \%} \\
\midrule
Adv. training \cite{madry2018towards}$^\dagger$ & $33.11$ \\
Smooth PNI-W (ours, $M=1$) &$47.17$\\
Smooth PNI-W (ours, $M=4$) &$50.29$\\
Smooth PNI-W (ours, $M=8$) &$50.78$\\
Smooth CNI-W (ours, $M=1$) & $48.83$ \\
Smooth CNI-W (ours, $M=4$) & $50.98$ \\
Smooth CNI-W (ours, $M=8$) & $\mathbf{51.56}$ \\
\bottomrule
\end{tabular}
\subcaption{}
\label{tab:Nattack}
\end{subfigure}
\end{table}
We tested our defense against black-box attacks, in particular, the transferable attack \cite{liu2016delving} and NAttack \cite{li2019nattack}. For the transferable attack, we trained another instance of the CNI-W model and used it as a source model in two configurations: PGD with and without smoothing. The results are reported in \cref{tab:results_bb_cf,tab:Nattack}. For the transferable attack, we conclude that our model performs well even if the source model is not smoothed, which is an argument against a randomized gradient obfuscation effect. The NAttack comparison shows that our method substantially outperforms the baseline benchmarks.
\subsection{Ablation study}
We compared different numbers of Monte Carlo samples, $M=2^n$, $n\in \{0,\dots,9\}$.
For each value of $M$, we considered multiple noise standard deviations in the range $[0,0.5]$; we chose this upper bound because stronger noise significantly degrades the model's performance.
The best results were obtained with prediction smoothing, even for a low number of samples. Although the advantage is of the order of a single standard deviation, this difference is persistent across base models, numbers of iterations, and attack types, and thus is likely to be systematic. This could indicate that attackers exploit the additional information exposed by the other smoothing methods better than the defender does.
\begin{table*}
\centering
\caption{Results of Smooth CNI-W for the PGD white-box attacks on CIFAR-10. Mean and standard deviation over five runs are presented in the form mean$\pm$std. }
\begin{tabular}{@{}llcccc@{}}
\toprule
\multirow{2}{*}{\textbf{Base Model}} & \multirow{2}{*}{\textbf{Smoothing}} & \multirow{2}{*}{\textbf{Iterations}} & \multirow{2}{*}{\textbf{Noise}} &\multicolumn{2}{c}{\textbf{Accuracy, mean$\bm{\pm}$std }} \\
\cmidrule(r){5-6}
&&&& \textbf{Clean} & \textbf{PGD} \\
\midrule
CNI-W & Prediction & 512 & 0.0 & $\mathbf{84.63\pm0.05}$ & $52.8\pm0.29$\\
CNI-W & Prediction & 512 & 0.24 & $81.44\pm0.06$ & $55.92\pm0.22$ \\
EPGD-diverged & Prediction & 256 & 0.0 & $83.05\pm0.01$ & $57.31\pm0.22$\\
EPGD-diverged & Prediction & 256 & 0.18 & $81.07\pm0.01$ & $58.56\pm0.14$ \\
EPGD-diverged & Soft & 512 & 0.0 & $83.13\pm0.07$ & $57.07\pm0.15$ \\%& $44.47\pm0.13$ \\
EPGD-diverged & Soft & 512 & 0.19 & $80.98\pm0.07$ & $58.32\pm0.33$ \\% & $45.61\pm0.18$ \\
EPGD-diverged & Weighted & 256 & 0.19 & $80.98\pm0.07$ & $58.47\pm0.18$ \\%& $45.5\pm0.2$ \\
EPGD-converged & Prediction & 256 & 0.22 & $81.82\pm0.05$ & $55.9\pm0.37$ \\
CNI-I+W & Prediction & 256 & 0.37 & $81.25\pm0.06$ & $55.23\pm0.37$\\% & $44.53\pm0.11$ \\
CNI-W ($k=56$) & Prediction & 512 & 0.11 & $77.28\pm0.05$ & $\mathbf{60.54\pm0.26}$ \\% & $44.53\pm0.11$ \\
\bottomrule
\end{tabular}
\label{tab:results_wb_cf}
\end{table*}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{figures/Gauss_EPGD_pgd_merged.pdf}
\caption{Accuracy as a function of injected noise strength with different numbers of samples for CNI-I+W (left) and EPGD-diverged (right) with prediction smoothing for PGD.
}
\label{fig:EPGD100_Gauss_model}
\end{figure*}
For the CNI-W and EPGD models, accuracy on clean data drops as the noise variance increases.
The CNI-I+W model is more resilient to noise injection, attaining its maximum at a roughly 40\% higher standard deviation of the injected noise.
This may be related to the fact that the certified radius is half the standard deviation of the noise used in the smoothed classifier.
In contrast to the certified robustness scheme \cite{cohen2019certified}, we did not observe an improvement of the CNI-I+W model over CNI-W, which might be a result of the interference between the CNI-induced noise and the Gaussian noise.
\section{Randomized smoothing as an adversarial defense}
\label{sec:local}
A smooth classifier $\Tilde{f}_{\vb{\theta}}$ is a map assigning an input $\vb{x}$ the class label that the base classifier $f_{\vb{\theta}}$ is most likely to return for $\vb{x}$ under random perturbation $\eta$,
\begin{align}
\Tilde{f}_{\vb{\theta}}(\vb{x}) &= \arg\max\limits_y P (f_{\vb{\theta}}(\vb{x}+\vb{\eta}) = y) = \arg\max\limits_y \int\limits_{\mathbb{R}^n} \dd{\vb{\eta}} p(\eta)\mathds{1}\qty[f_{\vb{\theta}}(\vb{x}+\vb{\eta}) = y], \label{eq:certified}
\end{align}
where $\vb{\eta}$ is some random vector and $p$ is its density function.
Since the integral (\ref{eq:certified}) is intractable, it should be approximated with the Monte Carlo method, i.e., by averaging over a number of points sampled independently from the distribution. We denote by $M$ the number of samples used for such an approximation.
In its most general form, the smoothed model output can be written as
\begin{align}
\Tilde{f}_{\vb{\theta}}(\vb{x}) = \arg\max\limits_y \int\limits_{\mathbb{R}^n} \dd{\vb{\eta}} p_{\vb{\eta}}(\eta)V( p_{iy} (\vb{x}) ) \approx \arg \max\limits_y \sum_{i=1}^M V( p_{iy} (\vb{x}) ),
\end{align}
where $V$ is some voting function and $p_{iy}(\vb{x})$ is the probability of class $y$ being predicted for the $i$-{th} sample $\vb{x}+\vb{\eta}_i$.
Since the smoothing is independent of the architecture of the base classifier $f_{\vb{\theta}}$, we can take any (robust) model and test the overall improvement provided by smoothing.
In contrast to previous works that also employed randomized smoothing \cite{cohen2019certified,salman2019provably}, we use it to improve the empirical adversarial robustness rather than the certified one. While certification is a very strong and desired guarantee, researchers were unable to achieve practical $\ell_\infty$ certification radii and it is unclear whether this is possible at all \cite{blum2020random,yang2020randomized}. Nevertheless, smoothing can still be employed as a practical method of improving empirical performance of the classifier.
In what follows, we describe three different implementations of smoothing, differing in their voting function $V$.
\paragraph{Prediction smoothing.}
The simplest possible way to use multiple predictions is to perform prediction voting, i.e., to output the most frequent prediction among the samples. In other words, we take $V$ to be the following indicator function:
\begin{align}
V(p_{iy}) &= \mathds{1} \qty[\arg \max\limits_{y'} p_{iy'} = y],
\end{align}
yielding the following smoothed classifier,
\begin{align}
\Tilde{f}_{\vb{\theta}}(\vb{x}) &= \arg \max\limits_y \sum_{i=1}^M \mathds{1} \qty[\arg \max\limits_{y'} f_{y'}(\vb{x}+\vb{\eta}_i) = y],
\end{align}
where $f_{y'}(\vb{x})$ denotes the predicted probability that input $\vb{x}$ will belong to class $y'$.
This is the randomized smoothing previously discussed by \citet{cohen2019certified} and \citet{salman2019provably} for certified robustness. In \cref{sec:exp} we show that by taking into account $M$ predictions, the accuracy of the classifier increases on both clean and adversarially-perturbed inputs.
It is important to emphasize that this voting scheme only takes into account the classification of each of the $M$ samples, discarding the predicted probabilities of each class.
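As an illustration, the following minimal sketch implements this voting scheme, assuming a PyTorch base classifier \texttt{f} that returns logits; the function name and the defaults $M=512$, $\sigma=0.24$ are ours, chosen to match the inference settings reported above.
\begin{verbatim}
import torch

def prediction_smoothing(f, x, num_classes, M=512, sigma=0.24):
    """Majority vote over M Gaussian-perturbed copies of the input x."""
    votes = torch.zeros(x.shape[0], num_classes, device=x.device)
    with torch.no_grad():
        for _ in range(M):
            eta = sigma * torch.randn_like(x)   # eta ~ N(0, sigma^2 I)
            preds = f(x + eta).argmax(dim=1)    # hard per-sample prediction
            one = torch.ones_like(preds, dtype=votes.dtype)
            votes.scatter_add_(1, preds.unsqueeze(1), one.unsqueeze(1))
    return votes.argmax(dim=1)                  # most frequent class wins
\end{verbatim}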
\paragraph{Weighted smoothing.}
The former method can be generalized by assigning some weight to top-$k$ predictions. Let us denote by $k(p_{iy})$ the rank of class $y$ as predicted by $f_{\vb{\theta}}(\vb{x}+\vb{\eta}_i)$, i.e., $k=1$ for the most probable class, $k=2$ for the second most probable one, and so on. Then, for example, the following choices of $V$ are possible:
\begin{align}
V(p_{iy}) &= 2^{1-k(p_{iy})}, \,\,\, \mathrm{or}\\
V_C(p_{iy}) &= \begin{cases}
1 & k(p_{iy})=1\\
C & k(p_{iy})=2\\
0 & \text{otherwise.}
\end{cases}
\end{align}
In particular, $V_C$ expresses the dependency on the second most probable class noted by \citet{cohen2019certified}.
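A short sketch of both voting functions, assuming the per-sample logits for one input are stacked in a tensor of shape $(M, C)$; the function name and the default weight \texttt{C=0.5} are ours:
\begin{verbatim}
import torch

def weighted_smoothing(noisy_logits, C=0.5, scheme="exp"):
    """noisy_logits: (M, num_classes) logits for one input.
    Ranks classes per sample (k=1 for the most probable) and sums
    V(k) = 2**(1-k) ("exp") or V_C(k) = 1, C, 0 for k = 1, 2, >2."""
    order = noisy_logits.argsort(dim=1, descending=True)
    ranks = order.argsort(dim=1) + 1            # k-rank of each class
    if scheme == "exp":
        V = 2.0 ** (1 - ranks.float())
    else:
        V = (ranks == 1).float() + C * (ranks == 2).float()
    return V.sum(dim=0).argmax()                # smoothed prediction
\end{verbatim}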
\paragraph{Soft prediction smoothing.}
In this case, we calculate the expectation of probability for each class and use them for prediction generation:
\begin{align}
\Tilde{f}_{\vb{\theta}}(\vb{x}) = \arg \max\limits_y \sum_{i=1}^M \mathrm{softmax}\qty( f_{y}(\vb{x}+\vb{\eta}_i) ).
\end{align}
This method was previously mentioned by \citet{salman2019provably} as a way to apply adversarial training to a smoothed classifier. Since the probabilities for each class are now differentiable, this allows the classifier to be trained end-to-end.
In contrast to prediction smoothing, we now
fully take into account the predicted class probabilities of each of the $M$ samples.
When used as an adversarial defense, however, soft smoothing is easier for attackers to overcome. Even if the attack is not successful, the probability of competing classes increases, which means that even an unsuccessful attack on the base model does affect the prediction of the smoothed model. In \cref{subsec:soft_training} we describe how this kind of smoothing can be used to create stronger attacks that can be used for adversarial training.
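Since the voting function is now a softmax, the smoothed output can be written in a few differentiable lines; a PyTorch sketch under the same assumptions as before, with illustrative defaults:
\begin{verbatim}
import torch
import torch.nn.functional as F

def soft_smoothing(f, x, M=8, sigma=0.24):
    """Average of per-sample softmax probabilities; differentiable in x,
    so it can be trained end-to-end -- and attacked -- through gradients."""
    probs = torch.stack([F.softmax(f(x + sigma * torch.randn_like(x)), dim=1)
                         for _ in range(M)])    # (M, batch, classes)
    return probs.mean(dim=0)                    # smoothed class probabilities
\end{verbatim}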
\subsection{Relation to implicit regularization}
\label{subs:input_noise}
In the case of prediction smoothing (or any other case in which $V$ is not differentiable), we cannot train the smoothed model directly. We, therefore, would like to train the base model in a way that optimizes the loss of the smoothed model.
To this end, we use the 0-1 loss of $n$ training samples
\begin{align}
&\mathcal{L}_{01} = \sum_{i=1}^n \ell_{01} (y_i, \tilde{f}_{\vb{\theta}}(\vb{x}_i)),
\end{align}
with the pointwise terms
\begin{align}
& \ell_{01} (y_i, \tilde{f}_{\vb{\theta}}(\vb{x}_i)) = 1 - \mathds{1}\qty[y_i = \arg\max\limits_{y\in \mathcal{Y}} P_{\eta} \qty\big[f_{\vb{\theta}} (\vb{x}_i + \vb{\eta}) = y]],
\end{align}
minimized over the model parameters $\vb{\theta}$.
Denoting for brevity $P_y = P_{\eta} \qty\big[f_{\vb{\theta}}(\vb{x}_i + \vb{\eta}) = y]$,
we can approximate the indicator as
\begin{align}
&\mathds{1}\qty[y_i = \arg\max\limits_{y\in \mathcal{Y}} P_y] =
\mathds{1}\qty[P_{y_i} \geq \max_{y'\in \mathcal{Y}\setminus \qty{ y_i}} P_{y'}] \approx\\
\approx& \mathrm{ReLU}\qty[ P_{y_i} - \max_{y'\in \mathcal{Y}\setminus \qty{ y_i}} P_{y'}] =
\max_{y\in \mathcal{Y}}\qty[P_y - \max_{y'\in \mathcal{Y}\setminus \qty{ y_i}} P_{y'}], \label{eq:01loss}
\end{align}
where we approximate the Heaviside function $\mathds{1}$ with the better-behaved ReLU function on the interval $[-1,1]$. The last equality holds because the maximizing $y$ coincides with the maximizing $y'$ unless $y_i$ is the most probable class.
The expression in \cref{eq:01loss} resembles the bound that \citet{cohen2019certified} have suggested for the radius of certification under adversarial attacks.
We now show a relation to training the base model under perturbation, similarly denoting $\mathds{1}_y = \mathds{1}\qty[f_{\vb{\theta}}(\vb{x}_i + \vb{\eta}) = y]$:
\begin{align}
\ell_{01} (y_i, \Tilde{f}_{\vb{\theta}}(\vb{x}_i)) =
1 -
\max_{y\in \mathcal{Y}} \mathbb{E}_{\eta}
\qty[\mathds{1}_y -
\max_{y'\in \mathcal{Y}\setminus \qty{ y_i}}
\mathds{1}_{y'}].
\end{align}
Written in this form, the 0-1 loss is now amenable to Monte Carlo approximation; however, working with such a non-convex loss is problematic. We, therefore, bound the $\max$ over the expectation by the expectation over the $\max$,
\begin{align}
\ell_{01} (y_i, \Tilde{f}_{\vb{\theta}}(\vb{x}_i)) \le
1 -
\mathbb{E}_{\eta} \max_{y\in \mathcal{Y}}
\mathds{1}_y +
\mathbb{E}_{\eta} \max_{y'\in \mathcal{Y}\setminus \qty{ y_i}}
\mathds{1}_{y'}.
\end{align}
Since the classification events are disjoint, we obtain
\begin{align}
\mathcal{L}_{01} &= \mathbb{E}_{\eta}\qty[\sum_{i=1}^n \left( 1 - \sum_{y\in \mathcal{Y}} \mathds{1}_y + \sum_{y'\in \mathcal{Y}\setminus \qty{ y_i}} \mathds{1}_{y'} \right) ] =\\&= \mathbb{E}_{\vb{\eta}}\qty[\sum_{i=1}^n 1 - \mathds{1}_{y_i}] =
\mathbb{E}_{\vb{\eta}}\sum_{i=1}^n\ell_{01} (y_i, f_{\vb{\theta}}(\vb{x}_i)), \label{eq:l01_final}
\end{align}
which is the 0-1 loss of the base classifier under Gaussian perturbation.
In addition, for the $\ell$-th layer, we can rewrite the network inference as
\begin{align}
f(\vb{x}_i) = f_{2} \circ f_{1} (\vb{x}_i) = f_{2} (\vb{x}'_i),
\end{align}
where $f_{1}$ denotes the first $\ell-1$ layers and $f_2$ stands for the rest of the network. Repeating the computation for $f_2$ and $\vb{x}'_i$ instead of $f$ and $\vb{x}_i$ shows that implicit regularization in the form of injecting Gaussian noise in any layer is beneficial for the smoothed classifier.
\paragraph{Noise strength learning.}
Having discussed the importance of noise injection for our method, we note that choosing the variance of the noise is not a trivial task. The optimal noise is dependent on both the architecture of the network and the task. Therefore, while considering the combination of our method with implicit regularization, we allow the variance of the noise injected into the layers to be learned as in PNI \cite{rakin2018parametric}.
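A minimal sketch of noise injection with a learnable strength follows; note that PNI's actual parametrization ties the noise scale to weight statistics, so the module below is an illustration of the idea rather than a reproduction:
\begin{verbatim}
import torch
import torch.nn as nn

class NoisyLayer(nn.Module):
    """Wraps a layer and injects Gaussian noise with learnable strength."""
    def __init__(self, layer, init_sigma=0.1):
        super().__init__()
        self.layer = layer
        self.log_sigma = nn.Parameter(torch.tensor(float(init_sigma)).log())

    def forward(self, x):
        if self.training:                # noise injected only during training
            x = x + self.log_sigma.exp() * torch.randn_like(x)
        return self.layer(x)
\end{verbatim}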
\section{Training a smoothed classifier}
\label{sec:global}
\citet{salman2019provably} have shown that incorporating the smoothing into the training procedure of the base classifier improves the aforementioned certification guarantees. We also use a similar concept to improve the empirical accuracy of the smoothed classifier.
We consider two approaches to training the smoothed classifier: one based on implicit regularization and prediction smoothing, and another one based on soft prediction smoothing. In both cases, we start from an adversarially pre-trained base model and fine-tune it to improve the robustness of the smooth classifier.
\subsection{Prediction smoothing training}
\label{subsec:training}
From \cref{eq:l01_final} we conclude that by injecting Gaussian noise into the base classifier layers,
we minimize the loss of the smoothed counterpart. The way the injected noise propagates through the layers of the network, however, renders the noise injected into the input layer particularly important.
In addition, smoothing is inherently dependent on the strength of the noise injected during the process, and we show that similar models tend to achieve the best results around specific values of noise strength. We, therefore, fine-tune the classifier by training under noise injection of the same type and strength.
$\vb{\eta}$ is not restricted to Gaussian distributions as long as the expectations in the derivations presented in the previous sections exist. One can furthermore combine Gaussian noise injection with adversarial training by letting
\begin{align}
\vb{\eta} & \sim \mathcal{N}(\vb{0}, \sigma^2\vb{I} ) + B \cdot \vb{\delta} \nonumber\\
B &\sim \mathrm{Ber}(q),
\end{align}
where $\vb{\delta}$ is the adversarial attack and $q$ is the weight of adversarial samples in the training procedure. Such adversarial training under Gaussian perturbations relates to minimizing the 0-1 loss of the smoothed classifier under adversarial attack. We refer to this method as CNI-I+W (input + weight noise).
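A sketch of sampling this mixed perturbation for a batch of inputs; the Bernoulli draw is taken per input, and the default values of $\sigma$ and $q$ are illustrative:
\begin{verbatim}
import torch

def cni_iw_noise(x, delta, sigma=0.24, q=0.5):
    """eta ~ N(0, sigma^2 I) + B * delta with B ~ Ber(q), mixing the
    Gaussian smoothing noise with an adversarial perturbation delta."""
    B = torch.bernoulli(torch.full((x.shape[0],), q, device=x.device))
    B = B.view(-1, *([1] * (x.dim() - 1)))   # broadcast over pixel dims
    return sigma * torch.randn_like(x) + B * delta
\end{verbatim}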
\subsection{Soft smoothing training}
\label{subsec:soft_training}
The smoothing of the classifier gives rise to a new family of attacks targeting the smoothed classifier rather than the base model. In the case of soft smoothing, the output of the smoothed classifier is differentiable and thus can be used as a source of gradients for white box attacks. \citet{salman2019provably} discussed this family of attacks as a way to improve the generalization of a base model and thus improve certified accuracy of a smoothed model.
We consider adversarial training with such attacks as a way to directly optimize the smoothed model by training the base model. We expect it to increase the adversarial robustness of both the base and the smoothed models. Specifically, we consider PGD attacks where the gradients are computed based on several randomized samples. In our experiments, we chose to limit the number of samples to $M=8$, since further improvement in the prediction ability is counter-balanced by the increase in the computational complexity.
\paragraph{Expectation PGD.}
The aforementioned soft smoothing was previously explored in various ways. \citet{kaur2019perceptually} showed that targeted attacks on softly smoothed models have perceptually-aligned gradients, i.e., they perturb the input to make it resemble a member of a different class, even for models that were not adversarially trained. In contrast, non-smoothed classifiers exhibit this phenomenon only in adversarially-trained models.
In general, perceptual alignment of the gradients is indirect evidence of model robustness \cite{ilyas2019adversarial,nakkiran2019a}. We expect such a perturbation, if strong enough, to be able to attack even a perfect classifier. Such phenomena might indicate that adversarial training of the model makes the adversarial attacks converge to soft smoothing-based attacks, which use the spatial information in proximity to the data point.
An important difference between the $\textsc{SmoothAdv}_{\mathrm{PGD}}$ attack considered by \citet{salman2019provably} and \citet{kaur2019perceptually}, and the \emph{expectation PGD} (EPGD) attack proposed in this paper is the order of the averaging and the softmax operator:
\begin{align}
g_{\text{SmoothAdv}}(\vb{x}) &= \mathrm{softmax}\qty( \frac{1}{M} \sum_{i=1}^M f_{c}(\vb{x} + \vb{\eta}_i) ) \,\,\, \mathrm{vs.} \\
g_{\text{EPGD}}(\vb{x}) &= \sum_{i=1}^M \mathrm{softmax}\qty( f_{c}(\vb{x} + \vb{\eta}_i) ).
\end{align}
Notice that an EPGD attack with $k$ PGD steps and $M$ samples requires computing the gradients $k\times M$ times, making its computational cost comparable with that of a PGD attack with $k\times M$ steps. Therefore, we compare the accuracy of PGD, EPGD and $\textsc{SmoothAdv}_{\mathrm{PGD}}$ \cite{salman2019provably} as a function of $k \times M$ and find that each has a distinct behavior.
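The difference in ordering amounts to a one-line change when computing the attack objective; a PyTorch sketch (the use of cross-entropy as the attack loss is our choice for illustration):
\begin{verbatim}
import torch
import torch.nn.functional as F

def smoothadv_loss(f, x, y, M=8, sigma=0.24):
    # softmax of the *averaged logits* (SmoothAdv ordering)
    logits = torch.stack([f(x + sigma * torch.randn_like(x))
                          for _ in range(M)])
    return F.cross_entropy(logits.mean(dim=0), y)

def epgd_loss(f, x, y, M=8, sigma=0.24):
    # average of the *per-sample softmaxes* (EPGD ordering)
    probs = torch.stack([F.softmax(f(x + sigma * torch.randn_like(x)), dim=1)
                         for _ in range(M)])
    return F.nll_loss(probs.mean(dim=0).log(), y)
\end{verbatim}
Either loss can drive the PGD steps through its gradient with respect to $\vb{x}$.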
\subsubsection*{Acknowledgments}
The research was funded by the Hyundai Motor Company through the HYUNDAI-TECHNION-KAIST Consortium, National Cyber Security Authority, and the Hiroshi Fujiwara Technion Cyber Security Research Center.
\clearpage
\bibliographystyle{splncs04}
\section*{Introduction}
The growth of the amount of stored information in large scale distributed and cloud storage systems makes the loss of data due to node failures a major problem. To obtain a reliable storage, when a node fails, we want to recover the data it contains
by using information from other nodes. This is the {\em repair problem}. A naive solution is to replicate the information across several nodes. A cleverer method is to protect the data using error-correcting codes, which has led to the introduction of locally recoverable (LRC) codes \cite{GHSY}. LRC codes are error-correcting codes for which one or more erased coordinates of a codeword can be recovered from a set of other coordinates in that codeword. As typical examples of this solution we can mention Google and Facebook storage systems that use Reed-Solomon (RS) codes to protect the information. The procedure is as follows: the information to be stored is a long sequence $b$ of elements belonging to a finite field $\mathbb{F}_{p^l}$, where $p$ is a prime number. This sequence is divided into blocks, $b = b_1, b_2, \ldots, b_z$, of the same length $h$. According to the isomorphism $\mathbb{F}_{p^l}^h \cong \mathbb{F}_{p^{lh}}$, each of these blocks can be seen as an element of the finite field $\mathbb{F}_q$, with $q= p^{s}$ and $s=lh$. Fix an integer $k < q$. The vector $(b_1, b_2, \ldots, b_k) \in \mathbb{F}_q^k$ is encoded by using a Reed-Solomon code of dimension $k$ over $\mathbb{F}_q$, whose length $n$, $k < n \le q$, is equal to the number of nodes that will be used in its storage. We choose $\alpha_1, \alpha_2, \ldots, \alpha_n \in \mathbb{F}_q$ and send
$$
f(\alpha_i)=b_1 + b_2\alpha_i + \dots + b_k \alpha_i^{k-1}
$$
to the $i$-th node. Even if a node fails, we may recover the stored data $(b_1, b_2, \ldots, b_k)$ by using Lagrangian interpolation from any other $k$ available nodes.
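The following toy sketch reproduces this storage and recovery scheme over a small prime field ($q=97$ and seven nodes, chosen purely for readability; real systems work in extension fields $\mathbb{F}_{p^s}$, where the interpolation logic is the same):
\begin{verbatim}
q, k = 97, 3
b = [11, 22, 33]                           # data block (b_1, ..., b_k)
alphas = list(range(1, 8))                 # evaluation points = 7 nodes
f = lambda x: sum(c * pow(x, i, q) for i, c in enumerate(b)) % q
nodes = {a: f(a) for a in alphas}          # symbol stored at each node

def recover(points):
    """Lagrange interpolation: rebuild f (hence b) from any k nodes."""
    coeffs = [0] * k
    for ai, yi in points:
        basis = [1] + [0] * (k - 1)        # running product, low-to-high
        denom = 1
        for aj, _ in points:
            if aj == ai:
                continue
            denom = denom * (ai - aj) % q
            # multiply the running polynomial by (x - aj)
            basis = [(-aj * basis[t] + (basis[t - 1] if t else 0)) % q
                     for t in range(k)]
        inv = pow(denom, q - 2, q)         # Fermat inverse in F_q
        coeffs = [(c + yi * inv * bt) % q for c, bt in zip(coeffs, basis)]
    return coeffs

surviving = list(nodes.items())[:k]        # any k of the 7 nodes suffice
assert recover(surviving) == b
\end{verbatim}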
Note that this method is wasteful, since $k$ symbols over $n$ nodes must be used to recover just one erasure.
Of course other error-correcting codes, apart from RS codes, can be used to deal more efficiently with the repair problem. Thus, in terms of coding theory the repair problem can be stated as follows: let $\mathcal{C}$ be a linear code of length $n$ and dimension $k$ over $\mathbb{F}_q$. A coordinate $i \in \{1, 2, \ldots, n\}$ is locally recoverable if there is a recovery set $R=R(i) \subset \{1, 2, \ldots, n\}$ such that $i \notin R$ and for any codeword $\boldsymbol{x} \in \mathcal{C}$, an erasure at position $i$ of $\boldsymbol{x}$ can be recovered by using the information given by the coordinates of $\boldsymbol{x}$ with indices in $R$. The locality of the coordinate $i$ is the smallest size of a recovery set for $i$. The code $\mathcal{C}$ is {\em locally recoverable} (LRC) if each coordinate is so, and the {\it locality} of $\mathcal{C}$ is the maximum locality of its coordinates. In Section \ref{secuno} we shall specify these definitions. Note that strictly speaking, all codes $\mathcal{C}$ of minimum distance $d (\mathcal{C})> 1$ are locally recoverable; just take $\{1, 2, \ldots,n\} \setminus\{ i\}$ as a recovery set for coordinate $i$. However we are interested in codes admitting recovery sets as small as possible. Thus, in practice we restrict ourselves to codes with `moderate' localities. In general, the locality $r$ of an LRC code $\mathcal{C}$ with parameters $[n,k,d]$ is upper-bounded as $r\le k$. For example, MDS codes (RS codes in particular) of dimension $k$ have locality $k$. Several lower bounds on $r$ are known. The most commonly used is the Singleton-like bound (\ref{Singd1Eq}).
Among the different classes of codes considered as good candidates for local recovering, cyclic codes and subfield-subcodes of cyclic codes play an important role, because the cyclic shifts of a recovery set again provide recovery sets \cite{CXHF,GC,HYUS,TBGC}.
In this article we continue this line of research by using the very general language of affine variety codes. We consider specific $J$-affine variety codes, introduced in \cite{QINP}, whose subfield-subcodes provide LRC codes. These subfield-subcodes have large lengths over fields $\mathbb{F}_q$, and Theorems \ref{el21} and \ref{el25} provide bounds on their localities.
A variant of LRC codes was introduced in \cite{PGLK}. As multiple device failures may occur simultaneously, it is of interest to consider LRC codes correcting more than one erasure. This idea leads to the concept of localities $(r,\delta)$ of an LRC code $\mathcal{C}$, which measure the recovery capability of $\mathcal{C}$ when at most $\delta-2$ erasures occur in a recovery set (see Section \ref{secuno} for a rigorous definition). LRC codes for multiple erasures have been subsequently studied in \cite{CXHF, FF, ABHM}. In \cite{CXHF} the authors constructed some classes of such LRC codes over $\mathbb{F}_q$, with lengths $n$ such that either $n| q-1$ or $n| q+1$. Codes of similar type and unbounded length were given in \cite{FF}. Here $\delta=d-1,d-2$ or $\delta=d/2$, where $d$ stands for the minimum distance.
The localities $(r,\delta)$ of an LRC code satisfy a Singleton-like bound ((\ref{SingdeltaEq}) in Section \ref{secuno}). Codes reaching equality for some $(r,\delta)$, are called {\em optimal}. For example, the codes in \cite{CXHF, FF} are optimal. Note that, as for the original Singleton bound, the bounds (\ref{Singd1Eq}) and (\ref{SingdeltaEq}) do not depend on the cardinality of the ground field $\mathbb{F}_q$. Some size dependent bounds can be found in \cite{ABHM}.
A somewhat different definition of LRC code with localities $(r,\delta)$
is proposed in \cite{KPLK} for systematic codes. There, the purpose is to repair erasures on the information symbols of a codeword. Other related variants of LRC codes deal with sequential repair of erasures \cite{PLK}, the availability property \cite{WZ}, or the cooperative repair \cite{RMV}.
In this work we use affine variety constructions to obtain LRC codes suitable for multiple erasures, whose localities $(r,\delta)$ behave well (Theorems \ref{teo29} and \ref{la211}). In some cases these codes are optimal
for the Singleton-like bound (\ref{SingdeltaEq}). Compared with the codes shown in \cite{CXHF}, ours are considerably longer, although not optimal in general. Let us recall here that most
good currently known LRC codes have small lengths $n$, in comparison with the cardinality
of the ground field $q$; usually $n < q$, \cite{GXY} (or $n=q+1$ for some codes in \cite{CXHF}). For the opposite, our codes (as is the case with those in \cite{FF}) have lengths $n\gg q$.
The article is organized as follows: in Section \ref{secuno} we recall some basic facts about LRC codes and introduce the concept of $t$-locality. Section \ref{secdos} is devoted to develop and study LRC codes from affine varieties. In Subsection \ref{afine} we introduce $J$-affine variety codes which also gave rise to good quantum error-correcting codes in \cite{galindo-hernando, QINP, QINP2}. In subsections \ref{subsect2} and \ref{subsect3}, we show that subfield-subcodes of several types of $J$-affine variety codes are good LRC codes, and we determine some of their localities $(r,\delta)$.
Finally in Section \ref{sectres} we give examples of LRC codes obtained by our procedure. We list some parameters and localities.
\section{LRC codes}
\label{secuno}
In this section we state some definitions and facts concerning LRC codes that will be necessary for the rest of the work. We mostly follow the usual conventions and definitions of locally recoverable codes. As a notation, given a fixed coordinate $i$ and a set $R$ such that $i\notin R$, we write $\overline{R}=R\cup \{ i\}$. Let $\mathcal{C}$ be an $[n, k, d]$ code over $\mathbb{F}_q$. Let $\boldsymbol{G}$ be a generator matrix of $\mathcal{C}$ with columns $\boldsymbol{c}_1, \boldsymbol{c}_2, \ldots, \boldsymbol{c}_n$. A set $R \subseteq \{1, 2, \dots,n\}$ is a recovery set for a coordinate $i \notin R$ if $\boldsymbol{c}_i \in \langle \boldsymbol{c}_j : j\in R \rangle$, the linear space spanned by $\{\boldsymbol{c}_j \; : \; j\in R\}$. In this case, for any codeword $\boldsymbol{x}\in \mathcal{C}$, $x_i$ can be obtained from the coordinates $x_j$, $j\in R$, just by solving the linear system whose augmented matrix is $(\boldsymbol{c}_j, j\in R\, | \, \boldsymbol{c}_i)$.
Let $R$ be a set of cardinality $\#R=r$ and let $\pi_R:\mathbb{F}_q^n\rightarrow \mathbb{F}_q^r$ be the projection on the coordinates in $R$. For $\boldsymbol{x} \in \mathbb{F}_q^n$ we write $\boldsymbol{x}_R = \pi_R (\boldsymbol{x})$. Often we shall consider the punctured and shortened codes:
\[
\mathcal{C}[R] := \{\boldsymbol{x}_R : \boldsymbol{x} \in \mathcal{C}\} \mbox{ and }
\mathcal{C}[[R]] := \{\boldsymbol{x}_R : \boldsymbol{x} \in \mathcal{C}, \mbox{supp}(\boldsymbol{x}) \subseteq R\},
\]
respectively, where $\mbox{supp}(\boldsymbol{x})$ denotes the {\em support} of $\boldsymbol{x}$, $\mbox{supp}(\boldsymbol{x}):=\{ i : x_i\neq 0\}$. Note that $\boldsymbol{c}_i \in \langle \boldsymbol{c}_j : j\in R \rangle$ if and only if $\dim(\mathcal{C}[R]) = \dim(\mathcal{C}[\overline{R}])$. So the notion of recovery set does not depend on the generator matrix chosen. If $\boldsymbol{c}_i \in \langle \boldsymbol{c}_j : j\in R \rangle$, there exist $w_1, w_2, \ldots, w_n\in \mathbb{F}_q$ such that $\sum_{j=1}^n w_j\boldsymbol{c}_j = 0$ with $w_i \neq 0$ and $w_j = 0$ if $j\notin \overline{R}$. Then $\boldsymbol{w} = (w_1, w_2, \ldots, w_n) \in \mathcal{C}^{\perp}$, the dual of $\mathcal{C}$, and $\boldsymbol{w}_R \in \mathcal{C}^{\perp}[[R]]$. Thus $R$ is a recovery set for the coordinate $i$ if and only if there exists a word $\boldsymbol{w}_R \in \mathcal{C}^{\perp}[[R]]$ with $w_i \neq 0$. In this case $\# R \ge d(\mathcal{C}^{\perp}) -1$.
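In practice, once such a dual word $\boldsymbol{w}$ is known, the repair reduces to a single dot product; a minimal sketch, assuming a prime field so that the inverse can be computed via Fermat's little theorem:
\begin{verbatim}
def recover_coordinate(x_R, w, R, i, q):
    """Given a dual codeword w supported on R + {i} with w[i] != 0,
    solve sum_j w[j] * x[j] = 0 (mod q) for the erased coordinate x_i.
    x_R maps each index j in R to the known symbol x_j; q is prime."""
    s = sum(w[j] * x_R[j] for j in R) % q
    return (-s) * pow(w[i], q - 2, q) % q      # Fermat inverse of w[i]
\end{verbatim}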
The smallest cardinality of a recovery set $R$ for a coordinate $i$ is the locality of $i$. The locality of $\mathcal{C}$, often denoted by $r=r(\mathcal{C})$, is the largest locality of any of its coordinates. Thus, we have proved the following result.
\begin{pro}
\label{d1dual}
The locality $r$ of an LRC code $\mathcal{C}$ satisfies $r\ge d(\mathcal{C}^{\perp})-1$.
\end{pro}
A code $\mathcal{C}$ reaching equality in the bound given by Proposition \ref{d1dual} will be called {\em sharp}. Note that all cyclic codes are sharp. Apart from Proposition \ref{d1dual}, perhaps the most important bound on the locality $r$ of an {LCR} code with parameters $[n,k,d]$ is given by the following Singleton-like inequality, see \cite{GHSY}.
\begin{teo} \label{Singd1}
The locality $r$ of an LRC code $\mathcal{C}$ satisfies
\begin{equation}\label{Singd1Eq}
d+k+\left\lceil \frac{k}{r}\right\rceil \le n+2.
\end{equation}
\end{teo}
The difference between the two terms in Theorem \ref{Singd1}, $D_1:=n+2-d-k-\lceil k/r\rceil$, is the {\it LRC-Singleton defect} of $\mathcal{C}$. Codes with $D_1=0$ are called {\em Singleton-optimal} (or simply {\em optimal}). While optimal LRC codes are known for all lengths $n\le q$, \cite{KTBY}, the problem of finding codes of this type when $n>q$ is currently a challenge \cite{GXY}.
To avoid confusion in what follows, we shall sometimes refer to $r$ as the {\em classical} locality of $\mathcal {C}$.
The LRC codes that we have described above allow local recovery of the information stored in a failed node. However, concurrent failures of several nodes in a network are also possible and not uncommon. This problem was first treated in \cite{PGLK}. According to the definition given in that article, an LRC code $\mathcal{C}$ has locality $(r,\delta)$ if for any coordinate $i$ there exists a set of positions $\overline{R}=\overline{R}(i)\subset \{1, 2, \dots,n\}$ such that
\indent (RD1) $i\in \overline{R}$ and $\# \overline{R}\le r+\delta-1$; and\newline
\indent (RD2) $d(\mathcal{C}[\overline{R}])\ge \delta$.
The sets $\overline{R}$ satisfying the above conditions (RD1) and (RD2) are called $(r,\delta)$ recovery sets. Given such a set $\overline{R}$ and $i\in\overline{R}$, the correction capability of $\mathcal{C}[\overline{R}]$ can be used to correct an erasure at position $i$ plus any other $\delta-2$ erasures in $\overline{R}\setminus\{i \}$.
Notice that the original definition of locality of LRC codes corresponds to the case $\delta = 2$.
Provided that $\delta\ge 2$, any subset $R \subset \overline{R}$ of cardinality $r$ with $i \notin R$, satisfies $d(\mathcal{C}([R]\cup\{ i\}))\ge 2$ and consequently $R$ is a recovery set for $i$. Thus if $\mathcal{C}$ has a locality $(r,\delta)$, then the classical locality of $\mathcal{C}$ is $\le r$ and the number of recovery sets of cardinality $r$ for any coordinate $i$ is at least
$$
\binom{\# \overline{R}-1}{r},
$$
which can be relevant to improve the availability of $\mathcal{C}$ for recovering erasures. We remark that associated to $\mathcal{C}$ we have several localities $(r,\delta)$, corresponding to the $d(\mathcal{C})-1$ values of $\delta=2, 3, \dots,d(\mathcal{C})$. These localities satisfy
the following generalization of the Singleton-like bound of Theorem \ref{Singd1}, which was proved in \cite{PGLK}.
\begin{pro} \label{Singdelta}
Let $\mathcal{C}$ be an LRC code with parameters $[n,k,d]$ and locality $(r,\delta)$. Then, the following inequality holds
\begin{equation} \label{SingdeltaEq}
d+k+\left( \left\lceil \frac{k}{r}\right\rceil-1\right) (\delta-1) \le n+1.
\end{equation}
\end{pro}
Analogously to what was done for the classical locality $r$, for $t=1, 2, \dots, d(\mathcal{C})-1$, in this article we define
\begin{align*}
r_t =r_t(\mathcal{C}):=&\min \big\{ \rho : \mbox{ for all $i=1, 2, \dots,n$, there is a set $\overline{R}_i \subseteq \{1,2, \ldots, n\}$} \nonumber \\
& \qquad {} \mbox{with $i \in \overline{R}_i$, $\# \overline{R}_i\le \rho$ and $d(\mathcal{C}[\overline{R}_i])\ge t+1$} \big\}-1.
\end{align*}
The value $r_t$ is the minimum number of positions, $\# \overline{R}-1$, needed to recover a given coordinate $i\in \overline{R}$ of any codeword $\boldsymbol{x}$, when at most $t$ erasures occur in $\boldsymbol{x}_{\overline{R}}$. Clearly $r_1$ is the classical locality of $\mathcal{C}$. We refer to $r_t$ as the {\em $t$-locality} of $\mathcal{C}$. For example, since puncturing $< d$ times an MDS code gives a new MDS code of the same dimension, for $t<d$ the $t$-locality of an $[n, k, d]$ MDS code is $r_t = k + t-1$.
Note that from the above definitions, the code $\mathcal{C}$ has locality $(\rho,\delta)$ if and only if $r_{\delta-1}\le \rho+\delta-2$. Thus we can translate the bound given by Proposition \ref{Singdelta} in terms of $r_t$'s, as
\begin{equation} \label{Singdt}
d+k+ \left\lceil \frac{k}{r_t-t+1}\right\rceil t \le n+t+1.
\end{equation}
The difference between the two terms of inequality (\ref{Singdt})
\begin{equation}
\label{AA}
D_t:=n+t+1-d-k- \left\lceil \frac{k}{r_t-t+1} \right\rceil t,
\end{equation}
is the $t$-th LRC-Singleton defect of $\mathcal{C}$. Codes with $D_t=0$ will be called $t$-optimal. For example, MDS codes are $t$-optimal for all $t=1, 2, \dots,d-1$.
The sequence $(r_1, r_2, \ldots,r_{d-1})$ we have associated to an LRC code $\mathcal{C}$, resembles, up to some extent, the weight hierarchy of $\mathcal{C}$. Let us recall that for $t=1, 2, \ldots, k=\dim(\mathcal{C})$, the $t$-th {\em generalized Hamming weight of} $\mathcal{C}$ is defined as
$$
d_t=d_t(\mathcal{C}):=\min \{ \# \mbox{supp}(E) \, : \, \mbox{$E$ is a $t$-dimensional subcode of $\mathcal{C}$}\},
$$
where
$
\mbox{supp}(E):= \{ i \, : \, \mbox{there exists $\boldsymbol{x}\in E$ with $x_i\neq 0$} \},
$
see \cite[Section 3.3]{PWBJ}. We extend the bound given by Proposition \ref{d1dual} to all localities $r_t$'s in the following new result.
\begin{pro} \label{dtdual}
For $t=1, 2, \dots, d-1$, the $t$-locality of an $[n,k,d]$ LRC code $\mathcal{C}$ satisfies $r_t\ge d_t(\mathcal{C}^{\perp})-1$, where $d_t(C^{\perp})$ is the $t$-th generalized Hamming weight of the dual code $ \mathcal{C}^{\perp}$.
\end{pro}
\begin{proof}
First note that $d-1\le \dim(\mathcal{C}^{\perp})$. Let $\overline{R}$ be a set of coordinates such that $\# \overline{R}\le r_t+1$ and $d(\mathcal{C}[\overline{R}])\ge t+1$. According to the Singleton bound, we have $\dim(C[\overline{R}])\le \# \overline{R}-t$. Since $\mathcal{C}[\overline{R}]^{\perp}=\mathcal{C}^{\perp}[[\overline{R}]]$ (see \cite{PWBJ}, Proposition 3.1.17), it holds that $\dim(\mathcal{C}^{\perp}[[\overline{R}]])\ge t$. Thus $d_t(\mathcal{C}^{\perp}) \le \#\overline{R}$ and the result follows.
\end{proof}
The above result can be stated in terms of localities $(r,\delta)$ as follows.
\begin{cor}
Let $\mathcal{C}$ be an LRC code with locality $(r,\delta)$. Then the following inequality holds
\[
r+ \delta \geq d_{\delta -1} (\mathcal{C}^\perp) +1.
\]
\end{cor}
\begin{proof}
From the definition of locality $(r,\delta)$ we have $r_{\delta-1} \leq r+\delta-2$. Then, Proposition \ref{dtdual} gives $d_{\delta -1} (\mathcal{C}^\perp) - 1 \leq r_{\delta-1} \leq r+\delta-2$, and hence $r+ \delta \geq d_{\delta -1} (\mathcal{C}^\perp) +1$.
\end{proof}
\section{$J$-affine variety codes giving LRC codes}
\label{secdos}
In this section we show that subfield-subcodes of some codes arising from $J$-affine varieties are LRC codes with good recovery properties. We keep the notations as in the previous sections. In particular our LRC codes will be defined over the finite field $\mathbb{F}_q$, where $q=p^s$ and $p$ is a prime number. We shall consider an extension field $\mathbb{F}_Q$ of $\mathbb{F}_q$, where $Q=p^\ell$ and $s$ divides $\ell$. The affine varieties we manage, and so the codes arising from them, will be defined over $\mathbb{F}_Q$. Subfield-subcodes of these codes will be defined over $\mathbb{F}_q$.
The concept of $J$-affine variety code was introduced in \cite{QINP} and used in \cite{QINP2, LCD} for constructing quantum and LCD codes with good parameters. In the first subsection we recall
the construction of $J$-affine variety codes over $\mathbb{F}_Q$ and their subfield-subcodes over $\mathbb{F}_q$.
\subsection{$J$-affine variety codes and their subfield-subcodes}
\label{afine}
Let $\mathbb{F}_q$, $q=p^s$, be a finite field and
let $\mathbb{F}_Q$, $Q=p^\ell$, be an extension field of $\mathbb{F}_q$.
Let $\mathfrak{R}:= \mathbb{F}_Q[X_1, X_2, \ldots,X_m]$ be the polynomial ring in $m \geq 1$ variables over $\mathbb{F}_Q$. For simplicity we will often write the monomial
$X_1^{a_1} X_2^{a_2}\cdots X_{m}^{a_m}\in \mathfrak{R}$ as $X^{\boldsymbol{a}}$, with $\boldsymbol{a}=(a_1,a_2, \ldots, a_m) $.
Fix positive integers $N_j>1$, $j=1,2,\dots, m$, such that $N_j-1$ divides $Q-1$. Let $J$ be a subset of indices of variables, $J\subseteq \{1,2, \ldots, m\}$, and let $I_J$ be the ideal of
$\mathfrak{R}$ generated by the binomials
$X_j^{N_j -1} - 1$ if $j \in J$, and $X_j^{N_j} - X_j$ if $j \not \in J$. Denote by $\mathfrak{R}_J$ the
quotient ring $\mathfrak{R}_J=\mathfrak{R}/I_J$. Set $T_j = N_j -2$ if $j \in J$ and $T_j = N_j -1$ otherwise, and let
$$
\mathcal{H}_J : = \{0,1,\ldots,T_1\}\times \{0,1,\ldots,T_2\} \times\cdots\times\{0,1,\ldots,T_m\}.
$$
Let $Z_J = \{P_1, P_2, \ldots, P_{n_J}\}$ be the set of zeros of $I_J$ over $\mathbb{F}_Q$. This set has cardinality
$$
n_J = \prod_{j \notin J} N_j \prod_{j \in J} (N_j -1).
$$
Consider the well-defined evaluation map
$$
\mathrm{ev}_J: \mathfrak{R}_J \rightarrow \mathbb{F}_{Q}^{n_J} \; , \;
\mathrm{ev}_J(f) = (f(P_1), f(P_2),\ldots, f(P_{n_J})),
$$
where $f$ denotes both the polynomial in $\mathfrak{R}$ and its corresponding equivalence class in $\mathfrak{R}_J$.
\begin{de}\label{def:unouno}
{\rm Given a non-empty subset $\Delta\subseteq \mathcal{H}_J$, the {\it $J$-affine variety code $E^J_\Delta$,} is the linear subspace $E^J_\Delta:=\langle \mathrm{ev}_J (X^{\boldsymbol{a}}) \, : \, \boldsymbol{a} \in \Delta \rangle \subseteq \mathbb{F}_Q^{n_J}$.}
\end{de}
Then $E^J_\Delta$ is a linear code over $\mathbb{F}_Q$. Its length is $n_J$ and its dimension equals the cardinality of $\Delta$, since $\mathrm{ev}_J$ is injective, \cite{QINP}. Recall that $q=p^s$ where $s$ divides $\ell$ and thus $\mathbb{F}_q$ is a subfield of $\mathbb{F}_Q$.
\begin{de}\label{def:unodos}
{\rm The {\it subfield-subcode} of $E^J_\Delta$ over the field $\mathbb{F}_q$, denoted $\mathcal{C}_\Delta^{J}$, is the linear code $\mathcal{C}_\Delta^{J}:= E_\Delta^J \cap \mathbb{F}_{q}^{n_J}$.
}
\end{de}
In order to study the codes $\mathcal{C}_\Delta^{J}$,
we shall manage the elements of $\mathcal{H}_J$ in a particular manner. Let $j$, $1 \leq j \leq m$. If $j \in J$ then we identify the set $\{0,1, \ldots, T_j\}$ with the ring $\mathbb{Z}/(T_j +1) \mathbb{Z}$.
When $j \not \in J$, we identify the set $\{1, 2,\ldots, T_j\}$ with $\mathbb{Z}/T_j \mathbb{Z}$, and we extend the addition and multiplication of this ring to $\{0,1, \ldots, T_j\}$ by setting $0+\alpha=\alpha$ and $0\cdot\alpha=0$ for all $\alpha=0,1,\dots,T_j$. The reason for these different treatments of $\{0,1, \ldots, T_j\}$ is that the evaluation of monomials containing $X_j^0$ or containing $X_j^{N_j-1}$ may be different when $j\not\in J$, see \cite{QINP2} for details.
Under the above conventions,
a set $\mathfrak{S}\subseteq \mathcal{H}_J$ is a {\it cyclotomic set with respect to $q$} if $q \boldsymbol{y} \in \mathfrak{S}$ for all $\boldsymbol{y} = (y_1, y_2, \ldots, y_m) \in \mathfrak{S}$. Minimal cyclotomic sets are those of the form $\mathfrak{I}=\{ q^{i } \boldsymbol{y} \, : \, i \geq 0\}$, for some element $\boldsymbol{y} \in \mathcal{H}_J$.
For each minimal cyclotomic set $\mathfrak{I}$, we consider a unique representative $\boldsymbol{a} = (a_1, a_2, \ldots, a_m)\in \mathfrak{I}$, constructed iteratively as follows:
$a_1=\min\{ y_1 : (y_1, y_2, \ldots, y_m) \in \mathfrak{I} \}$, and
$a_j=\min\{ y_j : (a_1, a_2, \ldots,a_{j-1},y_j,\ldots, y_m) \in \mathfrak{I} \}$ for $j=2,3,\dots,m$. We shall denote by $\mathfrak{I}_{\boldsymbol{a}}$ the minimal cyclotomic set with representative $\boldsymbol{a}$ and by $i_{\boldsymbol{a}}$ the cardinality of $\mathfrak{I}_{\boldsymbol{a}}$. Thus $\mathfrak{I}_{\boldsymbol{a}}=\{ \boldsymbol{a},q\boldsymbol{a},\dots,q^{(i_{\boldsymbol{a}} -1)}\boldsymbol{a} \}$.
Let $\mathcal{A}$ be the set of representatives of all minimal cyclotomic sets in $\mathcal{H}_J$. Given a non-empty subset $\Delta\subseteq\mathcal{H}_J$, we define
$\mathcal{A}(\Delta)=\{ \boldsymbol{a} \in \mathcal{A} \, : \, \mathfrak{I}_{\boldsymbol{a}} \subseteq \Delta \}$.
The set $\Delta$ is called {\em closed} if it is a union of minimal cyclotomic sets, that is, if
$$
\Delta=\bigcup_{\boldsymbol{a} \in\mathcal{A}(\Delta)} \mathfrak{I}_{\boldsymbol{a}}.
$$
An important tool to study subfield-subcodes is the trace map.
Since we are interested in subfield-subcodes over $\mathbb{F}_{q}$ of evaluation codes over $\mathbb{F}_{Q}$, for $\boldsymbol{a} \in \mathcal{A}$ we consider the map
$$
\mathcal{T}_{\boldsymbol{a} }: \mathfrak{R}_J \rightarrow \mathfrak{R}_J \; , \;
\mathcal{T}_{\boldsymbol{a}} (f) = f + f^{q} + \cdots + f^{q^{(i_{\boldsymbol{a}} -1)}}.
$$
Let $\xi_{\boldsymbol{a}}$ be a fixed primitive element of the field $\mathbb{F}_{q^{ i_{\boldsymbol{a}}}}$. The next result gives an explicit description of the code $\mathcal{C}_\Delta^{J}$. It extends Theorem 4 in \cite{galindo-hernando}. Here we state the result for any set $J \subseteq \{1,2, \ldots, m\}$, while in \cite{galindo-hernando} only the case $J=\{1,2, \ldots, m\}$ was considered.
\begin{teo}
\label{ddimension}
With the above notation, if $\Delta\subseteq\mathcal{H}_J$ then the set of vectors
$$
\bigcup_{\boldsymbol{a} \in \mathcal{A}(\Delta)}
\left\{ \mathrm{ev}_J(\mathcal{T}_{\boldsymbol{a}} (\xi_{\boldsymbol{a}}^{k} X^{\boldsymbol{a}})) \, : \, 0 \leq k \leq i_{\boldsymbol{a}} -1 \right\}
$$
is a basis of $\mathcal{C}_\Delta^{J}$ over $\mathbb{F}_{q}$. In particular, if $\Delta$ is a closed set, then $\dim(\mathcal{C}_\Delta^{J})=\dim(E_\Delta^{J})=\#\Delta$.
\end{teo}
The proof of Theorem \ref{ddimension} is similar to that of Theorem 4 in \cite{galindo-hernando}
and we omit it. Instead we show an example illustrating this theorem.
\begin{exa}{\rm
Take $p=2$, $s=3$, $\ell=6$ and $m=2$, so $q=2^3=8$ and $Q= 2^6=64$. Take $J=\{1\}$, $N_1=8$ and $N_2=10$, so that $T_1=6$ and $T_2=9$.
Let $\boldsymbol{a}_1=(1,2), \boldsymbol{a}_2=(2,3)$ and $\boldsymbol{a}_3=(1,3)$.
Then $\mathfrak{I}_{\boldsymbol{a}_1}=\{(1,2),(1,7)\}, \mathfrak{I}_{\boldsymbol{a}_2}=\{(2,3),(2,6)\}$ and
$\mathfrak{I}_{\boldsymbol{a}_3}=\{(1,3),(1,6)\}$, hence $i_{\boldsymbol{a}_1}= i_{\boldsymbol{a}_2}=i_{\boldsymbol{a}_3}=2$.
Let $\Delta_1 = \mathfrak{I}_{\boldsymbol{a}_1} \cup \mathfrak{I}_{\boldsymbol{a}_2}$ and
$\Delta_2 = \mathfrak{I}_{\boldsymbol{a}_1} \cup \mathfrak{I}_{\boldsymbol{a}_2} \cup \{ (1,3)\}$. Thus $\Delta_1$ is closed but $\Delta_2$ is not.
Consider the affine variety codes $E_{\Delta_1}^J, E_{\Delta_2}^J$ defined over $\mathbb{F}_{64}$ and the subfield-subcodes
$\mathcal{C}_{\Delta_1}^J, \mathcal{C}_{\Delta_2}^J$ over $\mathbb{F}_{8}$. All of them have length $n_J=70$. Furthermore
$\dim(E_{\Delta_1}^J)=4, \dim(E_{\Delta_2}^J)=5$. In fact we have
\begin{eqnarray*}
E_{\Delta_1}^J&=&\langle
\mathrm{ev}_J(X_1 X_2^2), \mathrm{ev}_J(X_1 X_2^7), \mathrm{ev}_J( X_1^2 X_2^3), \mathrm{ev}_J(X_1^2 X_2^6)\rangle, \\
E_{\Delta_2}^J&=&\langle
\mathrm{ev}_J(X_1 X_2^2), \mathrm{ev}_J(X_1 X_2^7), \mathrm{ev}_J( X_1^2 X_2^3), \mathrm{ev}_J(X_1^2 X_2^6), \mathrm{ev}_J(X_1 X_2^3)\rangle
\end{eqnarray*}
over $\mathbb{F}_{64}$. And from Theorem \ref{ddimension}
\begin{multline*}
\mathcal{C}_{\Delta_1}^J = \mathcal{C}_{\Delta_2}^J = \langle
\mathrm{ev}_J(X_1 X_2^2 + X_1 X_2^7), \mathrm{ev}_J(\xi X_1 X_2^2 + \xi^8 X_1 X_2^7), \\
\mathrm{ev}_J( X_1^2 X_2^3 + X_1^2 X_2^6),\mathrm{ev}_J( \xi X_1^2 X_2^3 + \xi^8 X_1^2 X_2^6) \rangle
\end{multline*}
over the field $\mathbb{F}_{8}$, where $\xi$ is a primitive element of $\mathbb{F}_{64}$.
This example shows that when we study the properties of a code $\mathcal{C}_{\Delta}^J $, we can always assume that the set $\Delta$, from which it arises, is closed.}
\end{exa}
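These cyclotomic computations are mechanical; the following sketch reproduces the orbits of the example (variable indices are 0-based in the code, so the example's $J=\{1\}$ becomes \texttt{J = \{0\}}):
\begin{verbatim}
def q_mult(c, q, Tj, j_in_J):
    """One coordinate of q*y under the two conventions of the text."""
    if j_in_J:
        return (q * c) % (Tj + 1)        # Z/(Tj+1)Z, residues 0..Tj
    if c == 0:
        return 0                          # 0 is absorbing when j not in J
    return (q * c - 1) % Tj + 1           # Z/TjZ, residues 1..Tj

def minimal_cyclotomic_set(a, q, T, J):
    """The orbit I_a = {a, q*a, q^2*a, ...} inside H_J."""
    orbit, y = [], tuple(a)
    while y not in orbit:
        orbit.append(y)
        y = tuple(q_mult(c, q, T[j], j in J) for j, c in enumerate(y))
    return orbit

# q = 8, J = {1} (first variable), T = (6, 9) as in the example:
assert minimal_cyclotomic_set((1, 2), 8, (6, 9), {0}) == [(1, 2), (1, 7)]
assert minimal_cyclotomic_set((2, 3), 8, (6, 9), {0}) == [(2, 3), (2, 6)]
\end{verbatim}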
\subsection{LRC codes from $J$-affine variety codes}
\label{subsect2}
In this subsection we present some specific families of $J$-affine variety codes whose subfield-subcodes are LRC codes. We determine recovery sets for these LRC codes and show that their localities $(r, \delta)$ behave well.
Recall that the construction of $J$-affine variety codes begins by taking a set of indices $J\subseteq \{ 1,2,\dots,m\}$ and integers $N_1,N_2,\dots,N_m$ such that $N_j-1$ divides $Q-1$ for all $j$. In order to obtain good LRC codes, from now on we shall additionally assume that $N_1,N_2,\dots,N_m$ have been chosen so that there exists a non-empty subset $L\subseteq J$ with $q -1$ dividing $N_j -1$ for all $j\in L$.
Throughout the rest of this section we shall assume that the integers $N_1,N_2,\dots,N_m$, and the sets $J$ and $L$, have been fixed satisfying the above conditions.
Let $\alpha$ and $\eta$ be primitive elements of $\mathbb{F}_{Q}$ and $\mathbb{F}_{q}$, respectively. For $1 \leq j \leq m$, let $\gamma_j = \alpha^{(Q-1)/(N_j -1)} \in \mathbb{F}_{Q}$. The following property will be used later.
\begin{lem}
\label{lema16}
Let $l$ and $n$ be two nonnegative integers. If $j \in L$, then the following equality holds in $\mathbb{F}_{Q}$,
\[
\left(\gamma_j^{l} \eta^n \right)^{N_j -1} = 1.
\]
\end{lem}
\begin{proof}
The statement follows from the chain of equalities
\[
\left(\gamma_j^{l} \eta^n \right)^{N_j -1} = \left(\alpha^{\frac{Q-1}{N_j -1}}\right)^{l (N_j -1)} \left(\eta^{N_j -1}\right)^n = \left(\alpha^{Q-1}\right)^l \left(\eta^{q -1}\right)^{n \frac{N_j-1}{q-1}}=1.
\]
\end{proof}
As defined in Subsection \ref{afine}, let $I_J$ be the ideal in $\mathfrak{R}$ generated by the binomials
$X_j^{N_j -1} - 1$ if $j \in J$ and $X_j^{N_j} - X_j$ if $j \not \in J$, and let
$Z_J = \{P_1, P_2, \ldots, P_{n_J}\}$ be the set of zeros of $I_J$ over $\mathbb{F}_Q$.
In this subsection we determine recovery sets for codes $\mathcal{C}_\Delta^{J}$. These recovery sets will be obtained from subsets $R\subset Z_J$ satisfying some geometrical properties. Given a point $P\in Z_J$ we set $\mbox{coord}(P):=j$ if $P=P_j$; consequently, given a set $R\subseteq Z_J$, we set $\mbox{coord}(R):=\{ \mbox{coord}(P) : P\in R\}$.
Given a nonzero element $\lambda \in \mathbb{F}_{Q}^*$ and a point $P\in \mathbb{F}_Q^m$, we define the product $\lambda \cdot_L P$ as the point of $\mathbb{F}_{Q}^m$ obtained by multiplying by $\lambda$ the coordinates of $P$ corresponding to positions in $L$ and leaving unchanged its remaining coordinates.
\begin{lem}
\label{lema16bis}
If $P\in Z_J$, then $\eta^n \cdot_L P \in Z_J$ for every nonnegative integer $n$.
\end{lem}
The proof of this lemma follows directly from the definition of $L$ and Lemma \ref{lema16}.
We define the {\em orbit} of a point $P_{t_0}\in Z_J$ as the set
\begin{equation}
\label{pi}
R_{t_0} := \{ \eta^n \cdot_L P_{t_0} \, : \, 0 \leq n \leq q -2 \}.
\end{equation}
Notice that $R_{t_0} \subset Z_J$ by Lemma \ref{lema16bis}. As we shall see later, these orbits are closely related to recovery sets of our codes. For short, the point $\eta^n \cdot_L P_{t_0}$ will be denoted $P^L_{n,t_0}$.
Let $\mathcal{A}$ be the set of representatives of all minimal cyclotomic sets in $\mathcal{H}_J$, as defined in Subsection \ref{afine}. For $\boldsymbol{a}\in \mathcal{A}$ we write $\sigma_L(\boldsymbol{a})=\sum_{j \in L} a_j$, where the $a_j$'s and the sum $\sigma_L(\boldsymbol{a})$ are seen as integers.
\begin{lem}
\label{lema17}
Let $\boldsymbol{a}\in\mathcal{A}$ and let $k$ and $n$ be two integers such that $0 \leq k \leq i_{\boldsymbol{a}} -1$ and $0 \leq n \leq q-1$. Then we have
\begin{equation}
\label{el17}
\mathcal{T}_{\boldsymbol{a}} (\xi_{\boldsymbol{a}}^{k} X^{\boldsymbol{a}}) \left( P^L_{n,t_0} \right) = \eta^{n \sigma_L (\boldsymbol{a})} \mathcal{T}_{\boldsymbol{a}} (\xi_{\boldsymbol{a}}^{k} X^{\boldsymbol{a}}) \left( P_{t_0}\right).
\end{equation}
\end{lem}
\begin{proof}
Since no coordinate of $P_{t_0}$ in the positions of $L$ vanishes, we can write $P_{t_0} = (\gamma_1^{k_1}, \gamma_2^{k_2}, \ldots, \gamma_m^{k_m})$ without loss of generality. Then
\begin{multline}
\mathcal{T}_{\boldsymbol{a}} (\xi_{\boldsymbol{a}}^{k} X^{\boldsymbol{a}}) \left( P^L_{n,t_0} \right)
= \sum_{t=0}^{i_{\boldsymbol{a}}-1} \left(\xi_{\boldsymbol{a}}^{k} \prod_{l \in L} (\eta^n \gamma_l^{k_l})^{a_l} \prod_{l \not \in L} ( \gamma_l^{k_l})^{a_l} \right)^{t q}
\\
= \eta^{n \sigma_L (\boldsymbol{a})} \sum_{t=0}^{i_{\boldsymbol{a}}-1} \left(\xi_{\boldsymbol{a}}^{k} \prod_{l=1}^m ( \gamma_l^{k_l})^{a_l}\right)^{t q} = \eta^{n \sigma_L (\boldsymbol{a})} \mathcal{T}_{\boldsymbol{a}} (\xi_{\boldsymbol{a}}^{k} X^{\boldsymbol{a}}) \left( P_{t_0} \right)
\end{multline}
as stated.
\end{proof}
The $J$-affine variety code $E^J_\Delta$ was defined as the linear subspace spanned by the vectors $\mathrm{ev}_J (X^{\boldsymbol{a}})$, $\boldsymbol{a} \in \Delta$, where $\Delta$ is any non-empty subset of $\mathcal{H}_J$.
Taking advantage of Theorem \ref{ddimension}, from now on all the sets $\Delta$ we consider will be closed, that is a union of minimal cyclotomic sets, $\Delta = \cup_{l=1}^r \mathfrak{I}_{\boldsymbol{a}_l}$, with $\boldsymbol{a}_l\in\mathcal{A}$, $1\le l\le r$. Later in this article we shall impose even more restrictive conditions.
\begin{teo}
\label{el21}
Let $\Delta = \cup_{l=1}^r \mathfrak{I}_{\boldsymbol{a}_l}$, where $\{ \boldsymbol{a}_1,\boldsymbol{a}_2,\dots, \boldsymbol{a}_r\}$ is a subset of $\mathcal{A}$ with cardinality $r \leq q-2$. If the integers $\sigma_L(\boldsymbol{a}_1),\sigma_L(\boldsymbol{a}_2),\dots, \sigma_L(\boldsymbol{a}_r)$ are pairwise different modulo $q -1$, then the subfield-subcode $\mathcal{C}_\Delta^{J}$ is an LRC code with locality $\le r$.
\end{teo}
\begin{proof}
Let $\boldsymbol{c} =\mathrm{ev}_J(h)$ be a codeword of $\mathcal{C}_\Delta^{J}$. By Theorem \ref{ddimension}, $h$ can be written as
\[
h = h_{\boldsymbol{a}_1} + h_{\boldsymbol{a}_2} + \cdots + h_{\boldsymbol{a}_r},
\]
where each $h_{\boldsymbol{a}_l}$ is a linear combination of polynomials of the form $\mathcal{T}_{\boldsymbol{a}_l} (\xi_{\boldsymbol{a}_l}^{k} X^{\boldsymbol{a}_l})$, $0 \leq k \leq i_{\boldsymbol{a}_l} -1$,
and coefficients in $\mathbb{F}_{q}$. Fix a position $t_0\in \{ 1,2,\dots,n\}$.
We shall show that the set $R=\{P^L_{n_i, t_0} \, : \, i=1,2,\dots,r\}$ of points corresponding to $r$ consecutive nonzero $n_i$'s gives a recovery set $\mbox{coord}(R)$ for $t_0$. According to Lemma \ref{lema16bis}, the points in $R$ belong to $Z_J$. Assume we know the $r$ coordinates $ h\left(P^L_{n_i, t_0}\right)$, $i=1,2,\dots,r$, of $\boldsymbol{c}$. By linearity
\begin{equation}
\label{de21}
h\left(P_{t_0}\right) = h_{\boldsymbol{a}_1} \left(P_{t_0}\right)+ h_{\boldsymbol{a}_2} \left(P_{t_0}\right)+ \cdots + h_{\boldsymbol{a}_r}\left(P_{t_0}\right).
\end{equation}
Then, from Lemma \ref{lema17} we get the equalities
\begin{align*}
h\left(\eta^{n_1} \cdot_L P_{t_0}\right) &= \eta^{n_1 \sigma_L (\boldsymbol{a}_1)} h_{\boldsymbol{a}_1} \left(P_{t_0}\right)+ \eta^{n_1 \sigma_L (\boldsymbol{a}_2)} h_{\boldsymbol{a}_2} \left(P_{t_0}\right)+ \cdots + \eta^{n_1 \sigma_L (\boldsymbol{a}_r)} h_{\boldsymbol{a}_r}\left(P_{t_0}\right), \\
h\left(\eta^{n_2} \cdot_L P_{t_0}\right) &= \eta^{n_2 \sigma_L (\boldsymbol{a}_1)} h_{\boldsymbol{a}_1} \left(P_{t_0}\right)+ \eta^{n_2 \sigma_L (\boldsymbol{a}_2)} h_{\boldsymbol{a}_2} \left(P_{t_0}\right)+ \cdots + \eta^{n_2 \sigma_L (\boldsymbol{a}_r)} h_{\boldsymbol{a}_r}\left(P_{t_0}\right),\\
& \vdots\\
h\left(\eta^{n_r} \cdot_L P_{t_0}\right) &= \eta^{n_r \sigma_L (\boldsymbol{a}_1)} h_{\boldsymbol{a}_1} \left(P_{t_0}\right)+ \eta^{n_r \sigma_L (\boldsymbol{a}_2)} h_{\boldsymbol{a}_2} \left(P_{t_0}\right)+ \cdots + \eta^{n_r \sigma_L (\boldsymbol{a}_r)} h_{\boldsymbol{a}_r}\left(P_{t_0}\right).
\end{align*}
Write $\eta_i := \eta^{\sigma_L (\boldsymbol{a}_i)}$, $1 \leq i \leq r$. We have obtained the square system of linear equations
\[
\left(
\begin{array}{ccc}
\eta_1^{n_1} & \cdots & \eta_r^{n_1} \\
\eta_1^{n_2} & \cdots & \eta_r^{n_2} \\
\vdots & \ddots & \vdots \\
\eta_1^{n_r} & \cdots & \eta_r^{n_r} \\
\end{array}
\right)
\left(
\begin{array}{c}
h_{\boldsymbol{a}_1} \left(P_{t_0}\right) \\
h_{\boldsymbol{a}_2} \left(P_{t_0}\right) \\
\vdots \\
h_{\boldsymbol{a}_r} \left(P_{t_0}\right) \\
\end{array}
\right)=
\left(
\begin{array}{c}
h\left(\eta^{n_1} \cdot_L P_{t_0}\right) \\
h\left(\eta^{n_2} \cdot_L P_{t_0}\right) \\
\vdots \\
h\left(\eta^{n_r} \cdot_L P_{t_0}\right) \\
\end{array}
\right).
\]
The matrix of this system is of Vandermonde type with nodes $\eta_1,\dots,\eta_r$; these nodes are pairwise distinct because $\eta$ has order $q-1$ and the exponents $\sigma_L(\boldsymbol{a}_1),\dots,\sigma_L(\boldsymbol{a}_r)$ are pairwise different modulo $q-1$, so the matrix is nonsingular.
Then the solution is unique and gives the values $h_{\boldsymbol{a}_i} \left(P_{t_0}\right)$, $1 \leq i \leq r$. Once these values are known, from (\ref{de21}) we can deduce $h\left(P_{t_0}\right)$.
\end{proof}
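The recovery procedure in this proof amounts to solving a small Vandermonde-type linear system. The following Python sketch is a toy illustration of this mechanism under our own simplifying assumptions: we take a prime field, so that arithmetic reduces to integers modulo $p$, and the values of $\eta$, of the exponents $\sigma_L(\boldsymbol{a}_i)$ and of the hidden evaluations $h_{\boldsymbol{a}_i}(P_{t_0})$ are made up for the example; it is not the authors' software.
\begin{verbatim}
# Toy illustration of the recovery step in the proof above.
# Assumptions: q = p prime (so F_q arithmetic is integers mod p);
# eta, sigmas, h_vals and ns below are made-up example data.
p = 13                 # toy prime field F_13
eta = 2                # a primitive element of F_13
sigmas = [1, 3, 4]     # sigma_L(a_i), pairwise distinct mod p-1
h_vals = [5, 7, 11]    # hidden values h_{a_i}(P_{t0}) to recover
ns = [1, 2, 3]         # r consecutive nonzero exponents n_i

# Known coordinates: h(eta^n . P_{t0}) = sum_i eta^{n*sigma_i} * h_i
rhs = [sum(pow(eta, n * s, p) * h for s, h in zip(sigmas, h_vals)) % p
       for n in ns]
# Vandermonde-type matrix M[j][i] = (eta^{sigma_i})^{n_j}
M = [[pow(eta, n * s, p) for s in sigmas] for n in ns]

def solve_mod_p(M, b, p):
    """Gaussian elimination over F_p (square nonsingular system)."""
    A = [row[:] + [bi] for row, bi in zip(M, b)]
    m = len(A)
    for col in range(m):
        piv = next(i for i in range(col, m) if A[i][col] % p)
        A[col], A[piv] = A[piv], A[col]
        inv = pow(A[col][col], p - 2, p)      # Fermat inverse
        A[col] = [x * inv % p for x in A[col]]
        for i in range(m):
            if i != col and A[i][col] % p:
                f = A[i][col]
                A[i] = [(x - f * y) % p for x, y in zip(A[i], A[col])]
    return [row[-1] % p for row in A]

recovered = solve_mod_p(M, rhs, p)
assert recovered == h_vals
print("h(P_t0) =", sum(recovered) % p)   # equation (de21)
\end{verbatim}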
Under some supplementary conditions we can obtain LRC codes with larger dimension.
In the next theorem we restrict to the case $L=\{1\}$, so that $\sigma_L(\boldsymbol{a}_l)$ is the first coordinate of $\boldsymbol{a}_l$. We denote this first coordinate by $a_l$.
\begin{teo}
\label{el25}
Let $L=\{1\}$.
Let $\Delta = \cup_{l=1}^{q-1} \mathfrak{I}_{\boldsymbol{a}_l}$, where $\{ \boldsymbol{a}_1,\boldsymbol{a}_2,\dots, \boldsymbol{a}_{q-1}\}$ is a subset of $\mathcal{A}$ such that the first coordinates ${a}_1,{a}_2,\dots, a_{q-1}$ of $\boldsymbol{a}_1,\boldsymbol{a}_2,\dots, \boldsymbol{a}_{q-1}$, are pairwise different modulo $q -1$. If there is an index $v$, $1 \leq v \leq q-1$, for which the following conditions
\begin{enumerate}
\item $a_v$ divides $N_1-1$,
\item $\gcd(a_v,a_l)=1$ for all $1 \leq l \leq q-1$, $l \neq v$, and
\item $\gcd(a_v, q -1)=1$;
\end{enumerate}
hold, then $\mathcal{C}_\Delta^{J}$ is an LRC code with locality $\le q-1+ (a_v -1)$.
\end{teo}
\begin{proof}
As in the proof of Theorem \ref{el21}, let $\boldsymbol{c} =\mathrm{ev}_J(h)$ be a codeword of $\mathcal{C}_\Delta^{J}$. Fix a coordinate $t_0$ and consider the set of points
\[
\overline{R} := \{ \eta^n \cdot_L P_{t_0} \, : \, 0 \leq n \leq q -2 \} \cup
\{ \omega^k \eta \cdot_L P_{t_0} \, : \, 1 \leq k \leq a_v -1 \},
\]
where $\omega$ is a primitive $a_v$-th root of unity.
Since $a_v$ divides $N_1-1$, it holds that $\omega^k \eta \cdot_L P_{t_0} \in Z_J$ for all $1 \leq k \leq a_v -1$.
We shall show that $\mbox{coord}(\overline{R}) \setminus \{ {t_0} \}$ is a recovery set for the coordinate $t_0$.
For simplicity we can assume $v=1$. As in Theorem \ref{el21}, we can write $h$ as
\[
h = h_{\boldsymbol{a}_1} + h_{\boldsymbol{a}_2} + \cdots + h_{\boldsymbol{a}_{q-1}},
\]
$h_{\boldsymbol{a}_l}$ being a linear combination with coefficients in $\mathbb{F}_{q}$ of polynomials of the form $\mathcal{T}_{\boldsymbol{a}_l} (\xi_{\boldsymbol{a}_l}^{k} X^{\boldsymbol{a}_l})$, $0 \leq k \leq i_{\boldsymbol{a}_l} -1$. So we have
\begin{equation}
\label{sumar}
\begin{aligned}
h\left(\eta \cdot_L P_{t_0}\right) &=
\eta^{a_1} h_{\boldsymbol{a}_1} \left(P_{t_0}\right)+\eta^{a_2} h_{\boldsymbol{a}_2} \left(P_{t_0}\right)+ \cdots + \eta^{a_{q-1}} h_{\boldsymbol{a}_{q-1}}\left(P_{t_0}\right), \\
h\left(\omega \eta \cdot_L P_{t_0}\right) &=
\eta^{a_1} h_{\boldsymbol{a}_1} \left(P_{t_0}\right)+ \eta^{a_2} h_{\boldsymbol{a}_2} \left(\omega \cdot_L P_{t_0}\right)+ \cdots + \eta^{a_{q-1}} h_{\boldsymbol{a}_{q-1}}\left(\omega \cdot_L P_{t_0}\right),\\
& \vdots\\
h\left(\omega^{a_1 -1} \eta \cdot_L P_{t_0}\right) &=
\eta^{a_1} h_{\boldsymbol{a}_1} \left(P_{t_0}\right)+\eta^{a_2} h_{\boldsymbol{a}_2} \left(\omega^{a_1 -1} \cdot_L P_{t_0}\right)+ \cdots + \eta^{a_{q-1}} h_{\boldsymbol{a}_{q-1}}\left(\omega^{a_1 -1}\cdot_L P_{t_0}\right).
\end{aligned}
\end{equation}
The facts that $a_1$ divides $N_1-1$ and $\gcd(a_1,q-1)=1$ imply that the points $\omega^k \eta \cdot_L P_{t_0}$, $0 \leq k \leq a_1 -1$, are pairwise distinct and none of them coincides with $P_{t_0}$. Adding the equalities in (\ref{sumar}) we get a known value on the left-hand side. On the right-hand side we get
\begin{multline}
\label{ceros}
a_1 \left(\eta^{a_1} h_{\boldsymbol{a}_1} \left(P_{t_0}\right)\right) +
\eta^{a_2} h_{\boldsymbol{a}_2} \left(P_{t_0}\right) \left( 1 + \omega^{a_2} + (\omega^{a_2})^2 + \cdots + (\omega^{a_2})^{a_1 -1} \right) + \cdots \\ + \eta^{a_{q-1}} h_{\boldsymbol{a}_{q-1}} \left(P_{t_0}\right) \left( 1 + \omega^{a_{q-1}} + (\omega^{a_{q-1}})^2 + \cdots + (\omega^{a_{q-1}})^{a_1 -1} \right).
\end{multline}
Since $\gcd(a_1,a_l) =1$ for $2 \leq l \leq q-1$, it holds that
$$
1 + \omega^{a_l} + (\omega^{a_l})^2 +\cdots + (\omega^{a_l})^{a_1 -1}=0
$$
as this expression is the sum of all the $a_1$-th roots of unity. Then (\ref{ceros}) becomes
$a_1 \left(\eta^{a_1} h_{\boldsymbol{a}_1} \left(P_{t_0}\right)\right)$, from which we deduce
$h_{\boldsymbol{a}_1} \left(P_{t_0}\right)$.
The polynomial $h- h_{\boldsymbol{a}_1} = h_{\boldsymbol{a}_2} + \cdots + h_{\boldsymbol{a}_{q-1}}$ is related to $q-2$ minimal cyclotomic sets, hence we can apply now the procedure developed in the proof of Theorem \ref{el21} to compute $(h - h_{\boldsymbol{a}_1}) \left(P_{t_0}\right)$. Finally $h\left(P_{t_0}\right)=(h - h_{\boldsymbol{a}_1})\left(P_{t_0}\right) +h_{\boldsymbol{a}_1} \left(P_{t_0}\right)$.
\end{proof}
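The key cancellation in this proof is the vanishing of the geometric sums $1 + \omega^{a_l} + \cdots + (\omega^{a_l})^{a_1 -1}$ when $\gcd(a_1,a_l)=1$. The next Python fragment is a small numerical check of this identity in a toy prime field (our own choice of $p=13$, $a_1=3$ and $\omega=3$, an element of order $3$ in $\mathbb{F}_{13}$); it only illustrates the arithmetic used above.
\begin{verbatim}
# omega is a primitive a1-th root of unity in F_p (toy data).
p, a1, omega = 13, 3, 3
assert omega != 1 and pow(omega, a1, p) == 1
for a_l in [1, 2, 4, 5]:          # gcd(a_l, a1) = 1: the sum of all
    s = sum(pow(omega, k * a_l, p) for k in range(a1)) % p
    assert s == 0                  # a1-th roots of unity vanishes
for a_l in [3, 6]:                # a1 divides a_l: every term is 1
    s = sum(pow(omega, k * a_l, p) for k in range(a1)) % p
    assert s == a1 % p
print("vanishing-sum identities verified in F_13")
\end{verbatim}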
\subsection{The case of $\{1,2, \ldots,m\}$-affine variety codes}
\label{subsect3}
In this subsection we study the codes $\mathcal{C}^{J}_\Delta$ when $J$ equals the whole set of indices, $J=\{1,2, \ldots, m\}$. Let $L$ be a non-empty subset of $J$.
As in the previous subsection, we take the closed set $\Delta$ as a union of minimal cyclotomic sets, $\Delta = \cup_{i=1}^z \mathfrak{I}_{\boldsymbol{a}_i}$, but now we shall add the condition that the set $\{ \sigma_L({\boldsymbol{a}_1}),\sigma_L({\boldsymbol{a}_2}),\dots, \sigma_L({\boldsymbol{a}_z}) \}$ contains exactly $r$ distinct values, and that these values are consecutive integers. With these ingredients we construct the $J$-affine variety code $E^{J}_\Delta$
over $\mathbb{F}_Q$ and its subfield-subcode $\mathcal{C}^{J}_\Delta$ over $\mathbb{F}_q$.
Let us first deal with the case $m=1$ and $r=z$. So let $J=L=\{1\}$.
The set $\mathcal{A}$ of representatives of all minimal cyclotomic sets in $\mathcal{H}_J$
can be seen now as a subset of $\mathbb{Z}$.
Let $\boldsymbol{a}_1,\boldsymbol{a}_2 , \dots , \boldsymbol{a}_r, \boldsymbol{a}_{r+1}$ be the $r+1$ smallest (with respect to the natural order in $\mathbb{Z}$) representatives in $\mathcal{A}$ and let $\Delta= \cup_{l=1}^{r} \mathfrak{I}_{\boldsymbol{a}_l}$.
The codes $E_\Delta^{J}$ we obtain in this case were studied in \cite{QINP2}, where the following result was proved by using the BCH bound.
\begin{pro}
\label{nuevapro}
{\rm (\cite[Theorem 3.7]{QINP2})}
Let $m=1$ and let $\Delta= \cup_{l=1}^{r} \mathfrak{I}_{\boldsymbol{a}_l}$, where
$\boldsymbol{a}_1,\boldsymbol{a}_2 , \dots , \boldsymbol{a}_r$, $\boldsymbol{a}_{r+1}$ are the $r+1$ smallest elements of $\mathcal{A}$.
Then the minimum distance of the dual code $(E_\Delta^{J})^{\perp}$ satisfies $d( (E_\Delta^{J})^{\perp})\ge \boldsymbol{a}_{r+1}+1$.
\end{pro}
Let us recall that there exists a close relation between the dual of any linear code $\mathcal{D}$ of length $n$ defined over $\mathbb{F}_Q$, and the subfield-subcode $\mathcal{D}\cap \mathbb{F}_q^n$. This relation is given by Delsarte's theorem, as follows:
$$
(\mathcal{D} \cap \mathbb{F}_q^n)^{\perp} = \mbox{Tr}(\mathcal{D}^{\perp})
$$
where $\mbox{Tr}$ is the trace map of the extension $\mathbb{F}_Q/\mathbb{F}_q$, see \cite{delsarte}.
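Explicitly, if $Q=q^s$, then the trace map acts componentwise on codewords as
\[
\mbox{Tr}(x) = x + x^{q} + x^{q^{2}} + \cdots + x^{q^{s-1}},
\]
so the right-hand side above is indeed a linear code over $\mathbb{F}_q$.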
In our case $m=1$, this theorem implies
\begin{equation}
\label{ilatina}
\left( \mathcal{C}^{J}_\Delta\right)^\perp = (E^{J}_\Delta)^\perp \cap \mathbb{F}_q^{n_J}
\end{equation}
see \cite[Proposition 11]{IEEE} for a complete proof of this equality.
If $r\le q-2$, then the $r$ smallest elements of $\mathcal{A}$ are $\boldsymbol{a}_l=l-1$, $l=1,2,\dots,r$, and hence from Proposition \ref{nuevapro} and Equality (\ref{ilatina}) we have $d((\mathcal{C}^{J}_\Delta)^{\perp})\ge r+1$.
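Since all our defining sets $\Delta$ are built from minimal cyclotomic sets, it may be useful to see how these sets can be enumerated in practice. The following Python sketch is our own illustration (not part of the construction itself): assuming, as in the univariate examples below, that $\mathcal{H}_J$ is identified with $\{0,1,\dots,N_1-2\}$ and that the cosets are generated by multiplication by $q$ modulo $N_1-1$, it lists the minimal cyclotomic sets and hence the representatives in $\mathcal{A}$.
\begin{verbatim}
def cyclotomic_sets(q, n):
    """Cosets {a*q^j mod n} partitioning {0, 1, ..., n-1}."""
    seen, sets = set(), []
    for a in range(n):
        if a in seen:
            continue
        orbit, x = [], a
        while x not in orbit:     # follow a -> a*q -> a*q^2 -> ...
            orbit.append(x)
            x = (x * q) % n
        seen.update(orbit)
        sets.append(sorted(orbit))
    return sets

# Example: q = 9, N_1 = 2(q-1)+1 = 17, n = N_1 - 1 = 16. The output
# shows singletons {a} for even a and pairs {a, a+q-1} for odd a.
for S in cyclotomic_sets(9, 16):
    print(S)
\end{verbatim}
In particular, the smallest representatives are $0,1,2,\dots$, in agreement with the observation above.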
\begin{pro}
\label{el22a}
Let $m=1$ and $\Delta= \cup_{l=1}^{r} \mathfrak{I}_{\boldsymbol{a}_l}$, where $r\le q-2$ and
$\boldsymbol{a}_1,\boldsymbol{a}_2, \dots ,\boldsymbol{a}_r$ are the $r$ smallest elements of $\mathcal{A}$. Then $\mathcal{C}^{J}_\Delta$ is a sharp LRC code of locality $r_1=r$.
\end{pro}
\begin{proof}
According to Theorem \ref{el21}, $\mathcal{C}^{J}_\Delta$ is an LRC code of locality $\le r$. Since $d((\mathcal{C}^{J}_\Delta)^{\perp})\ge r+1$, Proposition \ref{d1dual} implies the result.
\end{proof}
Let us now study the general case $m\ge 1$. So let $J=\{ 1,2,\dots,m\}$,
let $L$ be a non-empty subset of $J$,
and $\Delta= \cup_{l=1}^{z} \mathfrak{I}_{\boldsymbol{a}_l}$. We are going to construct codes $\mathcal{C}^{J}_\Delta$ with locality $(r,q-r)$ for some $r\le z$, whose $(r,q-r)$ recovery sets satisfy the conditions (RD1) and (RD2) stated in Section \ref{secuno} with equality.
Fix a coordinate $t_0$, $1\le t_0\le n_J$, and let us consider the orbit of $P_{t_0}$, defined in Equation (\ref{pi}),
$$
R_{t_0} = \{ \eta^n \cdot_L P_{t_0} \, : \, 0 \leq n \leq q -2 \} = \{ P_{t_0}, P^L_{1,t_0},\dots, P^L_{q-2,t_0}\} \subseteq Z_J.
$$
\begin{lem}
\label{dimens}
Let $J=\{ 1,2,\dots,m\}$ and let $\Delta= \cup_{l=1}^{z} \mathfrak{I}_{\boldsymbol{a}_l}$
be a closed set. Let $t_0$ be a
\linebreak
coordinate, $1\le t_0\le n_J$, and let $R_{t_0}$ be the orbit of $P_{t_0}\in Z_J$.
If the set $\{ \sigma_L(\boldsymbol{a}_1),$
\linebreak
$\sigma_L(\boldsymbol{a}_2),\dots,$ $\sigma_L(\boldsymbol{a}_z) \}$
contains exactly $r\le q-2$ distinct elements modulo $q-1$,
then the punctured code $\mathcal{C}^{J}_\Delta[\mbox{\rm coord}(R_{t_0})]$ has length $q-1$ and dimension $r$.
\end{lem}
\begin{proof}
The statement about the length is clear, since $R_{t_0}\subseteq Z_J$. Let us compute the dimension of $\mathcal{C}^{J}_\Delta[\mbox{coord}(R_{t_0})]$. According to Theorem \ref{ddimension}, this code is generated by the set of vectors
\begin{equation}
\label{generators}
\boldsymbol{v}_{l k}= \left(\mathcal{T}_{\boldsymbol{a}_l} (\xi_{\boldsymbol{a}_l}^{k} X^{\boldsymbol{a}_l}) (P_{t_0} ),
\mathcal{T}_{\boldsymbol{a}_l} (\xi_{\boldsymbol{a}_l}^{k} X^{\boldsymbol{a}_l}) (P^L_{1,t_0} ), \dots,
\mathcal{T}_{\boldsymbol{a}_l} (\xi_{\boldsymbol{a}_l}^{k} X^{\boldsymbol{a}_l}) (P^L_{q-2,t_0} )\right)
\end{equation}
for $1\le l\le z$ and $0 \leq k \leq i_{\boldsymbol{a}_l} -1$.
From Lemma \ref{lema17} we have
$$
\boldsymbol{v}_{l k}= \left(\mathcal{T}_{\boldsymbol{a}_l} (\xi_{\boldsymbol{a}_l}^{k} X^{\boldsymbol{a}_l}) (P_{t_0})\right)
\, (1, \eta^{\sigma_L (\boldsymbol{a}_l)}, \ldots, \eta^{(q-2) \sigma_L (\boldsymbol{a}_l)}) =
\left(\mathcal{T}_{\boldsymbol{a}_l} (\xi_{\boldsymbol{a}_l}^{k} X^{\boldsymbol{a}_l}) (P_{t_0})\right) \, \boldsymbol{w}_{l }
$$
for $1\le l\le z$ and $0 \leq k \leq i_{\boldsymbol{a}_l} -1$, where $\boldsymbol{w}_{l }= (1, \eta^{\sigma_L (\boldsymbol{a}_l)}, \ldots, \eta^{(q-2) \sigma_L (\boldsymbol{a}_l)})$.
Since $\eta$ is a primitive element of $\mathbb{F}_q$ and the exponents $\sigma_L(\boldsymbol{a}_l)$ take exactly $r$ distinct values modulo $q-1$, the set of vectors $\{ \boldsymbol{w}_{1},\boldsymbol{w}_{2},\dots,\boldsymbol{w}_{z} \}$ has rank exactly $r$.
Thus, to prove our claim on the dimension of $\mathcal{C}^{J}_\Delta[\mbox{coord}(R_{t_0})]$ it suffices to prove that for any $\boldsymbol{a}\in \{ \boldsymbol{a}_1,\boldsymbol{a}_2,\dots,\boldsymbol{a}_z \}$, there exists $k$ such that $\mathcal{T}_{\boldsymbol{a}} \left(\xi_{\boldsymbol{a}}^{k} X^{\boldsymbol{a}}\right) (P_{t_0}) \neq 0$. Suppose, on the contrary, that there is $\boldsymbol{a}$ such that $\mathcal{T}_{\boldsymbol{a}} \left(\xi_{\boldsymbol{a}}^{k} X^{\boldsymbol{a}}\right) (P_{t_0}) = 0$ for all $k$, $0 \leq k \leq i_{\boldsymbol{a}}-1$.
Let $i=i_{\boldsymbol{a}}$. A simple computation shows that we can write
\begin{equation*}
\begin{aligned}
0=\mathcal{T}_{\boldsymbol{a}} \left(X^{\boldsymbol{a}}\right) (P_{t_0}) & = X^{\boldsymbol{a}}(P_{t_0}) (1 + {b_1} + \cdots + {b_{i-1}}),\\
0=\mathcal{T}_{\boldsymbol{a}} \left(\xi_{\boldsymbol{a}} X^{\boldsymbol{a}}\right) (P_{t_0}) & = X^{\boldsymbol{a}}(P_{t_0}) (\xi_{\boldsymbol{a}} +\xi_{\boldsymbol{a}}^q {b_1} + \cdots + \xi_{\boldsymbol{a}}^{(i-1)q} {b_{i-1}}),\\
& \vdots \\
0=\mathcal{T}_{\boldsymbol{a}} \left(\xi_{\boldsymbol{a}}^{i-1} X^{\boldsymbol{a}}\right) (P_{t_0}) & = X^{\boldsymbol{a}}(P_{t_0}) (\xi_{\boldsymbol{a}}^{i-1} +(\xi_{\boldsymbol{a}}^{i-1})^q {b_1} + \cdots + (\xi_{\boldsymbol{a}}^{i-1})^{(i-1)q}{b_{i-1}})
\end{aligned}
\end{equation*}
for some elements $b_1,b_2,\dots,b_{i-1} \in \mathbb{F}_q$. Note that $X^{\boldsymbol{a}}(P_{t_0})\neq 0$ since $J=\{1,2, \ldots, m\}$. Thus the nonzero vector $ \left(1, {b_1}, b_2,\ldots,{b_{i-1}}\right)$ would be a solution of a homogeneous square linear system whose matrix is of Vandermonde type, which is impossible.
\end{proof}
Let us recall here the well-known fact that the dual code of a linear MDS code is again an MDS code.
\begin{pro}
\label{esmds}
Let $J=\{ 1,2,\dots,m\}$ and $\Delta= \cup_{l=1}^{z} \mathfrak{I}_{\boldsymbol{a}_l}$ be a closed set.
If the set $\{ \sigma_L(\boldsymbol{a}_1), \sigma_L(\boldsymbol{a}_2),\dots,$ $\sigma_L(\boldsymbol{a}_z) \}$
contains exactly $r\le q-2$ distinct values and these values are consecutive integers, then for any coordinate $t_0$, the punctured code $\mathcal{C}^{J}_\Delta[\mbox{\rm coord}(R_{t_0})]$ is an MDS code with parameters $[q-1,r,q-r]$, where $R_{t_0}$ is the orbit of $P_{t_0}\in Z_J$.
\end{pro}
\begin{proof}
For simplicity suppose that $\sigma_L(\boldsymbol{a}_1), \sigma_L(\boldsymbol{a}_2),\dots,\sigma_L(\boldsymbol{a}_r)$ are the consecutive integers mentioned in the statement. Since $r\le q-2$, all these integers are distinct modulo $q-1$, and hence
the statements about the length and dimension follow from Lemma \ref{dimens}. Let us compute the minimum distance of $\mathcal{C}^{J}_\Delta[\mbox{coord}(R_{t_0})]$. A generator matrix of this code is
\[
\left(
\begin{array}{cccc}
\mathcal{T}_{\boldsymbol{a}_1} \left( X^{\boldsymbol{a}_1}\right) (P_{t_0}) & \eta^{\sigma_L (\boldsymbol{a}_1)} \mathcal{T}_{\boldsymbol{a}_1} \left( X^{\boldsymbol{a}_1}\right) (P_{t_0}) & \cdots & \eta^{(q-2)\sigma_L( \boldsymbol{a}_1)} \mathcal{T}_{\boldsymbol{a}_1} \left( X^{\boldsymbol{a}_1}\right) (P_{t_0}) \\
\vdots & \vdots & & \vdots \\
\mathcal{T}_{\boldsymbol{a}_r} \left( X^{\boldsymbol{a}_r}\right) (P_{t_0}) & \eta^{ \sigma_L (\boldsymbol{a}_r)} \mathcal{T}_{\boldsymbol{a}_r} \left( X^{\boldsymbol{a}_r}\right) (P_{t_0}) & \cdots & \eta^{(q-2)\sigma_L (\boldsymbol{a}_r)} \mathcal{T}_{\boldsymbol{a}_r} \left( X^{\boldsymbol{a}_r}\right) (P_{t_0})\\
\end{array}
\right).
\]
If $\mathcal{T}_{\boldsymbol{a}_l} \left( X^{\boldsymbol{a}_l}\right) (P_{t_0}) = 0$ we can remove the corresponding row in this matrix, hence
we can assume $\mathcal{T}_{\boldsymbol{a}_l} \left( X^{\boldsymbol{a}_l}\right) (P_{t_0}) \neq 0$ for all $\boldsymbol{a}_l$.
To study the independence of columns, it suffices to consider the matrix
\[
\boldsymbol{A}= \left(
\begin{array}{cccc}
1& \eta^{\sigma_L (\boldsymbol{a}_1)} & \cdots & \eta^{(q-2)\sigma_L (\boldsymbol{a}_1)} \\
\vdots & \vdots & & \vdots\\
1& \eta^{\sigma_L (\boldsymbol{a}_r)} & \cdots & \eta^{(q-2)\sigma_L (\boldsymbol{a}_r)} \\
\end{array}
\right).
\]
Since $\sigma_L(\boldsymbol{a}_1), \sigma_L(\boldsymbol{a}_2),\dots,\sigma_L(\boldsymbol{a}_r)$ are consecutive integers, any submatrix of $\boldsymbol{A}$ obtained by taking $r$ columns of $\boldsymbol{A}$ has rank $r$, see \cite[Lemma 6.6.5]{VL}. Thus the minimum distance of $\mathcal{C}^{J}_\Delta[\mbox{coord}(R_{t_0})]^\perp$ is $\ge r+1$. By the Singleton bound this code then has parameters $[q-1, q-1-r, r+1]$, and hence it is an MDS code. Therefore $\mathcal{C}^{J}_\Delta[\mbox{coord}(R_{t_0})]$ is also MDS, with parameters $[q-1,r,q-r]$.
\end{proof}
As a direct consequence of this proposition, we have the following theorem.
\begin{teo}
\label{teo29}
Let $J=\{ 1,2,\dots,m\}$ and $\Delta= \cup_{l=1}^{z} \mathfrak{I}_{\boldsymbol{a}_l}$ be a closed set.
If the set $\{ \sigma_L(\boldsymbol{a}_1), \sigma_L(\boldsymbol{a}_2),\dots,$ $\sigma_L(\boldsymbol{a}_z) \}$
contains exactly $r\le q-2$ distinct values and these values are consecutive integers, then for any coordinate $t_0$, the set $\mbox{\rm coord}(R_{t_0})$ is an $(r,q-r)$ recovery set for $t_0$, where $R_{t_0}$ is the orbit of $P_{t_0}\in Z_J$. Consequently $\mathcal{C}^{J}_\Delta$
is an LRC code with locality $(r,q-r)$ and $r_{q-r-t}\le q-1-t$ for $t=1,2,\dots,q-r-1$.
\end{teo}
\begin{proof}
The first statement follows from Proposition \ref{esmds}. The second one follows from the definition of the $r_t$'s and the fact that
puncturing an $[n,k,d]$ MDS code $t<d$ times gives an $[n-t,k,d-t]$ MDS code.
\end{proof}
Let us note that formulas for the length and dimension of the codes $\mathcal{C}^{J}_\Delta$ are given by Theorem \ref{ddimension}. In contrast, we do not have any explicit bound for their minimum distance (apart from the trivial bound $d(\mathcal{C}^{J}_\Delta)\ge d(E^{J}_\Delta)$). The same happens for most codes obtained as subfield-subcodes.
Therefore, we cannot give explicit formulas for the Singleton defect. In some of the examples we shall show in Section \ref{sectres}, these distances have been calculated by computer search.
Next we shall show that we can give such explicit formulas in the univariate case, $m=1$, when $\mbox{char}(\mathbb{F}_q)\neq 2$ and $Q=q^2$.
So let $m=1$. Given a closed set $\Delta=\cup_{l=1}^{r} \mathfrak{I}_{\boldsymbol{a}_l} \subset \mathcal{H}_J$, we define its {\em dual} set as $\Delta^\perp := \mathcal{H}_J \setminus \cup_{l=1}^r \mathfrak{I}_{n_J -\boldsymbol{a}_l}$.
The dual code $(\mathcal{C}^{J}_\Delta)^\perp$ is related to the dual set $\Delta^\perp $ as follows
\begin{equation}
\label{deltadual}
(\mathcal{C}_\Delta^J)^{\perp} = (E_\Delta^J \cap \mathbb{F}_q^{n_J})^{\perp} = E_{\Delta^{\perp}}^J \cap \mathbb{F}_q^{n_J},
\end{equation}
see \cite[Proposition 2.4]{QINP2}.
\begin{teo}
\label{la211}
Assume $Q=q^2$ with $q$ odd. Let $m=1$, $J=L=\{1\}$ and $N_1=2(q-1)+1$. Take $r\le q-2$ consecutive integers $\boldsymbol{a}_1=0, \boldsymbol{a}_2=1,\dots, \boldsymbol{a}_r=r-1$, and let $\Delta= \cup_{l=1}^{r} \mathfrak{I}_{\boldsymbol{a}_l}$.
Then the subfield-subcode $\mathcal{C}^{J}_\Delta$ has length $n_J = 2(q-1)$, dimension $k= 2r - \lceil \frac{r}{2}\rceil$ and minimum distance $d \geq (q-1) - 2 \lfloor \frac{r-2}{2}\rfloor$. It is a sharp LRC code with locality $(r,q-r)$ and $(q-r-1)$-th Singleton defect
\[
D_{q-r -1} \le \left\lceil \frac{r}{2}\right\rceil + 2 \left\lfloor \frac{r-2}{2}\right \rfloor +1 -r.
\]
\end{teo}
\begin{proof}
The length of $\mathcal{C}^{J}_\Delta$ is $n_J =N_1-1= 2(q-1)$.
The minimal cyclotomic sets in $\mathcal{H}_J$ are $\mathfrak{I}_{\boldsymbol{a}}=\{\boldsymbol{a}\}$ when $\boldsymbol{a}$ is even and $\mathfrak{I}_{\boldsymbol{a}}=\{\boldsymbol{a}, q+\boldsymbol{a} -1\}$ when $\boldsymbol{a}$ is odd.
Among the representatives $\boldsymbol{a}_1=0,\boldsymbol{a}_2=1,\dots,\boldsymbol{a}_r=r-1$ exactly $\lfloor r/2 \rfloor$ are odd, so $ \# \Delta = r + \lfloor r/2 \rfloor = 2r - \left\lceil \frac{r}{2}\right\rceil$, and since $\Delta$ is a closed set, $\mathcal{C}^{J}_\Delta$ has dimension $k = \# \Delta = 2r - \left\lceil \frac{r}{2}\right\rceil$ according to Theorem \ref{ddimension}.
Furthermore, a simple computation shows that the dual set $\Delta^\perp = \mathcal{H}_J \setminus \cup_{l=1}^r \mathfrak{I}_{n_J -\boldsymbol{a}_l}$ is the union of
$(q-2)- 2\lfloor (r-2)/2 \rfloor$ minimal cyclotomic sets with consecutive representatives. Then from equation (\ref{deltadual}) and Proposition \ref{el22a} we get a bound on the minimum distance of $\mathcal{C}^{J}_\Delta$ as follows
$$
d(\mathcal{C}^{J}_\Delta)= d\left(((\mathcal{C}^{J}_\Delta)^{\perp})^{\perp}\right) \ge d\left((E^{J}_{\Delta^{\perp}})^{\perp}\right)
\geq \left(q-1\right) - 2 \left\lfloor \frac{r-2}{2}\right\rfloor.
$$
From Proposition \ref{el22a} the code $\mathcal{C}^{J}_\Delta$ is sharp.
Finally, according to Theorem \ref{teo29},
$\mathcal{C}^{J}_\Delta$ is an LRC code with locality $(r,\delta)=(r, q-r)$. Based on this locality, the $(q-r-1)$-th Singleton defect of $\mathcal{C}^{J}_\Delta$ satisfies
\[
D_{q-r-1} \le 2(q-1) +1 - \left(
(q-1) - 2 \left\lfloor \frac{r-2}{2}\right\rfloor + 2r - \left\lceil \frac{r}{2}\right\rceil + \left\lceil \frac{2r - \left\lceil \frac{r}{2}\right\rceil}{r} -1 \right\rceil (q-r-1)
\right).
\]
Since
\[
\left\lceil \frac{2r - \left\lceil \frac{r}{2}\right\rceil}{r} -1 \right\rceil = 1,
\]
we get the bound on the defect $D_{q-r-1}$.
\end{proof}
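To illustrate Theorem \ref{la211} with concrete numbers, take $q=9$ (so $Q=81$, $N_1=17$ and $n_J=16$) and $r=4$. Then
\[
k = 2r - \left\lceil \frac{r}{2}\right\rceil = 8-2 = 6, \qquad
d \geq (q-1) - 2 \left\lfloor \frac{r-2}{2}\right\rfloor = 8-2 = 6,
\]
the locality is $(r,q-r)=(4,5)$, and the defect satisfies
\[
D_{4} \le \left\lceil \frac{r}{2}\right\rceil + 2 \left\lfloor \frac{r-2}{2}\right \rfloor +1 -r = 2+2+1-4 = 1.
\]
These are exactly the data of the $[16,6,6]$ code in Table \ref{T2} below.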
\section{Examples}
\label{sectres}
In this section we give examples of parameters of LRC codes $\mathcal{C}^{J}_\Delta$ over $\mathbb{F}_q$ correcting several erasures, which are obtained as subfield-subcodes of $J$-affine variety codes $E^{J}_\Delta$. The theoretical support for these examples is in Theorems \ref{ddimension}, \ref{teo29}
and \ref{la211}. So the parameters we show arise from these theorems
when they provide these data, and are calculated by computer search otherwise (see below for details). In particular, all codes we present have locality $(r,q-r)$, where $r\le q-2$ depends on the cyclotomic sets in $\Delta$. In order to evaluate the quality of these codes for local recovery purposes, we use the Singleton bounds (\ref{SingdeltaEq}) and (\ref{Singdt}), and compute the corresponding Singleton defects as stated in Section \ref{secuno}. Let us recall that when
a code of parameters $[n,k,d]$ has locality $(r,\delta)$, then $r_1\le r$ and its defect $D_{\delta-1}$ satisfies
\begin{equation}
\label{estimate}
D_{\delta -1} \le n+1 - \left( d+k+\left( \left\lceil \frac{k}{r}\right\rceil-1\right) (\delta-1) \right).
\end{equation}
In most cases, the codes we present are optimal ($D_{\delta-1}=0$); otherwise they have small defect $D_{\delta-1}$.
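Evaluating the bound (\ref{estimate}) is straightforward; the short Python helper below (a hypothetical function of our own, not part of any library) computes it from the parameters and the locality, and cross-checks one row of Table 1.
\begin{verbatim}
from math import ceil

def singleton_defect_bound(n, k, d, r, delta):
    """Right-hand side of the bound (estimate) on D_{delta-1}."""
    return n + 1 - (d + k + (ceil(k / r) - 1) * (delta - 1))

# Third row of Table 1: a [21,6,12] code with locality (4,4).
print(singleton_defect_bound(21, 6, 12, 4, 4))   # prints 1
\end{verbatim}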
Let us remark that, besides the Singleton-like bound, other bounds on the parameters of $(r,\delta)$ LRC codes are available
(e.g., Corollary 3 of \cite{ABHM}). For the sake of clarity, these bounds are not included here.
This section is organized in two subsections. In the first one we show examples with $m=1$; the second one contains examples of bivariate codes, $m=2$, improving some results obtained in the univariate case. We also include an example showing that this improvement does not always happen.
\subsection{Examples of LRC codes coming from the univariate case}
We take $m=1$, $J=L=\{ 1\}$, and apply Theorems \ref{teo29} and \ref{la211}.
The following tables contain the relevant data of the obtained codes: the cardinality $q$ of the ground field over which the code is defined; the parameters $[n,k,d]$; the dual distance $d^{\perp}$; the locality $(r, \delta)$ given by Theorems \ref{teo29} or \ref{la211}; and the estimate on the $(\delta-1)$-th Singleton defect $D_{\delta-1}$ given by the bound of Equation (\ref{estimate}).
All these data have been computed from the above theorems, except the minimum distance in Table 1, which has been obtained by computer search using Magma \cite{Magma}.
Let us note that in all cases, the localities $(r,\delta)$ shown satisfy that $\delta=q-r$ and $r$ is equal to $d^{\perp}-1$. So from Proposition \ref{d1dual} it follows that all our codes are sharp and $r=r_1$, the classical locality.
\begin{exa}
{\rm
Let $q=8, Q=64$, $N_1=22$,
$\boldsymbol{a}_l=l-1$, $1\le l\le 6$,
and let $\Delta= \cup_{l=1}^{r} \mathfrak{I}_{\boldsymbol{a}_l}$ for $r=2,3,4,6$. We get codes with the parameters given in Table 1.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{ccccc}
\hline
$q$ & $[n,k,d]$ & $d^{\perp}$ & $(r,\delta)$ & $D_{\delta -1}$\\
\hline
8 & [21, 3, 14] & 3& (2,6) & 0 \\
8 & [21, 5, 12] & 4& (3,5) & 1 \\
8 & [21, 6, 12] & 5& (4,4) & 1 \\
8 & [21, 10, 8] & 7& (6,2) & 3 \\ \hline
\end{tabular}
\end{center}
\caption{Univariate LRC codes over $\mathbb{F}_8$.}
\label{T1}
\end{table}
Look, for example, at the $[21,6,12]$ code in the third row of Table 1. It has locality $(4,4)$, so $r_3\le 6$ and thus $r_2\le 5, r_1\le 4$. Since $d^{\perp}=5$, from Proposition \ref{dtdual} we get equality in all cases. Then this code has defects $D_1=3, D_2=2$ and $D_3=1$. Note furthermore that the best known $[21,6]$ code over $\mathbb{F}_8$ has minimum distance $d=13$ \cite{Mint}. So no currently known $[21,6]$ code can be 1-optimal.
}\end{exa}
\begin{exa}{\rm
Let $q= 9$, $Q=81$, $N_1=2(q-1)+1=17$,
$\boldsymbol{a}_l=l-1$, $1\le l\le 7$,
and let $\Delta= \cup_{l=1}^{r} \mathfrak{I}_{\boldsymbol{a}_l}$ for $r=2,3,4,5,6,7$.
We get codes with the parameters given in Table \ref{T2}.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{ccccc}
\hline
$q$ & $[n,k,d]$ & $d^{\perp}$ & $(r,\delta)$ & $D_{\delta -1}$\\
\hline \hline
9 & [16,3,8] & 3& (2,7) & 0 \\
9 & [16,4,8] & 4 & (3,6) & 0 \\
9 & [16,6,6] & 5 & (4,5) & 1 \\
9 & [16,7,6] & 6 & (5,4) & 1 \\
9 & [16,9,4] & 7 & (6,3) & 2 \\
9 & [16,10,4] & 8 & (7,2) & 2 \\ \hline
\end{tabular}
\end{center}
\caption{Univariate LRC codes over $\mathbb{F}_9$.}
\label{T2}
\end{table}
}\end{exa}
\begin{exa}{\rm
Let $q= 11$, $Q=121$, $N_1=2(q-1)+1=21$,
$\boldsymbol{a}_l=l-1$, $1\le l\le 7$,
and let $\Delta= \cup_{l=1}^{r} \mathfrak{I}_{\boldsymbol{a}_l}$ for $r=2,3,\dots,7$.
We get codes with the parameters given in Table \ref{T5}.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{ccccc}
\hline
$q$ & $[n,k,d]$ & $d^{\perp}$ & $(r,\delta)$ & $D_{\delta -1}$\\
\hline
11 & [20, 3, 10] & 3 & (2,9) & 0 \\
11 & [20, 5, 10] & 4 & (3,8) & 0 \\
11 & [20, 6, 8 ]& 5 & (4,7) & 1 \\
11 & [20, 7, 8] & 6 & (5,6) & 1 \\
11 & [20, 9, 6] & 7 & (6,5) & 2 \\
11 & [20, 10, 6] & 8 & (7,4) & 2 \\ \hline
\end{tabular}
\end{center}
\caption{Univariate LRC codes over $\mathbb{F}_{11}$.}
\label{T5}
\end{table}
}\end{exa}
\begin{exa}{\rm
Let $q= 25$, $Q=625$, $N_1=49$,
$\boldsymbol{a}_l=l-1$, $1\le l\le 7$,
and let $\Delta= \cup_{l=1}^{r} \mathfrak{I}_{\boldsymbol{a}_l}$ for $r=2,3,\dots,7$.
We get codes with the parameters given in Table \ref{T3}.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{ccccc}
\hline
$q$ & $[n,k,d]$ & $d^{\perp}$ & $(r,\delta)$ & $D_{\delta -1}$\\
\hline
25 & [48,3,24]& 3& (2,23) & 0 \\
25 & [48,4,24] & 4& (3,22) & 0 \\
25 & [48,6,22] & 5 & (4,21) & 1 \\
25 & [48,7,22] & 6 & (5,20) & 1 \\
25 & [48,9,20] & 7 & (6,19) & 2 \\
25 & [48,10,20] & 8 & (7,18) & 2 \\
\hline
\end{tabular}
\end{center}
\caption{Univariate LRC codes over $\mathbb{F}_{25}$.}
\label{T3}
\end{table}
}\end{exa}
\begin{exa}{\rm
Let $q=27$, $Q=729$, $N_1=53$,
$\boldsymbol{a}_l=l-1$, $1\le l\le 7$,
and let $\Delta= \cup_{l=1}^{r} \mathfrak{I}_{\boldsymbol{a}_l}$ for $r=2,3,\dots,7$.
We get codes with the parameters given in Table \ref{T4}.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{ccccc}
\hline
$q$ & $[n,k,d]$ & $d^{\perp}$ & $(r,\delta)$ & $D_{\delta -1}$\\
\hline
27 & [52,3,26]& 3& (2,25) & 0 \\
27 & [52,4,26] & 4& (3,24) & 0 \\
27 & [52,6,24] & 5 & (4,23) & 1 \\
27 & [52,7,24] & 6 & (5,22) & 1 \\
27 & [52,9,22] & 7 & (6,21) & 2 \\
27 & [52,10,22] & 8 & (7,20) & 2 \\
\hline
\end{tabular}
\end{center}
\caption{Univariate LRC codes over $\mathbb{F}_{27}$.}
\label{T4}
\end{table}
}\end{exa}
\subsection{Examples of LRC codes coming from the bivariate case}
\label{bivariate}
Let us consider now the bivariate case $m=2$, with $J=\{1,2\}$ and $L=\{1\}$. As above, we show tables of parameters of codes $\mathcal{C}^{J}_\Delta$ over $\mathbb{F}_q$ for different values of $q$. The minimum distances in these tables have been computed with Magma \cite{Magma}. The dual distances $d^{\perp}$ have also been computed with Magma; they are included in the tables only when they provide relevant information about sharpness, which corresponds to the cases $q=8,11,16$. For $q=25,27$, the codes are far from being sharp, and the value of $d^{\perp}$ is omitted.
The examples we are going to present seem to suggest that bivariate codes give better results than the univariate ones. However, this is not always true, as shown in Example \ref{exno}.
\begin{exa}
{\rm
Let $q=8, Q=q^4=4096, N_1= 8$ and $N_2=6$. Table \ref{NT10} contains the parameters of codes $\mathcal{C}^{J}_\Delta$ obtained by using successively the following defining sets $\Delta$: $\mathfrak{I}_{(0,1)}$; $\mathfrak{I}_{(0,1)} \cup \mathfrak{I}_{(1,1)}$; and $\mathfrak{I}_{(0,1)} \cup \mathfrak{I}_{(1,1)} \cup \mathfrak{I}_{(2,1)}$.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{ccccc}
\hline
$q$ & $[n,k,d]$ & $d^{\perp}$ & $(r,\delta)$ & $D_{\delta -1}$\\
\hline
8 & [35,4,14] & 2 & (1,7) & 0 \\
8 & [35,8,12] & 3 & (2,6) & 1 \\
8 & [35,12,10] & 4 & (3,5) & 2 \\
\hline
\end{tabular}
\end{center}
\caption{Bivariate LRC codes over $\mathbb{F}_{8}$.}
\label{NT10}
\end{table}
}\end{exa}
\begin{exa}
{\rm
Let $q=Q=N_1= 11$ and $N_2=3$. Table \ref{T9} contains the parameters of codes $\mathcal{C}^{J}_\Delta$ obtained by using the following defining sets $\Delta$: the first code (first row of Table \ref{T9}) comes from the set
$\mathfrak{I}_{(0,0)} \cup \mathfrak{I}_{(0,1)} \cup\mathfrak{I}_{(1,0)} \cup\mathfrak{I}_{(2,0)}$;
the defining sets for the remaining codes are obtained by successively adding
the cyclotomic sets
$\mathfrak{I}_{(3,0)}$; $\mathfrak{I}_{(4,0)}$; $\mathfrak{I}_{(5,0)}$; $\mathfrak{I}_{(1,1)}\cup \mathfrak{I}_{(6,0)}$; and $\mathfrak{I}_{(7,1)}$.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{ccccc}
\hline
$q$ & $[n,k,d]$ & $d^{\perp}$ & $(r,\delta)$ & $D_{\delta -1}$\\
\hline
11 & [20,4,10] & 4 & (3,8) & 0 \\
11 & [20,5,10] & 4& (4,7) & 0 \\
11 & [20,6,10] & 4 & (5,6) & 0 \\
11 & [20,7,10] & 4 & (6,5) & 0 \\
11 & [20,9,8] & 6 & (7,4) & 1 \\
11 & [20,10,8] & 8 & (8,3) & 1 \\
\hline
\end{tabular}
\end{center}
\caption{Bivariate LRC codes over $\mathbb{F}_{11}$.}
\label{T9}
\end{table}
}\end{exa}
\begin{exa}
{\rm
Let $q=Q=N_1= 16$ and $N_2=4$. Table \ref{T8} contains the parameters of codes $\mathcal{C}^{J}_\Delta$ obtained by using the following defining sets $\Delta$: the first code comes from the set
$\mathfrak{I}_{(0,0)} \cup \mathfrak{I}_{(0,1)} \cup\mathfrak{I}_{(1,1)} \cup\mathfrak{I}_{(2,0)} \cup\mathfrak{I}_{(3,0)}$.
The defining sets for the remaining ones are obtained by successively adding the cyclotomic sets
$\mathfrak{I}_{(4,0)}$; $\mathfrak{I}_{(5,0)}$; and $\mathfrak{I}_{(4,1)}\cup \mathfrak{I}_{(6,1)}$.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{ccccc}
\hline
$q$ & $[n,k,d]$ & $d^{\perp}$ & $(r,\delta)$ & $D_{\delta -1}$\\
\hline
16 & [45,5,30] &3 & (4,12) & 0 \\
16 & [45,6,30] & 3& (5,11) & 0 \\
16 & [45,7,30] & 3& (6,10) & 0 \\
16 & [45,9,28] & 3& (7,9) & 1 \\
\hline
\end{tabular}
\end{center}
\caption{Bivariate LRC codes over $\mathbb{F}_{16}$.}
\label{T8}
\end{table}
}\end{exa}
\begin{exa}
{\rm
Let $q=Q=N_1= 25$ and $N_2=3$. Table \ref{T7} contains the parameters of codes $\mathcal{C}^{J}_\Delta$ obtained by using the following defining sets $\Delta$: the first code comes from the set
$\mathfrak{I}_{(0,0)} \cup \mathfrak{I}_{(0,1)} \cup\mathfrak{I}_{(1,0)} \cup\mathfrak{I}_{(1,1)} \cup\mathfrak{I}_{(2,0)}$.
The defining sets for the remaining ones are obtained by successively adding the following cyclotomic sets:
$\mathfrak{I}_{(3,0)}$; $\mathfrak{I}_{(4,0)}$; $\mathfrak{I}_{(5,0)}$; $\mathfrak{I}_{(6,0)}$; $\mathfrak{I}_{(7,0)}$; $\mathfrak{I}_{(8,0)}$; and $\mathfrak{I}_{(9,0)}$.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{cccc}
\hline
$q$ & $[n,k,d]$ & $(r,\delta)$ & $D_{\delta -1}$\\
\hline
25 & [48,5,23] & (3,22) & 0 \\
25 & [48,6,23] & (4,21) & 0 \\
25 & [48,7,23] & (5,20) & 0 \\
25 & [48,8,23] & (6,19) & 0 \\
25 & [48,9,23] & (7,18) & 0 \\
25 & [48,10,23] & (8,17) & 0 \\
25 & [48,11,23] & (9,16) & 0 \\
25 & [48,12,23] & (10,15) & 0 \\
\hline
\end{tabular}
\end{center}
\caption{Bivariate LRC codes over $\mathbb{F}_{25}$.}
\label{T7}
\end{table}
}\end{exa}
\begin{exa}
{\rm
Let $q=Q=N_1= 27$ and $N_2=3$. Table \ref{T6} contains the parameters of codes $\mathcal{C}^{J}_\Delta$ obtained by using the following defining sets $\Delta$: the first code comes from the set
$\mathfrak{I}_{(0,0)} \cup \mathfrak{I}_{(0,1)} \cup\mathfrak{I}_{(1,0)} \cup\mathfrak{I}_{(1,1)} \cup\mathfrak{I}_{(2,0)} \cup\mathfrak{I}_{(3,0)}$.
The defining sets for the remaining ones are obtained by successively adding the cyclotomic sets $\mathfrak{I}_{(4,0)}$; $\mathfrak{I}_{(5,0)}$; $\mathfrak{I}_{(6,0)}$; $\mathfrak{I}_{(7,0)}$; $\mathfrak{I}_{(8,0)}$; $\mathfrak{I}_{(9,0)}$; and $\mathfrak{I}_{(10,0)} \cup \mathfrak{I}_{(11,0)}$.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{cccc}
\hline
$q$ & $[n,k,d]$ & $(r,\delta)$ & $D_{\delta -1}$\\
\hline
27 & [52,6,25] & (4,23) & 0 \\
27 & [52,7,25] & (5,22) & 0 \\
27 & [52,8,25] & (6,21) & 0 \\
27 & [52,9,25] & (7,20) & 0 \\
27 & [52,10,25] & (8,19) & 0 \\
27 & [52,11,25] & (9,18) & 0 \\
27 & [52,12,25] & (10,17) & 0 \\
27 & [52,13,25] & (11,16) & 0 \\
27 & [52,14,25] & (12,15) & 0 \\
\hline
\end{tabular}
\end{center}
\caption{Bivariate LRC codes over $\mathbb{F}_{27}$.}
\label{T6}
\end{table}
}\end{exa}
\begin{exa} \label{exno}
{\rm To conclude this work we show the parameters of some univariate LRC codes over $\mathbb{F}_{32}$ which seem not to be improved by the bivariate ones. Let $m=1$. Take $q=32$, $Q=1024, N_1=94$,
$\boldsymbol{a}_l=l-1$ for $1\le l\le 4$,
and let $\Delta= \cup_{l=1}^{r} \mathfrak{I}_{\boldsymbol{a}_l}$ for $r=2,3,4$.
We get univariate codes with the parameters given in Table \ref{T10}.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{ccccc}
\hline
$q$ & $[n,k,d]$ & $d^{\perp}$ & $(r,\delta)$ & $D_{\delta -1}$\\
\hline
32 & [93,3,62] & 3 & (2,30) & 0 \\
32 & [93,5,60] & 4 & (3,29) & 1 \\
32 & [93,6,60] & 5 & (4,28) & 1 \\
\hline
\end{tabular}
\end{center}
\caption{Univariate LRC codes over $\mathbb{F}_{32}$ that cannot be improved by the bivariate ones.}
\label{T10}
\end{table}
Bivariate codes improving these ones should come from the choice
$N_1=32$, $N_2=4$, since we are forced to use the same field extension. An exhaustive computer search shows that such bivariate codes do not improve the univariate ones.
}\end{exa}
{\bf Acknowledgement.}
We would like to thank the reviewers of this article for their comments and suggestions, which have contributed significantly to improving it. \newline
This work was written during a visit by the third author to the Department of Mathematics at University Jaume I. This author wishes to express his gratitude to the staff of this university for their hospitality.